Ethics in the Information Age

What are the ethical and social implications of contemporary developments in information technology?

The Ethics in the Information Age event series draws together scholars from Indiana University and across the nation to explore this question. Our aims are to foster insights through interdisciplinary dialogue, explore the potential for scholarly collaboration, and generate student engagement on topics of clear relevance to us all.

Co-Organizers:

Angie Raymond; Business Law and Ethics, Kelley School of Business, Bloomington, angraymo@indiana.edu

Scott James Shackelford; Business Law and Ethics, Kelley School of Business, Bloomington, sjshacke@indiana.edu

Previous Co-Organizers:

Fabio Rojas; Sociology, College of Arts and Sciences, Bloomington, frojas@indiana.edu

Joseph A. Tomain; Maurer School of Law, Bloomington, jtomain@indiana.edu

Event Schedule:

Rhetoric Versus the Robots: Mapping the Ethical and Legal Dimensions of Algorithmic Discrimination

Aaron McKain; Director of English, Digital Media, and Communication Arts at North Central University

 

Thursday, Nov. 7, 2019, 3:30–5:30 PM

IU Bloomington campus, Social Science Research Commons, Woodburn Hall, second floor

By 2019, the use of algorithmic reasoning in law enforcement and commercial contexts had already triggered public outrage: not only over the unreliability and inherent biases (e.g., racial and gendered) programmed into AI, but also over the constitutional issues it presents in criminal proceedings. In an effort to more usefully catalyze and curate community concerns about this technology – in a language understandable to legislators, lawyers, policymakers, and programmers – the Institute for Digital Humanity has begun a national pilot to foster intercultural and interfaith conversations on digital ethics. Moving beyond merely decrying “algorithmic discrimination,” our method (a) taxonomizes legal and ethical data concerns (via rhetorical theory’s categories of unreliability) and (b) triangulates these individual and community judgments with specific constitutional values (in criminal contexts) and a common civic nomenclature (for commercial contexts).

Far from the typical academic “show and tell,” this public presentation is a training session for interested professors, graduate students, and activists on how to quickly implement our discussion method in classrooms and public venues and, for interested parties, on how to become part of our emerging pedagogical advocacy network as we build cross-partisan political coalitions dedicated to reforming and re-programming our post-digital world.

 


 

The Future of Privacy in the Digital Age

Fred Cate; Vice President for Research, Indiana University

Jay Edelson; Founder and CEO, Edelson PC

Zachary Heck; Associate, Taft Law, Dayton, OH

Wednesday, April 17, 2019, 5:30 PM

IU Bloomington campus, Maurer School of Law, Moot Court Room

Zachary Heck's practice focuses on privacy and data security. Specifically, Zach assists clients with privacy compliance, defense litigation, class action defense, and guidance in the aftermath of an information security event, including data breaches.

Jay Edelson is considered one of the nation’s leading plaintiff’s lawyers, having secured over $1 billion in settlements and verdicts for his clients. Law360 described Jay as a “Titan of the Plaintiff’s Bar,” and he has been recognized as one of “America’s top trial lawyers” in the mass action arena. He has also been appointed to represent state and local governments on some of the largest issues of the day, ranging from opioid suits against pharmaceutical companies to suits against Facebook over the Cambridge Analytica scandal.

 


 

DNA: Law, Technology, and Ethics

Dr. Jody L. Madeira; Maurer School of Law, Indiana University

Professor Erin E. Murphy; NYU Law, New York City

Matthew B. White; sharing his experience as a child born of fertility fraud and a parent of donor-conceived children

 

Friday, March 1, 2019, 1:15–2:45 PM

IU Bloomington campus, Maurer School of Law, Moot Court Room

Technological development continues to expand the uses of DNA in our society. From catching the Golden State Killer, to reuniting families separated by ICE, to identifying the parents of donor-conceived children, uses of DNA raise some of the most challenging legal and ethical questions of our time. The sensitive nature of DNA, as well as the positive and negative ways in which its uses can affect individuals and society, requires us to consider how the law ought to respond to these challenges. An expert panel will help us answer these questions.

After providing a broader framework involving DNA, law, and ethics, this panel will focus on familial DNA searches for criminal justice purposes, as well as the advent of direct-to-consumer technology and its unique consequences in the fertility fraud context.

*1.0 Ethics CLE Credit*

 


 

Big Data, AI, and Civic Virtue

Don Howard; Philosophy, University of Notre Dame

 

Thursday, Feb. 28, 2019, 5:00–6:30 PM

IU Bloomington campus, Hodge Hall, Room 1038

We face a rapidly growing array of serious ethical challenges as big data analytics and artificial intelligence are ever more widely deployed. Prominent problems include algorithmic bias that can reinforce or exacerbate patterns of discrimination in the criminal justice system or in the hiring and promotion practices of corporations and government agencies, the risk that data analytics and AI will be misused for political repression and control by authoritarian regimes, and the integration of such technologies into automated weapons systems. Most of the literature on these challenges focuses on the ethical responsibilities of individual makers and consumers of technology. But making and deploying technology is usually the work of whole communities of makers and users, and the ethical impacts often affect not just the individual maker or user but the well-being and well-functioning of the communities within which those individuals live and work. This talk will suggest that a helpful complement or alternative to the individualist ethical perspective is the perspective of civic virtue. With reference to a few specific applications of data analytics and AI, we will ask which impacts corrode or promote the flourishing of the relevant communities, which virtuous habits of action of whole communities and of individuals in community are most conducive to human well-being, and how we might engineer the relevant communities to maximize the likelihood that such virtuous habits of action will emerge and be sustained.

 


 

Technology-Enhanced Discrimination

Virginia Eubanks; Associate Professor of Political Science, University at Albany, SUNY. She is the author of Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor and Digital Dead End: Fighting for Social Justice in the Information Age, and co-editor, with Alethia Jones, of Ain’t Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith.

Jessica Eaglin; Professor of Law, Indiana University

Angie Raymond; Business Law and Ethics, Indiana University

 

Monday, Feb. 11, 2019, 3:00 PM

IU Bloomington campus, Maurer School of Law, Faculty Conference Room

 

Co-hosted by the Maurer School of Law, the Ostrom Workshop, and Business Law and Ethics at the Kelley School of Business.


 

RoboTruckers: The Double Threat of AI for Low-Wage Work

Karen Levy; Information Science, Cornell University

 

Monday, January 14, 2019, 12:00 PM

IU Bloomington campus, Social Science Research Commons, Grand Hall

Of late, much attention has been paid to the risk artificial intelligence poses to employment, particularly in low-wage industries. The question has invited well-placed concern from policymakers, as the prospect of millions of low-skilled workers finding themselves rather suddenly without employment brings with it the potential for tremendous social and economic disruption. Long-haul truck driving is perceived as a prime target for such displacement, due to the fast-developing technical capabilities of autonomous vehicles (many of which lend themselves in particular to the specific needs of truck driving), characteristics of the nature of trucking labor, and the political economy of the industry.

In most of the public rhetoric about the threat of the self-driving truck, the trucker is contemplated as a displaced party. He is displaced both physically and economically: removed from the cab of the truck, and from his means of economic provision. The robot has replaced his imperfect, disobedient, tired, inefficient body, rendering him redundant, irrelevant, and jobless.

But the reality is more complicated. The intrusion of automation into the truck cab indeed presents a threat to the trucker—but the threat is not solely or even primarily being experienced, as it is so often described, as a displacement. The trucker is still in the cab, doing the work of truck driving—but he is increasingly joined there by intelligent systems that monitor his body directly. Hats that monitor his brain waves and head position, vests that track his heart rate, cameras trained on his eyelids for signs of fatigue or inattention: these systems flash lights in his face, jolt his seat, and send reports to his dispatcher or even his family members should the trucker’s focus waver. As more trucking firms integrate such technologies into their safety programs, truckers are not currently being displaced by intelligent systems. Rather, they are experiencing the emergence of intelligent systems as a compelled hybridization, a very intimate incursion into their work and bodies.

This paper considers the dual, conflicting narratives of job replacement by robots and of bodily integration with robots, in order to assess the true range of potential effects of AI on low-wage work. [This paper is a chapter from Karen's book-in-progress, Data Driven: Truckers and the New Workplace Surveillance.]

 


 

AI Ethics and the Law

Alexander Duisberg; Partner, Bird & Bird, Munich

 

Friday, October 19, 2018, 12:00–1:00 PM

IU Bloomington campus, Maurer School of Law, Room 122

 

Dr. Alexander Duisberg, a partner at Bird & Bird in Germany, will give a lunch talk titled “AI, Ethics and the Law – a European Perspective” at noon in Room 122, as part of his visit to the Ostrom Workshop.

Artificial intelligence, robotics, and autonomous systems are transforming our lives at an incredible pace. Smart data, machine learning, and autonomous cars are just a few of the many applications that will change the way we work and interact.

In Europe, the GDPR has set a new standard for how personal data are handled, as part of the wider effort to build the European data economy. At the same time, an ethical debate on science and new technology is taking shape. That debate is an important element in how Europeans set their agenda, including on law and regulation for robotics and autonomous systems.

The presentation reflects the current state of the debate on these issues. A practical example involving autonomous vehicles shows how accountability, control, and liability for self-learning systems fit into the complex regulatory environment of road traffic.

Alexander Duisberg is a partner at Bird & Bird in Munich who specializes in data protection, digital transformation projects, the Internet of Things, and complex technology transactions, with a particular focus on the automotive, industrial, and insurance sectors. He covers a range of matters, including agile development, platforms and the data economy, cloud, cybersecurity, licensing, and technology disputes.

 

 


 

5 Reasons Why Social Networks Make Us Vulnerable to Misinformation

Fil Menczer; Computer Science and Informatics, IU Bloomington

Tuesday, Sept. 25, 2018, 12:00 PM

IU Bloomington campus, Social Science Research Commons, Woodburn Hall 200

As social media become major channels for the diffusion of news and information, it becomes critical to understand how the complex interplay of cognitive, social, and algorithmic biases triggered by our reliance on social networks makes us vulnerable to misinformation. This talk surveys ongoing network analytics, modeling, and machine learning efforts to study the viral spread of misinformation and to develop tools for countering the online manipulation of opinions.