Can We Trust AI to Make Big Decisions?

August 29, 2024 • by Marc Airhart

Craig Watkins addresses the challenges of making artificial intelligence systems that are truly fair.

Image: A signpost pointing in two directions; one sign reads "Nope," the other "Yup."

Today on AI for the Rest of Us, we’re talking about the ways AI is being used, or might be used, to help make high-stakes decisions about our lives: who gets hired for a job, what interest rate someone gets on a loan, whether a person convicted of a crime gets parole. Are AI systems better than humans at making these decisions? Why is it so tempting to hand our decision-making authority over to machines? And what can we do to make sure these systems are fair and unbiased?

Craig Watkins is a professor in the Moody College of Communication at UT Austin who has been wrestling with these questions. Watkins is executive director of the IC2 Institute and a principal investigator with Good Systems, a university-funded initiative that supports multidisciplinary explorations of the technical, social, and ethical implications of artificial intelligence.

Dig Deeper

Video: Artificial Intelligence and the Future of Racial Justice, S. Craig Watkins, TEDxMIT (Dec. 2021)

Designing AI to Advance Racial Equity (Craig Watkins’ Good Systems project)

Dr. S. Craig Watkins on Why AI’s Potential to Combat or Scale Systemic Injustice Still Comes Down to Humans, Unlocking Us with Brené Brown (Apr. 3, 2024)

Opinion: Are These States About to Make a Big Mistake on AI?, Politico (Apr. 2024)

Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study, The Lancet (This study found that GPT-4’s accuracy at diagnosing medical conditions varied depending on a person’s gender and race/ethnicity, and that it was less likely to recommend advanced imaging for Black patients than for Caucasian patients.) (Jan. 2024)

Wrongfully Accused by an Algorithm, New York Times (the story of a Black man arrested for a crime he did not commit, on the basis of faulty facial recognition software) (June 2020)

Companies are on the hook if their hiring algorithms are biased, Quartz (2018)

Episode Credits

Our co-hosts are Marc Airhart, science writer and podcaster in the College of Natural Sciences, and Casey Boyle, associate professor of rhetoric and director of UT’s Digital Writing & Research Lab.

Executive producers are Christine Sinatra and Dan Oppenheimer. 

Sound design and audio editing by Robert Scaramuccia. Theme music is by Aiolos Rue. Interviews are recorded at the Liberal Arts ITS recording studio.

About AI for the Rest of Us

AI for the Rest of Us is a joint production of The University of Texas at Austin’s College of Natural Sciences and College of Liberal Arts. This podcast is part of the University’s Year of AI initiative. The opinions expressed in this podcast represent the views of the hosts and guests, and not of The University of Texas at Austin. Listen via Apple Podcasts, Spotify, Amazon Podcasts, RSS, or anywhere you get your podcasts. You can also listen on the web at aifortherest.net. Have questions or comments? Contact: mairhart[AT]austin.utexas.edu 
