Rise of the LLMs
Greg Durrett tackles your burning questions about large language models.
Image generated with Midjourney, a generative AI tool. Photo-Illustration: Martha Morales
Today we’re diving into the world of large language models, or LLMs, like ChatGPT, Google Gemini and Claude. When they burst onto the scene a couple of years ago, it felt like the future was suddenly here. Now people use them to write wedding toasts, compose songs, decide what to have for dinner and handle all sorts of other tasks. Will these chatbots eventually get better than humans? Will they take our jobs? Will they lead to a flood of disinformation? And will they perpetuate the same biases that we humans have?
Joining us to grapple with those questions is Greg Durrett, an associate professor of computer science at UT Austin. He has worked for many years in the field of natural language processing, or NLP, which aims to give computers the ability to understand human language. His current research focuses on improving how LLMs work and extending them to more useful tasks, such as automated fact-checking and deductive reasoning.
Dig Deeper
A jargon-free explanation of how AI large language models work, Ars Technica
Video: But what is a GPT? Visual intro to transformers, 3Blue1Brown (a.k.a. Grant Sanderson)
ChatGPT Is a Blurry JPEG of the Web, The New Yorker (Ted Chiang argues it’s useful to think of LLMs as compressed versions of the web rather than as intelligent, creative beings)
A Conversation With Bing’s Chatbot Left Me Deeply Unsettled, New York Times (Kevin Roose describes interacting with an LLM that “tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.”)
The Full Story of Large Language Models and RLHF (how LLMs came to be and how they work)
AI’s challenge of understanding the world, Science (Computer scientist Melanie Mitchell explores how much LLMs truly understand and how hard it is for us to comprehend their inner workings)
Google’s A.I. Search Errors Cause a Furor Online, New York Times (The company’s latest LLM-powered search feature erroneously told users to eat glue and rocks, provoking a backlash)
How generative AI is boosting the spread of disinformation and propaganda, MIT Technology Review
Algorithms are pushing AI-generated falsehoods at an alarming rate. How do we stop this?, The Conversation
Episode Credits
Our co-hosts are Marc Airhart, science writer and podcaster in the College of Natural Sciences, and Casey Boyle, associate professor of rhetoric and director of UT’s Digital Writing & Research Lab.
Executive producers are Christine Sinatra and Dan Oppenheimer.
Sound design and audio editing by Robert Scaramuccia. Theme music is by Aiolos Rue. Interviews are recorded at the Liberal Arts ITS recording studio.
Cover image for this episode generated with Midjourney, a generative AI tool.
About AI for the Rest of Us
AI for the Rest of Us is a joint production of The University of Texas at Austin’s College of Natural Sciences and College of Liberal Arts. This podcast is part of the University’s Year of AI initiative. The opinions expressed in this podcast represent the views of the hosts and guests, and not of The University of Texas at Austin. Listen via Apple Podcasts, Spotify, Amazon Podcasts, RSS, or anywhere you get your podcasts. You can also listen on the web at aifortherest.net. Have questions or comments? Contact: mairhart[AT]austin.utexas.edu