New AI Sees Like a Human, Filling in the Blanks

June 18, 2019 • by Marc Airhart

An artificial intelligence agent that can glance quickly at parts of a new environment and infer the full scene might be more effective on dangerous missions.


Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent to do something usually only humans can do: take a few quick glimpses around and infer its whole environment. The skill is a prerequisite for effective search-and-rescue robots that could one day carry out dangerous missions more effectively.

Animated image of a shoreline scene slowly filled in, one tile at a time.
A new AI agent developed by researchers at The University of Texas at Austin takes a few "glimpses" of its surroundings, representing less than 20 percent of the full 360-degree view, and infers the rest of the environment. What makes the system so effective is that it is not taking pictures in random directions; after each glimpse, it chooses the next shot that it predicts will add the most new information about the whole scene. Credit: David Steadman/Santhosh Ramakrishnan/University of Texas at Austin.
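The glimpse-selection idea in the caption can be illustrated with a toy sketch: a greedy loop that, at each step, picks the candidate view expected to reveal the most of the scene. The real system uses a learned model to predict information gain; the `information_gain` proxy below (counting not-yet-seen tiles on a ring of tiles) and all names here are illustrative assumptions, not the researchers' implementation.

```python
def information_gain(observed, candidate):
    """Toy proxy for predicted information gain: the number of
    not-yet-seen tiles this candidate glimpse would reveal."""
    return len(candidate - observed)

def choose_next_glimpse(observed, candidates):
    # Greedy step: pick the glimpse predicted to add the most new information.
    return max(candidates, key=lambda c: information_gain(observed, c))

def explore(scene_size=36, glimpse_width=4, budget=6):
    """Simulate an agent covering a ring of `scene_size` tiles with a
    fixed budget of glimpses, each spanning `glimpse_width` adjacent tiles."""
    candidates = [
        {(start + i) % scene_size for i in range(glimpse_width)}
        for start in range(scene_size)
    ]
    observed = set()
    for _ in range(budget):
        observed |= choose_next_glimpse(observed, candidates)
    return observed

# With 6 glimpses of 4 tiles each, the greedy policy picks
# non-overlapping windows and covers 24 of 36 tiles.
seen = explore()
print(len(seen))  # → 24
```

In a real agent the proxy would be replaced by a model that predicts how much a candidate glimpse improves the reconstruction of the full scene, but the greedy structure of the loop is the same.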
