New AI Sees Like a Human, Filling in the Blanks

June 18, 2019 • by Marc Airhart

An artificial intelligence agent that can glance quickly at parts of a new environment and infer the full scene might be more effective on dangerous missions.


Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent to do something that usually only humans can do: take a few quick glimpses around and infer its whole environment. The skill is a prerequisite for developing search-and-rescue robots that could one day carry out dangerous missions more effectively.

Animation: a shoreline scene slowly filled in, one tile at a time
A new AI agent developed by researchers at The University of Texas at Austin takes a few "glimpses" of its surroundings, representing less than 20 percent of the full 360-degree view, and infers the rest of the environment. What makes the system so effective is that it does not take pictures in random directions; after each glimpse, it chooses the next shot that it predicts will add the most new information about the whole scene. Credit: David Steadman/Santhosh Ramakrishnan/University of Texas at Austin.
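The caption's selection strategy can be illustrated with a toy sketch. This is not the researchers' code: it stands in for their learned model with a simple heuristic in which the predicted information gain of a candidate view is just its angular distance from everything already observed, so the agent greedily spreads its few glimpses around the panorama. All names and numbers here (`NUM_VIEWS`, `GLIMPSE_BUDGET`, `predicted_gain`) are illustrative assumptions.

```python
# Hypothetical sketch of greedy glimpse selection (not the authors' code).
# The panorama is a ring of 12 view directions; each glimpse reveals one.
# After every glimpse, the agent picks the unseen direction whose predicted
# information gain is highest. A learned model would supply that prediction;
# here we use a stand-in score: distance from all previous glimpses.

NUM_VIEWS = 12       # discretized 360-degree panorama
GLIMPSE_BUDGET = 4   # a handful of glimpses, well under the full view

def ring_distance(a, b, n=NUM_VIEWS):
    """Shortest angular distance between two view indices on the ring."""
    d = abs(a - b) % n
    return min(d, n - d)

def predicted_gain(candidate, seen):
    """Stand-in for a learned information-gain predictor: a view far
    from all previous glimpses is assumed to reveal the most."""
    return min(ring_distance(candidate, s) for s in seen)

def choose_glimpses(start=0):
    """Greedily pick GLIMPSE_BUDGET view directions, one at a time."""
    seen = [start]
    while len(seen) < GLIMPSE_BUDGET:
        unseen = [v for v in range(NUM_VIEWS) if v not in seen]
        seen.append(max(unseen, key=lambda v: predicted_gain(v, seen)))
    return seen

glimpses = choose_glimpses()
print(glimpses)  # four directions spread around the ring
```

Starting from view 0, the greedy rule first jumps to the opposite side of the ring, then fills the remaining gaps, which mirrors the behavior described in the caption: each new shot targets the region the agent knows least about.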


