New AI Sees Like a Human, Filling in the Blanks

June 18, 2019 • by Marc Airhart

An artificial intelligence agent that can glance quickly at parts of a new environment and infer the full scene might be more effective on dangerous missions.


Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent to do something that usually only humans can do: take a few quick glimpses around and infer its whole environment. The skill is a prerequisite for search-and-rescue robots that could one day carry out dangerous missions more effectively.

Animated image of a shoreline scene slowly filled in, one tile at a time
A new AI agent developed by researchers at The University of Texas at Austin takes a few "glimpses" of its surroundings, representing less than 20 percent of the full 360-degree view, and infers the rest of the environment. What makes the system so effective is that it is not taking pictures in random directions; after each glimpse, it chooses the next shot that it predicts will add the most new information about the whole scene. Credit: David Steadman/Santhosh Ramakrishnan/University of Texas at Austin.
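The selection loop described in the caption can be illustrated with a minimal sketch. This is not the researchers' actual model (which uses a learned scene-completion network to score views); here the per-tile information scores are a hypothetical stand-in, and tile count and glimpse budget are illustrative numbers chosen to match the article's "less than 20 percent" figure.

```python
import random

def greedy_glimpses(num_tiles=18, budget=3, seed=0):
    """Toy sketch of next-best-view glimpse selection: at each step,
    observe the unseen tile with the highest predicted information gain.
    The real system replaces `predicted_gain` with a learned predictor."""
    rng = random.Random(seed)
    # Hypothetical per-tile gain scores (stand-in for a learned model).
    predicted_gain = [rng.random() for _ in range(num_tiles)]
    observed = set()
    for _ in range(budget):
        # Greedily pick the unobserved tile predicted to reveal the most.
        best = max((t for t in range(num_tiles) if t not in observed),
                   key=lambda t: predicted_gain[t])
        observed.add(best)
    return observed

views = greedy_glimpses()
# With 3 glimpses over 18 tiles, under 20% of the panorama is ever observed.
assert len(views) / 18 < 0.2
```

The greedy structure is the point: each choice depends on what has already been seen, so the agent spends its small glimpse budget where it expects the largest payoff rather than sampling directions at random.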
