AI without visual sensors creates an internal map to find its way

The memory of a “blind” AI traversing a room contains information that could help another AI reach its destination faster. Transferring such memories could improve robot navigation.

Artificial intelligence can build its own internal map of its surroundings to navigate, much as animals and humans do, even if it cannot see. Such a map can be transferred between different AIs, which could improve robots’ navigation in the real world or their object-manipulation skills.

When we encounter a new environment, our brain creates an internal map that is unique to us, unlike the map you might see on Google Maps, and we use it to find the same places again or to take shortcuts.

Now, Eric Wijmans of the Georgia Institute of Technology in Atlanta and his colleagues have trained an AI agent with no visual sensors – it is effectively blind – to navigate its environment and reach a destination. They showed that it creates an internal two-dimensional map of its surroundings.
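
One common way to demonstrate such a claim is a probing experiment: freeze the trained agent, then train a separate decoder to reconstruct a top-down map from the agent’s internal memory. If the decoder succeeds even though the agent was only ever trained to navigate, the memory must encode a map. Below is a minimal PyTorch sketch of that idea; the names and sizes are assumptions for illustration, not the authors’ actual architecture.

```python
import torch
import torch.nn as nn

HIDDEN_DIM = 512  # assumed size of the agent's recurrent memory
MAP_SIZE = 32     # assumed resolution of the decoded top-down map

# A small decoder trained to read a 2-D occupancy map out of the agent's
# memory. The agent itself stays frozen; only this probe is trained.
probe = nn.Sequential(
    nn.Linear(HIDDEN_DIM, 1024),
    nn.ReLU(),
    nn.Linear(1024, MAP_SIZE * MAP_SIZE),
)

def probe_loss(memory: torch.Tensor, true_map: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy between the decoded map and the ground truth.

    memory:   (batch, HIDDEN_DIM) hidden states collected while navigating
    true_map: (batch, MAP_SIZE, MAP_SIZE) occupancy grid with values in {0, 1}
    """
    logits = probe(memory).view(-1, MAP_SIZE, MAP_SIZE)
    return nn.functional.binary_cross_entropy_with_logits(logits, true_map)
```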

These blind agents existed in a virtual world called Habitat and had only one type of perception, known as egomotion, which told them how far they had moved and in which direction. The agents were then tasked with finding their way from one point to another through an obstacle-strewn room, with their only other reference being how close they were to their destination.
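
Concretely, at each step such an agent receives only its own motion and its remaining distance to the goal, nothing visual. The sketch below is a simplified illustration of that observation and the dead reckoning it allows; it is not the real Habitat interface.

```python
import math
from dataclasses import dataclass

@dataclass
class BlindObservation:
    forward_delta: float  # metres moved since the last step (egomotion)
    heading_delta: float  # radians turned since the last step (egomotion)
    goal_distance: float  # remaining straight-line distance to the target

def integrate_pose(x: float, y: float, heading: float,
                   obs: BlindObservation) -> tuple[float, float, float]:
    """Dead-reckon the agent's pose from egomotion alone (path integration).

    This is all a blind agent has to work with; the finding is that a learned
    policy appears to build a map-like memory on top of exactly this signal.
    """
    heading += obs.heading_delta
    x += obs.forward_delta * math.cos(heading)
    y += obs.forward_delta * math.sin(heading)
    return x, y, heading
```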

The blind agents found routes to their destination that were 1.59 times as long as the shortest possible path. Similar agents that can see typically take routes 1.19 times as long as the shortest path.
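
Put concretely, those ratios imply the blind agents’ routes were roughly a third longer than the sighted agents’ (illustrative arithmetic with a made-up shortest path, not a benchmark script):

```python
shortest = 10.0                  # hypothetical shortest path, in metres
blind_route = 1.59 * shortest    # blind agents: 15.9 m
sighted_route = 1.19 * shortest  # sighted agents: 11.9 m

# ~1.34: blind routes are about a third longer than sighted ones
print(blind_route / sighted_route)
```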

When a blind agent’s memory was transplanted into a sighted one, the sighted agent was able to find an even shorter path. And when a blind agent’s memory was wiped, it lost the ability to navigate effectively.
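
Mechanically, such a transplant can be pictured as copying one network’s recurrent hidden state into another before it starts acting. The stand-in GRU policies and sizes below are assumptions used for illustration, not the authors’ models.

```python
import torch
import torch.nn as nn

HIDDEN_DIM = 512  # assumed size of the agents' recurrent memory

blind_policy = nn.GRU(input_size=4, hidden_size=HIDDEN_DIM)     # stand-in blind agent
sighted_policy = nn.GRU(input_size=64, hidden_size=HIDDEN_DIM)  # stand-in sighted agent

# The blind agent explores first; its final hidden state is its "memory".
blind_obs = torch.zeros(100, 1, 4)         # 100 steps of egomotion-style input
_, blind_memory = blind_policy(blind_obs)  # blind_memory: (1, 1, HIDDEN_DIM)

sighted_obs = torch.zeros(1, 1, 64)        # one step of visual-style input

# Transplant: the sighted agent starts from the blind agent's memory.
_, h_transplanted = sighted_policy(sighted_obs, blind_memory)

# Removal: wiping the memory means starting from a zero hidden state instead.
_, h_wiped = sighted_policy(sighted_obs, torch.zeros_like(blind_memory))
```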

Some previous AI models have moved around environments without any external maps, but because the workings of an AI are opaque and it figures out how to do things by itself, it was not clear whether they were using self-made internal maps, as humans and other animals do, or relying on visual cues such as landmarks instead.

Wijmans and his team now think the agents are building and relying on internal maps, because their blind agents had no visual cues to use.

The emergence of internal maps could be beneficial, says Aniruddha Kembhavi of the Allen Institute for Artificial Intelligence in Seattle, Washington. “If you can teach [an AI] to navigate, and this results in a mapping capability, then you could probably use this model, or fine-tune this model, for other tasks that also require mapping.”

This could also extend to non-navigation tasks, says Kembhavi, such as building a 3D map of an object once it has been picked up and using it to manipulate the object more precisely.

Link: arxiv.org/abs/2301.13261
