12/11/2019 / By Michael Alexander
Soon, computers and robots might be looking at the world from a slightly more human point of view.
In a study published in Science Robotics, researchers successfully taught an artificial intelligence (AI) agent – a computer system with the ability to endow robots or other machines with intelligence – the human-like ability to infer its entire environment from just a few quick glances.
Researchers from The University of Texas at Austin trained the agent using a combination of reinforcement learning and deep learning, a method in which the program imitates the way the human brain processes data and finds patterns for use in decision making. In this study, the agent was trained to select a short sequence of glances and then infer the appearance of its entire environment. Throughout training, the program was rewarded whenever it reduced “uncertainty” about the unobserved parts of its environment, and penalized whenever it failed to do so. (Related: AI robot can draw what you’re thinking by reading your brain impulses.)
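The paper frames this setup in terms of learned policies and decoders rather than code, but a toy sketch can make the reward signal concrete. The snippet below is a minimal, illustrative sketch only – random vectors stand in for camera views, a simple averaging function stands in for the learned decoder, and the “policy” is just a fixed set of preference scores – none of which comes from the researchers’ actual implementation. It shows a glance-selection loop that is rewarded for each drop in reconstruction error on the views it has not yet observed.

```python
# Minimal sketch (not the authors' code) of "learning to look around":
# an agent picks a few camera glances, a toy decoder guesses the unseen views,
# and the reward is the drop in reconstruction error ("uncertainty").
import numpy as np

rng = np.random.default_rng(0)

N_VIEWS = 12    # discretized viewing directions around the agent
N_GLANCES = 4   # budget of quick glances per episode
VIEW_DIM = 8    # toy feature vector standing in for an image patch

def reconstruct(observed, mask):
    """Toy 'decoder': predict unseen views as the mean of the observed ones."""
    guess = observed[mask].mean(axis=0) if mask.any() else np.zeros(VIEW_DIM)
    pred = observed.copy()
    pred[~mask] = guess
    return pred

def uncertainty(pred, truth, mask):
    """Reconstruction error on the views the agent has NOT looked at."""
    if (~mask).sum() == 0:
        return 0.0
    return float(np.mean((pred[~mask] - truth[~mask]) ** 2))

def run_episode(policy_scores):
    """One episode: choose glances by policy score, reward uncertainty drops."""
    truth = rng.normal(size=(N_VIEWS, VIEW_DIM))   # the full panorama
    observed = np.zeros_like(truth)
    mask = np.zeros(N_VIEWS, dtype=bool)
    prev_err = uncertainty(reconstruct(observed, mask), truth, mask)
    total_reward = 0.0
    for _ in range(N_GLANCES):
        # pick the highest-scoring view that has not been observed yet
        choice = int(np.argmax(np.where(mask, -np.inf, policy_scores)))
        mask[choice] = True
        observed[choice] = truth[choice]
        err = uncertainty(reconstruct(observed, mask), truth, mask)
        total_reward += prev_err - err   # reward = uncertainty reduced
        prev_err = err
    return total_reward

# A learned policy would adjust these scores via reinforcement learning;
# here they are just random preferences over viewing directions.
print(run_episode(policy_scores=rng.random(N_VIEWS)))
```

The design choice mirrored here is that the reward depends only on how much each new glance shrinks the error on unseen views, which pushes the agent toward the most informative viewpoints rather than toward any single downstream task.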
Compared to other AI agents, which are generally trained for very specific tasks in environments they have experienced before, the agent developed by the researchers is a general-purpose one, meant to gather visual information that can then be used for a wide range of tasks.
“We want an agent that’s generally equipped to enter environments and be ready for new perception tasks as they arise,” said Kristen Grauman, the UT Austin computer science professor who led the research, adding that the agent “…behaves in a way that’s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.”
According to the researchers, their goal in computer vision is to develop the algorithms and representations that will allow a computer to autonomously analyze visual information under tight time constraints.
In their study, the researchers said this capability would be critical in a search-and-rescue application, such as a burning building, where a robot would be called upon to quickly locate people in need of rescue, as well as flames and hazardous materials, and then relay that information to firefighters. Additionally, the researchers noted that this skill is necessary for the future development not just of effective search-and-rescue robots, but also of robots that will be used during dangerous missions, both civilian and military.
According to the study, the program’s current use is limited to a stationary unit – the researchers liken it to a person standing in one spot who can point a camera in any direction – but they are developing the system further so that it can work on a fully mobile robot.
The researchers also developed another program – dubbed the “Sidekick” – to help the core AI and speed up its training.
“Using extra information that’s present purely during training helps the [primary] agent learn faster,” said Santhosh Ramakrishnan, who led the “Sidekick” program.
According to Grauman and Ramakrishnan, their team decided to use the term “Sidekick” to “…signify how a sidekick to a hero (e.g., in a comic or movie) provides alternate points of view, knowledge, and skills that the hero does not have,” adding that unlike the main algorithm, a sidekick “complements the hero (agent), yet cannot solve the main task at hand.”
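In spirit, a sidekick of this kind exploits privileged information that is available only at training time – for example, the full panorama – which the deployed agent never sees. The sketch below is an illustrative assumption rather than the published “Sidekick” method: it treats the views that differ most from the rest of the panorama as the most informative, and mixes that hint into the training reward while leaving the test-time reward untouched.

```python
# Illustrative sketch of a training-only "sidekick" (an assumption, not the
# published implementation): a helper that can see the whole panorama scores
# how informative each viewing direction would be, and that privileged score
# is blended into the reward only while training.
import numpy as np

rng = np.random.default_rng(1)
N_VIEWS, VIEW_DIM = 12, 8

def sidekick_scores(truth):
    """Privileged scoring: views that differ most from the average panorama
    are assumed to be the most informative glances."""
    mean_view = truth.mean(axis=0)
    return np.linalg.norm(truth - mean_view, axis=1)

def shaped_reward(base_reward, chosen_view, truth, training=True, weight=0.1):
    """Add the sidekick's hint to the agent's own uncertainty-reduction reward,
    but only during training; the deployed agent never receives it."""
    if not training:
        return base_reward
    return base_reward + weight * sidekick_scores(truth)[chosen_view]

# Example: a toy panorama and a toy base reward for choosing view 3.
truth = rng.normal(size=(N_VIEWS, VIEW_DIM))
print(shaped_reward(base_reward=0.5, chosen_view=3, truth=truth))
```

Because the extra term applies only while training, the “hero” agent still has to solve the task on its own once the sidekick is withdrawn.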