
Highlight

Self-learning robot

Achievement/Results

NSF-funded researchers Nicholas Butko and Javier Movellan at the University of California, San Diego have invented a method whereby a robot uses its own experience to learn the relationship between its sensors (cameras) and actuators (motors), and autonomously tracks changes in that relationship. They demonstrated the effectiveness of the method in Nobody and Diego, two robots with different morphologies (Figure 1). Using the proposed algorithm, these robots quickly discovered the parameters of their own motor systems that enable them to orient their cameras toward, and follow, visual objects of interest (Figure 2). This automates what was previously a time-consuming calibration procedure requiring expert technicians, and it also lowers the robots’ maintenance requirements.
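The flavor of such self-calibration can be conveyed with a minimal sketch (a simplification for illustration, not the authors' actual algorithm; the gain value, noise level, and all names below are hypothetical): the robot issues small random motor commands, measures the resulting image shifts, and fits the motor-to-pixel gain by least squares, with no technician in the loop.

import numpy as np

rng = np.random.default_rng(0)

# Unknown true gain relating a pan-motor command (motor units) to the
# horizontal image shift it produces (pixels). On a real robot this
# shift would be measured from consecutive camera frames; here it is
# simulated with sensor noise.
TRUE_GAIN = 37.5

def observe_shift(command):
    """Simulated image shift caused by a motor command, plus noise."""
    return TRUE_GAIN * command + rng.normal(scale=2.0)

# Self-calibration: issue small random commands, record the shifts,
# and fit the scalar gain by least squares.
commands = rng.uniform(-1.0, 1.0, size=50)
shifts = np.array([observe_shift(c) for c in commands])
gain_hat = (commands @ shifts) / (commands @ commands)

# With the gain estimated, the robot can foveate a target seen at a
# given pixel offset by issuing the inverse command.
target_offset_px = 120.0
command_to_fixate = target_offset_px / gain_hat
print(f"estimated gain: {gain_hat:.2f} px/unit, "
      f"command to fixate: {command_to_fixate:.3f}")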

The practical applications of this work include deploying robots that remain robust to changes in their operational parameters while in the field, and robots intended for use by non-experts. For example, Butko & Movellan will use their method in RUBI, a social robot designed to interact with and help teach preschool children. RUBI often has to cope with changes to her motor parameters caused by rough play from children, and teachers should not be burdened with maintaining her.

The scientific contribution of this work is a greater understanding of the problems faced by the developing human brain. Butko & Movellan approached the problem by creating a generative probabilistic model encoding the computational relationships among a robot, its surrounding environment, its sensors, and its actuators. Humans face the same problems, and the same relationships hold; thus biology may have found similar solutions. According to the generative model, the robot should build a visual “map” of its environment and use this map as a reference for discovering the relationship between motor commands and moment-to-moment changes in what it sees. Similar maps have been observed in the lateral intraparietal area (LIP) of monkeys. These maps have been hypothesized to support the continuity of visual experience across fixations, but they may also be used to tune the parameters the brain relies on to make accurate saccades as the body changes over the course of development.
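A minimal sketch of this map-as-reference idea (again a simplification, not the paper's generative model; the delta-rule update and all names are illustrative assumptions): the robot recalls where a landmark sits in its visual map, predicts where the landmark should appear after a motor command under its current gain estimate, and nudges the gain to shrink the prediction error. Because the update runs continuously, the estimate also tracks a gain that drifts over time.

import numpy as np

rng = np.random.default_rng(1)
TRUE_GAIN = 37.5       # unknown pixels-per-motor-unit (simulated world)
gain_hat = 20.0        # robot's initial, miscalibrated estimate
LEARNING_RATE = 0.1

for step in range(200):
    # Landmark position recalled from the robot's visual "map" (pixels).
    landmark_before = rng.uniform(-160.0, 160.0)
    command = rng.uniform(-1.0, 1.0)

    # Prediction: panning the camera by gain*command pixels shifts a
    # stationary landmark's image position in the opposite direction.
    predicted = landmark_before - gain_hat * command

    # Observation: where the landmark actually appears (simulated,
    # with sensor noise).
    observed = landmark_before - TRUE_GAIN * command + rng.normal(scale=2.0)

    # Delta-rule update: error = (gain_hat - TRUE_GAIN)*command + noise,
    # so stepping down the squared-error gradient pulls gain_hat toward
    # the true gain.
    error = observed - predicted
    gain_hat -= LEARNING_RATE * error * command

print(f"estimated gain after adaptation: {gain_hat:.2f} (true {TRUE_GAIN})")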

This work will be archived in the Proceedings of the International Conference on Development and Learning (ICDL 2010). Nicholas Butko was an NSF IGERT (Integrative Graduate Education and Research Traineeship) fellow in the Vision and Learning in Humans and Machines traineeship program at UCSD, run by Professors Virginia de Sa and Garrison Cottrell. The work is an excellent example of how lessons from human development can improve machine learning, and in turn how the resulting machine system can give us better insight into the problems faced by developing humans. This work was supported by Project One, NSF project IIS-INT2-Large 0808767.

Address Goals

Self-learning robots will enable more robot applications with less human time investment, which could have substantial societal benefit. The work also deepens our understanding of the problems faced by the developing human brain, a question of great scientific interest with comparable potential benefit. The work has been publicized to teachers and parents through the researchers’ robot in a preschool classroom, and it involves graduate student research.