Highlight

Perceiving intentions and goals within virtual worlds

Achievement/Results

A group of six graduate students at Rutgers University, funded by a grant from the National Science Foundation's Integrative Graduate Education and Research Traineeship (IGERT) program, has built a new kind of interactive virtual environment to study human behavior, and in particular the ability to understand and interpret the goals and intentions of living things. The project, a collaboration among students specializing in Computer Science and Perceptual and Cognitive Psychology, wove together theories and methods from computer vision, machine learning, computer graphics, visual perception and human cognition. One of the most remarkable perceptual capacities of human beings is the ability to infer the intentions of other people simply from observing their actions. Even a brief glance at the motions and activities of others allows us to decide whether we are being approached by a friend or a foe. Visual cues, in particular the pattern of perceived motion, can betray even subtle intentions or goals, and allow us to formulate an appropriate response.

Scientists in fields ranging from behavioral economics to neuropsychology are interested in how people infer intentions from perceptual cues. One important application is the study of autism, in which impairments in the ability to interpret the intentions of others are often noted. IGERT faculty member Maggie Shiffrar, an expert on the perception of motion, found that people with impaired social abilities also show deficits in the ability to detect coherent motion of human figures, but no deficits in the ability to detect the motion of inanimate objects. Findings such as these support the conclusion that the perception of intentionality in social situations rests on brain mechanisms specialized for analyzing the motion of living things.

Rutgers's IGERT in Interdisciplinary Perceptual Science has initiated a new multi-student collaboration that developed a novel approach to studying the perception of intentionality. The graduate students, Steve Cholewiak, Peter Pantelis, Paul Ringstad, Kevin Sanik, Ari Weinstein and Chia-chien Wu, working under the supervision of IGERT faculty member Jacob Feldman, created a virtual environment populated by “intelligent” autonomous agents. The agents were controlled by computer programs that endowed them with sensory capacities, reasoning abilities, and the ability to learn and remember the recent past. The environment contained obstacles and sources of food. The virtual agents were programmed to learn the features of the environment through exploration while gathering virtual food and avoiding obstacles. Agents were also programmed to compete with other agents and could, as needed, attack or flee. In a series of controlled evolutionary simulations, in which agents “died” only to be reborn with new skills, the agents evolved toward increasingly rational and adaptive behavior.
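To make the general design concrete, the sketch below shows, in Python, a highly simplified agent and evolutionary loop of the kind described above: agents sense nearby food, avoid obstacles, and are “reborn” across generations with mutated parameters. This is an illustrative sketch under stated assumptions, not the project's actual code; all names and parameters, including the single evolvable “greed” trait, are invented for the example, and competition between agents is omitted.

```python
import random

# Illustrative sketch only (not the project's code): agents in a toroidal grid
# world sense food, avoid obstacles, and evolve a single "greed" parameter.

GRID = 20            # world is a GRID x GRID torus
N_FOOD = 30
N_OBSTACLES = 15
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]


class Agent:
    def __init__(self, greed=None):
        # "greed" is a hypothetical evolvable trait: the probability of moving
        # toward the nearest sensed food rather than exploring at random.
        self.greed = greed if greed is not None else random.random()
        self.pos = (random.randrange(GRID), random.randrange(GRID))
        self.energy = 0

    def sense_nearest_food(self, food):
        # Simple sensory capacity: find the closest food item on the torus.
        def dist(f):
            dx = min(abs(f[0] - self.pos[0]), GRID - abs(f[0] - self.pos[0]))
            dy = min(abs(f[1] - self.pos[1]), GRID - abs(f[1] - self.pos[1]))
            return dx + dy
        return min(food, key=dist) if food else None

    def step(self, food, obstacles):
        target = self.sense_nearest_food(food)
        if target and random.random() < self.greed:
            # Greedy move: step along an axis that reduces distance to food.
            dx = (target[0] - self.pos[0]) % GRID
            dy = (target[1] - self.pos[1]) % GRID
            move = (1 if 0 < dx <= GRID // 2 else -1, 0) if dx else \
                   (0, 1 if 0 < dy <= GRID // 2 else -1)
        else:
            move = random.choice(MOVES)   # exploratory move
        new_pos = ((self.pos[0] + move[0]) % GRID,
                   (self.pos[1] + move[1]) % GRID)
        if new_pos not in obstacles:      # obstacle avoidance
            self.pos = new_pos
        if self.pos in food:              # gather food
            food.remove(self.pos)
            self.energy += 1


def run_generation(agents, steps=200):
    food = {(random.randrange(GRID), random.randrange(GRID)) for _ in range(N_FOOD)}
    obstacles = {(random.randrange(GRID), random.randrange(GRID)) for _ in range(N_OBSTACLES)}
    for _ in range(steps):
        for agent in agents:
            agent.step(food, obstacles)
    return agents


def evolve(n_agents=20, n_generations=10):
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(n_generations):
        run_generation(agents)
        # Selection: better foragers "survive"; the rest are reborn as mutated
        # copies of survivors, so greed drifts toward adaptive values.
        agents.sort(key=lambda a: a.energy, reverse=True)
        survivors = agents[: n_agents // 2]
        children = [Agent(greed=min(1.0, max(0.0, p.greed + random.gauss(0, 0.1))))
                    for p in survivors]
        agents = [Agent(greed=p.greed) for p in survivors] + children
    return agents


if __name__ == "__main__":
    evolved = evolve()
    print("mean greed after evolution:",
          sum(a.greed for a in evolved) / len(evolved))
```

Running this toy simulation for a few generations typically pushes the mean “greed” value upward, a rough analogue of the drift toward more adaptive behavior described above.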

An important use of this environment is to study how humans interpret the agents' actions. Human observers watching the agents were generally able to interpret the agents' internal mental states and deduce what an agent was “thinking” from its actions. IGERT trainee Peter Pantelis found that observers typically focused on an agent's behavior toward other nearby agents, with the goal of classifying its intentions as either “hostile” or “friendly.”
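As a purely illustrative example of the kind of motion cue such judgments might track, the toy heuristic below labels an agent “hostile” if it tends to close the distance to a nearby agent and “friendly” otherwise. This is not the study's model of human judgment; the function name, threshold, and cue are assumptions invented for the illustration.

```python
import math

# Toy heuristic: classify intent from a single motion cue, namely whether an
# agent tends to close the distance to its nearest neighbor over time.

def classify_intent(trajectory, neighbor_trajectory, threshold=0.0):
    """Label an agent 'hostile' if, on average, it reduces its distance to the
    neighbor over the observed trajectory, else 'friendly'."""
    deltas = []
    for (a0, n0), (a1, n1) in zip(zip(trajectory, neighbor_trajectory),
                                  zip(trajectory[1:], neighbor_trajectory[1:])):
        d0 = math.dist(a0, n0)
        d1 = math.dist(a1, n1)
        deltas.append(d1 - d0)    # negative = agent closed the distance
    mean_change = sum(deltas) / len(deltas)
    return "hostile" if mean_change < threshold else "friendly"


# Example: an agent steadily approaching a stationary neighbor is labeled hostile.
agent_path = [(0, 0), (1, 0), (2, 0), (3, 0)]
neighbor_path = [(5, 0)] * 4
print(classify_intent(agent_path, neighbor_path))  # -> "hostile"
```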

This virtual environment is a novel platform for investigating key problems at the interface of computer science and perceptual psychology, including: visual interpretation of complex dynamic displays, population dynamics of artificial perceptual agents, and computational procedures for interpretation of intentional action. Future versions of this platform will assess judgments of intentionality by allowing avatars controlled by human observers to participate directly in all the activities and competitions in this new virtual laboratory.

Address Goals

Discovery: The ability of human beings to infer the goals and intentions of others is a classical problem within psychology. Applications are far-reaching, ranging from behavioral economics, to the study of autism and other central nervous system disorders, to the science and technology behind the design of realistic and convincing computer-based virtual environments. This highlight describes a novel laboratory for the study of the perception of goals and intentionality that allows human observers not only to judge but also to participate actively in the environment.

Learning: The virtual laboratory created in this highlight provides an example of the success and benefits of interdisciplinary education and training, in which a team of scientists with different backgrounds pools its talents in pursuit of common goals.

Research infrastructure: The design of this virtual laboratory can be made public and available to a wide community of scientists, who can build on the basic structure to address a host of related problems and applications. Key characteristics include the use of programmed intelligent agents that operate on rules not known to the human observer, and the option for human observers to interact with these agents. Environments such as these provide not only a working laboratory for the study of human perception and cognition, but also a setting for training and educating the human participants.