Active Vision Project > Overview
Coevolution of Active Vision and Feature Selection
We show that the co-evolution of active vision and feature selection can greatly reduce the computational complexity required to produce a given visual performance. Active vision is the sequential and interactive process of selecting and analyzing parts of a visual scene. Feature selection, in contrast, is the development of sensitivity to relevant features in the visual scene to which the system selectively responds. Each of these processes has been investigated and adopted in machine vision, but their combination is still largely unexplored. In our experiments, behavioral machines equipped with primitive vision systems and direct pathways between visual and motor neurons are evolved while they freely interact with their environments. We describe the application of this methodology in three sets of experiments: shape discrimination, car driving, and robot navigation. We show that these systems develop sensitivity to a number of oriented, retinotopic visual features (oriented edges, corners, height) along with a behavioral repertoire for locating such features and bringing and keeping them in particular regions of the vision system, resembling strategies observed in simple insects.
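The idea of evolving direct pathways between visual and motor neurons can be sketched as a small genetic algorithm over the weights of a retina-to-motor mapping. Everything below (retina size, motor count, selection and mutation scheme, and the fitness function supplied by the caller) is a hypothetical illustration, not the actual setup used in these experiments:

```python
import math
import random

RETINA = 5 * 5          # hypothetical 5x5 artificial retina
MOTORS = 2              # e.g. two wheel speeds or pan/tilt commands
GENOME = RETINA * MOTORS

def act(genome, retina):
    """Direct visuo-motor mapping: each motor neuron is a weighted
    sum of retina activations squashed through tanh."""
    return [
        math.tanh(sum(genome[m * RETINA + i] * retina[i]
                      for i in range(RETINA)))
        for m in range(MOTORS)
    ]

def evolve(fitness, pop_size=20, generations=50, sigma=0.1):
    """Minimal truncation-selection loop: rank genomes by fitness,
    keep the top half, refill with Gaussian-mutated copies."""
    pop = [[random.uniform(-1, 1) for _ in range(GENOME)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        pop = parents + [
            [w + random.gauss(0, sigma) for w in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)
```

In the real experiments the fitness is behavioral (task performance while the machine freely interacts with its environment); here any callable scoring a genome will do.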
Active Vision and Receptive Field Development
In this project we went one step further and investigated the ontogenetic development of receptive fields in an evolutionary mobile robot with active vision. In contrast to the previous work, where the synaptic weights for both receptive fields and behavior were genetically encoded and evolved on the same time scale, here the synaptic weights for the receptive fields develop during the life of the individual. In these experiments, behavioral abilities and receptive fields therefore develop on two different temporal scales, phylogenetic and ontogenetic respectively. The evolutionary experiments are carried out in physics-based simulation, and the evolved controllers are tested on the physical robot in an outdoor environment. Such a neural architecture with visual plasticity for a freely moving behavioral system also allows us to explore the role of active body movement in the formation of the visual system. More specifically, we study the development of visual receptive fields and behavior of robots under active and passive movement conditions. We show that the receptive fields and behavior developed under the active condition differ significantly from those developed under the passive condition. A set of analyses suggests that the coherence of receptive fields developed in the active condition plays an important role in the performance of the robot. More info | Publications
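Lifetime development of receptive fields is commonly modeled with a normalized Hebbian rule; the sketch below uses Oja's rule as one plausible stand-in (the plasticity rule actually used in the project is not specified here). Under repeated exposure to a correlated input pattern, the weights self-organize into a receptive field aligned with that pattern:

```python
import random

def oja_update(weights, x, lr=0.05):
    """One plasticity step of Oja's rule, a normalized Hebbian update:
    weights converge toward the dominant correlation in the input
    (its first principal direction) with unit norm."""
    y = sum(w * xi for w, xi in zip(weights, x))   # postsynaptic activity
    return [w + lr * y * (xi - y * w) for w, xi in zip(weights, x)]

# Ontogenetic development under "active" sampling: weights adapt while
# a (hypothetical) robot repeatedly samples a recurring visual feature.
random.seed(0)
w = [random.uniform(-0.1, 0.1) for _ in range(4)]
pattern = [1.0, 1.0, 0.0, 0.0]   # a recurring oriented feature
for _ in range(500):
    x = [p + random.gauss(0, 0.1) for p in pattern]
    w = oja_update(w, x)
# w now approximates the normalized pattern [0.71, 0.71, 0, 0]
```

The active-versus-passive comparison in the project corresponds, in this caricature, to whether the robot's own movements determine which inputs `x` the rule sees.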
Omnidirectional Active Vision |
The omnidirectional camera is a relatively new optical device that provides a 360-degree field of view, and it has been widely used in practical applications including surveillance systems and robot navigation. In most applications, however, the visual system uniformly processes the entire image, which becomes computationally expensive when detailed information is required. In other cases the focus is determined for particular uses by the designers or users; in other words, the system is not allowed to freely interact with the environment and selectively choose visual features. In contrast, all vertebrates and several insects -- even those with a very large field of view -- have steerable eyes with a foveal region, which forces them to select the necessary information from a vast visual field at any given time in order to survive. Such a sequential and interactive process of selecting and analyzing behaviorally-relevant parts of a visual scene is called active vision. In this project we explore omnidirectional active vision: coupled with an omnidirectional camera, a square artificial retina can immediately access a visual feature located in any direction, which is impossible for a conventional pan-tilt camera because of its mechanical constraints. The challenge is for the artificial retina to select behaviorally-relevant features in such a broad field of view. More info | Publications
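The key geometric property -- any gaze direction is reachable instantly, with horizontal wrap-around instead of pan-tilt mechanics -- can be illustrated by sampling a square retina window from an unwrapped panoramic image. All names and sizes below are illustrative assumptions:

```python
def retina_window(panorama, gaze_deg, size):
    """Extract a square `size` x `size` retina from a panoramic image
    (a list of rows whose columns span 0..360 degrees), centred on the
    gaze direction. Columns wrap around modulo the image width, so the
    retina can jump to any direction in a single step."""
    height, width = len(panorama), len(panorama[0])
    col0 = int(gaze_deg / 360.0 * width) - size // 2
    row0 = max(0, min(height - size, height // 2 - size // 2))
    return [[panorama[row0 + r][(col0 + c) % width] for c in range(size)]
            for r in range(size)]

# A 4x36 toy panorama with a bright "feature" at column 18 (180 degrees).
pan = [[1.0 if c == 18 else 0.0 for c in range(36)] for _ in range(4)]
patch = retina_window(pan, 180.0, 3)   # centre column of patch sees it
```

A pan-tilt camera would have to physically rotate toward 180 degrees before sampling; here the window simply re-indexes into the panorama.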
Active Vision for 3D Landmark-Navigation |
Active vision may be useful for landmark-based navigation in which the spatial relationship among landmarks requires active scanning of the environment. In this project we explore this hypothesis by evolving the neural system controlling the vision and behavior of a mobile robot equipped with a pan/tilt camera, so that it can discriminate visual patterns and arrive at a goal zone. The experimental setup requires the robot to actively move its gaze direction and integrate information over time in order to accomplish the task. We show that the evolved robot detects separate features in a sequential manner and discriminates their spatial relationships. These results also suggest an intriguing hypothesis about landmark-based navigation in insects. More info | Publications
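Integrating information over time while the gaze sweeps across landmarks can be caricatured by a single leaky-integrator neuron. This is a toy abstraction, not the evolved controller, and the gaze-control policy itself is left out:

```python
def scan_and_decide(detections, leak=0.9):
    """Toy model of sequential landmark discrimination: a leaky
    integrator accumulates evidence as the gaze visits landmarks one
    at a time; the sign of the final state gives the decision.
    Each detection is +1 or -1, a (hypothetical) feature code."""
    state = 0.0
    for d in detections:
        state = leak * state + (1 - leak) * d   # leaky temporal integration
    return 1 if state > 0 else -1
```

The point of the sketch is that no single glance decides the outcome: the decision depends on the whole sequence, which is why the task cannot be solved without active scanning.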
Active Vision for Bipedal Walking of Humanoid Robots |
Coming soon. More info | Publications |