Vision and complementary cues
In many visual tasks that require functioning in a changing and partly unpredictable environment, combining different complementary cues is a key aspect of achieving robustness. Complementarity implies that distinct cues bring in added information in different contexts. For example, intensity images reveal relevant visual (appearance) details of an object surface, such as texture and colour patterns, but these details cannot be well separated from unwanted illumination artefacts (shadows, highlights). Measured depth (3D) information, on the other hand, is largely unaffected by photometric variations, yet it carries little detail in terms of visual appearance.
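To make this complementarity concrete, the following minimal sketch (illustrative only; the function, score maps and equal weights are assumptions, not project deliverables) fuses an intensity-based and a depth-based per-pixel score by a weighted sum, so that the depth cue compensates for a shadow that weakens the intensity cue:

```python
import numpy as np

def fuse_cues(intensity_score, depth_score, w_intensity=0.5, w_depth=0.5):
    """Weighted combination of per-pixel evidence from two complementary cues.

    Both inputs are arrays of the same shape with values in [0, 1] that
    express how strongly each cue supports a 'foreground object' hypothesis
    at a pixel.  The equal weights are illustrative; in a task-directed
    system they would be chosen (or learned) per task and operating condition.
    """
    return w_intensity * intensity_score + w_depth * depth_score

# Toy example: a shadow weakens the intensity-based score at one pixel,
# while the depth-based score is unaffected by the photometric change.
intensity_score = np.array([[0.9, 0.2],    # 0.2: pixel darkened by a shadow
                            [0.8, 0.9]])
depth_score = np.array([[0.9, 0.8],        # depth still supports the object
                        [0.7, 0.9]])

fused = fuse_cues(intensity_score, depth_score)
print(fused)            # the shadowed pixel recovers a score of 0.5
print(fused >= 0.5)     # simple decision threshold
```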
In this area, each applied task will be analysed in light of the industrial requirements, and task-directed cue integration will be performed. This cue selection and integration strategy follows the line of thought of Hager [1], who noted: “The task the system must carry out determines what information is needed and to what level of refinement.”
To accomplish the K-Project’s goal of robust vision systems, Area 1 brings in the required research know-how, while Area 2 covers the research and development aspects needed to rapidly transfer the developed vision-based technology into practical deployment.
The individual scientific projects of this area address three settings in which complementarity between distinct information channels prevails:
- Multi-modal computer vision, where (i) cue selection (which kinds of cues to use) and (ii) cue combination (how to combine them) play an important role.
- Spatio-temporal processing, which exploits the strongly correlated data structure in the space-time volume of aggregated frames. In contrast to single-frame analysis schemes, considering the space-time structure improves robustness for many detection, segmentation and tracking tasks (see the first sketch after this list).
- Knowledge and interaction: this project will investigate statistical learning techniques that exploit information from labelled and unlabelled data sets, as well as novel ways to integrate expert knowledge through advanced visualization schemes allowing for intuitive, minimally interactive user input (see the second sketch after this list).
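To illustrate the space-time argument of the second bullet point, the sketch below (the window size, array shapes and median filter are illustrative assumptions rather than the project's actual processing chain) applies a sliding temporal median to a stack of per-frame detection scores; an isolated single-frame response, which a frame-by-frame decision would accept, is suppressed:

```python
import numpy as np

def temporal_median(score_volume, window=5):
    """Smooth per-pixel detection scores along the time axis.

    score_volume has shape (T, H, W): one score map per frame.  A sliding
    temporal median suppresses isolated single-frame responses (noise,
    flicker) that would mislead a frame-by-frame decision, which is the
    basic benefit of reasoning in the aggregated space-time volume.
    """
    T = score_volume.shape[0]
    half = window // 2
    smoothed = np.empty_like(score_volume)
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        smoothed[t] = np.median(score_volume[lo:hi], axis=0)
    return smoothed

# Toy example: a spurious response in a single frame vanishes after smoothing.
scores = np.zeros((7, 1, 1))
scores[3] = 1.0                            # isolated false alarm in frame 3
print(temporal_median(scores)[:, 0, 0])    # all values are 0 again
```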
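For the third bullet point, the following sketch shows one simple flavour of learning from labelled and unlabelled data: a self-training loop around a nearest-centroid classifier. The base classifier, confidence threshold and number of rounds are illustrative assumptions, not the specific techniques the project will adopt:

```python
import numpy as np

def self_training(X_lab, y_lab, X_unlab, rounds=3, conf_thresh=0.8):
    """Minimal self-training loop around a nearest-centroid classifier.

    Labelled samples define one centroid per class.  In each round the
    unlabelled samples that are assigned to a centroid with sufficient
    confidence receive that pseudo-label and are added to the labelled
    pool, after which the centroids are re-estimated.  This illustrates
    how unlabelled data can refine a model trained from few labels.
    """
    classes = np.unique(y_lab)
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
        # Soft assignment: softmax over negative distances to the centroids.
        dist = np.linalg.norm(X_unlab[:, None, :] - centroids[None, :, :], axis=2)
        weights = np.exp(-dist)
        conf = weights / weights.sum(axis=1, keepdims=True)
        best = conf.argmax(axis=1)
        confident = conf.max(axis=1) > conf_thresh
        if not confident.any():
            break
        X = np.vstack([X, X_unlab[confident]])
        y = np.concatenate([y, classes[best[confident]]])
        X_unlab = X_unlab[~confident]
    return X, y

# Toy example: two labelled points per class plus three unlabelled points.
X_lab = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
y_lab = np.array([0, 0, 1, 1])
X_unlab = np.array([[0.1, 0.3], [4.9, 5.1], [2.5, 2.5]])  # last point stays ambiguous
X_new, y_new = self_training(X_lab, y_lab, X_unlab)
print(y_new)   # the two unambiguous points have been pseudo-labelled
```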
[1] G. D. Hager, Task-Directed Sensor Fusion and Planning: A Computational Approach, The Kluwer International Series in Engineering and Computer Science, Kluwer Academic Publishers, Dordrecht, The Netherlands, ISBN 0-7923-9108-X, 1990.