Brostow, G. J., Shotton, J., Fauqueur, J. and Cipolla, R. (2008) Segmentation and recognition using structure from motion point clouds. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 5302. pp. 44-57. ISSN 0302-9743
We propose an algorithm for semantic segmentation based on 3D point clouds derived from ego-motion. We motivate five simple cues designed to model specific patterns of motion and 3D world structure that vary with object category. We introduce features that project the 3D cues back to the 2D image plane while modeling spatial layout and context. A randomized decision forest combines many such features to achieve a coherent 2D segmentation and recognize the object categories present. Our main contribution is to show how semantic segmentation is possible based solely on motion-derived 3D world structure. Our method works well on sparse, noisy point clouds, and unlike existing approaches, does not need appearance-based descriptors. Experiments were performed on a challenging new video database containing sequences filmed from a moving car in daylight and at dusk. The results confirm that accurate segmentation and recognition are indeed possible using only motion and 3D world structure. Further, we show that the motion-derived information complements an existing state-of-the-art appearance-based method, improving both qualitative and quantitative performance. © 2008 Springer Berlin Heidelberg.
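To illustrate the classification idea in the abstract — per-point motion-derived cues combined by a randomized decision forest — here is a minimal, stdlib-only sketch. The cue names (`height`, `dist_to_path`, `track_density`) and the toy training values are invented for illustration and are not the paper's actual features; the paper's forest operates on many 2D-projected features with spatial context, whereas this sketch trains a tiny forest of trees that each pick split features at random.

```python
import random

def build_tree(X, y, rng, depth=0, max_depth=8):
    """Recursively grow a decision tree with randomized feature choice.

    Internal nodes are tuples (feature, threshold, left, right);
    leaves are class labels.
    """
    if len(set(y)) == 1 or depth == max_depth:
        return max(set(y), key=y.count)      # leaf: majority label
    f = rng.randrange(len(X[0]))             # randomized feature selection
    vals = [x[f] for x in X]
    t = (min(vals) + max(vals)) / 2.0        # midpoint threshold on that cue
    li = [i for i, x in enumerate(X) if x[f] <= t]
    ri = [i for i, x in enumerate(X) if x[f] > t]
    if not li or not ri:                     # feature constant here: make a leaf
        return max(set(y), key=y.count)
    left = build_tree([X[i] for i in li], [y[i] for i in li],
                      rng, depth + 1, max_depth)
    right = build_tree([X[i] for i in ri], [y[i] for i in ri],
                       rng, depth + 1, max_depth)
    return (f, t, left, right)

def predict_tree(node, x):
    """Route a feature vector down one tree to a leaf label."""
    while isinstance(node, tuple):
        f, t, left, right = node
        node = left if x[f] <= t else right
    return node

def fit_forest(X, y, n_trees=15, seed=0):
    """Train an ensemble of randomized trees on the same data."""
    rng = random.Random(seed)
    return [build_tree(X, y, rng) for _ in range(n_trees)]

def predict(forest, x):
    """Majority vote over the trees' per-point predictions."""
    votes = [predict_tree(t, x) for t in forest]
    return max(set(votes), key=votes.count)

# Hypothetical per-point cues: [height, dist_to_path, track_density].
road = [[0.1, 0.5, 0.2], [0.2, 0.6, 0.3],
        [0.3, 0.4, 0.25], [0.15, 0.55, 0.35]]
building = [[3.0, 2.5, 1.2], [3.5, 2.8, 1.1],
            [4.0, 3.0, 1.3], [3.2, 2.6, 1.4]]
X = road + building
y = ["road"] * 4 + ["building"] * 4

forest = fit_forest(X, y)
print(predict(forest, [0.12, 0.5, 0.3]))   # a low, near-path point -> "road"
print(predict(forest, [3.8, 2.7, 1.2]))    # a tall, off-path point -> "building"
```

The ensemble's robustness comes from averaging many weak, randomly constructed trees, which is also what lets the paper's method tolerate sparse, noisy structure-from-motion points.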
Divisions: Div F > Machine Intelligence
Date Deposited: 16 Jul 2015 13:36
Last Modified: 26 Jul 2015 00:08