Partially Occluded Object-Specific Segmentation
in View-Based Recognition



We present a novel object-specific segmentation method for view-based object recognition systems. Previous object segmentation approaches produce inexact results, especially in partially occluded and cluttered environments, because their top-down strategies fail to capture the details of specific objects. In contrast, our method efficiently exploits the matched model views obtained in view-based recognition, since the model view aligned to the input image can serve as the best top-down cue for object segmentation. In this paper, we cast the problem of partially occluded object segmentation as that of simultaneously labelling, for each pixel, the displacement and foreground status between the aligned model view and the input image. The problem is formulated as a maximum a posteriori Markov random field (MAP-MRF) model that minimizes a particular energy function. By additionally incorporating a bottom-up segmentation cue, our method overcomes complex occlusion and clutter and yields accurate segmentation boundaries. Experimental results on various objects in occluded and cluttered environments demonstrate its efficiency and robustness.
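As a rough illustration of the kind of labelling problem described above (not the paper's actual energy function), the sketch below assigns each pixel a joint label consisting of a displacement and a foreground/occlusion flag, and minimizes a generic MRF energy with a data term (appearance agreement with the aligned model view, with a fixed penalty for occluded pixels) plus a Potts-like smoothness term over neighbours. All costs, label sets, and the tiny 1-D "image" are illustrative assumptions; real systems would use an optimizer such as graph cuts rather than exhaustive search.

```python
import itertools

# Hypothetical sketch: exhaustive MAP-MRF minimization on a tiny 1-D "image".
# Label = (displacement, is_foreground); cost terms are illustrative
# assumptions, not the paper's actual formulation.

PIXELS = [0.9, 0.8, 0.1]           # observed intensities
MODEL  = [1.0, 0.9, 0.8]           # aligned model-view intensities
DISPS  = [0, 1]                    # candidate 1-D displacements
LABELS = [(d, f) for d in DISPS for f in (0, 1)]

def data_cost(p, label):
    """Appearance mismatch for foreground; fixed penalty for occlusion."""
    d, f = label
    if f == 0:                     # occluded / background pixel
        return 0.3
    q = min(max(p + d, 0), len(MODEL) - 1)
    return abs(PIXELS[p] - MODEL[q])

def smooth_cost(a, b):
    """Potts-like penalty for neighbouring labels that disagree."""
    d1, f1 = a
    d2, f2 = b
    return 0.2 * (abs(d1 - d2) + abs(f1 - f2))

def energy(labeling):
    e = sum(data_cost(p, l) for p, l in enumerate(labeling))
    e += sum(smooth_cost(labeling[i], labeling[i + 1])
             for i in range(len(labeling) - 1))
    return e

# Exhaustive search over all joint labelings (feasible only at toy scale).
best = min(itertools.product(LABELS, repeat=len(PIXELS)), key=energy)
```

In this toy example the third pixel disagrees strongly with the model view, so the minimum-energy labeling marks it as occluded while the first two pixels take the displacement that best matches the model.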



CVPR 2007 paper. (pdf, 0.5MB)


Minsu Cho and Kyoung Mu Lee, "Partially Occluded Object-Specific Segmentation in View-Based Recognition," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.