Tracking by Sampling Trackers
Authors

Junseok Kwon, Kyoung Mu Lee

Abstract

We propose a novel tracking framework called visual tracker sampler that tracks a target robustly by searching for the appropriate trackers in each frame. Since the real-world tracking environment varies severely over time, the trackers should be adapted or newly constructed depending on the current situation. To do this, our method obtains several samples of not only the states of the target but also the trackers themselves during the sampling process. The trackers are efficiently sampled using the Markov Chain Monte Carlo method from the predefined tracker space by proposing new appearance models, motion models, state representation types, and observation types, which are the basic important components of visual trackers. Then, the sampled trackers run in parallel and interact with each other while covering various target variations efficiently. The experiment demonstrates that our method tracks targets accurately and robustly in real-world tracking environments and outperforms the state-of-the-art tracking methods.
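
The sampling procedure described above can be illustrated with a short sketch. The following Python snippet is only a simplified illustration, not the released implementation: the component lists, the score() stub, and all parameters are hypothetical placeholders, and a basic Metropolis-Hastings acceptance rule stands in for the paper's full MCMC formulation.

# Illustrative sketch only: component lists, the score() stub, and all
# parameters are hypothetical and are not taken from the released code.
import random
from dataclasses import dataclass, replace

# A small, predefined tracker space: each basic component has a few choices.
APPEARANCE_MODELS = ["hue-histogram", "edge-template", "raw-patch"]
MOTION_MODELS = ["small-step-gaussian", "large-step-gaussian"]
STATE_TYPES = ["bounding-box", "center-and-scale"]
OBSERVATION_TYPES = ["color", "intensity"]

@dataclass(frozen=True)
class Tracker:
    appearance: str
    motion: str
    state_type: str
    observation: str

def score(tracker: Tracker, frame_id: int) -> float:
    """Stand-in for the tracker's likelihood on the current frame.
    A real system would run the tracker and evaluate how well its state
    estimate explains the observation; here a seeded RNG fakes that value."""
    return random.Random(hash((tracker, frame_id))).random()

def propose(tracker: Tracker, rng: random.Random) -> Tracker:
    """Propose a neighboring tracker by changing one randomly chosen component."""
    choices = {
        "appearance": APPEARANCE_MODELS,
        "motion": MOTION_MODELS,
        "state_type": STATE_TYPES,
        "observation": OBSERVATION_TYPES,
    }
    field = rng.choice(list(choices))
    return replace(tracker, **{field: rng.choice(choices[field])})

def sample_trackers(frame_id: int, n_steps: int = 50, n_keep: int = 4,
                    seed: int = 0) -> list[Tracker]:
    """Metropolis-Hastings-style walk over the tracker space for one frame."""
    rng = random.Random(seed)
    current = Tracker(APPEARANCE_MODELS[0], MOTION_MODELS[0],
                      STATE_TYPES[0], OBSERVATION_TYPES[0])
    current_score = score(current, frame_id)
    visited = {current}
    for _ in range(n_steps):
        candidate = propose(current, rng)
        candidate_score = score(candidate, frame_id)
        # Accept with probability given by the likelihood ratio (capped at 1).
        if rng.random() < min(1.0, candidate_score / max(current_score, 1e-9)):
            current, current_score = candidate, candidate_score
        visited.add(current)
    # Keep the most promising distinct trackers; these would then run in
    # parallel on the frame and interact to produce the final target state.
    return sorted(visited, key=lambda t: score(t, frame_id), reverse=True)[:n_keep]

if __name__ == "__main__":
    for t in sample_trackers(frame_id=0):
        print(t)

The sketch covers only the per-frame sampling step; in the actual framework the sampled trackers also run in parallel and interact with each other to estimate the target state.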


Paper

ICCV 2011 paper (pdf, 5.7MB)
Supplementary material (pdf, 33KB)
PPT (ppt, 5.2MB)

Citation

1. Junseok Kwon, Kyoung Mu Lee. Tracking by Sampling Trackers. IEEE International Conference on Computer Vision (ICCV), 2011. [Bibtex]

2. Junseok Kwon, Kyoung Mu Lee. Visual Tracking Decomposition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010. [project page]


Results

Video (wmv, 34.4MB)


Code

Binary code (zip, 248MB, version 0.3, 2012/05/03)
Dataset: images (zip, 141MB)
Dataset: videos (zip, 493MB)

% Ironman, matrix, soccer (noise), skating1 (noise), david, and tiger sequences are included in the datasets.
% Other test sequences are available at [VTD tracker]
% Original tiger1 and david indoor sequences are available at:
% http://vision.ucsd.edu/~bbabenko/project_miltrack.shtml
% http://www.cs.toronto.edu/~dross/ivt/