The VOT challenges provide the visual tracking community with a precisely defined, repeatable way of comparing short-term trackers, as well as a common platform for discussing evaluation methodology and advances in the field of visual tracking.

The goal of the challenges is to build a repository of substantial benchmarks and to organize workshops and similar events in order to push forward research in visual tracking.


An online repository of sequences and results.

The database is currently under construction.

Citing VOT Challenge

When using any of the VOT benchmarks in your paper, please cite the VOT journal paper as well as the VOT workshop paper describing the benchmark you used.

@article{VOT_TPAMI,
  author  = {Matej Kristan and Jiri Matas and Ale\v{s} Leonardis and Tomas Vojir and Roman Pflugfelder and Gustavo Fernandez and Georg Nebehay and Fatih Porikli and Luka \v{C}ehovin},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title   = {A Novel Performance Evaluation Methodology for Single-Target Trackers},
  year    = {2016},
  month   = {Nov},
  volume  = {38},
  number  = {11},
  pages   = {2137-2155},
  doi     = {10.1109/TPAMI.2016.2516982},
  issn    = {0162-8828}
}

Highlights and news

VOT2018 announcement

We have received several inquiries about whether the VOT challenge will be organized this year.

The VOT2018 is planned, but its form depends on several factors. The VOT workshop proposal is pending, and a joint results paper will be subject to acceptance of the workshop proposal; the challenge itself, however, does not depend on the proposal being accepted. If all goes well, the challenge will be announced in early April and will open in late April or early May, with a results submission deadline in early June.

This year we plan to run (i) the VOT main challenge, (ii) the VOT-TIR sub-challenge, and (iii) the VOT-realtime sub-challenge. In addition, a new VOT-long-term sub-challenge is planned, which will address long-term trackers (a class of trackers that cope with situations where the target leaves the field of view and re-enters it after some time).

The VideoNet initiative

VideoNet is a new initiative to bring together the community of researchers who have put effort into creating benchmarks for video tasks. The goal of VideoNet is to exchange ideas on how to improve annotations and evaluation measures, and to learn from each other's experiences. More information is available at the official page.

VOT paper was accepted at Transactions on Pattern Analysis and Machine Intelligence

We are happy to announce that the VOT (2013-2014) methodology paper has been accepted to the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). The paper can be accessed via the link on the VOT publications page. For future reference, if you use any of the VOT datasets in your evaluation, please cite this paper as the methodology reference, as well as the relevant VOT workshop paper for the dataset.