The VOT challenges provide the visual tracking community with a precisely defined and repeatable way of comparing short-term trackers, as well as a common platform for discussing evaluation and advancements in the field of visual tracking.

The goal of the challenges is to build up a substantial repository of tracking benchmarks and to organize workshops and similar events in order to push forward research in visual tracking.


Database

An online repository of sequences and results.

The database is currently under construction.

Citing VOT Challenge

When using any of the VOT benchmarks in your paper, please cite the VOT journal paper as well as the VOT workshop paper that describes the benchmark you used.

@article{VOT_TPAMI,
  author  = {Matej Kristan and Jiri Matas and Ale\v{s} Leonardis and Tomas Vojir and Roman Pflugfelder and Gustavo Fernandez and Georg Nebehay and Fatih Porikli and Luka \v{C}ehovin},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title   = {A Novel Performance Evaluation Methodology for Single-Target Trackers},
  year    = {2016},
  month   = {Nov},
  volume  = {38},
  number  = {11},
  pages   = {2137-2155},
  doi     = {10.1109/TPAMI.2016.2516982},
  ISSN    = {0162-8828}
}
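
For example, in a LaTeX manuscript you could cite the journal paper together with the workshop paper of the benchmark you used, as in the minimal sketch below; the key VOT2020_Workshop is a hypothetical placeholder for the BibTeX entry of the corresponding workshop paper.

% Minimal citation sketch (LaTeX). VOT2020_Workshop is a placeholder key;
% replace it with the entry for the workshop paper of the benchmark you used.
Trackers are evaluated following the VOT methodology~\cite{VOT_TPAMI}
on the VOT2020 benchmark~\cite{VOT2020_Workshop}.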

Highlights and news

The new VOT2020 evaluation protocol

We have introduced a new short-term evaluation protocol and new performance measures for the VOT2020 Challenge. More information about the protocol and measures is available on the VOT2020 challenge page.

VOT2020 challenge is now open!

The VOT2020 challenge is now officially open. You can participate by submitting a scientific paper and/or by evaluating your tracker on one of the four VOT2020 sub-challenges. Please see the challenge homepage for details.

Due to the COVID-19 lockdown, the VOT-RGBT sub-challenge cannot be launched on the intended date. We still hope to launch the sub-challenge before the end of March and will inform you no later than March 30th.

The VideoNet initiative

VideoNet is a new initiative to bring together the community of researchers who have put effort into creating benchmarks for video tasks. The goal of VideoNet is to exchange ideas on how to improve annotations and evaluation measures, and to learn from each other's experiences. More information is available on the official page.

VOT paper accepted to the IEEE Transactions on Pattern Analysis and Machine Intelligence

We are happy to announce that the VOT (2013-2014) methodology paper has been accepted to the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). The paper can be accessed from the VOT publications page. For future reference, if you use any of the VOT datasets in your evaluation, please cite this paper as the methodology reference, as well as the relevant VOT workshop paper for the dataset.