The iLIDS-VID dataset contains 300 distinct pedestrians observed across two disjoint camera views in a public open space. Two versions are provided: a static-image-based version (see the folder named "ILIDS-VID\images") and an image-sequence-based version (see the folder named "ILIDS-VID\sequences").

Details

This dataset was created from pedestrians observed in two non-overlapping camera views of the i-LIDS Multiple-Camera Tracking Scenario (MCTS) dataset, which was captured at an airport arrival hall under a multi-camera CCTV network. It comprises 600 image sequences of 300 distinct individuals, with one pair of image sequences from the two camera views for each person. Each image sequence has a variable length, ranging from 23 to 192 frames with an average of 73. The iLIDS-VID dataset is very challenging due to clothing similarities among people, lighting and viewpoint variations across camera views, cluttered backgrounds and random occlusions. To facilitate the evaluation of single-shot person re-identification methods on this dataset, we also provide a static-image-based version, built by randomly selecting one image from each person's image sequence. Benchmark training/test identity splits are provided for fair comparison across different state-of-the-art methods in the literature.
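For sequence-based methods, the frames must first be grouped per camera and per identity. Below is a minimal Python sketch of indexing such a folder tree; it assumes a sequences/&lt;camera&gt;/&lt;person&gt;/&lt;frames&gt; layout, which is an assumption about the on-disk structure and should be checked against the downloaded archive:

```python
import os
import tempfile

def index_sequences(root):
    """Build {camera: {person_id: [sorted frame paths]}} from a sequences folder.

    Assumes the layout root/<camera>/<person>/<frame files>; adjust the
    folder names to match the actual archive.
    """
    index = {}
    for cam in sorted(os.listdir(root)):
        cam_dir = os.path.join(root, cam)
        if not os.path.isdir(cam_dir):
            continue
        index[cam] = {}
        for person in sorted(os.listdir(cam_dir)):
            person_dir = os.path.join(cam_dir, person)
            if os.path.isdir(person_dir):
                index[cam][person] = sorted(
                    os.path.join(person_dir, f) for f in os.listdir(person_dir)
                )
    return index

# Demo on a synthetic tree mimicking the assumed layout.
root = tempfile.mkdtemp()
for cam in ("cam1", "cam2"):
    for pid in ("person001", "person002"):
        d = os.path.join(root, cam, pid)
        os.makedirs(d)
        for i in range(3):
            open(os.path.join(d, f"{i:05d}.png"), "w").close()

idx = index_sequences(root)
print(sorted(idx))                    # ['cam1', 'cam2']
print(len(idx["cam1"]["person001"]))  # 3
```

The per-camera grouping mirrors the evaluation protocol, in which one camera serves as the probe set and the other as the gallery.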

In downloading this dataset, we assume that you have the right to access the i-LIDS MCTS scenario. The dataset is intended for research purposes only and may not be used commercially. Please cite the following publications when this dataset is used in any academic or research report.

References

  1. X. Ma, X. Zhu, S. Gong, X. Xie, J. Hu, K-M. Lam and Y. Zhong.
    Person Re-Identification by Unsupervised Video Matching.
    Pattern Recognition, Vol. 65, pp. 197-210, May 2017. (PR)
  2. T. Wang, S. Gong, X. Zhu and S. Wang.
    Person Re-Identification by Discriminative Selection in Video Ranking.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 38, No. 12, pp. 2501-2514, December 2016. (TPAMI)
  3. T. Wang, S. Gong, X. Zhu and S. Wang.
    Person Re-Identification by Video Ranking.
    In Proc. European Conference on Computer Vision (ECCV), Zurich, Switzerland, September 2014.

State-Of-The-Art Results

Method (Rank-1 / Rank-5 / Rank-10 / Rank-20 matching rates, %)
[1] Person Re-Identification by Discriminative Selection in Video Ranking. T. Wang, S. Gong, X. Zhu, S. Wang (TPAMI, 2016) 39.5 61.1 71.7 81.0
[2] Top-push Video-based Person Re-identification. J. You, A. Wu, X. Li, W-S Zheng (CVPR, 2016) 56.3 87.6 95.6 98.3
[3] Recurrent Convolutional Network for Video-Based Person Re-Identification. N. McLaughlin, J. M. Rincon, P. Miller (CVPR, 2016) 58.0 84.0 91.0 96.0
[4] A Spatio-Temporal Appearance Representation for Video-Based Pedestrian Re-Identification. K. Liu, B. Ma, W. Zhang, R. Huang (ICCV, 2015) 44.3 71.7 83.7 91.7
[5] Deep Recurrent Convolutional Networks for Video-based Person Re-identification: An End-to-End Approach. L. Wu, C. Shen, A. Hengel (arXiv, 2016) 46.1 76.8 89.7 95.6
[6] MARS: A Video Benchmark for Large-Scale Person Re-identification. L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, Q. Tian (ECCV, 2016) 53.0 81.4 -- 95.1
[7] Video-Based Person Re-Identification by Simultaneously Learning Intra-Video and Inter-Video Distance Metrics. X. Zhu, X-Y Jing, F. Wu, H. Feng (IJCAI, 2016) 48.7 81.1 89.2 97.3
[8] Improving Person Re-identification via Pose-aware Multi-shot Matching. Y-J Cho, K-J Yoon (CVPR, 2016) 30.3 56.3 70.3 82.7
[9] Multi-Shot Human Re-Identification Using Adaptive Fisher Discriminant Analysis. Y. Li, Z. Wu, S. Karanam, R. J. Radke (BMVC, 2015) 37.5 62.7 73.0 81.8
[10] Person Re-identification for Real-world Surveillance Systems. Furqan M. Khan and Francois Bremond (arXiv, 2016) 39.9 65.5 77.0 84.2
[11] Person Re-Identification with Discriminatively Trained Viewpoint Invariant Dictionaries. S. Karanam, Y. Li, R. J. Radke (ICCV, 2015) 25.9 48.2 57.3 68.9
[12] Temporally Aligned Pooling Representation for Video-Based Person Re-Identification. C. Gao, J. Wang, L. Liu, J-G Yu, N. Sang (ICIP, 2016) 55.0 87.5 93.8 97.2
[13] A Systematic Evaluation and Benchmark for Person Re-Identification: Features, Metrics, and Datasets. Srikrishna Karanam, Mengran Gou, Ziyan Wu, Angels Rates-Borras, Octavia Camps, Richard J. Radke (arXiv, 2016) 75.7 90.1 93.6 96.5
[14] Learning Bidirectional Temporal Cues for Video-based Person Re-Identification. W. Zhang, X. Yu, X. He (IEEE TCSVT, 2017) 55.3 85.0 91.7 95.1
[15] See the Forest for the Trees: Joint Spatial and Temporal Recurrent Neural Networks for Video-based Person Re-identification. Z. Zhou, Y. Huang, W. Wang, Liang Wang, T. Tan (CVPR, 2017) 55.2 86.5 -- 97.0
[16] Jointly Attentive Spatial-Temporal Pooling Networks for Video-based Person Re-Identification. S. Xu, Y. Cheng, K. Gu, Y. Yang, S. Chang, P. Zhou (ICCV, 2017) 62 86 94 98
[17] Multi-shot Person Re-identification using Part Appearance Mixture. Furqan M. Khan and Francois Bremond (WACV, 2017) 79.5 95.1 97.6 99.1
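The Rank-k figures above are points on the Cumulative Matching Characteristic (CMC) curve: the percentage of probes whose correct gallery identity appears among the top-k ranked candidates. A minimal sketch of computing these rates from a precomputed probe-gallery distance matrix (the distances themselves would come from whichever matching model is being evaluated):

```python
import numpy as np

def cmc(dist, probe_ids, gallery_ids, ranks=(1, 5, 10, 20)):
    """CMC matching rates: for each k, the fraction of probes whose
    correct gallery identity appears within the k nearest matches."""
    order = np.argsort(dist, axis=1)              # gallery sorted by distance, per probe
    matches = gallery_ids[order] == probe_ids[:, None]
    first_hit = matches.argmax(axis=1)            # rank position of the first correct match
    return {k: float((first_hit < k).mean()) for k in ranks}

# Toy example: 4 probes against a single-shot gallery of 4 identities.
rng = np.random.default_rng(0)
probe_ids = np.array([0, 1, 2, 3])
gallery_ids = np.array([0, 1, 2, 3])
dist = rng.random((4, 4))
dist[np.arange(4), np.arange(4)] = 0.0            # make each probe's true match closest
print(cmc(dist, probe_ids, gallery_ids))          # rank-1 rate of 1.0 by construction
```

In the standard iLIDS-VID protocol the 300 identities are split into 150 for training and 150 for testing, with one camera used as the probe set and the other as the gallery.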