1 · Introduction

This repository contains the evaluation scripts for the online self-localization challenge of the ApolloScapes dataset, where we have extended the dataset with more scenes and 100x more data, including videos recorded under different lighting conditions (morning, noon, and night) with stereo image pairs. A test set for each new scene will be withheld for benchmarking. (Note that point clouds will not be provided for the very large data due to the size of the dataset.)

Details and downloads for data from different roads are available. Here are some notable facts:

For each road, we record by driving from start to end and then from end to start at different times of day, so each site along the road is observed from two opposite directions. For each road, we provide the set of record IDs captured start-to-end and the set captured end-to-start in the training set under LoopDirection. Corresponding images can be discovered from the camera poses we provide.
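Pairing images across the two driving directions can be done by nearest camera position. The sketch below is a minimal illustration of that idea; the `(image_name, x, y, z)` pose tuples are a hypothetical format, and the actual ApolloScape pose files may use a different layout (e.g. full rotation plus translation).

```python
import math

def nearest_pose_pairs(forward_poses, reverse_poses):
    """For each forward-direction pose, find the closest reverse-direction pose.

    Poses are hypothetical (image_name, x, y, z) tuples; only the
    translation is compared here, which is enough to find images taken
    near the same site along the road.
    """
    pairs = []
    for name_f, xf, yf, zf in forward_poses:
        # Pick the reverse-direction pose with the smallest Euclidean distance.
        best = min(
            reverse_poses,
            key=lambda p: math.dist((xf, yf, zf), (p[1], p[2], p[3])),
        )
        pairs.append((name_f, best[0]))
    return pairs
```

A real matcher would likely also filter on viewing direction, since the paired images face opposite ways.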

In this challenge, we regard records from forward (start-to-end) and reverse (end-to-start) driving as records from two different roads; that is, we will not use forward videos for training while using reverse-driving videos for testing. However, exploring such cross-direction setups could be interesting in your own research, as shown in the work on Semantic Visual Localization.

2 · Dataset Download

Sample data

Training data

Point Cloud

Testing data

More information on the data structure of our dataset, the evaluation metric, and the evaluation script can be found on our GitHub website.
If you want to participate in our challenge, please submit your results here.

3 · Leaderboard

Team Name

4 · Publication

Please cite our paper in your publications if our dataset is used in your research.
Xinyu Huang, Xinjing Cheng, Qichuan Geng, Binbin Cao, Dingfu Zhou, Peng Wang, Yuanqing Lin, and Ruigang Yang, The ApolloScape Dataset for Autonomous Driving, arXiv: 1803.06184, 2018
[PDF]   [BibTex]

The released dataset consists of desensitized street views and is for academic use only.