1 · Introduction

We used an Apollo acquisition car to collect traffic data in urban traffic under varying lighting conditions and traffic densities. The dataset includes sequential LiDAR-based 3D point clouds with high-quality annotations and covers many challenging scenarios in which vehicles, bicycles, and pedestrians move among one another. It can be used for 3D detection and tracking tasks.

2 · Data Download

The 3D LiDAR object detection and tracking benchmark consists of about 53 minutes of training sequences and 50 minutes of testing sequences. The data is captured at 10 Hz and labeled at 2 Hz. We provide both the raw data and the labeled data.

Training data
Testing data

3 · Data Structure

The folder structure of the 3D LiDAR object detection and tracking data is as follows:

1) tracking_train.zip: training data for 3D LiDAR object detection and tracking. LiDAR data is provided in PCD (Point Cloud Data) and bin file formats.

2) tracking_train_label.zip: label data for 3D LiDAR object detection and tracking.

∙ Each file is a 1-minute sequence labeled at 2 fps.
∙ Each line of every file contains: frame_id, object_id, object_type, position_x, position_y, position_z, object_length, object_width, object_height, heading.
∙ For object_type: 1 for small vehicles, 2 for big vehicles, 3 for pedestrians, 4 for bicyclists, and 5 for others. We treat the first two types as one type (vehicles) in this challenge.
∙ Positions are in the world coordinate system. We treat the traffic as moving in a 2D plane and ignore position_z. The unit for positions and bounding box dimensions is the meter.
∙ The heading value is the orientation of the object, in radians.
∙ You do not need to use all the information we provide. Only your predicted position_x and position_y for the next several seconds are considered.
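As an illustration, a label line in this format could be parsed as follows. This is a minimal sketch: the whitespace-separated layout and the helper name `parse_label_line` are assumptions, not part of the official toolkit.

```python
# Hypothetical sketch: parse one line of a tracking_train_label file into a dict.
# Field names follow the order documented above; the whitespace-separated
# layout is an assumption.

FIELDS = ["frame_id", "object_id", "object_type", "position_x", "position_y",
          "position_z", "object_length", "object_width", "object_height", "heading"]

def parse_label_line(line):
    """Split a whitespace-separated label line into named, typed fields."""
    record = dict(zip(FIELDS, line.split()))
    # frame_id, object_id, and object_type are integer-valued.
    for key in ("frame_id", "object_id", "object_type"):
        record[key] = int(record[key])
    # Positions and box dimensions are meters; heading is radians.
    for key in FIELDS[3:]:
        record[key] = float(record[key])
    return record
```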

3) tracking_test.zip: testing data for 3D LiDAR object detection and tracking.

4 · Evaluation

To test your method, run your approach on the provided test set and submit your results to our challenge.

We have labeled five types of instances in our dataset: 1 for small vehicles, 2 for big vehicles, 3 for pedestrians, 4 for bicyclists, and 5 for others. However, we treat the first two types as one type (vehicles) in this challenge. Evaluation is performed only for vehicles, pedestrians, and bicyclists.

5 · Metric formula

1) 3D detection

We use a metric similar to the one defined in KITTI [2]. The goal of the 3D object detection task is to train object detectors for the classes 'vehicle', 'pedestrian', and 'bicyclist'. The object detectors must provide the 3D bounding box (3D dimensions and 3D location) and a detection score/confidence. Note that not all objects in the point clouds have been labeled. We evaluate 3D object detection performance using mean average precision (mAP), based on IoU; the evaluation criterion is similar to the 2D object detection benchmark, but uses 3D bounding box overlap. The final metric is the mean of the mAP of vehicles (mAPv), pedestrians (mAPp), and bicyclists (mAPb):

mAPm = (mAPv + mAPp + mAPb) / 3
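The per-class AP computation can be sketched as follows. This is a hedged sketch, not the official evaluation code: the 11-point interpolation follows common KITTI/PASCAL VOC practice, and `matches` / `num_gt` are hypothetical names for the score-ranked detection outcomes and the ground-truth count of one class.

```python
# Sketch of 11-point interpolated average precision for one class.
# `matches` marks each detection (sorted by descending confidence) as a
# true positive (True) or false positive (False) under the IoU criterion;
# `num_gt` is the number of ground-truth boxes of that class.

def average_precision(matches, num_gt):
    tp = fp = 0
    precisions, recalls = [], []
    for is_tp in matches:
        tp += is_tp
        fp += (not is_tp)
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_gt)
    # 11-point interpolation: max precision at recall >= r, r = 0.0, 0.1, ..., 1.0
    ap = 0.0
    for r in [i / 10 for i in range(11)]:
        p = max((p for p, rec in zip(precisions, recalls) if rec >= r), default=0.0)
        ap += p / 11
    return ap

def mean_map(map_v, map_p, map_b):
    """Final metric: mean of the per-class mAP values."""
    return (map_v + map_p + map_b) / 3
```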

2) 3D tracking

Following CLEAR MOT [1], we use the multiple object tracking accuracy (MOTA) as the evaluation criterion:

MOTA = 1 − Σ_t (m_t + fp_t + mme_t) / Σ_t g_t

where m_t, fp_t, mme_t, and g_t are the number of misses, false positives, mismatches, and objects present, respectively, at time t.
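MOTA can be computed directly from per-frame error counts, e.g. as in this minimal sketch (the `frames` input is a hypothetical list of per-frame tuples, not an official data structure):

```python
# Minimal MOTA sketch following CLEAR MOT [1].
# `frames` is a hypothetical list of per-frame tuples:
#   (misses, false_positives, mismatches, objects_present)

def mota(frames):
    errors = sum(m + fp + mme for m, fp, mme, _ in frames)
    num_objects = sum(g for _, _, _, g in frames)
    return 1.0 - errors / num_objects
```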

For an object o and a tracker hypothesis h, we use an intersection-over-union (IoU) threshold to define a mismatch:

IoU(o, h) = |B_o ∩ B_h| / |B_o ∪ B_h|

where B_o and B_h are the corresponding 3D bounding boxes for o and h. We set the IoU threshold to 0.5: if IoU(o, h) is less than 0.5, the tracker is considered to have missed the object.
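A simplified IoU check can be sketched as follows. This sketch assumes axis-aligned boxes and ignores the heading angle for brevity (the actual evaluation uses oriented boxes); the `(cx, cy, cz, length, width, height)` tuple layout is an assumption.

```python
# Simplified IoU for two axis-aligned 3D boxes (heading ignored for brevity).
# Each box is a hypothetical tuple (cx, cy, cz, length, width, height),
# centered at (cx, cy, cz) with extents in meters.

def iou_3d_axis_aligned(a, b):
    def overlap(c1, s1, c2, s2):
        # Overlap of two 1D intervals centered at c with size s.
        lo = max(c1 - s1 / 2, c2 - s2 / 2)
        hi = min(c1 + s1 / 2, c2 + s2 / 2)
        return max(0.0, hi - lo)

    inter = (overlap(a[0], a[3], b[0], b[3])
             * overlap(a[1], a[4], b[1], b[4])
             * overlap(a[2], a[5], b[2], b[5]))
    vol_a = a[3] * a[4] * a[5]
    vol_b = b[3] * b[4] * b[5]
    union = vol_a + vol_b - inter
    return inter / union if union > 0 else 0.0

def is_match(obj_box, hyp_box, threshold=0.5):
    """An object/hypothesis pair matches when IoU reaches the threshold."""
    return iou_3d_axis_aligned(obj_box, hyp_box) >= threshold
```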

We evaluate only vehicles, pedestrians, and bicyclists; their corresponding MOTA scores are MOTAv, MOTAp, and MOTAb. The mean MOTA is defined as:

MOTAm = (MOTAv + MOTAp + MOTAb) / 3

6 · Rules of ranking

1) 3D detection

The results table will be as follows:

Rank | Method | mAPm | mAPv | mAPp | mAPb | Team Name
---- | ------ | ---- | ---- | ---- | ---- | ---------
xxx  | xxx    | xxx  | xxx  | xxx  | xxx  | xxx

Our ranking will be determined by mAPm.

2) 3D tracking

The results table will be as follows:

Rank | Method | MOTAm | MOTAv | MOTAp | MOTAb | Team Name
---- | ------ | ----- | ----- | ----- | ----- | ---------
xxx  | xxx    | xxx   | xxx   | xxx   | xxx   | xxx

Our ranking will be determined by MOTAm.

7 · Format of submission file

1) 3D detection

2) 3D tracking

8 · Publication

Please cite our paper in your publications if you use our dataset in your research.

TrafficPredict: Trajectory Prediction for Heterogeneous Traffic-Agents [PDF]
Yuexin Ma, Xinge Zhu, Sibo Zhang, Ruigang Yang, Wenping Wang, and Dinesh Manocha.
AAAI(oral), 2019

9 · Reference

[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.

[2] Geiger, Andreas, Philip Lenz, and Raquel Urtasun. "Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite." CVPR, 2012.

The released dataset contains desensitized street views and is for academic use only.