1 · Introduction

We used an Apollo acquisition vehicle to collect traffic data, including camera-based images and LiDAR-based point clouds, and generated trajectories through high-quality annotation. The trajectory dataset was collected in Beijing under varying lighting conditions and traffic densities. It includes many challenging scenarios in which vehicles, bicycles, and pedestrians move among one another.

2 · Data Download

The trajectory prediction benchmark consists of 53 minutes of training sequences and 50 minutes of testing sequences, both sampled at 2 fps.

3 · Data Structure

The folder structure of the trajectory prediction is as follows:

1) prediction_train.zip: training data for trajectory prediction.

∙ Each file is a 1-minute sequence sampled at 2 fps.
∙ Each line in every file contains: frame_id, object_id, object_type, position_x, position_y, position_z, object_length, object_width, object_height, heading.
∙ object_type is 1 for small vehicles, 2 for big vehicles, 3 for pedestrians, 4 for bicyclists, and 5 for others. The first two types are treated as a single type (vehicles) in this challenge.
∙ Positions are given in the world coordinate system. We treat traffic as moving in a 2D plane and ignore position_z. Positions and bounding-box dimensions are in meters.
∙ The heading value is the steering angle, in radians, with respect to the direction of the object.
∙ You do not need to use all the information provided; only your predicted position_x and position_y over the next several seconds are evaluated.
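The per-line layout above can be read into a small record type. This is a minimal sketch, not part of the official toolkit; the class and function names are our own:

```python
# Sketch: parse one whitespace-separated line of a prediction_train file.
# Field order follows the README; names here are assumptions, not official API.
from dataclasses import dataclass

@dataclass
class TrackRecord:
    frame_id: int
    object_id: int
    object_type: int   # 1/2: vehicles, 3: pedestrian, 4: bicyclist, 5: others
    x: float
    y: float
    z: float           # ignored for the 2D prediction task
    length: float      # bounding-box dimensions, in meters
    width: float
    height: float
    heading: float     # radians

def parse_line(line: str) -> TrackRecord:
    f = line.split()
    return TrackRecord(int(float(f[0])), int(float(f[1])), int(float(f[2])),
                       *map(float, f[3:10]))

rec = parse_line("10 5 1 447.8 331.5 35.2 4.7 2.1 1.6 0.35")
```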

2) prediction_test.zip: testing data for trajectory prediction.

∙ Each line contains frame_id, object_id, object_type, position_x, position_y, position_z, object_length, object_width, object_height, heading.
∙ Every six frames in prediction_test.txt form one testing sequence, and each sequence is independent. Read the file carefully.
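Since each group of six consecutive frames is an independent sequence, the test rows must be regrouped before prediction. A minimal sketch, assuming rows are already parsed into tuples whose first element is frame_id:

```python
# Sketch: split test rows into independent 6-frame observation sequences.
# Every six consecutive frame_ids in the file form one sequence.
from collections import OrderedDict

def split_sequences(rows, frames_per_seq=6):
    """rows: iterable of (frame_id, object_id, ...) tuples in file order."""
    frames = OrderedDict()
    for row in rows:
        frames.setdefault(row[0], []).append(row)
    frame_ids = list(frames)
    return [[frames[fid] for fid in frame_ids[i:i + frames_per_seq]]
            for i in range(0, len(frame_ids), frames_per_seq)]

# toy input: 12 frames, 2 objects per frame -> 2 independent sequences
rows = [(f, o, 0.0, 0.0) for f in range(1, 13) for o in (1, 2)]
seqs = split_sequences(rows)
```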

4 · Evaluation

To evaluate your method on the test set, run your approach on the provided test data and submit your results to our Challenge.

We have labeled five types of instances in our dataset: 1 for small vehicles, 2 for big vehicles, 3 for pedestrians, 4 for bicyclists, and 5 for others. In this challenge, the first two types are treated as a single type (vehicles), and evaluation is performed only for vehicles, pedestrians, and bicyclists. The task is to observe 3 s (6 positions) and predict trajectories for the following 3 s (6 positions). The objects present in the last frame of the observation are the considered objects; we compare the error between your predicted locations and the ground truth for these objects.

5 · Metric formula

We use the following metrics [1] to measure the performance of algorithms for predicting the trajectories of each type of object.

1. Average displacement error (ADE): the mean Euclidean distance between the predicted positions and the ground-truth positions over all timesteps of the prediction horizon.

2. Final displacement error (FDE): the mean Euclidean distance between the final predicted positions and the corresponding true locations.

Because the trajectories of vehicles, bicyclists, and pedestrians have different scales, we use the following weighted sum of ADE (WSADE) and weighted sum of FDE (WSFDE) as metrics:

WSADE = D_v · ADE_v + D_p · ADE_p + D_b · ADE_b
WSFDE = D_v · FDE_v + D_p · FDE_p + D_b · FDE_b

where the weights D_v, D_p, and D_b are related to the reciprocals of the average velocities of vehicles, pedestrians, and bicyclists in the dataset. We adopt D_v = 0.20, D_p = 0.58, and D_b = 0.22, respectively.
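The ADE/FDE definitions and the weighted sum can be sketched as follows; the function names are ours, and the weights are the values stated above:

```python
# Sketch of the metrics: ADE averages the Euclidean error over all predicted
# timesteps of a trajectory; FDE uses only the final timestep.
import math

def ade(pred, gt):
    """pred, gt: equal-length lists of (x, y) points for one trajectory."""
    return sum(math.dist(p, g) for p, g in zip(pred, gt)) / len(pred)

def fde(pred, gt):
    return math.dist(pred[-1], gt[-1])

def wsade(ade_v, ade_p, ade_b, weights=(0.20, 0.58, 0.22)):
    """Weighted sum over vehicle, pedestrian, and bicyclist ADEs."""
    d_v, d_p, d_b = weights
    return d_v * ade_v + d_p * ade_p + d_b * ade_b

# each predicted point is off by exactly 1 m -> ADE = FDE = 1.0
pred = [(0.0, 0.0), (1.0, 0.0)]
gt   = [(0.0, 1.0), (1.0, 1.0)]
```

WSFDE is computed identically, with the per-class FDE values in place of the ADE values.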

6 · Rules of ranking

Result benchmark will be:

| Rank | Method | WSADE | ADEv | ADEp | ADEb | WSFDE | FDEv | FDEp | FDEb | Team Name |
|------|--------|-------|------|------|------|-------|------|------|------|-----------|
| xxx  | xxx    | xxx   | xxx  | xxx  | xxx  | xxx   | xxx  | xxx  | xxx  | xxx       |

Our ranking is determined by the WSADE over all types of objects.

7 · Format of submission file

You only need to submit a single prediction_result.txt file.
- Each line contains frame_id, object_id, object_type, position_x, and position_y, in that order.
- Every six frames are treated as the predicted result for one sequence in the test data. Make sure your results correspond correctly to the test data: the sequences in your result must match those in the test data in number and order, the same objects must have the same object_id, and different frames must have different frame_ids.
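The submission rows above can be emitted with a small helper. A minimal sketch, assuming one predicted (x, y) per future frame; the function name and sample values are ours:

```python
# Sketch: format predicted positions as submission lines
# (frame_id object_id object_type position_x position_y), six frames per sequence.
def format_row(frame_id, object_id, object_type, x, y):
    return f"{frame_id} {object_id} {object_type} {x:.3f} {y:.3f}"

# toy example: one vehicle (type 1, id 8) moving 1 m per frame in x
rows = [format_row(t, 8, 1, 10.0 + t, 20.0) for t in range(1, 7)]
with open("prediction_result.txt", "w") as f:
    f.write("\n".join(rows) + "\n")
```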

8 · Publication

Please cite our paper in your publications if our dataset is used in your research.

TrafficPredict: Trajectory Prediction for Heterogeneous Traffic-Agents [PDF]
Yuexin Ma, Xinge Zhu, Sibo Zhang, Ruigang Yang, Wenping Wang, and Dinesh Manocha.
AAAI (oral), 2019.

9 · Reference

[1] Pellegrini S., Ess A., Schindler K., et al. You'll never walk alone: Modeling social behavior for multi-target tracking. In: IEEE 12th International Conference on Computer Vision (ICCV), 2009, pp. 261-268.

The dataset we released is a desensitized street view and is for academic use only.