We just released the GrandTour Dataset — request access here.

Participation

🏁 Localization Challenge

We have launched the localization challenge on the online EvalAI platform.

(The challenge is currently awaiting approval. You will need to register on EvalAI to access the challenge.)

  • Validation Phase Dates: Start: 30.07.2025, End: 30.07.2026.

  • Test Phase Dates: Start: TBA, End: TBA.

  • Goal: Predict the 3-DoF robot position accurately.

  • Submissions: Up to 3 submissions per day are allowed, all year.

  • Leaderboards: Public (all data) and Private (hidden split) leaderboards. The private leaderboard uses an unspecified subset to prevent overfitting.

  • Showcase: Link your GitHub, Project Page, or arXiv Paper on the leaderboard. Email the organizers with your project information.

Submission Format

A submission is made by uploading a .zip file that contains trajectories of the prism frame (see Sensor Frames, frame 1). The name of the .zip file itself is not significant; however, each result file (.tum or .txt, the extension does not matter) inside the .zip must match a mission Short Name of the submitted split.

For the validation phase (see Mission Splits), a submission that contains results for all missions would look like:

coolest_submission_file_name.zip
    HEAP-1.tum
    ARC-3.tum
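
For reference, here is a minimal Python sketch of how such an archive could be packaged. The file names and mission Short Names are placeholders taken from the example above; this is not an official tool.

    import zipfile
    from pathlib import Path

    # Result files named after the mission Short Names of the submitted split
    # (placeholder names, matching the example layout above).
    result_files = ["HEAP-1.tum", "ARC-3.tum"]

    with zipfile.ZipFile("coolest_submission_file_name.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for name in result_files:
            path = Path(name)
            assert path.exists(), f"missing result file: {name}"
            # Store each file at the archive root so it matches the expected layout.
            zf.write(path, arcname=path.name)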

Trajectory Format

As one of the default formats within the localization community, we use the TUM format. We suggest using the EVO evaluation tool for your pose conversions.

The text files should contain the trajectory of the prism, expressed in your estimation frame, in the following format:

Example Trajectory
# timestamp x y z q_x q_y q_z q_w

1731588412.024509 -13.128140449523926 -4.5507922172546387 -0.84699088335037231 0 0 0 1
1731588412.1095092 -13.128137588500977 -4.5510959625244141 -0.84701794385910034 0 0 0 1
...
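
For illustration, a minimal Python sketch that writes poses to a TUM-format file. The pose values and output file name are placeholders, and you can equally produce the file with the EVO tool.

    # Each pose is (timestamp, x, y, z, q_x, q_y, q_z, q_w), giving the prism pose
    # expressed in your estimation frame (placeholder values shown).
    poses = [
        (1731588412.024509, -13.128140, -4.550792, -0.846991, 0.0, 0.0, 0.0, 1.0),
        (1731588412.109509, -13.128138, -4.551096, -0.847018, 0.0, 0.0, 0.0, 1.0),
    ]

    with open("HEAP-1.tum", "w") as f:
        f.write("# timestamp x y z q_x q_y q_z q_w\n")
        for t, x, y, z, qx, qy, qz, qw in poses:
            f.write(f"{t:.6f} {x} {y} {z} {qx} {qy} {qz} {qw}\n")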

(Coming soon) We will provide example submission instructions in the Grand Tour Dataset repository.

Metrics

The evaluation will consist of quantitative metrics such as ATE (Absolute Trajectory Error), Last error, and RTE (Relative Trajectory Error). Furthermore, the method's complexity and the input modalities will also be taken into account. More details will be available soon.
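
As a rough illustration of the kind of quantity an ATE-style metric measures (the official evaluation protocol, including any trajectory alignment, is not finalized yet), a translational RMSE over time-associated positions could be computed as follows:

    import numpy as np

    def ate_rmse(est_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
        """Translational error as RMSE over time-associated positions.

        est_xyz, gt_xyz: (N, 3) arrays of estimated and ground-truth positions.
        Note: the official evaluation may additionally align the trajectories
        (e.g. with an SE(3)/Umeyama fit) before computing the error.
        """
        errors = np.linalg.norm(est_xyz - gt_xyz, axis=1)  # per-pose Euclidean error
        return float(np.sqrt(np.mean(errors ** 2)))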

Mission Splits

Our mission splits consist of validation and test sets. The validation phase allows you to verify your submission pipeline, while the test phase is used for the official challenge evaluation; no ground truth is available for it. Details about the splits can be found in the Data Overview in the Explore section: