Indoor LiDAR-based SLAM dataset

Evaluation


The indoor LiDAR-based SLAM dataset consists of three scenes captured by multi-beam laser scanners in indoor environments of varying complexity. The original scan frames from the scanners are provided, on which users can test their LiDAR SLAM algorithms.
We provide two ways of evaluation as follows:
(1) Evaluation using downloaded ground truth
The ground truth point cloud data of this dataset was obtained by a high-precision Terrestrial Laser Scanner (Riegl VZ-1000). Users can compare their point cloud results to the ground truth TLS point cloud.
(2) Evaluation by submitting results
More ground truth TLS point clouds are available for performance comparison. To participate in the comparison, users need to submit the trajectory generated by their SLAM algorithms. The evaluation results will be listed on the webpage.
Submission data format. Each line of the submitted trajectory file should contain one pose of the platform, composed of a position and an orientation with respect to the initial position. The detailed format is {frame_id p_x p_y p_z q_x q_y q_z q_w}, where frame_id is the index of the LiDAR frame corresponding to the current pose; p_x, p_y, and p_z are the translation component of the pose; and q_x, q_y, q_z, and q_w are the quaternion representation of its rotation component. The numbers are separated by spaces.
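As an illustration, the following Python sketch writes pose lines in the format above. The helper name format_pose_line and the sample pose values are assumptions for demonstration, not part of the benchmark tooling.

```python
def format_pose_line(frame_id, position, quaternion):
    """Format one pose line: frame_id, translation (p_x p_y p_z),
    then the rotation quaternion (q_x q_y q_z q_w), space-separated."""
    px, py, pz = position
    qx, qy, qz, qw = quaternion
    return (f"{frame_id} {px:.6f} {py:.6f} {pz:.6f} "
            f"{qx:.6f} {qy:.6f} {qz:.6f} {qw:.6f}")

# Hypothetical trajectory: the first frame is the origin with
# identity rotation, since poses are relative to the initial position.
poses = [
    (0, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0)),
    (1, (0.12, 0.01, 0.0), (0.0, 0.0, 0.05, 0.99875)),
]

lines = [format_pose_line(fid, p, q) for fid, p, q in poses]
print("\n".join(lines))
```

Each emitted line then contains exactly eight space-separated fields, matching the required {frame_id p_x p_y p_z q_x q_y q_z q_w} layout.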
Evaluation criterion. Our evaluation first reconstructs the point cloud from the submitted trajectory. Then, a 3 cm voxel filter is applied so that all point clouds have the same resolution. The distance between each point in the point cloud and its nearest neighbor in the reference point cloud is taken as the absolute error of that point. The point cloud generated by a SLAM algorithm uses the local coordinate system of its first frame as the global coordinate system; to make a fair comparison, we manually register the first frame of the SLAM point cloud to the reference point cloud to obtain a transformation matrix T. T is then applied to each evaluated point cloud, aligning it to the reference point cloud. The evaluation table ranks methods by the average of the absolute errors.
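The criterion can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the official evaluation code: the function names are hypothetical, the voxel filter keeps one representative point per 3 cm cell, and a brute-force nearest-neighbor search stands in for the KD-tree one would use on full-size clouds.

```python
import numpy as np

def voxel_filter(points, voxel=0.03):
    """Keep one representative point per voxel (3 cm by default),
    so evaluated and reference clouds share the same resolution."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def mean_absolute_error(slam_pts, ref_pts, T):
    """Apply the manual first-frame registration T (a 4x4 matrix),
    then average each point's distance to its nearest reference point."""
    hom = np.hstack([slam_pts, np.ones((len(slam_pts), 1))])
    aligned = (hom @ T.T)[:, :3]
    # Brute-force nearest neighbor: fine for a small demo cloud;
    # a KD-tree would replace this at full scale.
    dists = np.linalg.norm(aligned[:, None, :] - ref_pts[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```

With T as the identity and identical clouds, the error is zero; any drift in the submitted trajectory shows up directly as a larger average nearest-neighbor distance.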


Data Description


mimap_in_slam_00 (1.43 GB)
  Data description: A two-floor building scene. The scans, captured with a Velodyne Ultra Puck, include individual rooms, non-enclosed loop corridors, and stairs.
  Ground truth: Point cloud scanned by a Riegl VZ-1000, covering the corridors and stairs.

mimap_in_slam_01 (0.93 GB)
  Data description: A five-floor building scene. The scans, captured with a Velodyne Ultra Puck, include non-enclosed loop corridors and stairs.
  Ground truth: No ground truth data is provided. Please submit your results for evaluation.

mimap_in_slam_02 (1.96 GB)
  Data description: A five-floor building scene. The scans, captured with a Velodyne HDL-32E, include enclosed loop corridors and stairs.
  Ground truth: No ground truth data is provided. Please submit your results for evaluation.

Download



mimap_in_slam_00.zip (1.43 GB)    [Google]  [Baidu]

mimap_in_slam_01.zip (0.93 GB)    [Google]  [Baidu]

mimap_in_slam_02.zip (2.96 GB)    [Google]  [Baidu, fetch code: got0]


Copyright


The MiMAP benchmark is published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License (https://creativecommons.org/licenses/by-nc-sa/3.0/). You must attribute the work in the manner specified by the authors; you may not use this work for commercial purposes; and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license. Contact us if you are interested in commercial usage.


Citation


If you use the MiMAP benchmark, please cite both of the following papers:

  • C. Wen, Y. Dai, Y. Xia, Y. Lian, C. Wang, J. Li, Towards Efficient 3-D Colored Mapping in GPS/GNSS-denied Environments, IEEE Geoscience and Remote Sensing Letters, 17, 147-151, 2020.

  • C. Wang, S. Hou, C. Wen, Z. Gong, Q. Li, X. Sun, J. Li, Semantic Line Framework-based Indoor Building Modeling using Backpacked Laser Scanning Point Cloud, ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 143, pp. 150-166, 2018.