
A geometric verification benchmark targeting long-term loop closure detection.


GV-Bench: Benchmarking Local Feature Matching for Geometric Verification of Long-term Loop Closure Detection


Abstract

Visual loop closure detection is an important module in visual simultaneous localization and mapping (SLAM): it associates the current camera observation with previously visited places. Loop closures correct drift in trajectory estimation and help build a globally consistent map. However, a false loop closure can be fatal, so verification is required as an additional step that ensures robustness by rejecting false-positive loops. Geometric verification is a well-established solution that leverages the spatial cues provided by local feature matching to identify true positives. Existing evaluations of feature matching methods focus on homography and pose estimation for long-term visual localization and offer no reference for geometric verification. To fill this gap, this paper proposes a unified benchmark targeting geometric verification of loop closure detection under long-term conditional variations. Furthermore, we evaluate six representative local feature matching methods (handcrafted and learning-based) on the benchmark, with an in-depth analysis of their limitations and future directions.
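For intuition, the snippet below is a minimal sketch of what geometric verification typically looks like: a candidate loop is accepted only if the matched keypoints support a consistent epipolar geometry. The function name, the RANSAC parameters, and the inlier threshold are illustrative assumptions, not the exact pipeline used by GV-Bench.

import cv2
import numpy as np

def verify_loop(kpts_query: np.ndarray, kpts_ref: np.ndarray,
                min_inliers: int = 30) -> bool:
    """Accept a candidate loop closure if the matched keypoints
    (two Nx2 arrays of corresponding pixel coordinates) support a
    fundamental matrix with enough RANSAC inliers.

    `min_inliers` and the RANSAC settings are illustrative choices,
    not values prescribed by the benchmark.
    """
    if len(kpts_query) < 8:  # the 8-point algorithm needs at least 8 matches
        return False
    F, inlier_mask = cv2.findFundamentalMat(
        kpts_query, kpts_ref, cv2.FM_RANSAC,
        ransacReprojThreshold=3.0, confidence=0.999)
    if F is None:
        return False
    return int(inlier_mask.sum()) >= min_inliers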

Run-time Analysis

We measure the runtime of the six methods listed in Table I on an NVIDIA GeForce RTX 3090 GPU and an Intel i7-13700K CPU over 10K runs. The results are shown in the figure below as inference time versus performance, i.e., max recall at 100% precision. The runtime of all six local feature matching methods is at the millisecond level on a modern GPU, so the choice can be made based on the trade-off between time efficiency and performance.

(Figure: inference time versus max recall @100 precision for the six evaluated methods.)
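As a rough guide to how such timings can be reproduced, here is a minimal sketch of averaging GPU inference time with PyTorch; `match_pair` stands for any of the benchmarked matchers and is an assumed placeholder, not an interface provided by this repository.

import time
import torch

def average_inference_ms(match_pair, image0, image1, runs=10_000, warmup=100):
    """Return the mean inference time in milliseconds of a matching callable.

    `match_pair` is a placeholder for a matcher (e.g. a SuperPoint+LightGlue
    wrapper); warm-up and synchronization follow common CUDA timing practice.
    """
    with torch.no_grad():
        for _ in range(warmup):       # warm up CUDA kernels and caches
            match_pair(image0, image1)
        torch.cuda.synchronize()      # flush queued GPU work before timing
        start = time.perf_counter()
        for _ in range(runs):
            match_pair(image0, image1)
        torch.cuda.synchronize()      # wait for the last kernel to finish
    return (time.perf_counter() - start) / runs * 1e3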

News

  • 🚀 Releasing the visualization of image matching results. (Google Drive)

  • 🚀 🚀 Releasing the benchmark (easy)! Check out the image pairs from dataset/release/pairs and the images from Google Drive.

  • ⭐ Benchmark usage is coming soon!

Release Timeline

  • Appendix for visualization
    • Visualization of image matches
    • Visualization of inliers' distribution (SP.+SG.)

  • Release benchmark sequences.
    • Benchmark-easy (Day, Night, Weather, Season)
    • Benchmark-hard (for severe viewpoint and conditional variations)
  • Release local feature extraction and matching implementation
  • Release evaluation tools
  • Release data analysis tools
  • Expansion to other verification methods
    • Doppelgangers
    • Semantics
    • Keypoint topology
  • Release sequence version of benchmark (TODO)

Installation

We use part of the hloc (Hierarchical-Localization) toolbox for feature extraction and matching.

git clone https://github.com/jarvisyjw/GV-Bench.git && cd GV-Bench
git submodule init
git submodule update
cd third_party/Hierarchical-Localization
git checkout gvbench # this is a customized fork
python -m pip install -e .
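A quick way to confirm the editable install succeeded is to list the feature and matcher configurations that hloc exposes; the configuration names shown in the comments are examples from the upstream hloc project and may differ slightly in the gvbench fork.

# Sanity check: the hloc package should be importable after `pip install -e .`
from hloc import extract_features, match_features

# hloc keeps its presets in `confs` dictionaries.
print(sorted(extract_features.confs))  # e.g. 'superpoint_aachen', 'disk', 'sift', ...
print(sorted(match_features.confs))    # e.g. 'superglue', 'NN-ratio', ...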

Usage

  • Download the dataset sequences from Google Drive and put them under the dataset/ folder.

  • Extract and match features using hloc.

    • Extract features: SIFT, SuperPoint, and DISK

      cd third_party/Hierarchical-Localization
      python gvbench_utils.py --extraction --image_path /path/to/images/ --output_path /path/to/output/
      python gvbench_utils.py --extraction --image_path dataset/images/ --output_path dataset/output/features/
    • Match features: SIFT-NN, SIFT-LightGlue (Not yet implemented), SuperPoint-NN, DISK-NN, SuperPoint-SuperGlue, SuperPoint-LightGlue, DISK-LightGlue, LoFTR

      # all methods except LoFTR
      cd third_party/Hierarchical-Localization
      python gvbench_utils.py --matching --pairs /path/to/pairs \
                  --features /path/to/feature/root/folder \
                  --output_path /path/to/matches/output
      
      # LoFTR is handled differently from the methods above
      cd third_party/Hierarchical-Localization
      python gvbench_utils.py --matching_loftr --pairs datasets/GV-Bench/release/gt/day.txt \
              --features datasets/GV-Bench/release/features \
              --output_path datasets/GV-Bench/release/matches
    • We also provide easy-to-run scripts

      cd third_party/Hierarchical-Localization
      bash ./gvbench_utils
    • Image pairs files

      • We provide pairs (GT) files for matching under the dataset/gt folder.
      • Make sure to use the forked hloc for feature extraction and matching: https://github.com/jarvisyjw/Hierarchical-Localization.git (branch gvbench)
  • Evaluation

    • We provide out-of-the-box scripts
    cd GV-Bench/scripts
    bash ./evaluation <sequence>  # sequence name: day, night, season, weather
    • Exp Results (IROS-Version):

      • The log files are automatically generated as {exp}.log and {exp}.npy; the .npy file stores the dictionary saved below (see the loading sketch after this list).
      np.save(str(export_dir), {'prob': num_matches_norm,
                                'qImages': qImages,
                                'rImages': rImages,
                                'gt': labels,
                                'inliers': inliers_list,
                                'all_matches': pointMaps,
                                'precision': precision,
                                'recall': recall,
                                'TH': TH,
                                'average_precision': average_precision,
                                'Max Recall': r_recall})
      • Exp Results (Easy):

      Max Recall @100 Precision (MR%)

      Method    Day     Night   Weather  Season
      SIFT+NN   35.974   8.664  45.726   28.423
      SP.+NN    53.523  15.055  55.025   47.835
      SP.+SG.   48.267  30.609  66.514   74.633
      DISK+NN   58.029  24.484  27.842   28.629
      DISK+LG.  60.989  25.040  36.886   72.419
      LoFTR     45.336  18.206  12.096   18.206

      Average Precision (AP%)

      Method    Day     Night   Weather  Season
      SIFT+NN   98.094  62.106  99.727   98.869
      SP.+NN    99.315  86.819  99.804   99.707
      SP.+SG.   99.634  95.969  99.911   99.907
      DISK+NN   97.367  51.723  99.725   99.037
      DISK+LG.  99.656  81.447  99.875   99.864
      LoFTR     99.500  97.881  99.874   97.881

      Max Recall @100 Precision (MR%)

      Metric  Day     Night   Weather  Season
      MR      35.465   1.991  30.011   22.435
      AP      97.056  60.759  99.574   99.134
  • Visualization

    • Demos are presented in plot_data.ipynb
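The {exp}.npy log mentioned under Evaluation is a pickled dictionary (see the np.save call above), so it can be reloaded and inspected directly. The sketch below assumes the key names shown in that snippet, that precision and recall are stored as fractions, and a hypothetical file name for the day sequence.

import numpy as np

# Load the evaluation log written by np.save (a pickled dict).
results = np.load("day.npy", allow_pickle=True).item()

precision = np.asarray(results["precision"])
recall = np.asarray(results["recall"])

# Max recall at 100% precision: the largest recall over thresholds where
# precision reaches 1.0 (the evaluation also stores it under 'Max Recall').
at_full_precision = precision >= 1.0
max_recall = recall[at_full_precision].max() if at_full_precision.any() else 0.0

print("max recall @100 precision:", max_recall)
print("average precision:", results["average_precision"])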

Acknowledgement

  • This work builds upon hloc; thanks for their amazing work.
  • Contact: jingwen.yu@connect.ust.hk
