- 🎉 New! Training code is now available 🚀
- 🎉 New! The test code and pretrained model have been released. 🚀
This repository contains the official implementation of the paper:
LiftFeat: 3D Geometry-Aware Local Feature Matching, to be presented at ICRA 2025.
Overview of LiftFeat's architecture
LiftFeat is a lightweight and robust local feature matching network designed to handle challenging scenarios such as drastic lighting changes, low-texture regions, and repetitive patterns. By incorporating 3D geometric cues through surface normals predicted from monocular depth, LiftFeat enhances the discriminative power of 2D descriptors. Our proposed 3D geometry-aware feature lifting module effectively fuses these cues, leading to significant improvements in tasks like relative pose estimation, homography estimation, and visual localization.
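As a rough illustration of the lifting idea (a minimal sketch, not the paper's actual architecture; all module names, dimensions, and layer choices below are our own assumptions), a fusion head of this kind can be written in a few lines of PyTorch:

```python
# Hypothetical sketch of geometry-aware feature lifting: concatenate a 3-channel
# surface-normal map with D-channel 2D descriptors and mix them with a 1x1-conv MLP.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometryAwareLifting(nn.Module):
    def __init__(self, desc_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(desc_dim + 3, desc_dim, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(desc_dim, desc_dim, kernel_size=1),
        )

    def forward(self, desc: torch.Tensor, normals: torch.Tensor) -> torch.Tensor:
        # desc: (B, D, H, W) dense 2D descriptors; normals: (B, 3, H, W) unit normals
        fused = self.mlp(torch.cat([desc, normals], dim=1))
        return F.normalize(fused, dim=1)  # keep descriptors unit-length

desc = torch.randn(1, 64, 60, 80)
normals = F.normalize(torch.randn(1, 3, 60, 80), dim=1)
lifted = GeometryAwareLifting()(desc, normals)  # (1, 64, 60, 80)
```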
If you use conda as your virtual environment manager, you can create a new environment with:
git clone https://github.com/lyp-deeplearning/LiftFeat.git
cd LiftFeat
conda create -n LiftFeat python=3.8
conda activate LiftFeat
pip install -r requirements.txt
To run LiftFeat on a pair of images, simply run:
python demo.py --img1=<reference image> --img2=<query image>
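Under the hood, matching reduces to comparing descriptors between the two images. As a minimal sketch (array shapes and the function name are illustrative, not LiftFeat's actual API), mutual-nearest-neighbor matching over L2-normalized descriptors looks like this:

```python
# Mutual-nearest-neighbor matching between two L2-normalized descriptor sets.
import numpy as np

def mnn_match(desc1: np.ndarray, desc2: np.ndarray) -> np.ndarray:
    """Return (K, 2) index pairs that are each other's nearest neighbor."""
    sim = desc1 @ desc2.T          # cosine similarity, shape (N1, N2)
    nn12 = sim.argmax(axis=1)      # best match in image 2 for each descriptor in image 1
    nn21 = sim.argmax(axis=0)      # best match in image 1 for each descriptor in image 2
    ids1 = np.arange(desc1.shape[0])
    mutual = nn21[nn12] == ids1    # keep only mutual agreements
    return np.stack([ids1[mutual], nn12[mutual]], axis=1)

d1 = np.random.randn(500, 64); d1 /= np.linalg.norm(d1, axis=1, keepdims=True)
d2 = np.random.randn(480, 64); d2 /= np.linalg.norm(d2, axis=1, keepdims=True)
matches = mnn_match(d1, d2)        # (K, 2) array of index pairs
```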
We provide a simple real-time demo that matches a template image to each frame of a video stream using our LiftFeat method.
You can run the demo with the following command:
python tools/demo_match_video.py --img your_template.png --video your.mp4
We also provide a sample template image and video with lighting variation for demonstration purposes.
We have added a new application to evaluate LiftFeat on visual odometry (VO) tasks.
We use sequences from the KITTI dataset to demonstrate frame-to-frame motion estimation. Running the script below will generate the estimated camera trajectory and the error curve:
python tools/demo_vo.py --path1 /path/to/gray/images --path2 /path/to/color/images --id 03
We also provide a sample KITTI sequence for quick testing.
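For reference, frame-to-frame motion from matched keypoints is typically recovered through the essential matrix. A minimal OpenCV sketch (the intrinsics below are example values, not the calibration of any particular sequence):

```python
# Recover relative camera motion between two frames from matched keypoints.
import numpy as np
import cv2

K = np.array([[718.856, 0.0, 607.1928],   # example pinhole intrinsics
              [0.0, 718.856, 185.2157],
              [0.0, 0.0, 1.0]])

def relative_pose(pts1: np.ndarray, pts2: np.ndarray):
    """pts1, pts2: (N, 2) float arrays of matched pixel coordinates."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # rotation and unit-norm translation (monocular scale is unknown)
```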
To train LiftFeat as described in the paper, you will need the MegaDepth dataset and the COCO_20k subset of COCO2017, as described in the paper XFeat: Accelerated Features for Lightweight Image Matching. You can obtain the full COCO2017 train data at https://cocodataset.org/, but we make a 20k-image subset of COCO available for convenience, selected according to image resolution. Please check the COCO terms of use before using the data.
To reproduce the training setup from the paper, please follow these steps:
- Download COCO_20k containing a subset of COCO2017;
- Download the MegaDepth dataset. You can follow the LoFTR instructions; we use the same data layout as LoFTR. Then put the MegaDepth indices inside the MegaDepth root folder following the structure below:
{megadepth_root_path}/train_data/megadepth_indices #indices
{megadepth_root_path}/MegaDepth_v1 #images & depth maps & poses
- Finally, you can launch training:
python train.py --megadepth_root_path <path_to>/MegaDepth --synthetic_root_path <path_to>/coco_20k --ckpt_save_path /path/to/ckpts
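Before launching a long run, it can help to verify that the folders match the layout above. A small sanity check (our own helper, not part of the training scripts):

```python
# Check that the MegaDepth root contains the two expected subfolders.
import os

megadepth_root = "/path/to/MegaDepth"
for sub in ("train_data/megadepth_indices", "MegaDepth_v1"):
    path = os.path.join(megadepth_root, sub)
    print(f"{path}: {'OK' if os.path.isdir(path) else 'MISSING'}")
```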
All evaluation code is in evaluation/. You can download the HPatches dataset following D2-Net and the MegaDepth test dataset following LoFTR.
Download and process HPatches
cd /data
# Download the dataset
wget https://huggingface.co/datasets/vbalnt/hpatches/resolve/main/hpatches-sequences-release.zip
# Extract the dataset
unzip hpatches-sequences-release.zip
# Remove the high-resolution sequences
cd hpatches-sequences-release
rm -rf i_contruction i_crownnight i_dc i_pencils i_whitebuilding v_artisans v_astronautis v_talent
cd <LiftFeat>/data
ln -s /data/hpatches-sequences-release ./HPatch
Download and process MegaDepth1500
We provide a download link to megadepth_test_1500.
tar xvf <path to megadepth_test_1500.tar>
cd <LiftFeat>/data
ln -s <path to megadepth_test_1500> ./megadepth_test_1500
Homography Estimation
python evaluation/HPatch_evaluation.py
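For context, the standard HPatches homography metric estimates a homography from the matches with RANSAC and measures the mean corner error against the ground truth. A minimal sketch (function and argument names are our own, not those of the evaluation script):

```python
# Mean corner error of a RANSAC-estimated homography vs. the ground-truth one.
import numpy as np
import cv2

def corner_error(pts1, pts2, H_gt, h, w):
    """pts1, pts2: (N, 2) matches; H_gt: 3x3 ground-truth homography; (h, w): image size."""
    H_est, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    if H_est is None:
        return np.inf  # estimation failed
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32).reshape(-1, 1, 2)
    warped_gt = cv2.perspectiveTransform(corners, H_gt)
    warped_est = cv2.perspectiveTransform(corners, H_est)
    return float(np.linalg.norm(warped_gt - warped_est, axis=2).mean())
```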
Relative Pose Estimation
For the MegaDepth1500 dataset:
python evaluation/MegaDepth1500_evaluation.py
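Relative pose accuracy is usually reported as the AUC of the maximum angular error of rotation and translation at 5/10/20 degrees. A minimal sketch of that computation (our own helper names, following the widely used SuperGlue-style AUC):

```python
# Angular pose error and its AUC over error thresholds.
import numpy as np

def pose_error_deg(R_est, t_est, R_gt, t_gt) -> float:
    cos_r = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    err_R = np.degrees(np.arccos(np.clip(cos_r, -1.0, 1.0)))
    cos_t = np.dot(t_est.ravel(), t_gt.ravel()) / (np.linalg.norm(t_est) * np.linalg.norm(t_gt))
    err_t = np.degrees(np.arccos(np.clip(abs(cos_t), -1.0, 1.0)))  # translation is up to sign
    return max(err_R, err_t)

def pose_auc(errors, thresholds=(5, 10, 20)):
    errors = np.sort(np.asarray(errors, dtype=float))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    errors = np.concatenate(([0.0], errors))
    recall = np.concatenate(([0.0], recall))
    aucs = []
    for t in thresholds:
        last = np.searchsorted(errors, t)
        r = np.concatenate((recall[:last], [recall[last - 1]]))
        e = np.concatenate((errors[:last], [t]))
        aucs.append(np.trapz(r, x=e) / t)
    return aucs
```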
If you find this code useful for your research, please cite the paper:
@misc{liu2025liftfeat3dgeometryawarelocal,
title={LiftFeat: 3D Geometry-Aware Local Feature Matching},
author={Yepeng Liu and Wenpeng Lai and Zhou Zhao and Yuxuan Xiong and Jinchi Zhu and Jun Cheng and Yongchao Xu},
year={2025},
eprint={2505.03422},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2505.03422},
}
We would like to thank the authors of the open-source repositories whose valuable contributions have inspired or supported this work.
We deeply appreciate the efforts of the research community in releasing high-quality codebases.