This repo provides code for estimating the inlier threshold for two-view relative pose estimation using RANSAC.
Assuming you have uv installed:
git clone git@github.com:Parskatt/simfitpp.git
cd simfitpp
uv sync
Below is an example of the API to estimate the threshold for a pair of images.
import numpy as np
from simfitpp import SIMFITPP, PoseLibFundamental

# Dummy correspondences: N matched points in D = 2 dimensions (pixel coordinates).
N = 1000
D = 2
x_A = 100 * np.random.randn(N, D)
x_B = 100 * np.random.randn(N, D)
th_guess = 1.0  # initial guess for the inlier threshold (pixels)

# Geometric model estimator: fundamental matrix estimation via PoseLib.
geom_estimator = PoseLibFundamental(
    min_iterations=500,
    max_iterations=500,
    success_prob=0.9999,
)
# Threshold estimator: iteratively refines the threshold within [th_min, th_max].
th_estimator = SIMFITPP(
    geom_estimator,
    alpha=0.99,
    max_iter=4,
    ftol=0.01,
    train_fraction=0.5,
    th_min=0.25,
    th_max=8.0,
)
th_est, is_success = th_estimator.estimate_threshold(x_A, x_B, th_guess)
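Once estimated, the threshold is typically plugged back into RANSAC as the inlier cutoff. Below is a minimal, self-contained sketch of that final step; `classify_inliers` and the synthetic residuals are illustrative stand-ins, not part of the simfitpp API (which computes proper geometric residuals such as Sampson error internally).

```python
import numpy as np

def classify_inliers(residuals: np.ndarray, th: float) -> np.ndarray:
    """Mark correspondences whose residual falls below the threshold."""
    return residuals < th

# Synthetic residuals: mostly small (inlier-like noise) plus a few gross outliers.
rng = np.random.default_rng(0)
residuals = np.concatenate([
    np.abs(rng.normal(0.0, 0.5, size=90)),  # inlier-like noise
    rng.uniform(10.0, 50.0, size=10),       # gross outliers
])
mask = classify_inliers(residuals, th=2.0)
print(mask.sum())  # number of correspondences kept as inliers
```

With a well-chosen threshold, the mask separates the noisy-but-correct matches from the gross outliers, which is exactly what the estimated `th_est` is for.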
The code in this codebase is a reproduction of the internal code used for the paper, so there might be minor discrepancies. I've checked that it approximately reproduces the SuperPoint + SuperGlue results on ScanNet-1500. Note that due to randomness, the results may be slightly higher or lower than the paper results. To reproduce, download the data and run the test:
bash scripts/download_data.sh
python tests/test_simfitpp.py
This should print out three lists of AUC values. The expected values should be within roughly ±0.5 of:
[np.float64(0.14488472274986597), np.float64(0.28170116410278817), np.float64(0.4212129576093516)]
[np.float64(0.22573529158871225), np.float64(0.3982318903526437), np.float64(0.5513910056317942)]
[np.float64(0.13375165543509254), np.float64(0.2614397234088632), np.float64(0.39866589161181637)]
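For reference, pose AUC in this benchmark setting is conventionally the area under the recall curve of the angular pose error up to each threshold (e.g. 5°, 10°, 20°), normalized by the threshold. Below is a hedged sketch of that standard computation; the exact implementation in this repo may differ in details.

```python
import numpy as np

def pose_auc(errors: np.ndarray, thresholds=(5.0, 10.0, 20.0)) -> list:
    """Trapezoidal AUC of the error-recall curve, normalized per threshold."""
    errors = np.sort(np.asarray(errors, dtype=np.float64))
    recall = np.arange(1, len(errors) + 1) / len(errors)
    aucs = []
    for th in thresholds:
        last = np.searchsorted(errors, th)
        # Extend the curve to the threshold with the recall attained there.
        e = np.concatenate([[0.0], errors[:last], [th]])
        r = np.concatenate([[0.0], recall[:last],
                            [recall[last - 1] if last > 0 else 0.0]])
        # Trapezoidal integration, normalized so a perfect method scores 1.
        aucs.append(float(np.sum((r[1:] + r[:-1]) * np.diff(e)) * 0.5 / th))
    return aucs

errors = np.array([1.0, 3.0, 7.0, 15.0, 30.0])  # max angular errors (degrees)
print(pose_auc(errors))
```

A method whose errors are all zero scores 1.0 at every threshold, and larger errors pull the AUC down more at the tighter thresholds.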
@InProceedings{edstedt2025simfitpp,
author = {Edstedt, Johan},
title = {{Less Biased Noise Scale Estimation for Threshold-Robust RANSAC}},
booktitle = {{Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops}},
month = {June},
year = {2025}
}