Sparfels: Fast Reconstruction from Sparse Unposed Imagery

INRIA

Fast training of Sparfels. Examples of reconstructed meshes obtained within 3 minutes from sparse pose-free images with our method, using 6 (left) and 3 (right) input images from scenes of the MVImgNet and BMVS datasets, respectively.

Abstract

We present a method for sparse-view reconstruction with surface element splatting that runs within 3 minutes on a consumer-grade GPU. While a few methods address radiance field learning from noisy or unposed sparse cameras, shape recovery remains relatively underexplored in this setting. Several radiance and shape learning test-time optimization methods address the sparse posed setting by learning data priors or by combining external monocular geometry priors. In contrast, we propose an efficient and simple pipeline harnessing a single recent 3D foundation model. We leverage its various task heads, notably point maps and camera initializations, to instantiate a bundle-adjusting 2D Gaussian Splatting (2DGS) model, and its image correspondences to guide camera optimization during 2DGS training. Key to our contribution is a novel formulation of splatted color variance along rays, which can be computed efficiently. Minimizing this moment during training leads to more accurate shape reconstructions. We demonstrate state-of-the-art performance in the sparse uncalibrated setting on reconstruction and novel view synthesis benchmarks based on established multi-view datasets.
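To make the per-ray color moment concrete, the sketch below computes, for a single ray, the usual alpha-composited color (first moment) alongside the second moment of per-splat colors under the same compositing weights; the variance follows as the difference between the two. This is only a minimal sketch of the quantity described in the abstract, not the authors' implementation: the function name and tensor layout (`alphas`, `colors`) are hypothetical, and we normalize the weights here for the non-opaque case as an assumption. Intuitively, reducing this variance encourages the splats along a ray to agree in color, which the paper ties to more accurate shape.

```python
import torch

def splatted_color_moments(alphas: torch.Tensor, colors: torch.Tensor):
    """Composite per-splat colors along one ray and return the mean color
    and its variance under the compositing weights.

    alphas: (N,) opacities of the N splats hit by the ray, ordered front to back.
    colors: (N, 3) RGB colors of those splats.
    """
    # Transmittance in front of each splat: T_i = prod_{j < i} (1 - alpha_j).
    trans = torch.cumprod(
        torch.cat([alphas.new_ones(1), 1.0 - alphas[:-1]]), dim=0
    )
    # Standard compositing weights w_i = alpha_i * T_i (sum to at most 1).
    w = alphas * trans
    # Normalize so the weights form a distribution along the ray
    # (an assumption for rays that are not fully opaque).
    w = w / w.sum().clamp_min(1e-8)
    # First moment: the splatted ray color.
    mean = (w[:, None] * colors).sum(dim=0)
    # Second moment of per-splat colors under the same weights.
    second = (w[:, None] * colors**2).sum(dim=0)
    # Per-channel variance along the ray; clamped for numerical safety.
    var = (second - mean**2).clamp_min(0.0)
    return mean, var
```

In practice one would evaluate this over all rendered rays and add the mean variance as a loss term to the 2DGS training objective; the batched tile-based version is what makes it efficient, as the abstract notes.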

Comparison Results

Visual comparisons in the sparse reconstruction setting of the DTU dataset.

Visual comparison of surface reconstruction results on the MVImgNet (first 5) and MipNeRF360 (last 2) datasets.

Visual comparison of surface reconstruction results on the BlendedMVS dataset.

BibTeX

@article{jena2025sparfels,
  title={Sparfels: Fast Reconstruction from Sparse Unposed Imagery},
  author={Jena, Shubhendu and Ouasfi, Amine and Younes, Mae and Boukhayma, Adnane},
  journal={arXiv preprint arXiv:},
  year={2025}
}