
Neural Implicit Surface Reconstruction from Noisy Camera Observations
Sarthak Gupta,1 Patrik Huber2
1Indian Institute of Technology Roorkee, Roorkee, Uttarakhand, India - 247667
2University of York, Deramore Lane, Heslington, York, YO10 5GH, United Kingdom
mrsarthakgupta@gmail.com, patrik.huber@york.ac.uk
Abstract
Representing 3D objects and scenes with neural radiance fields has become increasingly popular in recent years. Recently, surface-based representations have been proposed that allow reconstructing 3D objects from simple photographs. However, most current techniques require accurate camera calibration, i.e. camera parameters corresponding to each image, which is often difficult to obtain in real-life situations. To this end, we propose a method for learning 3D surfaces from noisy camera parameters. We show that camera parameters can be learned jointly with the surface representation, and demonstrate good-quality 3D surface reconstruction even with noisy camera observations.
Introduction
The idea of representing 3D objects and scenes with neural networks, instead of traditional mesh-like representations, has gained significant traction recently. (Mildenhall et al. 2020) proposed an approach called NeRF, Neural Radiance Fields, where a neural network is used together with a volumetric representation to learn a scene from a collection of calibrated multi-view 2D images. However, a volumetric representation is not the best choice in many cases; for example, many objects, like faces, are better represented using surfaces. This was tackled in a follow-up work by (Wang et al. 2021a), called NeuS, which proposed to use neural implicit surfaces together with volume rendering for multi-view reconstruction. More recently, (Wang et al. 2021b) addressed another shortcoming of classical NeRFs by extending the NeRF method to work on image data for which camera calibration is not available. This is achieved by jointly estimating the scene representation and optimising the camera parameters, and the authors have shown promising results for frontal-facing scenes.
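
To make this concrete, the following is a minimal PyTorch sketch, written by us for illustration, of the joint-optimisation pattern behind such methods: a toy scene network and per-image learnable pose corrections are updated together under a photometric loss. The network, sampled points, and targets are stand-ins, and only the translation part of the correction is applied; a real pipeline would correct the full pose and re-cast camera rays.

import torch
import torch.nn as nn

torch.manual_seed(0)

class TinySceneMLP(nn.Module):
    """Toy stand-in for a NeRF/NeuS-style scene network: 3D point -> colour."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3))
    def forward(self, x):
        return torch.sigmoid(self.net(x))

n_images = 10
scene = TinySceneMLP()
# One learnable 6-DoF correction per image (3 rotation, 3 translation),
# initialised at zero, i.e. initially trusting the noisy cameras.
pose_deltas = nn.Parameter(torch.zeros(n_images, 6))

optimizer = torch.optim.Adam([
    {"params": scene.parameters(), "lr": 1e-3},
    {"params": [pose_deltas], "lr": 1e-4},  # smaller step for the poses
])

# Toy data: per image, 3D sample points and target colours.
points = torch.randn(n_images, 128, 3)
targets = torch.rand(n_images, 128, 3)

for step in range(200):
    i = step % n_images
    # Apply the translation part of this image's learnable correction.
    corrected = points[i] + pose_deltas[i, 3:]
    pred = scene(corrected)
    loss = ((pred - targets[i]) ** 2).mean()  # photometric L2 loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()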
In this work, we propose to marry the benefits of each of these approaches: we propose a method to learn a neural implicit surface-based representation of objects from noisy camera observations. We show that the classical NeuS method fails to learn an object completely if the camera parameters are not precise, whereas our approach succeeds. Furthermore, what are often taken as ground-truth camera parameters themselves come from multi-view reconstruction software like COLMAP (Schönberger and Frahm 2016), and thus likely contain estimation errors. By making camera parameters learnable, our approach gains the ability to correct for these errors, and to potentially achieve better quality than methods that take camera parameters as given truth.

Figure 1: (Left): Reconstruction using (Wang et al. 2021a) with ground-truth camera parameters. (Centre): Reconstruction using (Wang et al. 2021a) with noisy camera parameters, where the approach completely fails. (Right): Reconstruction using the proposed approach with noisy camera parameters. For each of the two objects, the lower image shows the actual image of the object, while the upper image shows the rendered image of the reconstructed surface.
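
As an aside on the experimental setting: noisy camera observations like those in Figure 1 can be simulated by perturbing ground-truth poses. The following NumPy sketch shows one plausible way to do this; the function and its noise scales are our illustrative assumptions, not the paper's exact protocol.

import numpy as np

def perturb_pose(R, t, rot_deg=5.0, trans_sigma=0.01, rng=None):
    """Perturb a camera pose (R, t): rotate about a random axis by up to
    rot_deg degrees and add Gaussian noise to the translation."""
    rng = np.random.default_rng() if rng is None else rng
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(-rot_deg, rot_deg))
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])  # cross-product matrix of the axis
    dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)  # Rodrigues
    return dR @ R, t + rng.normal(scale=trans_sigma, size=3)

# Example: perturb an identity pose.
R_noisy, t_noisy = perturb_pose(np.eye(3), np.zeros(3))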
Methodology
We draw inspiration from (Wang et al. 2021b), where the authors make the camera parameters of a NeRF learnable, allowing them to converge to values that result in desirable 3D scene reconstructions. While (Wang et al. 2021b) works with unknown camera parameters only for forward-facing input images with rotational and translational perturbations of up to ±20°, our approach works successfully on images from 360° view angles. The latter results in a problem that is much harder to solve and is prone to local optima that do not produce good results when trying to learn the camera parameters.
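
One common way to keep a learnable rotation valid throughout such an optimisation, shown here purely for illustration and not necessarily matching the exact parameterisation used in our method, is a differentiable axis-angle (Rodrigues) mapping:

import torch

def axis_angle_to_matrix(w: torch.Tensor) -> torch.Tensor:
    """Map an axis-angle vector w (shape (3,)) to a 3x3 rotation matrix,
    differentiably, so gradients flow back to w."""
    theta = torch.sqrt((w * w).sum() + 1e-12)  # rotation angle; safe at w = 0
    k = w / theta                              # unit rotation axis
    zero = torch.zeros((), dtype=w.dtype, device=w.device)
    K = torch.stack([                          # skew-symmetric matrix of k
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])
    I = torch.eye(3, dtype=w.dtype, device=w.device)
    return I + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

# The resulting matrix stays on SO(3) for any value w takes during training.
w = torch.nn.Parameter(0.01 * torch.randn(3))
R = axis_angle_to_matrix(w)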