
Boosting Point Clouds Rendering via Radiance Mapping
Xiaoyang Huang1*, Yi Zhang1*, Bingbing Ni1†, Teng Li2, Kai Chen3, Wenjun Zhang1
1Shanghai Jiao Tong University, Shanghai 200240, China,
2Anhui University, 3Shanghai AI Lab
{huangxiaoyang, yizhangphd, nibingbing}@sjtu.edu.cn
Abstract
Recent years have witnessed rapid development in NeRF-
based image rendering due to its high quality. However, point
clouds rendering is somewhat less explored. Compared to
NeRF-based rendering, which suffers from dense spatial sam-
pling, point clouds rendering is naturally less computation-
intensive, which enables its deployment on mobile computing
devices. In this work, we focus on boosting the image qual-
ity of point clouds rendering with a compact model design.
We first analyze the adaptation of the volume rendering for-
mulation to point clouds. Based on this analysis, we simplify
the NeRF representation to a spatial mapping function which
requires only a single evaluation per pixel. Further, motivated
by ray marching, we rectify the noisy raw point clouds to
the estimated intersections between rays and surfaces as
queried coordinates, which avoids spatial frequency col-
lapse and neighbor point disturbance. Composed of rasteriza-
tion, spatial mapping and refinement stages, our method
achieves state-of-the-art performance on point clouds ren-
dering, outperforming prior works by notable margins with
a smaller model size. We obtain a PSNR of 31.74 on NeRF-
Synthetic, 25.88 on ScanNet and 30.81 on DTU. Code and
data are publicly available1.
Introduction
The rising trend of AR/VR applications calls for better im-
age quality and higher computation efficiency in render-
ing technology. Recent works mainly focus on NeRF-based
rendering (Mildenhall et al.) due to its photo-realistic effect.
Nevertheless, NeRF-based rendering suffers from heavy
computation cost, since its representation assumes no ex-
plicit geometry is known and requires burdensome spatial
sampling. This drawback severely hampers its application
on mobile computing devices, such as smartphones or AR
headsets. On the other hand, point clouds (Huang et al.),
which have explicit geometry, are easy to obtain as depth
sensors become prevalent and MVS algorithms (Yao et al.;
Wang et al.) grow more powerful. Developing high-performance
rendering methods based on point clouds thus deserves more
attention, yet it is so far insufficiently explored.
*These authors contributed equally.
†Corresponding Author.
Copyright © 2023, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.
1https://github.com/seanywang0408/RadianceMapping
In this work, we introduce a point clouds rendering method
which achieves rendering performance comparable to NeRF.
The main difference between NeRF-based rendering and
point clouds rendering is that the latter is built upon the
noisy surface of objects. On the bright side, this surface is a
beneficial geometric prior which greatly reduces the number
of queries in 3D space. On the downside, the prior is noisy
and sparse, since point clouds are generally reconstructed
by MVS algorithms or collected by depth sensors. Additional
approaches are needed to alleviate the artifacts brought by
the noise and sparsity.
Therefore, most current point clouds rendering methods
require two steps: spatial feature mapping and image-level
refinement. The spatial feature mapping step is similar to the
NeRF representation, in that it maps a 3D coordinate to its
color, density or latent feature. The refinement step is usually
implemented as a convolutional neural network.
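To make this two-step structure concrete, here is a minimal PyTorch sketch of such a generic pipeline; the module name, layer sizes and feature dimension are illustrative assumptions, not the design of any specific prior work:

```python
# A minimal sketch of the generic two-step point clouds rendering
# pipeline: spatial feature mapping followed by CNN refinement.
import torch
import torch.nn as nn

class TwoStageRenderer(nn.Module):
    def __init__(self, feat_dim=16):
        super().__init__()
        # Stage 1: spatial feature mapping, e.g. an MLP from a 3D
        # coordinate to a latent feature (NeRF-like representation).
        self.mapping = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )
        # Stage 2: image-level refinement, e.g. a small CNN that
        # inpaints holes caused by point sparsity and noise.
        self.refine = nn.Sequential(
            nn.Conv2d(feat_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, coords, mask):
        # coords: (H, W, 3) per-pixel 3D points from rasterization;
        # mask:   (H, W) validity of each pixel (a point was hit).
        H, W, _ = coords.shape
        feat = self.mapping(coords.view(-1, 3)).view(H, W, -1)
        feat = feat * mask.unsqueeze(-1)           # zero out empty pixels
        feat = feat.permute(2, 0, 1).unsqueeze(0)  # (1, C, H, W)
        return self.refine(feat).squeeze(0)        # (3, H, W) image
```

In practice the methods below differ mainly in how the mapping stage is realized (voxel grids, per-point parameters, basis functions), which is the design axis this work targets.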
In this work, we mainly focus on the spatial feature mapping
step. Previous works use point cloud voxelization (Dai et al.),
learnable parameters (Rückert, Franke, and Stamminger;
Kopanas et al.) or linear combinations of spherical basis
functions (Rakhimov et al.) as mapping functions. However,
these methods suffer from either high computation cost, large
storage requirements, or unsatisfactory rendering performance.
To this end, we introduce a much simpler but surprisingly
effective mapping function.
Motivated by the volume rendering formulation in NeRF,
we analyze its adaptation to point clouds rendering scenarios.
We conclude that in a point cloud scene, volumetric rendering
can be simplified to modeling the view-dependent color of
the first intersection between the estimated surface and the
ray. In other words, we augment each 3D point (i.e., most
probably a surface point) with a learnable feature indicating
its first-hit color. Thereby the point clouds rendering task can
be recast within the high-fidelity NeRF framework, without
consuming redundant computation on internal ray samples.
We name this radiance mapping.
Moreover, based on radiance mapping, we rectify the raw
point cloud coordinates that are fed into the mapping function
using the z-buffer in rasterization, obtaining a query point
which lies exactly on the camera ray. This approach allows
us to obtain a more accurate geometry and avoid spatial
frequency collapse. The radiance mapping function, consisting
of a 5-layer MLP, is only 0.75M in size, which is much smaller
than previous models.
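Below is a minimal PyTorch sketch of this rectification together with the radiance mapping MLP; the function name, layer width and the use of a concatenated view direction are illustrative assumptions, since this section only specifies a 5-layer MLP:

```python
# A minimal sketch of z-buffer rectification followed by radiance
# mapping. Names and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

def rectify_queries(ray_o, ray_d, zbuf):
    # Instead of feeding noisy raw point coordinates to the MLP,
    # place each query on its camera ray at the rasterized depth:
    # x = o + z * d lies exactly on the ray through the pixel.
    return ray_o + zbuf.unsqueeze(-1) * ray_d      # (H, W, 3)

class RadianceMapping(nn.Module):
    """5-layer MLP: (surface point, view direction) -> radiance."""
    def __init__(self, width=512):
        super().__init__()
        dims = [6] + [width] * 4 + [3]   # ~0.79M params at this width
        layers = []
        for i in range(5):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < 4:
                layers.append(nn.ReLU())
        self.mlp = nn.Sequential(*layers)

    def forward(self, x, d):
        # Single evaluation per pixel: no internal ray samples needed.
        return self.mlp(torch.cat([x, d], dim=-1))
```

Here ray_o and ray_d would come from the camera parameters and zbuf from the rasterizer's depth buffer; the output can be a color or a latent feature consumed by the refinement CNN.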