become fat and deformed at areas with large CoC.
Such inaccuracies do not arise in ray-traced DoF (Cook et al., 1984), since we can simulate a thin lens and query the scene directly for intersections, rather than being limited to what is visible in the rasterized image. However, achieving interactive frame rates with ray tracing remains difficult due to the high computational cost of ray-geometry intersection tests and of shading each pixel multiple times, even on the latest GPUs developed for ray tracing. Hence, hybrid rendering, which aims to combine existing rasterization techniques with ray tracing, is being researched.
2 RELATED WORK
2.1 Hybrid Rendering
Examples of hybrid rendering for related effects include Macedo et al. (2018) and Marrs et al. (2018), which invoke ray tracing for reflections and anti-aliasing, respectively, only on pixels where rasterization techniques cannot achieve realistic or desirable results. Beck et al. (1981), Hertel et al. (2009) and Lauterbach and Manocha (2009) employ the same strategy to produce accurate shadows.
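The selective strategy above can be sketched as a per-pixel mask. The sketch below is illustrative only: the inputs, function name and roughness cutoff are our own assumptions and are not taken from any of the cited systems.

```python
import numpy as np

def select_raytrace_pixels(reflective, roughness, rough_cutoff=0.3):
    """Hypothetical per-pixel selection for hybrid rendering: fall back to
    ray tracing only where a rasterized effect (e.g. screen-space
    reflections) is likely to fail, such as on smooth reflective surfaces.
    The cutoff value is an illustrative assumption."""
    # True where the pixel should be ray traced instead of rasterized.
    return reflective & (roughness < rough_cutoff)

# G-buffer style inputs for a 4-pixel image.
reflective = np.array([True, True, False, True])
roughness = np.array([0.1, 0.8, 0.1, 0.2])
mask = select_raytrace_pixels(reflective, roughness)
```

Only the pixels flagged by the mask pay the cost of ray tracing; the rest keep their rasterized result.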
The concept of hybrid rendering can also be ex-
tended to general rendering pipelines. For example,
Cabeleira (2010) uses rasterization for diffuse illu-
mination and ray tracing for reflections and refrac-
tions. Barr´
e-Brisebois et al. (2019) is also one such
pipeline that has replaced effects like screen-space re-
flections with their ray trace counterparts to achieve
better image quality. Another commonly used approach, taken by Chen and Liu (2007), is to substitute rasterization for primary ray generation in the recursive ray tracing of Whitted (1979). Andrade et al. (2014) improves upon this technique by enforcing a render-time budget, prioritizing only the most important scene objects for ray tracing.
2.2 DoF
Many DoF rendering techniques have been devised
over the years. Potmesil and Chakravarty (1982) first introduced the concept of the CoC for a point, based on a thin-lens model that simulates the effects of the lens and aperture of a physical camera. It employs a post-processing technique that converts sampled points into their CoCs; the intensity distributions of the CoCs overlapping each pixel are then accumulated to produce the final colour for that pixel.
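The per-point CoC follows directly from the thin-lens model. The sketch below uses the standard thin-lens diameter formula, with all distances in the same unit; the variable names are our own.

```python
def coc_diameter(depth, focus_depth, focal_length, aperture):
    """Circle-of-confusion diameter for a point at distance `depth` from the
    lens, under a thin-lens model focused at `focus_depth`. `aperture` is
    the aperture diameter; all quantities share one unit, e.g. metres."""
    return abs(aperture * focal_length * (depth - focus_depth)
               / (depth * (focus_depth - focal_length)))

# A point on the focal plane has zero CoC; blur grows away from it.
in_focus = coc_diameter(2.0, 2.0, 0.05, 0.01)
behind = coc_diameter(4.0, 2.0, 0.05, 0.01)
far_behind = coc_diameter(8.0, 2.0, 0.05, 0.01)
```

In a post-processing pass, each sampled point is splatted as a disk of this diameter and the overlapping contributions are accumulated per pixel.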
Haeberli and Akeley (1990) integrates images ren-
dered from different sample points across the aper-
ture of the lens with an accumulation buffer. On the
other hand, Cook et al. (1984) traces multiple rays
from these different sample points on the lens into the
scene using a technique now commonly known as dis-
tributed ray tracing, for which improvements in ray
budget have been made in Hou et al. (2010) and Lei
and Hughes (2013).
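A minimal sketch of the lens sampling underlying distributed ray tracing, assuming a lens disk at the origin in the z = 0 plane; the uniform polar disk sampling and all names are our own simplifications, not the cited authors' formulation.

```python
import math, random

def lens_ray(focal_point, aperture_radius, rng=random):
    """One primary ray for distributed ray tracing: jitter the ray origin
    uniformly over a thin-lens disk (lens at z = 0) and aim it at the
    corresponding point on the focal plane, so in-focus geometry stays
    sharp while out-of-focus points are averaged over the aperture."""
    # Uniform sample on a disk via polar coordinates (sqrt keeps density uniform).
    r = aperture_radius * math.sqrt(rng.random())
    theta = 2.0 * math.pi * rng.random()
    origin = (r * math.cos(theta), r * math.sin(theta), 0.0)
    # Every lens ray for this pixel passes through the same focal-plane point.
    direction = tuple(f - o for f, o in zip(focal_point, origin))
    return origin, direction
```

Averaging the radiance of many such rays per pixel produces the DoF blur; a zero aperture degenerates to a pinhole camera.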
For rendering with real-time performance con-
straints, spatial reconstruction and temporal accumu-
lation approaches have also been developed. For instance, Dayal et al. (2005) introduces adaptive spatio-temporal sampling, sampling more densely where the colour variance of the rendered image is high, in the spirit of selective rendering (Chalmers et al., 2006), and favouring newer samples during temporal accumulation in dynamic scenes.
scenes. Schied et al. (2017) also uses temporal ac-
cumulation to raise the effective sample count on top
of image reconstruction guided by variance estimation. Such techniques have been applied to DoF, for example in Hach et al. (2015), Leimkühler et al. (2018), Weier et al. (2018), Yan et al. (2016) and Zhang et al.
(2019). More advanced reconstruction techniques for
DoF have also been introduced, such as Belcour et al.
(2013), Lehtinen et al. (2011), Mehta et al. (2014) and
Vaidyanathan et al. (2015), which sample light fields, as well as Shirley et al. (2011), which selectively blurs pixels with low-frequency content under stochastic sampling. A more adaptive temporal accumulation approach from Schied et al. (2018), which is responsive
to changes in sample attributes such as position and
normal has also been proposed to mitigate ghosting
and lag in classic temporal accumulation approaches.
Micropolygon-based techniques have also proven capable of rendering DoF, as in Fatahalian et al. (2009) and Sattlecker and Steinberger (2015). Catmull
(1984) solves for per-pixel visibility by performing
depth sorting on overlapping polygons for each pixel.
Following this, approaches based on multi-layer images, such as Franke et al. (2018), Kraus and Strengert (2007), Lee et al. (2008), Lee et al. (2009) and Selgrad et al. (2015), have been introduced, in which the contributions from each layer are accumulated to
produce the final image. Such layered approaches are computationally expensive, although they can generate relatively accurate results for semi-transparencies. Bukowski et al. (2013), Jimenez (2014), Valient (2013) and the state-of-the-art Unreal Engine approach of Abadie (2018) divide the scene into background and foreground, and run a gathering filter separately on each. We adopt such a technique,
which achieves better rendering times even in comparison to Yan et al. (2016), an approach that avoids the problem of separating the scene by depth by factoring high-dimensional filters into 1D integrals.
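The background/foreground separation can be sketched as follows. The hard depth split and array layout are simplifying assumptions of ours; production filters such as Abadie (2018) use smoother partitions and per-layer, CoC-sized gathers.

```python
import numpy as np

def split_layers(color, depth, focus_depth):
    """Hypothetical hard split into foreground/background layers: pixels
    nearer than the focal plane form the foreground, the rest the
    background. Each layer would then be blurred (gathered) independently
    and composited front to back."""
    near = depth < focus_depth                  # per-pixel layer mask
    fg = np.where(near[:, None], color, 0.0)    # foreground colour, zero elsewhere
    bg = np.where(near[:, None], 0.0, color)    # background colour
    return fg, bg, near

# Three pixels: in front of, on, and behind the focal plane (focus at 2.0).
color = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
depth = np.array([1.0, 2.0, 3.0])
fg, bg, near = split_layers(color, depth, 2.0)
```

Filtering each layer separately prevents sharp background pixels from bleeding into a blurred foreground and vice versa, which is the failure mode of a single full-screen gather.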