stead. Hou et al. (2010) use 4D hyper-trapezoids
to perform micropolygon ray tracing. Methods of
approximating the visibility function to be sampled
have also been devised, such as that of Sung et al.
(2002). However, as explained in Sattlecker and
Steinberger (2015), these methods also exhibit ghosting
artifacts at low sample rates, whereas increasing
the number of samples would lead to noise.
To produce the right amount of blur, some real-
time approaches like Rosado (2007), Ritchie et al.
(2010) and Sousa (2013) make use of per-pixel ve-
locity information by accumulating samples along the
magnitude and direction of velocities in the colour
buffer. Other techniques in Korein and Badler (1983),
Catmull (1984) and Choi and Oh (2017) accumulate
the colours of visible passing geometry or pixels with
respect to a particular screen space position while
Gribel et al. (2011) makes use of screen space line
samples instead. Potmesil and Chakravarty (1983)
represents the relationship between objects and their
corresponding image points as point-spread functions
(PSFs), which are then used to convolve points in
motion. Leimk¨
uhler et al. (2018) splats the PSF of
every pixel in an accelerated fashion using sparse
representations of their Laplacians instead. Time-
dependent edge equations, as explained in Akenine-
M¨
oller et al. (2007) and Gribel et al. (2010), and 4D
polyhedra primitives (Grant, 1985) have also been
used for MBlur geometry processing. Recently, a
shading rate-based approach involving content and
motion-adaptive shading in Yang et al. (2019) has also
been developed for the generation of MBlur.
In particular, attempts to simulate nonlinear
MBlur include Gribel et al. (2013) and Woop et al.
(2017). Our hybrid technique only considers linear
inter-frame image space motion for now, but we in-
tend to provide support for higher-order geometry
motion in the future. We also assume a mainstream
ray tracing acceleration architecture of the kind widely
available in modern gaming workstations. The GA10x RT
Core of the newest NVIDIA Ampere architecture
provides hardware acceleration for ray-traced motion
blur (NVIDIA Corporation, 2021) but is only found in
the premium GeForce RTX 30 Series graphics cards.
3 DESIGN
Our hybrid MBlur approach, as illustrated in Fig-
ure 2, compensates for missing information in post-
processed MBlur with the revealed background pro-
duced by a ray trace-based technique.
A Geometry Buffer (G-Buffer) is first gener-
ated under a deferred shading set-up, rendering tex-
tures containing per-pixel information such as cam-
era space depth, screen space velocity and rasterized
colour. The same depth, velocity and colour infor-
mation for background geometry is produced by our
novel ray reveal pass within a ray mask for pixels in
the inner blur of moving foreground objects. A tile-
dilate pass is then applied to these 2 sets of buffers
to determine the sampling range of our gathering fil-
ter in the subsequent post-process pass. Both the ray-
revealed and rasterized output are blurred by this post-
process pass and lastly composited together.
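The tile-dilate pass described above can be sketched as follows. This is a minimal CPU-side illustration in Python, not the paper's implementation: the tile size, buffer layout and function names are our assumptions. Each tile first records its dominant (maximum-magnitude) velocity, and dilation then spreads the largest neighbouring velocity into each tile so that blur reaching in from fast-moving neighbours is not clipped by the gather filter's sampling range.

```python
def magnitude(v):
    # Euclidean length of a 2D velocity (vx, vy)
    return (v[0] ** 2 + v[1] ** 2) ** 0.5

def tile_max(velocity, width, height, tile=20):
    """Reduce a row-major per-pixel velocity buffer to one dominant
    (max-magnitude) velocity per tile. Tile size is illustrative."""
    tw, th = (width + tile - 1) // tile, (height + tile - 1) // tile
    tiles = [(0.0, 0.0)] * (tw * th)
    for y in range(height):
        for x in range(width):
            t = (y // tile) * tw + (x // tile)
            if magnitude(velocity[y * width + x]) > magnitude(tiles[t]):
                tiles[t] = velocity[y * width + x]
    return tiles, tw, th

def dilate(tiles, tw, th):
    """Each tile inherits the max-magnitude velocity of its 3x3 tile
    neighbourhood, bounding the gather filter's sampling range."""
    out = list(tiles)
    for ty in range(th):
        for tx in range(tw):
            best = tiles[ty * tw + tx]
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = ty + dy, tx + dx
                    if 0 <= ny < th and 0 <= nx < tw:
                        if magnitude(tiles[ny * tw + nx]) > magnitude(best):
                            best = tiles[ny * tw + nx]
            out[ty * tw + tx] = best
    return out
```

In our hybrid setting this reduction would be applied to both the rasterized and the ray-revealed velocity buffers before the post-process pass samples within the dilated range.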
3.1 Post-process
The McGuire et al. (2012) post-process MBlur ren-
ders each pixel by gathering sample contributions
from a heuristic range of nearby pixels. We adapt
this approach to produce a motion-blurred effect sep-
arately for rasterized and ray-revealed information.
As suggested in Rosado (2007), motion vectors
are given by first calculating per-pixel world space
positional differences between every frame and its
previous frame, followed by a translation to screen
space. By using the motion vector between the last
frame and the current frame, we simulate the expo-
sure time of one frame. Then, we follow the approach
of McGuire et al. (2012) in calculating the displace-
ment of the pixel within the exposure time by scal-
ing the inter-frame motion vector with the frame rate
of the previous frame as well as the exposure time.
Considering the full exposure as one unit of time,
this displacement can be interpreted as a per-exposure
velocity vector. This approach uses the assumption
that the motion vector of each pixel between the pre-
vious frame and the current frame remains constant
throughout the exposure time.
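The scaling described above can be sketched as follows; the function and parameter names are illustrative assumptions, not identifiers from McGuire et al. (2012):

```python
def per_exposure_velocity(motion_vec, frame_rate, exposure_time):
    """Scale an inter-frame screen-space motion vector (pixels per frame)
    by the previous frame's rate (frames per second) and the exposure
    time (seconds) to get the displacement covered during the exposure.
    Assumes, as in the text, constant velocity throughout the exposure."""
    scale = frame_rate * exposure_time  # number of frames the exposure spans
    return (motion_vec[0] * scale, motion_vec[1] * scale)
```

With an exposure of one full frame (exposure_time = 1 / frame_rate) the scale is 1, so the per-exposure velocity equals the inter-frame motion vector, matching the one-frame exposure simulated above.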
Jimenez (2014) presents a technique based on
McGuire et al. (2012). As described there, the
major problems to be addressed when producing
post-processed MBlur are the range of
sampling, the amount of contribution of each sam-
ple as well as the recovery of background geometry
information for inner blur. With our method for cal-
culating per-exposure velocities, we adopt McGuire
et al. (2012)’s approach in determining the magni-
tude of the sampling range and representing different
amounts of sample contribution, as illustrated in the
Appendix for completeness. As shown in Figure 3,
McGuire et al. (2012) centers the sampling area at
the target pixel, creating a blur effect both outwards
and inwards from the edge of the object. Although
this produces a more uniform blur for thin objects and
the specular highlights of curved surfaces, it poses the
problem of having to smoothen the transition between