
Hybrid MBlur: Using Ray Tracing to Solve the Partial Occlusion
Artifacts in Real-Time Rendering of Motion Blur Effect
Tan Yu Wei
National University of Singapore
yuwei@u.nus.edu
Cui Xiaohan
National University of Singapore
cuixiaohan@u.nus.edu
Anand Bhojan
National University of Singapore
banand@comp.nus.edu.sg
Figure 1: From left to right: original scene, adapted post-process [McGuire et al. 2012] (295 fps), UE4 post-process, hybrid (205
fps) and ray-traced MBlur on The Modern Living Room (CC BY) with a GeForce RTX 2080 Ti
ABSTRACT
For a foreground object in motion, details of its background which
would otherwise be hidden are uncovered through its inner blur.
This paper presents a novel hybrid motion blur rendering technique
combining post-process image filtering and hardware-accelerated
ray tracing. In each frame, we advance rays recursively into the
scene to retrieve background information for inner blur regions and
apply a post-process filtering pass on the ray-traced background and
rasterized colour before compositing them together. Our approach
achieves more accurate partial occlusion semi-transparencies for
moving objects while maintaining interactive frame rates.
CCS CONCEPTS
• Computing methodologies → Rendering; Ray tracing; • Applied computing → Computer games.
KEYWORDS
real-time, motion blur, ray tracing, post-processing, hybrid render-
ing, games
ACM Reference Format:
Tan Yu Wei, Cui Xiaohan, and Anand Bhojan. 2020. Hybrid MBlur: Using
Ray Tracing to Solve the Partial Occlusion Artifacts in Real-Time Rendering
of Motion Blur Effect. In Special Interest Group on Computer Graphics and
Interactive Techniques Conference Posters (SIGGRAPH ’20 Posters), August 17,
2020. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3388770.3407436
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
SIGGRAPH ’20 Posters, August 17, 2020, Virtual Event, USA
©2020 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-7973-1/20/08.
https://doi.org/10.1145/3388770.3407436
1 INTRODUCTION
Blurred regions by nature uncover information about hidden background areas which are otherwise excluded from the render, as an effect of partial occlusion. Moving objects blur outwards and
inwards at their silhouette, causing the region surrounding their
silhouette to appear semi-transparent [Jimenez 2014]. Outer blur
represents an object’s blur into its background, while inner blur
refers to the blur produced within the silhouette of the object itself.
One key to Motion Blur (MBlur) rendering is hence the recovery of background colour in inner blur regions, which screen-space approaches handle inaccurately. Post-process techniques like McGuire et al. [2012] approximate the background of the inner blur with neighbouring pixels when the background colour of the target pixel
cannot be retrieved from raster information. This approach not only produces a mere approximation of the true background geometry of inner blur regions, but also creates inconsistencies between the real backgrounds shown in sharp regions and the approximated backgrounds shown in blurred ones. Our technique addresses these issues by obtaining the
exact colour of occluded background with ray tracing for a more
accurate MBlur.
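The partial-occlusion semi-transparency described above can be illustrated with a toy alpha-style blend. This is only an intuition-building sketch, not the paper's algorithm; the function and its `coverage` parameter are hypothetical names. A moving object covers a pixel for just part of the exposure, so the remaining weight must come from the background that a raster image alone cannot supply.

```python
# Toy illustration of inner-blur semi-transparency, NOT the paper's
# method: the foreground contributes only for the fraction of the
# exposure during which it covers the pixel; the remainder must come
# from the true (possibly occluded) background.

def composite_inner_blur(foreground, background, coverage):
    """Blend a partially covering moving object over its background.

    coverage in [0, 1]: fraction of the exposure during which the
    foreground occupies the pixel (hypothetical parameter name).
    """
    return tuple(coverage * f + (1.0 - coverage) * b
                 for f, b in zip(foreground, background))

# A red object covering a blue background for half the exposure
# leaves a purple fringe; the blue term is exactly what screen-space
# methods cannot recover and what the ray reveal pass supplies.
print(composite_inner_blur((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5))
# (0.5, 0.0, 0.5)
```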
2 DESIGN
We first obtain per-pixel information such as camera space depth,
world space surface normal vector, screen space mesh ID and ve-
locity as well as rasterized colour under deferred shading. The
same depth and colour information for background geometry are
retrieved by our novel ray reveal pass within a ray mask for pixels
in the inner blur of moving foreground objects. A tile-dilate pass is
then applied to these two sets of buffers to determine the sampling range of our gathering filter in the subsequent post-process pass, which is adapted from McGuire et al. [2012]. Both the ray-revealed result and rasterized output are then blurred by the post-process pass, and lastly composited together to produce our final image.
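The pass ordering above can be sketched as a single frame function. Everything here is a simplified stand-in under stated assumptions, not the actual GPU passes: a 1-D "image", a fixed-radius box filter in place of the velocity-driven gather filter, and a constant grey in place of the ray-traced background colour.

```python
# Runnable sketch of the pass ordering on a 1-D "image"; all pass
# bodies are simplified stand-ins, not the paper's shaders.

def render_frame(gbuffer):
    """gbuffer holds per-pixel lists: 'colour', 'velocity', 'is_foreground'."""
    n = len(gbuffer["colour"])

    # Ray reveal pass: only pixels in the ray mask (moving foreground)
    # fetch the occluded background; a constant grey stands in for the
    # colour a traced ray would actually return.
    ray_mask = [gbuffer["is_foreground"][i] and gbuffer["velocity"][i] > 0.0
                for i in range(n)]
    revealed = [0.5 if m else None for m in ray_mask]

    # Post-process pass: a fixed-radius box filter stands in for the
    # gather filter adapted from McGuire et al. [2012]; in the real
    # pipeline the tile-dilate pass sets the sampling range instead.
    def box_blur(buf):
        out = []
        for i in range(n):
            window = [v for v in buf[max(0, i - 1):i + 2] if v is not None]
            out.append(sum(window) / len(window) if window else None)
        return out

    blurred_raster = box_blur(gbuffer["colour"])
    blurred_reveal = box_blur(revealed)

    # Composite: inner-blur pixels take the blurred ray-revealed
    # background; all other pixels keep the blurred raster colour.
    return [blurred_reveal[i] if ray_mask[i] else blurred_raster[i]
            for i in range(n)]

# Three pixels: static background, moving foreground, static background.
frame = render_frame({"colour": [0.0, 1.0, 0.0],
                      "velocity": [0.0, 2.0, 0.0],
                      "is_foreground": [False, True, False]})
```

The composite here is a hard select per pixel for brevity; the actual technique blends the two blurred buffers so sharp and blurred regions transition smoothly.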
arXiv:2210.05364v1 [cs.GR] 11 Oct 2022