
Hybrid DoF: Ray-Traced and Post-Processed Hybrid Depth of Field Effect for Real-Time Rendering
Tan Yu Wei
National University of Singapore
yuwei@u.nus.edu
Nicholas Chua
National University of Singapore
nicholaschuayunzhi@u.nus.edu
Nathan Biette
National University of Singapore
nathan.biette@u.nus.edu
Anand Bhojan
National University of Singapore
banand@comp.nus.edu.sg
Figure 1: From left to right: original scene, adapted post-process [Jimenez 2014] with foreground overblur, UE4 post-process
[Abadie 2018], hybrid (143 fps) and ray-traced DoF on The Modern Living Room (CC BY) with a GeForce RTX 2080 Ti
ABSTRACT
Depth of Field (DoF) in games is usually achieved as a post-process effect by blurring pixels in the sharp rasterized image based on the defined focus plane. This paper describes a novel real-time DoF technique that uses ray tracing with image filtering to achieve more accurate partial occlusion semi-transparencies on edges of blurry foreground geometry. This hybrid rendering technique leverages ray tracing hardware acceleration as well as spatio-temporal reconstruction techniques to achieve interactive frame rates.
CCS CONCEPTS
• Computing methodologies → Rendering; Ray tracing; • Applied computing → Computer games.
KEYWORDS
real-time, depth of field, ray tracing, post-processing, hybrid rendering, games
ACM Reference Format:
Tan Yu Wei, Nicholas Chua, Nathan Biette, and Anand Bhojan. 2020. Hybrid DoF: Ray-Traced and Post-Processed Hybrid Depth of Field Effect for Real-Time Rendering. In Special Interest Group on Computer Graphics and Interactive Techniques Conference Posters (SIGGRAPH ’20 Posters), August 17, 2020. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3388770.3407426
1 INTRODUCTION
Under partial occlusion in Depth of Field (DoF), background information is revealed through the semi-transparent silhouettes of blurry foreground geometry. Partial occlusion is inaccurate with traditional post-processing as the rasterized image does not store any information behind foreground objects. We can query the scene for background intersections with ray tracing, but at poorer performance. Hence, we propose a DoF effect based on hybrid rendering, which combines rasterization and ray tracing techniques for better visual quality while maintaining interactive frame rates.
2 DESIGN
A G-Buffer is first produced in deferred shading with a sharp rasterized image that undergoes post-process filtering adapted from Jimenez [2014]. Selected areas of the scene with higher rates of post-processing inaccuracy are then chosen to undergo distributed ray tracing with spatio-temporal reconstruction. The post-process and ray trace colours are finally composited together. We build our technique on the thin lens model by Potmesil and Chakravarty [1982], and define our scene such that points in front of the focus plane are in the near field and points behind are in the far field.
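For concreteness, the listing below sketches the thin-lens circle of confusion (CoC) and the near/far-field classification described above. It is a minimal C++ illustration under our own assumptions: the parameter names (aperture, focalLength, focusDist) and the signed-CoC convention are illustrative, not details taken from the paper.

// Sketch (C++): thin-lens circle of confusion per texel.
// Assumes depth, focal length and focus distance share the same
// world-space units; 'aperture' is the lens diameter A.
struct LensParams {
    float aperture;     // lens aperture diameter A
    float focalLength;  // focal length f
    float focusDist;    // distance to the focus plane z_f
};

// Signed CoC: negative in the near field (in front of the focus plane),
// positive in the far field (behind it), zero on the focus plane.
float SignedCoC(float depth, const LensParams& lens) {
    return lens.aperture * lens.focalLength * (depth - lens.focusDist) /
           (depth * (lens.focusDist - lens.focalLength));
}

// Points closer than the focus plane fall in the near field.
bool InNearField(float depth, const LensParams& lens) {
    return depth < lens.focusDist;
}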
2.1 Ray Trace
We shoot a variable number of rays into the scene through an adaptive ray mask. Employing a selective rendering approach, as in adaptive frameless rendering, we aim to shoot more rays at edges to create clean semi-transparencies but fewer at regions with less detail, such as plain surfaces. Like Canny Edge Detection, we first apply a Gaussian filter on the G-Buffer to reduce noise and jaggies along diagonal edges.
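The listing below sketches one way such an adaptive ray mask could be built: blur the G-Buffer depth with a small Gaussian, then allocate more rays per texel where the blurred depth gradient is large. The kernel size, edge threshold, and per-texel ray budgets are assumptions for illustration and are not values from the paper.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Sketch (C++): per-texel ray counts from a Gaussian-filtered depth buffer.
// 'edgeThreshold', 'raysAtEdge' and 'raysElsewhere' are illustrative.
std::vector<std::uint8_t> BuildRayMask(const std::vector<float>& depth,
                                       int width, int height,
                                       float edgeThreshold = 0.05f,
                                       std::uint8_t raysAtEdge = 4,
                                       std::uint8_t raysElsewhere = 1) {
    // Clamped texel fetch to handle image borders.
    auto at = [&](const std::vector<float>& img, int x, int y) {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return img[y * width + x];
    };

    // 3x3 Gaussian blur (outer product of a 1-2-1 kernel) to suppress
    // noise and jaggies before edge detection.
    const float k[3] = {0.25f, 0.5f, 0.25f};
    std::vector<float> blurred(depth.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            float v = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    v += k[dx + 1] * k[dy + 1] * at(depth, x + dx, y + dy);
            blurred[y * width + x] = v;
        }

    // Central-difference gradient magnitude as a cheap edge measure;
    // edge texels receive more rays, flat regions receive fewer.
    std::vector<std::uint8_t> mask(depth.size(), raysElsewhere);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            float gx = at(blurred, x + 1, y) - at(blurred, x - 1, y);
            float gy = at(blurred, x, y + 1) - at(blurred, x, y - 1);
            if (std::sqrt(gx * gx + gy * gy) > edgeThreshold)
                mask[y * width + x] = raysAtEdge;
        }
    return mask;
}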
The filtered G-Buffer then passes through