arXiv:2210.05365v1 [cs.GR] 11 Oct 2022
Pacific Graphics (2021) Work-In-Progress
M. Okabe, S. Lee, B. Wuensche and S. Zollmann (Editors)
Cloud-Assisted Hybrid Rendering for Thin-Client Games and VR Applications
Tan Yu Wei, Louiz Kim-Chan, Anthony Halim, Anand Bhojan
School of Computing, National University of Singapore
Abstract
We introduce a novel distributed rendering approach to generate high-quality graphics in thin-client games and VR applications. Many mobile devices have limited computational power to achieve ray tracing in real-time. Hence, hardware-accelerated cloud servers can perform ray tracing instead and have their output streamed to clients in remote rendering. Applying the approach of distributed hybrid rendering, we leverage the computational capabilities of both the thin client and the powerful server by performing rasterization locally while offloading ray tracing to the server. With advancements in 5G technology, the server and client can communicate effectively over the network and work together to produce a high-quality output while maintaining interactive frame rates. Our approach achieves better visuals than local rendering and faster performance than remote rendering.
CCS Concepts
• Computing methodologies → Rendering; Ray tracing; • Applied computing → Computer games;
1. Introduction
With recent advancements in GPU hardware acceleration, ray tracing is now feasible in real-time applications and is no longer limited to offline rendering. Hybrid rendering techniques that combine ray tracing and rasterization can generate higher-quality visuals while maintaining interactive frame rates on PC applications. In contrast, the graphics capabilities of mobile and VR devices are weaker, so the use of ray tracing can lead to undesirably low frame rates. Hence, current cloud gaming providers apply remote rendering by performing rendering on powerful cloud infrastructure and streaming the output to the users’ access devices [HMR09].
However, thin-client devices, albeit with lower computational power, can perform rasterization in real-time. Rather than solely relying on the server for rendering, we leverage this limited graphics capability of the client [CWC15] via distributed rendering. In particular, our approach can help to alleviate the server’s workload so it can focus on ray tracing. The ray-traced information can then be used to achieve better visual quality in the final output.
Nonetheless, in order to combine the graphics capabilities of both the client and server to meet real-time performance constraints, we require excellent network bandwidth and latency for efficient two-way communication and data transfer. Fortunately, this can be achieved with the newest developments in 5G technology. With faster networks, we can improve the overall performance of ray tracing-incorporated real-time rendering, bringing more realistic graphics to thin-client games and VR applications.
2. Design
For distributed rendering, we adopt the UDP network protocol for efficient communication between the cloud server and client device. Nonetheless, UDP does not handle retransmission of lost packets. Retransmitted packets are also not useful for real-time interactive applications, as they would be outdated for the current frame. Hence, in the event of a timeout, we make use of the latest available previously received data instead.
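The timeout fallback described above can be sketched as follows. This is an illustrative Python sketch, not our actual implementation; the socket setup, timeout value and payload handling are assumptions for the example.

```python
import socket

def receive_or_reuse(sock, last_payload, timeout_s=0.005):
    """Try to receive this frame's data over UDP; on timeout, fall back
    to the most recently received payload rather than requesting a resend,
    since a retransmitted packet would arrive too late to be useful."""
    sock.settimeout(timeout_s)
    try:
        payload, _addr = sock.recvfrom(65535)
        return payload          # fresh data for the current frame
    except socket.timeout:
        return last_payload     # stale, but available immediately
```

The caller keeps whatever is returned as `last_payload` for the next frame, so a dropped packet degrades output quality for one frame instead of stalling the pipeline.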
As for hybrid rendering, we adapt the DirectX implementation of a simple hybrid rendering pipeline [Wym18] with Lambertian shading [Kop14] and ray-traced shadows. We first perform a G-buffer rasterization pass for deferred shading to obtain the necessary per-pixel information. Next, we perform ray tracing to query the relative visibility of every light in the scene with respect to each pixel. Lastly, we combine the G-buffer and light visibility information in computing the final pixel colour I as shown.

    I = (I_d / π) Σ_i k_i · saturate(N · L_i) · I_i    (1)

In the above formula, I_d/π refers to the diffuse BRDF of the pixel, with I_d as its material diffuse colour. N represents the pixel’s surface normal, while k_i, L_i and I_i refer to the relative visibility, direction and intensity of light i respectively.
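For concreteness, Equation (1) can be evaluated per pixel as in the following Python sketch. The function names and the scalar treatment of colour are ours, for illustration only; a real shader would operate on RGB vectors in HLSL.

```python
import math

def saturate(x):
    # Clamp to [0, 1], mirroring HLSL's saturate() intrinsic.
    return max(0.0, min(1.0, x))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_pixel(diffuse, normal, lights):
    """Evaluate Equation (1) for one pixel.

    diffuse: material diffuse colour I_d (a scalar here, for brevity)
    normal:  unit surface normal N
    lights:  list of (k_i, L_i, I_i) tuples: visibility bit (0 or 1),
             unit direction towards light i, and its intensity
    """
    total = sum(k * saturate(dot(normal, L)) * I for k, L, I in lights)
    return diffuse / math.pi * total

# One fully visible light directly above a horizontal surface:
# I = (0.8 / pi) * 1 * saturate(1) * 3.0
colour = shade_pixel(0.8, (0.0, 1.0, 0.0), [(1, (0.0, 1.0, 0.0), 3.0)])
```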
© 2021 The Author(s)
Eurographics Proceedings © 2021 The Eurographics Association.
2.1. Rasterization
For each frame, the client first sends the dynamic updates in scene information, including any user input and camera movement, to the server for synchronization. Rasterization is then performed to obtain the per-pixel information needed for the rest of the rendering. The server requires the pixels’ world positions for the tracing of shadow rays, while the client requires their world-space normal vectors and material diffuse colours for Lambertian shading.
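The split of per-pixel raster data between the two ends can be summarised as in the sketch below. This is a hypothetical illustration; the channel names are ours, not identifiers from our implementation.

```python
# Which end consumes each per-pixel G-buffer channel, per Section 2.1.
GBUFFER_CONSUMERS = {
    "world_position": "server",  # needed to trace shadow rays
    "world_normal": "client",    # needed for Lambertian shading
    "diffuse_colour": "client",  # needed for Lambertian shading
}

def channels_for(side):
    """List the G-buffer channels a given end must hold."""
    return sorted(name for name, consumer in GBUFFER_CONSUMERS.items()
                  if consumer == side)
```

If only the client rasterizes, this split means just the world-position channel has to cross the network.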
For this particular pipeline, we perform rasterization on both the client and server at the same time. Alternatively, to avoid repeated work, we can have either the client or the server perform rasterization and send the necessary information to the other end over the network. If the client does the rasterization, we can leverage its limited computational capabilities and also minimize the amount of data that needs to be transferred (i.e. world positions only). On the other hand, rasterization on the server will most likely be faster because of its superior graphics hardware. The server can then proceed with ray tracing immediately rather than wait for the thin client to complete rasterization and send its data over.
Nonetheless, we choose to perform rasterization on both ends to improve the overall rendering performance. By performing rasterization on the server, we minimize the time taken to obtain the visibility bitmap. Although the client takes longer for rasterization, it is possible that the client would have completed rasterization by the time it receives the visibility bitmap from the server. It can then promptly proceed with Lambertian shading for the final output.
In this case, both the client and server require raster information. However, for other hybrid rendering pipelines, it would be better to avoid repeated work. For instance, [Cab10] introduces a pipeline that performs direct lighting computation in parallel with the ray tracing process. Such independent work distribution can enable our approach to attain better performance than pure remote rendering.
2.2. Ray Tracing
Shadow mapping is usually adopted to achieve real-time shadows in interactive applications. However, ray-traced shadows produce more accurate shadow boundaries by avoiding texture artifacts such as jaggies and shadow acne. Hence, we perform hardware-accelerated ray tracing on the server to incorporate high-quality ray-traced shadows in real-time.
For every pixel, we obtain its corresponding world position more efficiently through rasterization than through ray casting from the pixel centre. From the world position, we then trace a shadow ray towards every light in the scene. If the ray towards light i is not obstructed by any object, the light is visible from the pixel and k_i = 1; the pixel is hence deemed to be illuminated by the light. On the other hand, if the ray is obstructed by some object, k_i = 0, which indicates that the light is not visible from the pixel and hence does not contribute to the pixel’s final colour.
This visibility boolean is stored in a frame-sized bitmap with a compact bitmask per pixel, where every bit represents the relative visibility of a light from the pixel. For every visible light, its direction L_i and intensity I_i are then included in the computation of the final pixel colour. The size of the per-pixel bitmask and the number of overall visibility bitmaps can be increased based on the number of lights in the scene. To minimize the duration of visibility bitmap transfer, we apply LZ4 lossless compression to reduce the amount of data sent over the network. This duration can be further brought down with the help of high-speed 5G networks.
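As an illustration, the per-pixel bitmask packing and compression could look like the following sketch. We use zlib as a stand-in for LZ4, which is not in the Python standard library, and the eight-lights-per-byte limit is an assumption made for brevity; neither choice reflects our actual implementation.

```python
import zlib

def pack_visibility(per_pixel_flags):
    """Pack each pixel's light-visibility booleans (the k_i values) into
    one byte (bit i = visibility of light i, up to 8 lights), then
    compress the frame-sized bitmap before sending it to the client."""
    packed = bytearray()
    for flags in per_pixel_flags:
        byte = 0
        for i, visible in enumerate(flags):
            if visible:
                byte |= 1 << i
        packed.append(byte)
    return zlib.compress(bytes(packed))

def unpack_visibility(blob, num_lights):
    """Recover the per-pixel visibility lists on the client."""
    return [[bool((byte >> i) & 1) for i in range(num_lights)]
            for byte in zlib.decompress(blob)]
```

Because most neighbouring pixels see the same set of lights, the bitmap is highly repetitive and compresses well with a fast lossless codec.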
3. Discussion
We tested our distributed hybrid rendering approach on THE MODERN LIVING ROOM (CC BY) with three lights. Hardware-accelerated ray tracing was performed on a PC with a GeForce RTX 2080 GPU. A reasonably interactive frame rate of 34 fps was achieved with our basic prototype without substantial optimization.
With more premium GPUs on the market and the rapid development of 5G networks as well as edge computing, we believe that even higher frame rates can be attained with our current implementation. Nonetheless, we are continuing to explore optimizations in data transfer over the network as well as between the local GPU and CPU, and aim to enhance performance further with better data compression and transmission techniques. In the event of lost packets, we are also working on more accurate alternatives to the raw data of previous frames. For instance, we believe that we can use scene information such as motion vectors in conjunction with history frames to estimate the contents of lost packets.
Eventually, we hope to bring our distributed rendering approach to more advanced pipelines and incorporate lighting and camera effects such as reflections, global illumination and depth of field.
Acknowledgements
This work is supported by the Singapore Ministry of Education Academic Research grant T1 251RES1812, “Dynamic Hybrid Real-time Rendering with Hardware Accelerated Ray-tracing and Rasterization for Interactive Applications”. We thank developers Nicholas Nge and Alden Tan in our lab for their assistance.
References
[Cab10] CABELEIRA J. P. G.: Combining Rasterization and Ray Tracing Techniques to Approximate Global Illumination in Real-Time. Master’s thesis, Portugal, Nov. 2010. URL: http://voltaico.net/files/article.pdf
[CWC15] CUERVO E., WOLMAN A., COX L. P., LEBECK K., RAZEEN A., SAROIU S., MUSUVATHI M.: Kahawai: High-quality mobile gaming using GPU offload. In Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services (New York, NY, USA, 2015), MobiSys ’15, Association for Computing Machinery, pp. 121–135. doi:10.1145/2742647.2742657
[HMR09] HOLTHE O., MOGSTAD O., RONNINGEN L. A.: Geelix LiveGames: Remote playing of video games. In 2009 6th IEEE Consumer Communications and Networking Conference (Jan 2009), pp. 1–2. doi:10.1109/CCNC.2009.4784713
[Kop14] KOPPAL S. J.: Lambertian Reflectance. Springer US, Boston, MA, 2014, pp. 441–443. doi:10.1007/978-0-387-31439-6_534