Comparison of Popular Video Conferencing Apps Using Client-side Measurements on Different Backhaul Networks

Rohan Kumar Dhruv Nagpal
Vinayak Naik
[f20181013,f20180095,naik]@goa.bits-pilani.ac.in
BITS Pilani, Goa
India
Dipanjan Chakraborty
dipanjan@hyderabad.bits-pilani.ac.in
BITS Pilani, Hyderabad
India
ABSTRACT
Video conferencing platforms have been appropriated during
the COVID-19 pandemic for different purposes, including
classroom teaching. However, the platforms are not designed
for many of these objectives. When users, such as educationists,
select a platform, it is unclear which platform will perform
better, given the same network and hardware resources, to
meet the required Quality of Experience (QoE). Similarly,
when developers design a new video conferencing platform,
they do not have clear guidelines for making design choices
given the QoE requirements.
In this paper, we provide a set of network and systems
measurements, and quantitative user studies, to measure the
performance of video conferencing apps in terms of both
Quality of Service (QoS) and QoE. Using those metrics, we
measure the performance of Google Meet, Microsoft Teams,
and Zoom, three platforms popular in education and business.
We find a substantial difference in how the three apps treat
video and audio streams. We see that their choice of treatment
affects their consumption of hardware resources. Our
quantitative user studies confirm the findings of our
quantitative measurements. While each platform has its
benefits, we find that no app is ideal. A user can choose a
suitable platform depending on which of the following is most
important: audio, video, network bandwidth, CPU, or memory.
1 INTRODUCTION
When the COVID-19 pandemic hit, countries worldwide
went into strict lockdown, and schools, universities, offices,
and places of business closed down. Video conferencing
platforms like Google Meet, Microsoft Teams, and Zoom were
appropriated in different domains, such as classroom education,
healthcare, family functions, corporate events, meetings, and
shopping, for people to continue functioning. However, these
video conferencing platforms were not envisioned for scenarios
where the usage and network infrastructure are very diverse
in terms of devices employed and bandwidth. With the
continuing cycle of COVID waves, many of the platforms will
likely continue to be used for different purposes, including
classroom education. However, vital domains such as school
education have been badly hit during the COVID-19 pandemic,
especially in developing countries, because of several
socio-cultural factors, including the affordability of devices
and network bandwidth [11]. In this work, we focus on the
technological factors affecting the quality of school and
university education during the COVID-19 pandemic. We conduct
Quality of Service (QoS) experiments through client-side
network measurements on three popular video conferencing
platforms, namely Google Meet, Microsoft Teams, and Zoom, in
an ecologically valid scenario for classroom education, under
different network conditions and with varying modes of
operation within the apps. We also conduct Quality of
Experience (QoE) experiments through quantitative user studies
over the same platforms, subjected to the same network and
operational variations. Our work serves to inform educationists
in developing countries on choosing a platform to continue
conducting classroom education in online modes. In addition,
our work informs designers of platforms about prioritizing
different aspects for the education domain.
In the absence of access to server-side measurements, we
conduct client-side measurements for the QoS experiments
to determine several network characteristics, such as upload
and download payload sizes and the Inter-Packet Arrival Times
(IPAT). We also make quantitative comparisons between
the audio and video qualities at the sender and the receiver
sides. We conduct quantitative user studies to see how the
network and hardware usage of the various apps affect the
user experience. These insights, we believe, will empower
policymakers and educationists to choose a platform for their
needs. On the other hand, they will inform developers in
low-resource contexts about which characteristics or features
are essential.
The salient contributions of this paper are as follows.
arXiv:2210.09651v1 [cs.MM] 18 Oct 2022
ACM MobiCOVID’22, Seoul, South Korea Rohan Kumar et al.
(1) We study the network usage at the client side of three
popular video-conferencing platforms and correlate it with
video and audio quality to understand whether and how the
two are related.
(2) We conduct this study using Google Meet, which is widely
used in the education domain, and Microsoft Teams and Zoom,
which are commonly used in the corporate space and education.
(3) We quantitatively measure network usage and video-audio
quality. Since video-audio quality is also a subjective metric,
we quantitatively measure the perceived quality through a
user experience study.
(4) We use bandwidth, download payloads, upload payloads, and
IPAT (Inter-Packet Arrival Times) to measure network
characteristics.
(5) We quantitatively measure video characteristics in terms
of PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural
Similarity Index Measure).
(6) We measure the energy in different audio frequencies, the
bitrate, and the number of channels to study the audio
characteristics.
(7) We measure these metrics for the three apps on wired
broadband over WiFi and on 4G mobile Internet connections.
These two varieties of connections are the most widely used
for online classes during the COVID-19 pandemic.
(8) In our measurements, the bandwidth of wired broadband on
optical fiber is roughly 150 Mbps, while that of 4G mobile
Internet is about 11 Mbps. We want to see how the platforms
behave in terms of their network usage and video-audio quality
when presented with different backhaul networks.
(9) We vary the settings of the platforms in terms of
microphone and camera, as these result in different payloads
for the network.
Organization of the Paper. The paper is organized as follows.
Section 2 describes the metrics for quantitative measurements
of network usage, video quality, and audio quality. All of
these count towards the Quality of Service (QoS) class of
metrics. We also explain the survey that we conduct to measure
the quality of video and audio qualitatively, which counts
towards the QoE class of metrics. In Section 3, we analyze the
performance of the three apps using these metrics for wired
broadband and 4G mobile Internet connections. We compare our
work with the existing body of work in Section 4. Finally, we
conclude in Section 5 and mention future work in Section 6.
2 MEASURING THE PERFORMANCE
In this section, we describe each of the metrics we use and
discuss the setup used to collect values of those metrics.
2.1 Measuring Quantitative Performance
We measure the apps' performance in terms of their network
usage at the client ends, both the transmitter and the
receiver of the video. An advantage of measuring at the client
end is that no special access is required at the server. Any
user can measure performance without needing any special
access to the apps. The Upload Payload is the total payload in
the packets sent from the video source to the server. The
Download Payload is the total payload in the packets sent from
the server to the video receiver. The IPAT is the time
difference between any two successive packets at the receiver.
To compare the network performance of the three apps, we
measure the Upload Payload at the transmitter end of the
video, the Download Payload at the receiver end of the video,
and the standard deviation in IPAT. We also analyze CPU
utilization, memory usage, and battery consumption for the two
different networks.
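These metrics can be computed directly from a captured packet trace. Below is a minimal sketch, assuming packets have already been parsed into (timestamp, source, destination, payload-size) tuples; the field layout and IP addresses are illustrative, not the exact Wireshark export format.

```python
import numpy as np

def network_metrics(packets, client_ip):
    """Compute Upload Payload, Download Payload, and the standard
    deviation of Inter-Packet Arrival Times (IPAT) for one client.

    `packets` is a list of (timestamp_s, src_ip, dst_ip, payload_bytes)
    tuples, e.g. parsed from a packet-capture export.
    """
    # Upload: bytes the client sends; Download: bytes the client receives.
    upload = sum(p[3] for p in packets if p[1] == client_ip)
    download = sum(p[3] for p in packets if p[2] == client_ip)
    # IPAT: gaps between successive packet arrivals at the client.
    arrivals = sorted(t for t, src, dst, _ in packets if dst == client_ip)
    ipat_std = float(np.std(np.diff(arrivals))) if len(arrivals) > 1 else 0.0
    return upload, download, ipat_std

# Toy trace: the client 10.0.0.2 sends two packets and receives three.
trace = [
    (0.00, "10.0.0.2", "10.0.0.1", 1200),
    (0.02, "10.0.0.1", "10.0.0.2", 800),
    (0.05, "10.0.0.1", "10.0.0.2", 800),
    (0.06, "10.0.0.2", "10.0.0.1", 1200),
    (0.11, "10.0.0.1", "10.0.0.2", 800),
]
up, down, jitter = network_metrics(trace, "10.0.0.2")
```

A larger standard deviation of IPAT means more irregular packet arrivals, which is what the jitter comparison in Section 3 relies on.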
We perform the measurements over a session for each app,
lasting fifteen minutes. Over the session, we play a recorded
video of a lecture from a university, which mimics the
scenario of streaming live or recorded classes and meetings
for which these apps are being heavily used. In total, we
perform twelve different combinations for the session,
depending on whether the microphone and camera are switched
ON or OFF. We tabulate these combinations in Table 1. These
twelve test combinations give us all the possible
configurations of the state of the apps and the accessories.
While the video contains the speaker and the slides, the
camera transmits the video at the receiver's end. Since we
need at least one speaker for the video conference, the
speaker's video is transmitted in all twelve combinations. We
use Wireshark to capture sent and received packets. We use the
NumPy and Pandas Python libraries with the Wireshark packet
capture to compute the network metrics. We use a Python script
to measure the resource consumption of the conferencing apps'
processes. The script uses the psutil [2] library to capture
resource-consumption characteristics.
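A resource-monitoring script in this spirit can be sketched with psutil, the library named above. The process selection, sample count, and interval below are our illustrative assumptions, not the authors' exact script.

```python
import os
import time
import psutil

def sample_process(pid, samples=3, interval=0.1):
    """Periodically sample CPU and memory usage of one process.
    `samples` and `interval` are illustrative parameters."""
    proc = psutil.Process(pid)
    proc.cpu_percent(None)  # prime the counter; the first call returns 0.0
    readings = []
    for _ in range(samples):
        time.sleep(interval)
        readings.append({
            "cpu_percent": proc.cpu_percent(None),       # % since last call
            "rss_mb": proc.memory_info().rss / 2**20,    # resident set size
        })
    return readings

# Example: sample the current process (a real study would look up the
# conferencing app's PID, e.g. by matching the process name).
stats = sample_process(os.getpid())
```

Averaging such readings over a fifteen-minute session gives the per-app CPU and memory figures compared in Section 3.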
We record the sessions using the apps' recording feature to
measure the video quality and audio quality against the local
copy of the video and audio. There are multiple techniques
available to evaluate video quality [24]. We use PSNR and SSIM
to compare the video quality. PSNR is a quantitative video
quality metric that gives us the inverse of the error between
the original and the recorded frames. A higher PSNR indicates
better quality. SSIM is a more complex quantitative metric
that considers perceptual quality [25]. Its value lies between
zero and one, the latter value implying that the two frames
are the same. We use the YUV color encoding to calculate the
SSIM and PSNR values. The 'Y' component depicts the
brightness, 'U' the blue projection, and 'V' the red
projection [1]. We use Spek [3] to compare the audio quality,
which gives us the energy distribution over different audible
frequencies. The higher the energy distribution among the
frequencies, the better the audio quality [5]. We repeat each
measurement three times on different days and report an
average of those.

Table 1: Measurements Performed

Seq No   App           Mic   Camera
1        Google Meet   OFF   OFF
2        Google Meet   ON    OFF
3        Google Meet   OFF   ON
4        Google Meet   ON    ON
5        MS Teams      OFF   OFF
6        MS Teams      ON    OFF
7        MS Teams      OFF   ON
8        MS Teams      ON    ON
9        Zoom          OFF   OFF
10       Zoom          ON    OFF
11       Zoom          OFF   ON
12       Zoom          ON    ON

Table 2: Configuration of network and end-hosts

Participant's Role              Sender                 Receiver
CPU                             Intel i5-8265U         Intel i5-8250U
RAM                             16 GB                  16 GB
OS                              Windows 10             Windows 10
Broadband Internet Connection   WLAN 802.11ac over     WLAN 802.11ac over
                                150 Mbps Optical       150 Mbps Optical
                                Fiber                  Fiber
4G Mobile Internet              11 Mbps                10.5 Mbps
Battery                         41 Wh                  41 Wh
Browser                         Google Chrome          Google Chrome
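For reference, PSNR and a simplified single-window variant of SSIM can be computed over a frame's 'Y' plane with NumPy. The standard SSIM uses a sliding window, so this is only a sketch of the formulas, not the exact tooling used in the paper; the toy frames below are synthetic.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means less error."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Global (single-window) SSIM; 1.0 means identical frames.
    C1 and C2 are the standard stabilizing constants."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Toy 'Y' planes: a random frame and a lightly noise-corrupted copy.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (48, 64), dtype=np.uint8)
noisy = np.clip(frame.astype(int) + rng.integers(-5, 6, frame.shape), 0, 255)
```

For identical frames, PSNR is infinite and SSIM is exactly one; mild noise lowers both, which is the behavior the per-app comparisons rely on.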
We conduct the measurements over two network configurations:
(a) wired broadband networks with the end-hosts connected via
a WiFi network, and (b) 4G mobile Internet. We give the
details of the configurations of the networks and end-hosts in
Table 2. To the extent possible, we keep the configuration the
same at the sender and the receiver.
2.2 Measuring Qualitative Performance
We measure qualitative performance to assess the users' video
and audio experience. We conduct a survey to determine the
factors that influence the qualitative user experience and
correlate them with the quantitative metrics. Once we
establish a correlation, app developers can improve overall
user experience and product performance by focusing on these
measurable metrics.
We take the help of fifteen participants to evaluate the
qualitative performance. We ask these survey subjects to view
the original video before showing them the same video
transmitted over Google Meet, Microsoft Teams, and Zoom. We
ask them to gauge differences in the quality of the streamed
content in terms of Video Quality, Audio Quality, Resolution,
Video-Audio Synchronisation, Buffering/Frame Drops, and Lag on
a 5-point Likert scale, with one being the worst and five
being the best. We randomize the order of the contents across
all the subjects.
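Likert responses of this kind are typically summarized as a mean and spread per app and per dimension; a small sketch with made-up ratings follows (the values are illustrative, not our survey data).

```python
import numpy as np

# Hypothetical 5-point Likert ratings from five participants for one
# dimension (e.g. Video Quality); the values are illustrative only.
ratings = {
    "Google Meet": [4, 3, 4, 5, 3],
    "MS Teams":    [3, 3, 2, 4, 3],
    "Zoom":        [5, 4, 4, 4, 5],
}

# Mean rating and standard deviation per app for this dimension.
summary = {
    app: (float(np.mean(vals)), float(np.std(vals)))
    for app, vals in ratings.items()
}
```

Per-dimension means like these are what get correlated against the quantitative QoS metrics in the analysis.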
3 PERFORMANCE ANALYSIS
We analyze the collected quantitative and qualitative metrics
and correlate the two.
3.1 Quantitative Performance over Wired
Broadband via WiFi
In Table 3, we see that Microsoft Teams uses higher bandwidth
and approximately 10% higher payloads for all twelve
measurements, which implies that it is sending more data from
the sender to the server and from the server to the receiver.
We present a detailed view of the Upload Bandwidth and
Download Bandwidth when both the mic and the camera are
switched ON in Figures 1 and 2, respectively. We observe
similar plots for all the other seven measurements. Due to
space constraints, we show plots for only one measurement. The
standard deviations in IPAT for Zoom and Google Meet are
minor, which is essential for having low jitter [22]. The
standard deviation in IPAT for Microsoft Teams is almost twice
that of Google Meet and Zoom when both the mic and the camera
are OFF, as seen in Table 3. This implies that the packets
pertaining to the video being played by the sender are sent
irregularly in the case of Microsoft Teams, which will result
in a poorer perceived quality of the video. We see in Table 3
that Zoom has a considerably higher PSNR for all tests, which
suggests that the video stream of Zoom contains minimal noise
compared to Microsoft Teams and Google Meet. The video quality
of Google Meet was reduced by a noticeable amount when the
camera was switched ON; this is indicated by a dip in the SSIM
value. A low SSIM value when the camera is switched ON
suggests that Google Meet compresses the screen-sharing video
to a greater extent to compensate for the added payload when
the camera is switched ON. The PSNR values of Microsoft Teams
and Zoom are higher than those of Google Meet in all the
measurements. On further inspecting the Y, U, and V components
of SSIM in Table 4, we observe that the 'Y' value for Google
Meet is significantly lower than that of Microsoft Teams and
Zoom when both the microphone and camera are switched ON, but
the 'U' and 'V' values are