Quantifying Uncertainty with Probabilistic Machine
Learning Modeling in Wireless Sensing
Amit Kachroo
Amazon Lab126
Sunnyvale, California, USA
amkachro@amazon.com
Sai Prashanth Chinnapalli
Amazon Lab126
Sunnyvale, California, USA
saic@amazon.com
Abstract—The application of machine learning (ML) techniques in the wireless communication domain has seen tremendous growth over the years, especially in wireless sensing. However, questions surrounding an ML model's inference reliability and the uncertainty associated with its predictions are rarely answered or communicated properly. This raises concerns about the transparency of these ML systems. Developing ML systems with probabilistic modeling can address this problem, since it lets one quantify uncertainty whether it arises from the data (irreducible error, or aleatoric uncertainty) or from the model itself (reducible, or epistemic uncertainty). This paper describes the idea behind these types of uncertainty quantification in detail and uses a real example of WiFi channel state information (CSI) based sensing for motion/no-motion detection to demonstrate the uncertainty modeling. This work can serve as a template for modeling uncertainty in predictions not only for WiFi sensing but for most wireless sensing applications that utilize AI/ML models, ranging from WiFi to millimeter-wave radar based sensing.
Index Terms—probabilistic modeling, Bayesian networks, wire-
less sensing, WiFi, uncertainty quantification
I. INTRODUCTION
The application of artificial intelligence (AI) / machine learning (ML) algorithms in wireless sensing has seen immense growth over the years, and in fact these models are now embedded into real-world products and features. The main advantages of AI/ML techniques over conventional first-principles techniques are reduced computational complexity, increased energy efficiency, and better solutions [1]. However, one of the biggest challenges associated with these AI/ML models is their inference reliability or, put simply, how confident the model is in its predictions. This has unfortunately not been clearly understood or quantified in the AI/ML for wireless sensing area. Recently there has been a lot of research on this topic [2]–[4], but it is mostly limited to the medical or computer vision fields. This paper therefore presents the science behind uncertainty modeling, which helps answer the model reliability question. We will delve into the details of this method with a real example of WiFi channel state information (CSI) based sensing for a motion/no-motion detection application.
Generally, uncertainty is classified into two broad categories, aleatoric and epistemic [5]–[7]. Aleatoric derives its name from the Latin word "alea", meaning the roll of a die, and epistemic derives its name from the Greek word "episteme", which roughly translates as knowledge. Aleatoric uncertainty is therefore the internal randomness of a phenomenon, while epistemic uncertainty is presumed to derive from a lack of knowledge about the phenomenon. This is very common in wireless sensing applications, where an AI/ML model trained on a certain set of home environments performs in a very uncertain way when tested on a different home environment. The main reason these ML models fail is that the model learns not only the features extracted from the radio frequency (RF) signals but also other information from the environment, which is not desired in RF sensing. This extra information arises because RF signals are highly dependent on the scattering conditions, such as reflections and diffractions from objects in the environment (walls, furniture, etc.), and also on the position and distance of objects, humans, or pets relative to the radio device. It therefore becomes a necessity for an AI/ML model developed for wireless sensing to communicate the reliability or uncertainty associated with its predictions.
In this work, we discuss a method to include uncertainty (aleatoric and epistemic) in the ML model, and discuss the pros and cons of this approach in detail. In summary, the main contributions of the paper are:
• Understanding the different types of uncertainty associated with an AI/ML model.
• Modeling uncertainty in an AI/ML model with a real-life example of WiFi sensing.
• Discussing results that highlight the need to incorporate uncertainty in wireless sensing applications.
This paper is organized as follows. Section II discusses the different types of uncertainty in AI/ML models in detail, and Section III presents the details of the WiFi CSI based motion/no-motion detection application. In Section IV, we discuss the model and the results of uncertainty quantification for our example in detail, and finally, conclusions and future work are drawn in Section V.
II. UNCERTAINTY IN AI/ML MODELS
To start with, the term uncertainty in itself means a lack of knowledge about a particular outcome. In the AI/ML domain, this can be attributed either to the data itself (measurement noise or wrong labeling) or to the model (model parameters, or a lack of training data). This is broadly classified as aleatoric and epistemic uncertainty.

arXiv:2210.06416v1 [eess.SP] 12 Oct 2022
• Aleatoric or indirect uncertainty: This type of uncertainty arises from unaccounted-for factors, such as environment settings, noise in the input data, or bad input feature selection. It is also known as irreducible error and cannot be remediated with more data. One way to limit such uncertainty is to make sure the data collection strategy is carefully designed and the measurement environment is constrained, so that the effect of the environment or any external factors is minimized. Careful selection of features that represent the phenomenon or application is also of utmost importance.
• Epistemic or direct uncertainty: This type of uncertainty usually arises from a lack of knowledge about the model or data. One example is poor generalization, where the ML model is very complex compared to the amount of data it is trained on and therefore generalizes poorly to a test dataset. This type of uncertainty can be overcome by collecting more data, experimenting with different ML model architectures, or changing/tweaking model parameters. Since this uncertainty is caused by the model and the finite data it sees, it can be reduced with more data or a different model architecture.
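As a toy illustration of why aleatoric uncertainty is irreducible, the NumPy sketch below (with made-up numbers: a linear process with fixed noise variance, not data from this paper) fits the same simple model on a small and a very large training set. The test MSE in both cases stays near the noise floor σ² = 0.25 rather than going to zero:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_and_test_mse(n_train):
    # Hypothetical true process: y = 2x + noise with a fixed
    # noise variance (the aleatoric part).
    sigma_noise = 0.5
    x = rng.uniform(-1, 1, n_train)
    y = 2.0 * x + rng.normal(0, sigma_noise, n_train)
    # Fit a least-squares slope through the origin.
    slope = np.dot(x, y) / np.dot(x, x)
    # Evaluate on a large, freshly sampled test set.
    x_t = rng.uniform(-1, 1, 100_000)
    y_t = 2.0 * x_t + rng.normal(0, sigma_noise, 100_000)
    return np.mean((y_t - slope * x_t) ** 2)

mse_small = fit_and_test_mse(50)
mse_large = fit_and_test_mse(50_000)
# Both MSEs hover around sigma_noise**2 = 0.25: more data shrinks the
# epistemic part but cannot push the error below the aleatoric floor.
print(mse_small, mse_large)
```

More training data only tightens the slope estimate; the residual error is set by the noise in the data itself.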
Epistemic uncertainty is also used to detect dataset shift (the test data having a different distribution than the training data) or adversarial inputs. Modeling epistemic uncertainty is more challenging than modeling the aleatoric one: the latter is incorporated in the model loss function, while the epistemic part is highly dependent on the model itself and may vary from one model architecture to another.
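One common, model-agnostic way to expose epistemic uncertainty, shown here purely as an illustration and not as this paper's method, is an ensemble: train several copies of the model on resampled data and use the spread of their predictions. The minimal NumPy sketch below (hypothetical linear data) shows that this spread grows for inputs far from the training range, which is how dataset shift can be flagged:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data covering only x in [0, 1]; y = 3x + noise.
x = rng.uniform(0, 1, 40)
y = 3.0 * x + rng.normal(0, 0.3, 40)

# A tiny "ensemble": fit the same model on bootstrap resamples.
slopes = []
for _ in range(20):
    idx = rng.integers(0, len(x), len(x))
    xb, yb = x[idx], y[idx]
    slopes.append(np.dot(xb, yb) / np.dot(xb, xb))
slopes = np.array(slopes)

def epistemic_std(x_query):
    # Spread of the ensemble's predictions, used here as a
    # proxy for epistemic uncertainty at x_query.
    preds = slopes * x_query
    return preds.std()

in_dist = epistemic_std(0.5)   # inside the training range
ood = epistemic_std(5.0)       # far outside the training range
# Ensemble disagreement grows away from the training data.
print(in_dist, ood)
```

For this linear model the disagreement scales with |x|, so the out-of-distribution query shows ten times the spread of the in-distribution one.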
A. Modeling Aleotoric and Epistemic Uncertainty
Given a dataset D = {(X_i, y_i)}, i ∈ {1, . . . , n}, where X_i is the i-th input, y_i is the i-th output, and n is the total number of examples in the dataset, the ML model can be described as a function \hat{f} : X_i ↦ \hat{y}_i, or

    \hat{y}_i = \hat{f}(X_i),    (1)

and let us assume the original data-generating process is given by a function f : X_i ↦ y_i, such that y_i = f(X_i) + ε_i, where ε_i represents the irreducible error caused by measurement errors during data collection, wrong labels in the training data, or bad input feature selection. The mean squared error (MSE) between the actual labels and the labels predicted by the model is then

    E[(y_i − \hat{y}_i)^2] = E[(f(X_i) + ε_i − \hat{f}(X_i))^2]
                           = [f(X_i) − \hat{f}(X_i)]^2 + Var(ε_i),    (2)

where the first term is the reducible error and the second term is the irreducible error.
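The decomposition in (2) can be checked numerically. In this hypothetical NumPy sketch (arbitrary f, model, and noise level, chosen only for illustration), a fixed imperfect model's test MSE splits into a squared model-error term plus the noise variance:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
sigma = 0.4                      # std of the irreducible noise eps_i

x = rng.uniform(-1, 1, n)
f_true = np.sin(np.pi * x)       # the unknown data-generating f
eps = rng.normal(0, sigma, n)
y = f_true + eps

f_hat = x                        # some fixed, imperfect model f_hat
mse = np.mean((y - f_hat) ** 2)

reducible = np.mean((f_true - f_hat) ** 2)   # epistemic part of (2)
irreducible = sigma ** 2                     # Var(eps): aleatoric part
# mse is approximately reducible + irreducible, matching (2)
print(mse, reducible + irreducible)
```

The cross term between the model error and the noise averages to zero, which is why the two parts add up cleanly in expectation.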
The first term in (2) is model dependent and therefore represents epistemic uncertainty, while the second term (the variance of ε_i) is the irreducible or aleatoric uncertainty. This variance of ε_i is also known as the Bayes error, which is the lowest possible prediction error that can be achieved with any model. In the literature, ε is generally modeled as an independent and identically distributed (i.i.d.) Normal distribution, ε_i ∼ N(µ_i, σ_i). To incorporate aleatoric uncertainty in an AI/ML model, the final layer can therefore be replaced with a probabilistic layer, usually a normally distributed one with mean µ and standard deviation σ. During the training/testing phase, samples are drawn from this layer for prediction and also for aleatoric uncertainty quantification¹.
The question with this approach is how to learn the parameters of this Normal distribution. This is solved by defining a new cost function, the negative log-likelihood (NLL), which represents the loss between a distribution and the true output label.

Minimizing the NLL is equivalent to maximizing the likelihood of observing the data given a distribution and its parameters. In NLL, the logarithmic probabilities associated with each class are summed over the dataset. This closely resembles the cross-entropy loss function, except that in cross entropy the last classification activation is implicitly applied before taking the logarithmic transformation, while in NLL it is not. The NLL is given as

    NLL = −log P(y_i | X_i; µ, σ).    (3)
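For a regression output with a Normal distribution, the NLL in (3) can be written out explicitly. A minimal NumPy version (with made-up numbers, independent of the paper's model) is:

```python
import numpy as np

def gaussian_nll(y, mu, sigma):
    # Negative log-likelihood of y under N(mu, sigma), summed over
    # samples, mirroring (3): NLL = -sum_i log P(y_i | X_i; mu, sigma).
    return np.sum(0.5 * np.log(2 * np.pi * sigma**2)
                  + (y - mu) ** 2 / (2 * sigma**2))

y = np.array([0.9, 1.1, 1.0])
# The NLL is lower when (mu, sigma) match the data statistics:
good = gaussian_nll(y, mu=1.0, sigma=0.1)
bad = gaussian_nll(y, mu=3.0, sigma=0.1)
print(good, bad)
```

Note that, unlike the MSE, this loss also penalizes a poorly chosen σ, which is what lets the network learn the aleatoric spread rather than just the mean.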
With NLL as the cost function and the last layer as a probabilistic layer, aleatoric uncertainty can thus be modeled as described in Algorithm 1. The independent normal layer can be implemented in any modern ML package; in our case, we used the TensorFlow Probability package [8] to model this layer.
Algorithm 1 Method to measure aleatoric uncertainty
Input: D = (X_i, y_i); replace the output layer with a probabilistic node N(µ, σ); define the optimizer as RMSProp and set its learning rate; set num_epochs for training.
Output: \hat{y}_i
1: for epoch = 1 to num_epochs do
2:   i) Calculate the loss and gradients using NLL (3) and the defined optimizer
3:   ii) Apply gradients and update weights
4:   iii) Monitor loss and accuracy
5: end for
6: Determine the parameters µ and σ from the output layer
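The core of Algorithm 1 can be sketched without any deep learning framework. Below, a hypothetical NumPy version (a bare probabilistic output with no input features, plain gradient descent standing in for RMSProp) learns µ and s = log σ by descending the NLL, recovering the data's mean and spread:

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(2.0, 0.5, 2000)   # toy data with true mu=2.0, sigma=0.5

# Probabilistic "output layer": parameters mu and s = log(sigma),
# the log-parameterization keeping sigma positive.
mu, s = 0.0, 0.0
lr = 0.05                        # learning rate (plain gradient
                                 # descent stands in for RMSProp)
for epoch in range(300):
    sigma2 = np.exp(2 * s)
    # Gradients of the NLL in (3), averaged over the batch:
    # d(NLL)/d(mu) = -(y - mu)/sigma^2
    # d(NLL)/d(s)  = 1 - (y - mu)^2/sigma^2
    grad_mu = np.mean(-(y - mu) / sigma2)
    grad_s = np.mean(1.0 - (y - mu) ** 2 / sigma2)
    mu -= lr * grad_mu
    s -= lr * grad_s

sigma = np.exp(s)
# The learned parameters recover the data's mean and spread; a 95%
# confidence interval is then mu +/- 1.96 * sigma.
print(mu, sigma, mu - 1.96 * sigma, mu + 1.96 * sigma)
```

In the paper's actual setup, µ and σ are produced per-input by the network's probabilistic output layer; this sketch only shows the NLL-driven parameter learning that the training loop performs.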
Once the mean and standard deviation are determined, we can easily compute the 95% confidence interval for the training data or even for the test data. For classification problems, the last layer can be modeled as a categorical distribution, where for each class there is a learned distribution, and based on the learned parameters of these distributions,
¹This assumes the chosen ML architecture is able to give high accuracy before the output layer is replaced.