CALIBRATING AI MODELS FOR FEW-SHOT DEMODULATION
VIA CONFORMAL PREDICTION
Kfir M. Cohen1, Sangwoo Park1, Osvaldo Simeone1, Shlomo Shamai (Shitz)2
1KCLIP Lab, Department of Engineering, King's College London, UK.
2Viterbi Faculty of Electrical and Computer Engineering, The Technion, Haifa, Israel.
ABSTRACT
AI tools can be useful to address model deficits in the design of communication systems. However, conventional learning-based AI algorithms yield poorly calibrated decisions, being unable to quantify the uncertainty of their outputs. While Bayesian learning can enhance calibration by capturing the epistemic uncertainty caused by limited data availability, its formal calibration guarantees only hold under strong assumptions about the ground-truth, unknown, data generation mechanism. We propose to leverage the conformal prediction framework to obtain data-driven set predictions whose calibration properties hold irrespective of the data distribution. Specifically, we investigate the design of baseband demodulators in the presence of hard-to-model nonlinearities, such as hardware imperfections, and propose set-based demodulators based on conformal prediction. Numerical results confirm the theoretical validity of the proposed demodulators and bring insights into their efficiency, as measured by the average prediction set size.
Index Terms: Calibration, Conformal Prediction, Demodulation
1. INTRODUCTION
Artificial intelligence (AI) models typically report a confidence measure associated with each prediction, which reflects the model's self-evaluation of the accuracy of a decision. Notably, neural networks implement probabilistic predictors that produce a probability distribution across all possible values of the output variable. As an example, Fig. 1 illustrates the operation of a neural network-based demodulator [1, 2, 3], which outputs a probability distribution on the constellation points given the corresponding received baseband sample.
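As a concrete illustration of such a probabilistic predictor, the following is a minimal sketch, in Python/PyTorch, of a neural demodulator that maps a received baseband sample to a softmax distribution over the four QPSK constellation points. The NeuralDemodulator class, its layer sizes, and the example input are illustrative assumptions, not the architecture used in this paper.

```python
# A minimal sketch (illustrative architecture, not the one used in the
# paper) of a probabilistic neural demodulator: it maps a received
# baseband sample y = (Re, Im) to a softmax distribution over the four
# QPSK constellation points.
import torch
import torch.nn as nn

class NeuralDemodulator(nn.Module):
    def __init__(self, num_symbols: int = 4, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden),        # input: real and imaginary parts
            nn.ReLU(),
            nn.Linear(hidden, num_symbols),
        )

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # Returns p(x | y): one probability per constellation point.
        return torch.softmax(self.net(y), dim=-1)

demod = NeuralDemodulator()
y = torch.tensor([[0.9, -1.1]])       # one received baseband sample
probs = demod(y)                      # shape (1, 4), rows sum to 1
hard = probs.argmax(dim=-1)           # the "thick bar" in Fig. 1(b)
```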
The work of K. M. Cohen, S. Park, and O. Simeone has been supported by the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme, grant agreement No. 725731. The work of O. Simeone has also been supported by an Open Fellowship of the EPSRC. The work of S. Shamai has been supported by the European Union's Horizon 2020 Research and Innovation Programme, grant agreement No. 694630. The authors acknowledge use of the research computing facility at King's College London, Rosalind (https://rosalind.kcl.ac.uk).
Fig. 1. QPSK demodulation with a demodulator trained using a limited number of pilots (gray symbols): (a) constellation symbols (colored markers), optimal hard prediction (dashed lines), and model trained using the few pilots (solid lines); accuracy and calibration of the trained predictor depend on the test input (gray square). (b) Probabilistic predictors obtained from the trained model (solid bars) and optimal predictive probabilities (dashed bars) for four test inputs x1, ..., x4, illustrating the four combinations of accurate/inaccurate and well-/poorly-calibrated predictions, with thick lines indicating the hard prediction. (c) Set predictors output a subset of the constellation symbols for each input.
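To make the setting of Fig. 1 concrete, the following is a minimal sketch of how such few-pilot data could be generated: QPSK pilots passed through a standard receive-side I/Q imbalance model, which rotates and distorts the constellation, plus additive complex Gaussian noise. The imbalance parameters g and phi and the noise level are illustrative assumptions, not the values used in the paper's experiments.

```python
# A minimal sketch of the kind of few-pilot data shown in Fig. 1: QPSK
# pilots passed through a standard receive-side I/Q imbalance model,
# y = alpha * x + beta * conj(x), plus complex Gaussian noise. The
# imbalance parameters (g, phi) and the noise level are illustrative.
import numpy as np

rng = np.random.default_rng(0)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # 4 symbols

def iq_imbalance(x: np.ndarray, g: float = 1.15, phi: float = 0.3) -> np.ndarray:
    alpha = (1 + g * np.exp(-1j * phi)) / 2  # g, phi: amplitude/phase imbalance
    beta = (1 - g * np.exp(1j * phi)) / 2    # g = 1, phi = 0 gives no distortion
    return alpha * x + beta * np.conj(x)

num_pilots = 16                                # few-shot regime
labels = rng.integers(0, 4, size=num_pilots)   # transmitted symbol indices
noise = (rng.standard_normal(num_pilots)
         + 1j * rng.standard_normal(num_pilots)) / np.sqrt(2)
rx = iq_imbalance(qpsk[labels]) + 0.1 * noise  # received baseband samples
```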
The self-reported model confidence, however, may not be a reliable measure of the true, unknown, accuracy of the prediction, in which case we say that the AI model is poorly calibrated. Poor calibration may be a substantial problem when AI-based decisions are processed within a larger system, such as a communication network.
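To make the notion of calibration concrete, the following is a minimal sketch of one standard miscalibration metric, the expected calibration error (ECE), which averages the gap between reported confidence and empirical accuracy across confidence bins. The bin count and the toy data are illustrative choices, and this is not necessarily the metric used in the paper.

```python
# A minimal sketch of the expected calibration error (ECE): within each
# confidence bin, compare the average reported confidence with the
# empirical accuracy, and average the gaps weighted by bin occupancy.
import numpy as np

def expected_calibration_error(confidences, correct, num_bins: int = 10) -> float:
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# A predictor that reports 90% confidence but is right only 60% of the
# time is overconfident; the ECE reflects the 0.3 gap.
print(expected_calibration_error([0.9] * 10, [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]))
```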
Deep learning models tend to produce either overconfident decisions, when designed following a frequentist framework [4], or calibration levels that rely on strong assumptions about the ground-truth, unknown, data generation mechanism, when Bayesian learning is applied [5, 6, 7, 8, 9, 10]. This paper investigates the adoption of conformal prediction (CP) [11, 12, 13] as a framework to design provably well-calibrated AI predictors, with distribution-free calibration guarantees that do not require any assumption about the ground-truth data generation mechanism.
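To illustrate how CP yields such guarantees, the following is a minimal sketch of split (validation-based) conformal prediction wrapped around any probabilistic demodulator: calibration pilots yield nonconformity scores, whose finite-sample-corrected empirical quantile thresholds the test-time prediction sets. The function name, the particular nonconformity score, and the miscoverage level alpha are illustrative assumptions; the paper's specific CP design may differ.

```python
# A minimal sketch of split (validation-based) conformal prediction for a
# set-valued demodulator. The nonconformity score (one minus the probability
# of the true symbol) and the miscoverage level alpha are illustrative.
import numpy as np

def conformal_sets(probs_cal, labels_cal, probs_test, alpha: float = 0.1):
    probs_cal = np.asarray(probs_cal)
    labels_cal = np.asarray(labels_cal)
    n = len(labels_cal)
    # Nonconformity score of each calibration pilot.
    scores = 1.0 - probs_cal[np.arange(n), labels_cal]
    # Finite-sample-corrected empirical quantile; with too few pilots the
    # threshold saturates and the set is the whole constellation.
    level = min(np.ceil((n + 1) * (1.0 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    # Keep every symbol whose score does not exceed the threshold.
    return [np.where(1.0 - np.asarray(p) <= q)[0] for p in probs_test]
```

Feeding the softmax outputs of a trained demodulator on held-out pilots as probs_cal produces set predictions like those in Fig. 1(c): the true symbol lies in the set with probability at least 1 - alpha regardless of the data distribution, while the average set size measures efficiency.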
Consider again the example in Fig. 1, which corresponds to the problem of designing a demodulator for a QPSK constellation in the presence of an I/Q imbalance that rotates and distorts the constellation. The hard decision regions of an optimal