The Calibration Generalization Gap
A. Michael Carrell
University of Cambridge
ac2411@cam.ac.uk
Neil Mallinar
UC San Diego
nmallina@ucsd.edu
James Lucas
NVIDIA
jlucas@cs.toronto.edu
Preetum Nakkiran
Apple
preetum@apple.com
Abstract
Calibration is a fundamental property of a good predictive model: it requires that the model predicts correctly in
proportion to its confidence. Modern neural networks, however, provide no strong guarantees on their calibration,
and can be either poorly calibrated or well-calibrated depending on the setting. It is currently unclear which factors
contribute to good calibration (architecture, data augmentation, overparameterization, etc.), though various claims
exist in the literature.
We propose a systematic way to study the calibration error: by decomposing it into (1) calibration error on the
train set, and (2) the calibration generalization gap. This mirrors the fundamental decomposition of generalization.
We then investigate each of these terms, and give empirical evidence that (1) DNNs are typically calibrated on
their train set, and (2) the calibration generalization gap is upper-bounded by the standard generalization gap. Taken
together, this implies that models with a small generalization gap (|Test Error - Train Error|) are well-calibrated. This
perspective unifies many results in the literature, and suggests that interventions which reduce the generalization
gap (such as adding data, using heavy augmentation, or reducing model size) also improve calibration. We thus
hope our initial study lays the groundwork for a more systematic and comprehensive understanding of the relation
between calibration, generalization, and optimization.
1 Introduction
When machine learning models are deployed in the real world, as components of larger systems, we often want to
know more about their behavior than just overall loss or accuracy. For classification models, for example, it is helpful
to know not only their test error, but also estimates of predictive uncertainty: how confident the model is on various
inputs. Calibrated models can be more useful than uncalibrated ones, because their confidences are operationally
meaningful: conditioning on high-confidence predictions is "almost as good" as conditioning on high-accuracy
predictions1, but can be done without knowing the ground-truth.
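This "conditioning" property can be illustrated with a small simulation (our own sketch, not code from the paper): for a perfectly calibrated predictor, each prediction is correct with probability equal to its reported confidence, so the empirical accuracy among high-confidence predictions tracks the mean confidence in that group. The threshold 0.9 and sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# A perfectly calibrated predictor: each prediction is correct
# with probability exactly equal to its reported confidence.
confidence = rng.uniform(0.5, 1.0, size=n)
correct = rng.random(n) < confidence

# Condition on high confidence: among predictions with
# confidence >= 0.9, accuracy matches mean confidence (~0.95),
# even though we never looked at the ground-truth to select them.
mask = confidence >= 0.9
high_conf_accuracy = correct[mask].mean()
high_conf_confidence = confidence[mask].mean()
```

No labels were needed to form the high-confidence group, which is exactly why calibrated confidences are operationally useful.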
Many machine learning models produce outputs in a probability simplex, and it is natural to ask whether these
outputs are intrinsically calibrated (even without post-hoc modifications). For deep neural networks (DNNs), we can
ask, informally:
Are DNNs typically well-calibrated, on standard tasks?
This question has received much attention over the last several decades, but the literature remains muddled: early
works found that networks were reasonably calibrated [Niculescu-Mizil and Caruana, 2005], then Guo et al. [2017a]
found that “modern” networks (in 2017) were poorly calibrated, and most recently Minderer et al. [2021] argued that
current networks (in 2021) are in fact calibrated. The issue is complicated because the notion of “deep neural network”
has itself evolved over time: with different architectures, optimizers, and even different benchmark datasets on which
calibration is measured.
A version of this work appeared at the ICML 2022 Workshop on Distribution-Free Uncertainty Quantication.
1Formally, this is true in expectation, for first-order moments.
arXiv:2210.01964v2 [cs.LG] 6 Oct 2022
Figure 1: ECE and error of an overparameterized model throughout training. We show reliability diagrams at various
points during training. In the early stage of training (before batch 1000), both test and train ECEs are small (<0.05),
even though the test error itself is high (>20%). After batch 1000, the generalization gap grows, and the test and
train ECEs also diverge. In the late stage of training, as the model becomes overconfident, the test ECE approaches
the test error, while the train ECE goes to zero. Throughout, the difference in test and train ECE is upper-bounded
by the difference in test and train error.
A more rened question is thus to ask: when are DNNs calibrated? That is, for what settings of architecture,
optimizer, data distributions, etc. Various prior works have argued that certain individual factors have signicant
impact of calibration— for example, Minderer et al. [2021] claims non-convolutional models tend to be better calibrated,
and Wen et al. [2021] highlights the dierence in calibration when using data augmentations. A priori, a complete
understanding of this question may require jointly understanding all of these factors— but we can hope for better
abstractions which let us study only those aspects of our design choices which aect calibration.
In this work, we take steps to clarify the landscape of calibration in deep neural networks. First, we propose a
simple framework for reasoning about calibration: Just as we can classically decompose the Test Error into the Train
Error and the Generalization Gap, we can similarly decompose the Test Calibration Error (Test ECE) as:
\[
\underbrace{\text{TestECE}}_{\text{Calibration on Test Set}}
\;\le\;
\underbrace{\text{TrainECE}}_{\text{Calibration on Train Set}}
\;+\;
\underbrace{\left|\text{TestECE} - \text{TrainECE}\right|}_{\text{Calibration Generalization Gap}}
\tag{1}
\]
This bounds the test calibration error in terms of an optimization quantity (calibration on the train set) and a general-
ization quantity. We can then study these two terms individually, much like the classical approach of generalization
theory.
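As a concrete illustration (a minimal sketch of our own, not the paper's code), both sides of Eq. (1) can be estimated from a model's confidences using a standard equal-width binned ECE estimator; the choice of `n_bins=10` is an arbitrary convention:

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Binned expected calibration error: the weighted average of
    |accuracy - confidence| over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()    # accuracy within the bin
            conf = confidences[mask].mean()  # mean confidence within the bin
            total += mask.mean() * abs(acc - conf)
    return total

def calibration_decomposition(train_conf, train_correct, test_conf, test_correct):
    """The decomposition of Eq. (1):
    TestECE <= TrainECE + |TestECE - TrainECE|."""
    train_ece = ece(train_conf, train_correct)
    test_ece = ece(test_conf, test_correct)
    gap = abs(test_ece - train_ece)
    # The bound holds by construction (triangle inequality).
    assert test_ece <= train_ece + gap + 1e-12
    return train_ece, test_ece, gap
```

Studying `train_ece` (an optimization quantity) and `gap` (a generalization quantity) separately is exactly the program the framework suggests.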
With this framework in hand, we then apply it to study the calibration of DNNs. We first observe that the Train