Considerations for Visualizing Uncertainty in Clinical Machine Learning Models
CAITLIN F. HARRIGAN∗, Department of Computer Science, University of Toronto, Canada and Vector Institute, Canada
GABRIELA MORGENSHTERN∗, Department of Computer Science, University of Toronto, Canada, Genetics and Genome Biology, The Hospital for Sick Children, Canada, and Vector Institute, Canada
ANNA GOLDENBERG, Department of Computer Science, University of Toronto, Canada, Genetics and Genome Biology, The Hospital for Sick Children, Canada, and Vector Institute, Canada
FANNY CHEVALIER, Department of Computer Science, University of Toronto, Canada
[Figure 1: three sketch panels, (A) errorCloud, (B) errorBlob, (C) errorFade, each annotated from low to high uncertainty.]
Fig. 1. Sketch renderings of three ways of displaying uncertainty, used to probe clinicians in our study. A: errorCloud acts as a 'baseline'. B: errorBlob highlights areas of low uncertainty through variation on the spatial channel [9]. C: errorFade highlights areas of high uncertainty through variation on colour [9]. Axis scales are intentionally unmarked; the relevant scale is probed in interviews.
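To make the three encodings concrete, the following is a minimal matplotlib sketch (our illustration, not the authors' study stimuli): it renders rough analogues of the three panels, assuming a one-dimensional prediction trace with a pointwise standard deviation. The exact visual mappings (band shading, line width, alpha fade) are our approximations of the designs named in the caption.

```python
# Minimal sketch (illustrative, not the study's actual stimuli): rough
# analogues of the three uncertainty displays in Fig. 1, assuming a 1-D
# prediction trace `mu` with pointwise standard deviation `sigma`.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

t = np.linspace(0, 10, 200)                 # time axis (unmarked in Fig. 1)
mu = 0.5 + 0.3 * np.sin(t)                  # predicted risk trace
sigma = 0.05 + 0.1 * (t / t.max())          # uncertainty grows over time

fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)

# (A) errorCloud: a conventional shaded +/- 2 sigma band (the 'baseline').
axes[0].plot(t, mu, color="k")
axes[0].fill_between(t, mu - 2 * sigma, mu + 2 * sigma, alpha=0.3)
axes[0].set_title("errorCloud")

# Shared line segments for the per-segment encodings below.
points = np.column_stack([t, mu]).reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
norm = (sigma[:-1] - sigma.min()) / (sigma.max() - sigma.min())

# (B) errorBlob: encode certainty on a spatial channel -- here the line
# grows thicker where uncertainty is low, drawing the eye to reliable spans.
widths = 1 + 8 * (1 - norm)
axes[1].add_collection(LineCollection(segments, linewidths=widths, color="k"))
axes[1].set_xlim(t.min(), t.max())
axes[1].set_title("errorBlob")

# (C) errorFade: encode uncertainty on colour -- the trace fades
# (lower alpha) where uncertainty is high.
colors = [(0, 0, 0, 1 - 0.9 * a) for a in norm]
axes[2].add_collection(LineCollection(segments, colors=colors, linewidth=2))
axes[2].set_xlim(t.min(), t.max())
axes[2].set_title("errorFade")

for ax in axes:
    ax.set_xticks([]); ax.set_yticks([])   # axis scales intentionally unmarked
plt.tight_layout()
plt.show()
```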
Clinician-facing predictive models are increasingly present in the healthcare setting. Regardless of their success with respect to performance metrics, all models have uncertainty. We investigate how to visually communicate uncertainty in this setting in an actionable, trustworthy way. To this end, we conduct a qualitative study with cardiac critical care clinicians. Our results reveal that clinician trust may be impacted most not by the degree of uncertainty, but rather by how transparently the visualization communicates the sources of that uncertainty. Our results also show a clear connection between feature interpretability and clinical actionability.
CCS Concepts: • Human-centered computing → Empirical studies in visualization.
∗Both authors contributed equally to this research.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2021 Association for Computing Machinery.
Manuscript submitted to ACM
ACM Reference Format:
Caitlin F. Harrigan, Gabriela Morgenshtern, Anna Goldenberg, and Fanny Chevalier. 2021. Considerations for Visualizing Uncertainty
in Clinical Machine Learning Models. In Proceedings of CHI ’21 Workshop: Realizing AI in Healthcare: Challenges Appearing in the Wild
(CHI ’21). ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/nnnnnnn
1 INTRODUCTION
Supporting clinical care by integrating predictive machine learning (ML) into clinician workflows has the potential to improve the standard of care for patients. A predictive model is one that uses statistical approaches to generate predictions of unseen outcomes [11]. Regardless of how robust they are, these models have uncertainty, which hinders adoption due to lack of trust [2]. In this work, we investigate, through a qualitative study, which design considerations are perceived to most impact trust and clinical actionability when communicating predictive uncertainty.
Clinicians in the critical care unit are adept at establishing a holistic picture of patient state by mentally integrating bedside data with information derived from physical exams, patient histories, and lab results. Critical care is a distinctive setting because clinicians consume raw features alongside model output. A model's output is just one more data point whose uncertainty the clinician must account for.
ML models have two main types of uncertainty: noise in the data, and systematic uncertainty in the model. A deployed model must, additionally, deal with missing data, which may be missing at random or, much more likely, missing because of some clinical complication. Accounting for uncertainty in measures and predictions is a key part of clinical reasoning on the part of the healthcare team [8]. While there exists literature on visualizing uncertainty [5], how such approaches, or which characteristics of uncertainty, may affect trust and actionability in clinical practice is poorly understood. This work aims to fill that gap. We conducted interviews with 5 clinicians to understand: 1) how clinicians' perception of uncertainty impacts trust and actionability; 2) what barriers exist in making ML predictions amenable to clinical inference; and 3) how these insights can inform visualization design. We take a model of cardiac arrest as a case study, but aspects of our findings may be generalizable to visualizations in other patient care environments.
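For readers implementing such displays, the data/model split above maps onto a standard ensemble-style estimate. The sketch below is ours, not part of the study, and all names are illustrative: disagreement between bootstrap-trained models approximates the model's systematic uncertainty, while each model's residual variance approximates noise in the data.

```python
# Minimal sketch (ours, not the authors' method): separating the two kinds
# of uncertainty named above with a bootstrap ensemble. Model uncertainty is
# the spread of the ensemble's mean predictions; data noise is the average
# residual variance each member estimates on its own bootstrap sample.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))            # e.g. hours since admission
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 200)    # noisy vital-sign summary

def fit_poly(X, y, deg=5):
    """Least-squares polynomial fit; returns a predict function."""
    coef = np.polyfit(X[:, 0], y, deg)
    return lambda Xq: np.polyval(coef, Xq[:, 0])

means, noise_vars = [], []
for _ in range(50):                              # bootstrap ensemble
    idx = rng.integers(0, len(X), len(X))
    predict = fit_poly(X[idx], y[idx])
    means.append(predict(X))
    noise_vars.append(np.var(y[idx] - predict(X[idx])))  # residual variance

means = np.stack(means)                          # shape (50, 200)
model_var = means.var(axis=0)                    # disagreement between models
data_var = np.full(len(X), np.mean(noise_vars))  # irreducible data noise
total_sd = np.sqrt(model_var + data_var)         # band width for Fig. 1-style plots
```

A display could then shade `total_sd` as an errorCloud-style band, or drive an errorFade-style alpha channel from it.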
We dene a clinically
actionable
visual as one which has the potential to inform clinical decision making. For
example, increasing the frequency of patient bedside or remote monitoring.
Trust
is the level of perceived credibility
attributed to a visualization. In ML literature, the degree of trustworthiness in a model results is strongly related to
its interpretability [
4
]. Our clinician interviews suggest that trust and visualization actionability are most positively
impacted when design prioritizes transparent communication around missing data and the overall prediction trend.
2 BACKGROUND
2.1 Related Work
Our work is similar to that of Jeffery et al. [6], who employ participatory design strategies to explore nurses' preferences for the display of a predictive model of cardiac arrest. Their findings on desired visualization elements closely align with our own, and include a temporal trendline of predicted cardiopulmonary arrest probability with an overlapping view of relevant lab values, vital signs, treatments, interventions, and a patient baseline. However, Jeffery et al. do not investigate the implications of displaying uncertainty alongside predicted values.
Hullman's [5] review of uncertainty visualization user studies reveals that in most evaluations concerning interpretation of uncertainty, the instrumentation is biased towards evaluating accuracy rather than decision quality. Thus, we follow the recommendation that evaluators focus on collecting participant feedback on how a judgement is made, and on what information participants found helpful in making it [5].