A HUMAN-CENTERED EXPLAINABLE AI FRAMEWORK FOR
BURN DEPTH CHARACTERIZATION
A PREPRINT
Maxwell J. Jacobson
Purdue University
jacobs57@purdue.edu
Daniela Chanci Arrubla
Emory University
daniela.chanci.arrubla@emory.edu
Maria Romeo Tricas
Purdue University
mromeotr@purdue.edu
Gayle Gordillo
Indiana University
gmgordil@iu.edu
Yexiang Xue
Purdue University
yexiang@purdue.edu
Chandan Sen
Indiana University
cksen@iu.edu
Juan Wachs
Purdue University
jpwachs@purdue.edu
January 3, 2023
ABSTRACT
Approximately 1.25 million people in the United States are treated each year for burn injuries. Precise
burn injury classification is an important aspect of the medical AI field. In this work, we propose an
explainable human-in-the-loop framework for improving burn ultrasound classification models. Our
framework leverages an explanation system based on the LIME classification explainer to corroborate
and integrate a burn expert’s knowledge — suggesting new features and ensuring the validity of the
model. Using this framework, we discover that B-mode ultrasound classifiers can be enhanced by
supplying textural features. More specifically, we confirm that texture features based on the Gray
Level Co-occurrence Matrix (GLCM) of ultrasound frames can increase the accuracy of transfer-learned
burn depth classifiers. We test our hypothesis on real data from porcine subjects. We show
improvements in the accuracy of burn depth classification — from 88% to 94% — once the model is
modified according to our framework.
Keywords: Burn Analysis · Ultrasound · Deep Learning · Computer Vision · Explainability · Human-in-the-loop
1 Introduction
Our work focuses on the task of burn depth estimation. Performing this task accurately can be critical for the welfare of
burn victims, but it is also a challenge due to the high variability in the visual appearance of burns. Different imaging
modalities have been explored to solve this problem and improve the accuracy of the diagnosis Sen et al. [2016] —
e.g. color photographs, ultrasound, infrared thermography, laser speckle imaging, and laser Doppler imaging Thatcher
et al. [2016]. Recent studies combine imaging modalities with machine learning and deep learning models. Cirillo
et al. [2019] compared the performance of VGG16 Simonyan and Zisserman [2015], GoogleNet Szegedy
et al. [2015], ResNet50 He et al. [2016], and ResNet101, pretrained on ImageNet Deng et al. [2009], to classify their
labeled dataset of RGB burn images. Similarly, Chauhan and Goyal [2020] carried out burn
depth classification based on the specific characteristics of the body region in which the injury was located. However,
RGB images can negatively influence the accuracy of predictions due to lighting conditions, skin color, or general
variability in wound presentation. Besides RGB images, optical coherence tomography Singla et al. [2018], spatial
frequency-domain imaging Rowland et al. [2019], and ultrasound Lee et al. [2020] have been utilized for feature
extraction. Harmonic B-mode ultrasound (HUSD) — a non-invasive sound-based imaging technique — is used in
this work. B-mode ultrasound is based on the transmission of small pulses of ultrasound. Echoes are reflected back to
the transducer from body tissues that have different acoustic impedances; these echoes can be measured to build a 3D map
of tissues. Second harmonic frequency echoes are used in order to reduce the artifacts in the image produced by the
reflection of echoes at different frequencies Narouze [2018].
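To make the echo-to-image mapping concrete, the following minimal sketch shows one common way raw echo (RF) lines can be converted into B-mode pixel intensities, namely envelope detection followed by log compression. The array shapes, dynamic range, and random stand-in data are illustrative assumptions; in this work the scanner hardware performs this processing internally.

```python
import numpy as np
from scipy.signal import hilbert

def rf_to_bmode(rf_lines: np.ndarray, dynamic_range_db: float = 60.0) -> np.ndarray:
    """Convert raw RF echo lines (one row per scan line) into a B-mode image.

    Each line is envelope-detected with the Hilbert transform and then
    log-compressed so that weak echoes from deeper tissue remain visible.
    """
    envelope = np.abs(hilbert(rf_lines, axis=-1))            # echo amplitude per sample
    envelope /= envelope.max() + 1e-12                       # normalize to [0, 1]
    bmode_db = 20.0 * np.log10(envelope + 1e-12)             # log compression (dB scale)
    bmode_db = np.clip(bmode_db, -dynamic_range_db, 0.0)     # limit displayed dynamic range
    return (bmode_db + dynamic_range_db) / dynamic_range_db  # rescale to [0, 1] for display

# Hypothetical usage: 128 scan lines with 2048 samples each (random stand-in data).
rf_lines = np.random.randn(128, 2048)
bmode_image = rf_to_bmode(rf_lines)
print(bmode_image.shape, float(bmode_image.min()), float(bmode_image.max()))
```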
Figure 1: Our human-in-the-loop through explainability framework. A human possessing expert knowledge trains a
classifier. Our LIME-based explainer provides insights to the human based on the current model. The human applies
expert knowledge to interpret the explanation. The human expert modifies the model appropriately, and the process can
be repeated. In our case, we modify the network by adding new features based on expert medical knowledge regarding
burns.
Current computer vision based medical ML models lack human expertise. Therefore, a human-in-the-
loop system built on Explainable Artificial Intelligence is proposed here. Human-in-the-loop (HITL) is an Artificial
Intelligence (AI) paradigm that assumes the presence of human experts that can guide the learning or operation of
the otherwise-autonomous system. Lundberg et al. [2018] developed and tested a system to prevent
hypoxaemia during surgery by providing anaesthesiologists with interpretable hypoxaemia risks and contributing
factors. Later, Sayres et al. [2019] proposed and evaluated a system to assist diabetic retinopathy grading
by ophthalmologists using a deep learning model and integrated gradients explanations.
An Explainable Artificial Intelligence (XAI) is an intelligent system which can be explained and understood by a human
Gohel et al. [2021]. For this, we utilize LIME (Local Interpretable Model-agnostic Explanations) Ribeiro et al. [2016],
a recent method that is able to explain the predictions of any classifier model in an interpretable manner. LIME operates
by roughly segmenting an image into feature regions, then assigning saliency scores to each region. Higher scoring
zones are more important in arriving at the classification result of the studied model. The algorithm first creates random
perturbations of the image to be explained. Then, the classification model is run on those samples. Distances between
those samples and the original image are calculated and converted to weights by being mapped between
zero and one using a kernel function. Finally, a simple linear model is fitted around the initial prediction to generate
explanations. This explanation provided by LIME is the result of the following minimization:
\xi(x) = \underset{g \in G}{\arg\min}\; \mathcal{L}(f, g, \pi_x) + \Omega(g) \qquad (1)
Let x and f be the image and classifier to be examined, and let G be the class of interpretable models such as decision trees and linear models. The complexity of the model (e.g. the depth of a decision tree, or the number of non-zero weights in a linear model) should be as small as possible to maintain explainability. This complexity is denoted Ω(g). Explanations by LIME are found by fitting interpretable models, minimizing the sum of the local faithfulness loss L(f, g, πx) and the complexity score. Perturbation-based sampling is used to approximate this local faithfulness loss; for this purpose, a proximity measure πx(z) calculates the distance between x and a perturbed image z. The objective is to minimize the fidelity loss while keeping the complexity measure low enough to be interpretable. This minimization makes no assumptions about f, so the resulting explanation is model-agnostic.
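To make this procedure concrete, the sketch below obtains a LIME explanation for an image classifier with the publicly available lime package. The two-class dummy classifier and random placeholder frame stand in for the actual burn depth model and ultrasound data used in this work; the sampling and feature counts are illustrative choices.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Hypothetical stand-in for the trained burn classifier f: it must map a batch of
# images of shape (N, H, W, 3) to per-class probabilities of shape (N, num_classes).
def classifier_fn(images: np.ndarray) -> np.ndarray:
    scores = images.mean(axis=(1, 2, 3))                 # dummy score per image
    probs = np.stack([scores, 1.0 - scores], axis=1)     # two placeholder burn-depth classes
    return probs / probs.sum(axis=1, keepdims=True)

ultrasound_frame = np.random.rand(224, 224, 3)           # placeholder for the frame x to explain

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    ultrasound_frame, classifier_fn,
    top_labels=1,       # explain the top predicted class
    num_samples=1000,   # number of perturbed samples z
)

# Keep the five regions whose weights in the fitted linear model g are most positive.
label = explanation.top_labels[0]
temp, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(temp, mask)   # salient regions outlined on the frame
```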
Explainability is useful when combined with HITL systems because explanations provide understandable,
qualitative information about the relationship between an instance's components and the model's prediction. Therefore,
an expert can make an informed decision about whether the model is reliable, and can make the necessary changes if it
is not, eventually reaching a confident result. This is extremely important, above all in the medical field, because of
the severe ethical implications of a wrong medical diagnosis.
By deploying our explainable human-in-the-loop method, we were able to confirm the importance of one family of
features which can enhance a convolutional burn prediction classifier — statistical texture. From the Gray Level
Co-occurrence Matrix (GLCM) — a method that represents the second-order statistical information of gray levels
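As a concrete illustration of such second-order texture features, the following sketch computes a small GLCM descriptor for a grayscale ultrasound frame using scikit-image. The quantization level, pixel offsets, angles, and properties chosen here are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(frame: np.ndarray, levels: int = 32) -> np.ndarray:
    """Compute second-order (GLCM) texture statistics for one grayscale frame.

    The frame is quantized to `levels` gray levels, co-occurrence matrices are
    built for several pixel offsets and angles, and scalar texture properties
    are averaged over offsets to form a compact feature vector.
    """
    edges = np.linspace(frame.min(), frame.max(), levels)
    quantized = (np.digitize(frame, edges) - 1).astype(np.uint8)   # values in [0, levels - 1]
    glcm = graycomatrix(
        quantized,
        distances=[1, 2, 4],                                       # pixel pair offsets
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=levels,
        symmetric=True,
        normed=True,
    )
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Hypothetical usage on a placeholder 8-bit B-mode frame.
frame = (np.random.rand(256, 256) * 255).astype(np.uint8)
print(glcm_features(frame))
```

One natural way to supply a vector of this kind to a convolutional classifier is to concatenate it with the network's pooled features before the final classification layer.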