On Explainability in AI-Solutions:
A Cross-Domain Survey*
Simon D Duque Anton1[0000-0003-4005-9165], Daniel
Schneider2[0000-0002-1736-2417], and Hans D Schotten2[0000-0001-5005-3635]
1comlet Verteilte Systeme GmbH, 66482 Zweibruecken, Germany
simon.duque-anton@comlet.de
2DFKI, 67663 Kaiserslautern, Germany
{Daniel, Hans Dieter}.{Schneider, Schotten}@dfki.de

* This is a pre-print of an invited paper published in the Computer Safety, Reliability, and Security. SAFECOMP 2022 Workshops. Please cite as: S. D. Duque Anton, D. Schneider, H. D. Schotten. On Explainability in AI-Solutions: A Cross-Domain Survey. In: Computer Safety, Reliability, and Security. SAFECOMP 2022 Workshops, SAFECOMP 2022, vol. 13415. Springer, September 2022, DOI: 10.1007/978-3-031-14862-0_17
Abstract. Artificial Intelligence (AI) increasingly shows its potential
to outperform predicate logic algorithms and human control alike. In
automatically deriving a system model, AI algorithms learn relations in
data that are not detectable by humans. This great strength, however,
also renders the use of AI methods questionable. The more complex a model,
the more difficult it is for a human to understand the reasoning behind
its decisions. Since fully automated AI algorithms are currently sparse,
every algorithm has to provide reasoning to human operators. For
data engineers, metrics such as accuracy and sensitivity are sufficient.
However, if models are interacting with non-experts, explanations have
to be understandable.
This work provides an extensive survey of literature on this topic, which,
to a large part, consists of other surveys. The findings are mapped to ways
of explaining decisions and reasons for explaining decisions. It shows that
the heterogeneity of reasons for and methods of explainability leads
to individual explanatory frameworks.
Keywords: Artificial Intelligence · Explainability · Survey · Cross-Domain.
1 Introduction
Industrial revolutions provided humanity with novel technologies that fundamen-
tally changed fields of work, mostly in manufacturing and processing industries.
Currently, the fourth industrial revolution is said to introduce flexibility and
ad-hoc connectivity to once inflexible industrial Operational Technology (OT)
networks. This newly integrated means of connectivity in industrial networks,
across physical factory boundaries, allows for new use cases and improved ef-
ficiency. Apart from the connectivity aspect, the introduction of Artificial In-
telligence (AI) methods presents new paradigms. In industrial environments,
AI methods are applied to production and resource planning [27], detection of
anomalies in production processes [24,25,23], and improvement of processes [61].
Apart from industrial applications, AI methods lend themselves readily to other
domains, such as finance and banking [14], medicine [32] and elderly care [51],
autonomous driving [57,56], network management [58,39,40,22], and control of
unmanned vehicles [55,42], just to name a few. In all of these fields, automation
and AI are intended to perform tedious and repetitive tasks to relieve work-
ers. However, this autonomous performance of tasks requires trustworthy and
understandable algorithms as an enabler. If a task is to be performed by an
algorithm, the outcome must not deviate from expectations, and jitter in the input
data must not change the outcome in an undesirable fashion. Especially regarding
AI algorithms, understanding the reasoning behind a decision is complex and
often hardly possible for human operators. This is an issue, especially with regard to
regulatory standards and user acceptance. Consequently, AI algorithms
need to be understandable and provide predictable outcomes in a reliable fash-
ion to further their application. This need has given rise to the term Explainable
Artificial Intelligence (XAI), which encompasses the requirement for AI methods to not
only provide sound results, but also provide the reasoning behind them in a useful and un-
derstandable manner.
This work aims at providing an overview of requirements as well as solutions
for the explainability of AI algorithms. Works related to explainability in AI
are discussed in Section 2. Methods and techniques for explaining outcomes
of AI algorithms are introduced in Section 3. Common application scenarios
that require explainable AI methods are presented in Section 4. This work is
concluded in Section 5.
2 Related Work
This section gives an overview of related works discussing the explainability of
AI methods in different domains. A comprehensive overview is provided in Ta-
ble 1. This table lists the respective work, the domain discussed, and
the method used to explain the decision or recommendation made by the AI
method. Reddy discusses the requirements formulated by stakeholders for ac-
ceptance of AI decisions in medical treatment and research, while making
the point that some argue in favour of higher-accuracy algorithms instead of
well-explainable ones [52]. Neugebauer et al. present a surrogate AI model that
addresses parameter changes of the base model and consequently highlights
the relevant parameters in the decision, aiding its explainability [48].
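The general idea of such a surrogate explanation can be sketched as follows; this is a generic illustration of the technique and not the specific model of [48], and the data set, models, and parameters are assumptions chosen purely for demonstration.

```python
# Hypothetical sketch of a global surrogate explanation: a shallow,
# interpretable decision tree is trained to mimic an opaque model, and its
# rules expose which input parameters drive the decisions. Data set, models,
# and hyperparameters are placeholder assumptions, not taken from [48].
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# The "black box" whose decisions shall be explained.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is fitted to the black box's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: fraction of samples on which the surrogate reproduces the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")

# Human-readable decision rules highlighting the relevant parameters.
print(export_text(surrogate, feature_names=list(X.columns)))
```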
Vilone and Longo survey scientific research addressing XAI and categorise findings into
human-based explanations that aim at mimicking human reasoning, and objec-
tive metrics such as accuracy [64]. Caro-Martínez et al. introduce a conceptual
model for e-commerce recommender systems that extends existing models with
four ontological elements: user motivation and goals, required knowledge, the
recommendation process itself, and the presentation to the user [15]. Liang et
al. present a novel online Explainable Recommendation System (ERS) that, in
contrast to commonly used offline ERSs, can be updated and instantly pro-
vides explanations with recommendations [44]. Holzinger and Müller discuss an
approach for mapping explainability to causability [38]. That means creating
links between the reasoning an AI algorithm implicitly follows and the intuitive
conclusions humans draw. This is applied to the area of image-based pattern
recognition in medical treatment. Ehsan et al. present a concept for integrating
social transparency into XAI solutions [26]. They derive a framework from
expert interviews. Angelov et al. discuss the relation of AI algorithms with
high accuracy and high explainability factors [6]. A taxonomy is provided; global
vs. local model explanation techniques for different domains and algorithms are
surveyed and set in context with the remaining challenges. Mohseni et al. pro-
vide a survey of existing literature that clusters available methods and research
approaches with respect to their design goals as well as evaluation measures [47].
Their framework is founded on the distinction between the provided categories.
Belle and Papantonis evaluate feasible methods of explainability on the use case
of a data scientist that aims to convince stakeholders [11]. Shin discusses the
relation of causability and explainability in XAI and the influence on trust and
user behaviour [59]. Singh et al. evaluate methods to explain AI conclusions
in the medical domain [60]. Barredo Arrieta et al. present an extensive survey
on literature and solutions in XAI, on which they base requirements and chal-
lenges yet to be overcome [9]. Ultimately, they create the concept of fair AI that
explains and accounts for decisions made. Lundberg et al. introduce a game-theoretic
model for optimal explanations in tree-based algorithms [46]. Local explanations are
combined to obtain a global explanation of the trained tree in a human-understandable format.
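The general pattern of aggregating local, game-theoretic explanations into a global one can be sketched with the open-source shap package, which follows this line of work; the data set, model, and parameters below are assumptions for illustration, not the exact setup of [46].

```python
# Hedged sketch of combining local, game-theoretic (Shapley value) explanations
# into a global one using the open-source `shap` package; data set, model, and
# parameters are illustrative assumptions only.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # Shapley-value explainer for tree ensembles
shap_values = explainer.shap_values(X)  # one local explanation per sample

# Local view: per-feature contribution to a single prediction.
print(dict(zip(X.columns, shap_values[0].round(2))))

# Global view: mean absolute contribution of each feature over the data set.
global_importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, global_importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")
```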
Ploug and Holm present the concept of contestable AI decision making in a clinical
context [49]. Contestability means that the decision algorithm has to provide
information on the data used, any system biases, system performance in terms of
algorithmic metrics, and the decision responsibility carried by humans or algorithms.
If the decision is contested, the algorithm has to provide insight that the alternative
solution was considered as well and has been adequately taken into account. Arya et al. introduce a col-
lection of explainability tools that are combined into a framework to provide
researchers and data scientists with the opportunity to extract explanations
for various algorithms [8]. Linardatos et al. introduce a survey and taxonomy,
distinguishing between different types of interpretability in AI methods before
presenting an exhaustive list of tools and methods [45]. Tjoa and Guan discuss
challenges and risks of explainability in medical AI applications [62]. They survey
existing solutions for different algorithm types while also pointing out the risks and
challenges that come with them. Roscher et al. present an overview of methods
to preserve scientific interpretability and explainability in the natural sciences [53].
Beaudoin et al. introduce a framework for explainability that can be applied in