model for e-commerce recommender systems that extends existing models with
four ontological elements: user motivation and goals, required knowledge, the
recommendation process itself, and the presentation to the user [15]. Liang et
al. present a novel online Explainable Recommendation System (ERS) that, in
contrast to commonly used offline ERSs, can be updated and instantly pro-
vides explanations with recommendations [44]. Holzinger and Müller discuss an approach for mapping explainability to causability [38], that is, creating links between the reasoning an AI algorithm implicitly performs and the intuitive
conclusions humans draw. This is applied to the area of image-based pattern
recognition in medical treatment. Ehsan et al. present a concept for integrating
social transparency into XAI solutions [26]; their framework is based on expert interviews. Angelov et al. discuss the relationship between high accuracy and high explainability in AI algorithms [6]. They provide a taxonomy, survey global and local model explanation techniques for different domains and algorithms, and set them in context with the remaining challenges.
Mohseni et al. provide a survey of existing literature that clusters available methods and research approaches with respect to their design goals and evaluation measures [47]; their framework is built on the distinction between these categories. Belle and Papantonis evaluate feasible explainability methods for the use case of a data scientist who aims to convince stakeholders [11]. Shin discusses the relation between causability and explainability in XAI and their influence on trust and
user behaviour [59]. Singh et al. evaluate methods to explain AI conclusions
in the medical domain [60]. Barredo Arrieta et al. present an extensive survey of XAI literature and solutions, from which they derive requirements and open challenges [9]. Ultimately, they propose the concept of fair AI that
explains and accounts for decisions made. Lundberg et al. introduce a game-
theoretic model for optimal explanations in tree-based algorithms [46]. Local
explanations are combined to obtain a global explanation of the trained tree in
a human-understandable format.
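As an illustration of this local-to-global aggregation, the following sketch applies the open-source shap library (assuming [46] refers to the TreeSHAP method it implements) to a generic scikit-learn tree ensemble; the dataset, model, and mean-absolute-attribution aggregation are illustrative assumptions rather than part of the cited work.

# Minimal sketch of combining local tree explanations into a global one,
# assuming the `shap` package and scikit-learn are installed;
# dataset and model choice are illustrative only.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Local explanations: one additive feature-attribution vector per prediction.
explainer = shap.TreeExplainer(model)
local_attributions = explainer.shap_values(X)   # shape: (n_samples, n_features)

# Global explanation: aggregate the local attributions,
# here as the mean absolute attribution per feature.
global_importance = np.abs(local_attributions).mean(axis=0)
for name, score in sorted(zip(X.columns, global_importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")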
Ploug and Holm present the concept of contestable AI decision making in a clinical context [49]. Contestability means that the decision algorithm has to provide information on the data used, any system biases, the system's performance in terms of algorithmic metrics, and the decision responsibility carried by humans or algorithms. If a decision is contested, the algorithm has to show that the alternative solution was also considered and adequately taken into account. Arya et al. introduce a collection of explainability tools that are combined into a framework to provide researchers and data scientists with the opportunity to extract explanations
for various algorithms [8]. Linardatos et al. introduce a survey and taxonomy,
distinguishing between different types of interpretability in AI methods before
presenting an exhaustive list of tools and methods [45]. Tjoa and Guan discuss
challenges and risks of explainability in medical AI applications [62]. They survey
existing solutions for different algorithm types and point out the risks and challenges that remain. Roscher et al. present an overview of methods for preserving scientific interpretability and explainability in the natural sciences [53].
Beaudoin et al. introduce a framework for explainability that can be applied in