Explanation-by-Example Based on Item
Response Theory
Lucas F. F. Cardoso1,5, José de S. Ribeiro1,2, Vitor C. A. Santos1,4, Raíssa L.
Silva3, Marcelle P. Mota1, Ricardo B. C. Prudêncio4, and Ronnie C. O. Alves5
1 ICEN, Universidade Federal do Pará, Belém, Brazil
lucas.cardoso@icen.ufpa.br, mpmota@ufpa.br
2 IFPA, Instituto Federal do Pará, Belém, Brazil
jose.ribeiro@ifpa.edu.br
3 IRMB, Université Montpellier, France
r.lorenna@gmail.com
4 CIn, Universidade Federal de Pernambuco, Recife, Brazil
rbcp@cin.ufpe.br
5 ITV, Instituto Tecnológico Vale, Belém, Brazil
{vitor.cirilo.santos,ronnie.alves}@itv.org
Abstract. Intelligent systems that use Machine Learning classification
algorithms are increasingly common in everyday society. However, many
systems use black-box models that do not have characteristics that allow
for self-explanation of their predictions. This situation leads researchers
in the field and society to the following question: How can I trust the
prediction of a model I cannot understand? In this sense, XAI emerges
as a field of AI that aims to create techniques capable of explaining the
decisions of the classifier to the end-user. As a result, several techniques have
emerged, such as Explanation-by-Example, which still has few initiatives
consolidated by the community currently working with XAI. This research explores
Item Response Theory (IRT) as a tool for explaining models and for measuring the
reliability of the Explanation-by-Example approach. To this end, four datasets
with different levels of complexity were used, and the Random Forest model was
used as the hypothesis under test. In the test set, 83.8% of the errors come from
instances on which the IRT points out the model as unreliable.
Keywords: Explainable Artificial Intelligence (XAI) · Machine Learning (ML) ·
Item Response Theory (IRT) · Classification.
1 Introduction
The expansion and increasing use of Artificial Intelligence (AI) systems create
advances that enable these systems to learn and make decisions on their own [11].
Thus, AI becomes increasingly common in everyday society, as both simple and
complex decisions in people’s lives are made by intelligent systems. Such
decisions range from recommending movies based on the user’s preferences to
diagnosing a disease based on a patient’s exams [15].
The question “Can the decision made by a black-box model be trusted for a
context-sensitive problem?” has been asked not only by the scientific community,
but also by society as a whole. For example, in 2018 the General Data Protection
Regulation was implemented in the European Union. It is aimed at securing for
anyone the right to an explanation as to why an intelligent system made a given
decision [20]. In this sense, for a continuous advance in AI applications, the
entire community is faced with the barrier of model explainability [9,11]. To
address this issue, a new field of study is growing rapidly: Explainable
Artificial Intelligence (XAI). Developed by AI and Human-Computer Interaction
(HCI) researchers, XAI is a user-centric field of study aimed at developing
techniques to make the functioning of these systems and models more transparent
and, consequently, more reliable [2]. Recent research shows that calibrating
trust in a model’s decisions is very important, since excessive or poorly
calibrated confidence can lead to critical problems depending on the context [19].
The models that achieve high success rates on real-world problems are usually of
the black-box type. In other words, they are not easily explained and, therefore,
applying XAI techniques is required so that they can be explained and then
interpreted by the end user [9,2]. XAI techniques based on different methodologies
keep emerging, but there are still many gaps in the literature. For example, XAI
methods based on Explanation-by-Example in a model-agnostic fashion6 are still
underexplored by the scientific community [8,10,18]. Techniques based on
Explanation-by-Example use previously known or model-generated data instances to
explain a model, thus providing a good understanding of this model and of its
decisions. This is a technique that may be natural for human beings, since humans
seek to explain certain decisions they themselves make based on previously known
examples and experiences [2].
6 Model-Agnostic: it does not depend on the type of model to be explained [18].
This research explores a new measure of XAI based on the working principles
of Item Response Theory (IRT), which is commonly used in psychometric tests
to assess the performance of individuals on a set of items (e.g., questions) with
different levels of difficulty [3]. To this end, the IRT was adapted for Machine
Learning (ML) evaluation, treating classifiers as individuals and test instances
as items [16]. In previous works [16,5] IRT was used to evaluate ML models and
datasets for classification problems. By applying IRT concepts, the authors were
able to provide new information about the data and the performance of the mod-
els in order to grant more robustness to the preexisting evaluation techniques.
In addition, the IRT’s main feature is to explore the individual’s performance on
a specific item and then estimate the individual’s ability and the item’s
complexity, in order to explain why a respondent got an item right or
wrong. Thus, it is understood that IRT can be used as a means to comprehend
the relationship between the performance of a model and the data, thus helping
in explaining models and understanding the model’s predictions at a local level.
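To make this adaptation concrete, the sketch below assumes the standard three-parameter logistic (3PL) model from the IRT literature, with a classifier playing the role of the respondent and a test instance playing the role of the item; the function name and the parameter values are illustrative only and are not the implementation used in this work.

import numpy as np

def irt_3pl(theta, a, b, c):
    """Three-parameter logistic (3PL) IRT model: probability that a respondent
    with ability `theta` answers correctly an item with discrimination `a`,
    difficulty `b`, and guessing parameter `c`."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Illustrative values: a classifier with estimated ability 1.2 evaluated on an
# easy and on a hard test instance (item).
theta = 1.2
p_easy = irt_3pl(theta, a=1.0, b=-0.5, c=0.25)  # low-difficulty item
p_hard = irt_3pl(theta, a=1.5, b=2.0, c=0.25)   # high-difficulty item
print(f"P(correct | easy item) = {p_easy:.2f}")  # ~0.88
print(f"P(correct | hard item) = {p_hard:.2f}")  # ~0.42

Under this view, the estimated ability of the model and the estimated difficulty of each instance are the quantities that can later be turned into a local explanation of why a given prediction is likely to be right or wrong.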
Given the intrinsic characteristics of the IRT, it is understood that it fits
within the universe of techniques based on Explanation-by-Example. At the same
time, the IRT also offers concepts that allow one to explain and interpret
the model in general and to shed light on details not yet explored by other XAI
techniques. Based on this motivation, this research proposes the use of IRT as
a new Explanation-by-Example approach, in a model-agnostic way, aiming to give
the end user greater confidence in the model’s decisions. For the experiment,
four datasets with different levels of complexity, as indicated by [22], were
selected, with the Random Forest algorithm acting as the target of the
explanation. The objective of this research is to explore how the concepts from
the IRT can help to open the black-box and indicate the confidence of the
model’s prediction.
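As a rough sketch of how such an experiment can be assembled, the snippet below assumes scikit-learn classifiers and builds the kind of dichotomous response matrix (rows are classifiers acting as respondents, columns are test instances acting as items) that an IRT estimation routine would later consume; the dataset, the classifier pool, and the omitted estimation step are placeholders rather than the actual pipeline of this work.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Placeholder dataset; the experiment uses four datasets of varying complexity.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A pool of classifiers acts as the "respondents"; each test instance is an "item".
pool = {
    "random_forest": RandomForestClassifier(random_state=0),  # target of the explanation
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
}

# Dichotomous response matrix: 1 if the classifier got the instance right, 0 otherwise.
responses = np.zeros((len(pool), len(y_te)), dtype=int)
for row, (name, clf) in enumerate(pool.items()):
    clf.fit(X_tr, y_tr)
    responses[row] = (clf.predict(X_te) == y_te).astype(int)

# `responses` is what an IRT package would consume to estimate item difficulty,
# discrimination, and guessing, plus each respondent's ability (estimation omitted).
print(responses.shape, responses.sum(axis=1))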
The remainder of this paper is divided into the following sections: Section 2
provides a contextualization about XAI and IRT; Section 3 explains how IRT is
applied to ML and then to XAI; Section 4 provides the results and discussion of
the proposal presented here; and Section 5 presents the conclusions of this
research and the related final considerations.
2 Background
2.1 Explainable Artificial Intelligence - XAI
Based on the growing need to gain confidence in black-box models, the XAI
community has proposed different methodologies, techniques and tools to ex-
plain these models. It is argued that, based on the creation of model explanation
layers, a human user can create their interpretations and thus better understand
how the model’s decisions were generated, therefore obtaining greater confidence
[2,17]. One of the most popular categories of XAI techniques currently available
is the so-called post-hoc explanations. The main particularity of these post-hoc
explanations is that they only use the training data, test data, model output
data, and the model itself, already trained, to generate the explanations [2].
One of the most necessary characteristics an XAI technique can currently feature
is applicability to computational models of any structural nature (neural
networks, trees, weight vectors, etc.). Such techniques are called model-agnostic [17].
Among the current post-hoc XAI techniques, the following stand out: Text
Explanations, Visual Explanations, Local Explanations, Explanations-by-Example,
Explanations-by-Simplification and Feature Relevance Explanations. Out of these,
this research highlights Explanation-by-Example as a technique still little
explored by the XAI community. In fact, few research works present a clear
proposal or tool that can be used in a replicable way for different real-world
problems [17,8,10,18].
Example-based explanation methods select specific instances of the dataset
in order to explain the behavior of models or to explain the underlying data dis-
tribution [17]. Explanations based on examples are mostly model-agnostic, since
they make any model more interpretable. The most popular tool proposals for
example-based explanations are: Counterfactual explanations [25], Adversarial
examples [4], Prototypes [13] and Influential instances [14]. Each of these propos-
als seeks to carry out the process of identifying relevant instances of the dataset,
which directly, or even indirectly, explain and justify the model’s output [17].
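For illustration only, the sketch below shows the general shape of a model-agnostic Explanation-by-Example: given a query instance, it retrieves the most similar known instances and reports their labels so the user can compare cases. It does not implement any of the tools cited above, and the dataset, distance measure, and parameter choices are arbitrary.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

# An arbitrary black-box classifier to be explained (here, a Random Forest).
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def explain_by_example(query, k=3):
    """Return the model's prediction for `query` together with the k most
    similar known instances and their labels, as a simple example-based
    explanation that does not depend on the model's internal structure."""
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, idx = nn.kneighbors(query.reshape(1, -1))
    neighbors = idx[0]
    return {
        "prediction": int(model.predict(query.reshape(1, -1))[0]),
        "similar_instances": neighbors.tolist(),
        "their_labels": y[neighbors].tolist(),
    }

print(explain_by_example(X[57]))

Counterfactuals, adversarial examples, prototypes, and influential instances refine this basic idea with different criteria for which instances count as explanatory.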