Estimating productivity gains in digital automation
for BP workflows; and (v) provide an empirical analysis of the model's labour redistribution on a well-documented
public case study.
This work is organised as follows: Section II reviews related work, and Section III introduces key economic modelling
concepts. Section IV describes our methodology. The proposed model is explained in Section V. Sections VI and VII
introduce the results and discussion, respectively. Lastly, we present our conclusions.
2 Related Work
In the mid-2010s, early Artificial Intelligence (AI) firm adopters stated that the acquisition of Natural Language
Processing (NLP) systems generated savings and productivity increments [8]. However, empirical evidence showed
that productivity benefits were not easy to track [9]. AI integration exhibited the conditions of a Solow's Paradox redux.
This phenomenon motivated several works that aimed to identify the reasons for the absence or emergence of AI
productivity effects [10]. These studies followed two main approaches: a pessimistic reading of the empirical past and
optimism about the future [5]. The first group suggests that AI provided benefits, but statistics did not accurately capture
them. The second states that AI advantages are not yet part of business operations [11]. Both perspectives employ
macroeconomic information and lack suitable tools to capture data down to the task level [5]. As a consequence,
productivity assessment transfers AI benefits to other evaluation axes, and the outcome is an averaged result [12]. On
the other hand, the optimistic course implies that there is a period in which a technology is not mature enough to produce
a discernible influence on productivity growth [13, 1].
It is clear, then, that a model capable of measuring productivity variations at the task level can provide the methodological
support to disambiguate the current controversies. Other works have found that a mismatch between technology
and workers' skills negatively affects productivity [14]. AI systems assemble new "hybrid" duties that become part of
the automated process [15]. These analyses conclude that labour composition is another factor that might harm productivity
estimation [16, 17]. In other words, AI systems might require more labour to execute the automated BP [18]. Hence,
process inputs would change, and so would the production volume [8]. Therefore, labour composition is another relevant
factor for AI productivity assessment.
Apart from these two main study segments, we identified a third one: AI Capital. AI Capital is the portion of the
income generated by those activities automated with AI systems [19]. AI cash inflows are not challenging to track [20].
Unfortunately, it is complex to determine the root cause of these income streams [21]. Some authors state that market
conditions are responsible for the generated wealth [22]. As a consequence, AI Capital models require considerable data to
produce results.
The three approaches employ either Cobb-Douglas equations or variations thereof to represent companies' production
[15, 23, 10]. This function family is exponential [2]; hence, parameter determination requires considerable effort [16]. To ease
computations, researchers employ logarithmic techniques, so that linear regression methods become suitable to
determine production parameters [22]. Unfortunately, business processes store averaged information across different
databases [21]; hence, AI-system contributions dilute across the business [22]. Determining the productivity gains
behind digital technologies provides a fundamental translation of value between the technology investment and the
financial results [1, 10].
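To illustrate the log-linearisation step mentioned above, the following sketch recovers the parameters of a synthetic Cobb-Douglas function Y = A·K^α·L^β from its log-linear form ln Y = ln A + α ln K + β ln L. All input values and parameters here are invented for the example; a real study would fit many noisy observations with least squares rather than solving an exact system.

```python
import math

# Hedged sketch: recovering Cobb-Douglas parameters by log-linearisation.
# Y = A * K^alpha * L^beta  =>  ln Y = ln A + alpha*ln K + beta*ln L
# With three exact (noise-free) observations the resulting 3x3 linear
# system can be solved directly.
A, alpha, beta = 2.0, 0.3, 0.7                      # "true" parameters (synthetic)
obs = [(10.0, 20.0), (50.0, 5.0), (80.0, 60.0)]     # (K, L) input pairs (synthetic)
rows = [[1.0, math.log(K), math.log(L),
         math.log(A * K**alpha * L**beta)] for K, L in obs]

def solve3(m):
    # Gauss-Jordan elimination on a 3x4 augmented matrix
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))  # partial pivoting
        m[i], m[p] = m[p], m[i]
        m[i] = [v / m[i][i] for v in m[i]]
        for r in range(3):
            if r != i:
                m[r] = [vr - m[r][i] * vi for vr, vi in zip(m[r], m[i])]
    return [m[r][3] for r in range(3)]

ln_A, a_hat, b_hat = solve3(rows)
print(round(math.exp(ln_A), 3), round(a_hat, 3), round(b_hat, 3))
```

Because the production function is linear in log space, ordinary linear regression over many (K, L, Y) observations estimates ln A, α, and β directly, which is why this transformation eases parameter determination.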
To tackle this issue, we explored emerging process mining frameworks, which induce business process models from
event logs [24], where an event is the basic unit of a BP. Each event provides basic information such as completion
timestamp, executor, and activity costs [25]. There are several process mining log formats; however, process mining
researchers and practitioners have adopted the IEEE XES format as a standard [26]. XES has a formal and canonical schema,
recording compulsory fields and offering the possibility of including customised data in an XML structure [27]. We
found that process mining provides adequate granularity and a feasibility argument for measuring
productivity changes, given the lower adoption barriers of event logs in contrast to competing, more formal
frameworks (such as Business Process Management, BPM, workflows).
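As a minimal sketch of the event structure described above, the snippet below builds a single XES-style trace with Python's standard library. The attribute keys follow the common XES Concept/Time/Organizational/Cost extension conventions; the case name, activity, resource, and cost values are invented for illustration, and a complete XES log would also declare `<extension>` and `<global>` elements.

```python
import xml.etree.ElementTree as ET

# One trace (a process instance) containing one event with the basic
# fields mentioned in the text: activity, executor, timestamp, and cost.
log = ET.Element("log", {"xes.version": "1.0"})
trace = ET.SubElement(log, "trace")
ET.SubElement(trace, "string", {"key": "concept:name", "value": "case-001"})

event = ET.SubElement(trace, "event")
ET.SubElement(event, "string", {"key": "concept:name", "value": "Approve invoice"})
ET.SubElement(event, "string", {"key": "org:resource", "value": "clerk-07"})
ET.SubElement(event, "date", {"key": "time:timestamp",
                              "value": "2023-05-04T10:15:00+00:00"})
ET.SubElement(event, "float", {"key": "cost:total", "value": "12.50"})

xml_str = ET.tostring(log, encoding="unicode")
print(xml_str)
```

Because every event carries its own timestamp, executor, and cost, per-task aggregation over such logs is straightforward, which is the granularity argument made above.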
Our examination of the relevant literature revealed a three-point gap: 1) labour productivity analysis should cover
task-level characterisation [5, 28], 2) interest in Solow's Paradox grew more towards measuring the impact of shifts
in labour input than towards identifying the sources of AI Capital streams [29], 3) process mining might provide
the required data to produce better productivity evaluations [27].
3 Estimating Productivity: Key Definitions
In this section, we outline the key definitions used throughout this paper.