potential long-term impact on people’s well-being, values, expectations, and fair treatment, and
ultimately on whom a computer system serves and whom it harms. We elaborate on each of these
risks to sensitize practitioners and researchers who develop and deploy such systems.
2. Related Work
For several years now, ML methods have been used for the analysis of social media posts
regarding various types of natural disasters, like floods, hurricanes, earthquakes, fires, and
droughts around the globe [5]. Systems have been developed to facilitate early warnings and to
support disaster responses or damage assessments [4]. NLP methods can help to distinguish
informative from uninformative texts posted on social media, classify the type of crisis event
the text belongs to [6, 11], or the type of crisis-related content that is discussed (e.g., warnings,
utilities, needs, affected people [4]). The same can be done based on photos through CV
approaches [8]. The semantic content of posts can be further leveraged with spatial and/or
temporal information to facilitate crisis mapping. For the Chennai flood in 2015, Anbalagan
and Valliyammai [2] built a crisis mapping system that classified related tweets regarding their
content type (e.g., requests for help, sympathy, warnings, weather information, infrastructure
damage). This information was combined with geographic coordinates derived from
textually mentioned locations via geoparsing. Tools like this, which can identify and locate a
crisis-related event, can help emergency responders navigate complex information streams.
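The pipeline described above, classifying a post's content type and geoparsing mentioned locations into coordinates, can be sketched as follows. This is a minimal illustration, not the cited system's implementation: the keyword classifier and the small gazetteer are toy stand-ins for the trained models and geoparsing services such systems actually use.

```python
# Illustrative crisis-mapping sketch: classify a post's content type,
# then geoparse place names mentioned in the text into coordinates.

# Toy keyword-based content classifier (real systems train an ML model).
CONTENT_TYPES = {
    "rescue": "request_for_help",
    "help": "request_for_help",
    "warning": "warning",
    "rain": "weather_information",
    "bridge": "infrastructure_damage",
}

# Toy gazetteer mapping place names to (lat, lon); real geoparsers query
# gazetteer services rather than a hard-coded dictionary.
GAZETTEER = {
    "chennai": (13.0827, 80.2707),
}

def classify(text: str) -> str:
    """Return the first matching content-type label, else 'other'."""
    text = text.lower()
    for keyword, label in CONTENT_TYPES.items():
        if keyword in text:
            return label
    return "other"

def geoparse(text: str) -> list:
    """Return coordinates for every known place name found in the text."""
    text = text.lower()
    return [coords for place, coords in GAZETTEER.items() if place in text]

def map_post(text: str) -> dict:
    """Combine content type and coordinates into one crisis-map record."""
    return {"type": classify(text), "locations": geoparse(text)}

record = map_post("Need rescue near Chennai, water rising fast")
# record["type"] is "request_for_help";
# record["locations"] is [(13.0827, 80.2707)]
```

Records of this shape, a content label plus coordinates, are what a crisis map plots, letting responders filter, for instance, only the help requests in a given area.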
In 2015, Crawford and Finn [12] outlined different classes of limitations of using social media
data in crisis informatics. Ontological limitations: Social media activity spikes around the more
sensational instances, although crisis onsets are oftentimes followed by long-term effects. So,
the time frame of a virtual discourse is not representative of the actual crisis timeline.
Further, applications for humanitarian aid have in the past demonstrated a risk of reifying
power imbalances: “Although crowdsourcing projects can allow the voices of those closest
to a disaster to be heard, some projects most strongly enhance the agency of international
humanitarians” (p. 495, [12]).
Epistemological limitations: The interpretability of social
media data is limited by the role that platforms play in shaping the data. Recommendation
systems determine what users get to see and share. Moreover, a platform can be seen as a
cultural context, with its own trends and communicative patterns. Contents may exaggerate real
events and be charged with opinion and emotion. Finally, distinguishing between human- and
bot-generated messages is not always feasible. Ethical issues: The main point here is the issue
of privacy. Personal statements of users are gathered at a time in which they are especially
vulnerable. Their posts oftentimes include sensitive information about the location, well-being,
and needs of themselves or others. Crawford and Finn [12] argue that consent must not be
sacrificed for “the greater good”.
The privacy issue was also listed as one ethical risk factor by Alexander [13], alongside
the loss of discretion caused by a tendency to share intimate details. Moreover, the author
pointed out that especially wealthy and technologically literate individuals benefit from
digital means of disaster management. This adds to the previously mentioned reification of
power imbalances. Finally, the spread of rumors and misinformation through users, as well as
the ideology-driven governance of platforms, affects the reliability of details and can cause an overall