Comparing Computational Architectures
for Automated Journalism
Yan V. Sym1,[0000-0002-0401-0586], João Gabriel M. Campos1,[0000-0002-1106-3694],
Marcos M. José1,[0000-0003-4663-4386], Fabio G. Cozman1,[0000-0003-4077-4935]
1Escola Politécnica, Universidade de São Paulo, São Paulo, Brazil
Abstract. The majority of NLG systems have been designed following either a template-based or a pipeline-based architecture. Recent neural models for data-to-text generation follow an end-to-end deep learning approach instead, mapping non-linguistic input to natural language without explicit intermediary representations. This study compares the methods most often employed for generating Brazilian Portuguese texts from structured data. Results suggest that explicit intermediate steps in the generation process produce better texts than those generated by neural end-to-end architectures, avoiding data hallucination while generalizing better to unseen inputs. Code and corpus are publicly available.1
Keywords – Natural Language Generation, Automated journalism, Blue Amazon
1. Introduction
Natural Language Generation (NLG) is a subfield at the intersection of linguistics, computer science, and artificial intelligence, concerned with generating readable, coherent and meaningful explanatory text or speech so as to describe non-linguistic input data [Reiter and Dale 2000]. NLG is often viewed as complementary to Natural Language Understanding (NLU) and part of Natural Language Processing (NLP). Whereas in NLU the goal is to understand input sentences to produce machine representations, in NLG the system must make decisions about how to transform representations into meaningful words and phrases [Liddy 2001].
Multiple successful examples of data-to-text systems can be found in weather forecasting [Sripada et al. 2004], financial and analytical reporting, industrial monitoring [Kim et al. 2020] and conversational agents. Amongst NLG applications, robot-journalism is one of the most prominent endeavors thanks to the abundance of structured data streams available today, which allows automated systems to report recurring material with high fidelity and lexical variation [Graefe 2016].
Traditionally, most data-to-text applications have been designed in a modular fashion, as this facilitates reuse across domains; going directly from input to output with rules has simply been too complex [Gatt and Krahmer 2018]. In such systems, non-linguistic input data is converted into natural language through several explicit intermediate transformations and sequential tasks related to content selection, sentence planning and linguistic realization [Ferreira et al. 2019]. The two most frequently used automated journalism architectures are the template-based approach, which is application-dependent and lacks generalization capabilities due to its rule-based nature, and the pipeline-based approach, which embodies linguistic insights to convert data to text by applying a series of sequential steps.

1https://github.com/C4AI/blab-reporter

arXiv:2210.04107v1 [cs.CL] 8 Oct 2022
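To make the contrast concrete, a template-based generator amounts to filling slots in fixed sentence frames with structured values. A minimal sketch in Python, where the frame text and field names are our own illustrative assumptions, not the system's actual templates:

```python
# Minimal sketch of template-based NLG: a fixed sentence frame whose
# slots are filled from a structured record. The frame wording and the
# field names are illustrative assumptions.
TEMPLATE = ("Today in {city}, the high tide reaches {high_tide} m "
            "at {high_time} and the low tide {low_tide} m at {low_time}.")

def realize(record: dict) -> str:
    """Fill the template's slots with values from a structured record."""
    return TEMPLATE.format(**record)

record = {
    "city": "Santos",
    "high_tide": 1.4, "high_time": "04:12",
    "low_tide": 0.3, "low_time": "10:45",
}
print(realize(record))
```

The rule-based nature is visible here: each new message type or domain requires writing a new frame by hand, which is precisely the generalization limit noted above.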
The emergence of neural-based NLG systems in recent years has changed the field: provided there is enough labeled data for training a machine learning model, learning a direct mapping from structured input to textual output has become a reality [Li 2017]. This has led to the recent development of deep learning end-to-end models, which directly learn input-output mappings and rely far less on explicit intermediary representations and linguistic insights.
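On the input side, such end-to-end models typically consume a linearization of the structured record into a single token sequence, which the network is trained to map directly to the reference text. A minimal sketch, where the field markers and example record are our own assumptions:

```python
# Sketch of how an end-to-end data-to-text model sees its input: the
# structured record is flattened into one token sequence, and the model
# is trained on (source, reference_text) pairs to produce the text
# directly. Field names and separator tokens are illustrative assumptions.
def linearize(record: dict) -> str:
    """Flatten a key-value record into one input string for a seq2seq model."""
    return " ".join(f"<{key}> {value}" for key, value in record.items())

record = {"city": "Rio de Janeiro", "water_temp": 24, "condition": "sunny"}
print(linearize(record))
```

Nothing in the source string marks content selection or sentence planning; those decisions are left implicit in the learned mapping, which is what makes such models hard to audit.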
Even though it is technically feasible to use neural end-to-end methods in real-world applications, this does not necessarily mean that they are superior to rule-based approaches in every scenario. Recent empirical studies have demonstrated that a combination of template and pipeline systems produces texts that are more appropriate than those of neural-based approaches, which frequently hallucinate content unsupported by the semantic input [Ferreira et al. 2019]. For the particular task of automated journalism, reporting inaccurate data would seriously undermine a robot's credibility and could have serious implications in sensitive domains, such as environmental reporting. A modular model also has the advantage of allowing for auditing, whereas neural end-to-end approaches behave as black boxes [Campos et al. 2020].
In this paper, we compare the three most frequently used architectures for automated journalism – template-based, pipeline-based and end-to-end neural models – using a common domain, the Blue Amazon. With an offshore area of 3.6 million square kilometers along the Brazilian coast, the Blue Amazon is Brazil's exclusive economic zone (EEZ); it is an oceanic region brimming with marine species and energy resources [Thompson and Muggah 2015]. Ocean monitoring, climate change and environmental sustainability are promising fields for automated journalism applications. The oceans are severely damaged environments, and if current trends continue, the consequences for the planet will be disastrous, since healthy oceans are essential for halting climate change, fostering economic growth and preserving biodiversity [e Costa et al. 2022]. Although connecting with public audiences in an approachable way typically requires coverage by trained human journalists, accurate and low-latency information reports can be very helpful. There is a vast and ever-growing body of information about the oceans; clearly, society can benefit from a robot journalism system. To address this need, we created a robot journalism application which combines different NLG approaches to generate daily reports about the Blue Amazon and publish them on Twitter.2
A corpus of verbalizations of non-linguistic data in Brazilian Portuguese was created based on syntactical and lexical patterning abstracted from data collected from publicly available sources. Intermediate representations were annotated for each entry in order to develop our corpus. A combination of automatic and human evaluation, together with a qualitative analysis, was then carried out to measure the fluency, semantics and lexical variety of the generated texts.
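Lexical variety, for instance, lends itself to automatic measurement. A minimal sketch of two generic measures, type-token ratio and the proportion of distinct bigrams, which are common illustrations and not necessarily the exact metrics used in our evaluation:

```python
# Sketch of two simple lexical-variety measures over generated text:
# type-token ratio (unique tokens / total tokens) and the proportion of
# distinct bigrams. Generic illustrations, not necessarily the exact
# metrics used in the paper's evaluation.
def type_token_ratio(text: str) -> float:
    """Fraction of tokens that are unique; higher means more varied."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens)

def distinct_bigrams(text: str) -> float:
    """Fraction of adjacent token pairs that are unique."""
    tokens = text.lower().split()
    bigrams = list(zip(tokens, tokens[1:]))
    return len(set(bigrams)) / len(bigrams)

text = "the tide rises and the tide falls"
print(type_token_ratio(text), distinct_bigrams(text))
```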
The main contributions of this work are the construction of a publicly available Brazilian Portuguese NLG dataset, a comparison between the three most frequently used automated journalism architectures, and an application which combines different approaches to publish daily reports about the Blue Amazon on Twitter.

2https://twitter.com/BLAB_Reporter

Figure 1. Left: tides chart for the city of Rio de Janeiro (RJ), taken from the Tides Chart website. Right: vessel positions near the port of Santos (SP) on a given day, taken from the Marine Traffic website.

In Section 2, we present our Blue Amazon dataset for automated journalism, and in Section 3 we discuss our approach to building a template-based architecture. In Section 4, we present and discuss our pipeline architecture with six sequential modules. In Section 5, we discuss the end-to-end architecture, training four different neural networks to generate the output text. In Section 6 we present the main results of this work, and in Section 7 we discuss them through a qualitative analysis. Finally, we conclude in Section 8.
2. Non-linguistic Data about the Blue Amazon
The experiments presented in this work were run with a corpus of Brazilian Portuguese verbalizations for the Blue Amazon domain. We initially developed web crawlers which extracted daily information from publicly available sources, including weather, temperature, tides charts, earthquakes, vessel positioning and oil extraction. Weather data and tides charts are extracted from the Tides Chart website, which provides information about high tides, low tides, tide charts, fishing times, ocean conditions, water temperatures and weather forecasts for thousands of cities around the world. Figure 1 (left) shows an example of tides charts for the following week in Rio de Janeiro (RJ).
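A daily crawl of this kind can be sketched with standard HTTP and HTML-parsing libraries; the URL pattern and CSS selector below are placeholders of our own, not the actual markup of the Tides Chart website:

```python
# Sketch of a daily tide-data crawler using requests and BeautifulSoup.
# The URL pattern and the CSS class names are placeholders, not the
# real structure of the Tides Chart website.
import requests
from bs4 import BeautifulSoup

def parse_tides(html: str) -> list[dict]:
    """Extract (time, height, type) rows from a tide-table page."""
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for row in soup.select("table.tide-table tr"):  # placeholder selector
        cells = [cell.get_text(strip=True) for cell in row.find_all("td")]
        if len(cells) == 3:
            rows.append({"time": cells[0], "height": cells[1], "type": cells[2]})
    return rows

def fetch_tides(city_slug: str) -> list[dict]:
    """Download and parse the tide page for one city."""
    url = f"https://example.org/tides/{city_slug}"  # placeholder URL
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return parse_tides(response.text)
```

Keeping parsing separate from fetching makes the crawler testable offline, and the parsed rows can be cleaned and stored as-is.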
Vessel positioning is collected from the Marine Traffic website, an open, community-based project that provides real-time information about ship movements around the world, as well as their current location in ports and harbors. Figure 1 (right) shows an example of vessel positions near the Santos (SP) port on a given day. Real-time data regarding earthquakes off the Brazilian coast are taken from the Seismological Center at the University of São Paulo, and information regarding oil extraction is obtained from the Brazilian government portal. After the data are collected and cleaned, they are stored in MongoDB, a NoSQL document-oriented database program which provides more flexibility and scalability than relational databases when input data is constantly changing [Stonebraker 2010].
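Persisting a cleaned daily record can be sketched with the pymongo client; the database, collection, and field names below are our own placeholders, not the system's actual schema:

```python
# Sketch of persisting cleaned crawl results in MongoDB via pymongo.
# Database, collection, and field names are illustrative placeholders.
def make_document(city: str, day: str, data: dict) -> dict:
    """Shape a cleaned crawl result into a schemaless document; fields
    may vary by source, which is why a document store is convenient."""
    return {"city": city, "date": day, **data}

def store_document(doc: dict, uri: str = "mongodb://localhost:27017") -> None:
    """Upsert keyed on (city, date), so re-running the crawler for the
    same day replaces the old document instead of duplicating it."""
    from pymongo import MongoClient  # deferred: requires a running MongoDB
    collection = MongoClient(uri)["blue_amazon"]["daily_reports"]
    collection.replace_one(
        {"city": doc["city"], "date": doc["date"]}, doc, upsert=True
    )

doc = make_document("Santos", "2022-10-08", {"high_tide_m": 1.4})
```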
We created the corpus based on information collected during 90 consecutive days for 50 cities along the Brazilian coast, and then performed content selection for past time-series data using feedback from domain experts. The intent messages were then sorted