MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition

David Ifeoluwa Adelani1,2, Graham Neubig3, Sebastian Ruder4, Shruti Rijhwani3,
Michael Beukman5, Chester Palen-Michel6, Constantine Lignos6, Jesujoba O. Alabi1,
Shamsuddeen H. Muhammad7, Peter Nabende8, Cheikh M. Bamba Dione9, Andiswa Bukula10,
Rooweither Mabuya10, Bonaventure F. P. Dossou11, Blessing Sibanda, Happy Buzaaba12,
Jonathan Mukiibi8, Godson Kalipe, Derguene Mbaye13, Amelia Taylor14, Fatoumata Kabore15,
Chris Chinenye Emezue16, Anuoluwapo Aremu, Perez Ogayo3, Catherine Gitau,
Edwin Munkoh-Buabeng17, Victoire M. Koagne, Allahsera Auguste Tapo18, Tebogo Macucwa19,
Vukosi Marivate19, Elvis Mboning, Tajuddeen Gwadabe, Tosin Adewumi20,
Orevaoghene Ahia21, Joyce Nakatumba-Nabende8, Neo L. Mokono19, Ignatius Ezeani22,
Chiamaka Chukwuneke22, Mofetoluwa Adeyemi23, Gilles Q. Hacheme24, Idris Abdulmumin25,
Odunayo Ogundepo23, Oreen Yousuf15, Tatiana Moteu Ngoli, Dietrich Klakow1

Masakhane NLP; 1Saarland University, Germany; 2University College London, UK; 3Carnegie Mellon University, USA; 4Google Research; 5University of the Witwatersrand, South Africa; 6Brandeis University, USA; 7LIAAD-INESC TEC, Portugal; 8Makerere University, Uganda; 9University of Bergen, Norway; 10SADiLaR, South Africa; 11Mila Quebec AI Institute, Canada; 12RIKEN Center for AI Project, Japan; 13Baamtu, Senegal; 14Malawi University of Business and Applied Science, Malawi; 15Uppsala University, Sweden; 16TU Munich, Germany; 17TU Clausthal, Germany; 18Rochester Institute of Technology, USA; 19University of Pretoria, South Africa; 20Luleå University of Technology, Sweden; 21University of Washington, USA; 22Lancaster University, UK; 23University of Waterloo, Canada; 24Ai4innov, France; 25Ahmadu Bello University, Nigeria.
Abstract

African languages are spoken by over a billion people, but are underrepresented in NLP research and development. The challenges impeding progress include the limited availability of annotated datasets, as well as a lack of understanding of the settings where current methods are effective. In this paper, we make progress towards solutions for these challenges, focusing on the task of named entity recognition (NER). We create the largest human-annotated NER dataset for 20 African languages, and we study the behavior of state-of-the-art cross-lingual transfer methods in an Africa-centric setting, demonstrating that the choice of source language significantly affects performance. We show that choosing the best transfer language improves zero-shot F1 scores by an average of 14 points across 20 languages compared to using English. Our results highlight the need for benchmark datasets and models that cover typologically-diverse African languages.
1 Introduction
Many African languages are spoken by millions or tens of millions of speakers. However, these languages are poorly represented in NLP research, and the development of NLP systems for African languages is often limited by the lack of datasets for training and evaluation (Adelani et al., 2021b).

Additionally, while there has been much recent work in using zero-shot cross-lingual transfer (Ponti et al., 2020; Pfeiffer et al., 2020; Ebrahimi et al., 2022) to improve performance on tasks for low-resource languages with multilingual pretrained language models (PLMs) (Devlin et al., 2019a; Conneau et al., 2020), the settings under which contemporary transfer learning methods work best are still unclear (Pruksachatkun et al., 2020; Lauscher et al., 2020; Xia et al., 2020). For example, several methods use English as the source language because of the availability of training data across many tasks (Hu et al., 2020; Ruder et al., 2021), but there is evidence that English is often not the best transfer language (Lin et al., 2019; de Vries et al., 2022; Oladipo et al., 2022), and the process of choosing the best source language to transfer from remains an open question.
There has been recent progress in creating benchmark datasets for training and evaluating models in African languages for several tasks such as machine translation (∀ et al., 2020; Reid et al., 2021; Adelani et al., 2021a, 2022; Abdulmumin et al., 2022) and sentiment analysis (Yimam et al., 2020; Muhammad et al., 2022). In this paper, we focus on the standard NLP task of named entity recognition (NER) because of its utility in downstream applications such as question answering and information extraction. For NER, annotated datasets exist only in a few African languages (Adelani et al., 2021b; Yohannes and Amagasa, 2022), the largest of which is the MasakhaNER dataset (Adelani et al., 2021b) (which we call MasakhaNER 1.0 in the remainder of the paper). While MasakhaNER 1.0 covers 10 African languages spoken mostly in West and East Africa, it does not include any languages spoken in Southern Africa, which have distinct syntactic and morphological characteristics and are spoken by 40 million people.
In this paper, we tackle two current challenges in developing NER models for African languages: (1) the lack of typologically- and geographically-diverse evaluation datasets for African languages; and (2) choosing the best transfer language for NER in an Africa-centric setting, which has not been previously explored in the literature.
To address the first challenge, we create the MasakhaNER 2.0 corpus, the largest human-annotated NER dataset for African languages. MasakhaNER 2.0 contains annotated text data from 20 languages widely spoken in Sub-Saharan Africa and is complementary to the languages present in previously existing datasets (e.g., Adelani et al., 2021b). We discuss our annotation methodology and perform benchmarking experiments on our dataset with state-of-the-art NER models based on multilingual PLMs.
In addition, to better understand the effect of source language on transfer learning, we extensively analyze different features that contribute to cross-lingual transfer, including linguistic characteristics of the languages (i.e., typological, geographical, and phylogenetic features) as well as data-dependent features such as entity overlap across source and target languages (Lin et al., 2019). We demonstrate that choosing the best transfer language(s) in both single-source and co-training setups leads to large improvements in NER performance in zero-shot settings; our experiments show an average 14-point increase in F1 score compared to using English as the source language across 20 target African languages. We release the data, code, and models on GitHub.¹

¹https://github.com/masakhane-io/masakhane-ner/tree/main/MasakhaNER2.0

2 Related Work

African NER Datasets
There are some human-annotated NER datasets for African languages, such as the SADiLaR NER corpus (Eiselen, 2016) covering 10 South African languages; LORELEI (Strassel and Tracey, 2016), which covers nine African languages but is not open-sourced; and some individual language efforts for Amharic (Jibril and Tantug, 2022), Yorùbá (Alabi et al., 2020), Hausa (Hedderich et al., 2020), and Tigrinya (Yohannes and Amagasa, 2022). Closest to our work is the MasakhaNER 1.0 corpus (Adelani et al., 2021b), which covers 10 widely spoken languages in the news domain but excludes languages from the southern region of Africa like isiZulu, isiXhosa, and chiShona, whose distinct syntactic features (e.g., noun prefixes and capitalization in the middle of words) limit transfer learning from other languages. We include five languages from Southern Africa in our new corpus.
Cross-lingual Transfer
Leveraging cross-lingual transfer has the potential to drastically improve model performance without requiring large amounts of data in the target language (Conneau et al., 2020), but it is not always clear which language to transfer from (Lin et al., 2019; de Vries et al., 2022). To this end, recent work investigates methods for selecting good transfer languages and informative features. For instance, token overlap between the source and target language is a useful predictor of transfer performance for some tasks (Lin et al., 2019; Wu and Dredze, 2019). Linguistic distance (Lin et al., 2019; de Vries et al., 2022), word order (K et al., 2020; Pires et al., 2019), script differences (de Vries et al., 2022), and syntactic similarity (Karamolegkou and Stymne, 2021) have also been shown to impact performance. Another research direction attempts to build models of transfer performance that predict the best transfer language for a target language using linguistic and data-dependent features (Lin et al., 2019; Ahuja et al., 2022).
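To make the data-dependent features concrete, the following is a minimal sketch of a token-overlap feature used to rank candidate source languages. This is our illustration, not code from the paper or from any published tool; the toy token lists are made up, and real features would be computed over full training corpora.

```python
def token_overlap(source_tokens, target_tokens):
    """Fraction of target-language token types that also occur in the
    source corpus -- a simple data-dependent transfer feature in the
    spirit of Lin et al. (2019)."""
    source_vocab, target_vocab = set(source_tokens), set(target_tokens)
    if not target_vocab:
        return 0.0
    return len(source_vocab & target_vocab) / len(target_vocab)

# Toy corpora (made up); real features are computed over full training sets.
source_corpora = {
    "eng": ["the", "president", "visited", "Nairobi", "yesterday"],
    "swa": ["rais", "alitembelea", "Nairobi", "jana", "asubuhi"],
}
target_tokens = ["rais", "wa", "Kenya", "alitembelea", "Kisumu"]

ranked = sorted(source_corpora,
                key=lambda lang: token_overlap(source_corpora[lang], target_tokens),
                reverse=True)
print(ranked)  # source languages with higher type overlap rank first
```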
3 Languages and Their Characteristics
3.1 Focus Languages
Table 1 provides an overview of the languages in our MasakhaNER 2.0 corpus. We focus on 20 Sub-Saharan African languages² with varying numbers of speakers (between 1M and 100M) that are spoken by over 500M people in around 27 countries in the Western, Eastern, Central, and Southern regions of Africa. The selected languages cover four language families. Seventeen languages belong to the Niger-Congo language family, and one language belongs to each of the Afro-Asiatic (Hausa), Nilo-Saharan (Luo), and English Creole (Naija) families. Although many languages belong to the Niger-Congo language family, they have different linguistic characteristics. For instance, Bantu languages (eight in our selection) make extensive use of affixes, unlike many languages of non-Bantu subgroups such as Gur, Kwa, and Volta-Niger.

²Our selection was also constrained by the availability of volunteers that speak the languages in different NLP/AI communities in Africa.

Language | Family | African Region | No. of Speakers | Source | Train / dev / test | % Entities in Tokens | # Tokens
Bambara (bam) | NC / Mande | West | 14M | MAFAND-MT (Adelani et al., 2022) | 4462 / 638 / 1274 | 6.5 | 155,552
Ghomálá’ (bbj) | NC / Grassfields | Central | 1M | MAFAND-MT (Adelani et al., 2022) | 3384 / 483 / 966 | 11.3 | 69,474
Éwé (ewe) | NC / Kwa | West | 7M | MAFAND-MT (Adelani et al., 2022) | 3505 / 501 / 1001 | 15.3 | 90,420
Fon (fon) | NC / Volta-Niger | West | 2M | MAFAND-MT (Adelani et al., 2022) | 4343 / 621 / 1240 | 8.3 | 173,099
Hausa (hau) | Afro-Asiatic / Chadic | West | 63M | Kano Focus and Freedom Radio | 5716 / 816 / 1633 | 14.0 | 221,086
Igbo (ibo) | NC / Volta-Niger | West | 27M | IgboRadio and Ka Ọ Dị Taa | 7634 / 1090 / 2181 | 7.5 | 344,095
Kinyarwanda (kin) | NC / Bantu | East | 10M | IGIHE, Rwanda | 7825 / 1118 / 2235 | 12.6 | 245,933
Luganda (lug) | NC / Bantu | East | 7M | MAFAND-MT (Adelani et al., 2022) | 4942 / 706 / 1412 | 15.6 | 120,119
Luo (luo) | Nilo-Saharan | East | 4M | MAFAND-MT (Adelani et al., 2022) | 5161 / 737 / 1474 | 11.7 | 229,927
Mossi (mos) | NC / Gur | West | 8M | MAFAND-MT (Adelani et al., 2022) | 4532 / 648 / 1294 | 9.2 | 168,141
Naija (pcm) | English-Creole | West | 75M | MAFAND-MT (Adelani et al., 2022) | 5646 / 806 / 1613 | 9.4 | 206,404
Chichewa (nya) | NC / Bantu | South-East | 14M | Nation Online Malawi | 6250 / 893 / 1785 | 9.3 | 263,622
chiShona (sna) | NC / Bantu | South | 12M | VOA Shona | 6207 / 887 / 1773 | 16.2 | 195,834
Kiswahili (swa) | NC / Bantu | East & Central | 98M | VOA Swahili | 6593 / 942 / 1883 | 12.7 | 251,678
Setswana (tsn) | NC / Bantu | South | 14M | MAFAND-MT (Adelani et al., 2022) | 3489 / 499 / 996 | 8.8 | 141,069
Akan/Twi (twi) | NC / Kwa | West | 9M | MAFAND-MT (Adelani et al., 2022) | 4240 / 605 / 1211 | 6.3 | 155,985
Wolof (wol) | NC / Senegambia | West | 5M | MAFAND-MT (Adelani et al., 2022) | 4593 / 656 / 1312 | 7.4 | 181,048
isiXhosa (xho) | NC / Bantu | South | 9M | Isolezwe Newspaper | 5718 / 817 / 1633 | 15.1 | 127,222
Yorùbá (yor) | NC / Volta-Niger | West | 42M | Voice of Nigeria and Asejere | 6877 / 983 / 1964 | 11.4 | 244,144
isiZulu (zul) | NC / Bantu | South | 27M | Isolezwe Newspaper | 5848 / 836 / 1670 | 11.0 | 128,658

Table 1: Languages and Data Splits for the MasakhaNER 2.0 Corpus. Language, family (NC: Niger-Congo), African region, number of speakers, news source, and data split in number of sentences.
3.2 Language Characteristics
Script and Word Order
African languages mainly employ four major writing scripts: Latin, Arabic, N'ko, and Ge'ez. Our focus languages mostly make use of the Latin script. While N'ko is still actively used by Mande languages like Bambara, the most widely used writing script for the language is Latin. However, some languages use additional letters that go beyond the standard Latin script, e.g., "Ɛ", "Ɔ", "Ŋ", "ẹ", as well as multi-character letters like "bv", "gb", "mpf", and "ntsh". Seventeen of the languages are tonal; the exceptions are Naija, Kiswahili, and Wolof. Nine of the languages make use of diacritics (e.g., é, ë, ñ). All languages use the SVO word order, while Bambara additionally uses the SOV word order.
Morphology and Noun Classes
Many African languages are morphologically rich. According to the World Atlas of Language Structures (WALS; Nichols and Bickel, 2013), 16 of our languages employ strong prefixing or suffixing inflection. Niger-Congo languages are known for their system of noun classification. Twelve of the languages actively make use of between 6 and 20 noun classes, including all Bantu languages, Ghomálá', Mossi, Akan, and Wolof (Nurse and Philippson, 2006; Payne et al., 2017; Bodomo and Marfo, 2002; Babou and Loporcaro, 2016). While noun classes are often marked using affixes on the head word in Bantu languages, some non-Bantu languages, e.g., Wolof, make use of a dependent such as a determiner that is not attached to the head word. For other Niger-Congo languages such as Fon, Ewe, Igbo, and Yorùbá, the use of noun classes is merely vestigial (Konoshenko and Shavarina, 2019). Three of our languages from the Southern Bantu family (chiShona, isiXhosa, and isiZulu) capitalize proper names after the noun class prefix, as in the language names themselves. This characteristic may limit transfer from languages without this feature, as NER models overfit on capitalization (Mayhew et al., 2019). Appendix B provides more details regarding the languages' linguistic characteristics.
4 MasakhaNER 2.0 Corpus
4.1 Data Source and Collection
We annotate news articles from local sources. The choice of the news domain is based on the availability of data for many African languages and the variety of named entity types (e.g., person names and locations), as illustrated by popular datasets such as CoNLL-03 (Tjong Kim Sang and De Meulder, 2003).³ Table 1 shows the sources and sizes of the data we use for annotation. Overall, we collected between 4.8K and 11K sentences per language from either a monolingual or a translation corpus.

³We also considered using Wikipedia as our data source, but did not due to quality issues (Alabi et al., 2020).
Monolingual corpus
We collect a large monolingual corpus for nine languages, mostly from local news articles, except for chiShona and Kiswahili texts, which were crawled from Voice of America (VOA) websites.⁴ As Yorùbá text was missing diacritics, we asked native speakers to manually add diacritics before annotation. During data collection, we ensured that the articles cover a variety of topics, e.g., politics, sports, culture, technology, society, and education. In total, we collected between 8K and 11K sentences per language.
Translation corpus
For the remaining languages, for which we were unable to obtain sufficient amounts of monolingual data, we use a translation corpus, MAFAND-MT (Adelani et al., 2022), which consists of French and English news articles translated into 11 languages. We note that translationese may lead to undesired properties, e.g., unnaturalness; however, we did not observe serious issues during annotation. The number of sentences is constrained by the size of the MAFAND-MT corpus, which is between 4,800 and 8,000 per language.
4.2 NER Annotation Methodology
We annotated the collected monolingual texts with the ELISA annotation tool (Lin et al., 2018) using four entity types: personal name (PER), location (LOC), organization (ORG), and date and time (DATE), similar to MasakhaNER 1.0 (Adelani et al., 2021b). We made use of the MUC-6 annotation guide.⁵ The annotation was carried out by three native speakers per language, recruited from AI/NLP communities in Africa. To ensure high-quality annotation, we recruited a language coordinator to supervise annotation in each language. We organized two online workshops to train language coordinators on NER annotation. As part of the training, each coordinator annotated 100 English sentences, which were verified. Each coordinator then trained the three annotators in their team using both English and African-language texts, with the support of the workshop organizers. All annotators and language coordinators received appropriate remuneration.⁶ At the end of annotation, language coordinators worked with their team to resolve disagreements using the adjudication function of ELISA, which ensures a high inter-annotator agreement score.

⁴www.voashona.com/ and www.voaswahili.com/
⁵https://cs.nyu.edu/~grishman/muc6.html
⁶$10 per hour, annotating about 200 sentences per hour.

Lang. | Fleiss' Kappa | QC flags fixed?
bam | 0.980 | ✗
bbj | 1.000 | ✓
ewe | 0.991 | ✓
fon | 0.941 | ✗
hau | 0.950 | ✗
ibo | 0.965 | ✗
kin | 0.943 | ✗
lug | 0.950 | ✓
luo | 0.907 | ✗
mos | 0.927 | ✗
pcm | 0.966 | ✗
nya | 0.988 | ✓
sna | 0.957 | ✓
swa | 0.974 | ✓
tsn | 0.962 | ✗
twi | 0.932 | ✗
wol | 0.979 | ✓
xho | 0.945 | ✓
yor | 0.950 | ✓
zul | 0.953 | ✓

Table 2: Inter-annotator agreement for our datasets, calculated using Fleiss' kappa (κ) at the entity level before adjudication. QC flags (✓) indicates the languages that fixed the annotations for all Quality-Control-flagged tokens.
4.3 Quality Control
As discussed in subsection 4.2, language coordinators helped resolve several disagreements in annotation prior to quality control. Table 2 reports the Fleiss' kappa score after the intervention of language coordinators (i.e., the post-intervention score). The pre-intervention Fleiss' kappa score was much lower. For example, for pcm, the pre-intervention Fleiss' kappa score was 0.648 and improved to 0.966 after the language coordinator discussed the disagreements with the annotators.
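For reference, the following is a minimal sketch of the Fleiss' kappa computation over a table of per-item category counts. This is our illustration, not the authors' evaluation code; the toy table of four entity mentions labeled by three annotators is made up.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (n_items, n_categories) matrix, where
    counts[i, j] is how many annotators assigned category j to item i.
    Assumes the same number of raters for every item."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()
    # Per-item agreement: proportion of agreeing annotator pairs.
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement from the marginal category distribution.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.square(p_j).sum()
    return (p_bar - p_e) / (1.0 - p_e)

# Toy example: three annotators labeling four entity mentions
# with one of {PER, LOC, ORG}.
table = [[3, 0, 0],   # unanimous PER
         [2, 1, 0],   # 2x PER, 1x LOC
         [0, 3, 0],   # unanimous LOC
         [0, 0, 3]]   # unanimous ORG
print(round(fleiss_kappa(table), 3))  # 0.745
```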
For quality control, annotations were automatically adjudicated when there was agreement, but were flagged for further review when annotators disagreed on mention spans or types. The process of reviewing and fixing quality control issues was voluntary, so not all languages were further reviewed (see Table 2).
We automatically identified positions in the annotation that were more likely to be annotation errors and flagged them for further review and correction. The automatic process flags tokens that are commonly annotated as a named entity but were not marked as a named entity in a specific position. For example, the token Province may commonly appear as part of a named entity and only infrequently outside of one, so when it is seen unmarked, it is flagged. Similarly, we flagged tokens that had near-zero entropy with regard to a certain entity type, for example a token almost always annotated as ORG but very rarely annotated as PER. We also flagged potential sentence boundary errors by identifying sentences with few tokens or sentences which end in a token that appears to be an abbreviation or acronym. As shown in Table 2, before further adjudication and correction there was already relatively high inter-annotator agreement, measured by Fleiss' kappa at the mention level.
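As an illustration of the near-zero-entropy heuristic, here is a small sketch that flags rare dissenting labels for tokens that almost always carry one label. It is a rough stand-in we wrote for exposition, not the actual flagging pipeline; the thresholds and the toy data are made up.

```python
from collections import Counter, defaultdict

def flag_rare_labels(token_label_pairs, min_count=20, dominance=0.95):
    """Flag (token, label) pairs where the token almost always carries one
    label (share >= dominance) but occasionally carries another -- a rough
    stand-in for the near-zero-entropy heuristic described above.
    Thresholds are illustrative, not the ones used for the corpus."""
    label_counts = defaultdict(Counter)
    for token, label in token_label_pairs:
        label_counts[token][label] += 1
    flags = []
    for token, counts in label_counts.items():
        total = sum(counts.values())
        if total < min_count:
            continue  # too few observations to judge
        majority_label, majority = counts.most_common(1)[0]
        if majority / total >= dominance and majority < total:
            flags.extend((token, lbl) for lbl in counts if lbl != majority_label)
    return flags

# Toy usage: "Province" is almost always inside a LOC mention, so the
# single unmarked occurrence gets flagged for review.
data = [("Province", "LOC")] * 30 + [("Province", "O")]
print(flag_rare_labels(data, min_count=10))  # [('Province', 'O')]
```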
After quality control, we divided the annotations into training, development, and test splits consisting of 70%, 10%, and 20% of the data, respectively. Appendix A provides details on the number of tokens per entity type (PER, LOC, ORG, and DATE) and the fraction of entity tokens.
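A minimal sketch of such a split follows; the paper does not specify the exact splitting procedure, so the shuffling and the seed here are assumptions on our part.

```python
import random

def split_70_10_20(sentences, seed=42):
    """Shuffle annotated sentences and split them 70% / 10% / 20%
    into train / dev / test (illustrative procedure)."""
    rng = random.Random(seed)
    data = list(sentences)
    rng.shuffle(data)
    train_end, dev_end = int(0.7 * len(data)), int(0.8 * len(data))
    return data[:train_end], data[train_end:dev_end], data[dev_end:]

train, dev, test = split_70_10_20([f"sent_{i}" for i in range(100)])
print(len(train), len(dev), len(test))  # 70 10 20
```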
5 Baseline Experiments
5.1 Baseline Models
As baselines, we fine-tune several multilingual PLMs, including mBERT (Devlin et al., 2019b), XLM-R (base & large; Conneau et al., 2020), mDeBERTaV3 (He et al., 2021), AfriBERTa (Ogueji et al., 2021), RemBERT (Chung et al., 2021), and AfroXLM-R (base & large; Alabi et al., 2022). We fine-tune the PLMs on each language's training data and evaluate performance on the test set using HuggingFace Transformers (Wolf et al., 2020).
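As a concrete illustration, the following is a minimal token-classification fine-tuning sketch with HuggingFace Transformers. It is not the authors' training script: the base checkpoint, the hyper-parameters (the real ones are in Appendix J), and the single toy Hausa-style sentence are all placeholders.

```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

# BIO labels over the four MasakhaNER 2.0 entity types.
labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC",
          "B-ORG", "I-ORG", "B-DATE", "I-DATE"]

model_name = "xlm-roberta-base"  # stand-in; any of the PLMs above works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(labels))

def encode(words, word_labels):
    """Tokenize one pre-split sentence and align BIO labels to sub-words;
    continuation sub-words get -100 so the loss ignores them."""
    enc = tokenizer(words, is_split_into_words=True, truncation=True)
    label_ids, prev = [], None
    for wid in enc.word_ids():
        label_ids.append(-100 if wid is None or wid == prev
                         else labels.index(word_labels[wid]))
        prev = wid
    enc["labels"] = label_ids
    return enc

# One made-up example; real runs use the per-language train/dev/test splits.
train_data = [encode(["Muhammadu", "Buhari", "ya", "ziyarci", "Kano"],
                     ["B-PER", "I-PER", "O", "O", "B-LOC"])]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ner-out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train_data,
    data_collator=DataCollatorForTokenClassification(tokenizer))
trainer.train()
```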
Massively multilingual PLMs
Table 3 shows the language coverage and size of different massively multilingual PLMs trained on 100–110 languages. mBERT was pre-trained using masked language modeling (MLM) and next-sentence prediction on 104 languages, including swa and yor. RemBERT was trained with a similar objective but makes use of a larger output embedding size during pre-training and covers more African languages. XLM-R was trained only with MLM, on 100 languages and a larger pre-training corpus. mDeBERTaV3 makes use of ELECTRA-style pre-training (Clark et al., 2020), i.e., a replaced token detection (RTD) objective instead of MLM.

PLM (size) | # Lang. | Languages in MasakhaNER 2.0
mBERT-cased (110M) | 104 | swa, yor
XLM-R-base/large (270M / 550M) | 100 | hau, swa, xho
mDeBERTaV3 (276M) | 100 | hau, swa, xho
RemBERT (575M) | 110 | hau, ibo, nya, sna, swa, xho, yor, zul
AfriBERTa (126M) | 11 | hau, ibo, kin, pcm, swa, yor
AfroXLMR-base/large (270M / 550M) | 20 | hau, ibo, kin, nya, pcm, sna, swa, xho, yor, zul

Table 3: Language coverage and size for PLMs.
Africa-centric multilingual PLMs
We also obtained NER models by fine-tuning two PLMs that are pre-trained on African languages. AfriBERTa (Ogueji et al., 2021) was pre-trained on less than 1 GB of text covering 11 African languages, including six of our focus languages, and has shown impressive performance on NER and sentiment classification for languages in its pre-training data (Adelani et al., 2021b; Muhammad et al., 2022). AfroXLM-R (Alabi et al., 2022) is a language-adapted (Pfeiffer et al., 2020) version of XLM-R that was fine-tuned on 17 African languages and three high-resource languages widely spoken in Africa ("eng", "fra", and "ara"). Appendix J provides the model hyper-parameters for fine-tuning the PLMs.
5.2 Baseline Results
Table 4 shows the results of training NER models on each language using the eight multilingual and Africa-centric PLMs. All PLMs provided good performance in general. However, we observed worse results for mBERT and AfriBERTa, especially for languages they were not pre-trained on. For instance, both models performed between 6 and 12 F1 points worse for bbj, wol, or zul compared to XLM-R-base. We hypothesize that the performance drop is largely due to the small number of African languages covered by mBERT as well as AfriBERTa's comparatively small model capacity. XLM-R-base gave much better performance (>1.0 F1) on average than mBERT and AfriBERTa. We found the larger variants of mBERT and XLM-R, i.e., RemBERT and XLM-R-large, to give much better performance (>2.0 F1) than the smaller models. Their larger capacity facilitates positive transfer, yielding better performance for unseen languages. Surprisingly, mDeBERTaV3 provided slightly better results than XLM-R-large and RemBERT despite its smaller size, demonstrating the benefits of RTD pre-training (Clark et al., 2020).

The best PLM is AfroXLM-R-large, which outperforms mDeBERTaV3, RemBERT, and AfriBERTa by +1.3 F1, +2.0 F1, and +4.0 F1, respectively. Even the performance of its smaller variant, AfroXLM-R-base, is comparable to mDeBERTaV3. Overall, our baseline results highlight that large PLMs, PLMs with improved pre-training objectives, and PLMs pre-trained on the target African languages are able to achieve reasonable baseline performance. Combining these criteria provides improved performance, such as AfroXLM-R-large, a