Metadata Privacy Beyond Tunneling for Instant Messaging
Boel Nelson
Aarhus University
boel@cs.dk
Elena Pagnin
Chalmers University of Technology
elenap@chalmers.se
Aslan Askarov
Aarhus University
aslan@cs.au.dk
Abstract—Transport layer data leaks metadata unintentionally
– such as who communicates with whom. While tools for
strong transport layer privacy exist, they have adoption
obstacles, including performance overheads incompatible
with mobile devices. We posit that by moving beyond the objective of
metadata privacy for all traffic, we can open up a new
design space for pragmatic approaches to transport layer
privacy. As a first step in this direction, we propose using
techniques from information flow control and present a
principled approach to constructing formal models of systems
with metadata privacy for some, deniable, traffic. We prove
that deniable traffic achieves metadata privacy against strong
adversaries – to our knowledge, this constitutes the first bridging of
information flow control and anonymous communication.
Additionally, we show that existing state-of-the-art protocols
can be extended to support metadata privacy, by designing a
novel protocol for deniable instant messaging (DenIM), which
is a variant of the Signal protocol. To show the efficacy
of our approach, we implement and evaluate a proof-of-
concept instant messaging system running DenIM on top of
unmodified Signal. We empirically show that DenIM on
Signal can maintain low latency for unmodified Signal traffic
without breaking existing features, while at the same time
supporting deniable Signal traffic.
1. Introduction
Modern instant messaging (IM) services strive for strong
end-to-end security. Services such as Signal, WhatsApp [1], Wire [2], and Facebook Messenger [3] all use the Signal protocol, which is formally secure [4] and achieves ambitious security goals, such as post-compromise security, backward secrecy, confidentiality, and integrity. Still, these IM services lack strong metadata privacy, making them vulnerable to traffic analysis attacks. This is a serious deficiency, because traffic analysis remains an effective mechanism [5], [6] for surveillance and censorship used by governments, organizations, and internet service providers in over 100 countries [7]. For example, China’s “great firewall” actively probes and censors privacy tools [8].
Although collecting metadata may seem non-intrusive,
metadata is used to make critical decisions – “we kill people based on metadata”, as former US government official General Hayden [9] put it. “Harvest today, analyze tomorrow” is a viable adversarial strategy for many such actors.
While the general problem of metadata privacy has been
extensively studied, there are both social and technical
barriers that prevent adoption of existing privacy tools to
IM services. On a social level, people are either unaware
of privacy tools [10] or have diverse misconceptions about them [11]. Adding to the existing problem, people also find these tools too complicated to use, or lack the knowledge of how to use them [12]. For example, Norcie et al. [13] investigated the Tor Browser Bundle and found that users experienced usability issues such as the browser’s launch time and difficulties downloading and installing it. Beyond the challenges of usability, there are risks of being scrutinized for having a particular app installed [14], [15].
On a technical level, available tools are far from perfect.
The Tor project [16], although relatively popular with 2M active users [17], is vulnerable to de-anonymization [18], denial of service (DoS) [19], and traffic analysis [20]. Because Tor can be automatically fingerprinted [5], [6], it is also easy to block (ironically, the authors of this paper were themselves blocked from accessing the Tor project’s website on their organization’s network). Metadata-privacy-focused IM tools that run on Tor [21], [22], [23], [24] suffer from the same issues. Other tools that hide traffic by imitating well-known apps do not produce credible traffic [25].
The strongest guarantees for metadata privacy are provided
by dedicated protocols. In particular, round-based, DC-net-like protocols [26], [27], [28], [29], where predetermined rounds make traffic patterns indistinguishable, are able to resist traffic analysis. However, round-based protocols are both resource-intensive and inflexible. The rounds themselves require constant overhead, which results in poor performance [30], making them especially infeasible for resource-constrained devices such as phones or wearables. Moreover, a major obstacle with round-based protocols is that they depend on fixed sets of individuals participating. That is, participants cannot join or leave without changing the privacy guarantees. Finally, round-based protocols are also easy to fingerprint and block.
Existing approaches to metadata privacy all have in com-
mon that they focus on the strong objective of metadata
privacy for all users all the time. However, such a strong
objective significantly delimits the design space of possible
solutions. We propose a different, pragmatic objective:
rather than offering privacy to all users all the time,
let us offer privacy to all users some of the time. This
shift in objective expands the design space for metadata
privacy to new solutions.
Our new approach is to incorporate metadata privacy
into an existing store-and-forward IM protocol. To that end, we present Deniable Instant Messaging (DenIM) – an IM protocol that provides both message confidentiality and metadata privacy. DenIM distinguishes two kinds of
messages: (i) regular messages that do not require metadata
privacy, and (ii) deniable messages that do require it.
Regular and deniable communication is combined in one
system, and users decide which messages to send privately.
To withstand traffic analysis, deniable messages are not communicated immediately; instead, they are piggybacked on top of the regular messages, which in turn requires that all messages are extended by a small, known number of bytes. The store-and-forward server breaks the link
between the sender and receiver of a private message
by buffering the message until there is an opportunity to
piggyback it on some other regular message to the receiver.
It is vital that the messages are extended even if the
communicating parties have nothing to say, in which case
a dummy payload is sent instead. To minimize overhead,
the size of the payload must be small in proportion to the
overall communication.
The importance of incorporating metadata privacy into an
existing IM protocol aligns with earlier observations in
the literature. As EFF put it, “An app with great security
features is worthless if none of your friends and contacts
use it” [31]. In their paper “Practical Traffic Analysis Attacks on Secure Messaging Applications”, Bahramali et al. [32] recommend that metadata privacy for IM should be adopted by IM services to be effective. An encouraging development in this direction is WhatsApp’s existing use of the Noise Protocol Framework (NPF) [33] to protect certain metadata [1]. Finally, Zuckerman’s [34] cute cat theory of censorship posits that platforms that combine entertainment with political activism are more resilient to censorship than dedicated political platforms.
In our case, we pair DenIM with the (unmodified) Signal
protocol. We call the resulting system DenIM on Signal.
We chose Signal because it provides state-of-the-art security guarantees in instant messaging, including forward secrecy, backward secrecy (post-compromise security), data confidentiality, and integrity (see [4] for details).
DenIM’s piggybacking of deniable messages is a form of
tunneling (e.g., [35], [36], [37]). Yet tunneling alone is insufficient. The reason is that in settings where adversaries are legitimate users in the system, information may be leaked inadvertently through parts of the protocol state that are shared between all users. For example, in Signal, adversaries can gain information about other users through the state of the key distribution center, because the protocol allows users to run out of keys – this lets adversaries count the number of keys a user has and, by extension, deduce how many conversations a user is part of.
To ensure that DenIM guarantees metadata privacy for
deniable messages, including unknown attacks, we use
techniques from secure information flow. Our insight is to
model users’ deniable behavior as user strategies [38] – a
technical device that is traditionally used for specifying
semantic security of interactive and nondeterministic pro-
grams. In DenIM, a user strategy is a function that given a
history of the user’s communication determines their next
deniable action, e.g., send a deniable message, request key
material from the server to initiate new deniable communi-
cation, or block a user from receiving deniable messages.
We recast the notion of metadata privacy as strategy-based
noninterference: user strategies must not leak through the
protocol. The significance of this insight is that because
noninterference is an end-to-end characterization, proving
noninterference requires that there is no way in which the
sensitive information may leak anywhere in the protocol,
not just on the transport layer. In essence, this guides the
features and non-features of DenIM. For example, DenIM
restricts the notification of user blocking, because notifying
a user that they have been blocked leaks information about
the blocking user’s deniable behavior.
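To make the notion of a user strategy concrete, the following minimal sketch (in Python, used purely for illustration; the action set and names are ours, not those of the formal model in Section 5) represents a strategy as a function from the user’s communication history to the next deniable action:

```python
from dataclasses import dataclass
from typing import Callable, Union

# Illustrative deniable actions; the concrete action set is described in Section 4.1.4.
@dataclass
class SendDeniable:
    recipient: str
    plaintext: bytes

@dataclass
class RequestKeys:          # fetch key material to initiate a new deniable session
    peer: str

@dataclass
class BlockUser:
    peer: str

DeniableAction = Union[SendDeniable, RequestKeys, BlockUser, None]

# A user strategy maps the history of the user's own communication
# (here simplified to a list of past events) to the next deniable action.
Strategy = Callable[[list], DeniableAction]

def example_strategy(history: list) -> DeniableAction:
    # Toy strategy: request keys for "bob" once, then send him a deniable message.
    if not any(isinstance(event, RequestKeys) for event in history):
        return RequestKeys(peer="bob")
    return SendDeniable(recipient="bob", plaintext=b"hi")
```

Noninterference then requires that nothing observable to the adversary depends on which such strategy a user follows.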
The contributions of this paper are as follows:
• It presents a deniable variant of the Signal protocol (Section 2.2) called DenIM (Section 4.1), that supports both the original strong cryptographic guarantees of Signal, and metadata privacy.
• It presents a system design that layers deniable Signal messages on top of the unmodified Signal protocol, which we call DenIM on Signal (Section 4).
• It presents a formal privacy analysis (Section 5) that constitutes a principled approach of using information flow techniques to guarantee privacy by proving noninterference.
• It presents a proof-of-concept implementation (Section 6.1) of an instant messaging system with DenIM.
• It presents an empirical evaluation (Sections 6.2 and 6.3) of the performance of DenIM.
2. Background
This section provides an overview of instant messaging
(IM), and the main machinery in the Signal protocol which
we design a deniable version of in Section 4.1.
2.1. Instant messaging
In 2019, instant messaging (IM) services had seven billion
registered accounts worldwide [39]. The most popular IM services include WhatsApp (2B users), Facebook Messenger (1.3B users), iMessage (estimated at 1B users), Telegram (550M users), and Snapchat (538M users) [40], [41].
While IM appears deceptively simple, the sheer number of users and the traffic volume (69M messages/min in 2021 [42]) present several engineering challenges. Keeping up with the demands requires deploying and maintaining robust systems. As an example, WhatsApp’s architecture handles over one million connections per server [43].
All major IM services, including WhatsApp, Facebook Messenger, Telegram, and Snapchat, use centralized servers to forward messages [32]. Many IM apps also come with end-to-end encryption, in addition to server-client encryption (through TLS). Telegram uses its own protocol, MTProto [44], iMessage uses RSAES-OAEP [45], and Snapchat uses an unnamed encryption scheme for some of its content [46]. The most popular protocol is Signal [47], [48], which also has the strongest security guarantees of the mentioned protocols, and is used by WhatsApp [1], Facebook Messenger [3], Wire [2], ChatSecure, Conversations, Pond, the Signal app, and Silent Circle [4]. The Signal protocol is formally secure [4], and is based on Off-the-Record Messaging (OTR) [49] and the Silent Circle Instant Messaging Protocol (SCIMP) [50]. Despite strong cryptographic guarantees, none of the centralized IM services support transport layer privacy for IM.
2.2. The Signal protocol
At a high level the Signal protocol realizes an end-to-end
secure communication channel between two parties that
exchange instant messages in a possibly asynchronous
way (i.e., they may not be online at the same time).
Signal distinguishes itself in the landscape of messaging protocols in that it achieves ambitious security goals, including forward secrecy, backward secrecy (post-compromise security), data confidentiality, and integrity (see [4] for details). This is obtained by managing several different cryptographic keys (Table 1), relying on a semi-trusted centralized server (to store and forward messages, and to implement a key distribution center), and cleverly combining three cryptographic primitives: a key derivation function (KDF), a non-interactive key-exchange protocol (namely DH, for Diffie-Hellman) for initiating new sessions, and an authenticated encryption scheme with associated data (AEAD).
2.2.1. Keys used in Signal. In Signal, each user U holds a set of keys that identify the user, and are used to initiate new sessions (chats) and to AEAD-encrypt messages. Table 1 provides a categorization of the cryptographic key material of Signal that is relevant to this work. Keys employed only to set up new sessions are marked with †.
Name                  Key(s)               Usage
Identity key-pair     {idpk_U, idsk_U}     Long-term
Mid-term key-pair     {prepk_U, presk_U}   Mid-term
Ephemeral key-pair †  {epk_U, esk_U}       One-time
Master secret         ms                   One-time
Message key           mk_{x,y}             One-time

TABLE 1: List of Signal’s keys that are relevant to this work. Ephemeral keys are used in various parts of the Signal protocol; when employed in session initialization they are commonly called one-time keys.
2.2.2. Overview of the Signal Protocol. What follows
recalls the essential facts needed to understand this work
(a full formalization of Signal is available in [51], [4]).
The Signal protocol is made of three main steps:
User registration. Run once in the lifetime of a user in the system. This step entails storing a user’s public key material in the Signal server, namely idpk_U, prepk_U, and a set of (one-time) ephemeral public keys {epk_U^(1), . . . , epk_U^(n)}.
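As a rough illustration of the registration step, the bundle stored at the server could be represented as follows (a hedged sketch; the field names are ours and do not reflect Signal’s actual wire format):

```python
from dataclasses import dataclass, field

@dataclass
class PublicKeyBundle:
    """Public key material user U uploads to the server at registration (illustrative)."""
    idpk: bytes                      # long-term identity public key idpk_U
    prepk: bytes                     # mid-term prekey prepk_U
    ephemeral_pks: list[bytes] = field(default_factory=list)  # one-time keys epk_U^(1..n)

    def pop_one_time_key(self) -> bytes | None:
        # The server hands out one ephemeral key per new session, while any remain.
        return self.ephemeral_pks.pop() if self.ephemeral_pks else None
```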
New-session initialization. Run once per new session initiated by the user. This step is used to start a new chat. The requesting user A interacts with the server to obtain the handle of B, another user, consisting of idpk_B, prepk_B, and a single one-time public key epk_B^(i). A uses B’s keys together with their identity secret key, long-term secret key, and an ephemeral secret key to run a non-interactive key exchange and generate a master secret key ms_AB that is computable only by A and B.
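To illustrate, the sketch below derives a master secret in the spirit of Signal’s X3DH key agreement, using X25519 and HKDF from the pyca/cryptography library. It is a simplified sketch (it omits, e.g., prekey signature verification and Signal’s exact KDF inputs), not the protocol’s actual specification:

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def derive_master_secret(idsk_a: X25519PrivateKey, esk_a: X25519PrivateKey,
                         idpk_b: X25519PublicKey, prepk_b: X25519PublicKey,
                         epk_b: X25519PublicKey) -> bytes:
    # Non-interactive key exchange: several DH computations mixing A's identity
    # and ephemeral secrets with B's identity, mid-term, and one-time public keys.
    dh1 = idsk_a.exchange(prepk_b)
    dh2 = esk_a.exchange(idpk_b)
    dh3 = esk_a.exchange(prepk_b)
    dh4 = esk_a.exchange(epk_b)
    # Combine all shared secrets into a single master secret with a KDF.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"ms_AB sketch").derive(dh1 + dh2 + dh3 + dh4)
```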
The double ratchet mechanism for messaging. Run
every time the user receives or sends a new message. In
Signal, every message is AEAD-encrypted under a different message key mk_{x,y}. We index the message keys by two non-negative integers x, y that operate as coordinates. The value x identifies the current sender, and y the number of messages sent by the current sender since the last change of speaker. Thus, even values of x correspond to events where the current speaker is the initiator of the chat, while y denotes how many messages the sender of level x has sent so far. In order to securely derive new keys from previous ones, the double ratchet mechanism ingeniously combines two KDFs.
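A minimal sketch of the symmetric half of the ratchet, assuming an HMAC-based KDF (the constants, the indexing convention, and the DH ratchet that advances x are simplified or omitted here):

```python
import hmac, hashlib

def chain_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """One KDF step of a sending/receiving chain: returns (next chain key, message key)."""
    next_ck = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    mk = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_ck, mk

def message_key(chain_key_x: bytes, y: int) -> bytes:
    """Derive mk_{x,y} by ratcheting chain x forward (illustrative indexing convention)."""
    ck = chain_key_x
    for _ in range(y + 1):
        ck, mk = chain_step(ck)
    return mk
```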
3. System design
This section presents the scope and goals of our deniable
messaging system, DenIM on Signal. We start by defining
the threat model, which will dictate the necessary design
goals and trust assumptions.
3.1. Threat model
We consider a global active adversary who participates in
the deniable protocol. The adversary can:
• Observe the entire network, including messages to and from the server, and to and from the users.
• Insert or modify traffic.
• Participate in the protocol. This gives the adversary access to the parts of the protocol state that are accessible to all protocol participants, including requesting other users’ keys from the key distribution center (KDC), and sending messages.
The adversary cannot compromise the internal state of
honest parties, including servers.
Under this threat model, the adversary could for example
be an internet service provider, or a nation-state. Given
these capabilities, the goals of the adversary are:
• To learn or to alter the payload of deniable traffic between honest parties.
• To learn whether a given network message contains deniable payload or not.
• To learn whether two parties have an ongoing exchange of deniable traffic or not.
Note that our system makes traffic ‘deniable’ on the transport layer, which is different from, e.g., deniable encryption [52], where the goal is to give deniability for the message content (plaintext) rather than to hide the fact of communication.
3.2. Design goals
The high-level goal of our system is to be resilient against
adversaries with the goals in Section 3.1. Additionally,
tunneling deniable traffic inside instant messaging systems
requires making decisions regarding performance trade-
offs between the deniable traffic and the regular traffic.
We derive the design goals for security and privacy
(Section 3.2.1) based on the threat model and instant
messaging use case, and design goals for performance
(Section 3.2.2) from the use case.
3.2.1. Security and privacy goals.
Confidentiality of users’ deniable behavior. A consequence of our threat model is that an adversary could try to infer users’ deniable behavior both by observing the network and by observing shared protocol state.
protection measures therefore depend on data the users
generate by interacting with the deniable protocol not
leaking into channels the adversary can observe (the
network and the shared state). That is, a successful im-
plementation depends on proving noninterference between
the deniable protocol and the protocol it piggybacks on
– noninterference ensures all of the users’ input to the
deniable protocol is kept confidential, not just that the
network traffic is protected.
Privacy guarantees independent of the number of online
users. To achieve strength-in-numbers, we aim for a design where the privacy guarantees do not depend on the
dynamic behavior of the system, i.e., users may join or
leave the system without significantly affecting the privacy
of others. This means that the system should tunnel the
deniable traffic using an observable protocol that does not
achieve transport layer privacy on its own.
Strong security guarantees for deniable messages.
Message content should be protected using state-of-the-
art techniques, which for IM can be achieved via the
Signal protocol. Signal is more than a mere key
exchange protocol; it is designed to deliver not only confi-
dentiality and integrity, but also more advanced security
features such as key healing. We aim to maintain the
same security benefits provided by Signal by carefully
building our deniable messaging machinery around the
Signal protocol in a way that does not impact Signal’s
security functionalities.
3.2.2. Performance goals.
Parameterizable bandwidth overhead for deniable
traffic. To control the privacy-performance trade-offs in
the system, the deniable payload overhead should be a
global tuneable parameter that is set on a case-by-case
basis to match a user population’s demand for deniable
traffic. There should be no limitation on the length of the
regular traffic.
Prioritize low latency for regular traffic. We prioritize
the performance of regular traffic – it is important that
users continue to use the regular IM system – above
the performance of the deniable traffic. This creates an
asymmetry in the latency of regular and deniable com-
munication. Regular Signal traffic is forwarded immediately, resulting in low latency overhead. For DenIM, the latency depends on when traffic
can be safely piggybacked. While a system with different
privacy guarantees for different messages like this has not
yet been studied from the usability perspective, we assume
that a higher latency overhead for deniable communication
is tolerable as the privacy guarantees are stronger.
3.3. Trust assumptions
The previously stated design goals, combined with the threat model, lead to the following trust assumptions:
• The adversary cannot access the internal state of honest parties.
• Users trust receivers of their deniable traffic, i.e., users are by design not able to deny having sent traffic to their intended receiver.
• Users’ deniable behavior does not influence their regular behavior, e.g., a user does not send more regular traffic than they normally would to piggyback their deniable traffic.
• The forwarding servers are trusted.
• The KDC is trusted, and can generate ephemeral keys on behalf of a user in case the user’s deniable ephemeral keys have been depleted.
• Users do not issue deniable key requests for adversaries’ keys, and do not respond to deniable Signal sessions initiated by adversaries.
Note that our trust assumptions to a large extent are
inherited from the use case, IM, and from the Signal
protocol. For example, centralized, trusted servers are the natural setting for IM. Moreover, Signal assumes that a user is able to verify that the receiver of messages is a trusted party using an out-of-band channel – in their deployment they support this by providing a QR code that both parties are supposed to verify in person.
Our formal model (Section 5) incorporates the trust as-
sumptions at a technical level.
4. DenIM on Signal
This section presents DenIM on Signal, an instant messag-
ing system that supports two different protocols: regular
Signal, and our deniable variant of Signal, Deniable Instant
Messaging (DenIM). DenIM is a centralized IM protocol
with both the cryptographic guarantees of the Signal pro-
tocol, and transport layer privacy for messages. In DenIM
on Signal the deniable protocol, DenIM, piggybacks on
an unmodified version of Signal.
At a high level, DenIM on Signal provides users with
two communication abstractions: sending ‘regular’ Signal
traffic that is not resilient to traffic analysis, and sending
‘deniable’ Signal messages that come with transport layer
privacy. To prevent an adversary from trivially inferring
which users are communicating, DenIM on Signal uses
a simple centralized architecture where traffic is routed
through a trusted server. The server forwards the regular
Signal traffic immediately, and stores the DenIM traffic
until there is regular traffic for the intended recipient to
piggyback on. To prevent an adversary from tracing traffic
by fingerprinting it as it is forwarded by the server, the
traffic between clients and server is sent over TLS.
We model and prove the security of our implementation
in Section 5, and empirically evaluate how bandwidth
overhead affects system performance both for deniable
and regular traffic in Section 6.
4.1. Protocol details
In this section we elaborate on the technical details of
DenIM. We explain how the deniable part of a network
message is created (Section 4.1.1), how and where deniable
Figure 1: Diagram representation of the communication
flow in DenIM. R and D denote regular and deniable
communication, respectively. Double lined boxes represent
TLS tunneled traffic. Odd steps (1 and 3) are performed by
clients, even steps (2 and 4) are performed by the server.
parts get buffered (Section 4.1.2), and which content can be
carried in the deniable part, i.e., what deniable actions are
supported by DenIM (Section 4.1.4). DenIM is a variant of
Signal – we make a minor change to the Signal protocol
(Section 4.1.3) to ensure that DenIM is a deniable variant of
Signal (the standard Signal protocol would otherwise leak
information about a user’s deniable sessions), but otherwise
encapsulates Signal. We stress that this modification does
not impact the cryptographic security.
Communication flow by example. Figure 1 presents an
example of communication flow in DenIM on Signal. The
purple user (upper left) has queued a deniable message
(D) waiting to be sent to the orange user (lower right). As
purple sends a regular message (R) to the green user (upper
right), part of their deniable message for the orange user
is added to the deniable padding. The server immediately
forwards the regular message to green, and as there are
no deniable messages queued for green, the message is
padded with dummy padding. Next, the blue user (lower
left) sends a regular message to orange. The server forwards
the regular message to orange, and adds purple’s deniable
message that has been waiting on the server to the padding
of the message for orange.
4.1.1. Deniable padding. The size of the deniable part of a network message is lq, where l is the length of the regular part, and q is a system-wide padding parameter set by the server. Both l and q are publicly known. Because of the strict size limit on the deniable part, deniable communication is chunked to fit the deniable part. If there is no deniable communication (or its length is less than lq), the deniable part is padded to always reach length lq.
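The client-side construction of the deniable part could look like the following sketch, assuming q is a ratio applied to the regular length (the function and parameter names are ours):

```python
import os

def make_deniable_part(deniable_queue: bytearray, l: int, q: float) -> bytes:
    """Build the fixed-size deniable part attached to a regular message of length l."""
    budget = int(l * q)                         # lq bytes; l and q are public
    chunk = bytes(deniable_queue[:budget])      # take the oldest queued deniable bytes
    del deniable_queue[:budget]
    # Dummy padding covers the remainder, so every message carries exactly lq deniable bytes.
    padding = os.urandom(budget - len(chunk))
    return chunk + padding
```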
4.1.2. Deniable buffers. Each client keeps a deniable
buffer to allow the user to queue new deniable messages
at any time. Any time the user sends a regular message,
lq bytes of the oldest deniable message in the buffer are added as padding to the regular message.
The server keeps one deniable buffer per user. When
receiving a message, the server extracts the regular part of
the message, and creates a new message for the receiver
and adds a deniable part – either from the recipient’s
deniable buffer or dummy padding. Note that depending
on the implementation strategy, there may be subtle
timing channels here. In particular, it may be desirable to
handle the deniable parts of the incoming message only
after the response has been processed. This is because
differences in timing could leak information about the
deniable part – such as how many deniable actions were
piggybacked.
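A simplified sketch of the server-side piggybacking logic (message parsing, TLS, and the timing-channel mitigation discussed above are elided; names and structure are ours):

```python
from collections import defaultdict
import os

class DenIMServer:
    """Sketch of the store-and-forward server keeping one deniable buffer per user."""

    def __init__(self, q: float):
        self.q = q                                                   # public padding parameter
        self.buffers: dict[str, bytearray] = defaultdict(bytearray)  # deniable buffer per user

    def enqueue_deniable(self, true_recipient: str, chunk: bytes) -> None:
        # Called after decoding the deniable part of an incoming message;
        # the chunk waits here until it can be piggybacked to its true recipient.
        self.buffers[true_recipient].extend(chunk)

    def forward(self, recipient: str, regular: bytes) -> bytes:
        # The regular part is forwarded immediately; the deniable part is filled from
        # the recipient's buffer and topped up with dummy padding to exactly l*q bytes.
        budget = int(len(regular) * self.q)
        buf = self.buffers[recipient]
        piggyback = bytes(buf[:budget])
        del buf[:budget]
        piggyback += os.urandom(budget - len(piggyback))
        return regular + piggyback
```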
4.1.3. Changes to Signal. A known (and often overlooked) weakness of Signal is that if a user runs out of ephemeral keys, new sessions are initialized with less randomness by reusing the mid-term key instead of a one-time key.
Standard Signal mitigates this issue by letting users refill
ephemeral keys at any point in time to avoid running out
of keys.
However, if the key distribution center (KDC) were to
fail to return a deniable ephemeral key for a user because
they have run out of keys, it would leak information about
the number of deniable sessions a user has. Therefore,
the number of deniable ephemeral keys each user stores
in the KDC must be kept secret from the adversary. To limit
traffic between the KDC and the user, and still maintain
the randomness for the generation of the master secret,
DenIM lets the KDC generate new deniable ephemeral
keys on behalf of the user. To keep the KDC and
client in sync, the client provides a seed for the KDC to
be used as input for a deterministic key generator upon
user registration. The KDC keeps a counter for how many
times the key generator has been used, and the value of
the counter is sent to the corresponding client with each
deniable message. We stress that this change to Signal
still means that the deniable messages will be end-to-end
encrypted between sender and recipient – the server will at
most have access to one of the three keys (the ephemeral
one) used to generate the master secret.
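The deterministic generation of deniable ephemeral keys could be sketched as follows (one possible instantiation, assuming an HMAC-based generator over X25519 via the pyca/cryptography library; DenIM’s actual key generator may differ):

```python
import hmac, hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def derive_ephemeral_key(seed: bytes, counter: int) -> X25519PrivateKey:
    """Deterministically derive the counter-th deniable ephemeral key from the seed."""
    secret = hmac.new(seed, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return X25519PrivateKey.from_private_bytes(secret)

# KDC side: derive the next public key and record how many keys have been generated.
seed = b"\x00" * 32          # registered by the client (illustrative value)
counter = 7                  # number of keys generated so far for this user
epk = derive_ephemeral_key(seed, counter).public_key()
# `counter` is sent to the client with each deniable message, so the client can
# recompute the matching secret key by calling derive_ephemeral_key(seed, counter).
```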
4.1.4. Supported deniable actions. In DenIM we support
all actions needed to implement the Signal protocol’s ses-
sion initialization and double ratchet mechanism. However,
we do not support all functionality that the Signal IM app
supports. For example, we do not support group chats and
video calls. We have intentionally chosen not to support
specific functionality to prevent adversaries from learning
about users’ deniable behavior. First, we do not support
read receipts for messages, since they leak to an adversary
if or when a user receives a deniable message. Second,
we support users blocking other users, but with the twist
that blocked users are not informed that they have been
blocked. An adversary who could learn that they have been blocked could, for example, flood a user with messages to provoke the user into blocking them, and then use the time at which they were blocked to infer that the user has received their deniable messages, which also leaks that the user’s deniable buffer has been drained.
In order to use DenIM, each user needs to upload Signal
keys and a seed for the key generator to the KDC when registering. We support the following deniable Signal
actions:
Key exchanges. Users can send key requests to initiate
new Signal sessions, and the server responds with a key
response containing the user’s public identity key, mid-
term key, and, crucially, always an ephemeral key – unlike in standard Signal, where the KDC may run out of ephemeral