ARTICLE POSTPRINT
it completely.^2 Furthermore, the adversary also receives the information sent across the public channel, but cannot alter or block it in any way.^3 The goal for Alice and Bob is to transmit data via the channels in a number of rounds such that, at the end, they know a shared random key that is completely unknown to the adversary. As in [11], each round allows Alice to send information to Bob, or Bob to send information to Alice, but not both within the same round. We will refer to this problem as private randomness agreement (PRA). Note that the BB84 protocol [1] also aims to provide private randomness, but in the setting where Alice and Bob are connected via a direct quantum link.
Regarding the number of corrupted channels, we consider the worst case t = n − 1.^4 As we will show, the problem does in fact have a solution in this case. This highlights a key difference between PMT and PRA, as it is well-known that PMT cannot be solved for 2t ≥ n in the case of actively corrupted channels [11, Thm. 5.2]. Thus, the changes introduced in PRA allow us to tolerate a much more powerful adversary.
We now turn to defining precisely what we mean by privacy and reliability in the PRA setting. The definitions resemble those used in PMT [11], [12], but there is an important difference in the case of privacy. In PMT, a protocol being private essentially means that the adversary cannot distinguish between protocol transcripts when sending m and when sending m′. But in that setting, the intended message m is known a priori. In PRA, Alice and Bob have not decided on a desired key beforehand. Furthermore, if the protocol fails and Alice and Bob output different keys, it is unclear what should be considered the ‘intended’ key in that execution. Thus, we will consider transcripts conditioned on the outputs of both Alice and Bob. More precisely, let Π be a PRA-protocol, and assume that Alice and Bob will output keys k_A and k_B, respectively, where k_A and k_B are both sampled from a set K of possible keys. Let A be an adversary, and assume that its input randomness is r. That is, the random choices made by A are specified by r, similarly to how the random tape is used to define probabilistic Turing machines. For such an adversary and a protocol Π, we then define the random variable A_Π(k_A, k_B, r) describing the transcript of Π from the view of A, conditioned on the output of Alice being k_A and the output of Bob being k_B. The probability distribution of this variable depends on the randomness of both Alice and Bob.
In the following definition and throughout the work, we assume that the key space K has the structure of an additive group. This is not an unreasonable assumption if the key is later to be used as a one-time pad. With this assumption, addition of keys in K is well-defined and gives another key in K.
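To make the group structure concrete, one natural instantiation (our own illustrative choice; the text fixes no particular K) is K = F_2^ℓ, where key addition is bitwise XOR and directly doubles as one-time-pad encryption:

```python
import secrets

ELL = 16  # key length in bytes; an illustrative choice

def add_keys(a: bytes, b: bytes) -> bytes:
    """Addition in K = F_2^ell, i.e. bitwise XOR."""
    return bytes(x ^ y for x, y in zip(a, b))

k1 = secrets.token_bytes(ELL)
k2 = secrets.token_bytes(ELL)

# Closure: the sum of two keys is again a key of the same length,
# and every key is its own inverse: k + k = 0.
assert len(add_keys(k1, k2)) == ELL
assert add_keys(k1, k1) == bytes(ELL)

# Used as a one-time pad: encrypt by adding the key, decrypt by adding it again.
msg = b"sixteen byte msg"
ct = add_keys(msg, k1)
assert add_keys(ct, k1) == msg
```

Any other finite additive group, such as F_q^ℓ as used below, works equally well.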
^2 Otherwise, A would be passive. This has a trivial solution: Alice samples a random key k ∈ F_q^ℓ, creates an additive sharing k = k_1 + k_2 + ··· + k_n, and then routes k_i via the i’th channel. When A is passive, Bob is guaranteed to receive the original shares sent by Alice, meaning that he can always reconstruct k. The key k remains private by the (n − 1)-privacy of additive secret sharing.
^3 If the adversary could tamper with the public channel, it would simply be another channel with the same properties as the main channels. That is, the problem reduces to standard PMT.
^4 Clearly, the case t = n is impossible to solve, since an adversary could block all channels under its control.
Definition 1: A PRA-protocol Π is called δ-reliable if the following holds.
• Π is perfectly private. That is, for any k_A, k′_A, e ∈ K the transcripts A_Π(k_A, k_A + e, r) and A_Π(k′_A, k′_A + e, r) are perfectly indistinguishable.
• When using Π, the probability that Alice and Bob output different keys is at most δ.
In essence, the guarantee provided by Definition 1 is that the adversary cannot learn the specific keys that Alice and Bob output. It may learn whether the protocol has succeeded or not, as well as the difference e = k_B − k_A, but the reliability requirement ensures that e = 0 except with some (typically very small) probability. Note that this is in line with the security in PMT: we require Alice’s message to remain private, but do not guarantee anything about the output of Bob when reliability fails.
We note here that it is possible to define a stronger privacy notion, namely to require A_Π(k_A, k_B, r) ∼ A_Π(k′_A, k′_B, r) for any k_A, k′_A, k_B, k′_B ∈ K. This stronger privacy notion implies that the adversary is not even allowed to learn whether the protocol has succeeded or not, and hence it also learns nothing about the difference k_B − k_A. This seems to be a much more difficult problem to solve, so we limit ourselves to the security provided by Definition 1.
We note that the statement of protocol privacy is somewhat verbose. In particular, perfect privacy implies that for any error e, the transcript of the adversary must be distributed in the same way regardless of the output of Alice, k_A. Thus, we can drop k_A from the notation, and simply use A_Π(e, r) to denote the random transcript in an execution of Π. Translating these observations to statements on entropy, the protocol Π being perfectly private implies

H(K_A | A_Π(e, r)) = H(K_A),
H(K_B | A_Π(e, r)) = H(K_B),        (1)

where H denotes the usual binary Shannon entropy [13], and K_A and K_B denote the random variables describing the outputs of Alice and Bob, respectively.
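The equalities in (1) can be verified exhaustively for a toy instantiation of our own choosing: a single-element key (ℓ = 1) over F_3, additively shared as k_A = k_0 + k_1 + k_2, with an adversary that sees the public share k_0 and the share k_2 on its corrupted channel:

```python
from collections import defaultdict
from fractions import Fraction
from math import log2

q = 3  # tiny field so every execution can be enumerated exactly

# Joint distribution of (K_A, transcript), with transcript T = (k_0, k_2).
joint = defaultdict(Fraction)
for kA in range(q):
    for k0 in range(q):
        for k1 in range(q):
            k2 = (kA - k0 - k1) % q
            joint[(kA, (k0, k2))] += Fraction(1, q**3)

def entropy(dist):
    return -sum(float(p) * log2(float(p)) for p in dist.values() if p)

# Marginals of K_A and of the transcript T.
pA, pT = defaultdict(Fraction), defaultdict(Fraction)
for (kA, t), p in joint.items():
    pA[kA] += p
    pT[t] += p

# Conditional entropy H(K_A | T) = sum_t p(t) * H(K_A | T = t).
H_cond = 0.0
for t, pt in pT.items():
    cond = {kA: joint[(kA, t)] / pt for kA in range(q)}
    H_cond += float(pt) * entropy(cond)

# Both entropies equal log2(3): the transcript reveals nothing about K_A.
assert abs(entropy(pA) - H_cond) < 1e-12
```

The same computation with the roles of K_A and K_B swapped confirms the second equality.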
To illustrate the requirements of Definition 1, we consider a minimal example similar to the case of a passive adversary described above. This gives a protocol that is private against an active adversary, but provides virtually no reliability.

Example 1: Let n = 2, meaning that the channels are the public channel, one honest channel, and one corrupt channel. Assume for clarity that the corrupt channel has index 2. Consider a protocol where Alice samples a key k_A ∈ F_q^ℓ uniformly at random, and creates a sharing k_0 + k_1 + k_2 = k_A of the key. She then sends k_0 across the public channel, and k_i across the i’th main channel. Bob receives k_0, k̃_1 = k_1, and k̃_2, where the latter may be altered by the adversary (but Bob does not know whether k̃_1 or k̃_2 is the one provided by the adversary). Our toy protocol specifies that he should then output k_B = k_0 + k̃_1 + k̃_2.
Note now that regardless of the outputs of Alice and Bob, k_A and k_B, the transcript A_Π(k_A, k_B, r) always comprises only k_0, k_2, and k̃_2. This specifies the difference e = k_B − k_A = k̃_2 − k_2 chosen by the adversary. Further, k_1 acts as