
A. Unp∗-privacy
We first briefly review the RFID system model and the
adversary model of unp∗-privacy. An RFID system consists
of a reader R and a set of tags T. An RFID tag/mutual
authentication protocol contains three rounds. A reader first
sends a challenge c to a tag, then the tag responds with a
message r, and finally the reader sends the last message f.
P_c, P_r, and P_f are the message spaces of c, r, and f, respectively.
An adversary is given access to the following oracles:
• O1: Upon being queried, the reader initializes a session and
returns (sid, c).
• O2: On inputs (Ti, sid, c), it returns a message r.
• O3: On inputs (sid, c, r), it returns a message f.
• O4: On an input Ti, it returns the tag Ti's secret keys and
internal state information.
Let O denote the set of the four oracles {O1, O2, O3, O4}
specified above. An adversary is a (t, n1, n2, n3, n4)-
adversary if it makes at most ni oracle queries to Oi,
where 1 ≤ i ≤ 4, and its running time is at most t.
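As an illustration, the oracle interface can be modeled as a small Python class. This is a hypothetical sketch: the class name and the message computations are placeholders (a real system derives r and f from the underlying protocol); only the query/response shapes follow the definitions above.

```python
import secrets

class RFIDOracles:
    """Hypothetical sketch of the adversary's interface O1-O4.
    Real protocol computations are replaced by random placeholders."""

    def __init__(self, tags):
        # tags: dict mapping a tag identifier T_i -> (secret_key, internal_state)
        self.tags = tags
        self.sessions = {}
        self._next_sid = 0

    def O1(self):
        """The reader initializes a session and returns (sid, c)."""
        sid = self._next_sid
        self._next_sid += 1
        c = secrets.token_bytes(16)     # challenge c drawn from P_c
        self.sessions[sid] = {"c": c}
        return sid, c

    def O2(self, tag_id, sid, c):
        """Tag T_i answers challenge c in session sid with a message r."""
        r = secrets.token_bytes(16)     # placeholder for the tag's reply
        self.sessions[sid]["r"] = r
        return r

    def O3(self, sid, c, r):
        """The reader produces the final message f for session sid."""
        f = secrets.token_bytes(16)     # placeholder for the reader's reply
        self.sessions[sid]["f"] = f
        return f

    def O4(self, tag_id):
        """Corruption: return tag T_i's secret keys and internal state."""
        return self.tags[tag_id]
```

A (t, n1, n2, n3, n4)-adversary would simply be any caller that invokes Oi at most ni times within running time t.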
We use the following notation. If A(·, ·, · · · ) is a ran-
domized algorithm, then y ← A(x1, x2, . . . ; ρ) means that
y is assigned the unique output of algorithm A on
inputs x1, x2, . . . and coins ρ, while y ← A(x1, x2, . . .) is
shorthand for first picking ρ at random and then setting
y ← A(x1, x2, . . . ; ρ). y ← A^{O1,...,Oυ}(x1, x2, . . .) denotes that
y is assigned the output of algorithm A, which takes
x1, x2, . . . as inputs and has oracle access to O1, . . . , Oυ.
Pr[E] denotes the probability that an event E occurs.
Now we introduce unp∗-privacy. Intuitively, achieving unp∗-
privacy requires protocol transcripts to be unpredictable, and
protocol execution results to be unobservable. The experiment
Exp_A^{unp∗}[κ, l, n1, n2, n3, n4], denoted Exp_A^{unp∗} for short,
is illustrated in Figure 1. Given the security parameter κ, an
RFID system is set up with a reader R and a set of l tags,
where l is polynomial in κ. An adversary A can launch at most
n1, n2, n3, and n4 overall oracle queries to
O1, O2, O3, and O4, respectively, throughout the experiment.
A consists of two algorithms, A1 and A2, which run in two
stages, the learning stage and the guess stage, respectively. In
the learning stage, A1 queries the four oracles and outputs an
uncorrupted challenge tag Tc and state information st. Then
the experiment chooses b ∈R {0,1}. In the guess stage, if
b = 1, the experiment forwards A2's queries to the oracles
and returns the results, so that A2 can really interact with the
reader and Tc; else, the experiment returns random values from
the appropriate message spaces. Finally, A2 guesses the value of b and
outputs b′. The experiment outputs 1 if b′ = b, and outputs 0
otherwise.
Definition 3.1: The advantage of adversary A in the exper-
iment Exp_A^{unp∗} is defined as:

Adv_A^{unp∗}(κ, l, n1, n2, n3, n4) = |Pr[Exp_A^{unp∗}(κ, l, n1, n2, n3, n4) = 1] − 1/2|,

where the probability is taken over the choice of the tag set
T and the coin tosses of the adversary A.
Experiment Exp_A^{unp∗}[κ, l, n1, n2, n3, n4]
1. run Setup(κ) to set up (R, T).
// learning stage
2. {Tc, st} ← A1^O(R, T).
3. select b ∈R {0,1}.
// guess stage
4. b′ ← A2^{O1,O2,O3}(R, Tc, st);
in this stage, when A2 queries O1, O2, and O3,
if b = 1, return the results from the oracles;
else, return a random element from P_c, P_r, and P_f,
respectively.
5. output 1 if b′ = b; else, output 0.
Fig. 1: Unp∗-Privacy Experiment
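The real-or-random branching of the experiment can be sketched in Python as follows. This is only an illustrative sketch: `A1`, `A2`, and the `oracles` object are hypothetical stand-ins (any object exposing O1, O2, and O3 methods works), and the message spaces P_c, P_r, P_f are modeled as 16-byte strings.

```python
import secrets

def unp_star_experiment(oracles, A1, A2):
    # learning stage: A1 queries the oracles and outputs an uncorrupted
    # challenge tag T_c together with state information st
    T_c, st = A1(oracles)

    b = secrets.randbelow(2)  # b chosen uniformly from {0, 1}

    if b == 1:
        # real game: forward A2's queries to the actual oracles
        view = {"O1": oracles.O1, "O2": oracles.O2, "O3": oracles.O3}
    else:
        # random game: answer from the message spaces P_c, P_r, and P_f
        view = {
            "O1": lambda: (0, secrets.token_bytes(16)),
            "O2": lambda *args: secrets.token_bytes(16),
            "O3": lambda *args: secrets.token_bytes(16),
        }

    # guess stage: A2 interacts with its view and outputs a guess b'
    b_guess = A2(view, T_c, st)
    return 1 if b_guess == b else 0
```

An A2 that guesses at random makes the experiment output 1 with probability exactly 1/2, i.e., it has advantage 0.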
Definition 3.2: An adversary A (ε, t, n1, n2, n3, n4)-breaks
the unp∗-privacy of the RFID system (R, T) if the advantage
Adv_A^{unp∗}(κ, l, n1, n2, n3, n4) of A in the experiment Exp_A^{unp∗}
is at least ε, and the running time of A is at most t.
Definition 3.3 (Unp∗-Privacy): An RFID system (R, T)
is said to be (ε, t, n1, n2, n3, n4)-unp∗-private if, for all
sufficiently large κ, there exists no adversary who can
(ε, t, n1, n2, n3, n4)-break the unp∗-privacy of (R, T) for any
(ε, t), where t is polynomial in κ and ε is non-negligible in κ.
B. Correction on the relation between ind-privacy and unp∗-
privacy
Li et al. proved that unp∗-privacy implies ind-privacy [28].
However, Yang et al. claimed that unp∗-privacy does not
imply ind-privacy [29]. To support this claim, they provided
a counterexample, formally proved that it satisfies unp∗-
privacy, and then showed that it does not satisfy ind-privacy
through a traceability attack. However, we discover that the
counterexample does not satisfy unp∗-privacy in the first place.
We review the counterexample first. Let F: {0,1}^{lk} ×
{0,1}^{ld} → {0,1}^{lr} be a PRF family, ctr ∈ {0,1}^{lr} be a
counter, and pad ∈ {0,1}^{lpad} be a padding, where lc + lpad =
ld and lc is the length of the challenge. Each tag Ti has
a unique identity IDi, and is assigned a secret key
ki ∈R {0,1}^{lk}. Ti stores ki, a counter ctri with an initial
value 1, and a one-bit tag state sti with an initial value 0. The
protocol works as follows.
1) The reader R chooses c ∈R {0,1}^{lc} and sends it to Ti.
2) Upon receiving c, the tag Ti first chooses r2 ∈R {0,1}^{lr}.
Then Ti calculates r1 = F_{ki}(c||pad) ⊕ ctri if sti = 0; else
r1 = F_{ki}(c||r2) ⊕ ctri.¹ Finally, Ti updates the counter
as ctri = ctri + 1, sets sti = 1, and sends (r1, r2) to R.
3) Upon receiving (r1, r2) from Ti, the reader R searches
for the matching tag in its database. If R discovers a
tuple (k, ctr, ID) such that ctr = F_k(c||pad) ⊕ r1, then R
accepts Ti as the tag with identity ID, updates ctr =
ctr + 1, and computes f = F_k(c||ctr||r2); else if there exists
(k, ctr, ID) such that ctr = F_k(c||r2) ⊕ r1, then R accepts
Ti as the tag with identity ID, updates ctr = ctr + 1, and computes
f = F_k(c||ctr||r2); or else, R rejects Ti and chooses
f ∈R {0,1}^{lr}. At last, R sends f to Ti.
¹In the counterexample, some inputs of F are not of length ld. We
do not correct these mistakes.
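For concreteness, the counterexample protocol can be sketched in Python. This is a hypothetical sketch, not the paper's implementation: HMAC-SHA256 truncated to lr bytes stands in for the PRF F (the counterexample does not fix a concrete F), and the byte lengths below are illustrative.

```python
import hmac, hashlib, secrets

LC, LR = 16, 16               # illustrative byte lengths for c and the PRF output
PAD = b"\x00" * LR            # stand-in for the fixed padding `pad`

def prf(key: bytes, data: bytes) -> bytes:
    """Stand-in for F_k: HMAC-SHA256 truncated to LR bytes (an assumption)."""
    return hmac.new(key, data, hashlib.sha256).digest()[:LR]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class Tag:
    def __init__(self, tid: str, key: bytes):
        self.tid, self.key = tid, key
        self.ctr, self.st = 1, 0          # counter starts at 1, state bit at 0

    def respond(self, c: bytes):
        """Step 2: compute (r1, r2), then update ctr and st."""
        r2 = secrets.token_bytes(LR)
        ctr_bytes = self.ctr.to_bytes(LR, "big")
        if self.st == 0:
            r1 = xor(prf(self.key, c + PAD), ctr_bytes)   # first response: pad
        else:
            r1 = xor(prf(self.key, c + r2), ctr_bytes)    # later responses: r2
        self.ctr += 1
        self.st = 1
        return r1, r2

class Reader:
    def __init__(self, db):
        self.db = db                      # list of [key, ctr, ID] tuples

    def challenge(self):
        return secrets.token_bytes(LC)    # step 1: choose c at random

    def finish(self, c, r1, r2):
        """Step 3: search for a tuple matching either branch, update ctr,
        and compute f; on failure, reject with a random f."""
        for entry in self.db:
            k, ctr, tid = entry
            ctr_bytes = ctr.to_bytes(LR, "big")
            if (xor(prf(k, c + PAD), r1) == ctr_bytes
                    or xor(prf(k, c + r2), r1) == ctr_bytes):
                entry[1] = ctr + 1
                f = prf(k, c + entry[1].to_bytes(LR, "big") + r2)
                return tid, f
        return None, secrets.token_bytes(LR)
```

Note that the reader's two checks mirror the tag's two branches: the pad branch only ever matches a tag's very first response (st = 0), after which both sides keep their counters synchronized by incrementing in lockstep.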