
(e.g., a crypto library calls memcpy in libc), and library de-
velopers are unlikely to widely adopt ciphertext side-channel
countermeasures themselves. Finally, we target application
developers who build code on top of third-party libraries and
who do not have the necessary insight to manually fix leak-
ages in those libraries. Thus, a drop-in solution with little
manual interaction is desirable here.
There are two major approaches to achieving this: one could either
create a compiler extension that rewrites vulnerable memory
accesses at compile time, or modify existing binaries through
SBI. A pure compiler-based solution needs to recompile all
dependencies, which is complex and requires manual inter-
vention. A combination of DBI and SBI can work directly
with the compiled binaries and, given sufficient coverage, ac-
curately identify and harden vulnerable memory writes. For
these reasons, CIPHERFIX aims for a binary instrumentation-
based solution. The trade-off between binary- and source-based approaches is further discussed in Section 7.1.
3.3 Protecting Memory Writes
In order to protect an existing binary from being attacked
through a ciphertext side-channel, the content-based patterns
of write accesses to memory have to be obscured. In [31],
the authors propose various approaches for randomizing ob-
served ciphertexts: first, by limiting reuse of memory locations by using a new address for each memory write;
second, by interleaving data with random nonces; and third,
by applying a random mask when writing data. The first
approach uses the fact that different memory addresses get
different tweak values in the memory encryption, but has a
high overhead when applied outside of well-defined condi-
tions. The second approach requires extensive changes to data structures, which entails many pitfalls and needs to be done by the compiler. Due to its lower overhead and higher practicability,
we thus opt for the last approach, i.e., we add a random mask
whenever an instruction writes secret data to main memory.
We further discuss the different approaches in Section 7.3.
The masking of data takes place before memory writes
and after memory reads. To store the masks belonging to a
particular memory chunk (e.g., a C++ object), we allocate a
mask buffer of the same size, so there is a one-to-one mapping
of data bytes to mask bytes. When writing data, we generate
and store a new mask, XOR it with the plaintext, and store the
masked plaintext; when reading, we read the mask and then
decode the masked plaintext. Note that we need to ensure that
at no point non-encoded secret data is written to memory, so
all decoding must be done in secure locations like registers.
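To make the mechanics concrete, the following C sketch shows byte-wise masked stores and loads against a mask buffer that mirrors the data buffer one-to-one. The helper names and the byte-granular interface are illustrative assumptions; CIPHERFIX itself instruments the store and load instructions of the existing binary rather than routing accesses through such functions.

#include <stdint.h>
#include <stdlib.h>

/* Illustrative RNG helper; a real hardening pass would draw masks
   from a cryptographically secure source, not rand(). */
static void fill_random(uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] = (uint8_t)rand();
}

/* Write: generate a fresh mask, remember it in the mask buffer, and
   store only the masked value, so each write yields a new ciphertext. */
static void masked_store(uint8_t *data, uint8_t *mask, size_t i, uint8_t value)
{
    uint8_t m;
    fill_random(&m, 1);
    mask[i] = m;
    data[i] = value ^ m;
}

/* Read: fetch mask and masked value and decode; the real
   instrumentation keeps the decoded plaintext in registers only. */
static uint8_t masked_load(const uint8_t *data, const uint8_t *mask, size_t i)
{
    return data[i] ^ mask[i];
}

Even if the same plaintext value is written repeatedly, each masked_store picks a fresh mask, so the ciphertext observed through the ciphertext side channel changes on every write.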
3.4 Tracking Data Secrecy at Runtime
While masking all memory writes provides good protection,
it comes with a high overhead. In fact, only a fraction of all
memory writes relate to secret information.

[Figure 2 depicts example byte values for the data, secrecy, mask, and plaintext buffers in both variants.]

Figure 2: CIPHERFIX-BASE stores the secrecy information in a separate buffer, and uses it to decide whether a given mask byte should be applied or not. This allows us to safely keep non-zero mask bytes for public data, as they are ignored if the corresponding secrecy bytes are zero. In contrast, CIPHERFIX-FAST stores this information directly in the mask buffer, i.e., a mask byte is zero iff the corresponding data is public.

As we assume
that the implementation is constant-time, there is no secret-
dependent control flow, so, for example, return addresses
pushed onto the stack by function calls can be safely written
in clear text. The same is true for the data structures used by
the heap memory allocator to keep track of memory chunks.
Finally, there may be a point where data is no longer consid-
ered secret, e.g., when sending a signature over the network.
We thus aim to find and protect those instructions that actually
deal with secret data. However, this is non-trivial, as there
may be instructions that access both public and secret data,
depending on the context (e.g., from memcpy).
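As a hypothetical illustration (the function and buffer names are ours, not from the paper), the very same store instructions inside memcpy may handle public or secret bytes, depending only on the call site:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* The identical stores inside memcpy touch public data in the first
   call and key material in the second, so the instruction itself
   cannot be statically classified as "secret" or "public". */
void assemble_message(uint8_t *out, const uint8_t *header, size_t hdr_len,
                      uint8_t *key_copy, const uint8_t *key, size_t key_len)
{
    memcpy(out, header, hdr_len);    /* public: protocol header    */
    memcpy(key_copy, key, key_len);  /* secret: cryptographic key  */
}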
Thus, we need a way to detect at runtime whether a given
memory address should be considered secret, i.e., whether the
data at that address is masked, and whether we should apply a
new mask when writing to said address. We propose two ap-
proaches for storing this secrecy information (Figure 2): In the
first approach, which we denote CIPHERFIX-BASE, we allo-
cate another buffer of the same size as the mask buffer, called
the secrecy buffer. In the second approach, CIPHERFIX-FAST,
we encode this information directly into the mask buffer.
3.4.1 Storing secrecy information separately
In CIPHERFIX-BASE we allocate a buffer that holds the secrecy information for each memory location. If a byte is public, the corresponding secrecy byte is 0x00; if a byte is secret, the secrecy byte is 0xff. The secrecy buffer is initialized on allocation, and may be updated during the lifetime of the object. This construction allows us to read and update data without branching, as we can combine the secrecy value S with the mask M via a bitwise AND (⊗), before applying it to the data via a bitwise XOR (⊕): When reading, we compute P = P̂ ⊕ (M ⊗ S), so we only decode the stored (potentially masked) plaintext P̂ if the address is considered secret. For writing, we always generate and store a new mask, and then compute P̂ = P ⊕ (M ⊗ S) for plaintext P.
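A minimal C sketch of this branchless combination, assuming byte-granular buffers and a caller-supplied fresh mask; the actual instrumentation applies the same idea at the machine-code level:

#include <stdint.h>

/* Branchless decode: the mask only takes effect where the secrecy
   byte is 0xff; for public bytes (secrecy 0x00) the AND zeroes the
   mask and the stored byte passes through unchanged. */
static uint8_t base_decode(uint8_t stored, uint8_t mask, uint8_t secrecy)
{
    return stored ^ (mask & secrecy);      /* P = P_hat XOR (M AND S) */
}

/* Branchless encode: a fresh mask is always generated and written to
   the mask buffer, but it only alters the stored value for secret bytes. */
static uint8_t base_encode(uint8_t plain, uint8_t fresh_mask, uint8_t secrecy)
{
    return plain ^ (fresh_mask & secrecy); /* P_hat = P XOR (M AND S) */
}

Since the identical XOR-with-ANDed-mask sequence executes for public and secret bytes alike, the secrecy handling itself introduces no secret-dependent control flow.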
As we make no assumptions about the mask, this generally functions as a