
sification tasks. Unlike existing approaches, our model performs well without requiring label information in the target domain.
• We introduce two different methods, namely Prompt-based Secret Key and Adapter-based Secret Key, that allow us to recover the model's ability to perform classification on the target domain.
• Extensive experiments show that our proposed models perform well in the source domain but poorly in the target domain. Moreover, access to the target domain can still be regained using the secret key.
To the best of our knowledge, our work is the first approach for learning under the unsupervised non-transferable learning setup, which also comes with the ability to recover access to the target domain.¹

¹Our code and data are released at https://github.com/ChaosCodes/UNTL.
2 Related Work
In this section, we briefly survey ideas related to our work from two fields: domain adaptation and intellectual property protection. Furthermore, we discuss some limitations of existing methods, which we tackle with our approach.
In domain adaptation, given a source domain and a target domain with unlabeled or only sparsely labeled data, the goal is to improve performance on the target task using knowledge from the source domain. Ghifary et al. (2014), Tzeng et al. (2014), and Zhu et al. (2021) applied Maximum Mean Discrepancy (MMD) regularization (Gretton et al., 2012) to learn representations that are invariant across domains. Ganin et al. (2016) and Schoenauer-Sebag et al. (2019) matched the feature-space distributions of the two domains with adversarial learning. In contrast to the methods above, Wang et al. (2022) approached domain adaptation from the opposite direction and proposed non-transferable learning (NTL), which prevents knowledge transfer from the source to the target domain by enlarging the discrepancy between the representations of the two domains.
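To make the MMD idea concrete, below is a minimal PyTorch sketch of the (biased) squared-MMD estimator with an RBF kernel; the function name and the fixed bandwidth `sigma` are our own illustrative choices. Domain adaptation methods add this term to the loss to pull the two feature distributions together, whereas NTL maximizes it to drive them apart.

```python
import torch

def mmd_loss(x_src: torch.Tensor, x_tgt: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD between two feature batches (RBF kernel)."""
    def rbf(a, b):
        # Gaussian kernel on pairwise squared Euclidean distances
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))

    # Domain adaptation minimizes this quantity; NTL instead enlarges it
    return rbf(x_src, x_src).mean() + rbf(x_tgt, x_tgt).mean() - 2 * rbf(x_src, x_tgt).mean()
```

In a domain-adaptation objective this term would be added to the task loss with a positive weight; in an NTL-style objective it would enter with a negative weight so that training pushes the two domains' representations apart.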
In intellectual property protection, the significant value of learned deep neural networks (DNNs) and their vulnerability to malicious attacks make it crucial to develop protection methods that defend the owners of DNNs from loss. Recently, two different approaches to safeguarding DNNs have been proposed: watermarking (Adi et al., 2018) and secure authorization (Alam et al., 2020). In watermarking approaches, a digital watermark is embedded into data such as videos and images; detecting this unique watermark verifies ownership of the data's copyright. Building on this idea, Song et al. (2017) and Kuribayashi et al. (2020) embedded digital watermarks into the parameters of neural networks, and Zhang et al. (2020) and Wu et al. (2021) proposed frameworks for generating images with an invisible but extractable watermark. However, these methods are vulnerable to active attacks (Wang and Kerschbaum, 2019; Chen et al., 2021) that first detect the watermark and then rewrite or remove it. The secure authorization approach, on the other hand, seeks to train a model that produces inaccurate results without authorization. Alam et al. (2020) proposed a key-based framework that ensures correct model functioning only with the correct secret key. In addition, Wang et al. (2022), inspired by domain generalization, proposed non-transferable learning (NTL), which achieves secure authorization by reducing the model's generalization ability in a specified unauthorized domain.
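As a rough illustration of the secure-authorization idea, the sketch below gates a classifier's features with an embedding of a user-supplied key, so that only the key seen during training yields accurate predictions. This is a conceptual sketch under our own assumptions (the class name, layer sizes, and gating scheme are hypothetical); it is not the actual mechanism of Alam et al. (2020), nor our method in Sec. 3.2.

```python
import torch
import torch.nn as nn

class KeyGatedClassifier(nn.Module):
    """Hypothetical key-based authorization: features are modulated by a
    projection of the supplied key, so a wrong key distorts predictions."""

    def __init__(self, encoder: nn.Module, feat_dim: int, key_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder                    # any feature extractor
        self.key_proj = nn.Linear(key_dim, feat_dim)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor, key: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x)                   # (batch, feat_dim)
        gate = torch.sigmoid(self.key_proj(key))  # key-dependent gate in (0, 1)
        return self.head(feats * gate)            # wrong key -> degraded logits
```

Training such a model would, for example, pair the correct key with the usual task loss and random keys with an uninformative-prediction objective, so that only the secret key unlocks correct behavior.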
Although the NTL model can effectively prevent access to the unauthorized domain, it requires target labels during training, which may not always be easy to obtain. Furthermore, it provides no mechanism for recovering access to the unauthorized domain when needed. In this paper, we present a new NTL model and show that it performs well even in the absence of target labels, which are indispensable in the work of Wang et al. (2022). We further extend it to a secret key-based version, with which authorized users can still access the target domain using the provided keys.
3 Approach
In this section, we first introduce our proposed Unsupervised Non-Transferable Learning (UNTL) approach in Sec. 3.1, followed by a discussion of its practical limitation: it lacks the ability to regain access to the target domain. Next, we discuss our secret key-based methods in Sec. 3.2