
An Analysis of RF Transfer Learning Behavior
Using Synthetic Data
Lauren J. Wong 1,2,3,*, Sean McPherson 3 and Alan J. Michaels 1,2
1 Hume Center for National Security and Technology, Virginia Tech
2 Bradley Department of Electrical and Computer Engineering, Virginia Tech
3 Intel AI Lab, Santa Clara, CA
* Correspondence: ljwong@vt.edu
Abstract: Transfer learning (TL) techniques, which leverage prior knowledge gained from data with different distributions to achieve higher performance and reduced training time, are often used in computer vision (CV) and natural language processing (NLP), but have yet to be fully utilized in the field of radio frequency machine learning (RFML). This work systematically evaluates radio frequency (RF) TL behavior by examining how the training domain and task, characterized by the transmitter/receiver hardware and channel environment, impact RF TL performance for an example automatic modulation classification (AMC) use-case. Through exhaustive experimentation using carefully curated synthetic datasets with varying signal types, signal-to-noise ratios (SNRs), and frequency offsets (FOs), generalized conclusions are drawn regarding how best to use RF TL techniques for domain adaptation and sequential learning. Consistent with trends identified in other modalities, results show that RF TL performance is highly dependent on the similarity between the source and target domains/tasks. The impacts of channel environment, hardware variations, and domain/task difficulty on RF TL performance are also discussed, and RF TL performance achieved using head re-training and model fine-tuning methods is compared.
Keywords: machine learning; deep learning; transfer learning; radio frequency machine learning
1. Introduction
Radio frequency machine learning (RFML) is loosely defined as the application of deep learning (DL) to raw RF data, and has yielded state-of-the-art algorithms for spectrum awareness, cognitive radio, and networking tasks. Existing RFML works have delivered increased performance and flexibility, while reducing the need for pre-processing and expert-defined feature extraction techniques. As a result, RFML is expected to enable greater efficiency, lower latency, and better spectrum utilization in 6G systems [1]. However, to date, little research has considered and evaluated the performance of these algorithms in the presence of the changing hardware platforms and channel environments, adversarial contexts, or resource constraints that are likely to be encountered in real-world systems [2].
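As context for the channel variations studied later in this work, the sketch below shows how two such effects, SNR and frequency offset, can be imposed on complex baseband samples. It is a minimal NumPy illustration only: the apply_channel helper, its parameters, and the QPSK-like example are hypothetical and do not reproduce this work's dataset generation.

```python
import numpy as np

def apply_channel(iq: np.ndarray, snr_db: float, fo_hz: float, fs: float) -> np.ndarray:
    """Impose a frequency offset and AWGN at a target SNR on complex baseband samples.

    Illustrative only: models just two of the channel effects (FO and SNR)
    varied in the synthetic datasets described in this work's abstract.
    """
    n = np.arange(iq.size)
    # Frequency offset: rotate by a complex exponential at fo_hz.
    shifted = iq * np.exp(2j * np.pi * fo_hz * n / fs)
    # AWGN scaled so that signal power / noise power matches snr_db.
    sig_power = np.mean(np.abs(shifted) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (
        np.random.randn(iq.size) + 1j * np.random.randn(iq.size)
    )
    return shifted + noise

# Example: QPSK-like random symbols, 5 dB SNR, 1 kHz offset at 1 MHz sample rate.
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.random.randint(0, 4, 4096)))
rx = apply_channel(symbols, snr_db=5.0, fo_hz=1e3, fs=1e6)
```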
Current state-of-the-art RFML techniques rely upon supervised learning models trained from random initialization, and thereby assume the availability of a large corpus of labeled training data (synthetic, captured, or augmented [3]) which is representative of the anticipated deployment environment. Over time, this assumption inevitably breaks down as a result of changing hardware and channel conditions, and as a consequence, performance degrades significantly [4,5]. TL techniques can be used to mitigate these performance degradations by using prior knowledge obtained from a source domain and task, in the form of learned representations, to improve performance on a “similar” target domain and task using less data, as depicted in Fig. 1.
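To make the two transfer strategies compared in this work concrete, the sketch below contrasts head re-training, where the pre-trained feature extractor is frozen and only a new classification head is learned, with model fine-tuning, where all weights remain trainable, typically at a reduced learning rate. This is a minimal PyTorch illustration: the AMCNet architecture, layer sizes, learning rates, and weight file are hypothetical placeholders, not the models or hyperparameters used in this paper.

```python
import torch
import torch.nn as nn

class AMCNet(nn.Module):
    """Hypothetical CNN for automatic modulation classification (AMC) on raw
    I/Q samples; the architecture is illustrative, not the one used here."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(          # feature extractor (2 x N I/Q input)
            nn.Conv1d(2, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_classes)  # classification head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

def head_retrain(model: AMCNet, num_target_classes: int) -> torch.optim.Optimizer:
    """Freeze the source-trained feature extractor; learn only a new head."""
    for p in model.features.parameters():
        p.requires_grad = False
    model.head = nn.Linear(64, num_target_classes)
    return torch.optim.Adam(model.head.parameters(), lr=1e-3)

def fine_tune(model: AMCNet) -> torch.optim.Optimizer:
    """Leave all weights trainable, typically at a reduced learning rate."""
    for p in model.parameters():
        p.requires_grad = True
    return torch.optim.Adam(model.parameters(), lr=1e-4)

# Usage: initialize from source-domain weights, then pick one strategy.
model = AMCNet(num_classes=10)
# model.load_state_dict(torch.load("source_weights.pt"))  # hypothetical file
optimizer = head_retrain(model, num_target_classes=10)
```

In both strategies the source-domain weights provide the initialization; they differ only in which parameters the optimizer is permitted to update.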
Though TL techniques have demonstrated significant benefits in fields such as CV and NLP [6], including higher-performing models, significantly less training time, and far fewer training samples [7], the authors of [8] showed that the use of TL in RFML is currently lacking through the