types in real-world applications. In the proposed method, a
sliding window is constructed and slid across the obtained
GPR image. Normal GPR B-scan image patches in the window,
which contain no underground objects, are mapped into the
feature space via the fine-tuned CNN, and an initial one-class
classifier is trained on them. The trained classifier is then
continuously applied to the features extracted from subsequent
windows. Abnormal data are further handled by incremental
one-class learning, in which additional incremental classifiers
are obtained to group the features into classes. Thus the
underground object that generates each GPR signature could
be detected.
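To make this workflow concrete, the following is a minimal sketch of the sliding-window scheme under explicit assumptions: scikit-learn's OneClassSVM stands in for the one-class learner, `features` is the stream of CNN embeddings of successive windows, and the thresholds `n_init`/`n_new` are illustrative values, none of which come from the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM  # stand-in for the paper's one-class learner


def incremental_one_class(features, n_init=50, n_new=30, nu=0.1):
    """Label each window feature, spawning a new one-class model
    whenever enough data is rejected by every existing classifier."""
    classifiers, labels, init, buffer = [], [], [], []
    for x in features:
        x = np.asarray(x, dtype=float)
        if not classifiers:
            # Bootstrap: the leading windows are assumed object-free (class 0).
            init.append(x)
            labels.append(0)
            if len(init) == n_init:
                classifiers.append(OneClassSVM(nu=nu).fit(np.stack(init)))
            continue
        scores = [c.decision_function(x[None])[0] for c in classifiers]
        best = int(np.argmax(scores))
        if scores[best] >= 0:
            labels.append(best)              # accepted by an existing class
        else:
            buffer.append(x)                 # novel feature: buffer it
            labels.append(len(classifiers))  # tentative new-class label
            if len(buffer) == n_new:         # enough novel data: new class model
                classifiers.append(OneClassSVM(nu=nu).fit(np.stack(buffer)))
                buffer = []
    return labels
```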
The main contributions of the proposed method could be
summarized as follows.
1) A section of a normal GPR image is obtained in the
detection area, segmented, and fused with simulated
GPR images to generate synthetic data that contains both
the basic underground conditions of the detection area
and various characteristics generated by underground
objects. Experiments demonstrate that fine-tuning CNNs with
the synthetic data could increase the distance between features
extracted from GPR images produced by different objects in
the detection area.
2) Only a section of a normal GPR image of the detection
area (without subsurface anomalies) is required to perform
the proposed method, rather than large amounts of normal
and abnormal data that are difficult to collect, process, and
label in real-world applications.
3) By performing one-class learning, there is no need to
know in advance the number and types of subsurface anomalies
that may exist in the detection area. The proposed method
could incrementally classify various types of underground
objects through the extracted features.
The rest of this paper is organized as follows. Related work
on interpreting GPR images is reviewed in Section II. Feature
extraction is presented in Section III, including the generation
of synthetic images and the fine-tuning of a pre-trained
CNN. One-class learning is introduced in Section IV.
Experiments are conducted and analyzed in Section V. Finally,
conclusions are drawn in Section VI.
II. RELATED WORK
Approaches to interpreting GPR B-scan images can be
roughly grouped into two categories: extracting and fitting
hyperbolic signatures in B-scan images, and identifying
non-hyperbolic shapes. The prevailing methodologies for
hyperbola recognition in GPR images include the Hough
transform (HT) [25]–[27], machine learning (ML) [28]–[30],
and methods that combine multiple approaches [31]–[35]. In
our previous work [36], a GPR B-scan image interpretation
model was proposed that estimates the radius and depth of
buried pipelines by extracting and fitting hyperbolic point
clusters from GPR B-scan images.
Besides hyperbolic characteristics, some existing studies
identify underground objects with non-hyperbolic shapes from
GPR data by signal processing or image recognition methods.
The frequency-domain-focusing (FDF) technique of synthetic
aperture radar (SAR) has been utilized to aggregate scattered
GPR signals into testing images, where a low-pass filter is
designed to denoise the raw signals, and the profiles of the
detected objects are extracted via edge detection based on the
background information [6]. Subsequently, a formula is derived
to relate the hidden crack width to the relative measured
amplitude [7]. Methods of this kind generally require prior
knowledge of the basic conditions of the underground medium.
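As a rough illustration of such a pipeline (not the implementation of [6]), the sketch below low-pass filters each trace, removes the mean background trace, and outlines the residual with Canny edge detection; the filter order, cutoff frequency, and Canny sigma are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from skimage.feature import canny


def extract_profiles(bscan, fs, cutoff_hz, sigma=2.0):
    """bscan: 2-D array (time samples x traces); fs: sampling frequency."""
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")         # 4th-order low-pass
    filtered = filtfilt(b, a, bscan, axis=0)                    # denoise each trace
    residual = filtered - filtered.mean(axis=1, keepdims=True)  # remove mean background
    residual /= np.abs(residual).max() + 1e-12                  # normalize for Canny
    return canny(residual, sigma=sigma)                         # binary profile map
```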
To locate and identify objects in GPR images, convolutional
neural networks (CNNs) have been widely utilized in recent
years. Taking the EM signals as input, CNN architectures
have been constructed to automatically localize several kinds
of targets in GPR data [9]. The You-Only-Look-Once (YOLO)
detector [37] has also been utilized to detect potholes and
cracks beneath roads. Zhang et al. propose a mixed deep CNN
model that combines a ResNet-50 backbone with the YOLO
framework to detect moisture damage in GPR data. Subsequently,
Liu et al. propose a method combining the YOLO series with
GPR images to recognize internal defects in asphalt pavement
[38]. When detecting underground objects in a certain area, it
is difficult to ensure that existing CNN training data obtained
from other areas or datasets is consistent with the underground
situation in that area or road. Moreover, only GPR images
without any target objects can be obtained at the beginning of
the detection, resulting in insufficient CNN training data. The
Generative Adversarial Network (GAN) [39] could be utilized
to generate remote-sensing data, but the main issue with this
approach in our scenario is that training the generative network
for the detection area could be too time-consuming for on-site
applications.
III. FEATURE EXTRACTION BY FINE-TUNED CNNS
In this section, the generation of synthetic data for the
detection area is first introduced. After that, the pre-trained
CNN is fine-tuned with the synthetic and normal data to
enhance its feature extraction capability for the objects in
the detection area.
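As a forward reference to the fine-tuning step, the sketch below shows one plausible realization in PyTorch: an ImageNet-pretrained ResNet-18 (an assumed backbone; the paper's exact network and hyperparameters may differ) is fine-tuned on the synthetic and normal patches and then stripped of its classification head to serve as the feature extractor.

```python
import torch
import torch.nn as nn
from torchvision import models


def build_feature_extractor(train_loader, n_classes, epochs=5, lr=1e-4):
    """Fine-tune a pretrained backbone, then expose its penultimate features."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, n_classes)  # temporary head
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for imgs, targets in train_loader:   # synthetic + normal GPR patches
            opt.zero_grad()
            loss_fn(model(imgs), targets).backward()
            opt.step()
    model.fc = nn.Identity()                 # drop head: 512-D feature output
    return model.eval()
```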
A. Generating the Synthetic Data for the Detection Area
1) The Data Sources: As mentioned above, the synthetic
data is fused from two sources: 1) the normal GPR image
obtained in the detection area without any underground
objects; and 2) GPR images simulated by GprMax with
various kinds of buried objects.
When detecting an area (e.g., a pavement road), a GPR
image section without any buried objects can be easily
obtained. This image section describes the basic subsurface
environment of the detection area and provides data for the
CNN to extract the features of GPR images of the area
without underground objects. In the experiments conducted in
this paper, a GPR image section with a length greater than
3000 pixels is collected, and more than 300 GPR image
segments with a horizontal length of 300 pixels are then
randomly selected from this section.²
²Duplications could exist in the selected images.
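A minimal sketch of this sampling step, assuming the normal B-scan section is held as a 2-D NumPy array with traces along the horizontal axis; sampling is with replacement, which is why duplicates may occur (cf. footnote 2).

```python
import numpy as np


def sample_segments(section, n_segments=300, width=300, seed=None):
    """Randomly cut fixed-width segments from a normal B-scan section."""
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, section.shape[1] - width + 1, size=n_segments)
    return [section[:, s:s + width] for s in starts]  # duplicates possible
```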