
CONSS: Contrastive Learning Approach for
Semi-Supervised Seismic Facies Classification
Kewen Li, Wenlong Liu, Yimin Dou, Zhifeng Xu, Hongjie Duan, Ruilin Jing
Abstract—Recently, seismic facies classification based on con-
volutional neural networks (CNN) has garnered significant re-
search interest. However, existing CNN-based supervised learning
approaches necessitate massive labeled data. Labeling is laborious
and time-consuming, particularly for 3D seismic data volumes.
To overcome this challenge, we propose a semi-supervised method
based on pixel-level contrastive learning, termed CONSS, which
can efficiently identify seismic facies using only 1% of the
original annotations. Furthermore, the absence of a unified data
division and standardized metrics hinders the fair comparison of
various facies classification approaches. To this end, we develop
an objective benchmark for the evaluation of semi-supervised
methods, including self-training, consistency regularization, and
the proposed CONSS. Our benchmark is publicly available to
enable researchers to objectively compare different approaches.
Experimental results demonstrate that our approach achieves
state-of-the-art performance on the F3 survey. All our code and
data are available at https://github.com/upcliuwenlong/CONSS.
Index Terms—Seismic Facies Classification, Semi-Supervised
Learning, Seismic Interpretation, Contrastive Learning.
I. INTRODUCTION
SEISMIC facies classification refers to the interpretation
of facies types from seismic reflection information. It
is an important first step in exploration, prospecting, reservoir
characterization, and field development. Data from core and
well log offer a vertical perspective that can aid in the
interpretation of seismic facies. However, due to the high
cost of drilling operations, direct facies information is scarce.
Alternatively, manual assignment of seismic facies based on
seismic attributes is possible, yet remains a highly subjective
process that relies heavily on the experience of the seismic
interpreter.
Deep learning has garnered significant popularity for seis-
mic data processing and interpretation, such as denoising [1],
inversion [2], interpolation [3], fault detection [4], [5], etc.
The powerful feature extraction and representation capabilities
of neural networks enable deep learning methods to mitigate
human subjectivity. Zhao et al. [6] reviewed seismic facies
classification and reservoir prediction approaches based on
deep convolutional neural networks. Alaudah et al. [7] pro-
posed a facies classification framework based on deconvo-
lution network. These supervised deep learning methods are
Corresponding author: Kewen Li (likw@upc.edu.cn).
Kewen Li, Wenlong Liu, Yimin Dou, and Zhifeng Xu are with the College
of Computer Science and Technology, China University of Petroleum (East
China), Qingdao, China.
Hongjie Duan and Ruilin Jing are with Shengli Oilfield Company, SINOPEC,
Dongying, China.
This work was supported by grants from the National Natural Science
Foundation of China (Major Program, No. 51991365) and the Natural Science
Foundation of Shandong Province, China (No. ZR2021MF082).
Fig. 1. Comparison of the fully supervised method with the proposed CONSS.
Fully supervised methods utilize labeled seismic facies data. CONSS adds
a contrastive learning branch to learn features from unlabeled data.
dependent on labeled data and are incapable of acquiring
knowledge from unlabeled data. However, the labeling of 3D
seismic data volumes is a demanding and time-intensive task,
often requiring geological teams to devote hundreds of hours.
In the majority of cases, there exists a scarcity of labeled
data alongside an abundance of unlabeled data. Training on
unlabeled data reduces the labeling cost, which is
a primary objective of semi-supervised learning. Saleem et
al. [8] implemented a semi-supervised approach based on
self-training. They use a model trained on labeled seismic
data to predict the labels of unlabeled seismic data, with the
predicted labels subsequently added to the training set as
pseudo-labels for retraining. Self-training is categorized as an
offline learning method and is susceptible to the influence
of noisy pseudo-labels, leading to potential performance satu-
ration. Consistency regularization represents another semi-
supervised paradigm, which operates under the foundational
assumption that slight perturbations ought not to produce
significant changes in the model’s output. In our experiments,
we implement the cross pseudo supervision [9] (CPS), which
yielded superior outcomes when compared to self-training
in the F3 survey. However, CPS requires the simultaneous
training of two models, thereby resulting in a comparatively
higher training overhead.
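The CPS training objective described above can be sketched as follows. This is a minimal NumPy illustration under toy assumptions: two linear "networks" (weight matrices `W_a`, `W_b`) stand in for the two segmentation models, and the trade-off weight of 1.5 is an arbitrary choice, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the true class.
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

# Two toy linear "networks", independently initialized (4 features -> 3 facies).
W_a = rng.normal(size=(4, 3))
W_b = rng.normal(size=(4, 3))

x_lab = rng.normal(size=(8, 4))          # small labeled batch
y_lab = rng.integers(0, 3, size=8)       # ground-truth facies labels
x_unl = rng.normal(size=(16, 4))         # larger unlabeled batch

# Supervised cross-entropy on the labeled batch for both networks.
sup_loss = cross_entropy(x_lab @ W_a, y_lab) + cross_entropy(x_lab @ W_b, y_lab)

# Cross pseudo supervision: each network's hard prediction on the
# unlabeled batch serves as the pseudo-label supervising the other.
logits_a, logits_b = x_unl @ W_a, x_unl @ W_b
pseudo_a = logits_a.argmax(axis=1)       # no gradient flows through pseudo-labels
pseudo_b = logits_b.argmax(axis=1)
cps_loss = cross_entropy(logits_a, pseudo_b) + cross_entropy(logits_b, pseudo_a)

loss = sup_loss + 1.5 * cps_loss         # hypothetical trade-off weight
```

Because the two models are updated jointly from this combined loss at every step, both forward passes are needed per iteration, which is the source of the extra training overhead noted above.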
Contrastive learning [10], [11], [12] is a self-supervised learn-
ing paradigm in which a neural network is trained to identify
similarities and differences between inputs. The main
idea of contrastive learning is to learn a representation that
can distinguish between positive and negative sample pairs.
In contrastive learning, a positive pair is a pair of samples
that are similar to each other, while a negative pair is a
pair of samples that are dissimilar. The network is trained to
arXiv:2210.04776v3 [cs.CV] 12 Mar 2023