Tighter Abstract Queries in
Neural Network Verification
Elazar Cohen1, Yizhak Yisrael Elboher1(✉), Clark Barrett2, and Guy Katz1
1The Hebrew University of Jerusalem, Jerusalem, Israel
{elazar.cohen1, yizhak.elboher, g.katz}@mail.huji.ac.il
2Stanford University, Stanford, USA
barrett@cs.stanford.edu
Abstract
Neural networks have become critical components of reactive systems in various do-
mains within computer science. Despite their excellent performance, using neural networks
entails numerous risks that stem from our lack of ability to understand and reason about
their behavior. Due to these risks, various formal methods have been proposed for verify-
ing neural networks; but unfortunately, these typically struggle with scalability barriers.
Recent attempts have demonstrated that abstraction-refinement approaches could play a
significant role in mitigating these limitations; but these approaches can often produce networks
that are so abstract that they become unsuitable for verification. To deal with this
issue, we present CEGARETTE, a novel verification mechanism where both the system and the
property are abstracted and refined simultaneously. We observe that this approach allows
us to produce abstract networks which are both small and sufficiently accurate, allowing for
quick verification times while avoiding a large number of refinement steps. For evaluation
purposes, we implemented CEGARETTE as an extension to the recently proposed CEGAR-NN
framework. Our results are highly promising, and demonstrate a significant improvement
in performance over multiple benchmarks.
1 Introduction
Deep neural networks (DNNs) have become state-of-the-art technology in many fields [58],
including image processing [37], computational photography [13], speech recognition [1,35],
natural language processing [23], video processing [10,45], autonomous driving [16], and many
others. Nowadays, they are even increasingly being used as critical components in various
systems [55,60,74,79], and society’s reliance on them is constantly increasing.
Despite these remarkable achievements, neural networks suffer from multiple limitations,
which undermine their reliability: the training process of DNNs is based on assumptions re-
garding the data, which may fail to hold later on [36,43]; the training process might cause
over-fitting of the DNN to specific data [77,91]; and, independently of the above, there are
various attacks that can fool a DNN into performing unwanted actions [72,87].
In order to overcome these difficulties and ensure the correctness and safety of DNNs, the
formal methods community has been devising techniques for verifying them [26,46,48,62,83].
Techniques for neural network verification (NNV) receive as input a neural network and a set
of constraints over its input and output, and check whether there exists an input/output pair
that satisfies these constraints. Typically, the constraints encode the negation of some desirable
property, and so such a pair constitutes a counter-example (the SAT case); whereas if no such
pair exists, the desired property holds (the UNSAT case). NNV has been studied extensively in
Equal Contribution
arXiv:2210.12871v2 [cs.LG] 14 May 2023
recent years, and many different verifiers have been proposed [24–26,46,48,62,67,83]. However,
scalability remains a fundamental barrier, both theoretical and practical, which limits the use
of NNV engines: generally, increasing the number of neurons of the verified neural network
implies an exponential increase in the complexity of the verification problem [40,46].
In order to alleviate this scalability limitation, recently there have been attempts to apply
abstraction techniques within NNV [11,27,28,57,69,76], often focusing on the counter-example
guided abstraction refinement (CEGAR) framework [21]. CEGAR is a well-known approach,
aimed at expediting the solving of complex verification queries, by abstracting the model being
verified into a simpler one — in such a way that if the simpler model is safe, i.e. the query
is UNSAT, then so is the original model. In case of a SAT answer from the verifier, we check
whether the satisfying assignment of the abstract model is also a satisfying assignment of the
original. If so, the original query is declared SAT, the satisfying assignment is returned, and the
process ends. Otherwise, the satisfying assignment is called spurious (or erroneous), indicating
that the abstract query is too coarse, and should be refined into a slightly “less abstract” query,
which is a bit closer to the original model (but is hopefully still smaller). CEGAR has been
successfully used in various formal method applications [9,15,42,65,73], and also in the context
of NNV [27,28,57]. Typically, these approaches generate an abstract neural network, which is
smaller than the original, and is created by the merging of neurons of the original network. The
refinement, accordingly, is performed by splitting previously merged neurons. Other related
approaches have also considered abstracting the ranges of values obtained by neurons in the
network [11,69,70,76].
The general motivation for abstraction schemes is that smaller, abstract networks are easier
to verify. While this is often true, smaller networks tend to be far less precise, and verifying them
often requires multiple refinement steps [28,70]. In extreme cases, these multiple refinement
steps can render the verification process slower than directly verifying the original network [28].
Here, we seek to tackle this problem, by improving the abstract verification queries. We propose
a novel verification scheme for DNNs, wherein abstraction and refinement operations include
altering not only the network (as in [21,27,28,57,70]), but also the property being verified.
The motivation is to render the abstract properties more restrictive, in a way that will reduce
the number of spurious counter-examples encountered during the verification process; but at
the same time, ensure that the abstract queries still maintain the over-approximation property:
if the abstract query is UNSAT, the original query is UNSAT too. The key idea on which our
approach is based is to compute a minimal difference between the outputs of the abstract
network and the original network, and then use this minimal difference to tighten the property
being verified, in a sound manner.
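This idea can be illustrated on a 1-dimensional toy example (all networks and numbers below are hypothetical, for illustration only): if N′(x) ≥ N(x) for all x in P, and m is the minimal value of N′(x) − N(x) over P, then N(x) > c implies N′(x) > c + m; and so an UNSAT answer for the tightened abstract query carries over to the original query.

```python
# Hypothetical 1-D toy: N' over-approximates N on P = [0.9, 1.0].
N  = lambda x: 34.0 * x                       # "original" network output
Np = lambda x: 70.0 * x                       # "abstract" output, Np >= N on P

P = [0.9 + i * 0.001 for i in range(101)]     # grid over P (illustration only)
c = 40.0
m = min(Np(x) - N(x) for x in P)              # minimal output gap over P

# The plain abstract query "Np(x) > c" yields spurious counter-examples,
# since the original query "N(x) > c" is UNSAT on P...
assert any(Np(x) > c for x in P) and not any(N(x) > c for x in P)
# ...but the tightened abstract query "Np(x) > c + m" is already UNSAT.
assert not any(Np(x) > c + m for x in P)
```

Here the tightening alone suffices to discharge the query on the abstract network, with no refinement steps at all.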
Our approach is not coupled to any specific DNN verification method, and can use multiple
DNN verifiers as black-box backends. For evaluation purposes, we implemented it on top of the
Marabou DNN verifier [48]. We then tested our approach on the ACAS-Xu benchmarks [44]
for airborne collision avoidance, and also on MNIST benchmarks for digit recognition [54].
Our results indicate that property abstraction affords a significant increase in the number of
queries that can be verified within a given timeout, as well as a sharp decrease in the number
of refinement steps performed.
To summarize, our contributions are as follows: (i) we present CEGARETTE, a novel CEGAR
framework for DNN verification, that abstracts not only the network but also the property being
verified; (ii) we provide a publicly available implementation of our approach, CEGARETTE-NN;
and (iii) we use our implementation to demonstrate the practical usefulness of our approach.
The rest of this paper is organized as follows. In Section 2, we provide a brief background
on neural networks and their verification, followed by an explanation of the CEGAR framework
and its implementation for neural network verification. In Section 3, we describe our novel
verification framework CEGARETTE. In Section 4, we discuss how to apply this framework for
abstracting and refining DNNs, followed by an evaluation in Section 5. Related work is discussed
in Section 6, and we conclude in Section 7.
2 Background
2.1 Neural Networks
A neural network [33] is a directed graph, comprised of a sequence of layers: an input layer,
followed by one or more consecutive hidden layers, and finally an output layer. A layer is a
collection of nodes, also referred to as neurons. Here we focus on feed-forward neural networks,
where the values of neurons are computed based on values of neurons in preceding layers. Thus,
when the network is evaluated, values are assigned to neurons in its input layer; and they are
then propagated, layer after layer, through to the output layer.
We use n_{i,j} to denote the j'th neuron of layer i. Typically, the value of neuron n_{i,j}, denoted
as v_{i,j}, is given by the following formula:

v_{i,j} = act_{i,j}\left( b_{i,j} + \sum_{k=1}^{l_{i-1}} w^{i}_{k,j} \cdot v_{i-1,k} \right)

where l_{i-1} is the number of neurons in the (i-1)'th layer, act_{i,j} is a pre-defined (neuron-specific)
activation function, w^{i}_{k,j} is the weight of the outgoing edge from n_{i-1,k} to n_{i,j}, v_{i-1,k} is the
value of the k'th neuron in the (i-1)'th layer, and b_{i,j} is the bias value of the j'th neuron in
the i'th layer. For simplicity, we assume here that the only activation function in use is the
Rectified Linear Unit (ReLU) function [63], which is defined by ReLU(x) = max(0, x), and is
very common in practice.
Fig. 1 depicts a small neural network. The network has 3 layers, of sizes l_1 = 1, l_2 = 2 and
l_3 = 1. Its weights are w^{2}_{1,1} = 10, w^{2}_{1,2} = 1, w^{3}_{1,1} = 3 and w^{3}_{2,1} = 4, and its biases are all zeros.
For input v_{1,1} = 21, node n_{2,1} evaluates to 210 and node n_{2,2} evaluates to 21 (both are positive,
and hence not changed by the ReLU activation function). The output node n_{3,1} then evaluates
to 3 · 210 + 4 · 21 = 714.
[Figure: input node n_{1,1} feeds hidden nodes n_{2,1} and n_{2,2} with edge weights 10 and 1, respectively; these feed output node n_{3,1} with edge weights 3 and 4.]

Figure 1: A simple, feed-forward neural network.
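The evaluation of the Figure 1 network can be reproduced in a few lines of code (an illustrative sketch, not part of any verification tooling):

```python
# Evaluate the toy network of Figure 1: one input neuron, two hidden
# ReLU neurons, one output neuron, all biases zero.
def relu(x):
    return max(0.0, x)

def forward(v_in):
    # Hidden layer: weights w2_11 = 10, w2_12 = 1.
    v21 = relu(10.0 * v_in)
    v22 = relu(1.0 * v_in)
    # Output layer: weights w3_11 = 3, w3_21 = 4.
    return 3.0 * relu(v21) + 4.0 * relu(v22)

print(forward(21.0))  # → 714.0
```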
2.2 Neural Network Verification
The goal of neural network verification (NNV) is to determine the satisfiability of a verification
query. A query is typically defined to be a triple ⟨N, P, Q⟩, where: (i) N is a neural network;
(ii) P is an input property, which is a conjunction of constraints on the input neurons; and
(iii) Q is an output property, which is a conjunction of constraints on the output neurons [56].
The query is SAT if and only if there exists an input vector x_0 to N, such that P(x_0) and
Q(N(x_0)) both hold; in which case the verifier returns x_0 as the counter-example. As we
previously mentioned, Q typically represents some undesirable behavior of N on inputs from
P, and so the goal is to obtain an UNSAT result.
Most existing verifiers focus primarily on ReLU activation functions, and we follow this line
here. In addition, most existing verifiers assume that P is a conjunction of linear constraints
on the input values, and we again take the same path. Finally, we make the simplifying
assumption that N has a single output neuron y, and that the property Q is of the form y > c.
This assumption may seem restrictive, but in fact, it does not incur any loss of generality [28],
and is sufficient for expressing many properties of interest with arbitrary Boolean structure, via
a simple reduction.
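The semantics of such a query can be illustrated with a naive sampling loop (this is emphatically not a sound verifier, only an illustration; the network and bounds are hypothetical):

```python
import random

# Illustrative sketch of query semantics: P is a box over the inputs,
# Q is "y > c". Random sampling can only find SAT witnesses
# (counter-examples); failing to find one proves nothing about UNSAT.
def try_falsify(network, input_box, c, samples=10_000, seed=0):
    rng = random.Random(seed)
    for _ in range(samples):
        x = [rng.uniform(lo, hi) for (lo, hi) in input_box]
        if network(x) > c:   # Q holds: x satisfies the query
            return x
    return None              # no witness found (inconclusive)

# Toy network with a single output neuron (weights are illustrative).
net = lambda x: 3.0 * max(0.0, 10.0 * x[0]) + 4.0 * max(0.0, x[0])
witness = try_falsify(net, [(0.0, 1.0)], c=10.0)
```

A complete verifier must instead reason symbolically over all of P, which is what the methods surveyed in Section 6 do.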
In recent years, various methods have been proposed for solving the verification problem
(for a brief overview, see Section 6). Our abstraction-refinement mechanism is designed to be
compatible with many of these techniques, as we later describe.
2.3 Counter-Example Guided Abstraction Refinement (CEGAR)
Counter-example guided abstraction refinement (CEGAR) [21] is frequently used as part of
model-checking frameworks, and it has recently been applied to neural network verification,
as well. The general framework, borrowed from [28], is presented in Algorithm 1. Given a
verification query ⟨N, P, Q⟩, we begin by generating an abstract network N′. Then, the CEGAR
loop starts, where in each iteration, we verify a query with the current abstract network,
⟨N′, P, Q⟩. The abstract network N′ is constructed in a way that makes ⟨N′, P, Q⟩ an over-
approximation of ⟨N, P, Q⟩: if the former is UNSAT, then so is the latter. Thus, if the underlying
verifier returns UNSAT on the current query, we can stop and return UNSAT. Otherwise, the verifier
returns a satisfying assignment x′ for ⟨N′, P, Q⟩, and we check whether this x′ constitutes a
satisfying assignment for ⟨N, P, Q⟩ as well. If so, we return SAT and x′ as the satisfying
assignment, and stop. Otherwise, we say that x′ is a spurious counter-example, indicating
that N′ is too coarse; in which case, we refine N′ into a new “tighter” network, N″, whose
verification is more likely to produce accurate results. This refinement process is guided by
the spurious counter-example x′. This general framework can be instantiated in many ways,
depending on the implementation of the abstract and refine operations.
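The loop above can be sketched on a deliberately tiny toy instantiation (everything here is illustrative, not the authors' implementation): "networks" are maps over the inputs 0..9, and a query is SAT iff some input maps to a value greater than 5.

```python
# Toy CEGAR loop: the abstraction inflates every output (a sound
# over-approximation), and refinement snaps one input back to its true
# value, guided by the spurious counter-example.
def cegar_toy():
    true_net = {x: x % 4 for x in range(10)}            # max output 3 -> UNSAT
    abs_net = {x: v + 4 for x, v in true_net.items()}   # coarse over-approx

    def verify(net):                                    # black-box "verifier"
        for x in range(10):
            if net[x] > 5:
                return "SAT", x
        return "UNSAT", None

    while True:
        status, x = verify(abs_net)
        if status == "UNSAT":
            return "UNSAT"                # abstract safe => original safe
        if true_net[x] > 5:
            return "SAT"                  # genuine counter-example
        abs_net = {**abs_net, x: true_net[x]}   # refine at the spurious point
```

Each iteration either terminates or strictly tightens the abstraction, so the loop always terminates; real instantiations differ only in how abstract, verify, and refine are realized.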
There have been a few recent attempts to apply CEGAR in the context of NNV [27,28,57],
all following a similar approach. At first, a preprocessing phase is performed, and every hidden
neuron in the network is classified according to its effect on the network’s output. Then,
abstraction is carried out by repeatedly merging pairs of neurons with the same type, usually
making the network significantly smaller than the original.
We focus here on one of these approaches, called CEGAR-NN [28]. There, the preprocessing
phase initially splits each hidden neuron into 4 neurons, each belonging to one of 4 categories
based on the effect of the neuron on values of both the next layer’s neurons and the output:
the output edges can be all positive (pos) or all negative (neg), and the value of a neuron can
increase the output when being increased (inc), or increase the output when being decreased
(dec). The splitting process changes the network’s architecture but does not change its output
— i.e., the preprocessed network is completely equivalent to the original. After splitting the
neurons and categorizing the new neurons, pairs of neurons from the same layer that share
a category can be merged into a single neuron. The weights and biases of the new, merged
neuron are chosen so that the resulting abstract network over-approximates the original.
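One way such a merge can preserve the over-approximation property is sketched below (this follows the intuition behind CEGAR-NN, but is not the exact rule set of [28]; the network and weights are hypothetical): for two ⟨pos, inc⟩ neurons, taking the maximum of their incoming weights and the sum of their outgoing weights can only increase the network's output.

```python
import random

# Illustrative merge of two <pos, inc> hidden neurons in a 1-2-1 ReLU
# network: all outgoing weights are positive, and increasing either neuron
# increases the output. MAX of incoming weights + SUM of outgoing weights
# dominates the original contribution, so the output can only grow.
relu = lambda t: max(0.0, t)

W_IN, W_OUT = (10.0, 1.0), (3.0, 4.0)   # hypothetical weights

def original(x):
    return sum(wo * relu(wi * x) for wi, wo in zip(W_IN, W_OUT))

def merged(x):
    return sum(W_OUT) * relu(max(W_IN) * x)

# Sanity check: the merged (abstract) network dominates the original.
rng = random.Random(0)
assert all(merged(x) >= original(x)
           for x in (rng.uniform(-5.0, 5.0) for _ in range(1000)))
```

Analogous rules (with min instead of max, and signs flipped) handle the other three categories, which is precisely why the preprocessing phase classifies every neuron before any merge takes place.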