Safety Verification for Neural Networks Based on
Set-boundary Analysis
Zhen Liang1, Dejin Ren2, Wanwei Liu1, Ji Wang1,
Wenjing Yang1, and Bai Xue2 (corresponding author)
1College of Computer Science and Technology, National University of Defense
Technology, Changsha, China
{liangzhen, wwliu, Jiwang, wenjing.yang}@nudt.edu.cn
2Institute of Software CAS, Beijing, China
{rendj, xuebai}@ios.ac.cn
Abstract. Neural networks (NNs) are increasingly applied in safety-critical systems such as autonomous vehicles. However, they are fragile and often ill-behaved. Consequently, their behavior should be rigorously guaranteed before deployment in practice. In this paper we propose a set-boundary reachability method to investigate the safety verification problem of NNs from a topological perspective. Given an NN with an input set and a safe set, the safety verification problem is to determine whether all outputs of the NN resulting from the input set fall within the safe set. Our method mainly exploits the homeomorphism property of NNs, which ensures that boundaries are mapped to boundaries. Exploiting this property facilitates reachability computations by extracting subsets of the input set rather than the entire input set, thus controlling the wrapping effect in reachability analysis and reducing the computational burden of safety verification. The homeomorphism property exists in some widely used NNs such as invertible NNs; notable representatives are invertible residual networks (i-ResNets) and neural ordinary differential equations (Neural ODEs). For these NNs, our set-boundary reachability method only needs to perform reachability analysis on the boundary of the input set. For NNs which do not feature this property with respect to the input set, we explore subsets of the input set on which the local homeomorphism property can be established, and then exclude these subsets from reachability computations. Finally, some examples demonstrate the performance of the proposed method.
Keywords: Safety verification · Neural networks · Boundary analysis · Homeomorphism.
1 Introduction
Machine learning has seen rapid growth due to the vast amount of data produced in many industries and the increase in computational power. NNs have emerged
as a leading candidate computation model for machine learning, promoting the prosperity of artificial intelligence in various fields such as computer vision [38,7] and natural language processing [49,23]. In recent years, NNs have been increasingly applied in safety-critical systems. For example, a neural network has been implemented in the ACAS Xu airborne collision avoidance system for unmanned aircraft, a highly safety-critical system currently being developed by the Federal Aviation Administration. Consequently, to gain users' trust and ease their concerns, it is of vital importance to ensure that NNs produce safe outputs and satisfy the essential safety requirements before deployment.
Safety verification of NNs, which determines whether all outputs of an NN satisfy specified safety requirements via computing output reachable sets, has attracted considerable attention from different communities such as machine learning [29,1], formal methods [19,39,28], and security [41,11]. Because NNs are generally large, nonlinear, and non-convex, exact computation of output reachable sets is challenging. Although there are some exact reachability analysis methods, such as SMT-based [24] and polyhedron-based approaches [43,40], they are usually time-consuming and do not scale well. Moreover, these methods are limited to NNs with ReLU activation functions. Consequently, over-approximate reachability analysis, which computes supersets of output reachable sets, is often resorted to in practice. Over-approximate analysis is usually more efficient and can be applied to more general NNs beyond ReLU ones. Due to these advantages, it has attracted increasing attention, and a large number of computational techniques have been developed for over-approximate reachability analysis [27].
Overly conservative over-approximations, however, often render many safety properties unverifiable in practice. This conservatism mainly results from the wrapping effect, i.e., the accumulation of over-approximation errors through layer-by-layer propagation. Since the extent of the wrapping effect correlates strongly with the size of the input set [44], techniques that partition the input set and independently compute output reachable sets of the resulting subsets are often adopted to reduce the wrapping effect, especially for large input sets. Such partitioning may, however, produce a large number of subsets, generally exponential in the dimension of the input space. This induces extensive demands on computation time and memory, often rendering existing reachability analysis techniques unsuitable for safety verification of complex NNs in real applications. Therefore, performing reachability computations on subsets of the input set rather than the entire input set could help reduce the computational burden and thus accelerate computations tremendously.
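To make this trade-off concrete, the following sketch (not part of the proposed method; it assumes a hypothetical two-layer ReLU network and plain interval arithmetic, NumPy only) propagates an input box through the network once as a whole and once after uniformly splitting every input dimension into k pieces. Partitioning tightens the output bounds, but the number of sub-boxes is k^n for an n-dimensional input set.

```python
import itertools
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W x + b with interval arithmetic."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def interval_forward(lo, hi, layers):
    """Push a box through a ReLU network given as a list of (W, b) pairs."""
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:                      # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def partitioned_forward(lo, hi, layers, k):
    """Split each input dimension into k equal pieces and propagate every sub-box.
    The number of sub-boxes is k**dim, i.e. exponential in the input dimension."""
    edges = [np.linspace(lo[d], hi[d], k + 1) for d in range(len(lo))]
    out_lo, out_hi = None, None
    for cell in itertools.product(*(range(k) for _ in lo)):
        clo = np.array([edges[d][c] for d, c in enumerate(cell)])
        chi = np.array([edges[d][c + 1] for d, c in enumerate(cell)])
        l, h = interval_forward(clo, chi, layers)
        out_lo = l if out_lo is None else np.minimum(out_lo, l)
        out_hi = h if out_hi is None else np.maximum(out_hi, h)
    return out_lo, out_hi

if __name__ == "__main__":
    rng = np.random.default_rng(0)                   # hypothetical random weights
    layers = [(rng.standard_normal((8, 2)), rng.standard_normal(8)),
              (rng.standard_normal((2, 8)), rng.standard_normal(2))]
    lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
    print("whole box :", interval_forward(lo, hi, layers))
    print("4^2 pieces:", partitioned_forward(lo, hi, layers, 4))
```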
In this work we investigate the safety verification problem of NNs from a topological perspective and extend the set-boundary reachability method, originally proposed for verifying safety properties of systems modeled by ODEs [45], to the safety verification of NNs. In [45], the set-boundary reachability method performs over-approximate reachability analysis only on the boundary of the initial set rather than the entire initial set, and it is built upon the homeomorphism property of ODEs. This
property also exists widely in NNs; typical examples are invertible NNs such as neural ODEs [5] and invertible residual networks [4]. Consequently, it is straightforward to extend the set-boundary reachability method to the safety verification of these NNs, using only the boundary of the input set for reachability analysis; this avoids reachability computations for interior points and thus reduces the computational burden of safety verification. Furthermore, we extend the set-boundary reachability method to general NNs by exploiting the local homeomorphism property with respect to the input set. This exploitation is instrumental in constructing a subset of the input set for reachability computations, obtained by removing from the input set a set of points on which the NN is a homeomorphism. The above methods of extracting subsets for reachability computations can also be applied to intermediate layers of NNs rather than just between the input and output layers. Finally, we demonstrate the performance of the proposed method on several examples.
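A minimal sketch of the set-boundary idea for invertible NNs follows. It is an illustration rather than the paper's algorithm: it assumes a toy invertible-residual network (hypothetical weights, with the residual part scaled so that each block is intended to be a contraction and hence a homeomorphism), covers the boundary of a two-dimensional input box with thin sub-boxes, propagates each sub-box with interval arithmetic, and checks containment in a convex safe box. The intuition is that a homeomorphism maps the boundary of the input set onto the boundary of the output set, so for a convex safe set containment of the boundary image implies containment of the entire output set; the precise conditions are established later in the paper.

```python
import numpy as np

def box_affine(lo, hi, W, b):
    """Interval propagation of the box [lo, hi] through x -> W x + b."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid, rad = W @ c + b, np.abs(W) @ r
    return mid - rad, mid + rad

def iresnet_box(lo, hi, blocks):
    """Over-approximate the image of a box under blocks y = x + W2 tanh(W1 x + b1) + b2
    (a toy invertible-residual architecture; the residual map is scaled to be a contraction)."""
    for (W1, b1, W2, b2) in blocks:
        l1, h1 = box_affine(lo, hi, W1, b1)
        l1, h1 = np.tanh(l1), np.tanh(h1)            # tanh is monotone, so bounds carry over
        l2, h2 = box_affine(l1, h1, W2, b2)
        lo, hi = lo + l2, hi + h2                    # skip connection x + g(x), interval sum
    return lo, hi

def boundary_boxes(lo, hi, n, eps):
    """Cover the boundary of the 2-D box [lo, hi] with thin sub-boxes."""
    xs, ys = np.linspace(lo[0], hi[0], n + 1), np.linspace(lo[1], hi[1], n + 1)
    boxes = []
    for i in range(n):
        boxes += [(np.array([xs[i], lo[1]]), np.array([xs[i + 1], lo[1] + eps])),   # bottom
                  (np.array([xs[i], hi[1] - eps]), np.array([xs[i + 1], hi[1]])),   # top
                  (np.array([lo[0], ys[i]]), np.array([lo[0] + eps, ys[i + 1]])),   # left
                  (np.array([hi[0] - eps, ys[i]]), np.array([hi[0], ys[i + 1]]))]   # right
    return boxes

if __name__ == "__main__":
    rng = np.random.default_rng(1)                   # hypothetical weights
    blocks = []
    for _ in range(2):
        # scaling chosen so the residual map is (intended to be) a contraction
        blocks.append((0.2 * rng.standard_normal((4, 2)), 0.1 * rng.standard_normal(4),
                       0.2 * rng.standard_normal((2, 4)), 0.1 * rng.standard_normal(2)))
    in_lo, in_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
    safe_lo, safe_hi = np.array([-3.0, -3.0]), np.array([3.0, 3.0])   # convex safe box
    ok = True
    for blo, bhi in boundary_boxes(in_lo, in_hi, n=40, eps=1e-3):
        l, h = iresnet_box(blo, bhi, blocks)
        ok = ok and bool(np.all(l >= safe_lo) and np.all(h <= safe_hi))
    print("over-approximated boundary image inside the convex safe set:", ok)
```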
The main contributions of this paper are listed as follows.
– We investigate the safety verification problem of NNs from a topological perspective. More concretely, we exploit the homeomorphism property and aim at extracting a subset of the input set rather than the entire input set for reachability computations. To the best of our knowledge, this is the first work to use the homeomorphism property to address safety verification problems of NNs. It might on its own open research directions on exploiting topological properties to facilitate reachability computations for NNs.
– The proposed method is able to enhance the capabilities and performance of existing reachability computation methods for the safety verification of NNs by reducing the computational burden. Based on the homeomorphism property, the computational burden of solving the safety verification problem can be reduced for invertible NNs. We further show that it can also be reduced for more general NNs by exploiting this property on subsets of the input set.
2 Related Work
There have been dozens of works on the safety verification of NNs. The first work on DNN verification was published in [35]; it focuses on DNNs with Sigmoid activation functions and uses a partition-refinement approach. Later, Katz et al. [24] and [10] independently implemented Reluplex and Planet, two SMT solvers that verify DNNs with ReLU activation functions on properties expressible with SMT constraints.
Recently, methods based on abstract interpretation, which propagate sets in a sound (i.e., over-approximate) way [6], have attracted more attention and are more efficient. There are many widely used abstract domains, such as intervals [41] and star sets [39]. A method based on the zonotope abstract domain is proposed in [11]; it works for any piecewise-linear activation function and scales well. It is further improved in [36], which proposes specialized abstract zonotope transformers for NNs with ReLU, Sigmoid and Tanh activation functions and obtains tighter results. [37] proposes an abstract domain that combines floating-point polyhedra with intervals to over-approximate output reachable sets. Subsequently, a spurious-region-guided approach is proposed in [48] to infer tighter output reachable sets based on the method in [37]. [9] abstracts an NN by a polynomial, which has the advantage that dependencies can in principle be preserved; this approach can be precise in practice for small input sets. Afterwards, [18] approximates Lipschitz-continuous NNs with Bernstein polynomials. [20] transforms a neural network with Sigmoid activation functions into a hybrid automaton and then uses existing reachability analysis methods for hybrid automata to perform reachability computations. [44] proposes a maximum-sensitivity-based approach to solving safety verification problems for multi-layer perceptrons with monotonic activation functions; in this approach, an exhaustive search of the input set is enabled by discretizing the input space, and the output reachable set is computed as a union of reachtubes.
Neural ODEs were first introduced in 2018 and exhibit considerable computational efficiency on time-series modeling tasks [5]. Recent years have witnessed an increasing use of them in real-world applications [26,17]. However, verification techniques for Neural ODEs are rare and still in their infancy. The first reachability technique for Neural ODEs appeared in [16], which proposed Stochastic Lagrangian reachability, an abstraction-based technique for constructing an over-approximation of the output reachable set with probabilistic guarantees. Later, this method was improved and implemented in the tool GoTube [15], which is able to perform reachability analysis over long time horizons. Since these methods only provide stochastic bounds on the computed over-approximation and thus cannot give formal guarantees on the satisfaction of safety properties, [30] presented a deterministic verification framework for a general class of Neural ODEs with multiple continuous- and discrete-time layers.
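For context, a Neural ODE maps an input x0 to the solution of an initial value problem at a terminal time T; since the flow map of an ODE with a Lipschitz right-hand side is a homeomorphism, such networks naturally enjoy the property exploited in this paper. A minimal sketch of this input-output map (hypothetical tanh dynamics, using SciPy's solve_ivp) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x, W, b):
    """Hypothetical learned dynamics dx/dt = tanh(W x + b)."""
    return np.tanh(W @ x + b)

def neural_ode_forward(x0, W, b, T=1.0):
    """Map the input x0 to the ODE solution at time T (the network's output).
    For Lipschitz dynamics this input-output map is a homeomorphism."""
    sol = solve_ivp(f, (0.0, T), x0, args=(W, b), rtol=1e-8, atol=1e-8)
    return sol.y[:, -1]

if __name__ == "__main__":
    rng = np.random.default_rng(2)                   # hypothetical weights
    W, b = rng.standard_normal((2, 2)), rng.standard_normal(2)
    print(neural_ode_forward(np.array([0.5, -0.5]), W, b))
```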
All the aforementioned works focus on developing computational techniques for reachability analysis and safety verification of appropriate NNs based on entire input sets. In contrast, the present work shifts the focus to a topological analysis of NNs and guides reachability computations onto subsets of the input set rather than the entire input set, reducing the computational burden and thus increasing the power of existing safety verification methods for NNs. Although there are studies on topological properties of NNs [4,8,34], to the best of our knowledge there is no work on utilizing the homeomorphism property to analyze their reachability and safety verification problems.
3 Preliminaries
In this section, we introduce the safety verification problem of interest for NNs and homeomorphisms. Throughout this paper, given a set $\mathcal{X}$, $\mathcal{X}^{\circ}$, $\partial\mathcal{X}$ and $\overline{\mathcal{X}}$ respectively denote its interior, boundary and closure.
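As a simple illustration of this notation (with the set symbol assumed above), the n-dimensional unit box satisfies

```latex
\mathcal{X} = [0,1]^n:\qquad
\mathcal{X}^{\circ} = (0,1)^n,\qquad
\overline{\mathcal{X}} = [0,1]^n,\qquad
\partial\mathcal{X} = \overline{\mathcal{X}} \setminus \mathcal{X}^{\circ}
  = \{\, x \in [0,1]^n \mid x_i \in \{0,1\} \text{ for some } i \,\}.
```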