2 Z. Liang et al.
as a leading candidate computational model for machine learning, driving advances
in artificial intelligence in various fields such as computer vision [38,7] and
natural language processing [49,23]. In recent years, NNs have been increasingly
applied in safety-critical systems. For example, a neural network has been
implemented in the ACAS Xu airborne collision avoidance system for unmanned
aircraft, a highly safety-critical system currently being developed by the
Federal Aviation Administration. Consequently, to gain users' trust and ease
their concerns, it is of vital importance to ensure that NNs produce safe
outputs and satisfy the essential safety requirements before deployment.
Safety verification of NNs, which determines whether all outputs of an NN
satisfy specified safety requirements by computing output reachable sets, has
attracted substantial attention from different communities such as machine learning
[29,1], formal methods [19,39,28], and security [41,11]. Because NNs are generally
large, nonlinear, and non-convex, exact computation of output reachable sets is
challenging. Although there are methods for exact reachability analysis, such
as SMT-based [24] and polyhedron-based approaches [43,40], they are usually
time-consuming and do not scale well. Moreover, these methods are limited
to NNs with ReLU activation functions. Consequently, over-approximate reach-
ability analysis, which computes super-sets of output reachable sets, is often
resorted to in practice. Over-approximate analysis is usually more efficient
and applies to more general NNs beyond ReLU ones. These advantages have
attracted increasing attention, and a large number of computational techniques
have been developed for over-approximate reachability analysis [27].
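To make the idea of over-approximate reachability analysis concrete, the following is a minimal sketch using interval bound propagation, one common over-approximation technique; the two-layer ReLU network and its weights are illustrative assumptions, not taken from this paper.

```python
# Minimal sketch: over-approximating the output reachable set of a small
# ReLU network by propagating an axis-aligned box layer by layer.
# The network weights below are made up purely for illustration.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map x -> Wx + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius   # the box radius grows with |W|
    return c - r, c + r

def interval_relu(lo, hi):
    """ReLU is monotone, so applying it to the bounds elementwise is sound."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def ibp_reach(lo, hi, layers):
    """Return a box that over-approximates the output reachable set."""
    for W, b in layers[:-1]:
        lo, hi = interval_affine(lo, hi, W, b)
        lo, hi = interval_relu(lo, hi)
    W, b = layers[-1]
    return interval_affine(lo, hi, W, b)

# A toy 2-2-1 ReLU network and the input box [-1, 1]^2.
layers = [
    (np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])),
    (np.array([[1.0, 1.0]]), np.array([0.0])),
]
out_lo, out_hi = ibp_reach(np.array([-1.0, -1.0]), np.array([1.0, 1.0]), layers)
```

A safety property of the form "the output never exceeds a threshold" is then verified by checking the threshold against `out_hi`; if the over-approximation is too coarse, the check may fail even for a safe network, which is exactly the conservatism discussed next.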
Overly conservative over-approximations, however, often render many safety
properties unverifiable in practice. This conservatism mainly results from the
wrapping effect, which is the accumulation of over-approximation errors through
layer-by-layer propagation. As the extent of the wrapping effect correlates strongly
with the size of the input set [44], techniques that partition the input set and
independently compute output reachable sets of the resulting subsets are often
adopted to reduce the wrapping effect, especially for large input sets. Such par-
titioning may, however, produce a large number of subsets, generally exponential
in the input dimensionality. This induces heavy demands on computation time
and memory, often rendering existing reachability analysis techniques unsuitable
for the safety verification of complex NNs in real applications. Therefore,
exploring subsets of the input set rather than the entire input set could reduce
the computational burden and thus accelerate verification considerably.
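The exponential blow-up of partitioning can be seen in a small sketch; the uniform grid partition and the particular dimensions below are hypothetical choices for illustration.

```python
# Sketch: uniformly splitting a d-dimensional input box into k cells per
# axis produces k**d subsets, each of which would need its own
# reachability computation. The numbers d = 5, k = 4 are illustrative.
import itertools
import numpy as np

def split_box(lo, hi, k):
    """Yield the k**d subboxes of [lo, hi] from a uniform k-per-axis grid."""
    edges = [np.linspace(l, h, k + 1) for l, h in zip(lo, hi)]
    for idx in itertools.product(range(k), repeat=len(lo)):
        yield (np.array([edges[i][j] for i, j in enumerate(idx)]),
               np.array([edges[i][j + 1] for i, j in enumerate(idx)]))

d, k = 5, 4
lo, hi = -np.ones(d), np.ones(d)
subboxes = list(split_box(lo, hi, k))
print(len(subboxes))   # k**d = 4**5 = 1024 subsets for only 5 dimensions
```

Even a modest 5-dimensional input with 4 cells per axis already yields 1024 subsets; for image-sized inputs the count is astronomically large, which motivates analyzing only a lower-dimensional subset such as the input set's boundary.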
In this work we investigate the safety verification problem of NNs from
the topological perspective and extend the set-boundary reachability method,
which was originally proposed in [45] for verifying safety properties of systems
modeled by ODEs, to the safety verification of NNs. In [45], the set-boundary
reachability method performs over-approximate reachability analysis only on the
boundary of the initial set rather than the entire initial set to address safety
verification problems. It was built upon the homeomorphism property of ODEs. This nice