Decompiling x86 Deep Neural Network Executables
Zhibo Liu, Yuanyuan Yuan, Shuai Wang
The Hong Kong University of Science and Technology
{zliudc,yyuanaq,shuaiw}@cse.ust.hk
Xiaofei Xie
Singapore Management University
xfxie@smu.edu.sg
Lei Ma
University of Alberta
ma.lei@acm.org
Abstract
Due to their widespread use on heterogeneous hardware
devices, deep learning (DL) models are compiled into executa-
bles by DL compilers to fully leverage low-level hardware
primitives. This approach allows DL computations to be un-
dertaken at low cost across a variety of computing platforms,
including CPUs, GPUs, and various hardware accelerators.
We present BTD (Bin to DNN), a decompiler for deep neu-
ral network (DNN) executables. BTD takes DNN executables
and outputs full model specifications, including types of DNN
operators, network topology, dimensions, and parameters that
are (nearly) identical to those of the input models. BTD de-
livers a practical framework to process DNN executables
compiled by different DL compilers and with full optimiza-
tions enabled on x86 platforms. It employs learning-based
techniques to infer DNN operators, dynamic analysis to reveal
network architectures, and symbolic execution to facilitate
inferring dimensions and parameters of DNN operators.
Our evaluation reveals that BTD enables accurate recov-
ery of full specifications of complex DNNs with millions of
parameters (e.g., ResNet). The recovered DNN specifications
can be re-compiled into a new DNN executable exhibiting
identical behavior to the input executable. We show that BTD
can boost two representative attacks, adversarial example gen-
eration and knowledge stealing, against DNN executables.
We also demonstrate cross-architecture legacy code reuse us-
ing BTD, and envision BTD being used for other critical
downstream tasks like DNN security hardening and patching.
1 Introduction
Recent years have witnessed increasing demand for appli-
cations of deep learning (DL) in real-world scenarios. This
demand has led to extensive deployment of DL models in a
wide spectrum of computing platforms, ranging from cloud
servers to embedded devices. Deploying models across such a wide range of platforms, including GPUs, CPUs, and FPGAs, is challenging given the diversity of hardware characteristics involved (e.g., storage management and compute primitives).
A promising trend is to use DL compilers to manage
and optimize these complex deployments on multiple plat-
forms [22,64,85]. A DL compiler takes a high-level model
specification (e.g., in ONNX format [6]) and generates cor-
responding low-level optimized binary code for a variety of
hardware backends. For instance, TVM [22], a popular DL
compiler, generates DNN executables with performance comparable to manually optimized libraries; it can compile models for heterogeneous hardware backends. To date, DL compilers are already used by many edge-device and low-power-chip vendors [48,74,75,83]. Cloud service providers like Amazon and Google are also starting to use DL compilers in their AI services for performance improvements [14,101]. In particular, Amazon and Facebook have spent considerable effort compiling DL models on Intel x86 CPUs using DL compilers [49,61,69].
Compilation of high-level models into binary code typically
involves multiple optimization cycles [22,64,85]. DL com-
pilers can optimize code utilizing domain-specific hardware
features and abstractions. Hence, generated executables man-
ifest distinct representations of the high-level models from
which they were derived. However, we observe that different
low-level representations of the same DNN operator in exe-
cutables generally retain invariant high-level semantics, as
DNN operators like ReLU and Sigmoid, are mathematically
defined in a rigorous manner. This reveals the opportunity of
reliably recovering high-level models by extracting semantics
from each DNN operator’s low-level representation.
Extracting DNN models from executables can boost many
security applications, including adversarial example genera-
tion, training data inference, legacy DNN model reuse, migra-
tion, and patching. In contrast, existing model-extraction at-
tacks, whether based on side channels [30,45,46,104,105,116]
or local retraining [76,78,79,94], assume specific attack en-
vironments or can leak only parts of DNN models with low
accuracy or high overhead.
We propose BTD, a decompiler for DNN executables.
Given a (stripped) executable compiled from a DNN model,
[Figure 1: The high-level workflow of DL compilation.
(a) DNN compilation pipeline: Model Specification → Computation Graph Creation → Graph IR & Optimization → Low-Level IR → Hardware-specific Optimization → (Auto) Scheduling & (Auto) Tuning → Code Gen & Optimization → DNN Executable.
(b) Sample DNN computation graph (Conv → ReLU → Conv → Pool): the DL compiler frontend looks for holistic optimization chances like mergeable nodes, whereas the backend explores efficient machine code for each operator.]
we propose a three-step approach for full recovery of DNN op-
erators, network topology, dimensions, and parameters. BTD
conducts representation learning over disassembler-emitted
assembly code to classify assembly functions as DNN oper-
ators, such as convolution layers (Conv). Dynamic analysis
is then used to chain DNN operators together, thus recover-
ing their topological connectivity. To further recover dimen-
sions and parameters of certain DNN operators (e.g., Conv),
we launch trace-based symbolic execution to generate sym-
bolic constraints, primarily over floating-point-related com-
putations. The human-readable symbolic constraints denote
semantics of corresponding DNN operators that are invariant
across different compilation settings. Experienced DL experts
can infer higher-level information about operators (e.g., di-
mensions, the memory layout of parameters) by reading the
constraints. Nevertheless, to deliver an automated pipeline,
we then define patterns over symbolic constraints to automat-
ically recover dimensions and memory layouts of parameters.
We incorporate taint analysis to largely reduce the cost of the more heavyweight symbolic execution.
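To sketch the flavor of this last step (a toy illustration under our own simplifying assumptions, not BTD's implementation), suppose symbolic execution of a compiled dot-product loop replaces every memory load with a symbol; the resulting constraint directly exposes a dimension:

    # Toy sketch: "symbolically execute" a dot-product loop by emitting one
    # symbol per memory load, then pattern-match the constraint. This mimics
    # the idea, not BTD's engine.

    def symbolic_dot_product(n):
        # Each iteration of the (imagined) compiled loop loads w[i] and x[i]
        # and accumulates their product into a single output.
        terms = [f"w[{i}] * x[{i}]" for i in range(n)]
        return " + ".join(terms)

    expr = symbolic_dot_product(4)
    print(expr)  # w[0] * x[0] + w[1] * x[1] + w[2] * x[2] + w[3] * x[3]

    # A semantics-based pattern: the number of distinct weight symbols in
    # the constraint reveals the input length of this (toy) operator.
    assert expr.count("w[") == 4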
BTD is comprehensive as it handles all DNN operators
used in forming computer vision (CV) models in ONNX
Zoo [77]. BTD processes x86 executables, though its core
technique is mostly platform-independent. Decompiling ex-
ecutables on other architectures requires vendor support for
reverse engineering toolchains first. We also find that DNN
“executables” on some other architectures are not in stan-
dalone executable formats. See the last paragraph of Sec. 2
for the significance of decompiling x86 DNN executables,
and see Sec. 8 for discussions on cross-platform support.
BTD was evaluated by decompiling 64-bit x86 executables
emitted by eight versions of three production DL compil-
ers, TVM [22], Glow [85], NNFusion [64], which are de-
veloped by Amazon, Facebook, and Microsoft, respectively.
Full optimizations were enabled in these compilers during our evaluation. BTD scales to recovering DNN models from 65 DNN executables, comprising nearly 3 million instructions, in
60 hours with negligible errors. BTD, in particular, can re-
cover over 100 million parameters from VGG, a large DNN
model, with an error rate of less than 0.1% (for TVM-emitted
executable) or none (for Glow-emitted executable). More-
over, to demonstrate BTD's correctness, we rebuild decom-
piled model specifications with PyTorch. The results show
that almost all decompiled DNN models can be recompiled
into new executables that behave identically to the reference
executables. We further demonstrate that BTD, by decom-
piling executables into DNN models, can boost two attacks,
adversarial example generation and knowledge stealing. We
also migrate decompiled x86 DNN executables to GPUs, and
discuss limits and potential future works. In summary, we
contribute the following:
• This paper, for the first time,¹ advocates for reverse engineering DNN executables. BTD accepts as input (stripped) executables generated by production DL compilers and outputs complete model specifications. BTD can be used to aid in the comprehension, migration, hardening, and exploitation of DNN executables.
• BTD features a three-step approach to recovering high-level DNN models. It incorporates various design principles and techniques to deliver an effective pipeline.
• We evaluate BTD against executables compiled from large-scale DNN models using production DL compilers. BTD achieves high accuracy in recovering (nearly) full specifications of complex DNN models. We also demonstrate how common attacks are boosted by BTD.

¹ This paper was submitted to USENIX Security 2022 (Fall Round) on October 12, 2021. We received a Major Revision decision and re-submitted a revised version to USENIX Security 2023 (Summer Round) on June 7, 2022. When preparing the camera-ready version, we noticed a parallel work, DnD [103], which considers decompiling DNN executables across architectures (BTD only considers x86 executables). Nevertheless, DnD does not deeply explore the impact of compiler optimizations compared to our work.
2 Preliminary
Fig. 1(a) depicts DNN model compilation. DNN compila-
tion can be divided into two phases [58], with each phase manipulating one or several intermediate representations (IRs).
Computation Graph.
DL compiler inputs are typically high-
level model descriptions exported from DL frameworks like
PyTorch [80]. DNN models are typically represented as com-
putation graphs in DL frameworks. Fig. 1(b) shows a simple
graph of a multilayer convolutional neural network (CNN).
These graphs are usually high-level, with limited connections
to hardware. DL frameworks often export computation graphs in ONNX format [6] as DL compiler inputs.
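For concreteness, the snippet below is a minimal sketch (a toy model; the layer sizes and file name are illustrative) of how a PyTorch model is exported to the ONNX format that DL compilers consume:

    import torch
    import torch.nn as nn

    # A toy CNN standing in for a real model; layer sizes are illustrative.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(8 * 30 * 30, 10),
    )

    # torch.onnx.export traces the model on a dummy input and writes the
    # computation graph in ONNX format, which DL compilers accept as input.
    dummy_input = torch.randn(1, 3, 32, 32)
    torch.onnx.export(model, dummy_input, "model.onnx")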
Frontend: Graph IRs and Optimizations.
DL compilers
typically first convert DNN computation graphs into graph
IRs. Hardware-independent graph IRs define graph structure.
Network topology and layer dimensions encoded in graph
IRs can aid graph- and node-level optimizations including
operator fusion, static memory planning, and layout transfor-
mation [22,85]. For instance, operator fusions and constant
folding are used to identify mergeable nodes in graph IRs af-
ter precomputing statically-determinable components. Graph
IRs specify high-level inputs and outputs of each operator,
but do not restrict how each operator is implemented.
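The toy sketch below illustrates the operator-fusion idea on a deliberately simplified graph IR: a linear chain of operator names with hypothetical fusion rules (real graph IRs are DAGs annotated with dimensions and attributes, and real fusion rules are compiler-specific):

    # Toy graph IR: a linear chain of operator names.
    graph = ["Conv", "ReLU", "Conv", "BiasAdd", "ReLU", "Pool"]

    # Hypothetical fusion rules; real compilers derive these from
    # hardware and operator knowledge.
    FUSION_RULES = {("Conv", "ReLU"): "FusedConvReLU",
                    ("BiasAdd", "ReLU"): "FusedBiasAddReLU"}

    def fuse(ops):
        out, i = [], 0
        while i < len(ops):
            pair = tuple(ops[i:i + 2])
            if pair in FUSION_RULES:      # merge two nodes into one kernel
                out.append(FUSION_RULES[pair])
                i += 2
            else:
                out.append(ops[i])
                i += 1
        return out

    print(fuse(graph))  # ['FusedConvReLU', 'Conv', 'FusedBiasAddReLU', 'Pool']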
Backend: Low-Level IRs and Optimizations.
Hardware-
specific low-level IRs are generated from graph IRs. Instead
of translating graph IRs directly into standard IRs like LLVM
IR [55], low-level IRs are employed as an intermediary step
for customized optimizations using prior knowledge of DL
models and hardware characteristics. Graph IR operators can
be converted into low-level linear algebra operators [85]. For
example, a fully connected (FC) operator can be represented
as matrix multiplication followed by addition. Such repre-
sentations alleviate the hurdles of directly supporting many
high-level operators on each hardware target. Instead, trans-
lation to a new hardware target only needs the support of
low-level linear algebra operators. Low-level IRs are usually
memory related. Hence, optimizations at this step can include
hardware intrinsic mapping, memory allocation, loop-related
optimizations, and parallelization [17,22,84,110].
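As a minimal NumPy sketch of the FC example above (shapes are illustrative), the high-level operator lowers to a matrix multiplication followed by a broadcast addition:

    import numpy as np

    batch, in_dim, out_dim = 2, 4, 3
    x = np.random.rand(batch, in_dim).astype(np.float32)
    W = np.random.rand(out_dim, in_dim).astype(np.float32)
    b = np.random.rand(out_dim).astype(np.float32)

    # The high-level FC operator lowered to linear algebra primitives:
    # a matrix multiplication followed by a broadcast addition.
    y = x @ W.T + b
    assert y.shape == (batch, out_dim)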
Backend: Scheduling and Tuning.
Policies mapping an op-
erator to low-level code are called schedules. A compiler
backend often searches a vast combinatorial scheduling space
for optimal parameter settings like loop unrolling factors.
Halide [84] introduces a scheduling language with manual and
automated schedule optimization primitives. Recent works
explore launching auto scheduling and tuning to enhance opti-
mization [12,22,23,70,97,113,114]. These methods alleviate
manual efforts to decide schedules and optimal parameters.
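To illustrate what a schedule controls, the hand-written sketch below (not the output of any auto-scheduler) computes the same matrix product with two loop structures; the tile size is exactly the kind of parameter auto-tuners search over:

    import numpy as np

    def matmul_naive(A, B):
        n = A.shape[0]
        C = np.zeros((n, n), dtype=A.dtype)
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    C[i, j] += A[i, k] * B[k, j]
        return C

    def matmul_tiled(A, B, tile=2):
        # Same semantics, different schedule: loops are tiled so each
        # tile of A and B stays in cache while it is reused.
        n = A.shape[0]
        C = np.zeros((n, n), dtype=A.dtype)
        for ii in range(0, n, tile):
            for jj in range(0, n, tile):
                for kk in range(0, n, tile):
                    for i in range(ii, min(ii + tile, n)):
                        for j in range(jj, min(jj + tile, n)):
                            for k in range(kk, min(kk + tile, n)):
                                C[i, j] += A[i, k] * B[k, j]
        return C

    A, B = np.random.rand(4, 4), np.random.rand(4, 4)
    assert np.allclose(matmul_naive(A, B), matmul_tiled(A, B))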
Backend: Code Gen.
Low-level IRs are compiled to gener-
ate code for different hardware targets like CPUs and GPUs.
When generating machine code, a DNN operator (or sev-
eral fused operators) is typically compiled into an individual
assembly function. Low-level IRs can be converted into mature toolchain IRs like LLVM or CUDA IR [73] to explore
hardware-specific optimizations. For instance, Glow [85] can
perform fine-grained loop-oriented optimizations in LLVM
IR. DL compilers like TVM and Glow compile optimized
IR code into standalone executables. Kernel libraries can be
used by DL compilers NNFusion [64] and XLA [95] to stati-
cally link with DNN executables. Decompiling executables statically linked with kernel libraries is much easier: such executables contain many wrappers around kernel libraries.
These wrappers (e.g., a trampoline to the Conv implementa-
tion in kernel libraries) can be used to infer DNN models. This
work mainly focuses on decompiling “self-contained” exe-
cutables emitted by TVM and Glow, given their importance
and difficulty. For completeness, we demonstrate decompiling
NNFusion-emitted executables in Sec. 4.4.
Real-World Significance of DL Compilers.
DL compilers
offer systematic optimization to improve DNN model adop-
tion. Though many DNN models to date are deployed using DL frameworks like TensorFlow, DL compilers are a growing trend that cannot be disregarded. Suppliers of edge devices and low-power processors are incorporating DL compilers into their applications to reap the benefits of DNN mod-
els [48,74,75,83]. Cloud service providers like Amazon and
Google include DL compilers into their DL services to boost
performance [14,101]. Amazon uses DL compilers to compile
DNN models on Intel x86 CPUs [49,61]. Facebook deploys
Glow-compiled DNN models on Intel CPUs [69]. Overall, DL
compilers are increasingly vital to boost DL on Intel CPUs,
embedded devices, and other heterogeneous hardware back-
ends. We design BTD, a decompiler for Intel x86 DNN exe-
cutables. We show how BTD can accelerate common DNN
attacks (Appendix D) and migrate DNN executables to GPUs
(Sec. 8). Sec. 8 explains why BTD does not decompile executables on GPUs/accelerators: GPU/accelerator platforms lack disassemblers and dynamic instrumentation infrastructures, and DL compiler support for GPU platforms is immature (e.g., they cannot generate standalone executables).
3 Decompiling DNN Executables
Definition.
BTD decompiles DNN executables to recover high-level DNN specifications. The full specifications include: ① DNN operators (e.g., ReLU, Pooling, and Conv) and their topological connectivity, ② dimensions of each DNN operator, such as #channels in Conv, and ③ parameters of each DNN operator, such as weights and biases, which are important configurations learned during model training. Sec. 4 details BTD's processes to recover each component.
Query-Based Model Extraction.
Given a (remote) DNN model with obscure specifications, adversaries can continuously feed inputs x to the model and collect its prediction outputs y. This way, adversaries can gradually assemble a training dataset (x, y) to train a local model [79,96].
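A minimal sketch of this loop, where query_victim is a hypothetical stand-in for the remote model and the surrogate architecture is a guess (everything here is illustrative):

    import torch
    import torch.nn as nn

    def query_victim(x):
        # Hypothetical stand-in for querying the remote victim model; a
        # real attack issues API calls and collects prediction outputs y.
        with torch.no_grad():
            return torch.softmax(x @ torch.ones(16, 10), dim=1)

    surrogate = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

    for _ in range(100):
        x = torch.randn(64, 16)   # attacker-chosen queries
        y = query_victim(x)       # victim's soft labels
        # Cross-entropy of the surrogate against the victim's soft labels.
        loss = -(y * torch.log_softmax(surrogate(x), dim=1)).sum(dim=1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()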
This approach may face the following challenges: 1) for a DNN executable without prior knowledge of its functionality, it is unclear how to prepare inputs x aligned with its normal inputs; 2) even if the functionality is known, it may still be challenging to prepare a non-trivial collection of x for models trained on private data (e.g., medical images); 3) local retraining may require rich hardware and is costly; and 4) existing query-based model extraction generally requires prior knowledge of model architectures and dimensions [79]. In contrast, BTD only requires a valid input. For instance, a meaningless image is sufficient to decompile executables of CV models.
Also, according to the notation in Definition, local retraining assumes ① + ② as prior knowledge, whereas BTD fully recovers ① + ② + ③ from DNN executables.
Model Extraction via Side Channels.
Architectural-level hints (e.g., side channels) leaked during model inference can be used for model extraction [30,45,46,104,105,116]. These works primarily recover high-level model architectures, which correspond to ① or ① + ② in our notation in Definition. In contrast, BTD statically recovers ① and then dynamically recovers ② + ③ from DNN executables (but coverage is not an issue; see Sec. 4.2 for clarification). Sec. 9 further compares BTD with prior model extraction works.
[Figure 2: Comparison of CFGs of a Conv operator in VGG16 compiled by different DL compilers: (a) Glow, (b) TVM -O0, (c) TVM -O3, (d) NNFusion. "TVM -O0" refers to enabling no optimization, while "-O3" enables full optimizations. Glow and NNFusion apply full optimizations by default.]
Comparison with C/C++ Decompilation.
BTD is different from C/C++ decompilers. C/C++ decompilation takes an executable and recovers C/C++ code that is visually similar to the original source code. In contrast, we explore decompiling
DNN executables to recover original DNN models. The main
differences and common challenges are summarized below.
Statements vs. Higher-Level Semantics:
Software decompilation, holistically speaking, translates machine instructions line by line into C/C++ statements. In contrast, BTD recovers
higher-level model specifications from machine instructions.
This difference clarifies that a C decompiler is not sufficient
for decompilation of DNN executables.
Common Uncertainty:
There is no fixed mapping between
C/C++ statements and assembly instructions. Compilers may
generate distinct low-level code for the same source state-
ments. Therefore, C/C++ decompilers extensively use heuris-
tics/patterns when mapping assembly code back to source
code. Likewise, DL compilers may adopt different optimiza-
tions for compiling the same DNN operators. The compiled
code may exhibit distinct syntactic forms. Nevertheless, the
semantics of DNN operators are retained, and we extract the
invariant semantics from the low-level instructions to infer
the high-level model specifications. See Sec. 4.3 for details.
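As a high-level analogy in Python (stand-ins for compiler-emitted code, not actual output), the two ReLU implementations below differ syntactically, much as branchy scalar code differs from branch-free SIMD code, yet satisfy the same invariant semantics max(x, 0):

    import numpy as np

    def relu_branchy(xs):
        # Analogous to scalar code with a conditional branch per element.
        return np.array([x if x > 0 else 0.0 for x in xs])

    def relu_vectorized(xs):
        # Analogous to SIMD code using a max instruction; no branches.
        return np.maximum(xs, 0.0)

    xs = np.random.randn(8)
    # Distinct "syntactic forms", identical semantics -- the invariant
    # that can be extracted to identify the operator.
    assert np.allclose(relu_branchy(xs), relu_vectorized(xs))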
End Goal:
C/C++ compilation prunes high-level program fea-
tures, such as local variables, types, symbol tables, and high-
level control structures. Software decompilation is fundamen-
tally undecidable [25], and to date, decompiled C/C++ code
mainly aids (human-based) analysis and comprehension, not
recompilation. Generating “recompilable” C code is very
challenging [32,98,99,102]. In this regard, DNN compila-
tion has comparable difficulty, as compilation and optimiza-
tion discard information from DNN models (e.g., by fusing
neighbor operators). BTD decompiles DNN executables into
high-level DNN specifications, resulting in a functional exe-
cutable after recompilation. Besides helping (human-based)
comprehension, BTD boosts model reuse, migration, security
hardening, and adversarial attacks. See case studies in Sec. 8
and Appendix D.
Opacity in DNN Executables.
Fig. 2 compares VGG16 [89] executables compiled using three DL compilers. For simplicity, we only plot the control flow graphs (CFGs) of VGG16's first Conv operator. These CFGs were extracted using IDA Pro [41]. Although this Conv is only one of 41 nodes in VGG16, Glow compiles it into a dense CFG (Fig. 2(a)). Sec. 2 introduced graph-level optimizations that selectively merge neighbor nodes. Comparing the CFGs generated by TVM -O0 (Fig. 2(b)) and by TVM -O3 (Fig. 2(c)), we find that optimizations (e.g., operator fusion) in TVM can make the CFG more succinct. We also present the CFG emitted by NNFusion in Fig. 2(d): NNFusion-emitted executables are coupled with the Mlas [67] kernel library. This CFG depicts a simple trampoline to the Conv implementation in MlasGemm.
As shown in Fig. 2, different compilers and optimizations can result in complex and distinct machine code realizations. However, BTD is designed as a general approach for decompiling executables compiled under these diverse settings.
Design Focus.
Reverse engineering is generally sensitive to
the underlying platforms and compilation toolchains. As the
first piece of work in this field, BTD is designed to process
common DNN models compiled by standard DL compilers.
Under such conservative and practical settings, BTD delivers
highly encouraging and accurate decompilation. Similarly,
obfuscation can impede C/C++ decompilation [62]. Modern
C/C++ decompilers are typically benchmarked on common
software under standard compilation and optimization [16,
21,99,102], rather than extreme cases. We leave studying the decompilation of obfuscated DL executables as future work.
4 Design
Decompiling DNN executables is challenging due to the mis-
match between instruction-level semantics and high-level
model specifications. DNN executables lack high-level in-
formation regarding operators, topologies, and dimensions.
Therefore, decompiling DNN executables presents numerous
reverse engineering hurdles, as it is difficult to deduce high-
level model specifications from low-level instructions. We
advocate that DL decompilers satisfy the following criteria:

R1 (Generalizability): Avoid brittle assumptions; generalize across compilers, optimizations, and versions.
R2 (Correctness): Use effective, resilient methods and produce correct outputs.
R3 (Performance): Be efficient when necessary.
R4 (Automation): Avoid manual analysis and automate the decompilation process.
BTD delivers practical decompilation based on the invariant semantics of DNN operators, aiming to meet all four criteria. Our intuition is simple:
[Figure 3: Decompilation workflow.
(a) Workflow: DNN Executable → Disassembling → DNN Operator Recovery → Topology Recovery → Dimension & Parameter Recovery → Model.
(b) Four types of operators:

Type | Dimension | Parameter | Operators
I    | NA        | NA        | ReLU; Sigmoid; Add; Sub; Negative; Sqrt; ExpandDims; BatchFlatten; ...
II   | ✓         | NA        | Pooling
III  | NA        | ✓         | BiasAdd; Multiply; Divide; BN
IV   | ✓         | ✓         | Conv; FC; Embedding

Here "NA" in the "Dimension" column denotes an easy case where the output dimension of an operator O equals its input dimension and no other dimensions are associated with O. We find that in non-trivial DNNs, it is sufficient to decide O's dimensions after propagating dimensions from other operators on the DNN computation graph.]
DL compilers generate distinct low-level code but retain the high-level semantics of operators, because DNN operators are generally defined in a clean and rigorous manner. Therefore, recovering operator semantics
should facilitate decompilation generic across compilers and
optimizations (
R1
). Besides, as invariant semantics reflect
high-level information, e.g., operator types and dimensions,
we can infer model abstractions accurately (R2).
Fig. 3(a) depicts the BTD workflow. Sec. 4.1 describes
learning-based techniques for recognizing assembly functions
as DNN operators like Conv. Given recovered DNN operators,
we reconstruct the network topology using dynamic analysis
(Sec. 4.2). We then use trace-based symbolic execution to extract operator semantics from assembly code, recovering dimensions and parameters with semantics-based patterns (Sec. 4.3.2). Some operators are too costly for symbolic execution to analyze in full; as noted in Sec. 4.3.1, we use taint analysis to keep only tainted sub-traces for the more heavyweight symbolic execution (R3). BTD is an end-to-end, fully automated DNN decompiler (R4). BTD produces model specifications that behave identically to the original models; its focus and the challenges it addresses are distinct from those of C/C++ decompilation. BTD does not guarantee 100% correct outputs. In Sec. 5, we discuss procedures users can follow to fix errors.
Dimensions and parameters configure DNN operators. We
show representative cases in Fig. 3(b). Type I operators, in-
cluding activation functions like ReLU and element-wise
arithmetic operators, do not ship with parameters; recovering
their dimensions is trivial, as clarified in the caption of Fig. 3.
Type II and III operators require dimensions or parameters, such as Pooling's stride S and kernel size K. Unlike simple arithmetic operators, BiasAdd involves a bias B as an extra parameter. Type IV operators require both parameters and dimensions; these operators form most DNN models. Sec. 7.1 empirically demonstrates the "comprehensiveness" of our study.
BTD recovers dimensions/parameters of all DNN opera-
tors used by CV models in ONNX Zoo (see Sec. 7.1). Due to
limited space, Sec. 4.3 only discusses decompiling the most
challenging operator, Conv. The core techniques explained in
Sec. 4.3 are utilized to decompile other DNN operators. How-
ever, other operators may use different (but simpler) patterns.
Appendix C lists other operator patterns. We further discuss
the extensibility of BTD in Sec. 7.3.
Disassembling and Function Recovery.
BTD targets 64-bit
x86 executables. Cross-platform support is discussed in Sec. 8.
BTD supports stripped executables without symbol or debug
information. We assume that DNN executables can be first
flawlessly disassembled with assembly functions recovered.
According to our observation, obstacles that can undermine
disassembly and function recovery in x86 executables, e.g., in-
struction overlapping and embedded data [32], are not found even in highly optimized DNN executables. We use a com-
mercial decompiler, IDA-Pro [41] (ver. 7.5), to maximize
confidence in the credibility of our results.
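While BTD relies on IDA Pro, the flavor of the disassembly step can be sketched with the open-source Capstone library as a stand-in (the byte string is an arbitrary hand-picked example, not taken from any DNN executable):

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    # A few hand-picked x86-64 bytes: push rbp; mov rbp, rsp.
    code = b"\x55\x48\x89\xe5"

    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for insn in md.disasm(code, 0x1000):
        print(f"{insn.address:#x}\t{insn.mnemonic}\t{insn.op_str}")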
Compilation Provenance. Given a DNN executable e, its compilation provenance includes: 1) which DL compiler is used, and 2) whether e is compiled with full optimization (-O3) or no optimization (-O0). Since some DNN operators (e.g., type IV in Fig. 3(b)) in e are highly optimized when compiled, the compilation provenance can be inferred automatically by analyzing patterns over sequences of x86 instructions derived from e. We extend our learning-based method from Sec. 4.1 to predict compilation provenance from assembly code. Our evaluation over all CV models in ONNX Zoo finds no errors. Overall, we assume that compilation provenance is known to BTD. Therefore, some patterns can be designed separately for Glow- and TVM-emitted executables; see details in Appendix C. To show that e's decompilation is flawless, we must recompile decompiled DNN models with the same provenance (see Sec. 7.1.4). Using different compilation provenances may induce (small) numerical accuracy discrepancies and is undesirable.
This section focuses on decompilation of self-contained
DNN executables compiled by TVM and Glow. Decompila-
tion of NNFusion-emitted executables is easier because of its
distinct code generation paradigm. We discuss decompiling
NNFusion-emitted executables in Sec. 4.4.
4.1 DNN Operator Recovery
As introduced in Sec. 2, one or a few fused DNN operators are
compiled into an assembly function. We train a neural model
to map assembly functions to DNN operators. Recent works
perform representation learning by treating x86 opcodes as
natural language tokens [28,29,59,81,108]. These works help
comprehend x86 assembly code and assist downstream tasks
like matching similar code. Instead of defining explicit patterns over x86 opcodes to infer DNN operators (which would be tedious and require manual effort), we use representation learning and treat x86 opcodes as language tokens.
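The sketch below conveys this idea with a toy opcode vocabulary and an untrained classifier; it is purely illustrative and does not reflect BTD's actual model or features:

    import torch
    import torch.nn as nn

    # Toy vocabulary of x86 opcodes treated as language tokens.
    vocab = {"<pad>": 0, "vmulps": 1, "vaddps": 2, "vmaxps": 3, "mov": 4, "add": 5}
    labels = {0: "Conv", 1: "ReLU"}

    class OperatorClassifier(nn.Module):
        def __init__(self, vocab_size, n_classes, dim=16):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
            self.rnn = nn.LSTM(dim, dim, batch_first=True)
            self.head = nn.Linear(dim, n_classes)

        def forward(self, token_ids):
            h, _ = self.rnn(self.embed(token_ids))
            return self.head(h[:, -1])  # classify from the last hidden state

    # One (toy) assembly function's opcode sequence -> predicted operator.
    model = OperatorClassifier(len(vocab), len(labels))
    seq = torch.tensor([[vocab[t] for t in ["mov", "vmulps", "vaddps", "add"]]])
    pred = model(seq).argmax(dim=1).item()
    print("predicted operator:", labels[pred])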