BLADERUNNER
Rapid Countermeasure for Synthetic (AI-Generated) StyleGAN Faces
Adam Dorian Wong
MIT Lincoln Laboratory
Group 52
01 September 2022
MIT/LL: adam.wong[at]ll.mit.edu
Gmail:
Twitter: @MalwareMorghulis
OTX: MalwareMorghulis
GitHub (Personal): https://github.com/MalwareMorghulis
GitHub (MIT/LL): https://github.com/mit-ll/BLADERUNNER (DOI: 10.5281/zenodo.7186014)
DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited.
This material is based upon work supported by the Department of the Air Force under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or
recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of the Air Force.
© 2022 Massachusetts Institute of Technology.
Delivered to the U.S. Government with Unlimited Rights, as defined in DFARS Part 252.227-7013 or 7014 (Feb 2014). Notwithstanding any copyright notice, U.S. Government
rights in this work are defined by DFARS 252.227-7013 or DFARS 252.227-7014 as detailed above. Use of this work other than as specifically authorized by the U.S.
Government may violate any copyrights that exist in this work.
Adam D. Wong
@MalwareMorghulis -1-
Abstract
StyleGAN is NVIDIA’s open-sourced TensorFlow implementation of a style-based generative adversarial network. It has revolutionized high-quality facial image generation. However, this democratization of Artificial Intelligence /
Machine Learning (AI/ML) algorithms has enabled hostile threat actors to establish cyber personas
or sock-puppet accounts on social media platforms backed by ultra-realistic synthetic faces. This report
surveys the relevance of AI/ML with respect to Cyber & Information Operations. The proliferation
of AI/ML algorithms has led to a rise in DeepFakes and inauthentic social media accounts. Threats
are analyzed within the Strategic and Operational Environments. Methods of identifying
synthetic faces exist, but they rely on human beings visually scrutinizing each photo for
inconsistencies. However, using DLIB’s pre-trained 68-landmark predictor file, it is possible to
analyze and detect synthetic faces by exploiting repetitive behaviors in StyleGAN images. Project
Blade Runner encompasses two scripts necessary to counter StyleGAN images. Through
PapersPlease.py acting as the analyzer, it is possible to derive indicators-of-attack (IOA) from
scraped image samples. These IOAs can be fed back into among_us.py acting as the detector to
identify synthetic faces in live operational samples. The open-source copy of Blade Runner may
lack additional unit tests and some functionality, but this redacted version is
far leaner, better optimized, and serves as a proof-of-concept for the information security community. The
desired end-state is to incrementally add automation to stay on par with its closed-source
predecessor.
Going forward, the PapersPlease, papersplease.py, and papers_please.py naming schemes will be used interchangeably, as will Blade
Runner and Among Us (AmongUs, among_us.py, or amongus.py).
Introduction
Artificial Intelligence (AI) & Machine Learning (ML) are increasing areas of concern in
the realm of cybersecurity. Adversaries are exploiting bleeding-edge technologies to enable cyber
and information operations. Threat actors are leveraging DeepFakes and synthetic facial imagery
to manipulate others. Democratization of AI/ML-based technologies poses a significant and
continual cyber risk to national security. CNN reported that OpenAI refused to release their AI out
of concern for abuse [1]. Open-source reporting suggests that proliferation of AI-generated
imagery has been a key enabler for espionage, trolling, and harassment [2] [3].
DeepFakes remain a significant threat through projection of misinformation (misleading)
and disinformation (deception) campaigns. However, one specific type of fake comes in the form
of AI-generated synthetic facial images. Issues in the Strategic Environment (SE) derive from geo-
political propriety (or lack thereof). International law has not kept up with advances in technology.
For example, the Tallinn Manual has not yet addressed advances in AI/ML and its dangerous
potential in war. The technology has not been adequately regulated either. In Operational
Environments, AI/ML technologies are actively exploited by threat actors seeking to sow discord,
maliciously influence others, or engage in social-engineering activities. People are prone to trust
DeepFakes because production quality keeps improving and, possibly, because of
cognitive biases. Social media personas leverage these synthetic photos as a next-generation
alternative to stolen real photographs or generic stock images.
Blade Runner leverages pre-trained ML-predictor files to detect StyleGAN images through
exploitable repetitive behaviors and Indicators-of-Attack (IOAs). Future iterations of open-source
Blade Runner will automate certain tasks to stay on-par with its closed-source counterpart.
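The repetitive behavior being exploited is that StyleGAN aligns every output face to the same template, so facial landmarks (notably the eyes) tend to fall near fixed pixel coordinates in the 1024x1024 frame. The sketch below illustrates the idea against plain (x, y) landmark tuples rather than DLIB detector output; the expected coordinates and tolerance are hypothetical placeholders, not the actual IOAs the closed-source tooling derives.

```python
# Hypothetical sketch of a landmark-consistency check in the spirit of
# among_us.py. Indices 36-47 cover the eyes in the standard 68-point
# facial-landmark scheme used by DLIB's pre-trained predictor.
LEFT_EYE = range(36, 42)
RIGHT_EYE = range(42, 48)

# Placeholder IOA values: mean eye centers that an analyzer such as
# papers_please.py might derive from scraped StyleGAN samples.
EXPECTED_LEFT = (385.0, 475.0)
EXPECTED_RIGHT = (640.0, 475.0)
TOLERANCE = 15.0  # pixels

def centroid(points):
    """Mean (x, y) of a set of landmark points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def looks_stylegan(landmarks, tol=TOLERANCE):
    """Flag a face whose eye centroids sit suspiciously close to the
    fixed positions produced by StyleGAN's face alignment."""
    left = centroid([landmarks[i] for i in LEFT_EYE])
    right = centroid([landmarks[i] for i in RIGHT_EYE])

    def close(actual, expected):
        return (abs(actual[0] - expected[0]) <= tol
                and abs(actual[1] - expected[1]) <= tol)

    return close(left, EXPECTED_LEFT) and close(right, EXPECTED_RIGHT)
```

In a real pipeline the 68 points would come from DLIB's shape predictor run over each scraped image, and the expected coordinates would be learned from known-synthetic samples rather than hard-coded.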
Artificial Intelligence
In 2016, Microsoft created an AI chatbot on Twitter, Tay, to conduct an experiment in
learning “conversations”. The premise was that with more interaction (more data), the
AI would act more human through learning. However, Twitter users disrupted Tay-AI’s learning
and radicalized the chatbot [4]. Needless to say, AI learning can easily be disrupted by bad actors.
“DeepFake” is a portmanteau of “Deep Learning” (an ML technique) and “Fake [News]”. The
term was first observed in 2017 via a Reddit user account called u/deepfakes [5]. This Redditor
grafted celebrity faces onto pornographic media, but their account now lies devoid of any content
[6]. This same technology has been used in Hollywood to posthumously revive beloved characters
such as Grand Moff Tarkin and Princess Leia in Star Wars: Rogue One [7]. However,
in the hands of threat actors, the technology has been abused to misinform or deceive audiences,
degrade public image, blackmail or embarrass geopolitical leaders, socially engineer others, or
sow discord in already chaotic environments [8] [9]. It has been used in memes where Hollywood
actor Nicolas Cage’s likeness is superimposed onto Harrison Ford’s character in Indiana Jones:
Raiders of the Lost Ark [10]. Nevertheless, the computer science areas of AI/ML are exponentially
advancing. AI-assisted media manipulation will be a contested issue between freedom-of-research
and misuse by hostile actors.
The capabilities have since been cyclically advanced and refined. Due to this nature of
continuous improvement, technology for synthesizing DeepFakes has become common in
industry. The technology has been democratized by different research entities and thus more
openly-available to the public [11]. Academic research at leading technical universities has
compelled the nation to expand in the AI/ML space [12] [13]. In bygone eras, this was called
“doctoring” media or “photoshopping” (after the Adobe product). Today, these are known
as DeepFakes.
About StyleGAN
NVIDIA is a well-respected company making advances in the Graphical Processing Unit
(GPU) and graphics card industry. Their research division operates heavily in the AI/ML space,
and that research has facilitated unparalleled advances in computer-generated graphics. It
must be acknowledged that StyleGAN produces ultra-realistic synthetic faces at 1024x1024
resolution.
StyleGAN is built on Generative Adversarial Networks (GANs). DeepFakes rely on multiple algorithms to
synthesize images: some images are manipulated through encoding processes that overlay latent
images, while others use ML algorithms to intelligently blend photos, as early forms of
DeepFakes did. It should be known that StyleGAN itself is not a malicious tool, nor was the research
itself malicious. The application of StyleGAN by hostile actors is what makes the tool dangerous.
Essentially, GANs rely on a generator and a discriminator competing against each other to create these ultra-realistic images.
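The generator/discriminator dynamic can be made concrete with the textbook GAN objective, in which the discriminator is trained to score real images near 1 and generated images near 0, while the generator is trained to push its fakes toward a score of 1. A minimal numeric sketch of those two losses (the standard formulation, not NVIDIA's StyleGAN code) is:

```python
import math

def discriminator_loss(d_real, d_fake):
    """Discriminator's binary cross-entropy: it wants D(real) -> 1
    and D(fake) -> 0, so loss falls as it separates the two."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: the generator wants the
    discriminator to score its fakes near 1."""
    return -math.log(d_fake)

# At equilibrium the discriminator is fooled half the time
# (D(real) = D(fake) = 0.5), giving discriminator_loss = 2*ln(2).
```

Training alternates between minimizing these two losses; StyleGAN's contribution lies in the generator's architecture, not in changing this adversarial structure.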
Evolution of StyleGAN:
2018
o NVIDIA proposed StyleGAN as their “official TensorFlow implementation,” which
used the Flickr-Faces-HQ (FFHQ) dataset to train their neural network
[14] [15].
2019
o NVIDIA implemented improvements to the existing StyleGAN, enhancing its
ability to generate synthetic imagery as StyleGAN2 [16].
2021
o StyleGAN3 addresses a weakness of StyleGAN2 in which certain features are
placed at fixed coordinates [17].
o StyleGAN-NADA uses text to morph images from one domain to another
with limited training (such as NVIDIA’s example of a dog shifting to The Joker) [18].