Bridging the Gap between Artificial Intelligence and Artificial
General Intelligence: A Ten Commandment Framework for
Human-Like Intelligence
Ananta Nair1,2 and Farnoush Banaei-Kashani1
1. University of Colorado, Denver, 2. Dell Technologies Inc
Abstract
The field of artificial intelligence has seen explosive growth and exponential success. The last
phase of development showcased deep learning's ability to solve a variety of difficult problems
across a multitude of domains. Many of these networks met and exceeded human benchmarks
by becoming experts in the domains in which they are trained. Though the successes of artificial
intelligence have begun to overshadow its failures, there is still much that separates current
artificial intelligence tools from becoming the exceptional general learners that humans are. In this
paper, we identify the ten commandments upon which human intelligence is systematically and
hierarchically built. We believe these commandments work collectively to serve as the essential
ingredients that lead to the emergence of higher-order cognition and intelligence. This paper
discusses a computational framework that could house these ten commandments, and suggests
new architectural modifications that could lead to the development of smarter, more explainable,
and generalizable artificial systems inspired by a neuromorphic approach.
Introduction
Though the concept of artificial intelligence (AI) may seem like a futuristic prospect, the desire to
create man-made intelligence has long been a human yearning. Dating back to approximately 700
BCE is the ancient Greek myth of Talos, the first robot. Talos was created by the god Hephaestus
to serve as guard of Crete and hurl boulders at incoming vessels. One day a ship approached the
island, which, unknown to the behemoth automaton, would serve as his greatest challenge. In
an attempt to escape the machine, Medea, a sorceress onboard, constructed an ingenious plan
and offered the robot a bargain: eternal life in return for removing his only bolt.
Surprisingly, this offer resonated with Talos, who had not yet come to grips with his own nature,
nor understood his longings for human desires such as immortality. Though the tale ended
tragically for the giant, it does speak to the desires and fears humans have long had of creating
intelligence and the blurred line between man and machine, a challenge that is becoming
increasingly prominent today.
From mechanical toys of the ancient world to the looming dystopian apocalyptic scenarios of
science fiction, the creation and progression of AI has long been on the human mind. Though
many would argue the field has seen many bursts and busts, the last decade has resulted in the
most remarkable progress to date. Largely these advances have come from deep neural
networks that have become experts in areas such as vision, natural language processing, and
reinforcement learning (Brown et al., 2020, Ramesh et al., 2022, Chowdhery et al., 2022,
Schrittwieser et al., 2020, Ye et al., 2021, Baker et al., 2019, Arnab et al., 2021). Their breadth of
application is expansive and the commercial and practical deployments being undertaken seem
almost endless. Applications such as facial recognition (Balaban et al., 2015) have become
commonplace on our phones and in our daily lives; virtual assistants like Alexa are a significant
step up from IBM's Shoebox (Soofastaei, 2021), which could recognize only sixteen words and
digits; online translators are capable of accurately translating between many languages
(Fan et al., 2021); and reinforcement learning agents have achieved human and
superhuman performance on a range of complex tasks such as board and video games
(Schrittwieser et al., 2020, Silver et al., 2018, Berner et al., 2019, Vinyals et al., 2019). Even in
academia, these tools have resulted in significant breakthroughs in a diverse set of endeavors,
including weather prediction, protein unfolding, mental health, medicine, robotics, and
astrophysics (Ravuri et al., 2021, AlQuraishi, 2019, Su et. al, 2020, Lusk et al., 2021, Gillenwater
et al., 2021, Pierson & Gashler, 2017, Huerta et al., 2019, George & Huerta, 2018).
Inarguably, each of these undertakings deserves to be commended for its unique
accomplishments. However, much work remains before the field achieves
general intelligence akin to the natural world. Deep networks, in collaboration with advances
in GPUs, have accelerated data processing to master pattern recognition and other
statistics-based learning, supervised and rule-based learning, unsupervised learning and train-test
generalization, and reinforcement and multi-agent learning (Zhu, 2005, Caruana & Niculescu-
Mizil, 2006, Sutton & Barto, 2018, O'Reilly et al., 2021, Mollick et al., 2020, Baker et al., 2019,
Stooke et al., 2021). These ingenious mathematical algorithms and data-intensive processing
techniques have led to great success in ideal conditions by converting a sparse problem space into a
dense sampling whose distribution can be integrated over or overfit on. These networks can even
further transfer to out-of-distribution learning, doing so by integrating over both training and testing
sets (Vogelstein et al., 2022). However, a common side-effect and growing problem of this type
of approach has been the creation of data-sensitive black boxes that struggle to generalize
outside of their well defined parameterizations (Lake et al., 2017, Geirhos et al., 2020). These
challenges make current AI tools great optimizers but they still fall short of traditional intelligence.
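As a toy illustration of this point (our own sketch, not taken from the cited works), the snippet below fits a high-capacity model to a densely sampled region: interpolation inside the training support is nearly perfect, while the same well-defined parameterization fails to generalize outside it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Convert a sparse problem into a dense sampling: draw many training
# points from one region and fit a high-capacity (degree-9) polynomial.
x_train = rng.uniform(0.0, np.pi, 500)
y_train = np.sin(x_train)
coeffs = np.polyfit(x_train, y_train, deg=9)

# In-distribution: interpolation over the densely sampled region succeeds.
x_in = np.linspace(0.0, np.pi, 100)
err_in = np.max(np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)))

# Out-of-distribution: the same model, a great optimizer on its training
# support, fails to generalize beyond its well-defined parameterization.
x_out = np.linspace(np.pi, 2 * np.pi, 100)
err_out = np.max(np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)))

print(f"in-distribution max error: {err_in:.2e}")      # tiny
print(f"out-of-distribution max error: {err_out:.2e}")  # large
```

The in-distribution error is negligible while the extrapolation error is orders of magnitude larger, mirroring the gap between statistical mastery and generalization described above.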
In comparison, natural intelligence is an exceptional example of evolutionary engineering, with
the human brain regarded as the premier example. Given the latest successes but large
limitations of deep networks, the field of AI is now more than ever drawing comparisons and
parallels between what neural networks can do and what natural intelligence is capable of
(Vogelstein et al., 2022, Silver et al., 2021, Richards et al., 2019). The brain, unlike AI, does not
demonstrate or strive for exceptional performance at every task it attempts in a hopeless effort to
maximize its reward function. Instead, it creates exceptional universal learners that are capable
of generalizing their abstract representations and skills to learn any task quickly and easily. This
methodology does not lead to exceptional performance across every task; instead, behavior is
adjusted through goal-driven learning to create a system that excels only at high-value
objectives while other tasks retain more moderate probabilities of success. This type of learning
optimizes for specific performance while leaving enough computational resources to adequately
perform all tasks. For example, athletes and soldiers may excel at tasks that require scene
integration, strategy, and physical or reflex capabilities, whereas scientists need not
excel in these domains and may instead showcase exceptional performance in logic and
reasoning, or artists in fine-tuned motor movements. Though each profession can learn the
other's skill set, humans prioritize their performance based on high-value objectives that align with
their goals.
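The allocation just described can be caricatured in a few lines (a hedged analogy of our own; the task values and the shared learning budget are hypothetical quantities, not taken from the literature): a fixed update budget is divided across tasks in proportion to goal value, so the high-value objective is mastered while the rest reach only adequate performance.

```python
import numpy as np

# Hypothetical goal values: one high-value objective, two moderate ones.
values = np.array([10.0, 1.0, 1.0])
weights = values / values.sum()   # share of the learning budget per task

theta = np.zeros(3)               # one "skill" parameter per task
targets = np.ones(3)              # perfect performance on each task
budget = 0.3                      # total step size shared across tasks

for _ in range(50):
    grad = 2.0 * (theta - targets)      # gradient of squared error per task
    theta -= budget * weights * grad    # value-weighted update

loss = (theta - targets) ** 2
print(loss)  # high-value task essentially mastered; others adequate, not optimal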
We believe this methodology of maximizing an architecture's learning by creating general,
goal-directed learners rather than overtrained, task-specific agents is the key to the emergence of
more intelligent systems. In this paper, we use the brain as inspiration to identify the key
properties that we believe are essential to higher-order cognition. The next section of the
introduction teases apart components of brain function that could address notable limitations seen
in current AI tools, while the last section of the introduction provides an overarching summary
of the framework. Section two then addresses the key properties of brain function that we believe
lead to intelligence. These are communicated as the ten commandments, and we believe
intelligence emerges not from any individual commandment but rather from an all-encompassing
system whose whole is greater than the sum of its parts. Lastly, section three addresses how these
commandments can be assembled into a framework for the development of AI systems that
transition from statistical superiority to general intelligence. Though this paper is largely
addressing the assembly of components that may lead to autonomous AI akin to natural
intelligence, we believe that the individual commandments in themselves can be beneficial for
improving a myriad of AI systems and tools where the end goal is not an all-encompassing system.
Through the presentation of the commandments and the framework, we believe models can be
scaled up or down as needed.
1.1 Teasing the Brain Apart: The Paradox of Intelligence
Numerous deep learning models have surpassed human performance on tasks on which they
have been intensively trained (Schrittwieser et al., 2020, Silver et al., 2018, Berner et al., 2019,
Vinyals et al., 2019). This has led some to even argue that these models are akin to humans in
the manner in which they abstract the world and perform a strategic look ahead (Cross et al.,
2021, Buckner, 2018). Though this may or may not be true, it can be agreed upon that there exists
a notable list of sizable limitations that prevent AI tools from demonstrating the fluid intelligence
that makes humans the exceptional general learners they are. Though there exist numerous
issues in AI, we believe the most notable limitations are: 1) long training times (Schrittwieser et
al., 2020, Chowdhery et al., 2022, Berner et al., 2019); 2) the inability to chart a novel path to
better guide learning; and 3) the inability to generalize to new or increasingly complex, uncertain,
and rapidly changing domains (Poggio et al., 2019, Geirhos et al., 2020). The scheme of learning
and retaining task-specific data, which makes deep networks so successful, also limits them by
forcing networks to use a computationally intensive, data-driven tabula rasa approach rather than
allowing models to build upon what they know. This rigidity has not only led to long training
times but, in complex domains, has also prompted daunting investigations into the trustworthiness
and reliability of these networks and the understanding they truly have of the tasks they are
undertaking (Geirhos et al. 2020, Hubinger et al., 2019, Koch et al., 2021).
On the other hand, it is believed that natural intelligences, be they humans or animals, are either
born with or soon after develop innate knowledge, which they can use to build increasingly complex
hierarchical representations of their environment that unfold over time. Though what this
knowledge is, and how it has been evolutionarily encoded, is debated, it suggests that natural
intelligence does not abide by a blank-slate, tabula rasa approach (Wellman & Gelman,
1992, Lake et al., 2017, Velickovic et al., 2021, Silva & Gombolay, 2021). Furthermore, it is
becoming increasingly well accepted that the brain takes in the world by breaking it down into its
smallest components, even though neuroscientists are unsure how inputs are processed to create
internal models. All that is known is that these smallest components, or concepts as they are often
called (van Kesteren et al., 2012; Gilboa & Marlatte, 2017; van Kesteren & Meeter, 2020), are
hierarchically combined with increasing complexity to generate an internal model of the
environment. It has been widely hypothesized that the brain is able to do so by organizing
information into global gradients of abstraction. These gradients, together with a relational
memory system encompassing processes such as maintenance, gating, reinforcement learning, and
memory, continually update, store, recombine, and recall information that unfolds and is
strengthened over time to form generalized structural knowledge (Whittington et al., 2019).
A secondary backbone of intelligence is the ability to contextualize the world into an associative
framework of increasing complexity and actionable goals and subgoals (O'Reilly et al., 2014,
O’Reilly, R. C. et al., 1999, Reynolds & O’Reilly, 2009, O’Reilly, 2020). As argued in Hubinger et
al., 2019 and Koch et al., 2021, current deep networks establish objectives to assign to an
optimizer but then utilize a different model tasked with carrying out actions. This results in an
optimizer with an assigned objective that in turn optimizes a model that can act in the real
world. This type of architecture not only leads to problems in matching goal alignment between
creator and creation but also results in problems of trust and explainability. For example, the
authors placed an agent in an environment where it had to find and collect keys that it must use
to open a chest and gain reward. The difference between the training and testing environments
was the frequency of the objects, with the training set having more chests than keys and the
testing having more keys than chests. It was found that this simple difference in environment was
enough to force the agent to learn a completely different strategy than intended, i.e., that finding
keys is more valuable than opening chests. This result is unsurprising, as the agent
valued keys as a terminal goal rather than a sub-goal. Thus, in testing, the agent not only collected
more keys than it could use but also repeatedly circled the area where inventory keys were
displayed on the screen. These unintended strategies, commonly seen when deep networks are
deployed in environments outside their training sets, have resulted in ever-growing concern
and increasing skepticism about the outputs of deep networks.
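The misgeneralization described above can be caricatured in a toy simulation (our own hedged sketch, not the authors' environment or code): reward comes only from opening chests, each costing one held key, and two policies that are indistinguishable in the key-scarce training environment come apart once keys outnumber chests.

```python
def run_episode(n_keys, n_chests, steps, policy):
    """Reward is chests opened; each chest consumes one held key."""
    keys_held, keys_left, chests_left, reward = 0, n_keys, n_chests, 0
    for _ in range(steps):
        action = policy(keys_held, keys_left, chests_left)
        if action == "key" and keys_left > 0:
            keys_left -= 1
            keys_held += 1
        elif action == "chest" and chests_left > 0 and keys_held > 0:
            chests_left -= 1
            keys_held -= 1
            reward += 1
    return reward

def key_maximizer(keys_held, keys_left, chests_left):
    # The learned proxy: keys treated as a terminal goal.
    return "key" if keys_left > 0 else "chest"

def chest_opener(keys_held, keys_left, chests_left):
    # The intended strategy: keys are only a sub-goal for opening chests.
    return "chest" if keys_held > 0 and chests_left > 0 else "key"

# Training (more chests than keys): both policies earn identical reward.
print(run_episode(3, 10, 10, key_maximizer), run_episode(3, 10, 10, chest_opener))
# Testing (more keys than chests): the proxy hoards keys and opens nothing.
print(run_episode(10, 3, 10, key_maximizer), run_episode(10, 3, 10, chest_opener))
```

Because the two policies are behaviorally identical during training, reward alone cannot distinguish the intended sub-goal from the learned terminal goal; only the distribution shift at test time reveals the difference.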