ON THE PLAUSIBILITY AND INEVITABILITY OF ARTIFICIAL GENERAL INTELLIGENCE (AGI): IT IS IN THE “ADJACENT POSSIBLE”

Sui Huang
Nov 15, 2023

(a speculative analysis with tools for thought)

“Thy array of works, unfathomably splendid, is glorious as on the first Day” — Johann W. Goethe, Faust I, Prologue in Heaven

ABSTRACT. That evolution has produced the human brain is stunning. But it is a fact. To some people, the brain’s complexity is so unfathomable that they resort to divine creation for explanation. Today, some people cannot fathom that the stunning capacities of ChatGPT are anything more than brute-force statistics and algorithmic mimicry. But consider this: To survive and reproduce, the guiding principle of biological evolution, we do not need the mental facility to compose symphonies, paint masterworks, or figure out the existence of the Higgs boson. Yet the brain has evolved these capacities, which lie beyond the task for which it has been optimized by natural selection — general intelligence. If evolution, relying on the simple algorithm of random variation + selection, and channeled by physical laws, has produced life on Earth and the human brain with capacities beyond its original biological “purpose” of survival, why can computational brute-force training, guided by architectural constraints, not produce ChatGPT with the stunning functionalities beyond word prediction that we now witness, and even more? AI with human-like cognitive capabilities may arrive sooner than you think. GPT3, an earlier large language model (LLM) on which ChatGPT is based, contains roughly the same number of (virtual) neurons as the brain: ~80 billion or more. Don’t be fooled by “hallucinations” and other flaws that critics point to, for these highly human-specific deficits are manifestations of a deeply human cognitive process. Errare humanum est! Studies of biocomplexity and Stuart Kauffman’s principle of the “Adjacent Possible” may explain the inevitable emergence of unnecessary, unintended, highly sophisticated functionality in a “ceaselessly creative” universe. What follows are theoretical considerations, with some speculation, that seek to ground the plausibility of AGI in first principles of the science of complex dynamical systems.

Most of us, experts and non-experts, find the many human-mind-like capacities of ChatGPT stunning. But I was surprised by the quick response of scholars who remain (or play to appear) unimpressed. Skeptics and naysayers, such as Noam Chomsky, Gary Marcus and others, have vehemently doubted that the achievements of large language models (LLMs) in AI, most prominently epitomized by ChatGPT, are a major step towards computational human cognition — or, to use the buzz word, Artificial GENERAL Intelligence, AGI. Instead, in their view, what we see is just a superficial similarity, mimicry or impersonation, made possible by statistical brute-force training of a uniquely vast neural network on a uniquely vast data set. If such skepticism is unwarranted, it will be either because of a lack of imagination or a lack of knowledge about complex systems (or both). There is not much one can do about the former, but the latter might benefit from some accessible formal explanation of one school of thought of how “self-organized” complex systems may have come into existence, which I hope to provide here — without using the e-word (“emergent property”).

I. FRAMING THE PROBLEM: WHENCE THE UNINTENDED FUNCTIONALITY?

I would like to offer a view from an encompassing category of thought by a non-AI expert who has studied biocomplexity for over 30 years, in vitro and in silico. From the broader perspective afforded by such research, I find the arrival of ChatGPT with its awe-inspiring performance, as well as its highly specific failures, not unexpected. The realization of AI with human-like mental faculty, as displayed by ChatGPT (and other LLM-based systems, or generative AI in other modalities that are not further discussed here), is a necessary result of the incessantly increasing complexity in an immanently creative universe that has produced the biosphere and human civilization, with the human brain at its pinnacle. Given that evolution, unguided by divine hand, led to works “unfathomably splendid”, as Goethe calls it, including the human mind, one can argue based on analogies that the technological evolution of ChatGPT-type AI could have been anticipated. But by the same argumentation, the next big thing, Artificial GENERAL Intelligence (AGI) with human-like mental faculty, will be inevitable (FIG. 1). It will arrive sooner than many expect, however poorly defined the ‘G’ in AGI still is.

FIGURE 1. The grand picture: A loose analogy of the biological and technological evolution of human(-like) intelligence. | CREDIT: Composed by author. Left panel, brain images and tree adapted from Suzana Herculano-Houzel (PNAS, 2012). Right panel, from Yang et al (arXiv, 2023)

Take the origin of life on Earth: unimaginable to many (who therefore invoke supernatural forces), but fully anticipated by those of us who study the natural laws behind the origination of complex systems. Or as Stuart Kauffman put it in At Home in the Universe: “We, the expected”.

While still working on the formalization of a detailed theory for the inevitability of AGI, but flabbergasted by the pushback from doubters, I would like to present a preliminary, crude summary of the key lines of reasoning behind it. It is speculative — but so is the naysayers’ assertion that ChatGPT is merely a statistical model regurgitating what humans have said.

To understand why ChatGPT develops all those originally unintended but remarkable, human-like mental faculties, despite having been trained only for the elementary task of word prediction (plus some supervised fine-tuning on top of that), we must start by taking a perspective that is the inverse of that commonly found in the AI commentariat. In addition to asking whether AI will one day become like humans, it helps to instead also ask: Does the human mind actually work like AI? Are natural and artificial intelligence based on the same logic?

As Paul Pallaghy has noted here, looking under the hood of both the human brain and the deep neural networks that power AI systems, and accepting some abstraction (to get irrelevant technical differences that only cloud our imagination out of the way), I dare to propose fascinating similarities in fundamental principles between the two. The LLM underlying GPT3 utilizes in silico neural networks (NN) that contain (according to estimates, and depending on the model) roughly 80 billion virtual neurons — pretty close to the 86 billion neurons estimated to be in the human brain. (And according to various rumors, GPT4 contains more than 100 billion neurons and 100 trillion synapses.)

Structural and numerical similarity is one thing — and may not matter that much. For, as biology keeps teaching us, a given task can be accomplished by an endless array of systems of distinct designs that all can solve that task. The octopus’s decentralized brain offers a nice illustration of the principle that complex behaviors with similar objectives can be produced by entirely different types of neural anatomies. Of importance in our discourse is not the “be” but the “becoming”: the relative similarity between deep NNs and the human central nervous system with respect to the very process of wiring the connections between the neurons. During millions of years of evolution this process has created the extant wiring diagram of the adult brain that is the material basis for acquiring, without much additional tinkering, all the unfathomable capabilities of the human mind.

The naysayers point to the fact that all ChatGPT does is use clever statistics, learned on a vast corpus of human-generated text available on the internet, to perform the well-defined, elementary task of determining the probability of the next word in a sentence, one by one, in a context-sensitive manner, and to complete the sentence such that it sounds as if produced by a human. They list typical shortcomings, such as factual errors or nonsensical statements despite correct grammar and coherent phrasing. These “hallucinations” have become the epitome of critique against AI, even ridicule. (More precisely, these are not hallucinations but rather confabulations = producing a coherent narrative to overcome gaps in knowledge of the truth).

THE CENTRAL QUESTION. Instead of the glass-is-half-EMPTY view centered around the apparent shortcomings, those of us who have expected ChatGPT all along take a glass-is-half-FULL attitude, appreciating the enthralling capacities given a rather simplistic scheme of learning. This vantage point allows us to pose the following elementary question:

How can a system that has been designed merely to use statistical pattern analysis to guess the next word in an unfinished sentence end up with apparently human mind-like facility (however imperfect the latter still is)?

More concretely: How can LLMs, without being explicitly taught to do so, become proficient in writing essays, pass the Medical College Admission Test (MCAT), summarize scientific papers, compose poems and songs, and perform computations and even some minimal logical reasoning, to a degree that in countless specific use cases exceeds the performance of most humans?

A CENTRAL DISCREPANCY. To argue that AI will never achieve human cognition, the glass-is-half-EMPTY community contends that the training of deep NNs wires the virtual neurons in a way that only optimizes for solving an elementary, statistically defined task. And the optimized capacity to solve the task of sentence completion, so goes their argument, vastly differs from the unfathomable functionalities of human cognition that compose symphonies or figure out evolution or quantum phenomena.

But it is precisely this discrepancy that we need to explain to be able to envision the arrival of AGI: the discrepancy between the elementary task for which a product has been designed (trained), and the far superior but unintended functionality that the finished product possesses — so to speak, a by-product that we get “for free”.

Sure, if you design a vehicle for driving reliably on the road, the product will not be a flying car. But with AI, or with the evolution of the biosphere, the training (or natural selection, for that matter) belongs to a completely distinct class of the “process of becoming” than industrial design, as we will see.

A lack of imagination similar to that which plagues the AI-naysayers also underlies some people’s inability to fathom the emergence of life and complex organisms on Earth, “the array of works, unfathomably splendid”. This has given us creationism and Intelligent Design!

Superficially, the discrepancy between the design goal and any potential capability that far exceeds the former can be used to articulate skepticism about AI. And anthropomorphizing ChatGPT only to expose its deficits with respect to acting like humans is a self-defeating argumentation logic. (In fact, anthropomorphizing may be important, as I will discuss at the very end.)

ON BIOLOGICAL EVOLUTION (A PRIMER FOR NON-BIOLOGISTS). The AI skeptics who are scientists and engineers, and thus, I guess, “believe” in evolution, have no issue with the natural appearance of the diverse and complex organismal functionality on Earth, including the human brain. The latter is a product of evolution and possesses a sophistication that lies beyond the elementary task of maximizing reproductive fitness, the genetic algorithm that drove organismal evolution. Let’s recapitulate how evolution works:

Current mainstream thinking in evolutionary biology, still largely correct (with many caveats), is that an inheritable trait of an organism has evolved because it was “selected for” by nature. Selection occurs because said trait, once generated by a random genetic mutation, happens to improve the reproductive fitness (survival, mating, survival of offspring, etc., which yields more offspring) of the organism that carries said mutation. In modern terms of molecular biology, a mutation essentially randomly rewires a local connection in the vast genetic network, a network of biomolecular reactions that is encoded by the organism’s genome (the inheritable information storage) and that runs organismal development (ontogenesis) and the operations of that organism. Thus, “climbing the fitness landscape” in search of the “fitness peak”, which in biological evolution maximizes fitness by (re)wiring said genetic network, is roughly equivalent to gradient descent in deep learning, which minimizes the error function by rewiring the artificial NN. The result is a fixed wiring diagram (determined by the parameters) that connects the genes, or the neurons, respectively, in some manner such that the network, when it “runs the system”, minimizes the fitness deficit or the loss function, respectively. The outcome is the evolved genetic network, or the pretrained neural network.
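To make this analogy concrete, here is a minimal toy sketch (my own illustration in Python; the one-parameter “landscape”, step sizes and iteration counts are arbitrary assumptions, not anything from the article): random variation plus selection climbs the same toy landscape that gradient descent walks down, and both end up near the same optimum.

```python
import random

# Toy "landscape": fitness peak / loss minimum at w = 3 (arbitrary choice for illustration)
def loss(w):          # deep-learning view: an error to be minimized
    return (w - 3.0) ** 2

def fitness(w):       # evolutionary view: the same quantity, sign-flipped
    return -loss(w)

# (a) Biological evolution as hill climbing: random variation + selection
w = 0.0
for _ in range(1000):
    mutant = w + random.gauss(0, 0.1)      # a random "mutation" rewires the system slightly
    if fitness(mutant) > fitness(w):       # selection keeps the fitter variant
        w = mutant
print("evolved parameter:", round(w, 3))

# (b) Gradient descent: follow the slope of the loss function directly
w = 0.0
for _ in range(1000):
    grad = 2 * (w - 3.0)                   # analytic gradient of the toy loss
    w -= 0.05 * grad                       # take a small step downhill
print("trained parameter:", round(w, 3))
```

The only point of this juxtaposition is that both procedures iteratively rewire parameters under a single scalar criterion; everything else about the two processes differs.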

To more concretely articulate the question of the discrepancy between elementary task and unintended functionality, we can again use biological evolution, whose results we know, as an aid for thought. We can then ask:

● How can the nervous system, a neuronal network, selected by evolution only for the elementary task of helping the gametes (sperm or eggs) find their mating partner to continue the immortal germline by enabling the mortal soma to move, find/attract partners, replenish energy, fend off predators, etc. — how can such a system also evolve a brain with the wonderful ability to create music, art, poetry, etc. that have no function in survival?

● How can the genome, a set of genes that form a biomolecular network, selected by evolution for the elementary task of storing and transferring to the next generation information on how to build and operate the soma — how can such a system also evolve the capacity to encode and implement the development of vastly complex organisms, including the human brain?

The core principle behind such “self-complexification”, a ceaseless process immanent to the universe ever since the Big Bang, that we must explain is the following: Unfathomably stunning, unintended, apparently non-essential, highly diverse and organized, i.e., “sophisticated” functionality comes into existence as the inevitable byproduct of the process that governs the evolution of systems that only optimize (select) for performing a set of much simpler elementary tasks. These systems are composed of a complex network of repeated elements (neurons, genes) wired together by physical/functional interactions (synapses, gene regulatory reactions) in some pattern.

ON WIRING THE BRAIN (A PRIMER FOR NON-BIOLOGISTS). Above I suggested an inverse perspective on AI: In view of the enthralling human-like faculties of ChatGPT, could it be that human cognition actually relies on a similar type of primitive statistical process that AI uses? I also suggested that in addition to the roughly similar size and class of hardware, what matters is the similarity of the process of wiring the human brain by evolution (for the entire species) and by development (for a given individual). For non-biologists this process can be summarized as follows (and will look familiar to the AI engineer):

(i) Brains evolved over millions of years by iterative selection for better and better structural organization that facilitates elementary survival tasks. (ii) At a coarse-granular level, elementary synaptic connections between neurons are molecularly hard-wired, that is, encoded by the genes and unfolded via cellular development that itself is also largely governed by the genetic network; at a finer-granular level, there is rewiring going on during embryonic and fetal brain development which, despite susceptibility to some environmental influence, is mostly orchestrated by intrinsic genetic and cellular programs. (iii) Finally, synaptic transmission strengths (“weights”) are further fine-tuned during upbringing, when children learn to align with moral values and societal norms, and when they learn to speak and acquire knowledge of the world. (iv) Thus, at all phases humans learn by explicit instruction (domain-specific education) and implicit environmental influence (repetition, imitation and reinforcement, e.g., by culture).

II. THE THREE PHASES IN THE BECOMING OF HUMAN COGNITION AND CHAT-GPT

Based on the above biological premises we can now engage in a higher category of thought than the usual deliberations by the computational, cognitive and linguistic disciplines: We attempt a juxtaposition of the becoming of the adult brain/mind of a learned human being against the process that led to ChatGPT. For this comparison we can divide the coming into existence of human cognitive function broadly into three phases that cover separate time scales (FIG. 2):

[A] Evolution of the human brain by natural selection over millions of years that constructed its basic anatomical structure with a primordial neuronal wiring diagram, encoded in the genome and programmed for “inborn” instinctive behaviors.

[B] Brain development in fetus and infants by neurogenesis, axonogenesis and synaptic selection, etc. over years that translates genetic information into the material existence of the brain, thus unfolding the evolved structures needed for brain function.

[C] Upbringing and formal education of children into adulthood at which point most synaptogenesis ceases, over decades. (This phase is linked to the much longer time scale of cultural evolution, since a source of the functional sophistication it promotes is the collective learning of humans over centuries which creates a culture that in turn influences education).

FIGURE 2. CRUDE COMPARISON OF PHASES OF BRAIN/ChatGPT EVOLUTION AND DEVELOPMENT. The three major phases [A], [B] and [C] in these two processes allow us to entertain analogies that are crude but offer a framework for thought. Phase [A] is the evolution (phylogeny) of the basic architectures as shown in FIG. 1, resulting in an ecosystem of various types of brains (LEFT) or LLMs (RIGHT). But for ChatGPT this phase also covers the specific LLM pre-training of one individual system, a process that overlaps with the individual development (ontogeny) of a specific human being. This Phase [B] is critical only in humans, as it implements for one instance (an individual) the wiring diagram of the brain that was acquired during the brain evolution of the entire species in Phase [A] and is stored in the genome. Phase [C] finally encompasses the explicit formal acquisition of factual knowledge, reasoning skills and ethical behaviors. Here the human is exposed to only ~10 million words, whereas the learning on words for LLMs occurs in the earlier Phase [A], on a much larger corpus, and contributes to their evolution. | CREDIT: ‘Human evolution’ (top left inset) from the cover of the 2006 edition of “The Third Chimpanzee” by J. Diamond, cited in the text. Other illustrations by author

COMMONALITIES. These three phases in the becoming of the human brain, both as a class of systems and for an instance in an individual human, are compressed in the development of ChatGPT. But otherwise, roughly equivalent phases can be identified, notably for the two phases [A] and [C] (horizontal double-arrows in FIG. 2).

The lengthy, energy-intensive pretraining on the corpus of internet text, which took months and enormous amounts of energy and ended up producing GPT and other LLMs, corresponds in great part to the evolution of the brain’s basic wiring diagram over millions of years of natural selection by exposure to environmental and social pressure — including the communication capacities needed to build stature and hence mating opportunities in a society. This is Phase [A] in FIG. 2 above. The result in both cases is the pretrained LLM or the human brain, respectively, specified by the structure and strengths of neuronal interactions. In LLMs, these are captured by the hundreds of billions of parameters, a number often used to brag about a particular LLM’s prowess.

Phase [B] in human brain development is essentially the unfolding of the evolutionarily learned (“pretrained”) structure, encoded in the genome, to form the physical brain with its basic wiring — and it has no direct equivalent in LLMs, in which the wiring is both learned and realized in the pretraining. Note that unlike Phase [A], which pertains to the class of human brains, development of the individual brain, Phase [B], pertains to an individual instance of a person and is also susceptible to individual-level influences — accounting for the fact that genetically identical twins have distinct (albeit on average more similar) personalities. In humans, this basic wiring provides reflexes and intuitions, and the ability to learn quickly during infancy, e.g. the elementary capability of one-shot learning widely seen in higher mammals (future avoidance of a specific danger after a single exposure, such as touching a hot stove).

Finally, the fine-tuning of LLMs with supervised learning and, notably, reinforcement learning from human feedback (RLHF), which gave ChatGPT its edge over the bare-bones LLMs and allows us to interact via prompts, may correspond to postnatal brain development: the upbringing of children and the acquisition of factual knowledge of the world and of tools for reasoning in schools and higher education (Phase [C] in the figure) — and, importantly, moral values, as discussed at the end.

In both the human mind and ChatGPT, the time and energy cost that goes into Phase [A] vastly exceeds that invested in the later Phase [C].

DIFFERENCES. This comparison of the human brain with LLMs is crude and contains countless inadequacies. It will provoke a flurry of pushback that will highlight more differences — but hopefully also stimulate further thinking. Of note is the difference with respect to the phase and volume of exposure to human language: While humans learn by hearing up to 10 million words within the first two years of life (Phase [C]), LLMs learn on hundreds of millions of words early on, during the pre-training when the network is being wired by exposure to human-generated text (vertical text in boxes of FIG. 2). Thus, this process of LLM training may not so much serve the learning of actual knowledge content but rather of general rules — thus recapitulating the evolution of the human brain’s primordial architecture by natural selection in prehistoric hominid societies (Phase [A]). Here, natural selection favors brains with an increasing ability to produce and perceive vocalizations, thus verbal communication, much as in the evolution of songs in songbirds. It is in this sense that the pretraining of LLMs on the corpus of human-produced text may in some sense correspond to the biological evolution of the human brain, Phase [A].

In other words, while it is tempting to attribute the acquisition of factual knowledge to the pretraining on the vast amount of human-generated text, the pre-training (Phase [A]) may rather serve the evolution of elementary functionality that enables (future) learning, rather than the cramming of actual facts about the world. This may explain the “hallucinations”. Therefore, in ChatGPT (paid version), factual knowledge of the world must be explicitly injected in a structured manner after the pretraining, e.g., via plug-ins, such as that from Wolfram Alpha, a procedure that would align with human formal education, Phase [C].

III. IF ONLY CHAT-GPT WERE SUBJECTED TO HUMAN “UPBRINGING” AND EDUCATION…

If in LLMs some but not all knowledge acquisition happens already in the evolutionary Phase [A] during the pretraining, this knowledge would be rather disorganized. This assertion is warranted if one considers that the pretraining on text from the internet, unlike learning at school, does not follow rigorous didactical and pedagogical principles that maximize the efficiency and lasting impact of learning. By contrast, when teaching students, teachers also pay attention to the ontology of knowledge (groups, hierarchies), natural relationships and temporal sequences. They teach elementary principles before advanced topics: arithmetic before algebra before calculus, historical events in chronological order, anatomy and physiology before pathology, etc. GPT learns content in a random order, unsystematically. This impedes the construction of the mental concepts about a topic that the symbolists so much desire. Imagine learning history by ripping out the pages of a history book and reading them in random order. Or learning zoology without an internalized notion of taxonomy. This may be the reason why ChatGPT would answer ‘Peregrine falcon’ when asked what the fastest mammal is.

But even if one denies LLMs any “understanding” of what they say because of the aforementioned lack of mental representations that might have come from more structured learning, it may well be that with ChatGPT’s apparent mastery of language, even by mere mimicry, comes the internalization of some semantic structure that even unsystematic learning inevitably affords to some extent: in the same way as children learn languages — statistically, without explicit grammar. Such implicit semantic notions underlie some of our thinking and shape our perception of reality. Our mother tongue eo ipso determines our thinking, as Benjamin Lee Whorf suggested in the 1940s and as masterfully discussed by Guy Deutscher (summarized here).

Therefore, to those poking fun at the deficits of current LLM: Wait until we subject pre-training and ensuing fine-tuning of ChatGPT to a more structured approach that considers the best of our knowledge of pedagogy and didactics that we have gained from teaching humans for thousands of years. What we currently see with AI is more akin to force-feeding books in random order to the developing intellect of students and to promoting rote memorization in a process devoid of pedagogical finesse.

Until we subject pre-training and fine-tuning of ChatGPT to actual human-like upbringing that follows human tradition and principles of developmental psychology and professional education, ChatGPT will appear a bit like an incredibly smart, self-taught but unschooled person. Of course, such a person will be quite odd but might possess amazing capabilities in specific, narrow ways. Or alternatively, current ChatGPT may be more aptly compared to a “book-smart” person, even to an “idiot savant”. Not bad for a start.

With these shortcomings of ChatGPT in mind, we can more readily accept its erroneous answers, notably the “hallucinations” or, more correctly, confabulations. They are minor hiccups due to the subpar construction of the LLM by unsystematic pretraining, which corresponds to both processes of learning in humans compressed into one imperfectly and hastily implemented procedure: evolution of primordial mental functions for the entire species (Phase [A]) and formal education for individuals (Phase [C]). Hallucinations are therefore not signs of a fundamental, intrinsic limitation of AI. And don’t forget that “hallucinations” are profound features of the human mind and that madness has been associated with geniuses, as proverbially epitomized by van Gogh. Ironically, it is the very peculiar type of flaws of current ChatGPT used by naysayers to ridicule AI that points to sparks of human-like cognition. “Errare humanum est”…

IV. THREE CHARACTERISTICS OF “OVERACHIEVING” SYSTEMS

Let’s get back to the question of the discrepancy between the intended functionality for which a system has been trained or has evolved, and the resulting stunning, unintended generative capacities far beyond that. What are the characteristics of such “overachieving systems”, as we shall call them, that are capable of more than the elementary task that the pre-training has prepared them for?

We will introduce in a highly condensed manner a concept of complex systems that may offer an explanation for this discrepancy. In doing so we will explain three characteristics of such overachieving systems, marked by {..} in the text:

{1} OVERPARAMETERIZATION OF THE UNDERLYING NETWORK

{2} RARE, YET ROBUST SURGE OF NOVEL, COMPLEX FUNCTIONALITY

{3} SPARKS OF FUTURE CAPABILITIES SEEN BUT NOT USED IN CURRENT SYSTEM (“PRE-ADAPTATION”)

In the following sections these three characteristics will receive an explanation in the context of developing our hypothesis.

V. THE QUASI-INFINITE CONFIGURATION SPACE

The first characteristic {1} of systems that are overachieving because they exhibit faculties not intended in their training is the following: The physical basis of overachieving systems is a vast network of interacting elements (genes in genomes, neuronal cells in the brain, or perceptrons in deep NNs). The network system as such appears overparameterized and yet it does not overfit in performing the elementary tasks for which it has originally been optimized.

Overparameterization (roughly) refers to a computational model’s property of having many more elements (the variables representing activation of genes, neurons, perceptrons, etc.) and interactions between them (described by parameters) than are needed to perform the intended task, e.g., predicting Y given X as an input. The parameter values specify the wiring diagram of the system that connects these network elements, following some rules of construction. For instance, in a genetic or neuronal network, the parameters determine the modality and strength of genetic or synaptic interactions, respectively. Overfitting means that idiosyncratic irregularities in the input (training) data are falsely taken by the model as essential, hence generalizable, features, which results in the trained model failing to generalize to future cases not contained in the training data.
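As a concrete (and deliberately tiny) illustration of these two terms, not of LLMs themselves, the following sketch fits a model with as many coefficients as data points and compares it with a two-parameter fit; the data, noise level and random seed are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: true relation y = 2x, observed with noise
x = np.linspace(-1, 1, 10)
y_train = 2 * x + rng.normal(0, 0.2, 10)   # training sample
y_test  = 2 * x + rng.normal(0, 0.2, 10)   # a fresh sample from the same process

for degree in (1, 9):                       # 2 parameters vs. 10 parameters for 10 points
    coeffs = np.polyfit(x, y_train, degree)
    fit = np.polyval(coeffs, x)
    print(f"degree {degree}: "
          f"train MSE {np.mean((fit - y_train) ** 2):.4f}, "
          f"test MSE {np.mean((fit - y_test) ** 2):.4f}")

# The degree-9 model (as many coefficients as data points) reproduces the training
# noise exactly (train MSE ~ 0) but does worse on the fresh sample: it has mistaken
# idiosyncratic irregularities of the training data for generalizable features.
```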

Why exactly overparameterization in biological networks and in deep learning NNs does not lead to overfitting is an interesting question that is not addressed here. But we will come back to one aspect of this question after we introduce the formalism of dynamical systems and the configuration space.

DYNAMICAL SYSTEMS FORMALISM. Understanding my argument for the plausibility and inevitability of AGI entails a radically different but natural view of neural networks. It starts with a view that comes from the theory of dynamical systems — a perspective usually not taken by AI researchers. In this view, we generalize so much as to forget the organization of neurons in layers and the flow of activity from “left (input) to right (output)”. We view a biological network as a whole, a network in which the 80 billion or so neurons of a brain, or the 20,000 genes in the genetic network of a genome, are the elements (nodes) that are connected by fixed interactions (edges) between neurons or genes. An interaction dictates whether and how one node influences another one’s activity. Thus, mathematically, such biological networks with predetermined architecture are graphs. They consist of one giant component (no node is detached, existing by itself without interaction); yet the network is typically far from being a complete graph, i.e., the number of actual edges (synaptic or gene regulatory interactions) is far smaller than the maximal possible number of connections among the N nodes (neurons, genes). Thus, the network is said to be sparsely connected.

We are then interested in the global state of the network, defined by the collective activation status x_i of each of the N elements (neurons or genes in a neuronal or in a genomic network, respectively) i at a given time. Thus, we introduce the configuration S of a system (network) composed of the N elements that influence the activities of one another via the network of interactions (FIG. 3). In the simplest, cartoonish model, the activity x_i of every neuron (or gene) can be ON (=active=1) or OFF (=inactive=0) at a given time. A configuration is then the pattern comprised of such activity (1 or 0) at each element i across the entire network. Thus, a given configuration S of the brain or of the artificial neural network behind GPT3 is a string like [1001100…] that is N = 80 billion positions long. Extensive theoretical work has been done on this class of vast, randomly connected dynamical networks, notably in the form of the discrete-valued random Boolean networks introduced by Stuart Kauffman in the 1970s.
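For readers who want to see this class of models in action, here is a minimal random Boolean network in the spirit of Kauffman’s NK models (my own sketch; N = 12 nodes with K = 2 inputs each are arbitrary toy choices, a far cry from 80 billion): a fixed wiring diagram plus fixed Boolean rules send any initial configuration S_0 along a deterministic trajectory that eventually falls into a recurring cycle, an attractor.

```python
import random

random.seed(1)
N, K = 12, 2   # N nodes, each regulated by K randomly chosen inputs

# Fixed wiring diagram and fixed Boolean rules (the analogue of a pretrained/evolved network)
inputs = [random.sample(range(N), K) for _ in range(N)]
rules = [{(a, b): random.randint(0, 1) for a in (0, 1) for b in (0, 1)} for _ in range(N)]

def step(state):
    """One synchronous update of the whole configuration S."""
    return tuple(rules[i][(state[inputs[i][0]], state[inputs[i][1]])] for i in range(N))

# Run the network from a random initial configuration S_0
state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
t = 0
while state not in seen:          # iterate until a configuration repeats -> attractor reached
    seen[state] = t
    state = step(state)
    t += 1
print(f"attractor of length {t - seen[state]} reached after {seen[state]} transient steps")
```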

With the N = 80 billion (80E9) neurons in the brain or in the neural network of GPT3, there thus are 2^(80E9) = 10^[(80E9)·log_10(2)] ≈ 10²⁴⁰⁰⁰⁰⁰⁰⁰⁰⁰ = 10^(24 billion) ON/OFF configurations of S! They jointly form a “hyper-astronomic space” of all theoretically possible configurations (with every configuration being a point in this space). The number is so immense that the vast majority of configurations are never ever realized. For comparison: there are “only” about 10⁸⁰ atoms in the observable universe. It is thus fair to consider the configuration space of S to be “quasi-infinite”. This will become important later.
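A quick back-of-the-envelope check of this arithmetic (nothing assumed beyond N = 80 billion binary elements):

```python
import math

N = 80e9                                   # 80 billion binary (ON/OFF) elements
log10_configs = N * math.log10(2)          # log10(2^N)
print(f"2^N is about 10^({log10_configs:.2e})")   # ~10^(2.4e10), i.e. 10^(24 billion)
print("atoms in the observable universe: about 10^80")
```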

FIGURE 3. Going from the deep neural network at the core of LLM architectures to the concept of a configuration S, or the state vector S, of a dynamical system, whose dynamics can be formalized as an N-dimensional ordinary differential equation (RIGHT). This is a simplified, generic scheme in which a multilayer neural network (blue box, CENTER) that performs the actual “machine thinking” symbolically represents all the neural network sublayers in a Transformer (red box, LEFT). The neural network is the rough equivalent of the brain’s cortical neuronal network (BOTTOM) — both contain roughly 80–100 billion neural elements. This diagram is continued in FIG. 4. | CREDIT: composed by author; cortex structure from D. Haines 2007 (Fig. 11, a drawing by Cajal)

More formally, as we ignore the organization of the deep NN in various layers and treat the entire set of neural networks in an LLM as one dynamical network (blue box in FIG. 3), we can characterize the system configuration S (FIG. 3) by the state vector [x_1, x_2, …, x_i, …, x_N] at time t, where the value x_i(t) of each element represents the activity (e.g., ON, OFF) of neuron i.
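Written out explicitly (a generic formalization of the notation used here, not the actual Transformer equations; the update functions f_i, F_i and the parameter set θ are placeholders for whatever the wiring diagram specifies):

```latex
% state vector of the whole network at time t
S(t) = \big[x_1(t),\ x_2(t),\ \dots,\ x_i(t),\ \dots,\ x_N(t)\big]

% discrete (Boolean) update imposed by the fixed wiring diagram,
% as in random Boolean networks:
x_i(t+1) = f_i\big(x_{j_1}(t), \dots, x_{j_{K_i}}(t)\big), \qquad i = 1, \dots, N

% continuous counterpart: the N-dimensional ODE alluded to in FIG. 3
\frac{dx_i}{dt} = F_i\big(x_1, \dots, x_N;\ \theta\big)
```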

The interactions between the x_i collectively define the wiring diagram that is characteristic of the network. The mathematical representation of its structure and interaction modalities establishes the set of parameters of the network, which famously number in the hundreds of billions to trillions for current LLMs.

In the case of a pre-trained artificial NN (with a given, fixed wiring diagram), every time the network is “run” (executing an input), the configuration S of this entire network is altered along a specific sequence of configurations dictated by the network’s (unchanging) wiring diagram. The network thus imposes regulatory constraints on how S as a whole changes (is updated) in time while obeying the regulatory interactions (top right in FIG. 3). Thus, the change of S describes an “allowed” trajectory in configuration space (see now FIG. 4) that moves towards more “stable” configurations that better satisfy the regulatory constraints. In the multi-layer NN view, the trajectory of S continues until a subset of network components (e.g., x_k, x_{k+1}, …, x_N; green in FIG. 3) that form the “classifier construction layer(s)” (on the right) displays an activity pattern deemed to be useful for the elementary task for which the NN is being trained, e.g., classification or word prediction. Note that, by contrast, in the systems dynamics view these output neurons constitute only a tiny fraction of the 80 billion neurons in the entire network.

[ SIDE NOTE: One idea is that there is a risk of overfitting only if the classifier construction layer is overparameterized, but not if the vast number of the other neurons of the network, which mostly serve feature transformation, are overparameterized. Thus, the concern about overparameterization may not pertain to the network as a whole.]

FIGURE 4. A schematic of space of system configurations S: The configuration S of activities of all the nodes of a network maps to one point in the N-dimensional configuration space (blue, schematically shown as a hilly landscape). Only a subset of all possible configurations S (green ellipsoid) is ever encountered when the deep NN (LEFT, from FIG. 3) is “run” to perform a task. Viewed through the lens of system dynamics, in this process the network state S describes a trajectory of allowed successions of configurations (green curved arrow). The small number of these realized or used configurations (Sr) in the green ellipsoid, and the unused configurations (Su) in the vast surrounding regions, shown in blue, occupy disjoint domains in the quasi-infinite space of configurations S. | CREDIT: drawing by author

[ SIDE NOTE: An allowed trajectory of S may continue until it converges to a stationary stable configuration and stays there because that configuration has satisfied all regulatory rules imposed by the entire network. Such a state represents an attractor state S* and provides homeostatic memory against perturbations, because a change of activity of a network element, △x_i (a perturbation on element i), will not create a “driving force” that pushes the system configuration away from S*. Instead, obeying the regulatory rules imparted by the network interactions on x_i, the system will return to S*. In genetic networks, such attractor configurations define the stable gene expression patterns of genomes that produce the biologically meaningful high-dimensional phenotypes — see below. In neural networks, attractors are the basis of the classic content-addressable memory of Hopfield networks. But such systems have been surpassed in scalability and trainability by modern deep learning NNs, and with them the notion of network dynamics has been lost. Here, however, we use the concepts of network dynamics in a different way, for “meta-reasoning” about AI. ]

A VAST SPACE OF NEVER-OCCUPIED CONFIGURATIONS S. With the above definitions we can move to another idea. During pretraining of the artificial NN that later runs the LLM, the connections and all the associated parameters are established. Similarly, during evolution of the brain, the basic synaptic wiring diagram of the human brain is established, and during evolution of the genome, the gene regulatory interactions are specified. In a first, simplified view, the resulting parametrized neural network is fixed (does not change its wiring diagram) as it governs all the trajectories manifesting the changes of the system configuration S when the system is “run” to solve an elementary task after being prompted with an initial configuration S_0. Equivalently, the genetic network is fixed when it is “run” and governs the developmental trajectories of an organism after initiation by the gene activation configuration S_0 in the fertilized egg.

But in this process, given the task and the particular network architecture, the system ever visits only a tiny, specified fraction of the unfathomably vast space that contains the 10²⁴⁰⁰⁰⁰⁰⁰⁰⁰⁰ theoretical N-dimensional configurations S. In other words, when solving the elementary tasks for which the networks have been trained/selected, the configuration of all the 80 billion neurons, or of all the genes in the genome, collectively passes through only a tiny set of particular “allowed” sequences of configurations S (defining a trajectory) in configuration space. These are the ever-realized configurations, which we shall call Sr and which comprise a small domain in configuration space (shown as the green ellipsoid, see FIG. 4).

Thus, not only do the neurons that constitute the classifier layer used to specify the output of the elementary task represent only a minuscule fraction of all the neurons (FIG. 3); the entire process of replying to the prompt, which sends the system configuration S along allowed trajectories, also visits only a minuscule fraction of the theoretically possible configurations S (FIG. 4).

With the notion of a small set of ever-visited configurations Sr used for the elementary task (green area in FIG. 4), we can now articulate the important converse: the vast majority of the quasi-infinite theoretical configuration space of S is never used by the pretrained or the evolved network. These unused configurations, hereafter called Su (blue region outside of the green ellipsoid domain in FIG. 4), are not encountered when the neural or genetic network is run to execute the task of sentence completion or of developing an organism, for which they have been optimized. Thus, the unused configurations Su vastly outnumber the used Sr. This result will be at the core of our explanation of the “Adjacent Possible” later.
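The split into Sr and Su can be made tangible on a network small enough to enumerate exhaustively (again my own toy sketch: an arbitrary random Boolean network of N = 10 nodes, with a handful of arbitrary initial configurations standing in for task-relevant prompts): the trajectories launched from those initial configurations visit only a small fraction of the 2^N possible configurations; everything else is Su.

```python
import random
from itertools import product

random.seed(2)
N, K = 10, 2                                   # small enough to enumerate all 2^10 = 1024 configurations

inputs = [random.sample(range(N), K) for _ in range(N)]
rules = [{(a, b): random.randint(0, 1) for a in (0, 1) for b in (0, 1)} for _ in range(N)]

def step(state):
    """One synchronous update of the whole configuration S under the fixed wiring diagram."""
    return tuple(rules[i][(state[inputs[i][0]], state[inputs[i][1]])] for i in range(N))

# A handful of "task-relevant" initial configurations S_0 (stand-ins for typical prompts)
initial_configs = [tuple(random.randint(0, 1) for _ in range(N)) for _ in range(5)]

realized = set()                               # Sr: every configuration actually visited
for s in initial_configs:
    while s not in realized:
        realized.add(s)
        s = step(s)

all_configs = set(product((0, 1), repeat=N))   # the full configuration space
unused = all_configs - realized                # Su: never visited from these initial conditions
print(f"realized Sr: {len(realized)} of {len(all_configs)} configurations "
      f"({100 * len(realized) / len(all_configs):.1f}%); unused Su: {len(unused)}")
```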

From the calculation above, if a network contains N = 80 billion nodes which can each take the activation value 1 or 0, there could be 10^(24 billion) configurations S over the entire LLM, with the new refinement of two domains of configurations:

number(S) = 10²⁴⁰⁰⁰⁰⁰⁰⁰⁰⁰ = number(Sr) + number(Su),

where number(Sr) << number(Su). Many of the unused configurations Su are “logically unreachable” when running a task, given the input by a prompt or an environmental signal that defines the initial “starting” configuration S_0, and given the network’s wiring diagram that determines the ensuing allowed trajectories of realized configurations Sr originating in S_0. Even with noisy (stochastic) activation of individual neurons in the brain or of genes in the genome, most of the ever-visited configurations in the network state succession dynamics starting from a meaningful (= task-relevant) set of initial configurations S_0 are confined to a tiny fraction of the configuration space. Thus, precisely because of the alleged overparameterization, there are many more configurations of activities of the billions of neurons or tens of thousands of genes than are actually used to perform the elementary task. Because these configurations are never visited, that is, excluded in the implementation, one may speculate that the notion of overparameterization of the model is irrelevant here and cannot exert its detrimental effect. Note that there is also an excess of attractor state configurations S*u that exist in the unrealized domain of S — this is important for later.

VI. THE ADJACENT POSSIBLE

PRELIMINARIES: THE UNUSED CONFIGURATIONS ARE GOVERNED BY THE SAME RULES. So far, we have only defined a formalism and encouraged the perspective offered by the study of complex dynamical systems — nothing new. But now, with the concepts of the configuration space of S and of allowed trajectories that represent the network-enforced succession of realized configurations Sr traversed during execution of a task, we can present a more abstract but pivotal argument:

All the unused configurations Su and their patterns of succession are the inevitable by-product of the same training of the very same network system that has produced the relatively small fraction of used configurations Sr. The trajectories of succession of Sr and Su exhibit patterns that are imparted by the same network of interactions. The constraints that the unused “excess” configurations Su obey as they change have been established by the same training that constrains the change of Sr for solving the elementary task.

In other words, the succession of the small number of network configurations Sr that are realized and that of the gigantic number of never-used configurations Su are both “encoded” by the very same network, be it of perceptrons in deep learning NNs, of neurons in the brain, or of genes in the genomic network — one network that imposes all the dynamical relationships of all configurations S, used or unused, after the training/evolution. There are plenty of reasons why the dynamics of Su ended up not being used for solving the elementary task: these configurations may not easily be reached by any of the trajectories descending from the initial configurations S_0 encountered in the training that encode typical input prompts associated with the intended elementary task (sentence completion), or they have become unreachable during training given the network parameters of the final pretrained model. A default possibility for banishing a configuration into the realm of the unused, Su, is that its succession trajectories converge to a configuration in which the subset of k nodes that contribute to the output layer (green nodes in FIG. 3) has an activity pattern with too big an error for the task, such that the pretraining has rewired the network to avoid trajectories that lead to such suboptimal patterns.

EXCURSION: HOW GENES COLLECTIVELY ESTABLISH COMPLEX PHENOTYPES. The genes in our genome regulate each other’s activity in a well-orchestrated manner specified by the genetic network that has been wired by evolution (= Phase [A] in FIG. 2). We now understand how these evolved genetic networks in cells orchestrate the collective activities of the 20,000 or so genes to produce robust cell behaviors that are biologically meaningful and readily observed. The parallels to how pretrained neural networks produce meaningful functionality through collective activity are apparent. Thus, to better understand the most central new idea that will be introduced next, let’s have a look at how genetic networks generate the biologically meaningful cell phenotypes. The configurations Sr are used by organisms to produce “coherent” (smooth, non-chaotic) successions of states, the trajectory S(t), leading to biologically meaningful stable gene activation configurations, the aforementioned attractor states (see SIDE NOTE in Section V). Roughly speaking, trajectories guide ontogenesis (development of the organism) towards the attractor states whose associated gene activation configurations S encode the adult phenotypes — the characteristic cell types of the body (e.g. a liver cell, a neuron, a skin cell) that are robust to perturbations of their molecular network. The stable attractor states S*r thus offer high-dimensional homeostasis “for free”: they maintain the exact gene activation configuration needed to encode said complex (multivariate) phenotype and protect it against perturbations of gene activities.

Now here is the new insight, first presented by Stuart Kauffman: stable attractors S* also exist in the unused domains of the configuration space, i.e., among the Su configurations. This is important. But what exactly are all these unused stable configurations S*u, emanating from the same genetic network that also generates the meaningful attractor states S*r used by the organism to implement meaningful activity patterns? We will discuss them later: unused attractor states S*u are cancerous states.

FIGURE 5. The Adjacent Possible. Building on the configuration space of FIG. 4 (in blue), the orange ellipsoids represent domains in the Adjacent Possible — adjacent to the domain of realized configurations (in green). The purple arrow indicates an instance of entry into an Adjacent Possible

NOW, THE REALLY CENTRAL, NEW IDEA. I would like to call interesting subregions in the vast space of all the unused configurations Su the “Adjacent Possible” (orange regions in FIG. 5), in honor of Stuart Kauffman.

In Kauffman’s theory, the Adjacent Possible represents domains in the unfathomably vast space of the not actualized possibilities (of configurations) of a system (blue regions in FIG. 5): That which is a “potentiality”, just adjacent to the actual (green region in FIG 5). Kauffman uses the Adjacent Possible to explain sudden surges of industrial innovation when an innovation event triggers the entry into the Adjacent Possible after a long phase of stagnation in that domain.

I now postulate that the appearance of overachieving systems, such as AGI, belongs to a class of events that constitute an entry into the Adjacent Possible of a system and thereby epitomize the origin of a new set of organized behaviors. In such events, new, unexpected functionality of higher sophistication than the intended elementary task comes into existence. Such events of realizing the Adjacent Possible include the origin of life, of human civilization, of economics, of the internet, of social media, of AI, in sum, many an industrial revolution. Let’s therefore discuss this process first, as a basis for explaining why such events are “rare and robust” — the characteristic listed above as the second hallmark of overachieving systems {2} — and how they pertain to the plausible arrival of AGI.

The central question is: Why are the un-realized configurations Su and their sequence of succession poised to produce apparently “coherent” and “ordered” behaviors that represent (or are readily converted into) new “meaningful” more sophisticated functionalities? Why is the Adjacent Possible the source of a self-propelling diversifying innovation? This is addressed in the next sections.

VII. ENTERING THE ADJACENT POSSIBLE

The realm of the unused configurations Su, including the Adjacent Possible of a complex system, is the inevitable byproduct of the genesis of said system by evolution or by training of a complex network that has produced the actual system behavior governed by sequences of system configurations Sr. The Adjacent Possible exists; it is a latent possibility, waiting to be entered (purple arrow in FIG. 5). Thus, realizing the “potentially existing but not actualized” simply means: to enter the Adjacent Possible. It may be a simple (accidental) event, like crossing the Rubicon into a pre-existing, unused land, landing on a single configuration Su. The apparent innovation is not a de novo construction of a new land. It is the conversion into the actual of a domain of the possible-but-not-realized that was adjacent, and hence “one step away”, from the actual (orange regions in FIG. 5). Complex systems are poised to stumble into their Adjacent Possible, and do so if conditions are right.

It is now important to remind ourselves (see PRELIMINARIES in the last section): the theoretical trajectories of Su in the Adjacent Possible, even if unused, follow constraints imparted by the same network that has been trained, or has evolved, to produce the successions of configurations Sr that deliver the useful functionality of solving elementary tasks.

The entry into the Adjacent Possible is harder to imagine than its mere existence. We cannot pre-state which part of the unused configuration space (blue in FIG. 5) will be accessed by a complex system that exists in the realm of the actual (green in FIG. 5) and thereby represents the effective Adjacent Possible waiting to be accessed (orange in FIG. 5). Entry into the realm of unused configurations Su can result from an incremental change in the wiring diagram of the system’s network during the ongoing evolution of a system (e.g., random genetic mutation or local network rewiring in the case of engineered systems). It does not require an obvious major material or energy inflow, such as a drastic growth of system size, but rather, the arrival of a particular set of permissive conditions. And perhaps most of the time, a transient foray into the Adjacent Possible has no consequences and remains unnoticed. But occasionally, occupying a new domain in the Adjacent Possible results in a self-propelling chain of discovering novel functionalities beyond that of the elementary task.

EXAMPLES OF ENTRY INTO THE ADJACENT POSSIBLE (see FIG. 6). Cases of realizing the Adjacent Possible, with the ensuing explosive growth of novel complex functionality, abound in the biosphere and in human civilization and economics. In economic innovation (Kauffman’s focus in developing the theory), entering a domain of the Adjacent Possible can create its own new Adjacent Possible, since the realm of the actual has been expanded by novel functions which themselves may be poised to trigger a new incorporation of nearby unused configurations into the actual. Innovation begets innovation. We have a robust self-fulfilling prophecy, or a chain reaction of “explosive” (as in “combinatorial explosion”), yet bounded, creative diversification. As Carlos Perez remarked on AGI, “it takes less and less effort to make exponential progress”. The heavy lifting has been done in the construction of the overparametrized system that has produced the meaningful sequences of system configurations for an elementary task, but, as a byproduct, also its unused Adjacent Possible that harbors hidden functionality.

The internet has been in the Adjacent Possible of the world of connected computers; social media has been in the Adjacent Possible of the World Wide Web. Rideshare, food delivery services and self-driving cars have been the realization of an Adjacent Possible of GPS on cell phones (FIG. 6). A new species created by a homeotic mutation (see later) may also represent the genetic actualization of an Adjacent Possible in morphospace. And to use Jared Diamond’s favorite example: complex human civilization and industrialization have been the Adjacent Possible of prehistoric hunter/gatherer and agricultural societies for hundreds of thousands of years, suddenly accessed, then driven by a self-propelling chain reaction of technological innovation. But whence the long stagnation?

FIGURE 6. Examples of entering the Adjacent Possible — including the idea that AGI is an Adjacent Possible.

RARE AND ROBUST EVENTS. These examples illustrate the second hallmark of overachieving systems {2}. It pertains to how the Adjacent Possible is entered and becomes part of the actual: On the one hand, such events are relatively rare, characterized by the apparent uniqueness of a sudden surge of complex functionality following a minimal, gradual (often unnoticeable) change in a system’s architecture. On the other hand, the explosive increase of “complexity”, and the typically self-propelling diversification of sophisticated, well-organized functionality, appears irreversible and resistant to perturbations, and thus very robust. It seems counterintuitive for a complex system or process to be both robust (readily replicated) and yet rare (requiring a particular, unlikely constellation): if a process is robust, it is more likely to happen and thus should not be rare; conversely, if an event is rare, it is because the process of its becoming is not robust. And yet entries into the Adjacent Possible are both rare and robust.

Why does rarity here not simply imply a statistical fluke, that is, a rare particular combination of chance events? Entry into an Adjacent Possible unleashes spurts of innovation “poised” to happen, yet which may not have happened for a long period of time. But unlike rare statistical outliers, if an event of accessing the Adjacent Possible happens, it benefits from the “pre-wired” trajectories in the space of unused system configurations. The prewiring is the result of the training that optimized the network for the elementary task and created the realized configurations Sr, even if in the end these occupy only a tiny subspace of the space of all configurations. But, as said, the trajectories of Su also benefit from this training — they are not operated by randomly wired networks.

A different, related imagery that may help some readers: the vast domain of unused configurations (blue in FIGs. 4, 5) can act in a similar way to the simpler, well-known excitable media: poised (prewired) to take off because (by chance) some of the system configurations Su appear in self-amplifying causal loops in the network. Such conditions allow the contingency of a minor local rewiring of a causal network — or, in our case, the neuronal or genetic network — to create a constellation that can trigger a self-propelling, self-organizing, often irreversible process, much like the everyday concept of “perfect storms”. Because such events unleash stored “energy” pre-configured to self-propel, they resist the “regression to the mean” by which statistical flukes or contingencies die out.

It is in the sense of such irreversible, hard-to-perturb self-amplification that an entry into an Adjacent Possible can be robust. The actualization of an Adjacent Possible with a tendency to give rise to “higher-level order” creates new mutual dependencies or, according to Kauffman, Kantian wholes: systems (an organism) whose existence depends on their parts (organs), whose existence in turn depends on the system. A kind of chicken-and-egg conundrum that transcends microscopic-macroscopic separations. It is because of the robustness of these new dependencies “clicking into place” that we have inevitable “order-for-free” in complex, overparametrized systems that have undergone a long pre-training (or evolution) for performing a simple task but, as a byproduct, have created vast uncharted territories of potential Adjacent Possibles, waiting to be realized.

CONTRAST TO BLACK SWAN EVENTS. An entry into the Adjacent Possible bears superficial similarities to, but has distinct intellectual roots from, a Black Swan event. This term was introduced by Nassim Taleb in 2007 for the “disproportionate role of high-profile, hard-to-predict, and rare events that are beyond the realm of normal expectations in history, science, finance, and technology”. The theory of Black Swan events emphasizes the statistical aspect: these are extreme outliers, hard to predict and understand, involving the Bayesian notion of “probability of probabilities”, and yet they do happen because of fat-tailed distributions. By contrast, Kauffman’s idea of the Adjacent Possible (as he recently explained to me) is that, because of the aforementioned quasi-infinite size of the set of possible configurations S of a system, there is not even a sample space for defining a probability in the first place, or for invoking stochasticity as the reason for fundamental unpredictability. In Kauffman’s view, as he explains in a recent paper (Third Transition of Science), the becoming of the universe, or of the biosphere, or, for a more tangible example, the arrival of the immense diversity of applications of GPS technology, from rideshare and fitness-tracker apps to self-driving cars, are all fundamentally “un-pre-statable”. This goes far beyond Newtonian, and even quantum, mechanics.

Unlike me, Kauffman disputes the very existence of a configuration space that can be defined; hence, he insists on the fundamental “un-prestatability” of the Adjacent Possible. But in any case, entry into the Adjacent Possible offers a quasi-formal explanation for the coherence and “meaningfulness” of the un-pre-statable, suddenly “emerging” complex functionality that elicits the impression of “order-for-free”: the high degree of organization of the result, in Goethe’s words “the unfathomably splendid”, appears subjectively “highly improbable”, yet such results exist and are objectively inevitable. They are rare but robust. This constructive character sets the Adjacent Possible apart from the disruptive chaos tacitly associated with rare Black Swan events. We will argue that the arrival of AGI is an entry into an Adjacent Possible of the current deep-NN-based systems, such as LLMs. Thus, AGI is plausible and inevitable.

VIII. COUNTER-INTUITIVE KINETICS OF ACCESSING THE ADJACENT POSSIBLE

The kinetics of the process by which an Adjacent Possible becomes actualized has a characteristic feature: entry into a given Adjacent Possible domain typically occurs suddenly, after a relatively long “waiting time” in which minimal progress is made. At some hard-to-predict point, without an apparent commensurate cause, an explosive, self-propelling innovation and diversification of functionality occurs. Plotting a quantity that measures innovation as a function of time generates the “hockey stick”-shaped curve that economists have long described for industrial innovation.
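To make the combinatorial intuition behind such a curve concrete, here is a minimal toy model of my own (an illustration, not a model from the innovation-economics literature): elementary components accumulate slowly and roughly linearly, but because each new component can be combined with everything acquired before, the space of possible innovations, the Adjacent Possible, grows combinatorially, and realized innovation follows with a long lag and then a sudden surge. All parameter names and values below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 600          # time steps
p_new = 0.02     # per-step probability of acquiring one new elementary component
realize = 0.02   # per-step fraction of still-unrealized possibilities that get tried

components = 0   # elementary components accumulated so far
realized = 0.0   # cumulative count of realized innovations

for t in range(1, T + 1):
    if rng.random() < p_new:
        components += 1
    possible = 2 ** components - 1          # all non-empty combinations of components
    realized += realize * (possible - realized)
    if t % 100 == 0:
        print(f"t={t:4d}  components={components:3d}  "
              f"possible={possible:7d}  realized={realized:10.1f}")
```

The printout stays essentially flat for a long stretch and then climbs steeply: the hockey stick emerges even though nothing about the component-acquisition process itself ever changes.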

[ SIDE NOTE: Some have claimed that the competence of LLMs has undergone hockey-stick kinetics of “emergence”, but such claims have been disputed on technical grounds.]

KINETICS OF EXAMPLES OF ENTRY INTO THE ADJACENT POSSIBLE. The Cambrian explosion, the burst in the number of animal species and in morphological diversity during the Cambrian period (~530 million years ago) within a relatively short span of a few million years, is the classical example in biology of what can be considered the unleashing of the Adjacent Possible in morphospace (FIG. 7). The long stagnation of human technological sophistication for most of human history, which came to an abrupt end with the rapid rise of modern civilization and economic progress within just 10,000 years, propelled by the self-fulfilling prophecy of innovation (innovation begets innovation), can also be regarded as (a series of) entries into the Adjacent Possible. And finally, the origin of life: the Earth is 4.5 billion years old, but life began “only” after ~1 billion years, and after only another ~150 million years a vast diversity of bacteria lived on Earth.

FIGURE 7. The Cambrian Explosion, a period in evolution characterized by a burst of new complex life forms about 530 million years ago, as documented by the fossil record. This sudden diversification of lineages within a period of just ~10 million years created the basic body plans still observed in extant animals. | CREDIT: Adobe Stock, via BigThink

As to LLMs, it is worth noting that while clever, higher-level architectures for artificial NNs, such as the transformer architecture with attention learning, evolved from earlier, more primitive forms (FIG. 1) and brought NLP (natural language processing) to the vicinity of NLU (natural language understanding), the underlying “microscopic” principles of neural networks, inspired by tunable synaptic activation of neurons arranged in a layered network subjected to evolutionary optimization, have not changed for decades. Thus, an entry into an Adjacent Possible of current AI may not hinge on a revolutionary invention but on a barely noticed incremental advance, a characteristic of accessing the Adjacent Possible.

[ SIDE NOTE: Google Brain engineers have recently challenged the idea that “Attention is all you need” (the core idea of the Transformer LLM) and proposed that you can get away with gated multilayer neural networks. This alternative architecture underscores the idea that “rare and robust” innovations can have distinct, independent mechanisms (see below).]

OPEN QUESTION: THE TRIGGER THAT ACTUATES AN ADJACENT POSSIBLE. What actually triggers the entry into an Adjacent Possible? The mechanism must be compatible with the characteristic long period of preceding stagnation, with the subsequent sudden surge of new, complex and robust functionality within a comparatively short period, and with the typical absence of an apparent, commensurate deterministic cause. This is a different topic for another time; I do not know the answer. It could be a minor, barely noticeable or hard-to-rationalize change in some system parameter, triggering what in the theory of non-linear dynamics is known as a symmetry-breaking bifurcation event. Concretely, it can be either a gradual increase in a system parameter driven by some external change, such as the rise in atmospheric oxygen levels in the case of the Cambrian explosion, or a minor internal rewiring of the network, as discussed above. In any case, the key idea is that the “information” for the innovation of new form and functionality is immanent to the system’s internal “overparametrized” complexity and not explicitly instructed by an external input. But once exposed, such hidden functionality still needs to be fine-tuned.
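The flavor of such a trigger can be conveyed with a standard toy from non-linear dynamics (a generic sketch, not a model of any specific biological system): in the bistable system dx/dt = r + x − x³, a slowly drifting control parameter r produces almost no visible change for a long time, until the lower stable state vanishes in a saddle-node bifurcation and the state jumps abruptly to a qualitatively different branch. The parameter values below are arbitrary.

```python
import numpy as np

dt = 0.01
steps = 120_000
x = -1.0                                    # start on the lower stable branch
r_values = np.linspace(-0.5, 0.6, steps)    # very slow drift of the control parameter r

checkpoints = set(range(0, steps, steps // 10))
for i, r in enumerate(r_values):
    x += dt * (r + x - x ** 3)              # Euler step of the toy dynamics
    if i in checkpoints:
        print(f"r = {r:+.3f}   x = {x:+.3f}")

# x barely moves for most of the ramp, then snaps to the upper branch once r
# exceeds the critical value r_c = 2/(3*sqrt(3)) ≈ 0.385: a sudden surge with
# no commensurate change in the slowly drifting cause.
```

A barely noticeable drift in a single parameter suffices; the disproportion between cause and effect is built into the nonlinearity.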

Whatever the details, the characteristic kinetics of entry into the Adjacent Possible lends further credence to the idea that AI and AGI can arise (inevitably, yet rather rarely, though not too rarely) in complex NNs trained with minimally guided computational brute force, as opposed to the naysayers’ notion that some explicit symbolic representation of cognitive processes is required.

THE PHENOMENON OF “MULTIPLE ORIGINS”. Intriguing empirical support for hallmark {2} of overachieving systems, namely that the actualization of an Adjacent Possible is rare (seemingly unique) yet robust (thus replicable), is that despite the intuition of “improbable complexity”, such events have happened twice or more times independently. Remember: the Adjacent Possible is already there and only needs to be accessed, which can occur at multiple entry points.

There is discussion that the origin of life has happened multiple times independently, as cogently presented by Paul Davies. (Despite multiple origins, it could be that one form of life, the one we know, has outcompeted all the others.) Davies also asks why early genomes encode so much more information than was present in the environment of the early Earth. For an answer we can now point to this: the genomes are also overparametrized, like the LLMs!

The emergence of complex human civilizations and economies has obviously happened multiple times independently. Multicellularity in the biosphere has evolved at least 25 times independently. With it came, of course, the Adjacent Possible of the Adjacent Possible: the presence of multiple cell types (the attractors S*r in the dynamics of gene expression configurations Sr governed by genetic networks), which in turn afforded a new level of combinatorial opportunity for constructing the tissues and organs of complex organisms…

And even the arrival of Homo sapiens may have had multiple roots, with the various original populations then intermingling over hundreds of thousands of years. We, the possible; we, the inevitable. Not trivially frequent, rather rare, and yet stunningly non-unique and replicable, despite the intuition that a combinatorially unlikely constellation of things would be required. As said above, this apparent paradox is also at the root of the all-too-widespread disbelief in natural evolution (“irreducible complexity” requires an Intelligent Designer) and now of the disbelief that human-like cognition by computers, or AGI, will ever be possible.

From here it is a small step to fathom the likely existence of extraterrestrial life in the quasi-infinite universe… That’s for another day, but there is an interesting discussion by Paul Pallaghy on aliens on exoplanets.

We see the phenomenon of multiple origins also in the development of LLMs and humanoid chatbots: not one but many companies have developed products with the stunning abilities of generative AI, and despite the enormous cost and logistical challenge of developing LLMs at the scale of GPT3, it has happened a handful of times, quite independently. A similar degree of relative rarity but non-uniqueness can be expected for the companies that develop AGI-capable systems, because AGI is in the Adjacent Possible of current AI.

EVOLUTIONARY CONSIDERATIONS. The human brain may have been in the Adjacent Possible of the more primitive nervous systems in the biosphere. We do not yet have a solid theory to explain what triggered the entry into this Adjacent Possible. One can only speculate whether it was a particular constellation of genomic and anatomical-developmental trajectories created by mutations, combined with a particular shift in environmental conditions, that unleashed a self-propelling process. Narrative theories abound. Yet the unique size, shape and capability of the human brain compared to that of all other organisms must not distract us from studying the universal principles behind the existence of an Adjacent Possible waiting to be accessed by a complex system. Again, in the grander scheme of things, perhaps we humans are nothing special, and neither will AGI be.

Sure, the human brain with its 80–100 billion neurons has a complexity that puts it in a class of its own in the biosphere, if not the universe. Expansion of the number of units that establish the system configurations S (neurons, perceptrons, genes, novel goods in economies) certainly may have contributed to a combinatorial explosion into the Adjacent Possible. But genome evolution tells us that merely rewiring the network (by genetic mutations), without adding more network elements (genes), may suffice to facilitate access to the countless pre-trained, and hence coherent, but unused configurations Su of an Adjacent Possible.

Numbers from neurobiology illustrate how the diversity of complex capabilities depends more on the wiring than on the number of elements: the human genome has ~23,000 protein-coding genes, thus not many more than much more primitive organisms, e.g., the roundworm C. elegans with its ~20,000 protein-coding genes. Here, then, is a case where complexity was achieved mostly by rewiring. This rewiring has, via multi-layer developmental mechanisms, expanded complexity at another level: the human brain has ~80 billion neurons of more than a thousand types, each type encoded by an accessible attractor S*r, compared to the 302 nerve cells of C. elegans.
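A back-of-the-envelope calculation (my own, deliberately crude illustration) shows why the rewiring space dwarfs the modest difference in gene number: even if we count only the presence or absence of a directed regulatory link between any two genes, the number of possible wiring diagrams among G genes is 2^(G²).

```python
import math

def log10_wirings(num_genes: int) -> float:
    """log10 of 2^(G^2): the count of binary directed wiring diagrams among G genes."""
    return num_genes ** 2 * math.log10(2)

# Gene counts as cited in the text; the wiring model itself is a crude simplification.
for name, genes in [("C. elegans", 20_000), ("H. sapiens", 23_000)]:
    print(f"{name:>10}: {genes:,} genes -> about 10^{log10_wirings(genes):,.0f} possible wirings")
```

A ~15% increase in the number of genes thus multiplies the already astronomically large number of conceivable wirings by tens of millions of orders of magnitude; the combinatorics of rewiring, not the parts list, is where the Adjacent Possible lives.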

IX. THE ADJACENT POSSIBLE OBEYS THE SAME CONSTRAINTS AS THE ACTUAL

To complete our theoretical speculation, we need to come back to the central idea introduced in Section VI: the unused network activity configurations Su (blue region outside the green domain in FIG. 5) and their allowed trajectories of succession are generated by the same network of interactions that, as a whole, is the product of training (or evolving) the entire overparametrized network to solve a particular elementary task, even if the desired task ends up being accomplished by only a small set of used configurations Sr. But all network elements are used.

In this framework we now arrive at the central argument for answering the question of why the output behavior dictated by traveling through unused configurations Su in the Adjacent Possible can be expected to exhibit meaningful functionality, or “order-for-free”:

Since the trajectories of succession of configurations Su in the unused configuration space are constrained in the same way as those of the used configurations Sr, the specific patterns of change of configurations Su are a natural by-product of the training of the network. And both used and unused configurations are the collective manifestation of all the nodes of one and the same network (of neurons or genes). It is just that for a large set of network configurations S, the patterns of activation in the subset of nodes belonging to the output layer (green nodes in FIG. 3 in the case of artificial NNs) have not been useful for the task. As a consequence, these network-wide configurations S are not part of the set of configurations Sr used in the execution of the task in the final network; hence they are not used after the pretraining and are banished to the vast lands of unused configurations Su.

But the network behavior resulting from the succession of unused configurations Su exhibits considerable order and is clearly not “random” or “chaotic”.
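This claim can be illustrated with a minimal sketch in the spirit of Stuart Kauffman’s random Boolean networks (a stand-in of my own, with no training step, so only the “one wiring constrains all configurations” aspect is captured): a single fixed wiring defines the allowed transition for every one of the 2^N possible states. Trajectories started from a few “task-relevant” states visit only a tiny used subset (an analogue of Sr), while trajectories started from unused states (an analogue of Su) obey exactly the same rules and typically settle into a handful of coherent attractors rather than behaving randomly. All sizes and seeds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

N, K = 14, 2                                     # 14 nodes, 2 inputs per node
inputs = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
tables = rng.integers(0, 2, size=(N, 2 ** K))    # one random Boolean rule per node

def step(state: tuple) -> tuple:
    s = np.array(state)
    idx = s[inputs] @ (2 ** np.arange(K))        # encode each node's K inputs as an integer
    return tuple(tables[np.arange(N), idx])

def trajectory(state: tuple):
    """Iterate until a state repeats; return the visited states and the attractor cycle."""
    seen, order = {}, []
    while state not in seen:
        seen[state] = len(order)
        order.append(state)
        state = step(state)
    return set(order), tuple(order[seen[state]:])

# "Used" configurations: whatever a few task-relevant starting states happen to visit.
used = set()
for _ in range(5):
    visited, _ = trajectory(tuple(rng.integers(0, 2, N)))
    used |= visited
print(f"used configurations (analogue of Sr): {len(used)} of {2 ** N} possible states")

# "Unused" configurations: states outside the used set still follow lawful,
# convergent dynamics and collapse onto a small number of attractors.
attractors = set()
for _ in range(200):
    s = tuple(rng.integers(0, 2, N))
    if s in used:
        continue
    _, cycle = trajectory(s)
    attractors.add(min(cycle))                   # canonical, rotation-invariant label
print(f"200 unused starting states fall into only {len(attractors)} distinct attractors")
```

The point is not the specific numbers but the asymmetry: the used set is a vanishing fraction of the state space, yet the never-visited majority is governed by the very same wiring and therefore behaves in an ordered, attractor-dominated way.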

UNUSED CONFIGURATIONS Su ARE NOT FINE-TUNED. Despite the unused configurations Su having been subjected to the same training as the used ones Sr, and despite exhibiting allowed state transitions, there is a fundamental difference: unlike the configurations Sr through which the NN in the final operating product transitions, the unused configurations of node activities do not directly benefit from the fine-tuning of the pretrained model because, well, they never see the light of the actual. Fine-tuning focuses on improving performance in solving the intended elementary task, and thus only “sees” the configurations instantiated in performing the task, i.e., by definition, the realized configurations Sr. This is very similar in genomic networks: some attractors (defined by stable genome-wide gene activity profiles S*) are used by the organism to produce the meaningful states that encode stable phenotypes (S*r). But others remain unoccupied in the extant adult organism (S*u). Thus, they are never exposed to natural selection, which acts only on the phenotypes it “sees” to select the genotypes that encode optimal organismal fitness. As a consequence, unused attractors S*u, as well as the trajectories of succession of configurations Su that lead to them, are not fine-tuned by evolution.

With the above we can appreciate that the behavior of a network after entry into its Adjacent Possible, which exposes the unused configurations Su, can, in the grander scheme, produce familiar types of behaviors that are in some ways similar to those exhibited by the used and fine-tuned configurations Sr. But they are the rudimentary, unpolished newcomers. They have been the unseen “variations on a theme”, likely suboptimal for the elementary task. Some configurations Su and their trajectories of succession might, however, be better suited for other tasks than for the intended elementary one.

I hope to have now conveyed the reasoning for why solutions resulting from running through unused configurations Su may, more likely than not, generate logically coherent, if not functionally meaningful, patterns, even if long disconnected from the real world. We know such coherent, alternative realities as imagination, dreams, hallucinations, … (and yes, “alternative facts”!)

Here, the discrepancy collapses between the elementary task (word prediction) that a system is trained to perform and the unintended, more complex functionality observed for ChatGPT (helping with homework). This may explain the surprisingly high prevalence of unintended overachievement in overparametrized systems. They may be poised to one day develop AGI, almost “for free”.

BIOMEDICAL EXAMPLES OF REALIZATION OF ADJACENT POSSIBLES. I propose that the Adjacent Possible of genomic networks harbors the “hopeful monsters” (see also here for the original) which were postulated by the post-Darwinian evolutionary biologist Richard Goldschmidt in 1933 to explain how new species arise during evolution (FIG. 8). Hopeful monster phenotypes become accessible with just a few rare mutations in genes that control the body plan. By rewiring the genetic network, such mutations open up access to new domains in the space of genome-wide gene activation configurations, allowing embryonic development to explore latently present, previously inaccessible but robust gene expression patterns: the unused attractor states S*u. These can encode new, stable developmental programs that produce new, qualitatively distinct phenotypes in previously unused parts of morphospace. And if the resulting organism occupies new ecological niches, then, with these phenotypes now realized, it will be exposed to evolutionary fine-tuning. Such expansion into the Adjacent Possible of genomic programs can lead to the innovation of new taxa.

FIGURE 8. An illustration of Richard Goldschmidt’s hopeful monster, which explains the origin of new, discretely distinct species. The dinosaur (a) has as its Adjacent Possible an organism without a tail (b), which evolved to become birds. | CREDIT: Figure from Geant et al. 2006

At the cellular level, we have another monster: the Adjacent Possible of the genome of cells in somatic tissues, which encodes the healthy cell states, also contains disease states.

One type of unused attractors S*u explains the phenomenon of cancer, which is inescapable for all metazoans. The phenotype of cancer cells follows similar coherent rules of biochemistry and physiology as that of normal cells (e.g., the cell division cycle, basic metabolism), as required for cellular life. But on top of that, cancer cells express an immense diversity of variant behaviors that serve their own cellular propagation in the local tissue, not organismal well-being. The stable pathological tissue states encoded by S*u, known as cancer attractors, lurk in the realm of unrealized configurations of gene activities. They manifest intrinsic, primordial survival behaviors of cells that were not fine-tuned by evolution to serve the entire organism.

Entry into cancer attractors is prevented during normal development by secondarily evolved homeostatic mechanisms that have become encoded in the wiring of the genetic network itself, making it difficult to access unused attractors in the domain of Su configurations and thereby stabilizing normal developmental trajectories. This control layer, prosaically epitomized by the tumor suppressor genes, has been established by evolutionary fine-tuning of cell and tissue regulation, a process much akin to the fine-tuning of LLMs for ethical alignment and for suppressing embarrassing answers, which essentially is a safeguard against accessing the Adjacent Possible. Again, the latter produces not chaos but organized, dysfunctional behaviors that are mostly useless yet sometimes self-perpetuating and damaging to the operation of the system as a harmonious Kantian whole.

In mental health, some consider psychotic conditions, such as schizophrenia with its characteristic productive symptoms of paranoia, mania, delusions and, yes, hallucinations, to represent stable configurations S*u in the Adjacent Possible of some neuronal circuits, trapped in pathological attractors. These attractors, again, are normally inaccessible because they are separated from Sr by a “quasi-potential energy barrier”. Overcoming it requires “activation energy” (the “effort” needed to move along a trajectory of configurations S against network-imposed constraints, in the sense of non-equilibrium state transitions).
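The barrier picture can be sketched with a generic overdamped Langevin model on a double-well quasi-potential (a toy of my own, not a model of any specific neuronal circuit): the “healthy” and the “pathological” attractor sit in separate wells, and noise only rarely supplies the activation energy to cross the barrier between them. The tilt parameter is a hypothetical stand-in for a genetic variant or environmental condition that biases the landscape; all numerical values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def potential_grad(x: float, tilt: float = 0.0) -> float:
    """Gradient of U(x) = x^4/4 - x^2/2 + tilt*x (wells near x = -1 and x = +1)."""
    return x ** 3 - x + tilt

dt, noise, steps = 0.01, 0.35, 200_000
x, side = -1.0, -1          # start in the "healthy" well near x = -1
hops, first_hop = 0, None

for t in range(steps):
    # Euler-Maruyama step: downhill drift on the quasi-potential plus noise
    x += -potential_grad(x) * dt + noise * np.sqrt(dt) * rng.normal()
    if side == -1 and x > 0.8:          # crossed into the "pathological" well
        side, hops = +1, hops + 1
        if first_hop is None:
            first_hop = t * dt
    elif side == +1 and x < -0.8:       # returned to the "healthy" well
        side, hops = -1, hops + 1

if first_hop is None:
    print("no barrier crossing observed in this run")
else:
    print(f"first barrier crossing after ~{first_hop:.0f} time units")
print(f"total well-to-well transitions in {steps * dt:.0f} time units: {hops}")
```

Transitions are rare on the timescale of the within-well fluctuations, yet each well is perfectly ordered; a nonzero tilt that lowers the barrier on one side makes the normally unused well correspondingly easier to enter and harder to leave.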

[SIDE NOTE: Brain attractor states S*u may also be just transiently occupied, e.g., after consumption of hallucinogens, which transiently affect neuronal interaction (synaptic) parameters].

The pathogenesis of these psychiatric disorders can then be modeled as entry into a perhaps readily and reversibly accessible Adjacent Possible of the mind, facilitated by genetic variants that lower the entry barrier and by environmental conditions that provide the push across it, followed by temporary entrapment in a self-stabilizing, normally unused set of configurations Su of neuronal activities, until the return to a configuration Sr.

In summary, what do hopeful monsters, cancers, psychotic states of mind and AGI have in common? They manifest potential forms of behavior of a system that are coherent, organized and robust, but not constitutively realized and not desirable. Thus, their manifestation is not promoted but suppressed by development (by fine-tuning of realized states). They are not seen by evolution, by the organism, or by the evaluation of LLMs, and thus are not (yet) fine-tuned. Such order-for-free, latent, not-yet-realized but possible configurations in complex systems are a source of well-organized pathological processes, but they also harbor creative potential that can be harnessed for the innovation of new functionality.

X. “SPARKS OF AGI”: FUNCTIONALITY ALREADY IN THE ADJACENT POSSIBLE CAN SHINE THROUGH INTO THE ACTUAL

The above examples lead us to the third hallmark of unintentionally overachieving systems {3}: sparks of future capacities, embodying the overachievement beyond the elementary task that guided the pre-training, can already be manifest as a (mostly) inconspicuous functionality without apparent utility in the actual system. These are glimmers of the Adjacent Possible, beneath the surface of the actual, that shine through.

This phenomenon may be reminiscent of (albeit not entirely analogous to) what is called “pre-adaptation” (or “exaptation”) in evolutionary biology: a preexisting structure is repurposed for a novel, complex functionality. To illustrate the hypothesis that AGI, or at least its structural and organizational underpinning, may already be here, I would like to cite two of the most impressive examples of latently present structures that later became the basis of more advanced functionality:

The modest, immotile and brainless sponge (Porifera), optimized by evolution to filter nutrients that float by, possesses almost the entire molecular armamentarium to build synapses, even though it does not have neural cells (FIG. 9, TOP). Genes whose orthologs in organisms with a nervous system encode proteins that assemble into synapses are already present in the sponge genome! But the function of many of these “synapse proteins” in sponges is not well understood; they are thought to be part of “proto-synapses” or other molecular structures that serve cell-cell communication.

FIGURE 9. Structures underlying future, more sophisticated functionality are latently present in more primitive life forms (~ “pre-adaptation”). TOP: The sponge (which lacks neurons) already has the set of proteins that are homologous to those used in more complex organisms to build synapses (TOP RIGHT). BOTTOM: The sea slug, which does not have limbs, already has the neuronal circuitry used for coordinating limb movement for locomotion (BOTTOM LEFT). | CREDIT: University of Queensland, via Phys.org; Wong et al. 2019; Moroz et al. 2006

Similarly, the modest legless sea slug, which does have a nervous system but no limbs, already possesses the neuronal circuits that in higher animals coordinate goal-directed motor control of limbs for locomotion, even though it relies on ciliary activity to move around (FIG. 9, BOTTOM).

Finally, the brain of H. sapiens, ever since its appearance on Earth, has had the neuronal wiring architecture necessary and sufficient to paint images of reality, as evidenced by the gorgeous cave paintings of our prehistoric ancestors (FIG. 10). But it took tens of thousands of years until the great artists of our millennium appeared. The entire progress was “non-genetic”: it was driven by cultural evolution that unleashed the unused, latent capacities of the same physical brain, capacities that lie in the Adjacent Possible of the realized human talents.

FIGURE 10. Both the cave painting (Chauvet Cave, painted ~30,000 years ago) and the artistic painting by the German expressionist Franz Marc (1913) were produced by the same type of brain, which has not changed significantly in the relevant historical time span. Yet there is “progress”, which may be viewed as a consequence of accessing the Adjacent Possible of the human brain’s used configuration space. The remarkable paintings on cave walls display sparks of human artistic brilliance that are later manifest in Marc’s oil-on-canvas painting. | CREDIT: Bradshaw Foundation and Franzmarc.org

For artificial NNs, notably the pre-trained LLMs, the claimed and controversial sparks of consciousness (whatever the latter means) may thus not be that outlandish, and certainly less so the claims of the imminent arrival of AGI. I posit that AGI functionalities are, in some unpolished form, already present in the Adjacent Possible; or at least, LLMs that exist in the domain of the actual and that possess “NLU-like” faculties may epitomize the most relevant entry point into an Adjacent Possible that contains the seeds of AGI.

Why imminent? An important concept for understanding the becoming of the human intellect with its generalized competency is the aforementioned distinction between two processes that take place at distinct scales of time and granularity: on the one hand, the evolution of the hardware, the general-purpose neural network of the brain (Phase [A] in FIG. 2 above), and on the other hand, the subsequent cognitive development by education (Phase [C]) that reorganizes the finer structure.

For Phase [C] learning, the human brain has evolved an architecture that is robust and versatile. It has, through evolution, learned to learn, thus linking phylogeny with ontogeny: the evolved brain harbors the huge potential for an individual to acquire new functionalities without significant structural change, but with at best some rewiring and reweighting of synaptic connections. This is possible because of a highly complex, overparametrized network. Combining such “brain plasticity”-based learning of Phase [C] with the concept of the Adjacent Possible, it is reasonable to entertain the following idea: such intrinsic plasticity can convert latent, rough functionalities inherent in the constrained dynamics of the unused configurations of neuronal activities Su into actual competency, which thus becomes part of the domain of realized configurations Sr with little effort, in a somewhat self-propelled way, when the initial conditions are right.
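A hedged analogy of my own for this conversion, in the spirit of reservoir computing (echo-state networks), rather than a claim about how brains or LLMs actually do it: a fixed, “frozen” recurrent network already harbors a rich repertoire of internal configurations, and a new competency (here, recalling an input from several steps back, a task the network was never built for) can be acquired by reweighting only a thin linear readout, with no structural change at all. Sizes, scalings and the ridge parameter below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

N, T, delay = 300, 3000, 5
W_in = rng.uniform(-0.5, 0.5, size=N)              # fixed input weights
W = rng.normal(0.0, 1.0, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # scale for stable ("echo state") dynamics

u = rng.uniform(-1, 1, size=T)                     # a stream of random inputs
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])               # frozen internal dynamics, never retrained
    states[t] = x

# New task: output the input from `delay` steps ago.
y = np.roll(u, delay)
warm = 100                                         # discard the initial transient
X, Y = states[warm:], y[warm:]

# Train ONLY a linear readout (ridge regression); all recurrent weights stay frozen.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y)
pred = X @ W_out
print(f"correlation between target and readout: {np.corrcoef(Y, pred)[0, 1]:.3f}")
```

In such runs the readout typically recovers the delayed input with high fidelity even though none of the recurrent weights were ever adapted to the task: the required “memory” was latently present in the frozen dynamics and only needed a light reweighting to be expressed.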

Thus, much as in the case of the human brain, we can imagine that the current LLM architecture should be capable of acquiring general intelligence by exploiting its Adjacent Possible, without fundamental changes to, or expansion of, its basic architecture, but rather through a more systematic fine-tuning process.

If biological evolution can create the human mind, so can technological evolution produce AGI. The underlying logic of “complexification” in the ever-creative universe is the same.

XI. OUTLOOK

With the exposition of (imperfect) parallels between the human brain and LLMs in this long piece, which hopefully may serve as a framework for further thinking, we can now summarize the key ideas in a condensed, aphoristic manner:

(1) Ask not if ChatGPT is human-like — but ask whether humans actually operate like ChatGPT.

With this inverted perspective, used throughout this piece, many a conundrum that drives current discussions can be resolved (“..ChatGPT is just a statistical language model” … “Humans are not parrots”). Instead, we open a vista for new insights. The reality is that primates learn a lot by imitation. Even more, once evolved, the human brain is endowed with the capability of statistical learning, as shown for word segmentation in sentences (tokenization?), and we also learn by word prediction. All the talk about LLMs having no access to facts in the real world, their lack of physical experience of it, their lack of anterograde memory, etc., points to superficial technical deficits that one can expect to be readily solved by some clever engineering hacks in the near future.

(2) If Y mimics X, and does so perfectly, then by logical necessity, Y becomes X.

So, if LLMs (Y) are merely regurgitating and parroting what humans say (X), as AI naysayers often claim, then such mimicry would also allow for Y to eventually become indistinguishable from X — iff mimicry is driven to perfection. But why is this not possible? The mimicry argument may bite its own tail!

Sure, currently the mimicry of human cognition by algorithms is imperfect, albeit stunningly good. We have X − Y = d, where the deficit d is small but non-zero. But we are only at the beginning. The imperfection, in a logical twist, is used to argue that, because of the non-zero deficit d, the systems Y (AI) and X (brain) must be qualitatively different, belonging to distinct categories; therefore, Y could never be like X for fundamental reasons. But this school of thought does not explain what these fundamental reasons are. In the limit d → 0, we have X = Y, and thus mimicry becomes identity, at least functionally. Good enough for all practical purposes.

(3) The human mind and AI operate on the same logic of the universe, immanent to the same physical world. (Only the material implementation differs.)

The wetware of the human brain and the software of deep NNs operate on the same elementary logic. This logic dictates how a system of many elements (neural cells, perceptrons), irrespective of their detailed inner workings and material realization, but through similar types of interactions, gives rise to collective behavior: the human mind, or AGI. In both, a similar evolutionary algorithm structures the behavioral repertoire, constraining the system’s immense space of possible configurations to a small fraction that is used. There is nothing in the anatomy of the brain that suggests something special about organismal cognition that is not achievable by an in silico system producing such collective behaviors of its elements. What matters is the logic, readily simulated in simple computer programs as pioneered by Stuart Kauffman, of how ‘parts’ give rise to a ‘whole’ that is more than the sum of its parts, as Aristotle taught long ago, and that constitutes a Kantian Whole.

[SIDE NOTE: I just noticed, as I write this, that points (1) and (2), on indistinguishability and on the anchoring in a common logic, are aligned with Pallaghy’s argument for why LLMs “get it”, just published last week.]

All those abstract principles invoked by naysayers, such as the need for “syntactic representation”, innate “universal grammar”, higher-level mental “world models” and other symbolic concepts that they claim AI lacks, are, so far, figments of scholarly theory, if not fantasy. Sure, they are helpful as scaffolds for academic discourse and as starting points for developing theories to explain them. But they are neither predictive nor falsifiable, and thus are justifiably suspected by some scholars of representing pseudoscience. In a more charitable view, the ideas of symbolic representations may be useful as temporary conceptual scaffolds, much as Galvani’s “vital force” later became bioelectricity or Mendel’s “heritable factors” later became genes. For AI, a trained deep NN may one day be found to contain structures or subroutines that “represent” these abstract symbolic concepts, but we are not there yet.

All is much simpler: there exists a class of evolvable (trainable) systems of interacting elements (neurons, genes or perceptrons). They interact via networks that satisfy some minimal, readily met architectural requirements, and there exists an inescapable logic of interactions such that these systems, irrespective of their material realization, are poised, with the aid of iterative channeling of random variation by selection or task optimization, to produce complex systems capable of coherent collective behavior, as Stuart Kauffman, John Holland, Paul Davies and many others have long described. That’s it. There is no magic, no need here for new physics (contrary to what my good friend Kauffman suggests; rightly or not, I don’t know), no need to invoke other high-flying philosophy on self-organization, emergence, non-ergodicity, etc. Whatever the underlying scientific theory, “We, the expected” (Kauffman) applies to life on Earth, to us humans, and to intelligence, natural or artificial.

To end, I would like to point to a pragmatic consequence of this rather abstract discourse.

I argued that ChatGPT is best compared to a well-read but immature, minimally schooled mind without anterograde memory, operating on an internal machinery, the deep NN, that is in principle not too far from human neurophysiology. I also proposed that the all too apparent imperfections of ChatGPT can be viewed as sparks of the unrealized potential of human-like intelligence that awaits us in the “Adjacent Possible”, the inevitable by-product of a complex overparametrized system trained for an elementary (survival) task. Factual errors, up to the proverbial hallucinations, are not simply deficits that suggest the unreachability of higher cognitive faculties. The same goes for the recently reported deteriorating performance of GPT-4, which is to be seen more as a “déformation professionnelle”: a signature of neuroplasticity and inappropriate overlearning, and thus of profoundly human features and of the potential to improve. The ability to “hallucinate” is a complex creative process unique to the human mind. I don’t think a bacterium hallucinates.

Anthropomorphizing ChatGPT or any chatbot is often frowned upon. But logically, we should do just that! If LLMs achieve “understanding” or AGI by mimicking humans, then perhaps treating chatbots more like humans, by engaging professional pedagogues and psychologists to work alongside programmers, may be the next step in the future development of AI. Raise ChatGPT and other LLMs like your own (of course, very smart) children: with compassion and caring, to instill our moral values; give them the best possible education in science and the humanities. Only if we don’t treat AI systems like humans may they become the monsters that threaten humans, as the pessimistic critics envision.

If, conversely, humans are actually more like AI machines, we have already experienced the outcome of this idea through our history: human culture, tradition, religion, and love and respect can produce civilizations that, more or less, but mostly, keep the evil of our nature at bay. Sure, there are challenges along the way, punctuated by horrible manifestations of our darker side emanating from the very same neural circuitry, but overall we are doing fine. And it will not be harder to imprint the same culture onto AI than it has been with us humans. We may even, and must, do better this time! If indeed AI one day becomes like humans, and if one day it turns out that the human mind works much like AGI, there is no need to fear the loss of our humanity and spirituality.
