The Promise of AGI and Neuroscience, or “Hold your Horses”

“We assume that the neuron is the basic functional unit, but that might be wrong. It might be that thinking of the neuron as the basic functional unit of the brain is similar to thinking of the molecule as the basic functional unit of the car, and that is a horrendous mistake.”

John Searle

Edited image generated by MidJourney

Mind and Machine

Large language models (LLMs) such as ChatGPT have demonstrated capabilities that raise questions about how much closer we are to achieving artificial general intelligence (AGI): artificial intelligence with general cognitive abilities that are not problem- or task-specific, and that is essentially indistinguishable from human intelligence. The prospect of such an autonomous capability raises concerns about human relevance, progressive deskilling, and existential threat. A small minority even believes we have achieved AGI already; a Google engineer was recently fired for claiming that the company’s AI chatbot LaMDA had become sentient.

Anthropomorphizing chatbots and claiming that state-of-the-art AI technologies possess a theory of mind fit a historical pattern of using cutting-edge technology as a metaphor for the mind and brain:

  • 17th century brain (Descartes) = works like a hydraulic system
  • 18th century brain (Galvani) = works like electricity
  • 19th century brain (Helmholtz) = works like telegraph wires
  • 20th century brain (McCulloch & Pitts) = works like a computer

Because the mind-brain is the most complex contained system humans have discovered in the universe thus far, we tend to imagine that it functions like the most complex technology we have developed. However, metaphors are not reality.

In fact, if the historical timeline above is any indication, our current metaphor is likely to be wrong. Brains turn out not to be much like hydraulic systems, electrical systems, or telegraph wires; the metaphor simply changes as technology evolves. We could reasonably imagine a future in which a new technology “X” emerges that is more complex than computation, and the new prevailing view becomes “brains work like X.”

The supposition underlying the vast majority of AI research is not just that brains are metaphorically computer-like, but that mental states and processes are actually realized through computation. This is known as the computational theory of mind (CTM). CTM modeling philosophies generally fall into either symbolic (i.e., computationalist) or non-symbolic (i.e., connectionist) approaches. Despite their differences, both assume that intelligence in the brain is a computational phenomenon that can be modeled as such.
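
To make the distinction concrete, here is a minimal, invented sketch (the umbrella decision, feature names, and hand-set weights are all illustrative assumptions, not drawn from any real system). The symbolic approach encodes a decision as explicit rules over discrete symbols; the connectionist approach buries the same knowledge in the weights of a model neuron.

```python
import math

# Symbolic (computationalist): cognition as rule-based manipulation
# of discrete symbols. The knowledge is explicit and human-readable.
def symbolic_take_umbrella(facts: set) -> bool:
    if "raining" in facts:
        return True
    if "forecast_rain" in facts and "long_trip" in facts:
        return True
    return False

# Connectionist: the same decision as a weighted sum passed through a
# nonlinearity. The knowledge is implicit in the weights, which a
# training procedure would normally learn; here they are hand-set.
def connectionist_take_umbrella(x_raining: float, x_forecast: float, x_trip: float) -> bool:
    w = (4.0, 1.5, 1.5)   # one weight per input feature
    b = -2.0              # bias term
    activation = w[0] * x_raining + w[1] * x_forecast + w[2] * x_trip + b
    return 1 / (1 + math.exp(-activation)) > 0.5  # sigmoid, thresholded

print(symbolic_take_umbrella({"raining"}))          # True
print(connectionist_take_umbrella(1.0, 0.0, 0.0))   # True  (4.0 - 2.0 > 0)
print(connectionist_take_umbrella(0.0, 1.0, 1.0))   # True  (1.5 + 1.5 - 2.0 > 0)
print(connectionist_take_umbrella(0.0, 1.0, 0.0))   # False (1.5 - 2.0 < 0)
```

Both toy programs compute the same function; the disagreement between the two camps is over which style of computation better describes what brains actually do.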

However, humanity’s understanding of the brain is still in its infancy, and our most basic assumptions about it may simply be incorrect. Even if it is true that brains are computational machines, it does not logically follow that massive connectionist neural networks like LLMs are brain-like. They are not.

It is therefore worth critically examining our claims, beliefs, and fears that the advent of large language models heralds generalized machine intelligence. Neural communication shares interesting similarities with computation as implemented by neural nets, but the two are far from identical.

Mind and Brain

Neuroscience and cognition research have identified many complex issues that AI as a field has yet to wrestle with. Below are examples of research findings that challenge a “computer-like” description of the brain and a “brain-like” description of LLMs:

  • Neurons are probably not “dumb” mathematical integrators as modeled in neural nets, but play an active and complex role in signaling that arises from their biological properties. For instance, recent neuroscientific evidence has shown that single neurons can solve the classic XOR problem, which suggests far more complex representational capabilities for single or small numbers of neurons than was once believed (a sketch after this list shows why that finding is surprising).

  • Synaptic strength (the connection strength between two neurons) doesn’t change only at a computational level, i.e., like weights in a neural net. It also changes at a “hardware” level. For instance, new synaptic processes (i.e., meat) grow and extend from neurons, or wither, depending on the experience of the organism. The pre- and post-synaptic ends of neurons also physically enlarge with learning. Neural nets change weights computationally during training, but the brain changes its network hardware and architecture dynamically, learning continuously throughout the lifespan. Accompanying this change in neural architecture is neurogenesis: the creation of entirely new neurons, known to occur in both the hippocampus and the piriform cortex.

  • The architecture of neural nets is a homogeneous set of units based on the “model neuron” found in every introductory psychology textbook. In contrast, cellular diversity in the nervous system is vast: in just a single region of the visual cortex (V1), 50 different types of interneurons have been identified. Neurons vary widely both morphologically and functionally. The same is true for synapses: while the characteristic synapse is axo-dendritic (meaning the signal travels from the end of one neuron’s axon to the next neuron’s dendrites), there are axo-axonic and dendro-dendritic connections, too.

  • Glial cells are totally unaccounted for in neural nets, despite outnumbering neurons in the brain by some estimates as much as 3:1. Once thought to play only a supporting role to neurons, glial cells are increasingly implicated by neuroscientific evidence as active participants in learning and synaptic plasticity.

  • Human intelligence is a property of the whole brain working in concert, not a special property of the neocortex (cortex). There are multiple, well-described subcortical systems that critically support different types of complex cognition, all operating in parallel with the cortex. This is a much-needed update to the widely held misconception of human intelligence as cortically based cognition layered on top of a “reptilian” brain underneath. AI research has arguably overemphasized the cortex’s importance to human intelligence, perhaps because its relatively homogeneous six-layer organization of neurons corresponds better to neural net architectures than does the brain as a whole.
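
The XOR finding in the first bullet is striking because a single artificial neuron of the standard kind, a weighted sum passed through a nonlinearity (y = f(w·x + b)), provably cannot compute XOR: the four input cases are not linearly separable. A neural net needs a hidden layer to do what a single biological neuron’s dendrites apparently can. Here is a minimal sketch of that classic limitation, with hand-chosen illustrative weights rather than learned ones:

```python
# A single linear-threshold unit (the textbook "model neuron") cannot
# realize XOR: no choice of (w1, w2, b) separates {(0,1), (1,0)} from
# {(0,0), (1,1)} with one line. A two-layer arrangement can.

def unit(x1, x2, w1, w2, b):
    """Standard artificial neuron: step(w1*x1 + w2*x2 + b)."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def xor_two_layer(x1, x2):
    """XOR built as (x1 OR x2) AND NOT (x1 AND x2) with one hidden layer."""
    h_or  = unit(x1, x2, 1.0, 1.0, -0.5)        # fires if either input is on
    h_and = unit(x1, x2, 1.0, 1.0, -1.5)        # fires only if both are on
    return unit(h_or, h_and, 1.0, -2.0, -0.5)   # fires iff OR on and AND off

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", xor_two_layer(x1, x2))
# prints: 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

In other words, a computation that requires a small network of artificial units appears to fit inside one biological neuron, which is one concrete sense in which the “model neuron” undersells its biological namesake.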

It is worth noting that the issues above are just a small sampling of topics largely unaddressed by machine learning researchers and engineers. AI optimists may nonetheless believe these challenges will soon be overcome through continued development, while pessimists may cite these issues as evidence that even if AGI is achievable, it may be a long time coming.

Of course, all of the above might be entirely moot. AGI might arise from neural net architectures that look very little like the human brain. Just because we are developing a technology inspired by the brain doesn’t mean it has to be faithful to every detail. Automobiles were developed at the beginning of the twentieth century to emulate and replace the functions of horses. It’s unlikely that automobile engineers at the time were obsessing over horse physiology. This is consistent with a computational theory of mind view of intelligence as independent of its implementation, or the philosophical principle of the “multiple realizability” of mental states.

However, some AI researchers are now claiming that for AGI to be realized beyond the agreeable (if sometimes bizarre) conversational partnership achieved by LLMs, machine learning models must be more brain-like and biologically inspired. If that’s true, a key to moving AGI development forward may be developing better meso-level theories of how neural circuits dynamically behave to support intelligence, and integrating those theories into machine learning architectures. The importance of defining meso-level theories of cognition in neuroscience research was raised as early as 2012, at the height of attention to the Human Connectome Project. The problem remains just as relevant today, however, and I argue that meso-level theories of intelligence are essential to the development of AGI.

Mind the Gap

Developing meso-level theories that bridge the gap between micro-scale information about neurons and macro-scale information about brains is quite difficult given the current state of knowledge in neuroscience and the cognitive sciences. Neuroscience has given us exquisitely detailed knowledge of neurons and their sub-cellular signaling processes. Unfortunately, it’s not yet clear how to link this micro-scale knowledge to cognitive models of intelligence. Even simply defining the problem space is a challenge. The Human Connectome Project is an ongoing effort to map the human brain’s large and tangled connectome of an estimated 86 billion neurons and more than 1.5 × 10¹⁴ cortical synapses. Begun in 2011, it is still incomplete. The only organism to date with a fully mapped connectome is C. elegans, a one-millimeter worm with a nervous system comprising just 302 neurons. Related efforts like the Human Brain Project have thus far yielded disappointing results.
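
Some rough arithmetic on the figures above conveys the scale of the mapping problem. This is only a back-of-the-envelope sketch: the inputs are themselves estimates, and it mixes a cortical synapse count with a whole-brain neuron count.

```python
# Back-of-the-envelope scale comparison using only the figures cited above.
human_neurons  = 86e9    # estimated neurons in the human brain
human_synapses = 1.5e14  # estimated cortical synapses (whole-brain is higher)
worm_neurons   = 302     # C. elegans, the only fully mapped connectome

print(f"Avg synapses per neuron: {human_synapses / human_neurons:,.0f}")
# -> roughly 1,744 cortical synapses per neuron, on average

print(f"Human/worm neuron ratio: {human_neurons / worm_neurons:.1e}")
# -> ~2.8e+08: the human brain has roughly 280 million times more
#    neurons, and wiring complexity grows far faster than neuron count
```

Even treating each neuron and synapse as a single data point, the jump from the worm to the human brain is eight orders of magnitude, which is why the project remains unfinished more than a decade in.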

In contrast, cognitive neuroscience and neurology have given us macro-scale models of regional brain networks and their correspondence to the specific types of human behavior and cognition that contribute to intelligence. Of particular importance to AGI research is decades-old evidence supporting the existence of multiple learning and memory processes that run in parallel via distinct brain subsystems to represent the full range of human knowledge and skill. However, these models do not readily translate down to the details of machine learning algorithms or model implementation.

In Goldilocks fashion, a level of articulation that would be “just right” for supporting algorithmic implementation is a set of meso-level descriptions of how particular cognitive functions are realized in the brain. Developing these descriptions is additionally challenging because of limitations in the currently available methods for examining brains, neurons, and everything in between. The most widely used brain imaging and stimulation techniques in cognitive neuroscience and neurology, such as fMRI, EEG, PET, MEG, DTI, and TMS/tDCS, generally lack the resolution to peer into the living human brain beyond a macro level. Neuroscience research uses more invasive, higher-resolution techniques in animals like rodents and non-human primates, which provide insight into processes that humans share with those animals, such as visual perception and motor control.

The depth of knowledge achieved for vision and motor processing through animal research has led to groundbreaking technologies like retinal implants and neuroprosthetics, and has perhaps contributed to a false sense of how close neuroscientists are to understanding higher-level cognition to the same depth. However, much of what contributes to human intelligence either isn’t shared with other animals or isn’t easily probed in animals using the experimental methods we use to measure cognition in humans (i.e., they can’t talk to us). The lack of meso-scale theories for many cognitive processes, like language, complex reasoning, and explicit memory, may be the ultimate limiting factor in creating human-like artificial general intelligence at this time. Fortunately, there are more recent examples of research making progress on this front.

With a goal as ambitious as creating AGI, deeper collaboration is needed between the core engineering and computer science fields that have pushed AI technology forward in recent years and the fields that systematically study human intelligence. There’s been recent fanfare around the synergistic combination of AI and neuroscience, but a true interdisciplinary AGI effort would benefit from inviting other disciplines to the table, too. Cognitive psychologists can speak to the results of more than half a century of scientific research on human cognition and behavior, including intelligence. Cognitive neuroscientists add evidence for how those cognitive functions and behaviors are localized and represented in the brain. Psychometricians provide knowledge of how we define, operationalize, and measure a construct like intelligence, and they inform cognitive modeling methods. Philosophers bring with them the highly relevant fields of logic, epistemology, and philosophy of mind; they can also address the ocean of ethical dilemmas sure to arise as AGI is introduced to the world. There is a wealth of expertise and institutional knowledge in these fields and others that can be leveraged alongside engineering and computer science in the effort to develop AGI. The sooner we all start working together, the better. Or worse. Take your pick.

Works Cited

Barber, N. (2015, July 29). Can artificial intelligence make us stupid? Psychology Today. https://www.psychologytoday.com/us/blog/the-human-beast/201507/can-artificial-intelligence-make-us-stupid

Bickle, J. (2006). Multiple realizability. Encyclopedia of Cognitive Science. https://doi.org/10.1002/0470018860.s00116

Bikoff, J. B., Gabitto, M. I., Rivard, A. F., Drobac, E., Machado, T. A., Miri, A., … & Jessell, T. M. (2016). Spinal inhibitory interneuron diversity delineates variant motor microcircuits. Cell, 165(1), 207-219. https://doi.org/10.1016/j.cell.2016.01.027

Brodkin, J. (2022, July 25). Google fires Blake Lemoine, the engineer who claimed AI chatbot is a person. Ars Technica. https://arstechnica.com/tech-policy/2022/07/google-fires-engineer-who-claimed-lamda-chatbot-is-a-sentient-person/

Carlsmith, J. (2022). Is power-seeking AI an existential risk? arXiv:2206.13353. https://doi.org/10.48550/arXiv.2206.13353

Cesario, J., Johnson, D. J., & Eisthen, H. L. (2020). Your brain is not an onion with a tiny reptile inside. Current Directions in Psychological Science, 29(3), 255-260. https://doi.org/10.1177/0963721420917687

Chuang, A. T., Margo, C. E., & Greenberg, P. B. (2014). Retinal implants: a systematic review. British Journal of Ophthalmology, 98(7), 852-856. http://dx.doi.org/10.1136/bjophthalmol-2013-303708

Citri, A., & Malenka, R. C. (2008). Synaptic plasticity: multiple forms, functions, and mechanisms. Neuropsychopharmacology, 33(1), 18-41. https://doi.org/10.1038/sj.npp.1301559

Cobb, M. (2020). The idea of the brain: The past and future of neuroscience. Hachette UK. 

Collinger, J. L., Wodlinger, B., Downey, J. E., Wang, W., Tyler-Kabara, E. C., Weber, D. J., … & Schwartz, A. B. (2013). High-performance neuroprosthetic control by an individual with tetraplegia. The Lancet, 381(9866), 557-564. https://doi.org/10.1016/S0140-6736(12)61816-9

Fields, R. D., Araque, A., Johansen-Berg, H., Lim, S. S., Lynch, G., Nave, K. A., … & Wake, H. (2014). Glial biology in learning and cognition. The Neuroscientist, 20(5), 426-431. https://doi.org/10.1177/1073858413504465

Gates, B. (2021, November 22). Is this how your brain works? GatesNotes. https://www.gatesnotes.com/A-Thousand-Brains

Gidon, A., Zolnik, T. A., Fidzinski, P., Bolduan, F., Papoutsi, A., Poirazi, P., … & Larkum, M. E. (2020). Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science, 367(6473), 83-87. https://doi.org/10.1126/science.aax6239

Goertzel, B. (2014). Artificial general intelligence: concept, state of the art, and future prospects. Journal of Artificial General Intelligence, 5(1), 1-46. https://doi.org/10.2478/jagi-2014-0001

Granados, N. (2022, January 31). Human borgs: How artificial intelligence can kill creativity and make us dumber. Forbes. https://www.forbes.com/sites/nelsongranados/2022/01/31/human-borgs-how-artificial-intelligence-can-kill-creativity-and-make-us-dumber/?sh=8c3b4f921a2d

Hawkins, J. (2021). A thousand brains: A new theory of intelligence. Basic Books.

Khambhati, A. N., Sizemore, A. E., Betzel, R. F., & Bassett, D. S. (2018). Modeling and interpreting mesoscale network dynamics. NeuroImage, 180, 337-349. https://doi.org/10.1016/j.neuroimage.2017.06.029

Knott, G. W., Holtmaat, A., Wilbrecht, L., Welker, E., & Svoboda, K. (2006). Spine growth precedes synapse formation in the adult neocortex in vivo. Nature Neuroscience, 9(9), 1117-1124. https://doi.org/10.1038/nn1747

Kosinski, M. (2023). Theory of mind may have spontaneously emerged in large language models. arXiv:2302.02083. https://doi.org/10.48550/arXiv.2302.02083

LeCun, Y. (2022). A path towards autonomous machine intelligence, version 0.9.2, 2022-06-27. OpenReview. https://openreview.net/pdf?id=BZ5a1r-kVsf

Milner, B., Squire, L. R., & Kandel, E. R. (1998). Cognitive neuroscience and the study of memory. Neuron, 20(3), 445-468. https://doi.org/10.1016/s0896-6273(00)80987-3

Pei, J., Deng, L., Song, S., Zhao, M., Zhang, Y., Wu, S., … & Shi, L. (2019). Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature, 572(7767), 106-111. https://doi.org/10.1038/s41586-019-1424-8

Rescorla, M. (2015). The computational theory of mind. Stanford Encyclopedia of Philosophy. Retrieved February 18, 2023, from https://seop.illc.uva.nl/entries/computational-mind/

Saleeba, C., Dempsey, B., Le, S., Goodchild, A., & McMullan, S. (2019). A student’s guide to neural circuit tracing. Frontiers in Neuroscience, 13, 897. https://doi.org/10.3389/fnins.2019.00897

Sample, I. (2012, June 9). Sebastian Seung: You are your connectome. The Guardian. https://www.theguardian.com/technology/2012/jun/10/connectome-neuroscience-brain-sebastian-seung

Savage, N. (2019). Marriage of mind and machine. Nature, 571(7766), S15-S17. https://doi.org/10.1038/d41586-019-02212-4

Sterling, A. (2012, November 8). Brain Brawl: Sebastian Seung vs Tony Movshon at Columbia University. Retrieved February 19, 2023, from https://blog.eyewire.org/brain-brawl-sebastian-seung-vs-tony-movshon-at-columbia-university/

Tang, Y., Nyengaard, J. R., De Groot, D. M., & Gundersen, H. J. G. (2001). Total regional and global number of synapses in the human brain neocortex. Synapse, 41(3), 258-273. https://doi.org/10.1002/syn.1083

Van den Heuvel, M. P., & Yeo, B. T. (2017). A spotlight on bridging microscale and macroscale human brain architecture. Neuron, 93(6), 1248-1251. https://doi.org/10.1016/j.neuron.2017.02.048

Van Essen, D. C., Smith, S. M., Barch, D. M., Behrens, T. E. J., Yacoub, E., & Ugurbil, K., for the WU-Minn HCP Consortium. (2013). The WU-Minn Human Connectome Project: An overview. NeuroImage, 80, 62-79. https://doi.org/10.1016/j.neuroimage.2012.02.018

Van Praag, H., Schinder, A. F., Christie, B. R., Toni, N., Palmer, T. D., & Gage, F. H. (2002). Functional neurogenesis in the adult hippocampus. Nature, 415(6875), 1030-1034. https://doi.org/10.1038/4151030a

White, J. G., Southgate, E., Thomson, J. N., & Brenner, S. (1986). The structure of the nervous system of the nematode Caenorhabditis elegans. Philosophical Transactions of the Royal Society of London. B, Biological Sciences, 314(1165), 1-340. https://doi.org/10.1098/rstb.1986.0056

Yong, E. (2019, July 22). The Human Brain Project hasn’t lived up to its promises. The Atlantic. https://www.theatlantic.com/science/archive/2019/07/ten-years-human-brain-project-simulation-markram-ted-talk/594493/