AI and the Rhetoric of Neuroscience

René Jensen
Oct 1, 2021

The brain is sometimes described as the most complicated object in the known universe, and at other times we hear that it is the final frontier of modern science.

Supposing that these statements are true, of what consequence, if any, are they to the possibility of creating AI?

After all, if it takes an object so complicated to give rise to the kind of intelligent behavior that we are capable of, shouldn’t this shake our confidence in the possibility of creating intelligence artificially?

How can we ever hope to create something of such cosmic difficulty?

The truth is that few, if any, AI engineers hope to create artificial intelligence by building a 1:1 replica of the human brain, though some have started out small and are engaged in similar but much less ambitious projects, such as modeling a mouse brain:

By around 2024, the Blue Brain Project aims to have reached a cellular-level model of an entire mouse brain. It will be a first draft at the cellular-level of detail. In other words, a digital model with all the neurons (around 100 million), most of the types of neurons (around 1,000 different types) in the morphological detail (with all their tree like branches) and most of the synapses that they form (around a trillion).

Right now, building a realistic model of the human brain is simply too tall an order:

It is not feasible to map every detail of the brain experimentally because there are too many parts (over 20,000 genes, more than 100,000 different types of proteins, more than a trillion organic molecules in a single cell, nearly 100 billion neurons, up to 1,000 trillion synapses and over 800 different brain regions in the human brain), too many complex relationships between all these parts, and then too many variations of the brain: across individuals, genders, age, and species.

Engineers working on AI can (and do!) take inspiration from neuro- and cognitive science insofar as it brings them closer to their goal of creating artificial intelligence, but nothing obliges them to do so.

This is because understanding human intelligence and creating human-level intelligence artificially are largely independent pursuits, as Mark Sprevak explains:

Cognitive science and AI are related: both concern human intelligence and both use computation. It is important to see, however, that their two projects are distinct. AI aims to create an intelligent machine that may or may not use the same mechanisms for intelligence as humans. Cognitive science aims to uncover the mechanisms peculiar to human intelligence. These two projects could, in principle, be pursued independently. (Sprevak 2017, 278)

In practice, neuro- and cognitive science and artificial intelligence are pursued together, which is no doubt sensible: we might as well take inspiration from mechanisms we know can produce the kind of complex, intelligent behavior we are looking for, instead of groping in the dark.

At the same time, however, it is a fundamental assumption in AI, as Daniel Dennett points out, that a realistic model of the brain is not a prerequisite for producing the kind of behavior the brain produces:

Like any effort at scientific modeling, AI modeling has been attempted in a spirit of opportunistic oversimplification. Things that are horribly complicated may be usefully and revealingly approximated by partitionings, averagings, idealizations, and other deliberate oversimplifications, in the hope that some molar behavior of the complex phenomenon will prove to be relatively independent of all the myriad micro-details, and hence will be reproduced in a model that glosses over those micro-details. (Dennett 1987, 330–331)

AI engineers pin their hopes on the idea that a model can sacrifice some of the micro-details of our neural make-up, rely on various approximations, and still preserve the causal powers of the brain.

The brain might well be the most complicated object in the known universe, but since understanding the human brain and intelligence and creating intelligence artificially are independent pursuits, the difficulties involved in understanding the former need not be ones we have to overcome before we can create the latter.

In fact, as we have seen, research in AI is guided precisely by the assumption that we can avoid tackling the mysteries of that alleged final frontier of science, the human brain.

So even if it is true, the bombastic rhetoric of neuroscience need not shake our confidence in the possibility of AI.

But… of course there is a but. If the brain is the only object capable of producing the kind of intelligent behavior it produces, and as of yet we know of no other object that can, then the difficulties involved in understanding it are bad news for artificial intelligence.

Dennett believes this is the case: more precisely, that whenever we observe behavior of a complexity equivalent to that of humans, the agent exhibiting it must ultimately be controlled by an organic brain:

My position . . . is that neurophysiology is (probably) so important that if I ever see any entity gadding about in the world with the real-time cleverness of, say, C3PO in Star Wars, I will be prepared to wager a considerable sum that it is controlled — locally or remotely — by an organic brain. (ibid., 334)

If Dennett is right, and if the human brain is indeed the most complex object in the known universe, then, it seems, we would have to drink up the sea!

Bibliography

Dennett, Daniel C. ‘Fast Thinking’. In The Intentional Stance. MIT Press, 1987.

Sprevak, Mark. ‘Turing’s Model of the Mind’. In The Turing Guide, edited by B. Jack Copeland, Jonathan Bowen, Mark Sprevak, and Robin Wilson. Oxford University Press, 2017.
