Source: The Conversation (Au and NZ) – By Tomas Fitzgerald, Lecturer in Law, Curtin University
It is a truth, universally acknowledged, that the machines are taking over. What is less clear is whether the machines know that. Recent claims by a Google engineer that the LaMDA AI chatbot might be conscious made international headlines and sent philosophers into a tizz. Neuroscientists and linguists were less enthused.
As AI makes greater gains, debate about the technology moves from the hypothetical to the concrete and from the future to the present. This means a broader cross-section of people – not just philosophers, linguists and computer scientists but also policy-makers, politicians, judges, lawyers and law academics – need to form a more sophisticated view of AI.
After all, how policy-makers talk about AI is already shaping decisions about how to regulate that technology.
Take, for example, the case of Thaler v Commissioner of Patents, which was launched in the Federal Court of Australia after the Commissioner of Patents rejected an application naming an AI as an inventor. When Justice Beach disagreed and allowed the application, he made two findings.
First, he found that the word “inventor” simply described a function and could be performed either by a human or a thing. Think of the word “dishwasher”: it might describe a person, a kitchen appliance, or even an enthusiastic dog.
Second, Justice Beach used the metaphor of the brain to explain what AI is and how it works. Reasoning by analogy with human neurons, he found that the AI system in question could be considered autonomous, and so might meet the requirements of an inventor.
The case raises an important question: where did the idea that AI is like a brain come from? And why is it so popular?
AI for the mathematically challenged
It is understandable that people with no technical training might rely on metaphors to understand complex technology. But we would hope that policy-makers might develop a slightly more sophisticated understanding of AI than the one we get from RoboCop.
My research considered how law academics talk about AI. One significant challenge for this group is that they are frequently maths-phobic. As the legal scholar Richard Posner argues, the law
provides a refuge for bright youngsters who have “math block”, though this usually means they shied away from math and science courses because they could get higher grades with less work in verbal fields.
Following Posner’s insight, I reviewed all uses of the term “neural network” – the usual label for a common kind of AI system – published in a set of Australian law journals between 2015 and 2021.
Most papers made some attempt to explain what a neural network was. But only three of the nearly 50 papers attempted to engage with the underlying mathematics beyond a broad reference to statistics. Only two papers used visual aids to assist in their explanation, and none at all made use of the computer code or mathematical formulas central to neural networks.
By contrast, two-thirds of the explanations referred to the “mind” or biological neurons. And the overwhelming majority of those made a direct analogy. That is, they suggested AI systems actually replicated the function of human minds or brains. The metaphor of the mind is clearly more attractive than engaging with the underlying maths.
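To give a sense of what that underlying maths actually looks like, the core of a “neural network” is nothing more mysterious than repeated weighted sums. The short Python sketch below is purely illustrative – the inputs, weights and bias are invented for the example – but it shows everything a single artificial “neuron” does: multiply inputs by weights, add them up, and check the total against a threshold.

def neuron(inputs, weights, bias):
    # weighted sum: w1*x1 + w2*x2 + ... + bias
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    # a simple "activation": fire (1) if the sum is positive, otherwise stay silent (0)
    return 1 if total > 0 else 0

# Example with two made-up inputs, weights and a bias, chosen only to show the arithmetic
print(neuron(inputs=[0.5, 0.2], weights=[0.8, -0.4], bias=0.1))  # prints 1

Nothing in that arithmetic requires, or even suggests, a biological brain.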
It is little wonder, then, that our policy-makers and judges – like the general public – make such heavy use of these metaphors. But the metaphors are leading them astray.
Where did the idea that AI is like the brain come from?
Understanding what produces intelligence is an ancient philosophical problem that was ultimately taken up by the science of psychology. An influential statement of the problem was made in William James’ 1890 book Principles of Psychology, which set early scientific psychologists the task of identifying a one-to-one correlation between a mental state and a physiological state in the brain.
Working in the 1920s, neurophysiologist Warren McCulloch attempted to solve this “mind/body problem” by proposing a “psychological theory of mental atoms”. In the 1940s he joined Nicolas Rashevsky’s influential biophysics group, which was attempting to bring the mathematical techniques used in physics to bear on the problems of neuroscience.
Key to these efforts were attempts to build simplified models of how biological neurons might work, which could then be refined into more sophisticated, mathematically rigorous explanations.
If you have vague recollections of your high school physics teacher trying to explain the motion of particles by analogy with billiard balls or long metal slinkies, you get the general picture. Start with some very simple assumptions, understand the basic relations and work out the complexities later. In other words, assume a spherical cow.
In 1943, McCulloch and logician Walter Pitts proposed a simple model of neurons meant to explain the “heat illusion” phenomenon. While it was ultimately an unsuccessful picture of how neurons work – McCulloch and Pitts later abandoned it – it was a very helpful tool for designing logic circuits. Early computer scientists adapted their work into what is now known as logic design, where the naming conventions – “neural networks” for example – have persisted to this day.
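To see why the McCulloch-Pitts picture was so attractive to computer scientists, consider this sketch (my own illustration, not their original 1943 notation) of a threshold “neuron”: the same simple unit, given different weights and thresholds, behaves as an ordinary AND gate or OR gate.

def threshold_unit(inputs, weights, threshold):
    # fire (1) when the weighted sum of binary inputs reaches the threshold
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b):
    # needs both inputs active to reach the threshold of 2
    return threshold_unit([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    # a single active input is enough to reach the threshold of 1
    return threshold_unit([a, b], weights=[1, 1], threshold=1)

print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
print([OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 1]

The abstraction turned out to be an excellent description of logic circuits – and a poor one of neurons.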
That computer scientists still use terms like these seems to have fuelled the popular misconception that there is an intrinsic link between certain kinds of computer programs and the human brain. It is as though the simplified assumption of a spherical cow turned out to be a useful way to describe how ball pits should be designed and left us all believing there is some necessary link between children’s play equipment and dairy farming.
This would be not much more than a curiosity of intellectual history were it not the case that these misconceptions are shaping our policy responses to AI.
Is the solution to force lawyers, judges and policy-makers to pass high school calculus before they start talking about AI? Certainly they would object to any such proposal. But in the absence of better mathematical literacy we need to use better analogies.
While the Full Federal Court has since overturned Justice Beach’s decision in Thaler, it specifically noted the need for policy development in this area. Without giving non-specialists better ways of understanding and talking about AI, we’re likely to continue to have the same challenges.
Tomas Fitzgerald has received funding from the WA Bar Association. He is a member of WA Labor and the NTEU.
– ref. Why we talk about computers having brains (and why the metaphor is all wrong) – https://theconversation.com/why-we-talk-about-computers-having-brains-and-why-the-metaphor-is-all-wrong-185705