
When I was writing essays at school and university, I always began by defining my terms. This was no great hardship – I find words and etymology terribly interesting. It also served me well. There is a reason that it is a staple of essay-writers everywhere: through definitions, key issues and points of contention come readily bubbling to the surface. It allows us to get straight to the heart of the matter.
In my studies, I am now applying artificial intelligence in medical research. But before I home in on that in future posts, I wanted to introduce the field and outline a definition of AI – a notoriously elusive term, invariably accompanied by ambiguities and differing interpretations. And sure enough, in exploring simply the definition of AI, I have uncovered more questions than answers.
This is not an essay. I cannot immediately pursue the logical questions that here arise – it would require a book (indeed, many books have been written on this subject). This is simply a blog post about the complexity of communicating about AI, which will perhaps provide fodder for future discussions.
(It is worth mentioning that I could forgo this topic in my studies. I could dispense with the vague term ‘AI’, replace it with the name of the specific neural network that I am using, and chuck the philosophy. But the historian in me likes philosophy and context, and thinks that it might just be important.)
Let’s start at the beginning: the dictionary! The dictionary defines AI as:
the capacity of a computer, robot, or other programmed mechanical device to perform operations and tasks analogous to learning and decision making in humans, as speech recognition or question answering.
Well, if we’re looking for complexities, we’ve struck gold! The dictionary definition, reliable as ever, gives us a nice broad understanding of the term and some examples, but also draws our attention to contentious details. In the following questions, I am not seeking to challenge the dictionary’s authority, but to highlight the issues that this definition raises.
Complexity 1: What AI are we talking about here?
The term ‘Artificial Intelligence’ was coined in 1955, but intelligent machines have been imagined since antiquity, and have continued to be imagined in science fiction. This presents us with a difficulty of definition, as AI is past, present and future. It is both imagined and, now, real. Is it all the same?
Complexity 2: Does AI have to be mechanical?
Computers, robots or programmed mechanical devices – shiny silicon-based structures operating on binary code. Can AI never be organic? How would we define the monster in Mary Shelley’s Frankenstein? What about hybrids of organic and mechanical, as in any number of sci-fi cyborgs, or brain implants as proposed by Neuralink?
Complexity 3: How do machine decision making and learning differ from those of humans?
The definition pivots on the lovely word ‘analogous’. The dictionary writers, those bastions of human knowledge, are not satisfied that machine intelligence is the same as that of humans. In this way, the very concept of AI has not passed the Turing Test. But what makes machine intelligence different? The mechanism? The motivation? The application?
Complexity 4: Where do other beings come into this?
The definition establishes a human realm of learning and decision making – but are animals not also capable of these? Are plants? Are microbes? Are systems?
Complexity 5: Are speech recognition and question answering markers of intelligence?
These are incontestably areas in which machines shine, but was Watson’s intelligence analogous to human intelligence when it won Jeopardy!? Is Alexa’s intelligence analogous to human intelligence when it plays the song you asked for? Was Deep Blue’s intelligence analogous to human intelligence when it beat Garry Kasparov at chess? (I’ve just read his book on the subject, so I will definitely be coming back to this one.)
I think we need to go deeper, and break down the definition further. What is intelligence? According to the trusty dictionary:
capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.
Hooray – we’ve struck another seam of complexities! (Exacerbated by that cheeky et cetera at the end; who knows what that could be hiding…)
Complexity 6: What is learning, reasoning, understanding?
We have some deep educational questions here: if I memorise Wikipedia like IBM’s Watson, have I learned? If I can suggest the likely next word in a sentence like my phone’s predictive text, have I reasoned? If I can write prose in response to a prompt like GPT-3, have I understood? I think it is fair to say that these activities do not guarantee learning, reasoning, or understanding – but they could entail them. If I did these things, I would say that I have used these faculties. How do we test or define whether the machine has?
Complexity 7: What is mental activity?
Does one have to have a brain to exhibit intelligence? Can one have a mind without a brain? Does this require consciousness?
Complexity 8: What is truth?
Very Pontius Pilate of me. Seriously – are humans the arbiters of truth? Is there a ground truth? Are there truths that machines might be capable of understanding that we can’t?
And finally (because my head’s starting to hurt): let’s look at the definition for ‘artificial’.
1. made by human skill; produced by humans (opposed to natural)
2. imitation; simulated; sham
Complexity 9: Is AI an extension of human intelligence?
If humans made it, can we claim it? When we harness it, are we collaborating with it or exploiting a tool?
Complexity 10: Is our intelligence something to be imitated?
Perhaps we are being arrogant here in assuming that our intelligence is the model to follow. Might machines be capable of something different or more? And why do we assume we are the original – how do we know that we ourselves are not simulations?
On that existential-crisis-inducing note, I’m going to call it a day. This so-called intelligence, grown weary of metaphysics, needs cheese and Netflix.