
When I set out to write blog posts at the weekends, I had forgotten what a pre-pandemic weekend looked like. As we ease out of lockdown, it’s slowly coming back to me that weekends are busy, social affairs (even for this introvert!) – perhaps not filled with opportunities for peaceful thinking and writing. So I didn’t write anything this weekend – instead I made banana bread, met friends in the sunshine, visited the new local bookshop, and went to evensong. As it turned out, the delay was fortuitous.
In my last headache-inducing post, I posed questions about how we define artificial intelligence past, present and future, fictional and non-fictional. In the wake of this wildly existential piece, I was planning to bring us back down to earth with a post about the real, current state of AI. My intended weekend of writing came and went, and on Monday a professor of computer science called Melanie Mitchell published a brilliant and extremely relevant paper.
In Why AI is Harder Than We Think, Professor Mitchell outlines four fallacies in our thinking about AI, and how they have driven a cycle of ‘boom and bust’ in the field. As I read this (highly engaging and accessible) piece, I found myself nodding along and quietly exclaiming “yes!”. Mitchell’s well-reasoned prose imparts logic to the intuitive caution and scepticism I feel every time I hear grand, assertive claims about the imminent future of AI.
Mitchell starts by telling the history of AI through a history of unmet promises: from the Dartmouth Summer Workshop of 1956 (which predicted “a significant advance” in simulating “every aspect of learning or any other feature of intelligence”) to the present day (which does not find us all travelling around in the self-driving cars that were foretold). Research has surged and foundered in phases known as AI ‘springs’ and ‘winters’. Of course, this is not to say that progress has not been made (I wouldn’t be a researcher in this field if I thought so!). We’ll reserve a detailed history of AI for another day, but suffice it to say that from symbolic AI, through expert systems and machine learning, to deep learning, we have come a long way. The historic problem, however, is that progress has not matched the grandiose claims, leading to disappointment, collapse and dormancy, before the spark of hope is lit anew by some novel piece of research. And so the cycle repeats. Here we are again today, with ‘human-level’ AI predicted to arrive in the next 20-40 years.
Why are we always (thus far) so erroneously optimistic? Mitchell identifies four fallacies which inflate our confidence:
1. Narrow intelligence is on a continuum with general intelligence
‘Narrow intelligence’ is the ability to perform a specific task, such as playing chess (think of Deep Blue). Each time AI succeeds at a task like this, it is heralded as the forerunner of ‘general intelligence’ – the ability to apply learning across a multitude of tasks. It is a flimsy assumption, discredited time and time again as narrowly intelligent machines fail to evolve into superintelligences. One of the pioneers of the field observed that: “As soon as it works, no one calls it AI any more”. Every time we think we’ve got our feet on the first rung of the ladder to general intelligence, we forget all the previous ‘first rungs’ that have since been quietly discarded.
2. Easy things are easy and hard things are hard
This fallacy is exposed by Moravec’s paradox: “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”. We easily assume that markers of human intellect, such as the ability to win at games like chess, are the hurdles that machines must clear to demonstrate their intelligence. We forget that things that come naturally to us, like vision or movement, are actually the greater hurdles for machines. In an odd act of self-deprecation, we underestimate how complex our own intelligence is.
3. The lure of wishful mnemonics
There is a tendency to anthropomorphise narrowly intelligent machines. We talk about machine ‘learning’, ‘understanding’, ‘goals’ and ‘thinking’ – none of which equate to the human manifestations of these concepts. A chess machine (to return to my favourite example) has no emotional concept of victory, of play, or of its opponent. As instructed, it simply searches for moves that maximise its likelihood of winning, where ‘winning’ is whatever the programmer has defined; they could just as easily instruct it to maximise its chances of losing, and the machine would have no contextual understanding of the difference. AI has bested us at chess, but not by humanly intelligent means. Garry Kasparov writes in Deep Thinking about how he developed an anti-machine style of chess, so different was machine play from human play (for example, the machine did not appear to have the human understanding of ‘territory’). Whilst researchers know all of this to be the case, these shorthand analogies are pervasive and affect both public and expert perceptions of the state of the field.
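To make that point concrete, here is a toy sketch of my own (nothing to do with Deep Blue, and using the far simpler game of Nim rather than chess): the machine’s ‘goal’ is nothing more than a sign applied to a score that the programmer chooses. Flip the sign and exactly the same search plays to lose, with no sense that anything meaningful has changed.

```python
# A toy minimax player for Nim: players alternately take 1-3 stones,
# and whoever takes the last stone wins. The "goal" is just the sign
# applied to the score below; the search itself is indifferent to it.

def best_move(stones, take_options=(1, 2, 3), sign=+1):
    """Pick the move that maximises sign * score for the player to move.

    score is +1 if that player eventually takes the last stone, -1 if not.
    sign=+1 means "play to win"; sign=-1 means "play to lose".
    """
    def value(n, to_move):
        # Score the position from our perspective; to_move is +1 when it
        # is our turn, -1 when it is the opponent's.
        if n == 0:
            return -to_move  # the *previous* player took the last stone
        outcomes = [value(n - k, -to_move) for k in take_options if k <= n]
        return max(outcomes) if to_move == +1 else min(outcomes)

    legal = [k for k in take_options if k <= stones]
    return max(legal, key=lambda k: sign * value(stones - k, -1))

print(best_move(10, sign=+1))  # 2 -- the winning move
print(best_move(10, sign=-1))  # 1 -- the same machinery, told to lose
```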
4. Intelligence is all in the brain
The assumption that intelligence is housed inextricably in the workings of the brain has led to the conclusion that human-level AI simply requires software to be scaled up to match the brain’s complexity. This would seem to ignore the importance of human bodies in our sensing and perception of the world – vital components of our intelligence. I think this also raises the question of mortality. It has been observed that human intelligence is distinct from artificial intelligence because it is limited (and characterised) by a finite lifespan, a finite brain and the need to communicate humankind-sustaining solutions. Researchers have sought to imitate our own intelligence because it is what we know – but with this fundamental difference in constraints, perhaps we haven’t even begun to grasp the nature or the foundations of AI.
These fallacies lead Mitchell to pose some open questions to the AI community, so that we might get a better sense of where we really are. How should we measure progress? How should we assess the difficulty of a problem for AI? What vocabulary should we use to describe the abilities of AI? How can we improve our understanding of what intelligence is?
I’ll leave you with these huge and excellent questions. Meanwhile I will turn my attention to my own mind-boggling quest: to determine which weekday is best for blog-posting.