When you think about artificial intelligence (AI) you would be forgiven for picturing armies of robots trying to take over the world, while Will Smith fights in earnest to stop them. Truth be told, most people’s perceptions of AI and machine learning are fuelled by Hollywood fantasy and fear. Even the field’s name is often misunderstood, and as a result very little of what we think we know about AI is rooted in reality.
To support public understanding of AI, its value and its limitations, leading contributors to Oxford’s AI research programmes have helped bust the biggest misconceptions around the field:
Professor Mike Wooldridge
Head of the Department of Computer Science at Oxford. A leading authority on machine learning, he is also the author of the Ladybird Expert Guide to Artificial Intelligence.
‘I think the biggest misconception is that people imagine advances in narrow AI problems like playing games such as Go must mean that we are closer to solving the problem of general AI – that is, the problem of building AI systems that have broad general intelligence in the way humans do.
‘There have been lots of important advances in AI recently, but these are in narrow problems like recognising faces and playing games. These advances are genuinely impressive but don’t point the way to general AI. I think that is a long way off, and we really don’t know how to get there.
‘Another misconception is that AI techniques are about modelling the brain. That isn’t what AI researchers do. Even techniques like neural networks, which take inspiration from the microstructure of the brain, are only loosely related to brain structure. So, artificial intelligence today is not about modelling the brain.’
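As a rough illustration of how loose that relationship is, the sketch below computes everything a single artificial ‘neuron’ does: a weighted sum of its inputs passed through a squashing function – ordinary arithmetic, with no pretence of simulating biology. (The weights and inputs here are arbitrary illustrative values, not taken from any real system.)

```python
import numpy as np

def sigmoid(z):
    # Squashing function: maps any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    # The whole "neuron": a weighted sum of the inputs plus a bias term,
    # passed through the squashing function. No biology involved.
    return sigmoid(np.dot(inputs, weights) + bias)

x = np.array([0.5, -1.0, 2.0])   # arbitrary example inputs
w = np.array([0.8, 0.2, -0.5])   # arbitrary, untrained weights
print(neuron(x, w, bias=0.1))    # a single number between 0 and 1
```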
Professor Marina Jirotka
Professor of Human-Centred Computing at Oxford, Associate Director of the Oxford e-Research Centre, and Associate Researcher at the Oxford Internet Institute.
‘People’s perceptions are still fed by Hollywood and the media, and they create a false impression of what is happening, which is misleading for the public – particularly the notion that robots are going to take over the world and violently overthrow us. I try not to think about the scary job loss stories: in reality a robot takeover would be more about the demise of human exchange and interaction, such as the use of automated customer services – which is already happening.
‘People want to know how things apply to them and how something is going to affect them. We need to bring machine learning to life for people so that the reality becomes more interesting than the Hollywood hype.
‘AI and machine learning have the potential to address a number of societal concerns, including physical health, mental health and the environment. The medical improvements in particular could be phenomenal and would represent a major step-change for society. However, these advances have to be tempered with keeping human society functioning, and the opportunities and challenges need to be made clear to the general public. If there are going to be job losses to automation, what preventative measures and safeguards can be put in place so that the people affected still have jobs? These questions are already under scrutiny, but in the future I anticipate they will be addressed at a greater pace.’
Professor Michael Osborne
Dyson Associate Professor of Machine Learning at Oxford and Co-Director of the Oxford Martin Programme on Technology and Employment.
‘A common misconception about machine learning is that it provides only “black-box” algorithms – that is, their workings are inherently incomprehensible to human overseers. There’s some truth to this belief. Cutting-edge research has been accused of trading rigour for glamour: current machine learning systems have often been obsessively tuned to improve performance in practice, with the understanding of the underlying principles lagging behind. Such systems do indeed have blind spots, failure modes, and other behaviours that are not well understood. However, such problems do not plague all machine learning systems, nor are they fundamental: many researchers are developing approaches to provide interpretable machine learning.
‘It’s worth remembering that the alternatives also have problems. Human decision-makers are often subject to problematic heuristics and biases, and we are not always able to explain why we made the decisions we did. Alternative, simpler algorithms may be better understood – however, this simplicity often means compromising on accuracy and performance. A sacrifice of predictive power has consequences for interpretability: if your model (or a human) can’t accurately predict the real world, you can’t use it to interpret anything about the real world. Rather than abandoning machine learning for the perceived sin of uninterpretability, we should be working to provide such interpretability for the thrilling advances made within the field.’
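As a minimal sketch of the trade-off Osborne describes, the example below fits one of the ‘simpler algorithms’: an ordinary least-squares model whose learned weights a human overseer can read off directly. The data are synthetic and purely illustrative; the point is that every coefficient is inspectable, at the cost that so simple a model would predict poorly if the real relationship were highly non-linear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, illustrative data: 200 samples with 3 features, generated
# from known weights plus a little noise.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, 0.0, -2.0])
y = X @ true_w + rng.normal(scale=0.1, size=200)

# Fit by ordinary least squares. The vector w is the model's entire
# "explanation": one directly readable coefficient per feature.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # close to [1.5, 0.0, -2.0], e.g. feature 2 has ~no effect
```

If the underlying relationship were more complex, these readable coefficients would stop matching reality: exactly the loss of predictive power, and hence of trustworthy interpretation, described above.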
Dr Sandra Wachter
Research Fellow in Data Ethics at the Oxford Internet Institute.
‘A lot of people have sci-fi fantasy-led ideas about AI and machine learning, but it is very much here already. Many decisions that were once made by people are now made by algorithms, and the problems that come with this are not always made clear to individuals.
‘AI and machine learning are already used widely in a number of sectors, from finance and loans to supporting life-changing criminal justice decisions such as whether or not someone should be granted parole. This is a massive step-change for society. Because AI is embedded so seamlessly, we are often not even aware of it.
‘There is a real conflict in the discussion at the moment. On one side are technology developers who want everyone to have confidence in AI and machine learning, and who say that any suggestions of linked societal problems are overblown and that people shouldn’t worry. On the other side, some say the exact opposite, warning of significant social disruption and painting a very negative picture. I think the truth is somewhere in the middle. The world is currently facing a lot of problems, such as discrimination and job losses, and how we prepare for the future and how we should use AI are important questions that need to be answered if we are to address those problems. We need to find a middle ground between the utopian and dystopian viewpoints.’
Professor Marta Kwiatkowska
Researcher at Oxford’s Department of Computer Science.
‘The term AI implies intelligence, with all the connotations of this word – namely the “thinking machine”. I find that too often conventional software is presented as AI. I also find that there is a lot of hype about AI, inevitably resulting from excitement about seeing it work well in applications – think, for example, about Alexa in comparison with early efforts in speech recognition. However, these early successes are often presented in the media without questioning how generic these solutions are and whether the concepts are sufficiently well developed and understood to use them “in the wild”.
‘In the case of recognition systems like Alexa, can the technology really reason (think), understand context and perceive social subtleties? In reality, many of the AI developments constitute very early progress that is presented as huge promise, but that promise depends on a lot of underpinning work that has yet to be done.’
Dr Abishek Dasgupta
Research Fellow in AI at the Centre for Technology and Global Affairs.
‘I think it is a mistake to assign AI any single value in and of itself: good, bad, evil and so on. It is nothing like that. Like human intelligence, AI can be put to limitless uses, so it cannot carry that kind of inherent value. It is not the AI system itself that is the issue, but the way it is implemented and the ethics and frameworks around it, which are currently lacking.
‘The other big misconception is comparing AI to any other technology before it. People often say “the internet is changing our brains”, which it is, but people have said and feared the same of new technology for hundreds of years, from the telephone right back to the invention of writing, which people once believed would stop us from remembering the things we put to paper. But society is still standing.
‘AI is very different because it is artificial. Once it crosses a certain cognitive threshold, and assuming the ethical and algorithmic bias issues are solved, there is no need for a person to be involved at all. Even something like data entry, which used to be carried out by a team of 50 people working full-time, can now be achieved by one person over a weekend.
‘AI is only really comparable to the industrial revolution, which, at the time, of course changed everything. People moved from rural areas to cities to work in industry, and profits were invested back into those cities, so its impact was socially positive. Now, that investment is going into machines and not people, so what are people going to do?’