It's the fly on the wall of our colleges, as our academics debate key issues shaping the future of society. Season one is all about artificial intelligence...
We live in ever-changing times, so information we can trust is more important than ever before, and it’s not always where our academics agree that’s most revealing, but where they disagree. Futuremakers is the fly on the wall for that debate.
You may already have read a hundred articles about artificial intelligence and the future of society, but these longer conversations – featuring four of our academics at the cutting edge of research and at the forefront of their profession – explore each topic in detail, from the automation of jobs to the inherent bias of algorithms.
You’ll find Futuremakers on:
- Apple Podcasts at: http://po.st/Futuremakers
- AudioBoom at: http://po.st/futuremakers
Episode one: How is the automation of jobs likely to progress?
In 2013 two Oxford academics published a paper titled 'The Future of Employment: How Susceptible Are Jobs to Computerisation?' estimating that 47% of US jobs were at risk of automation. Since then, numerous studies have emerged, arriving at very different conclusions. So where do these estimates diverge, and where do we think the automation of jobs might be heading? Join our host, philosopher Peter Millican, as he explores this topic with one of the authors of that paper, Professor Mike Osborne, as well as Dr Judy Stephenson, an expert on labour markets in pre-industrial England, and Professor David Clifton from our Department of Engineering Science.
Episode two: Are all algorithms biased?
Our lives are increasingly shaped by automated decision-making algorithms, but do these algorithms have in-built biases? If so, do we need to tackle them, and what could happen if we don’t? Join our host, philosopher Peter Millican, as he explores this topic with Dr Sandra Wachter, a lawyer and research fellow working in areas including data ethics, AI, robotics and internet regulation at the Oxford Internet Institute; Dr Helena Webb, a senior researcher in the Department of Computer Science; and Dr Brent Mittelstadt, a research fellow at the Oxford Internet Institute focusing on auditing, interpretability, and the ethical governance of complex algorithmic systems.
Episode three: Is the banking sector about to change for ever?
AI is already playing a role in the finance sector, from fraud detection, to algorithmic trading, to customer service, and many within the industry believe this role will develop rapidly within the next few years. So what does this mean both for the people who work in this sector and for the role banking and finance plays in society? Join our host, philosopher Peter Millican, as he explores this topic with Professor Stephen Roberts, Royal Academy of Engineering and Man Group Professor of Machine Learning; Professor Nir Vulkan, a leading authority on e-commerce and market design, and on applied research and teaching on hedge funds; and Jannes Klaas, author of 'Machine Learning for Finance: Data algorithms for the markets and deep learning from the ground up for financial experts and economists'.
Episode four: Is AI good for our health?
With AI algorithms now able to mine enormous databases and assimilate information far more quickly than humans can, we’re able to spot subtle effects in health data that might otherwise be easily overlooked. So how are these tools being developed and used? What does this mean for medical professionals and patients? And how do we decide whether these algorithms are making things better or worse? Join our host, philosopher Peter Millican, as he explores this topic with Alison Noble, Technikos Professor of Biomedical Engineering in the Department of Engineering Science; Paul Leeson, Professor of Cardiovascular Medicine at the University of Oxford and a Consultant Cardiologist at the John Radcliffe Hospital; and Jessica Morley, a Technology Advisor to the Department of Health, leading on policy relating to the Prime Minister's Artificial Intelligence Mission.
Episode five: Does AI have a gender?
As chatbots and virtual assistants become an ever-present part of our world, and algorithms increasingly support decision-making, people working in this field are asking questions about the bias and balance of power in AI. With the make-up of the teams designing technology still far from diverse, is this being reflected in how we humanise technology? Who are the people behind the design of algorithms, and are they reinforcing society’s prejudices through the systems they create? Join our host, philosopher Peter Millican, as he explores this topic with Gina Neff, Senior Research Fellow and Associate Professor at the Oxford Internet Institute; Carissa Véliz, a Research Fellow at the Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities; and Siân Brooke, a DPhil student at the Oxford Internet Institute focused on the construction of gendered identity on the pseudonymous web.
Episode six: From Ada Lovelace to Alan Turing – the birth of AI?
Many developments in science are achieved through people being able to ‘stand on the shoulders of giants’, and in the history of AI two giants stand out in particular: Ada Lovelace, who inspired visions of computer creativity, and Alan Turing, who conceived of machines that could do anything a human could do. So where do their stories, along with those of calculating engines, punched-card machines and cybernetics, fit into where artificial intelligence is today? Join our host, philosopher Peter Millican, as he explores this topic with Ursula Martin, Professor at the University of Edinburgh and a member of Oxford's Mathematical Institute; Andrew Hodges, Emeritus Fellow at Wadham College, who tutors for a wide range of courses in pure and applied mathematics; and Jacob Ward, a historian of science, technology, and modern Britain, and a Postdoctoral Researcher in the History of Computing.
Episode seven: Has AI changed the way we find the truth?
Around the world, automated bot accounts have enabled some government agencies and political parties to exploit online platforms: dispersing messages, using keywords to game algorithms, and discrediting legitimate information on a mass scale. Through this they can spread junk news and disinformation; exercise censorship and control; and undermine trust in the media, public institutions and science. But is this form of propaganda really new? What effect is it having on society? And is the worst yet to come as AI develops? Join our host, philosopher Peter Millican, as he explores this topic with Rasmus Nielsen, Director of Oxford’s Reuters Institute for the Study of Journalism; Vidya Narayanan, a post-doctoral researcher in Oxford’s Computational Propaganda Project; and Mimie Liotsiou, also a post-doctoral researcher on the Computational Propaganda Project, who works on online social influence.
Episode eight: What does AI mean for the future of humanity?
So far in the series we’ve heard that artificial intelligence is becoming ubiquitous and is already changing our lives in many ways, from how we search for and receive information, to how it is used to improve our health, to the nature of the ways we work. We’ve already taken a step into the past and explored the history of AI, but now it’s time to look forward. Many philosophers and writers over the centuries have discussed the difficult ethical choices that arise in our lives. As we hand some of these choices over to machines, are we confident they will reach conclusions that we can accept? Can, or should, a human always be in control of an artificial intelligence? Can we train automated systems to avoid catastrophic failures that humans might avoid instinctively? Could artificial intelligence present an extreme, or even an existential, threat to our future? Join our host, philosopher Peter Millican, as he explores this topic with Allan Dafoe, Director of the Centre for the Governance of AI at the Future of Humanity Institute; Mike Osborne, co-director of the Oxford Martin Programme on Technology and Employment, who joined us previously to discuss how AI might change how we work; and Jade Leung, Head of Partnerships and a researcher with the Centre for the Governance of AI.
Episode nine: Is China leading the way in AI?
In the penultimate episode of series one, we’re looking at the development of AI across the globe. China has set itself the challenge of becoming the world’s primary AI innovation centre by 2030, a move forecast to generate a 26% boost in GDP from AI-related benefits alone, and some claim it is already leading the way in many areas. But how realistic is this aim when compared to AI research and development across the rest of the world? And if China does come to dominate this field, what are the best- and worst-case scenarios for China itself, for AI technology, and for the rest of the planet? Join our host, philosopher Peter Millican, as he explores this topic with Mike Wooldridge, Head of Oxford’s Department of Computer Science; Xiaorong Ding, a post-doctoral researcher who has studied and worked at several of China’s leading universities and companies; and Sophie-Charlotte Fischer, a visiting researcher at the Future of Humanity Institute and a PhD candidate whose dissertation project focuses on the development of AI in China and the US.
Episode ten: Season Finale: AI selection box
In the final episode of our series, we’re looking back at the themes we’ve discussed so far, and forward into the likely development of AI. Professor Peter Millican will be joined by Professor Gil McVean, to further investigate how big data is transforming healthcare, by Dr Sandra Wachter, to discuss her recent work on the need for a legal framework around AI, and also by Professor Sir Nigel Shadbolt on where the field of artificial intelligence research has come from, and where it’s going. To conclude, Peter will be sharing some of his views on where humanity is heading with AI, when you’ll also hear from his final guest, Azeem Azhar, host of the Exponential View podcast.
Futuremakers will be taking a short break now, but we’ll be back with series two in the new year, when we’ll be taking on another of society’s grand challenges: building a sustainable future. Before then, we’ll also be publishing a special one-off episode on quantum computing and the global opportunities and risks it could present.
To read more about some of the key themes in this episode, you can find Sandra Wachter’s recent papers below.
- A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI
- Explaining Explanations in AI
- Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR