
How AI is shaping the future of politics

Students at a political rally

It can often seem like the concept of artificial intelligence (AI) means something different to everyone. But one thing is indisputable: the impact AI is having on society.

As automated technology has evolved and become more capable, it has revolutionised the way we live, interact with each other, and even vote, affecting society from the heights of government through to everyday behaviours. Better understanding the scope of AI’s capability is key to developing and, crucially, managing the technology in the future. In recent years, academic research has played a fundamental role in supporting this knowledge base and, as the first dedicated academic facility to study the impact that technology is having on politics, the Centre for Technology and Global Affairs at Oxford University is a leading-edge example of this knowledge exchange in action.

Abhishek Dasgupta, Research Fellow in Artificial Intelligence at the Centre for Technology and Global Affairs (the Centre), discusses how this pioneering interdisciplinary research facility is driving our understanding of AI’s societal impact, how technology is influencing political governance today, and how these changes are shaping our lives – for better or worse.

Why is a research facility like the Centre so important today?

It goes without saying that technology is playing a huge role in politics – particularly now. Take, for example, the way it interacts with politics through fake news, misinformation and algorithmic bias: issues which I think will only become more important over time.

The Centre is one of the first research facilities of its kind to look at the impact that these technology shifts are having on politics, from a political viewpoint. Other research comes at the subject from a more technical position, for example computer science, but we look at the bigger picture from a social sphere, informed by tech. Being situated in one of the leading political departments in the world, we are in a very privileged position to do this. The people in this department are political scientists who know the theory of the field, so working with them to use machine learning techniques and to look at the technologies themselves is very important.


It is also very important that this research is taking place at all – particularly research into the relationship between AI and how it can affect political institutions and democracy. These institutions have an obligation to ensure social stability, because without that you have war. This pressure is only going to intensify with AI.

What are the main technological challenges facing society?

There is a lot of attention on AI as a positive force – especially in science. But it is also bringing with it substantial change in how society will be governed. 

In the case of social media for example, you can already see how machines are sitting between us as individuals and other people, mediating our interactions. You see something on social media because an algorithm has recommended it to you.

This change will of course lead to social change, which in turn leads to political change. But there is always a danger that this political change will not be what you want. Through the rise of populism and job automation, we are already seeing this, and having to ask ourselves some important questions as a society. For example, how do you manage free speech and achieve objectivity and rational debate, if it is true that technology is making people more outspoken?

In the future, many existing jobs will become automated, increasing unemployment. So we have to ask ourselves: will it be possible to have a stable society with a 60% unemployment rate – and if so, how do we get there? We need to think about moving towards a basic income and, if that is not feasible, what other measures we can put in place to preserve stability in society.

Do you think this is achievable?

I personally think the freeing up of human resources is generally a good thing. Receiving a basic income, not having to work for money and being able to pursue creative outlets is the dream, isn’t it? But could it really happen? Hard to say. None of the basic income pilots to date have been successful, and in general it is too early to say what path AI will take.

How is the Centre working to help preserve this stability in society?

One of the key areas in the AI theme is looking at how misinformation spreads across networks and how it can influence politics through elections and referendums. 

One of our main areas of research is fake news and misinformation. We also have an ongoing project looking at elections and referendums, including, most recently, the Irish abortion referendum, and the degree to which bots play a role in these critical junctures.
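As a purely illustrative sketch of the kind of dynamics this research studies, a simple independent-cascade model shows how a single automated account could seed a message through a follower network. Everything here – the account names, the network structure and the re-share probability – is invented for illustration; it is not the Centre's model.

```python
import random

random.seed(1)

# Toy follower network: each account maps to the accounts that follow it.
# All names and edges are hypothetical.
followers = {
    "bot": ["a", "b", "c"],
    "a": ["d", "e"],
    "b": ["e", "f"],
    "c": [],
    "d": [],
    "e": ["g"],
    "f": [],
    "g": [],
}
SHARE_PROB = 0.5  # assumed chance that a follower re-shares what they see


def spread(seed_account):
    """Independent-cascade spread: each exposed follower re-shares once,
    with probability SHARE_PROB, to their own followers."""
    seen, frontier = {seed_account}, [seed_account]
    while frontier:
        account = frontier.pop()
        for follower in followers.get(account, []):
            if follower not in seen and random.random() < SHARE_PROB:
                seen.add(follower)
                frontier.append(follower)
    return seen


reached = spread("bot")
print(f"Accounts exposed to the seeded message: {sorted(reached)}")
```

Even this toy version makes the structural point: a handful of well-connected automated accounts can expose a large share of a network to a message with no further human input.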

The Centre also supports understanding of cyber security and AI-driven conflict, as well as looking at robotics and cryptocurrency as key individual research themes.

Cyber weapons, drones and automated weapons are changing the way war takes place and even who is on the frontline. We are trying to understand what happens next. What happens when you have robots fighting each other instead of people? Where do you draw the line?

What drives your interest in AI?

What interests me most about AI is that it is going to change everything – and to an extent already is.

Every new technology changes everything, from the internet to the telephone and even the newspaper. But there is a key difference with AI: it is the only technology where the point is artificial intelligence, not human intelligence. And that is why, from those developing the tech to research like ours, it is so important to monitor it closely.

AI is very powerful, so we need to make sure that this change is something that actually helps rather than hurts people.

Can you be more specific?

Making AI work for society requires a long-term view and answering the bigger questions, such as: can AI coexist with us? Is it going to be out of control? 

If you just look at its primary purpose, on one level it helps us to automate things and services, as well as enabling us to gather more intelligence on things that already exist. This intelligence is powered by data, gathered from a number of sources: internet-of-things devices, distributed sensor networks and, perhaps most interestingly, what I like to call free data collection. Thanks to social media, people now put tonnes of personal information online – voluntarily. This is information that once took teams of people a long time to find. Now it is easily available and driving this increased intelligence.

With more intelligence you have the ability to use this knowledge for good and for bad. There is nothing new there, but what is new is the scale at which you can do these things through AI – which is what makes it so powerful. You can run distributed AI algorithms in the cloud and replicate them across the world, achieving your stated goal en masse with little, if any, human input.

In terms of governments, AI can be used for surveillance, which has implications for privacy and civil liberties. It also has positive prospects for security, helping governments clamp down on crime and make society safer – but at what cost?

On the industry side, more intelligence can lead to more efficiency and make us more productive. But, of course, making machines more capable can potentially cause job displacement and unemployment in the short term. 

What is next for you?

In future, I would like to look at algorithmic bias, as I think this is a key political issue. There is a famous example in the US of the Los Angeles Police Department using predictive policing to control crime. However, the data used carried clear bias, leading to more arrests among minority groups. This kind of bias is very common in machine learning and comes from having a limited or skewed training dataset. I am keen to look at how we can address these issues from both a policy and a technical perspective.
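To make the mechanism concrete, here is a minimal simulation of how skewed data collection alone can mislead a naive predictive model. All numbers are invented: the two districts have an identical true crime rate, but one has historically been patrolled twice as heavily, so more of its incidents end up in the records the model trains on.

```python
import random

random.seed(0)

# Hypothetical setup: two districts with the SAME true crime rate,
# but district A has historically received twice the patrol coverage.
TRUE_CRIME_RATE = 0.10
patrol_share = {"A": 2.0, "B": 1.0}  # assumed historical patrol intensity


def simulate_records(district, n_days=1000):
    """Recorded incidents depend on both the true rate and how heavily
    the district was patrolled (more patrols -> more incidents observed)."""
    recorded = 0
    for _ in range(n_days):
        incident_occurred = random.random() < TRUE_CRIME_RATE
        incident_observed = random.random() < (0.4 * patrol_share[district])
        if incident_occurred and incident_observed:
            recorded += 1
    return recorded


records = {d: simulate_records(d) for d in ("A", "B")}

# A naive "predictive policing" rule: send patrols wherever records are highest.
predicted_hotspot = max(records, key=records.get)
print(records, "-> model sends more patrols to district", predicted_hotspot)
```

District A ends up with roughly twice the recorded incidents of district B despite identical underlying behaviour, so the model directs yet more patrols there – a feedback loop driven entirely by the biased training data, not by crime itself.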

I am also interested in legislative significance. Laws are passed and implemented all the time, and as part of a project at DPIR we are using a model based on mentions of legislation on the web to understand what makes a specific piece of legislation important.
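The DPIR project's actual model is not detailed in the interview, but the core idea – scoring legislation by how often it is mentioned across web documents – can be sketched as follows. The snippets and act names below are invented examples, not project data.

```python
import re
from collections import Counter

# Hypothetical corpus of web snippets (invented for illustration).
snippets = [
    "The Data Protection Act 2018 reshaped how firms handle personal data.",
    "Critics say the Data Protection Act 2018 does not go far enough.",
    "The Housing Act 2004 covers licensing of rented homes.",
]

# Pieces of legislation to score, matched by exact name.
acts = ["Data Protection Act 2018", "Housing Act 2004"]


def mention_counts(snippets, acts):
    """Count how many times each act is mentioned across the corpus."""
    counts = Counter()
    for text in snippets:
        for act in acts:
            counts[act] += len(re.findall(re.escape(act), text))
    return counts


scores = mention_counts(snippets, acts)
# Rank legislation by mention frequency as a crude proxy for importance.
ranking = [act for act, _ in scores.most_common()]
print(ranking)
```

A real system would need far more than exact string matching – resolving short titles, citations and paraphrases, and weighting sources by authority – but mention frequency is a natural first signal for legislative significance.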

To find out more about research projects conducted by the Centre for Technology & Global Affairs visit the department website: https://www.ctga.ox.ac.uk/home
