'If privacy is a fundamental human right, and therefore a legal requirement, should people have to pay for it?'


Dr Stephanie Hare, a researcher and broadcaster focusing on technology and politics, recently spoke at Oxford's Centre for Technology and Global Affairs, in the Department of Politics and International Relations, on the importance of building ethics into AI development. In an exclusive interview, Dr Hare goes beyond the clickbait headlines to discuss some of the most critical questions in technology, including how data harvesting and the use of biometrics are shaping the world today.

Is data harvesting out of control?

Was data harvesting ever under control? We are in the early stages of losing our naiveté and innocence about data, but I don’t think it is a case of us ‘losing control’ of it, or needing to rein it in. I don’t think we have ever been in control of data sharing.

Isn’t sharing an individual’s details without their consent an infringement of privacy?

The default mode in our society has not been privacy. It has been a trade-off between companies and users, where in return for using companies’ services, we give them our data.

We don’t have to pay to use Google; instead, we see incredibly targeted advertising as we browse.

But we are getting to a point where this targeted advertising verges on oppressive surveillance, and we as consumers and citizens want greater privacy and data protection. If we want it, we may have to pay for it.

This isn’t as shocking as it sounds, though. People got accustomed to getting news content for free, yet in recent years the Financial Times, the Wall Street Journal and Bloomberg have started charging for theirs. This has created its own problems: not everybody can afford subscriptions to every quality news outlet, which is where taxpayer-funded outlets like the BBC come in. There are also news websites that you don’t have to pay to access, but they rarely produce content of the same quality.

So just as we have to ‘pay to play’ for quality news, when it comes to data, it looks increasingly like we are going to have to start paying for privacy. Yet this raises an even bigger and much more problematic question: if privacy is a fundamental human right, and therefore a legal requirement, should people have to pay for it?

This would completely upend the business model of so many companies that currently profit from people’s data.

It has been said that we have lost control of data harvesting, but you can’t lose control of something that you never had a handle on to begin with.

Stephanie Hare

Is there a solution to data harvesting?

Right now we do not have a uniform approach to upholding data protection globally. There is only the EU’s General Data Protection Regulation (GDPR), which was introduced this year, though there are increasing calls for similar legislation in the United States.

In the short term, data localisation is an option: people’s data is processed and stored in their country of origin. Countries such as China and Russia are insisting on data localisation, and as a result we are getting something called the ‘splinternet’, a fragmentation of the internet along national borders.
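To make the idea concrete, here is a minimal Python sketch of what a localisation rule amounts to in practice. It is purely illustrative: the region names and the store_record function are invented for this example and do not reflect any real system.

```python
# A toy illustration of a data-localisation rule. Everything here is
# hypothetical (the region names and store_record are invented for this
# sketch); it is not any real system's API.

APPROVED_REGIONS = {
    "CN": "cn-datacentre",  # Chinese users' data stays in China
    "RU": "ru-datacentre",  # Russian users' data stays in Russia
    "FR": "eu-datacentre",  # EU member states share an EU region
}

def store_record(user_country: str, record: dict) -> str:
    """Accept a write only if an approved in-country region exists."""
    region = APPROVED_REGIONS.get(user_country)
    if region is None:
        # No approved region: a localisation regime would block the write
        # rather than send the data across a border.
        raise ValueError(f"no approved storage region for {user_country!r}")
    return f"record {record['id']} stored in {region}"

print(store_record("CN", {"id": 42}))  # -> record 42 stored in cn-datacentre
```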

But data localisation is not an ideal solution, particularly for multinational companies or for people who have to move around a lot, using their phones the whole time. What are their data rights? And who do they even contact if they are concerned?

Why is the ethics of AI important?

Ethics is increasingly a focus not just for AI and technology, but for business in general. Take the ethics of funding: Saudi Arabia, for example, funds a number of US technology companies. Is it ethical to receive financial backing from a political regime that murders journalists who speak out against it, such as the Washington Post’s Jamal Khashoggi?

Meanwhile, supply chain ethics and software ethics raise important questions around data gathering. How is data used? How is it collected? What transparency is there around data and, more importantly, what rights do I, as a consumer, have to know what information is held about me? To amend it, correct it, or insist that something is deleted? Even simply to know who else has access to my data: is it being sold on or shared with third parties?

We don’t have good answers to any of these questions yet and we need them. Complacency is not an option. We need to build in ethical protections from the very beginning.

If technology is biased against people of colour and women, those people are citizens and taxpayers too, and they have a right not to be targeted by racist, sexist technology.

Stephanie Hare

What can be done to make the use of AI in business fairer and more transparent?

For starters, updating legislation and empowering regulators would help. 

Then there’s the media, which has the power to elevate the discussion and share research in a way that raises awareness. As a society we need to have a much bigger and better conversation about AI.

Right now the discourse seems stuck at the extremes, ranging from visions of utopia, such as ‘AI is going to take all of our jobs, so we will all live on universal basic income and exist to fulfil our human potential’, to darker scenarios such as ‘AI is going to take all of our jobs and we are all going to be homeless and starving’ or ‘AI is going to empower robots that might kill us all.’

That doesn’t mean that these things shouldn’t be discussed – they absolutely should! But there is so much more to it. 

Aside from the clickbait extremes, what are the concerns that people need to be aware of?

Biased algorithms are a huge issue and we need to address their use in areas such as recruitment. 
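One way auditors begin to address this is by comparing a model’s selection rates across demographic groups, sometimes against the US ‘four-fifths’ rule of thumb. The Python sketch below is purely illustrative, with made-up numbers rather than outputs from any real recruitment system.

```python
# Illustrative sketch: comparing a hiring model's selection rates across
# two groups (the "disparate impact" ratio). All numbers are made up.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of candidates the model marked as 'hire' (1 = hire)."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs for two groups of applicants.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # selection rate 0.25

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")

# A common (US) rule of thumb flags ratios below 0.8 ("four-fifths rule")
# as potential evidence of adverse impact.
if ratio < 0.8:
    print("Selection rates differ enough to warrant investigation.")
```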

Also, what role do we want biometrics, and facial recognition technology in particular, to play in policing and national security?

If technology is biased against people of colour and women, those people are citizens and taxpayers too, and they have a right not to be targeted by racist, sexist technology. And we all have a right to intelligibility, transparency and accountability around how that technology is used.

How could legislation around biometrics be more effective?

As I demonstrated in an op-ed for the Financial Times, the Home Office is not currently complying with UK law on the retention of people’s biometric data in national databases. People who are innocent of a charge, or who have been acquitted, can still have their data retained. And children in Northern Ireland and Scotland do not have the same protections over their biometrics as children in England and Wales, something Scotland is not addressing in its proposed biometrics legislation.

When we think about biometrics being abused, we tend to think of a surveillance state like China, where they are used to track the Uighurs, around a million of whom are thought to be incarcerated. Few people would imagine that abuses are happening in liberal democracies, yet Anne-Marie Slaughter and I point to some areas of concern about the use of biometrics in the United States, the United Kingdom and India in our op-ed for Project Syndicate.

When I interviewed Professor Paul Wiles, the UK Biometrics Commissioner, for the BBC World Service this summer, he said that we in the United Kingdom are so distracted by Brexit that, until it is done, other issues such as biometrics legislation are just not getting a look-in. Meanwhile, UK police forces are rolling out, testing and using biometric technology, and we are not really having a conversation in our society about the risks of that.

Of all the potential issues with AI, which concerns you most?

Who has a seat at the table for the global conversation on AI? Philosophers, engineers and ethicists, sure. But what about the public – the people that this tech is going to be used on?

What are the biggest public misconceptions of AI?

First, that AI is going to fix everything. Second, that it is going to kill us all. Third, that the path we are on cannot be changed. And finally, that only engineers can understand it. Every single one of those statements is questionable, so let’s question them.

Where do you think AI has the most potential for good?

Improving healthcare and fighting climate change, hopefully. Possibly in keeping us all safer from terrorism: we have tonnes of video footage in areas of national security importance, such as airports, railway stations and ports, and no human being could monitor it all, but technology could. It could also help in securing the food chain and providing integrity in the financial system, busting fraud, et cetera.

What needs to change to rebalance the AI conversation?

Researchers need to share their research beyond their institutions and the academic journals that, all too often, are only read by other academics. They need to publish regularly in the national newspapers, social media and podcasts that the general population is consuming, and they need to work with companies and governments to bring research into practice in thoughtful, measurable, transparent ways.

We’ve got a challenge to raise the next generations to be digitally literate and savvy about things like fake news, deepfakes and the data trails and digital footprints that they are leaving. They are pretty savvy already, but there is room for us all to improve. This means we need to help parents and teachers, too.

And we need to help lawmakers, who are overwhelmed with all sorts of issues and would no doubt welcome support on AI. 

Are there any other women in the field who inspire you?

There are a lot of incredible women in technology.

What can be done to encourage more women into the field?

We need to help parents and teachers to understand that technology is an incredibly exciting and rewarding place to work for any human being. If you want to have a career where you can really make a difference, grow your skillset, work on meaningful, high-impact problems and get paid very well, technology is where to go. And governments and companies have room to improve what kind of culture they create and uphold so that girls grow up wanting to work in technology and women want to stay in the field and contribute to it fully.

Much is said about China, the UK and the US leading the international AI race. Would you agree?

It will be very interesting to see what happens in the next ten to 20 years, and whether the group of countries we all think of as leading the race today is still the same.

China has invested a lot of money in AI as a national priority, but it is by no means the only country to watch.

Israel is a fascinating country to watch on anything to do with technology, particularly cybersecurity and AI. It is looking at AI in ways that don’t necessarily grab the headlines but will have long-term impact, and both its startup community and the calibre of research coming out of it are striking.

A lot of Silicon Valley companies have opened AI labs in France. Obviously, with Brexit, the EU needs a new contender in the race, and France is already positioning itself to take that mantle.