Are algorithms biased?

Will AI reinforce existing human biases? (Credit: Shutterstock)

Our online lives are being shaped by processes of automated decision-making.

News feeds, search engine results and product recommendations increasingly rely on algorithms to filter content and personalise what we see when we are browsing. For instance, when we enter a query into an online search engine, algorithmic processes determine the results we see and the order of those results. Similarly, when we use Facebook and other social networks, personalisation algorithms determine the adverts and posts we see in our individual accounts.

These algorithmic processes can be immensely useful. They help us cut through the mountains of information available online and direct us to those bits that are most relevant to us. However, in recent years genuine concerns have arisen that the way these algorithms operate online can lead to unfavourable, unfair or even discriminatory outcomes. A number of public controversies have occurred, including:

  • The development of an automated system to select trending news items in users’ Facebook feeds in 2016. Although this was intended to overcome human bias in the selection of news items, it was soon found that the automated system allowed false news items to be promoted alongside items containing offensive terms and images.
  • The potential for personalisation mechanisms to place users within ‘filter bubbles’, in which they are only shown content they are already likely to like and agree with and are not challenged to consider alternative viewpoints. Following the Brexit referendum and US presidential election in 2016, there has been a great deal of debate over the extent to which these processes might limit critical thinking and vital political discussion.
  • Complaints that the results of searches put into Google Images and other search engines reinforce societal prejudices – for instance by depicting black and white people differently and by portraying stereotyped gender roles. This problem can occur if the particular algorithms involved are not designed to be neutral or if the datasets they are trained on are not neutral, as the sketch below illustrates.
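To make that last point concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from any real search engine or from the UnBias project; the data and names are invented for illustration. The point is only that a ranking rule which learns purely from past behaviour will reproduce any skew already present in that behaviour, even though the rule itself says nothing about gender.

    # Toy illustration (hypothetical data): a ranking rule that orders results
    # by how often each type was clicked in the past.
    from collections import Counter

    # Hypothetical click history for one query: 90 of the last 100 clicks went
    # to photos of men, reflecting historical bias in the recorded data.
    past_clicks = ["photo_of_man"] * 90 + ["photo_of_woman"] * 10

    def rank_results(clicks):
        # Order result types from most-clicked to least-clicked.
        counts = Counter(clicks)
        return [item for item, _ in counts.most_common()]

    print(rank_results(past_clicks))
    # Output: ['photo_of_man', 'photo_of_woman']
    # The rule contains no explicit notion of gender, yet the skew in the
    # training data becomes the skew users see at the top of the results.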

A further problem that exacerbates these concerns is a lack of transparency. These algorithms are typically considered commercially sensitive and are therefore not made available for open inspection. Even if they were, they are technically complex and difficult for most of us to understand. How fair is it that our browsing behaviours are shaped by processes we know so little about? Is it possible to design algorithms that are both fair and visible to all?

The ongoing multi-university research project ‘UnBias’ recognises that the contemporary prevalence of algorithms online is an ethical issue of societal concern. We ask key questions such as: how can we be sure that algorithms are operating in our best interests? Are algorithms ever ‘neutral’? And how can we judge the trustworthiness and fairness of systems that heavily rely on algorithms?

In order to answer these questions, we combine approaches from the social and computer sciences and engage with a wide range of stakeholders, including industry professionals, policy makers, educators, NGOs and online users. We carry out activities to support user understanding of online environments, raise awareness among online providers about the concerns and rights of internet users, and generate debate about the ‘fair’ operation of algorithms in modern life.

Our project will produce policy recommendations, educational materials and a ‘fairness toolkit’ to promote public civic dialogue about how algorithms shape online experiences and how issues of online unfairness might be addressed.

The EPSRC-funded UnBias project is a collaboration between the universities of Oxford, Nottingham and Edinburgh.

Professor Marina Jirotka, Professor of Human-Centred Computing, Department of Computer Science

Dr Helena Webb, Senior Researcher, Department of Computer Science