What is junk news? How does it spread deep and fast, and why does it thrive on polarisation? Oxford is one of the few centres in the world right now working hard to understand the fine mechanics, algorithms and players who are gaming social media networks to promote emotionally potent, divisive political propaganda.
Ruth Abrahams joined Dr Vidya Narayanan and Lisa-Maria Neudert, researchers at the Computational Propaganda Project, in their offices at the Oxford Internet Institute to tease out what new forces are at play and how we can mitigate their most harmful effects.
RA: How is junk news different from news that appears in tabloid press or on populist news channels?
VN: We have developed a list of five main criteria that qualify something as junk. A news source has to satisfy three of our five criteria to be junk; see the sketch after this list. It’s an iterative process and is now in its 20th month.
- Professionalism: Do these outlets refrain from providing clear information about real authors, editors, publishers and owners, and do they fail to publish corrections on debunked information?
- Style: Does the site use emotionally driven language with emotive expressions, hyperbole, misleading headlines, excessive capitalisation, unsafe generalisations and fallacies, moving images, graphic pictures and mobilising memes?
- Credibility: Does the site rely on false information and conspiracy theories, and do they report without consulting multiple sources or using fact-checking methods?
- Bias: Is there an ideological ‘skew’ to the site’s work and do they frequently present opinion as news?
- Counterfeit: Is the site mimicking real news organisations, counterfeiting fonts, branding and stylistic content strategies?
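To make the decision rule concrete, here is a minimal illustrative sketch in Python of the three-of-five threshold. The field names, function name and example values are hypothetical, invented for this sketch; the criteria themselves are judgement calls made by researchers, not automated checks.

```python
# Hypothetical sketch of the three-of-five junk news decision rule described above.
# The five booleans mirror the criteria in the list; assessing them is a human
# judgement, not something this code automates.
from dataclasses import dataclass

@dataclass
class SourceAssessment:
    unprofessional: bool  # no real authors/editors/owners, no corrections
    junk_style: bool      # emotive language, hyperbole, misleading headlines
    not_credible: bool    # false information, conspiracy, no fact-checking
    biased: bool          # ideological skew, opinion presented as news
    counterfeit: bool     # mimics real news outlets' branding and style

def is_junk(source: SourceAssessment, threshold: int = 3) -> bool:
    """A source counts as junk if it satisfies at least `threshold` of the five criteria."""
    score = sum([source.unprofessional, source.junk_style,
                 source.not_credible, source.biased, source.counterfeit])
    return score >= threshold

# Example: a source failing on style, credibility and bias is labelled junk.
print(is_junk(SourceAssessment(False, True, True, True, False)))  # True
```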
LN: A tabloid will be professional in the sense that it will have clear information about the authors, it will be registered somewhere, it will print corrections if it gets something seriously wrong, and it has a systematic organisation behind it. With junk news, a lot of what we’re seeing is user-generated. If someone posts something highly defamatory in junk news, there’s a good chance you will never find the person behind it.
VN: The tabloid press has a lot of the sex-and-crime stuff, whereas junk news is about mainstream political issues, presented in a misleading, polarising way.
RA: How are the prioritisation techniques used by online marketers and propagandists different to traditional marketing methods – for example, targeting certain societal groups via TV, radio or print?
LN: Actually, not that different at all. For example, a lot of people think that micro-targeting is completely new, but it is well established in political campaigning. The difference now is that we have vast amounts of data and very powerful algorithms to model it.
We like to say that propaganda has existed for a very long time, since the ancient Greeks, but it’s now possible to connect with people to push propaganda out at unprecedented scale because of some of these advanced AI techniques.
RA: So it’s the scale and the accuracy that separates it from what’s gone before?
VN: Right. And because so much public data is available online, it’s possible to target people much more accurately than before. There are algorithms running in the background that are constantly building a profile of your preferences. ‘Bad actors’ then lift these techniques from the consumer industry.
RA: Is there any regulation around which companies can buy data, and what data they are able to buy?
LN: Yes, there is regulation. The biggest piece of regulation right now is GDPR. It is unparalleled. We do think that GDPR has a couple of issues, but it is the biggest piece of regulation trying to protect user data. Different countries have different rules about what you can do with data. For example, it’s possible to combine different datasets and triangulate information to learn something new about an individual. That sort of data triangulation is illegal in some countries but legal in others.
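As a rough illustration of the triangulation being described, the sketch below joins two datasets on shared quasi-identifiers to surface a new fact about an individual. The datasets, field names and records are all invented for this example; it is not a description of any real data broker’s practice.

```python
# Hypothetical illustration of data triangulation: two separate datasets,
# joined on shared quasi-identifiers (postcode + birth year), together reveal
# more about a person than either does alone. All records are invented.
marketing_data = [
    {"postcode": "OX1 1AA", "birth_year": 1985, "shopping_segment": "luxury"},
]
petition_data = [
    {"postcode": "OX1 1AA", "birth_year": 1985, "petition_signed": "Issue X"},
]

def triangulate(a, b, keys=("postcode", "birth_year")):
    """Join two datasets on quasi-identifiers; each match links previously separate facts."""
    matches = []
    for record_a in a:
        for record_b in b:
            if all(record_a[k] == record_b[k] for k in keys):
                matches.append({**record_a, **record_b})
    return matches

print(triangulate(marketing_data, petition_data))
# [{'postcode': 'OX1 1AA', 'birth_year': 1985,
#   'shopping_segment': 'luxury', 'petition_signed': 'Issue X'}]
```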
RA: Do you think we’ve reached a moment where the worst has happened in terms of public naivety around its own data and how content is prioritised and targeted – are people now more savvy about the techniques at play?
VN: I think so. Awareness has increased, but it also depends on what kind of communities we’re talking about. There are vulnerable communities in different parts of the world where media literacy is not as high as it is, perhaps, in the UK. Coupled with a lack of technical awareness, that can still be a huge problem: these communities might not be aware of the risks of misinformation. Audiences who are newly online are encouraged to believe that this shiny new technology can’t lie to them. They’re predisposed to believe what they see on social media platforms.
LN: The countries where we see misinformation having the biggest effects are countries such as Sri Lanka and Myanmar, where calls to violence – really calls to genocide – are spreading over WhatsApp and Facebook. We see people going out on the streets because of misinformation they have read on WhatsApp, and it doesn’t look anything like professionally produced news. There’s also a different culture in some countries, where you don’t rely as much on professionally produced news but more on your social network and the people you trust. We really see this playing out right now. So while citizens in the EU and the US may be becoming more aware, and the risks here come more from the regulatory side, it’s really terrible in other places right now.
VN: I think it’s a big issue even in India, where WhatsApp is a major tool for political propaganda. People tend to trust what they see in these closed networks rather than what liberal media or professional news organisations put out.
RA: How are targeted, strategic, manipulative political messages different from past ones in terms of content?
LN: The content is designed to manipulate a citizen politically, so it can be anything from fake news to highly emotional or conspiratorial content. But what is also different are the methods being applied, specifically automation. Automation – the use of bots and algorithms – helps you spread a piece of information much faster and much deeper; you have an entirely different scale. A while ago, if you wanted to reach 40 million people you would have had to spend a lot of money. Now a junk news story can be shared that widely for much less.
RA: It’s a lot cheaper?
VN: Yes, it’s a lot cheaper. These algorithms, deployed with intent, come together with the affordances that platforms provide – perhaps unintentionally – to promote engagement. Anything with emotional hooks tends to get shared if it has that shock factor. There’s a real intersection between the intent to manipulate and what social media is designed to do. We need a lot of answers and we’re just at the beginning of our journey.
RA: Why does polarisation make it difficult to correct falsehoods and inaccuracies?
VN: I think it comes back to psychology. You tend to seek out news that confirms your own beliefs, which is why there can be deliberate attempts to stoke your prejudices against a certain community. You get really attached to your world view and quite angry when it’s challenged, which might be why polarising content works so well on social media: it’s emotionally charged and deliberately tells you that you are completely right or completely wrong. There’s no room for consensus. It’s either black or white.
RA: So the digital space provides a place where this polarisation is more potent, but also one that people are exploiting?
VN: You’ve got players who are exploiting it to serve their own ends.
RA: What is the best way forward to mitigate this?
VN: I think media literacy is key. Technological awareness is key. You have to constantly engage with people and not cling too closely to your own philosophy. When we put out our reports we do get to interact with a lot of alt-right media, and in some cases it’s important to have a dialogue with them. I think it’s very important not to judge people for their beliefs. That generates a lot of anger and has led to the polarisation we see today on social media.
LN: I think it’s important to understand that what’s at the top of your news feed is not necessarily there because it’s the most important thing. It’s there because Facebook thinks it’s relevant and it’s going to create engagement, which basically turns into money. Those engagement metrics are easy to hack, whether with bots, with fake accounts that engage with a piece of content, or with search engine optimisation. Overall, the way the information ecosystem works right now is maximising for attention, which does not benefit the overall quality of information. That is the big disconnect, and if we solve that, then the rest may potentially get better. If we allow our algorithms to promote lower-quality content, that’s a downward spiral.
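To illustrate the dynamic being described, here is a minimal sketch of an engagement-maximising feed ranker. The weights, field names and example posts are invented for this sketch, not any platform’s actual ranking formula; the point is only that when the objective is engagement, quality never enters the calculation, and the engagement signals themselves can be inflated by bots or fake accounts.

```python
# Hypothetical sketch of an engagement-maximising feed ranker: items are ordered
# purely by predicted engagement, so editorial quality never enters the objective.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    comments: int
    quality: float  # editorial quality; note the ranker ignores it entirely

def engagement_score(p: Post) -> float:
    # Weights are invented; what matters is that only engagement signals count,
    # and likes/shares/comments can all be inflated by bots and fake accounts.
    return 1.0 * p.likes + 3.0 * p.shares + 2.0 * p.comments

feed = [
    Post("Careful investigative report", likes=120, shares=10, comments=15, quality=0.9),
    Post("Outrage-bait junk story", likes=900, shares=400, comments=350, quality=0.1),
]

# The junk story takes the top slot because it maximises engagement, not quality.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(post.title)
```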