
2021 has gotten off to a wild start, to say the absolute least. With terror at the United States Capitol building and rising polarisation and tensions worldwide, some people are returning to an old theory to explain the state of the world.
Are personalised search results ensuring that people only read political opinions they agree with to maximise engagement, resulting in entrenched views that are constantly validated and rarely opposed?
Are Facebook and Twitter algorithms solely responsible for creating polarised groups that are ripping apart democracy?
The short answer is: No, that isn’t happening, and explaining polarisation isn’t that easy – but chances are you’ve heard that it is. It’s an incredibly popular theory called ‘filter bubbling’ that has persisted in public debate for nearly a decade without any reliable scientific backing. It’s even been supported by Bill Gates, who voiced his belief in filter bubbling as recently as 2017, and 2020’s smash hit documentary The Social Dilemma covers it as fact.
The more it’s been researched, though, the more thoroughly the simplistic version of this concept has been disproven.
Which isn’t to say it isn’t worth talking about. Debunking filter bubbling leads us to discussing what part technology is playing in our increasingly polarised and radicalised society. Read on to have one of the decade’s simplest and most frightening theories about technology’s effect on democracy myth-busted and then replaced with an equally frightening, more complex perspective.
In 2010 the political activist and entrepreneur Eli Pariser coined the term ‘filter bubble’. His 2011 presentation on the phenomenon went viral and his subsequent book on the topic became a New York Times bestseller.
Pariser broke down what he saw as a fundamental, and potentially hugely destructive, flaw in personalisation. To maximise engagement, sites will personalise for you wherever possible. Amazon, for example, gives you personalised recommendations. You’re more likely to return and engage if you’re greeted with what you like. To make those predictions, Amazon tracks what you and other people have bought and looked at and analyses that to find patterns (“pattern matching”) – sometimes pattern matching is as explicit as “Customers who viewed this item also viewed...”, sometimes it’s more subtle, like what shows up first when you go on Amazon’s homepage.
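To make the idea of pattern matching concrete, here’s a minimal sketch of how a “customers who viewed this item also viewed...” style recommendation can be built from co-viewing counts. The item names and data are made up for illustration – this isn’t Amazon’s actual system.

```python
from collections import Counter
from itertools import combinations

# Illustrative viewing histories -- not real data.
view_histories = [
    ["kettle", "toaster", "mug"],
    ["kettle", "mug"],
    ["toaster", "blender"],
    ["kettle", "toaster"],
]

# Count how often each pair of items is viewed by the same person.
co_views = Counter()
for history in view_histories:
    for a, b in combinations(set(history), 2):
        co_views[(a, b)] += 1
        co_views[(b, a)] += 1

def also_viewed(item, top_n=3):
    """Items most often viewed alongside `item`, most frequent first."""
    scores = Counter({other: n for (first, other), n in co_views.items() if first == item})
    return [other for other, _ in scores.most_common(top_n)]

print(also_viewed("kettle"))  # e.g. ['toaster', 'mug']
```

The same principle – “people like you engaged with this, so you probably will too” – is what gets extended from products to search results and news.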
What happens when you extend that same logic to a search engine like Google? Well, Pariser theorised that you run into a major problem:
He took an educated guess that people treat searches for politics and news with the same attitude that they do products – meaning that you’re more likely to click on and read what you already agree with because it makes you feel good. So, for example, let’s say you don’t believe in the climate change crisis (we’re asking a lot from you, we know): If Google’s search results confirm your assumptions by showing you climate change denial articles when you search for ‘climate change’, you’re more likely to come back. That’s better for Google, but worse for society.
Personalised search results, Pariser said, make financial sense by maximising engagement but will confirm narrower and narrower world-views, cocooning people in comfortable, easy bubbles of their own views.
The same could be happening on social media. Say you get a majority of your news from social media: If you’re more likely to engage with news you agree with or that makes you feel good, and the social media algorithms as a result only show you that news to keep you engaged, the filter bubble loop takes hold there, too, in much the same way.
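To see how such a loop could feed itself, here’s a toy simulation of the mechanism Pariser described – a hypothetical feed that ranks stories purely by how much you’ve engaged with each viewpoint before. The scoring rule and numbers are invented for illustration and don’t reflect any real platform’s algorithm.

```python
# Toy model of the loop Pariser described: a hypothetical feed that ranks
# stories purely by past engagement with each viewpoint.
# Not any real platform's algorithm -- just an illustration of the argument.

engagement = {"agreeable": 1, "opposing": 1}  # start off roughly neutral

def rank_feed(stories):
    # Viewpoints with more past engagement float to the top of the feed.
    return sorted(stories, key=lambda s: engagement[s["viewpoint"]], reverse=True)

def simulate_session(stories, clicks=1):
    for story in rank_feed(stories)[:clicks]:  # the user only reads the top of the feed...
        engagement[story["viewpoint"]] += 1    # ...which reinforces that viewpoint's score

stories = [{"viewpoint": "agreeable"}, {"viewpoint": "opposing"}]
for _ in range(10):
    simulate_session(stories)

print(engagement)  # the 'agreeable' score keeps climbing; the 'opposing' one never moves
```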
“A world constructed from the familiar is a world in which there's nothing to learn ... (since there is) invisible autopropaganda, indoctrinating us with our own ideas.” - Eli Pariser, 2011
In his book, Pariser recounted a personal example in which two of his friends searched for ‘BP’ on Google. One friend received top results about the 2010 Deepwater Horizon spill (an estimated 4.9 million barrels, eleven immediate deaths, uncountable loss of marine life, a felony charge for lying to Congress, $20 billion in fines) and the other got top results for investment opportunities (still financially decent, apparently, if morally bankrupt). You can see how terrifying that is: the friend who was more likely to engage with bad news got informed; the friend who’d rather stay ignorant and invest didn’t.
Amazingly, though, that’s the only example quoted in the whole book. No studies or further scientific backing – just his theory and a single anecdote.
That hasn’t stopped the theory from remaining in public discourse for a decade. That’s because it sounds really plausible. The lack of scientific evidence could be chalked up to ignorance about Google, Facebook, Twitter, or Reddit’s algorithms, and/or the difficulty of scientifically measuring the phenomenon. That’s changed recently, though: conclusive studies have shown that search engines and social media algorithms aren’t filtering out opposing worldviews – in fact, they’re making them more accessible.
A study published in Digital Journalism concluded that using search engines diversifies people's news diets. People who use search engines for news use more news sources on average than people who don't, and are more likely to read both left- and right-leaning sources.
A study from the Reuters Institute found similar results for social media, concluding that despite newsfeeds being filled with opinions from friends we’re likely to agree with, social media users encounter news from a wider variety of sources than people who access their news directly.
It’s fair to say that the simplistic version of the filter bubble theory needs to be re-evaluated.
So we now know that the situation is even more complex than we thought initially. How can it be that search engines and social media platforms are showing everyone a wider variety of information from across the spectrum, but we’re still getting more divided? Surely instant access to the same information should have the opposite effect?
Well, today’s polarisation can't all be chalked up to technology. It has to be understood in a wider societal and political context. No major polarisation in past democratic history can be entirely blamed on a communication problem, and today’s polarisation is no different. We need to deeply examine today’s communities, politicians, and public figures, offline and online, if we are going to have any perspective on today’s polarisation. Having said that, tech itself is still an important part of the conversation. Whilst search engines have largely been cleared of any negative polarisation effects (for now), there are complicated effects stemming from mass social media communication, and they’re slowly becoming clearer.
Unlike search engines, social media platforms are characterised by community engagement. That means social media sites need engagement between users as part of their business model – the more, the better. Social media sites can fall into a trap of encouraging and promoting interaction even if that interaction revolves around misinformation or polarising speech. In fact, misinformation and polarising speech are more likely to be promoted because of their high interaction rates.
A study funded by Twitter found that false or misleading news travels six times faster than accurate news. ‘Fake news’ isn’t a new phenomenon and has been a significant part of public discourse for millennia with psychological theories abounding for its existence. It’s also a political force, having been systemically utilised by politicians worldwide including Donald Trump. Mix all that with today’s social media technology giving people the ability to communicate misinformation instantly, and you get a more complex view that could explain the rise and rise of totally unfounded conspiracy movements like QAnon (where politicians eating children is just the tip of the iceberg).
Here’s another phenomenon. A 2018 study took a group of liberal-leaning voters and a group of conservative-leaning voters and had their Twitter accounts follow bots that retweeted opposing views. What the researchers found was that whilst liberal voters didn’t experience any significant swings (apart from a slight, inconclusive overall lean further left), the conservative-leaning voters became significantly more entrenched in their views the more opposing views they read.
The creators of the study, published in the journal PNAS, stress that there isn’t enough in their data to come to significant, generalised sociological conclusions, and they’re right. Their findings don’t exist in a context-less bubble (no pun intended) and would no doubt be different at a different point in time or in a different country. But studies like theirs are leading us to the right answers for our time – answers that combine technological advancements with the contexts they exist in.
The viral Cambridge Analytica / AggregateIQ data scandals can be viewed in the same way. Facebook’s data harvesting and enabling of political advertising were consciously exploited by leaders aiming to manipulate voting habits on a wide scale. The initial, context-less phenomenon – that political advertising can effectively influence voters – must be understood in the context of the forces that control it (in this case, the U.S. Republican party and Vote Leave politicians in the U.K.).
So where does that leave tech companies? We think that tech companies have a responsibility to develop their tech with foresight, anticipating and preparing for the negative effects that their algorithms and features may have and the potential for exploitation they’re creating. They also have a responsibility to keep a constantly introspective mindset, acting on and remedying issues as soon as they arise.
Twitter’s pledges to remove fake news are commendable, as are Facebook’s assurances that the Cambridge Analytica scandal could never happen again. High-profile responses alone shouldn’t give us a sense of calm, though.
The potential for disastrous consequences only increases as technology continues to develop, which means the pressure that we’re applying as a society needs to be consistent. We can never stop demanding foresight and proactivity from the tech companies carving the way into the future – the fabric of democracy depends on it.
When it comes to our search engine, Xayn, the absence of filter bubbling thus far hasn’t made us complacent. It's important not to rule out any possibilities; instead, we have to take the correct countermeasures. That's why the Xayn AI has a built-in filter bubbling countermeasure – just in case filter bubbles could still occur.
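To give a rough sense of what such a countermeasure can look like in principle, here’s a minimal sketch of one well-known approach – diversity-aware re-ranking, where results are picked greedily by trading relevance off against similarity to what has already been selected. The function and parameter names are illustrative assumptions, not Xayn’s actual implementation.

```python
# A minimal sketch of one possible countermeasure: diversity-aware re-ranking.
# This illustrates the general idea, not Xayn's actual algorithm;
# `relevance`, `similarity`, and `penalty` are placeholders.

def diversify(results, relevance, similarity, top_n=5, penalty=0.5):
    """Greedily pick results, trading relevance off against similarity to
    what has already been selected, so near-duplicates of one viewpoint
    don't dominate the page."""
    selected = []
    candidates = list(results)
    while candidates and len(selected) < top_n:
        def score(item):
            max_sim = max((similarity(item, s) for s in selected), default=0.0)
            return relevance(item) - penalty * max_sim
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With the penalty set to zero this reduces to plain relevance ranking; raising it forces more varied results – and viewpoints – into the top of the page.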
We believe it’s the responsibility of any digital services company to contribute to and protect democratic society, and that’s why we’re thinking proactively.
But what happens when protecting democratic society involves moderation – or censorship? Stay tuned for our next post!
Photo by Cleyton Ewerton from Pexels