"The challenge is that the internet has created a place where misinformation just flourishes. We have to think about these systems more carefully." - Bill Gates
Being misinformed by social media is not a flaw. It is a feature. Why would that be good? For social media providers to sell access to you, they need to know everything about you and then keep you glued to that device. The best way to keep you there is to give you more of what you want, until you have a skewed sense of reality. Nowhere in the experience is the accuracy of the information, or the perception you are left with, factored in.
To come to this conclusion does not require conspiracy theories or allegations of malice. It just requires an understanding of their business model.
Who is the customer?
You are the customer, right? NO. If you are a user of a social media site or a free internet search engine, you are not the customer. YOU are the product.
To be more specific, your data (search history, comments, likes, friends, etc.) and your attention (the time you spend on the platform) are the product. Google or Meta can identify who you are and what you might like, and can sell advertisers access to you. Your eyeballs go to the highest bidder.
Putting the customer first
The advertisers are the customers, and the customer is prioritised. The more of your data and time the platform has access to, the more precisely it can target you, and the more advertising it can sell.
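To make the business model concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the advertiser names, prices, and profile fields are made up, and real ad exchanges are far more elaborate. But the shape is right: advertisers bid for an impression, the bid rises with the quality of the profile data, and the user is never a party to the trade.

```python
# Hypothetical sketch, not any real platform's API. Advertiser names,
# prices and profile fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Advertiser:
    name: str
    target_interest: str
    max_bid: float  # most the advertiser will pay for one impression

def bid_for(ad: Advertiser, profile: dict) -> float:
    # Toy valuation: an impression is worth far more when the platform's
    # profile data says the user matches the advertiser's target.
    return ad.max_bid if ad.target_interest in profile["interests"] else 0.05

def sell_impression(profile: dict, advertisers: list[Advertiser]) -> tuple[str, float]:
    """Simplified second-price auction: the highest bidder wins the user's
    attention and pays the second-highest bid."""
    ranked = sorted(advertisers, key=lambda a: bid_for(a, profile), reverse=True)
    return ranked[0].name, bid_for(ranked[1], profile)

profile = {"age": 34, "location": "London", "interests": ["running", "crypto"]}
ads = [Advertiser("ShoeCo", "running", 0.40),
       Advertiser("CoinApp", "crypto", 0.55),
       Advertiser("BankPlc", "mortgages", 0.30)]
print(sell_impression(profile, ads))  # ('CoinApp', 0.4): the user is the product
```

Notice where the accuracy of what you are shown fits into this trade: nowhere.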
So, how does Big Tech keep you on its platforms and harvest your data? And how does this lead to misinformation? Read on.
Gluing people to the platform
The best way to keep people on the platform is to provide them with the content that keeps them there. Here are seven of the main evolutionary steps used to keep you glued:
It started with profiles and connections with friends
Then it moved to personal updates, and scrolling was made frictionless so nothing disrupted the flow.
Then games, gambling and quizzes.
Then it tapped into our need for approval with “likes”. (These were both interactive and helped the algorithms learn what people really supported.)
Then came ever more notifications. Immediacy increased, and people felt compelled to check their phones.
Then came greater use of video. Streaming sites like YouTube, and later TikTok, could better target people with content predicted from their profile and previous engagement. This combination was far more addictive and far better at keeping people engaged.
Then came the finessing of content, e.g. live streaming and ever-shorter videos, taking advantage of people’s desire to ‘snack’ on content.
Each of these phases was an enhancement designed to satisfy the real customer’s needs and keep you glued to the platform.
Most of the content above is in the realm of entertainment, e.g. pictures of your friends, funny YouTube shorts and so on, so you could argue that we are being entertained in return for our private information and time. BUT there is one obvious problem: the same algorithm is applied to the news.
What we humans are not
If you have read my blog for a while, you will know that truth-seeking is hard but worthwhile. It requires open-mindedness and effort. It is fair to say most humans are not truth-seekers most of the time. Three biases stop us getting a balanced picture.
Confirmation bias: We find content that resonates with our existing views more believable, which in turn reinforces those views.
Availability bias: The more we see a point of view, the more we believe it to be true.
Social proof: We tend to shift our views towards whatever is socially acceptable and back-solve for the information that lets us believe it. Intelligent people are excellent at constructing arguments for why they should hold their self-serving beliefs.
Echo chambers
How do you keep people engaged with the news? From a "like" to doomsday in four steps:
You see an article posted by a friend covering a UFO sighting. You “like” the article and comment “really interesting / scary”. Both you and your friend are affected by that interaction: you feel supportive of your friend, and your friend feels more right. This is an example of social proof.
The algorithm now assumes you like that content. It becomes more likely to show you content from people who also liked the article and to predict what else you will like. Soon you are seeing more content about UFOs, some light-hearted, some quite concerning. You see an article titled ‘Was the UFO the first of many?’. You like it. People should at least be aware of it, even if they do not agree.
Engagement with the more extreme article further increases the amount of UFO content you see. Now you see articles like ‘The aliens in our midst’ and ‘Is it too late to save Earth?’. Alien stories appear more and more frequently, and it feels like everyone else on the platform is worried too. This is availability bias (the more you see something, the more you believe it). The more you believe it, the more you want to read about it. You start doom-scrolling about the impending alien invasion.
All doubts are dispelled now. You see more and more about the topic. Every credible-looking article talks about the alien threat, and you keep clicking on pieces that describe the idea in greater and greater detail. This is confirmation bias.
Nowhere here is the accuracy of the information factored in. We know that false certainty is much more entertaining and popular than nuanced reality, and so naturally the algorithms promote false certainty.
Because users are steered towards content like what they previously liked, they get pushed further in that direction. This is more engaging than hearing the other side’s views. This “echo chamber”, the amplification of whatever interest or concern you show, is what keeps you scrolling.
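How little machinery this takes can be shown with a toy simulation, sketched below in Python. Everything here is an assumption for illustration: no platform publishes its ranking code, real recommenders use vastly more signals, and the numbers are arbitrary. The point is only that “rank by predicted engagement, then learn from clicks” is already enough to walk a user from a mild interest towards the extreme end of a topic.

```python
# Toy echo-chamber simulation. Illustrative only: the catalogue, the
# "extremity" scores and the noise level are all invented assumptions.
import random

def recommend(taste: float, catalogue: list[float], k: int = 5) -> list[float]:
    # Rank by predicted engagement: here, simply similarity to current taste.
    return sorted(catalogue, key=lambda item: abs(item - taste))[:k]

def simulate(steps: int = 200, seed: int = 1) -> float:
    random.seed(seed)
    catalogue = [i / 100 for i in range(101)]  # "extremity" scores, 0.00 to 1.00
    taste = 0.10                               # the user starts mildly interested
    for _ in range(steps):
        feed = recommend(taste, catalogue)
        # Engagement is noisy, but slightly more extreme items hold attention
        # a little better, so they win the click slightly more often.
        clicked = max(feed, key=lambda item: item + random.uniform(0, 0.02))
        taste = 0.9 * taste + 0.1 * clicked    # the model learns from the click
    return taste

print(round(simulate(), 2))  # typically ends well above the 0.10 starting point
```

Nothing in this loop ever asks whether an item is true; the only quantity being optimised is the chance of the next click.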
People who use social media to understand the world will be misinformed.
Rather than experiencing “free speech” and a competition of ideas, people are experiencing confirmation and the extremising of their initial views. They will not even hear the opposing views, or if they do, it will be as a poorly articulated caricature.
Bad faith use of social media
The ability to skew beliefs on social media is a huge draw for people looking to run disinformation campaigns. The upcoming US and UK general elections will see enormous ones. Some examples of the likely tactics:
Use of sleeper bot accounts: fake accounts that have quietly posed as innocent members of one side to earn trust will slowly become more radical and engage with more extreme content, so that it becomes part of your feed.
“Fake networks”: Bot accounts will interact with each other to amplify a viewpoint.
Fake news: Made-up news stories will be pushed out. Fake bot networks will engage with them, and they will start appearing in the feeds of people with similar political views.
Repeated lies: Baseless accusations made repeatedly will stick when it comes to voting.
Jokes with a message: The repeated lies will be packaged as jokes or memes.
Psychological ops: Using information about someone to target a message at them in the way that will be most convincing. Social media is excellent at promoting alternative realities, so the messaging does not even need to be consistent. Say you want Party X to win: target a Party X voter in an emotionally resonant way and convince them to turn out to vote. Alternatively, target an opposition voter, muddy the water about their candidate, and try to decrease their chance of voting.
What’s the result of all of this?
The outcome of the algorithmic approach is a poorly informed and easily manipulated person. The drip-drip and slow nudging of extreme or planted views will take its toll on even the most resilient and intelligent person. Not only will that person have become misinformed, they will feel highly confident in their views. Is this responsible for the polarisation we see in the public square? Tory vs Labour, Trump vs Biden, immigration, decarbonisation, taxes. The list goes on.
Don’t expect any regulation!
I don’t see any regulation coming. Social media companies, like the print news media before them, will find a way to escape muscular regulation under the guise of free speech. They could also borrow the argument used by the vice industries: “If customers are harmed by our product, they are doing it of their own volition.”
What’s the solution?
If you can, avoid news articles from social media. That to me is an obvious start.
If you consider yourself intelligent, be aware that you may be even more likely to be skewed, because you can make good arguments to post-rationalise your views.
Use a variety of news sources with a reputation for high editorial standards (FT, BBC, The New York Times, possibly add Al Jazeera for world news).
Reject the argument that social media is a bastion of free speech. The algorithm does anything but promote it.
So what?
Social media platforms profit by keeping users engaged for extended periods, feeding them content that aligns with their existing beliefs. User data and attention are then sold to advertisers, the real customers.
Algorithms create echo chambers by reinforcing users' existing beliefs through confirmation bias, availability bias, and social proof, leading to an increasingly polarised and misinformed public. The accuracy or objectivity of information on the platform is not a consideration.
Social media platforms are ideal vehicles for disinformation campaigns. In political contexts, bot accounts and fake news proliferate to manipulate public opinion, largely unregulated under the guise of free speech. Best not to get your news via social media, even if you are intelligent by conventional measures.
Join me next week to discuss “Principles to rebuild our public square”. Sign up to the subscription list on Blog | Deciders (hartejsingh.com). Follow me on Twitter: @Decidersblog.