"When people talk about fake news, a lot of folks are like 'people will figure it out.' The truth is, they don't always figure it out." - Megyn Kelly
Propaganda is all around us and, as we discussed last week, it is a way to get people to support our cause, or at least not support our opposition. We all use it, and within social norms it is an accepted part of everyday life. The dark side of propaganda is that it stops us being objective and can lead us to make poor decisions based on a skewed sense of reality.
Today's post is the second part of the three-part propaganda series and is about propaganda in the digital age. We will cover an example of how people have used social media to manipulate others in the recent past, and how, with AI, this will become far more potent going forwards.
Background on digital media
Through the advent of radio, television, the internet, social media and, increasingly, AI, technologies have evolved to deliver propaganda to you in ever more personalised ways.
Social media has significantly altered the way people consume news. Between messaging services like WhatsApp and pure social media sites like TikTok, Twitter, YouTube, Facebook, Instagram and LinkedIn, people can now be reached in ways they could not be previously. I see the main changes as:
People can be reached globally, quickly and cheaply
People's viewing can be skewed through paid advertising and the likes and shares of their network
Reality can be skewed further by echo chambers and filter bubbles
Content can be generated by individuals whose motives you do not know
Fake accounts and bots can get access to you
Whilst newspapers face some legal redress for gross lies, online untruths can be spouted freely.
One example story
A well-covered tale of digital propaganda is the Cambridge Analytica story. The company is thought to have influenced a number of elections, including the 2016 US Presidential election and the 2016 Brexit referendum. Most of my information on Cambridge Analytica comes from a Guardian exposé of its operations, which includes whistleblower testimony (article below).
How is Cambridge Analytica thought to have swung elections?
Whilst it is customary to blame foreign countries (particularly Russia) for social media propaganda, a far more interesting story was happening closer to home. Cambridge Analytica started in 2013 in the UK and was closely tied to its parent company, Strategic Communications Laboratories, in the US. Strategic Communications was a psychological operations company that used technology and psychology to create propaganda. It was primarily funded by Robert Mercer, an American hedge fund billionaire from the firm Renaissance Technologies who funded a number of conservative causes.
Step 1: Understand people's actual views through surveys: Whilst Cambridge Analytica knew they could get some people to share their views for free, they paid people somewhere between $2 and $5 to fill in a 120-question survey asking their views on a range of topics, paying more to less reachable groups (e.g. working-class men). The survey indicated where people sat on five personality dimensions. They used the OCEAN model, which scores openness, conscientiousness, extraversion, agreeableness and neuroticism; they favoured this model as it is thought to be a more consistent grouping across cultures, age groups and time. Splitting each dimension into high and low gives at least 2x2x2x2x2 = 32 different profiles, each with different preferences, views and ways of receiving messages.
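To make the arithmetic concrete, here is a minimal Python sketch of the bucketing. The trait scores, threshold and labels are my own illustrative assumptions, not Cambridge Analytica's actual method:

```python
# A minimal sketch (illustrative data and threshold): bucket OCEAN
# scores into high/low per trait, giving 2x2x2x2x2 = 32 profiles.
TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def profile_label(scores, threshold=0.5):
    """Return a 5-character high/low label such as 'HLHLH'."""
    return "".join("H" if scores[t] >= threshold else "L" for t in TRAITS)

# One (invented) respondent's trait scores on a 0-1 scale.
respondent = {"openness": 0.8, "conscientiousness": 0.3,
              "extraversion": 0.6, "agreeableness": 0.4,
              "neuroticism": 0.7}

print(profile_label(respondent))  # HLHLH -> one of the 32 buckets
print(2 ** len(TRAITS))           # 32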
Step 2: Harvest Facebook data: In the testing phase of the research, Facebook (now Meta) was barely involved. At the very end, however, users were required to log into Facebook and approve access to the survey app, which facilitated payment. This was a diversion! Without consent, the app harvested as much data as it could by scraping the user's Facebook profile. This included personally identifiable information such as real name, location and contact details, information that was not obtainable through the survey itself.
Step 3: Map profiles onto people from the electoral register: Personality data was mapped onto actual people appearing on the electoral register. The app then did the same for all the friends of each user who installed it. This created a treasure trove of information on hundreds of thousands of people.
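This mapping is essentially record linkage: joining one dataset to another on shared identifying fields. Here is a rough Python sketch of the idea using pandas, assuming both datasets share a name and postcode; every field and value below is hypothetical:

```python
import pandas as pd

# Hypothetical survey-derived profiles (from Steps 1-2); all names,
# fields and values are invented for illustration.
profiles = pd.DataFrame({
    "name": ["Jane Doe", "John Smith"],
    "postcode": ["SW1A 1AA", "M1 1AE"],
    "profile": ["HLHLH", "LLHHL"],
})

# Hypothetical electoral-register extract.
register = pd.DataFrame({
    "name": ["Jane Doe", "John Smith", "Amy Jones"],
    "postcode": ["SW1A 1AA", "M1 1AE", "LS1 4DY"],
    "voter_id": [101, 102, 103],
})

# Join the personality profiles onto real, named voters.
linked = profiles.merge(register, on=["name", "postcode"])
print(linked[["voter_id", "name", "profile"]])
```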
Step 4: Create prediction algorithms: According to a whistleblower, 253 algorithms were created from this sample. The algorithms were trained on the personality test and Facebook data to predict people's personality type, their political affiliations and their views on particular subjects. These could then be used to reliably forecast the views and personality type of any other Facebook profile the company could get access to. By the end of August 2014, 2.1m profiled records from 11 target US states had forecast outputs.
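For a flavour of what such an algorithm might look like, here is a toy Python sketch using scikit-learn. The real 253 models were never published, so the features, labels and model choice here are purely my assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy feature matrix: each row is one harvested profile, each column
# a binary "liked page X" signal (all values invented).
X = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 0, 0, 1]])

# Labels from the paid survey, e.g. 1 = holds a particular view.
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Forecast the same view for a profile that never took the survey.
new_profile = np.array([[1, 0, 1, 1]])
print(model.predict_proba(new_profile))  # [P(view=0), P(view=1)]
```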
Step 5: Target with highly personalised advertising: With the 253 reliable algorithms, Cambridge Analytica could micro-target adverts. Take a bland election promise like creating more jobs. It doesn't feel like much of a vote winner, but it can be dressed up in language that resonates emotionally with a personality type. For someone who scores highly on conscientiousness, you focus the messaging on the opportunity to succeed; an open person might be targeted with the opportunity to grow as a person; and you would emphasise job security to a neurotic person.
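As a simple illustration of this step, here is a sketch that re-frames the same jobs promise by dominant trait, echoing the examples above; the wording and lookup are invented for illustration:

```python
# Illustrative only: frame one bland promise ("more jobs")
# differently depending on a person's dominant OCEAN trait.
FRAMINGS = {
    "conscientiousness": "More jobs: your chance to work hard and succeed.",
    "openness": "More jobs: new opportunities to grow as a person.",
    "neuroticism": "More jobs: real security for you and your family.",
}

def frame_message(dominant_trait):
    # Fall back to the unadorned promise for traits not covered.
    return FRAMINGS.get(dominant_trait, "We will create more jobs.")

print(frame_message("neuroticism"))
```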
How could they persuade you?
Each personality type could be influenced through advertising, viral content or fake accounts on a continual basis. Let's say it was an independence election. The key would be to bombard you emotionally with things that resonate in favour of independence and dissuade you from supporting the opposition.
The level of access and ability to micro-target meant that not only were they able to persuade people but they were able to evoke very strong emotions like fear, anger and indignation. This is important as the stronger the emotional appeal, the more likely people are to share content, rally other people and turn out to vote.
The ads were micro-targeted to the people their algorithms identified as most susceptible. They also used Dark Ads, which were visible only to a selected audience. This made it impossible for the general public to see what messages other people were being targeted with, and therefore to counter misleading or untrue ones (e.g. fake news).
They also used social proof, people sharing content, to make things go viral. Content was designed to be simple, visual and easy to share, so once messaging had infiltrated a social network it would spread readily.
In short, Cambridge Analytica was able to get emotionally resonant content to people, content most likely to persuade them to vote for the targeted cause and to share it onwards. They were also very good at sowing apathy amongst people likely to vote for the other side. Spreading confusion and apathy is a powerful way to keep some people from voting.
We. Are. All. Targets.
We are all targets. We are all targets for people who want to persuade us and change our mind. We are all targets for people who want to cause division.
Our digital footprint is growing: our search history, connections and likes, bank information, work information, tax returns, location data, health information, facial reactions, walking gait and biometric data. Whilst much of this is legally protected, it is a concern if it gets into the wrong hands.
With advances in AI and quantum computing, the processing power and analysis that can be directed at persuading us is growing exponentially. We can be ever more micro-targeted: rather than 32 personality profiles, targeting could run along billions of parameters.
The creation of fake accounts can be made more subtle using LLMs like ChatGPT. They can pose as influencers with expertise in a topic you are most interested in. Over months they can slowly draw you in, then veer towards propaganda in subtle ways. They can use AI-generated media to help persuade you that popular figures you respect feel a certain way too. You can be fed subtly false events which you might believe because this social media figure has gained your trust.
It does not stop there: what about cyber warfare? If you want to destabilise a country, you can create emotive reactions on both sides of a debate, leading to more polarised, hostile discourse and manufactured crises. You can also leak embarrassing data into the public domain to erode trust in people, and mix in some untrue material; people might believe it given that some of it was true.
How do you know what to trust in the age of digital media?
So what?
People can reach us from anywhere, at any time, for little cost, with micro-targeted material to persuade us emotionally on certain topics.
Cambridge Analytica is thought to have influenced elections by using people's own digital footprints to forecast what they thought and what they would react to, then systematically targeting them.
With more sophisticated tools, larger digital footprints and more computing power, we will be presented with propaganda in ever more convincing ways to persuade us emotionally. Next week, we will discuss protecting yourself from digital propaganda.
Thank you for joining. "Propaganda protection" next week. Sign up to the subscription list on Blog | Deciders (hartejsingh.com). Follow me on Twitter: @HBSingh_uk
Other blogs in the propaganda series:
Link to the Guardian article