October 26, 2021

How did Facebook’s algorithms radicalize users across the globe? And is it culpable?

Ordinary human bigotry radicalized for profit.

Facebook whistleblower Frances Haugen on "60 Minutes."


Recent leaks of internal documents have given the public rare insight into Facebook’s business practices. Many of these disclosures have come from a whistleblower, Frances Haugen, a former Facebook employee who, in her testimony before Congress, stated: “I am here today because I believe that Facebook’s products harm children, stoke division, and weaken our democracy.”

The Facebook leaks have shown, among other things, that the company provided a breeding ground for right-wing extremism. For example, Facebook’s own researchers determined that a fake user who was tagged as “Conservative, Christian, Trump-supporting” would be hit by a deluge of conspiratorial and racist propaganda within days of joining the platform. Similarly, in India, over the course of only a few days, a fake user was inundated with anti-Pakistani rhetoric, such as, “300 dogs died now say long live India, death to Pakistan.”

How did Facebook’s algorithms radicalize users across the globe? 


We don’t have the complete answers, but here’s what we do know: Facebook designed algorithms that played upon a web of human cognitive biases and social dynamics to maximize engagement and derive profit. And the very factors that made these algorithms profitable also made them a veritable petri dish for extremism. 

To understand this, we can first reflect on the underlying psychological mechanisms that the company exploited.

We, as social creatures, are subject to multiple forces that shape the information we consume and our social interactions. 

  • Confirmation bias: We seek out information that confirms our beliefs rather than that which would falsify them. 
  • Congeniality bias: We seek out supportive behavior from others who affirm our beliefs. 
  • Emotional bias: We favor emotional information over neutral information in general. We favor engaging with negative content over positive content, especially on social media. 

These biases then lead us to self-select into groups. We want to interact with people who agree with us. We want affirmation. We bond over powerful emotions, rather than neutral facts. 

Once we join groups of like-minded people, we are subject to multiple effects that arise from our interactions with other group members. Within a group, we are less likely to express dissenting opinions than we are to express agreement. Further, we are driven not just to agree, but to make ever more elaborate points. These tendencies can be benign, or even productive, but research has also shown that, over time, the confluence of agreement and elaboration can be detrimental: specifically, the more members of a group speak about a topic on which they all agree, the more extreme their rhetoric becomes.

None of us is immune to these pressures, myself included. I’ll hesitate before expressing dissent within a given social group, whereas I’ll feel bolstered when I express agreement. And when I do agree, I’m rarely content to say, “Yes, I agree”; rather, I feel inclined to offer an elaboration. This is all ordinary human behavior.

However, these biases and behaviors become pernicious in the domains of bigotry and conspiracy theories. If a group rewards members for bigotry, its members will engage in more frequent and more extreme acts of bigotry. If a group rewards members for the brilliance of a conspiracy theory, its members will elaborate on that theory ever further.

What does all of this have to do with Facebook? 

Facebook made specific algorithmic choices that not only facilitated these psycho-social phenomena, but exploited and amplified them. Why? Because appealing to biases and group behavior leads to user engagement. User engagement, in turn, leads to greater profit. 

Facebook is still not fully transparent about its algorithms, but here is what we do know: Before a user views a given piece of information — whether it’s a news report or a post from another person — that information gets filtered to maximize the user’s engagement. 


To achieve this, the algorithm evaluates a user’s profile and serves them information that conforms to their identity. It also down-weights (or, frankly, suppresses) information that disconfirms the user’s priors. This means that if a user expresses doubt about vaccines, they will see more vaccine doubt rather than pro-vaccine arguments. If a user expresses bigotry, they will see more bigotry rather than anti-bigotry arguments. This aspect of Facebook’s algorithm thus relies heavily on confirmation bias to engage users.
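
Facebook has never published this ranking code, so the details are unknowable from the outside. Purely to make the logic concrete, here is a minimal Python sketch in which every name and number is invented; it shows how scoring posts by agreement with a user’s inferred stance quietly boosts prior-confirming content and buries the rest.

```python
# Hypothetical sketch only: Facebook's real ranking code is not public, and
# every name and number here (Post, stance, rank_feed, confirmation_boost)
# is invented to illustrate the confirmation-bias dynamic described above.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    stance: float  # -1.0 (e.g., vaccine-skeptical) to +1.0 (e.g., pro-vaccine)

def rank_feed(candidates, user_stance, confirmation_boost=2.0):
    """Order candidate posts so content agreeing with the user's inferred
    stance rises to the top, while disagreeing content sinks out of view."""
    def score(post):
        agreement = post.stance * user_stance  # > 0 confirms priors, < 0 contradicts
        return 1.0 + confirmation_boost * agreement
    return sorted(candidates, key=score, reverse=True)

feed = rank_feed(
    [Post("Vaccines are safe and effective", +0.9),
     Post("Local weather update", 0.0),
     Post("What doctors won't tell you about vaccines", -0.8)],
    user_stance=-0.7,  # a user who has already expressed vaccine doubt
)
print([p.text for p in feed])  # the doubt-confirming post ranks first
```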

But the algorithm’s cognitive tricks don’t end there. 

In 2017, Facebook made the decision to give five times more weight to posts that elicited extreme emotional reactions — such as rage or love — than posts that elicited mere likes. This decision exploited biases towards emotional valence. The company also decided to double down on promoting group membership to combat a decline in engagement. Mark Zuckerberg, Facebook’s CEO, wrote: “There is a real opportunity to connect more of us with groups that will be meaningful social infrastructure in our lives . . . that can strengthen our social fabric.” 
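
That weighting change is worth pausing on. Facebook’s exact formula isn’t public; only the broad outline (emoji reactions counted five times as much as a plain like) has been reported, and the field names and numbers below are otherwise invented. A toy scoring function makes the consequence plain: a post that provokes anger can out-rank a far more widely liked one.

```python
# Sketch of the 2017-style weighting described above: emoji reactions
# (anger, love, etc.) count five times as much as a plain like when
# scoring a post for distribution. Weights and field names are illustrative.
REACTION_WEIGHT = 5  # per the reporting cited above: five times a like
LIKE_WEIGHT = 1

def engagement_score(post):
    """Higher score means wider distribution. Rage and love pay equally."""
    return (LIKE_WEIGHT * post["likes"]
            + REACTION_WEIGHT * (post["angry"] + post["love"]
                                 + post["haha"] + post["wow"] + post["sad"]))

calm_post = {"likes": 200, "angry": 0, "love": 5, "haha": 0, "wow": 0, "sad": 0}
rage_bait = {"likes": 40, "angry": 60, "love": 0, "haha": 10, "wow": 5, "sad": 5}

print(engagement_score(calm_post))  # 225
print(engagement_score(rage_bait))  # 440 -- the divisive post wins distribution
```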

At the same time, researchers warned that Facebook’s group dynamics could be a hotbed of extremism. In 2018, one researcher went so far as to state that the group algorithms produced bot-like behavior among humans and introduced “radicalization via the recommendation engine.”

As we know from psychology, within a social group you are socially rewarded for increasingly extreme behavior. But on Facebook, you’re not just rewarded by other members of the group; you’re also rewarded by the company itself. When you get a lot of likes from your group, Facebook rewards you. When you post something that elicits more extreme responses, such as anger, Facebook rewards you even more. As one internal Facebook report stated, “Our algorithms exploit the human brain’s attraction to divisiveness.”

Furthermore, Facebook decided to show group members unrelated posts from other members of the same group. This inevitably led to an interconnected web of extremist ideologies. Research has shown that once a Facebook user joins one extremist group (say, a flat-Earth group), Facebook will recommend they join interconnected groups, such as those devoted to anti-vaxxing or chemtrails.


And, if group membership correlates with white supremacy, users will start to see that, too. As one researcher put it, “The groups recommendation engine is a conspiracy correlation index.”
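
To see why such a recommender behaves like a “conspiracy correlation index,” consider a toy version that suggests groups purely by how much their memberships overlap with yours. The data and group names below are invented for illustration; they are not drawn from Facebook’s actual system.

```python
# Illustration of co-membership recommendations: groups are suggested simply
# because their members overlap, regardless of what the groups are about.
# All data and group names are invented for this sketch.
from collections import Counter

memberships = {                      # group -> set of member ids
    "flat_earth": {1, 2, 3, 4, 5},
    "anti_vax":   {3, 4, 5, 6, 7},
    "chemtrails": {4, 5, 8},
    "gardening":  {9, 10, 11},
}

def recommend_groups(user_groups, top_n=2):
    """Suggest the groups whose memberships overlap most with the user's groups."""
    overlap = Counter()
    for g in user_groups:
        for other, members in memberships.items():
            if other not in user_groups:
                overlap[other] += len(memberships[g] & members)
    return [g for g, _ in overlap.most_common(top_n)]

print(recommend_groups({"flat_earth"}))  # ['anti_vax', 'chemtrails'] -- never 'gardening'
```

Nothing in that loop knows or cares what the groups are about; correlation alone does the work.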

When we look at all of this, it becomes clear how Facebook’s specific choices to maximize engagement facilitate a snowball of interconnected conspiracy theories and radicalization. Users are shown information that confirms their beliefs. They are encouraged to engage with others who share those beliefs. They are, furthermore, rewarded for increasingly extreme posts. And then, when they join one extremist group, they are exposed to several others.

Perhaps, one could argue, Facebook shouldn’t be held too accountable here. They are a company that is trying to make money. Their ability to make money is dependent on engagement. They didn’t design the algorithm with the explicit purpose to encourage radicalization. 

This excuse falls apart the moment one realizes that, for years, Facebook was warned by people both inside and outside the company that their algorithms led to the rise of right-wing extremism globally.

What we now know is that Facebook drew people in based on their relationships with friends and family, and then it exploited specific cognitive biases in order to maximize engagement with other content. 

We know the company made choices it was warned could lead to radicalization globally. The company not only ignored these warnings, but suppressed evidence from its own researchers demonstrating that dire predictions about the algorithm were coming to fruition.

We know radical content led to more engagement, which, in turn, was good for the company’s bottom line. Facebook is therefore culpable not only of exploiting human beings’ ordinary cognitive biases, but of knowingly encouraging political extremism for profit.


Magdi Semrau writes about the politics of language, science and medicine for the Editorial Board. She has researched child language development and published in the New York Academy of Sciences. Born and raised in Alaska, she can be found @magi_jay.
