Q&A: Kellogg Prof. William Brady talks new study about moral outrage on Twitter

Kellogg School of Management building. Grey sky in background, lake in left pane. Kellogg windows reflect blue.

Daily file photo by Colin Boyle


Nora Collins, Senior Staffer

Kellogg Prof. William Brady led a study published earlier this month about moral outrage on the internet. The study examined how users’ perceptions of anger in online interactions contribute to the magnification of negative, partisan content on social media platforms like Twitter. 

Researchers identified and contacted users who tweeted with high or low outrage levels about U.S. national politics on topics like the 2020 presidential election. Participants then reported their emotional state — specifically whether they felt happy or outraged — within 15 minutes of posting the tweet. 

Then, researchers asked 650 individuals to rate how happy or outraged they believed the users had been when writing the tweets. They found observers generally overestimated the level of outrage expressed in posters’ statements, but more accurately estimated happiness.

Brady spoke with The Daily about the study’s implications for social media and polarization. 

This interview has been edited lightly for clarity and brevity. 

The Daily: What questions were you trying to answer through this study?

Brady: The main question we were trying to answer was whether characteristics of the social media environment can make us misperceive emotion. Do we actually see evidence that, because of features of social media, people are overestimating the amount of outrage online? That could be problematic, because a lot of times we form our understanding of what we should be morally concerned about based on the outrage of our group. 

The Daily: Why might engaging with others online lead to more misperception than speaking face-to-face?

Brady: When you’re communicating via text and images only, there’s limited information richness. You lack some of the subtlety you would have in a face-to-face conversation, and that tends to push our emotion perception to extremes. There’s not a lot of nuance in the middle. People are either not upset or very upset. 

The other big thing is the role of algorithms. We have a natural attraction to emotional and moral content because we have social learning biases that actually draw our attention toward those types of information. Moral outrage is the kind of content that tends to get boosted or amplified by algorithms. It keeps us engaged and draws our attention.

The Daily: Which study finding was most surprising for you? 

Brady: One interesting thing was that the people who were most likely to overperceive outrage in our study spent the most time learning about politics on social media. It suggests individuals get used to interacting in these political contexts that are more extreme than the average person might interact in. (This) informs their expectations about other people expressing outrage. Sometimes we forget the networks we interact in might not be representative of everybody, and this can bias our emotion perception.

The Daily: Could you speak on why your study is relevant right now?

Brady: One of the big media events right now has to do with Fox News and the Dominion Voting Systems settlement, and subsequently Fox and Tucker Carlson parting ways. This is related to the phenomenon I’m studying in this paper. When you look at the defamation case, the main evidence was that Tucker Carlson privately said he didn’t truly think fraud was pushing the election over the top, but pushed it anyway for attention and viewers. That is a classic case. It’s one side of the explanation for why overperception of outrage happens and why it’s negative.

The Daily: You mentioned earlier that there’s a human tendency to overinterpret negative information. Have you or your group explored any solutions?

Brady: There’s no doubt (the human tendency is) a combination of our psychology, the algorithms’ behavior, and other constraints (on) social media platforms. On the platform side, I argue you can improve the ways algorithms behave so they don’t exploit emotional information by increasing the prevalence of that content. 

I think we do have to try to think of ways to educate people on how their social world is impacted by algorithms and constraints of the platform. If people’s digital and algorithmic literacy could increase, I do think that would help us be aware of some of these biases.

But the downside is that even when you know about biases, we often still fall prey to them. So it really takes a combination of those two things, the platform and the education side. I think there’s a lot of work that can be done right now to figure out the most efficient way to reduce some of these issues, like overperception of outrage.

The Daily: What areas are you looking into for the future?

Brady: One of the main lines of research I’m looking at now is related to educational interventions, such as simple two-minute explainer videos, which could help people engage more accurately with the emotions they see on social media. 

We’re currently doing a study that’s testing different types of interventions to do that. I’m also using computational modeling to test different versions of algorithms that select messages to amplify based on their content, in order to increase informational diversity. 

Email: [email protected] 

Twitter: @noracollins02

Related Stories:

Panelists discuss race in the digital revolution, impact of social media in journalism

Students, faculty discuss future of Blockchain technology and cryptocurrency following FTX collapse

Community-based social media app and news aggregator Skuy seeks to supplement YikYak, Sidechat