What goes viral on social media, and why? Do people view information-based content less favorably than misinformation? Why do we click more on polarizing content than on neutral information? In this episode, Under the Cortex hosts Dr. Steven Rathje from New York University. Rathje’s research explores what people think about social media content and what motivates their online behavior.
Rathje and APS’s Özge G. Fischer-Baum explore the implications for societal change, in-group and out-group behavior, and emotional choices on internet usage.
[00:00:14.290] – APS Özge Gürcanlı Fischer Baum
What goes viral on social media? Does divisive content attract more attention? Do people view information-based content less favorably? This is Under the Cortex. I am Özge Gürcanlı Fischer Baum with the Association for Psychological Science. To answer these questions, I have with me Dr. Steven Rathje from New York University. He recently published an article in APS’s journal Perspectives on Psychological Science about people’s approach to social media content. Steve, thank you for joining me today. Welcome to Under the Cortex.
[00:00:52.780] – Steven Rathje
Thank you for having me.
[00:00:54.930] – APS Özge Gürcanlı Fischer Baum
I want to dive into your research right away. It is very interesting. It explores what people think about social media content. How did you first get interested in this topic?
[00:01:07.670] – Steven Rathje
Yeah, I mean, in my personal opinion, what goes viral on social media is one of the most important questions we can ask in our contemporary society. Around 5 billion people use social media, which is more than half the world’s total population. People, on average, use social media for more than 2 hours per day. And for the younger generation, Gen Z, that statistic is closer to 4 hours. So people are really spending more and more of their lives in the online world. And people are increasingly turning to social media to get their news. They’re not going to the New York Times or to the CNN website. They’re instead looking at their newsfeed and at what their friends are sharing. And I think especially when it comes to, you know, we have a U.S. presidential election in a year, there’s a lot of news about the Israel-Hamas war right now. This question of what goes viral determines the type of news that people get and which politicians succeed. I think we saw this especially well with Donald Trump, who, back when he was active on Twitter, was quite good at gaming Twitter and quite good at going viral.
[00:02:15.540] – Steven Rathje
And that probably led to his success as a candidate. So I think it’s really important as a question, because what goes viral determines what we see for those 2 or more hours on social media every day, which can decide elections, or, people speculate, contribute to political violence or political protests or all sorts of things like that. So I think it’s important because it determines what we see, and it also determines what we are incentivized to create. So, for instance, if someone gets the sense that polarizing content goes viral, they might create more and more polarizing content, or politicians might specifically create incendiary content because they suspect that the Facebook or Twitter algorithms will favor this content. So, yeah, I think the question of what goes viral is important because it determines what these 5 billion social media users see and what they’re incentivized to create.
[00:03:13.950] – APS Özge Gürcanlı Fischer Baum
And we are guilty of it, too, just like you said. Right? So we all follow social media and we all have our online behavior. But also, despite that, people have their opinions about what type of content they like or they say that they like or respect. What categories of content have you looked at in your study?
[00:03:38.070] – Steven Rathje
Yeah. So in this particular study, we looked at the distinction between what goes viral and what people think should go viral on social media. So we looked at this dissociation. I have previously done research on what goes viral on social media. For instance, in 2021, I published an article in the journal PNAS about how outgroup animosity predicts virality on social media. In that article, I analyzed data from Facebook and Twitter, from Republican and Democratic politicians, as well as Republican and Democratic news sources. And what I found was that the biggest predictor of virality in these samples seemed to be expressing animosity toward one’s outgroup. So if a Democrat was very negative about a Republican, they would go more viral on social media, and if a Republican was very negative about a Democrat, they would also go more viral. So that article found that one of the big predictors of virality seems to be posting polarizing content, specifically polarizing content about an outgroup. So in this paper, this Perspectives on Psychological Science piece, our first goal was sort of to review the literature about what goes viral on social media.
[00:04:54.270] – Steven Rathje
So we reviewed a lot of articles, including articles like my own, that basically looked at some of the predictors of what goes viral on social media. And we found a few clear patterns. One: a number of articles found that polarizing content in general seems to go viral. Researchers have looked at the toxicity of politicians’ messages; I specifically looked at outgroup animosity. A number of studies have found this general pattern that polarizing or divisive content goes viral. So that’s one category of content we looked at. We also found in the literature that a number of studies suggest that negativity tends to go viral. And this perhaps reflects research on the negativity bias in psychology. It’s long been noted in psychology that we pay more attention to the negative than to the positive, and this seems to be the case on social media as well. Messages with negative sentiment tend to go more viral than messages with positive sentiment.
[00:05:56.490] – APS Özge Gürcanlı Fischer Baum
So polarizing content is one of the predictors of going viral. Does it have to be political, or can it be something as simple as whether you like cats or dogs?
[00:06:08.310] – Steven Rathje
Yeah. I don’t necessarily think it has to be political; that’s my speculation. But one of the limitations of a lot of the research on virality is that most researchers have looked at political contexts. So I think a gap in the literature is that people have not looked at non-political content enough. I can give you a few examples of non-political studies, however. My colleagues Claire Robertson and Jay van Bavel, who are also co-authors on this paper, published an article where they looked at news articles on the website Upworthy. Upworthy billed itself as a positive news source, and they found that negative news headlines from Upworthy, whether or not they talked about a political issue, tended to be clicked on more. So at least that study, in the context of negativity, found that negativity is attractive for political and non-political headlines. So we do have a little bit of robustness, at least on the negativity question. But I also see, at least anecdotally, when I go on Twitter, there’s all sorts of conflict, for instance, between the generations, like Gen Z versus the boomers or whatever.
[00:07:22.780] – Steven Rathje
I think we have a lot of conflicts between groups, just general intergroup conflict, whether or not that conflict is political. And I think non-political controversies can also become moralized very quickly on social media. So my intuition, and also some of the research, suggests that the negative and the polarizing tend to go viral universally. But we do need to look at this question more.
[00:07:48.550] – APS Özge Gürcanlı Fischer Baum
And you said you looked at polarizing content, you looked at negativity. Are there any other categories that you are interested in?
[00:07:57.130] – Steven Rathje
Yeah. So there were other categories we found in the literature that seemed to have been associated with virality in the past. One was moral and emotional content. There was a well-known study, conducted by one of our co-authors, William Brady, that found that tweets containing moral and emotional words, words like attack or bad or blame that carry both moral and emotional content, tended to be retweeted much more. Specifically, they found that each moral-emotional word added to a social media post led to about a 20% increase in retweets in their initial study. And some of their follow-up work using machine learning classifiers found that tweets containing moral outrage, as classified by these classifiers, also tended to be retweeted more. So moral outrage is a category of content that seems to be associated with increased virality. There are other studies showing that social media posts containing high-arousal emotions tend to go more viral online. That initial study was conducted by Katie Milkman and Jonah Berger, and there were also some follow-up studies.
[00:09:14.470] – Steven Rathje
So what is a high-arousal emotion? High-arousal emotions are essentially intense emotions, and they can be either positive or negative. A high-arousal positive emotion is an emotion like excitement or awe, and a high-arousal negative emotion is an emotion like outrage or fear. Sadness would be a low-arousal negative emotion, and calmness would be a low-arousal positive emotion. They found that high-arousal emotions tended to lead to increased virality, and their initial study looked at the New York Times articles that were most likely to be shared. So that’s one predictor of virality that’s been focused on in the literature. And a final predictor of virality we found in our review, and this is perhaps a little bit depressing, is that there have been some seminal studies suggesting that false news tends to be retweeted, shared and clicked on more than true news. There’s a seminal study in the journal Science that basically looked at all of the tweets in the entire Twitter archive and matched tweets about news to fact-checked true and false news. And they found that fact-checked false news spread farther, faster and wider than fact-checked true news.
[00:10:33.400] – Steven Rathje
And some replications have supported their initial findings. Now, an interesting caveat to this result is that it doesn’t seem to be true on all social media platforms. For instance, there was a replication of this finding on Reddit that found that false news didn’t seem to get upvoted more than true news. So some of these patterns are platform dependent. It could be that different platforms are designed in different ways: things on Twitter, I think, have an ability to go viral across the entire world quite quickly, whereas things on Reddit sort of stay confined within niche communities. So it could be that aspects of platform design make it so that some of these predictors of virality aren’t universal. But the initial study that found that false news spread more widely also found that false news tended to contain more emotional content; specifically, emotions like disgust or fear tended to be present in false news. So I think the advantage misinformation sometimes has in going viral online is that you can sort of perfectly craft it to go viral. You could say, I’ll make this the most negative, emotional and polarizing post, so it contains a lot of those other categories of content that we know are associated with virality, whereas sometimes true news can be less interesting.
[00:11:54.100] – Steven Rathje
And I think that’s why we often see false conspiracy theories spread widely: because they’re more entertaining, or they play into our emotions, our fears and everything. So, yeah, the first part of this paper basically found, just to review, that these five categories of content tended to be associated with virality in the prior literature: negative content, polarizing content, high-arousal content, moral and emotional content, and false content. And then the second part of the paper is where we looked at people’s preferences about what they think should go viral.
[00:12:31.730] – APS Özge Gürcanlı Fischer Baum
Thanks for summarizing those five points for us. As a person who produces evidence-based podcast content, I find these results very depressing, like you said. Right. And, yeah, so I am also very interested in the second part of your study, because people think different things about what should go viral. Right? So, yes, please tell us, what do people think should be viral when they look at social media?
[00:13:00.730] – Steven Rathje
Yeah. So, first of all, we thought this was a really important question to ask, because when people hear about some of these findings, that polarizing content goes viral or fake news goes viral, the assumption seems to be that consumers have a demand for polarizing and false content, that they want their social media to feed them all this polarizing content all day. And I think Facebook sort of plays into these assumptions as well. On their webpage about how their algorithm works, they say that their newsfeed ranking algorithm is specifically designed to show us what we want to see. So I think Facebook’s assumption here is that the content people click on or share reflects what they want to see. And it seems pretty intuitive: why would someone press like or share on a post that they don’t actually want to see? It seems like an intuitive assumption, but we thought it could potentially not be the case that people want to see polarizing content. It could be that people are drawn to or even addicted to polarizing content, but don’t actually like that content.
[00:14:09.630] – Steven Rathje
So that’s why we thought it was really important to ask, for a number of categories of content, whether people think this content should go viral and whether people think this content does go viral. Basically, for this study, we recruited a nationally representative sample of United States participants, and we asked them about several different categories of content. Many of these were the ones I just talked about with you: negative content, moral outrage, misinformation, et cetera. We also asked people about positive categories of content: accurate information, positive information, nuanced information, et cetera. And for each of these categories, we asked people, do you think this type of content goes viral on social media? And do you think this type of content should go viral on social media? And this is what we found. We found, in line with a lot of prior research, that people thought that negative categories of content do go viral. People thought that negative information, moral outrage, polarizing content, and misinformation all do go viral. So people’s intuitions seemed to be in line with prior research.
[00:15:26.520] – Steven Rathje
Research suggests that those categories of content do go viral. However, do people think these categories of content should go viral? We found overwhelmingly that people thought they should not. So when we asked people directly, they said that they didn’t think that negative or polarizing or false content should go viral. So what do people think should go viral? We found that people thought that accurate information, nuanced information, positive information, and educational information should go viral. Yet people think those categories of content do not go viral. So, in other words, we found this strong dissociation, with very large effect sizes, such that people think negative content goes viral online, and yet they think this negative content should not go viral. And also, perhaps surprisingly, we found very little political polarization around these issues. We found that Republicans and Democrats had very similar preferences about the categories of content they think do and should go viral online. Now, there were some small political differences, such that Democrats thought that slightly more negative content goes viral, and Republicans seemed to be slightly less concerned about misinformation and conspiracy theories going viral.
[00:16:43.170] – Steven Rathje
But these effects were very, very small. And the overwhelming pattern was that there appeared to be a strong bipartisan consensus that people thought that too much negative information goes viral online and thought that more positive information should go viral online.
[00:16:57.750] – APS Özge Gürcanlı Fischer Baum
Yeah. I’m glad people agree on certain things, at least.
[00:17:02.790] – APS Özge Gürcanlı Fischer Baum
So that’s why, when I first read your article, I found it fascinating: if you think positive content is important, if educational content is important, click on that. But you don’t do that, right? We don’t do that. So, yeah, very interesting. So I have another question. You mentioned this in passing: your study is based on the US population. Do you expect different results in other regions of the world?
[00:17:30.290] – Steven Rathje
Yeah, that’s a really important question, and I would expect slightly different results. I want to say that there is very little literature from outside the United States about social media in general, and some of the future directions for my research are specifically focused on trying to create more studies about social media outside the United States. And there’s reason to believe that the predictors of virality might be different in different cultural contexts. There was this great study done by Stanford researchers, and what they found was that certain types of content are more contagious in the United States versus in a more collectivistic culture like Japan. So they were comparing the United States and Japan, and they looked at the kind of content that was more, they called it, contagious. Now, this is a little different from content that is viral; it’s more about content that is influential in social networks, content that people are likely to replicate. But it’s a different yet similar construct. And they found that high-arousal negative content, emotions like outrage, tended to be more contagious in the United States, whereas high-arousal positive content, emotions like excitement, tended to be more contagious in Japan.
[00:18:58.950] – Steven Rathje
And this fits some existing theories about how emotions differ a lot cross-culturally. There’s a lot of research on ideal affect, about how Americans desire high-arousal positive emotions like excitement, whereas people in more collectivist cultures desire low-arousal positive emotions like calmness. So there are a lot of differences in how we experience emotions and in the emotions we prefer across cultures, which might influence the type of content that is more likely to spread in different social networks. And one of the future directions for my research is a multi-country study we’re doing, where we’re relying on some advances in large language models like GPT to categorize various categories of content across cultures and see what the predictors of virality are in different countries and how they differ around the world. Because it could be that very different types of content go viral in different countries, and I think we need to explore that. And indeed, it could also be true that people have very different preferences about what they want to go viral. As I said earlier, people have different ideals about the kinds of emotions they prefer, with some cultures preferring high-arousal emotions more than others.
[00:20:10.920] – Steven Rathje
So I think this certainly could shape the kind of content that we want to see. So I think there needs to be a lot more cross-cultural research on this topic, and I’m excited to do some of that in the upcoming years.
[00:20:23.450] – APS Özge Gürcanlı Fischer Baum
And I’m excited to see those. Let us know when you conduct that study. I am really excited to see it.
[00:20:30.490] – APS Özge Gürcanlı Fischer Baum
Yeah, I have another question. So you have your research group, and it sounds like you are aware of many other research groups studying this topic. Are there collectives? Do you work with organizations that are outside of academia or other groups?
[00:20:48.030] – Steven Rathje
Not that I work with personally, but I know that there are a lot of practitioner organizations focused on this issue. There are a lot of journalists who are fighting for social media regulations, and I follow tech journalists a lot and pay close attention to what is being done in the policy space around social media. And I want to talk about a few things regarding regulations. First of all, on the freedom of speech issue: there are a lot of debates about freedom of speech and censorship, which is why a lot of my focus is not specifically on content moderation. While I agree content moderation is important, it’s a very divisive issue, especially when it comes to politically divided topics. I focus a lot more on algorithms and incentive structures and what social media platforms amplify. There appears to be a strong bipartisan consensus: we found in our study that people want social media to amplify less polarizing content and less false content, and this is actually an easy fix for social media platforms to make. For instance, we know Facebook has said that right before the presidential election, they shifted their newsfeed ranking algorithm to downrank unreliable news sources and uprank more reliable news sources.
[00:22:03.730] – Steven Rathje
So this is something Facebook has done in the past, but immediately after the presidential election, they shifted their algorithm back to normal. So one of the questions is, why isn’t Facebook always upranking more reliable news? This is something people say they want. And I think the answer is that it’s probably not within Facebook’s business incentive. The New York Times reported this around 2021, and I’ve talked about this anecdote in a few of my papers before. Facebook tested out a solution with some of their internal researchers to downrank content in people’s newsfeeds that people considered bad for the world. So Facebook had people look at various forms of content and vote on whether they thought that content was bad for the world. And they also made a machine learning classifier that would flag social media posts predicted to be considered bad for the world. And then they implemented a solution that would downrank these posts. An example of a post that people might consider bad for the world would be, maybe, Donald Trump professing an election conspiracy, something like that, something that most people would view to be destructive.
[00:23:15.560] – Steven Rathje
And they found that when they downranked social media posts that people considered bad for the world, people spent less time on Facebook. And after Facebook executives discovered this, they decided not to implement that solution. So, yeah, that was an anecdote reported in the New York Times, and I think it illustrates that Facebook might be well aware of solutions that would make social media a more constructive place, less negative, less divisive, less bad for our mental health. But they might be unwilling to implement these solutions if they know that these solutions hurt their bottom line. Which is why, in some ways, I think that regulation might be one of the only options to improve social media, or at least public pressure from individuals who are aligned on this issue. And we know from the study that individuals are quite aligned on these issues: people seem to be unhappy that social media amplifies negative content.
[00:24:15.810] – APS Özge Gürcanlı Fischer Baum
That is maybe the topic of another podcast: what does it mean to regulate content? And, yeah, I totally agree with everything you say. Steve, thank you very much for this very productive and interesting conversation.
[00:24:32.760] – APS Özge Gürcanlı Fischer Baum
Thank you for summarizing your research for us.
[00:24:38.330] – Steven Rathje
Thank you. Yeah, I had a lot of fun. This is super interesting.
[00:24:41.530] – APS Özge Gürcanlı Fischer Baum
This is Özge Gürcanlı Fischer Baum with APS, and I have been speaking to Dr. Steven Rathje from New York University. If you want to know more about this research, visit psychologicalscience.org.