Summary, Highlights, and Transcript for Outrage 013 – How moral misinformation uses emotion to generate outrage – Ritsaart Reimann



This episode of the podcast “Outrage Overload” explores the effects of affective polarization and the growing divide between Democrats and Republicans. While social media and information silos are often blamed, new research suggests their role may be overstated, and that the divide may have more to do with how people come to believe things in the political realm. The episode delves into the theory of knowledge and the information deficit model, the idea that lack of understanding can be attributed to lack of information. The podcast also explores the historical context of polarization, changes in media regulation, different types of misinformation, and the role of social organizations and podcasts. Finally, the episode examines regulating speech that incites violence and the difference between fact and belief when it comes to political identity. Listeners are encouraged to join the show’s Facebook group for further discussion.


[00:00:16] Epistemology and Moving Beyond Opinion

[00:04:05] Social Influence on Political Beliefs

[00:06:47] Historical Context of Polarization

[00:08:54] Media Regulation Changes

[00:11:33] Misinformation Types: Factual vs. Moral

[00:17:35] Twitter Contagion Analysis

[00:21:38] Role of Social Organizations and Podcasts

[00:25:05] Content Targeting

[00:27:04] Regulating Speech Inciting Violence

[00:30:05] Fact Versus Belief and Political Identity

[00:36:48] Join Facebook Group

This transcript was generated automatically and may contain errors and omissions.


Speaker A

Welcome to Outrage Overload, a science podcast about outrage and lowering the temperature. This is episode 13. The effects of affective polarization, that is, the tendency for partisans to dislike and distrust those from the other party, are all too personal. Here’s Lilliana Mason.

Speaker B

What we’re seeing today is that the divide is much more about our feelings about each other. We are angry at one another. Democrats and Republicans don’t trust one another. We are more likely to dehumanize people in the other party. We think that they’re a threat to the country, and these types of feelings are not the kind of thing we can compromise with.

Speaker A

I often hear stories about strained relationships and lost relationships with friends and family.

Speaker B

Over politics with my dad, my grandfather.

Speaker A

I blocked my own daughter. These kinds of stories are becoming all too common.

Speaker B

I do think that things have broken down. I have neighbors that we, you know, we sort of wave to each other, and that’s the extent of our relationship now. It’s really hard because these are people I care about. These are people I’m close to, that I’ve grown up with, I’ve lived in the same house with. The underlying current between all of us is very tense.

Speaker A

There’s been a drumbeat suggesting that social media and information silos, echo chambers, are a driving force behind our divisiveness and polarization. New research suggests this may be overstated. It may have more to do with the very nature of how we come to know or believe things in the political realm. We need to talk about epistemology, or the theory of knowledge. Epistemology is that part of philosophy that asks, what can we know? What can we be sure of? How do we get beyond mere opinion to real knowledge? In 2020, Neil deGrasse Tyson tweeted, quote, you can’t use reason to convince anyone out of an argument that they didn’t use reason to get into. This is a very popular meme around social media. He admits it wasn’t necessarily his original idea, suggesting it paraphrases the quote, you cannot reason someone out of something he or she was not reasoned into, often attributed to Jonathan Swift. It turns out there is no agreement on where this quote actually came from. Similar expressions have been ascribed to Sydney Smith, Fisher Adams, and many others. But the point is, with all due respect to Neil deGrasse Tyson and the rest, it’s wrong. Or at least it isn’t saying what a lot of people think it’s saying. This way of thinking feels good. On the surface, it tells us that we’re the smart ones using reason, and there are these other people, the dumb ones, not using reason. That feels pretty good. We use reason and they don’t, and that makes us always right and them always wrong. For a long time, there was this idea known as the information deficit model, which attributes lack of understanding to lack of information: the idea that if we just provide more facts, people will see the light and come around to our way of thinking. Now we call it the information deficit hypothesis because, ironically, the evidence is not in its favor. What we’re talking about comes down to beliefs, attitudes and values.
There are scholars who spend their entire careers trying to identify what a belief is, and the deeper they go, the further away they seem to get from a clear and concise definition. We can’t go into all that in this episode, but we will be exploring it more in a future episode. For now, one important concept is that how we come to believe things is a social endeavor. This is particularly true of political information and political beliefs. Here’s friend of the show, David McRaney.

Speaker C

The drive behind that is almost always at some point belonging goals. You can start at all sorts of initial positions and almost all of them funnel their way into at some point you’re in a group. And once you’re in that group, that becomes the most important part of what got you there, no matter what it was that started you there. Once you’re in the group, that becomes the most important motivating factor.

Speaker A

The conventional wisdom has been, if we get rid of echo chambers, problem solved. But it turns out only a small fraction of the population is trapped in any kind of echo chamber. So something else is going on. I sat down with a researcher who has looked at decades of research and has some ideas to suggest.

So, I’m Ritsaart Reimann. I’m currently doing my doctorate in philosophy at Macquarie University, and I’m broadly interested in social epistemology, which is a branch of philosophy that treats knowledge as an inherently social phenomenon. So rather than think that knowledge is something we do privately inside our own heads, really, most of the things we know, we know from talking to other people. So trust and testimony are very important in how we come to know things about the world. And that’s also sort of my angle on outrage, how it relates to trust, because it seems like it can encourage trust among people who have the same beliefs about the world, but at the same time, it also generates a lot of distrust, as we see especially in American politics with polarization and things like that.

Speaker A

In order to spare me the embarrassment of even pretending I could pronounce his name correctly, he suggested I call him Ritz, which I accepted with sincere appreciation, for my sake and for the sake of your ears. Now, we might believe that if echo chambers are not the problem, then we have nothing to worry about. But the authors provide this caution, quote, despite the temptation to thus conclude that democracy is alive and well and that our contemporary media environment poses no imminent threat to the prospect of reaching political agreement, there is an alternative account to be considered, end quote. We all think that we vote based on rational judgments, looking at issues, et cetera. But the research shows, quote, political behavior is driven primarily by affective party affiliation, and only in part, if at all, by policy considerations, end quote. The authors not only lay out their hypothesis, but also offer something we don’t always find and that we’re all looking for: specific suggestions for improvement. The insights you’re about to hear will change how you think about politics, social media, political identities and disinformation. And with that, let’s hear from my new best friend Ritz, speaking from his office in Sydney, Australia.


We’ve talked a fair bit about our state of affective polarization on this podcast, and we’ve talked about the political sectarianism paper with Peter Ditto over a couple of episodes, so our listeners by now are somewhat familiar with the implications. In this paper, you look at the relation between disinformation and this polarization, and we’re definitely going to dive into that. We’ve also talked on this podcast a fair bit about the political sorting stuff, and there might be new insights you want to add there. But one thing I really wanted to talk about before we jump into all that is that you point out a number of regulatory, social and technological changes that have taken place, and I don’t think we’ve spent a lot of time talking about that. Some on the podcast, but not a lot. So maybe you can speak to at least some of the highlights of those changes. I know that’s kind of a large part of the paper, but I found it pretty interesting.

Yeah. So we place this development of outrage and polarization in its broader historical context, rather, again, than just focusing on the implications of social media, really trying to trace its origins further back in time. And you guys have already spoken about the sorting, so we mainly draw on Mason’s 2018 book, and I think she does a really good job of explaining how partisans sorted not just along ideological lines, but really into social, racial and economic cleavages. She also has some really compelling data on that: people who have well-sorted partisan identities are 50% more likely to be hostile towards outpartisans, whereas people who have cross-cutting social identities don’t experience that nearly as much. So the sorting was a big part of it, and that sorting got underway in, like, the 1960s. So in addition to looking at that political sorting that occurred, we also look at technological and regulatory changes, and specifically the regulation of media. The technological changes got underway in the 1990s, obviously, with the start of the Internet. And the basic idea here is that prior to the Internet and other big communications infrastructure, publishing and producing content was a very costly business to be in, and bandwidth was very limited. So there was only so much content that could be produced, and that motivated content producers to capture the largest possible audience by sticking to really centrist and popular points of view. And then as the Internet came online, we were obviously able to produce much more content and spread it to much more niche audiences, and that’s really taken off now with things like YouTube as well, where virtually anyone with any kind of taste can find their little niche products.
So producers are able to make money by targeting very small niches, and that’s become profitable because the Internet has made the dissemination of communication so cheap. And around the same time that the Internet came on, I think this was also in the late 1990s, the FCC, which is the commission that regulates media and communication in the United States, went through a process of deregulation. So for a long time there were very strict rules in place that media outlets had to represent both sides of a story. Right? There were even rules like, if you did 1 hour on one perspective, then you should do half an hour promoting the other perspective. So it was really clear: you devote this much time to fairly and in a balanced way representing debates that are of public import. And that regulation was rolled back more and more, up to the point of what we have today, where there’s essentially no media regulation, at least in the sense that there are no rules as to what kind of content you’re allowed to produce and whether that content should be balanced and things as such.

Another aspect, I think, is that particularly with cable news, they don’t differentiate very much between when they’re doing, quote unquote, news and when they’re doing opinion and editorial. I don’t think they’re required to.

Yeah, I don’t think they’re required to anymore. And I think that’s also one of those things. Back in the day, there was a really strict division, especially between news on the one hand and editorials and opinion on the other, so that people knew, hey, this is what the facts are, and this is what people’s take on those facts is, and we’re going to keep those things separate.

Yeah, and they kind of blend all that together nowadays. Yeah. In this paper, you make a distinction that I found kind of interesting, between factual misinformation and moral misinformation, and I hadn’t really heard that term used a lot. Quoting from the paper, you say your analysis focuses on the affective implications of partisan misrepresentations of the values and character of their adversaries. Is that kind of what you mean by moral misinformation? And maybe you can talk about this distinction between the two.

Yeah, I think that does sum up what we mean by moral misinformation. And the distinction between the two is grounded in the idea that a lot of disinformation, or the disinformation associated with outrage in particular, is really not even about whether the facts are wrong or right. So you can give all kinds of accounts that don’t use false facts at all but nevertheless severely misrepresent the state of the world. And what you’re usually misrepresenting is the moral character of your political opponents. And that has two effects. One is that it moves the discussion entirely beyond fact, so fact checking is beside the point. And the other is that it gives the discussion this moral charge. And the implication is that people are going to become very emotionally invested, because we invest our moral beliefs with strong emotions. And these are the sorts of emotions that will both make you more resistant to accepting counterevidence to your beliefs and more hostile towards people who hold different beliefs than you.

Right, and this is one of the fundamental premises of the podcast, really, that we talk about a lot. Yeah, like you say, it doesn’t even necessarily have to be a completely false characterization. It can just be exaggeration. Plus, it could also be that you never put it in context, or you never say anything counter. If you just constantly say these negative things about your adversaries, it just builds.

Right, so it’s hyperbole, it’s decontextualization. I mean, more often than not there will be falsehoods, but like you say, it doesn’t depend on that. And then, like you say as well, it’s really this constant piling up, because a lot of people who listen to this kind of content get it from multiple channels, right? So they listen to radio, they watch Fox News, et cetera, not to mention any particular outlets, and it just keeps piling up and piling up. And the more frequently you hear things, the more likely you are to believe them.

Speaker A


And I think that’s kind of something that’s happened to us, right? We keep telling ourselves the other side is evil and now we kind of believe it and it’s hard to get past that.

Yeah, it’s really hard to get past that. And this is something that we touched on in the paper as well. I’m not sure how much we want to get into this, but this is really primal stuff, these sorts of sentiments, right? This is really part of our innate, hardwired cognitive machinery, and it’s got pretty good reasons for being there. Right. So if you go back a long time ago, it was really important for us to be able to identify who was part of our group and to distinguish ourselves from our outgroup. That was really important back then, but now it seems like it’s become a little bit disingenuous. It seems like a lot of people are mobilizing this kind of rhetoric because they know that it’s going to create this kind of moral antagonism, as opposed to actually being morally antagonistic. Right. It’s more self-interested, whether it’s to win votes or to sell copy or get links and clicks and things like this.

Right. I think politicians and campaigns can weaponize this, news media can weaponize this for clicks, social media can weaponize it, and the business model incentives are kind of there to do that. Yeah. You also note that once this moral outrage is out there and you have this polarization situation, politicians are more likely to throw out or endorse positions that basically can’t pass, because they’re not negotiable. And if you don’t hold all the chambers of government, it’s probably never going to happen. And they almost don’t even want to negotiate. It’s almost like they throw these bills out as a whole virtue signaling type thing.

Yeah. So that’s the cross-party cooperation. There’s some really good data showing that it’s at an all-time low, and that it inversely correlates with the amount of outrage that’s out there, which we can clearly see. Okay, so the more outrage there is, the less cross-partisan cooperation there is. It’s like you say, the politicians themselves are probably afraid to make these kinds of cross-partisan agreements because they’ll be seen as appeasers. And this has a lot to do with signaling and virtue signaling, and it seems like it really goes both ways. Right. So on the one hand, the constituency is putting pressure on its politicians to adopt these really extreme positions, but at the same time, political leaders set the example for their constituency. And it’s also been shown that normal voters take their cues from their elite representatives. So it’s almost like a vicious cycle, where the constituency is saying, we want extreme positions, the leaders embody those positions, which sends the cue back to the constituency, and it just goes round and round, further and further in one direction.

Right. Yeah. It’s like a bad spiral. Yeah, that was a cool fact, too. I hadn’t seen before that there’s less cross-party cooperation than there’s ever been. Intuitively it looks that way, but it’s interesting to see that it pans out in the data as well.

Yeah, I feel that with a lot of this data, and a lot of the theory behind it as well, it really just makes sense of our common sense intuitions. Right. We look out into the world and we see these dynamics, and then the data supports those dynamics. And underlying that are these theories of human motivation and behavior, and political theories as well, that can really explain what’s going on. And that’s interesting. It’s scary on the one hand, but it also gestures towards possible solutions, which is what we always have to be after.

Right? Yeah. Right. And I do want to close out with some thoughts on that. Yeah. And I also wanted to make a note, I know I’ve seen this in other stuff, but you also reiterate that this moral-emotional language requires the combination of moral and emotional to have the most effect. That’s an interesting detail, and it seems like that’s something in our evolutionary biology as well.

Yeah. So this is really Jay Van Bavel and William Brady. Both of them are based in the States, I think; one is at New York, the other at Princeton. They’ve done a lot of really good research on this. So empirically, they use Twitter data to analyze the contagion of moral and emotional language. And what they find is that emotional language drives the spread of content, and so does moral language. But it’s really the combination of moral-emotional terms, of which outrage is really the paradigm example, that completely drives how much engagement a certain tweet gets. And I think they find something crazy, like, for every additional outrage-related word in a tweet, that tweet gets shared 20% more. And that’s for every term. So if you use three or four outrage-related terms in a tweet, that tweet is going to get shared three or four times more. And like you say, again, it’s grounded in this evolutionary stuff. So on the one hand, emotions are really important signals because they regulate and coordinate behavior. Right. They encourage us to be prosocial, and they also encourage us to punish each other when we’ve deviated from norms that we’ve commonly agreed to. And then morality really captures those norms that we’re supposed to abide by, that help us get along, and the moral emotions are those emotions which motivate us to act morally and to punish people who act immorally. And it’s, again, two sides of the same coin there. Right. So something like outrage is both a signal that punishes someone who’s defected from our norms, and it’s also a signal to people who have the same beliefs as I do that I’m a trustworthy moral person, that you can count on me. Right. So it really binds the ingroup and sets them off against the outgroup. And that’s a classic tension or dynamic that we see.

Right? Yeah. And you also note that as we get more exposure to this moral outrage, it makes us more susceptible to false information, which seems to be a version of confirmation bias, maybe, or some kind of motivated reasoning type thing.

Yeah, I think motivated reasoning is a big part of it. Another thing that seems to be part of it is that when you’re in this emotional state, you’re just less deliberative about the information that you encounter. So you’re more prone to accept information that confirms the kind of emotions that you’re feeling, and at the same time more prone to reject information that would make you question whether those feelings are appropriate. So it’s definitely confirmation bias, but it’s also literally the cognitive mood that it puts you in: whether you want to critically and carefully engage with incoming evidence, or whether you just say, this fits my worldview, I’ll take it, and that doesn’t, so I’ll reject it.

Right. Because you talk about how the outgroup distrust effectively inoculates you from their challenges.

Yeah, that’s right. And that’s something that we try to draw a little bit of attention to in this paper, which hasn’t received a great deal of attention in the affective polarization literature. So it’s often mentioned that, hey, if we dislike each other, we’re also likely to distrust each other, because trust is, in many ways, an affective attitude. It has this epistemic or knowledge component, right, I learn things because you tell me, that’s the epistemic dimension. But it’s really an affective attitude. So if I like you, I’ll trust you. If I dislike you, I won’t trust you. And when that distrust is in place, there’s really no possibility of the transmission of knowledge, because it’s just completely blocked out by this negative attitude that I have towards you.

Right. And you see that in our real lives.

Yeah. Again, at a personal level, at a political level, you kind of experience it everywhere.

Right. So let’s go ahead and start jumping into some of these proposed solutions. I’m a bit cynical about it, because so many people are sort of doom-and-gloom about it, and I see how our politicians behave. But at the same time, I have hope, and I feel like a lot of the change might have to come from the bottom up. You talk about the idea of political elites and parties moderating their positions. It seems to me like the best way to make that happen is for us to put the pressure on them to do that, because they sort of go where the votes are. So, I mean, what’s your sense on that? How optimistic are you about ways to make that happen?

Right. So it’s kind of like the flip side of the vicious cycle that we talked about earlier. This could be a virtuous cycle, right, where a constituency starts signaling to its representatives, we’d like a more moderate position, and then the representatives adopt those more moderate positions, which will in turn signal back. I’m generally also someone who’s more optimistic about the chance of things changing from the bottom up, because it seems like a lot of the top does respond to what’s coming from underneath it. At the same time, I’m generally skeptical of any sort of unidirectional explanation. Things always have to work in multiple directions, and things are constantly causally affecting each other, right? So it’s, again, this interplay between the two. It’s a good question where that would come from or where it would start. Maybe that’s also a place for social organizations, for outreach programs, for podcasts like this, where people can get informed about what’s going on and get some idea of how they can help change things, right?

Because, as you also note in the paper, if you can change those misperceptions about the other side and show that they’re not as extreme as you think they are, that can soften some of this. That’s really interesting.

That correcting those kinds of misperceptions does work, and we contrast that with corrections of false facts, which have been shown to not work at all. Right, so you can debunk false facts, and a couple of days later you can ask people and they’ll still recall the fact as if it were true, because essentially you’ve just reactivated the memory of that thing being true. Whereas we do seem to be more responsive to these sorts of corrections of misperceptions of people’s moral character, and that seems like a really important thing to do. And we comment on the role of the media in that regard as well, saying that it’d be good if media promoted this image that most people are not ideologically extreme and are actually looking for compromise, which most people are. But we note as well that that’s just not what sells copy. So it’s just unlikely that media will do it; this picture of the ideologically neutral person is just not that exciting, right?

I mean, even with this podcast, I get criticized for not being harsh enough on the other side, depending on where you live on the spectrum, whoever the other side is for you. Right, because we try to just say what the data says, say what the scientists say, and not take positions on issues, and people are mad, like, you should be taking a position on issues, or you should be yelling at these other people for doing these things. And it’s like, that’s not what this podcast is about. There are other places for that. But it’s funny how that pressure exists. I’ve talked to some journalists as well that feel that pressure. They’re trying to be ordinary journalists, or good journalists, and report things in the way that you’re speaking of, and they’re getting pressure, not from their editors so much, but from their listeners and readers saying, look, you were too soft on these people, or whatever.

Speaker A


Yeah, I think that comes back a little bit to what you asked me earlier about some of these changes that took place in the media landscape, say, from the 1950s on, because it’s a common thing to just blame all of this stuff on social media. And there’s by now a good amount of evidence that, hey, look, outrage does proliferate online, and there are certain effects, things like echo chambers, et cetera, that also proliferate online. But none of these things are new. Right. This goes back way further, when we talk about things like sorting, where people deliberately and physically move themselves into ideologically homogeneous enclaves. Right. A real offline echo chamber is simply a neighborhood of people with the same status and the same beliefs, and you find that everywhere. But coming back to the point, when we’re talking about, say, media before the 1990s, the technological infrastructure of the news media and the way that information was propagated was just limited in terms of bandwidth. So there was only so much information that could be pumped out into the world, and there were only so many people producing that information. There were a few major broadcasters. And under those conditions, all broadcasters essentially had an incentive to produce content that was widely appealing to a broad audience. Right. You want to capture as much of the center as possible, and the fringes are just left out. Whereas now, because it’s so cheap to produce content and there are so many different avenues and venues that produce content, it’s become both possible and profitable to target really niche sectors of consumers who want that really ideologically extreme content, or just that little bit that you’re looking for. Right.
And that’s one of the ways in which the changes in the media landscape have affected this: we can now cater to those small audiences, and that creates the demand and creates the supply.

Right, yeah. And then if you go off into the online world, I mean, you go to YouTube or TikTok or something, you can pretty much find any niche of information that you want, for better or worse. Moderation is one thing that you talk about, finding ways to get the political elites to moderate their positions, but also the news media, where you talk a little bit about the possibility of some regulatory options. What are your thoughts on that?

Yeah, so talk of media regulation is always sensitive, because people get very concerned about the right to free speech very quickly, and that right is very important. I’m not a legal scholar, but the author that we draw on, who’s talked a lot about this, is Baker. He has a 2001 book where he takes very seriously the importance of free speech, and he takes very seriously the other values that democratic societies need to have in order to protect free speech.


So we need things like a safe space for public discourse. We need to have constructive engagement with each other that’s based on reasons, as opposed to just being driven by emotions. And he outlines a whole range of constitutionally admissible changes to the regulation of media that would protect free speech but at the same time prevent the worst kinds of things. And I guess it really depends on the country as well. Right? I think the United States is really a country where free speech is valued very highly and where media regulation is very low. In Europe, that’s very different. There’s a lot more media regulation in Europe, especially in countries such as Germany, where they take this very seriously. For example, there’s free speech and then there’s hate speech, and the German government is very quick to say, look, that’s hate speech, that incites violence, and that doesn’t count as free speech. Right. So there are things that you can identify, and a lot of the stuff that we see in outrage media definitely verges on speech that incites violence. And once that’s at play, regulation seems appropriate.

Right? Yeah. You get so many of these events, and a lot of times it traces back to the guy having gone down some rabbit hole of this kind of information. And these pundits that want those viewers and want that airtime and want those ratings just say that stuff, thinking there’s no consequence. I mean, they probably do know there are consequences, but they do it anyway, and there are consequences to that stuff. So, yeah, that’s an area that definitely needs to be cleaned up. And now this kind of free-for-all, this craziness we have with some of these social media platforms that aren’t regulated as well, is a challenge. But regulating morality is hard too, so it’s not a super easy answer. But yeah, I don’t know that Baker book, so I’m definitely going to check it out, because it sounds like he covers this in a thoughtful way, as opposed to some of us who do it in just a reactive way.

Yeah, I think that it’s definitely some of the best work that I’ve read on it. I think he’s a legal scholar and an economist and he sort of takes all these things into account and he gives a really thorough account.

Yeah. So if you've got another couple of minutes: I don't think it's in this paper, but I was curious to get your thoughts on it. We talked before about your paper, and I've read some about this as well, that fact-checking this factual misinformation doesn't have as much effect. But then that raises the question of what we do about it. I recently spoke with an advocate who deals with antisemitism, and you have something like Holocaust denial, which is a factual misinformation thing. I mean, how do you deal with things like that if the fact-checking doesn't work?

Right. That's a really good question, and I think it should probably be a two-pronged approach. We do want to correct factually false information, because I'm sure there are some people who respond to those sorts of corrections. The question is, you know, what those facts are being enlisted for. Because you can have false facts enlisted to promote good things, right, and you can have real facts enlisted to promote bad things. So facts are one thing, but it's really a question of what they're being enlisted for, what kind of service they're performing. And I think that often both true facts and false facts are enlisted for good or bad ends, and those ends are usually related to people's moral convictions and their motivations. So if you're motivated to believe a certain thing, then you're more likely to believe it regardless of how true it is. So in some sense I do feel like the motivations and the convictions and the misperceptions sit a little bit more fundamentally than the facts.

Okay. It still seems like a challenging problem.

It is.

Yeah. So this paper also kind of reaffirms, as you note in the stuff you cite, that in some of the early days of looking at these problems a lot of people were sort of saying, well, the whole problem is the echo chamber thing, and if we break the echo chamber, everything kind of takes care of itself. The research is showing that it doesn't really work that way, and sometimes if you just dump information on people that counters their side, or talks about the other side more, it actually just makes them dig in even harder.


But on the other hand, as you propose here, you talk about the idea of participation in cross-cutting social groups, and there is a lot going on in that space. I will say there are several organizations, and you almost wish some of these nonprofits would get together and work on it, because there are so many of them trying to do this, sort of trying to do mediated dialogues and things like that, and they do seem to have an effect. But I want to check: is there sort of a contradiction there? I mean, what's the difference between the echo chamber, versus kind of the sorting, and then this idea that you can get involved in these non-political social interactions, right? That's kind of one thing you propose.

Right. So one thing is that once you make that political identity less salient, people will be more open to each other. Right. So if we first meet each other and we're discussing a sport that we both like, right, we're playing a game that we both like, then we start building positive affect towards each other. And if, after a couple of hours of doing that, I find out that you're on one side of the political spectrum and I'm on the other side, I already like you as a person. And if I already like you as a person, that could actually make me realize, hey, these people from the other side aren't so bad. Right. And what seems to happen a lot, especially online and also in the general media, is that we're introduced qua our political identities, right? So the first thing that we get to know about someone is their political identity. And when you start on that foot, you kind of take away the space for people to get to know each other and to actually realize that they have a lot in common. Another difference that maybe does separate the virtual from the offline a little bit is that humans are naturally very empathetic creatures. Right. We actually want to get along with each other, in most cases, for most people. This doesn't hold for everyone, but most people generally want to get along with most other people. And that's also, again, really driven by cognitive mechanisms that we have, like these mirror neurons that make us empathize with the people that we're speaking to, because we're literally mimicking the emotions that they're experiencing, and that we obviously don't have online. Right. I don't empathize with online avatars. So it's much easier to be much more of a hothead online than it is in face-to-face settings. And you see that a lot: people hide behind anonymity and say all sorts of stuff that they would never say to someone face to face.

Yeah. We’ve covered a couple of proposed kind of remedies or paths that could maybe improve things. Anything else I missed there that you want to throw out?

I think, yeah, you covered sort of the main four: these elite cues, though we discussed that it kind of goes both ways, right, elites respond to their constituency, so it really needs to go both ways; these cross-cutting identities; and regulation, which is a tricky one. Out of these, as far as the empirical evidence goes, I think correcting these misperceptions seems to be maybe the most easily implementable and perhaps also one of the most effective ones. And that also seems like something that would occur naturally once we engage in these kinds of cross-cutting, non-political interactions, right? We kind of realize, hey, these are people just like me; on most things we agree, and there are just a few points where we disagree, and that's totally fine. So I think it's the combination of those things, and they naturally happen together.

Yeah. And I do really like what you were saying before about sort of bottom up and top down, kind of working together and having some efforts on both ends of that. So that is true. You probably can’t do everything from the bottom up, right?

There needs to be some kind of uptake. It just takes a few people who are willing to make those kinds of changes, right? Someone needs to take some risk, lead by example, and then show the rest that, hey, this is another way of going about these things. Because that's the important thing to remember as well: none of this is inevitable. This seems to be sort of a strange outcome of technological changes, regulatory changes, just things that are happening in the world. But none of these things are inevitable. And, yeah, there's always room for change.

Speaker A


Well, thank you very much, Ritz, for coming on the program. I really appreciate it. I really appreciate you giving me the time. It’s really been awesome to talk to you. I’ve really enjoyed it.

Awesome. Thanks so much for having me, David, and looking forward to hearing the podcast come out.

Speaker A

That is it for this episode of the Outrage Overload podcast. For links to everything we talked about on this episode, go to outrageoverload.net. I'm asking you, good listener, to join our Facebook listeners group. It's a great place to get actionable ideas and resources. To join, visit outrageoverload.net. The sooner you do it, the sooner your ideas can help make the show better. I hope to see you there in the Facebook group. Okay, watch for a new episode in a few weeks.
