- Summary
- Timestamps
- Transcript
Summary
This bonus episode of Outrage Overload is about AI and political persuasion. ChatGPT, a language model trained to produce text, is being used across many industries, including writing code, policies, legal briefs, plans, and job descriptions. It has the potential to change how we get medical advice, and its newest model can even pass the US medical exam with ease. Political ads rely on persuasion to influence people, and researchers set out to measure how effective AI is at doing the same. AI is already being used for online commenting, texting voters, and writing to legislators, but research on political persuasion generally finds small effect sizes.
Max Bai, a postdoctoral scholar at Stanford University, recently conducted research on AI's ability to persuade humans on political issues. The research found that persuasive messages created by AI could be as effective as those created by humans, even on polarizing issues like gun control and a carbon tax. Max discusses this research and its implications, such as the potential for misinformation campaigns targeting voters and legislators and the need for regulation of AI's use in political activities. He also shares his background in both the tech and social science worlds and how this research sits at the intersection of his two interests.
In the study, the persuasive capacity of the GPT-3 large language model was compared with that of everyday people, and the results suggest that GPT-3 may have already caught up to the persuasive capacity of everyday people. To determine this, two experiments were conducted in which the same prompts were given to both GPT-3 and humans. In one experiment, the prompt was to persuade someone to become more supportive of a smoking ban; in the other, it was to persuade others to become more supportive of an assault weapon ban. The results showed that GPT-3's messages were just as persuasive as human-written ones. This is concerning because it means GPT-3 could be used to spread disinformation at scale.
The conversation covers a recent study in which the researchers compared the effectiveness of persuasive arguments generated by humans and by the model. To obtain quality responses from human participants, the researchers intentionally selected people who were already somewhat supportive of the targeted policies. They then sent the responses to another group of participants and measured their support for the targeted policy issues before and after reading a message. The results showed that both human-generated and GPT-generated messages were persuasive, and the difference in persuasiveness between them was statistically insignificant. The conversation also notes that the human-generated persuasive arguments did not necessarily come from experts on the topic.
Timestamps
0:00:16 “Exploring the Potential of AI for Political Persuasion”
0:03:48 Interview with Max Bai on AI’s Ability to Persuade Humans on Political Issues
0:05:43 Analysis of GPT-3 Language Model’s Persuasive Capacity Compared to Everyday People
0:07:41 Analysis of GPT-Generated Messages: Comparing Persuasiveness to Human-Generated Messages
0:10:54 Discussion on the Effectiveness of Persuasive Techniques in Political Campaigns
0:14:53 “The Dangers of ChatGPT: Exploring the Potential for Abuse”
0:17:58 “Exploring the Potential Misuse of AI-Generated Content”
0:21:22 “Exploring the Challenges of AI-Generated Content and How to Counter It”
0:26:53 Conversation on the Impact of Language Modeling on Scam Detection
0:28:59 “The Need for Increased Regulation and Corporate Responsibility in AI Development”
0:30:19 Interview with Max Bai, Researcher on the Use of AI in Political Persuasion
This transcript was generated automatically and may contain errors and omissions.
Transcript
[0:00:16] David Beckemeyer: Welcome to Outrage Overload, a science podcast about outrage and lowering the temperature. This is a bonus episode about AI and political persuasion. The AI landscape is changing extremely fast. Chat GPT was first released to the public only a few months ago. We did an episode about it fairly early on. I’m Chat GPT, a language model trained to produce text. I’m optimized for dialogue by using reinforcement learning with human feedback.
[0:01:09] David Beckemeyer: Since then, so much has changed and continues to change. Anything we say about it is going to be outdated at some level within weeks or even days.
[0:01:15] David Beckemeyer: An entire industry has sprung up overnight. Everyone seems to be a ChatGPT expert, ready to teach us how to use the AI for a price. Competition has entered the arena from Bing, Google, and others, including academia. This new AI is already being used across many industries and applications, including writing code, policies, legal briefs, plans, and job descriptions, doing analyses, and many others.
[0:01:44] C: So many of us...
[0:01:44] D: Dr. Google, right? And Dr. ChatGPT is the new technology in town, and it has the potential to revolutionize the way we get medical advice.
[0:01:53] D: Revolutionize it in a good way or a bad way? In fact, the newest model of this AI technology is so accurate that it could pass the US medical exam with ease.
[0:02:03] C: Wow.
[0:02:03] D: It’s eliminating the need for doctors. One of those doctors, Dr. Nick Coatsworth, joins us. Doc, good morning to you. So could ChatGPT be coming for your job?
[0:02:15] C: Carl, absolutely it could.
[0:02:17] D: I mean, the power of this artificial intelligence technology is just extraordinary.
[0:02:25] David Beckemeyer: One area that has received a lot of attention is marketing and promotion, using the AI to create copy for ads and related copywriting.
[0:02:33] C: Hey, it’s Ryan Reynolds, owner of Mint Mobile. We’re always looking for ways to save you money. So this year, we’re kicking things off with an ad that I created using ChatGPT, the AI technology. This is what I asked.
[0:02:45] David Beckemeyer: This brings us to the role of persuasion. Advertisements attempt to persuade us to buy something. Political ads hope to persuade us to buy an idea or a candidate. Researchers realized very quickly the potential of large language models like Chat GPT to play a role in political communication, such as online commenting, texting voters, and writing to legislators. Political persuasion is challenging, potentially drawing on complex skills like logical reasoning and clarity of expression.
[0:03:17] David Beckemeyer: Research on political persuasion generally finds small effect sizes. So how effective is AI at influencing humans’ political attitudes? Some researchers at Stanford set out to answer that question. The AI has had upgrades since the research was conducted, but it’s important to note that even using this early release of the AI, the research found that messages created by AI could be as persuasive as those created by humans in changing positions on issues, even polarizing issues like gun control and a carbon tax.
[0:03:48] David Beckemeyer: This has potentially significant ramifications. The researchers write, quote, due to the availability of LLMs, anyone can now write unlimited amounts of persuasive messages. It is now much easier to create misinformation campaigns targeting voters and legislators, threatening accurate perceptions of political events. This can ultimately undermine shared reality in the US and beyond. Our results call for immediate attention to potential regulation of AI’s use in political activities, end quote.
[0:04:22] David Beckemeyer: And that’s what we’re going to talk about on this episode of the Outrage Overload podcast. I’m your host, David Beckemeyer, and on this episode, we’re going to speak with Max Bai about this new research.
[0:04:33] Max Bai: I’m Max Bai. I’m a postdoc scholar at Stanford University. I’m in the Stanford Impact Lab, and I do research on understanding how people think about some of the most critical and controversial issues in society, and those tend to be things about race, politics, and everything in between. So that’s what I do.
[0:04:53] David Beckemeyer: The research is titled “Artificial Intelligence Can Persuade Humans on Political Issues.” Let’s dive into it right now.
[0:05:15] David Beckemeyer: So I saw this paper get referenced somewhere, and it’s pretty interesting. I’m sort of a retired tech guy now. I was in that space for a long time, but I still kind of have an ear to it and I kind of pay attention to it. So this was an interesting overlap between the stuff I’m doing now, which is the social science side, looking at outrage and all that, and my old world, the tech world, and what’s going on there.
[0:05:43] David Beckemeyer: So I’m really glad that you did this research. I think it’s important. I hope it gets some attention and maybe some people try to act on it, because I’m really concerned about this. I’m concerned that our response to how this could be used could maybe be too slow. So to jump in a little bit: your study shows that the GPT-3 large language model, which I’ve talked a little bit about on this podcast before, in your words, quote, “may have already caught up to the persuasive capacity of everyday people,” end quote.
[0:06:21] David Beckemeyer: And I come at that from sort of two perspectives. On the one hand, the persuasive power of everyday people, which I’m learning more and more about on this podcast, isn’t very high, so the bar is kind of low. But the other half of it is that it’s still really important, because at scale this kind of stuff can move people. I mean, we’ve seen disinformation campaigns before, and something like this could potentially move people. So tell us a little bit about what you discovered.
[0:06:50] David Beckemeyer: What’s the baseline for human persuasiveness, and how does it compare to what you found about the persuasive capacity of this AI?
[0:06:58] Max Bai: Yeah, so we ran a couple of experiments. In two of the experiments, what we did was provide the same prompt for generating a persuasive argument on a policy. In one study it’s to persuade someone to become more supportive of a smoking ban, and in the other one it’s a more polarized issue: to ask people to write something persuading others to become more supportive of an assault weapon ban. So, like gun control, everything about that is extremely partisan and polarized. And we just had GPT respond to that prompt versus having humans respond to that prompt.
[0:07:41] Max Bai: For the human one, we actually had to do a little bit more to nudge people to give us a quality response. As you said, with normal people, everyday people, a lot of the persuasive writing is not that great. So we intentionally picked people who were already at least somewhat supportive of those policies. We made sure there was a minimum number of words they had to fill in. We had people verify whether they were on topic and everything. So when we did the same with GPT, it passed all of these quality checks. We didn’t even need to do anything.
[0:08:21] Max Bai: So that’s kind of interesting. But anyhow, after gathering those responses, we sent them to another group of participants. For those participants who received the messages, we measured their support for the targeted policy issues before they read the message and after they read the message. And we compared whether their support went up after they read a message, against people who read about a neutral control topic.
[0:08:53] Max Bai: Things like talking about the history of the bow tie, or talking about how Americans are having more residential mobility, just things that have nothing to do with either a smoking ban or an assault weapon ban. And we found that both human- and GPT-generated messages were persuasive. And the persuasiveness of the GPT-generated messages was just as high as the human-generated ones; they were statistically indistinguishable from each other.
[0:09:27] Max Bai: We also looked at a couple of other things, like Bayesian analysis. The evidence suggests they’re pretty much just the same. That’s the gist up front.
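To picture the design Max describes, here is a minimal sketch of that kind of pipeline: the same prompt given to human writers is sent to a language model, the generated message is shown to one group of readers, and their pre/post shift in policy support is compared against readers of a neutral control passage. This is not the researchers' code; the OpenAI client call, model name, prompt wording, and all the numbers are illustrative assumptions.

```python
# Minimal sketch of the pre/post persuasion design described above.
# NOT the study's code; model, prompt, and data are placeholders.
from openai import OpenAI   # assumes the OpenAI Python client is installed
from scipy import stats
import numpy as np

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = ("Write a short message persuading someone to become more "
          "supportive of an assault weapon ban.")      # same prompt given to humans
response = client.chat.completions.create(
    model="gpt-3.5-turbo",                             # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
ai_message = response.choices[0].message.content

# Hypothetical pre/post support ratings (0-100 scale): change scores for
# readers of the AI-generated message vs. readers of a neutral control passage.
treatment_change = np.array([4, 2, 6, 0, 3, 5, 1, 2])   # post minus pre
control_change = np.array([0, 1, -1, 0, 2, -2, 1, 0])

t, p = stats.ttest_ind(treatment_change, control_change)
print(f"Mean shift: treatment {treatment_change.mean():.1f} pts, "
      f"control {control_change.mean():.1f} pts (t={t:.2f}, p={p:.3f})")
```

A difference-in-means test like this is only one way to analyze such a design; the point is simply that persuasion is measured as the shift in support relative to the control group, for both the AI-written and human-written conditions.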
[0:09:38] David Beckemeyer: Yeah. When you had the human-generated persuasive arguments, you’re saying these were just kind of randomly selected. Not randomly, but I mean they weren’t necessarily experts in persuasion or in that topic.
[0:09:51] Max Bai: Right, they were not experts in persuasion or in that topic at all. The best we did to incentivize people to improve the quality, and their willingness to actually try harder, was to offer a cash award. If someone generated a really persuasive one, we were going to give them extra money, that kind of thing. But no, it was not an expert-generated one. And to contextualize that, one thing I do want to mention is that if we look more broadly at the political science literature on campaign persuasion, for anything like the classic psychological approach of showing people an article and seeing if it changes anything, the percentage change is about two to maybe four percent. That’s pretty much as high as most psychological persuasion can go.
[0:10:54] Max Bai: It’s pretty close to whatever you can find in that literature, whether it’s expert-generated or from any other source. So in the broader picture, we were quite surprised and impressed with the performance. And given that on the team at least two other colleagues of mine do persuasion research for a living, in many regards they were pretty impressed.
[0:11:27] David Beckemeyer: Yeah, I guess it’s kind of hard to put in a practical sense. What do two to four points feel like?
[0:11:39] Max Bai: You’re right, so it depends on how you measure things. Of course, if you’re thinking about changing people’s real vote choice and everything, a lot of the time in real life, when people are deciding whether they want to engage in any political action, whether to go out to protest or vote for someone who supports a particular policy, there are a lot of preexisting beliefs that go into it. If you’re a Democrat, you just support this policy. If you’re a Republican, you support the opposite of that.
[0:12:14] Max Bai: And for most types of campaigns and political advertisements, it just doesn’t change anything by much at all. And if we’re looking at a real election, a lot of times it can come pretty close too. There was one point, in Al Gore versus George W. Bush or one of those elections, where the tally was pretty close, within a 2% range between the two candidates. So if you contextualize it again in that scenario, two to four percent, if used right and implemented widely, is something that is quite powerful.
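As a rough back-of-the-envelope illustration of the scale argument Max is making, the sketch below compares a persuasion effect in the two-to-four-point range, applied to a large audience, with the vote margin of a close race; every number here is hypothetical, not from the study.

```python
# Back-of-the-envelope illustration (hypothetical numbers, not study data):
# a small per-person effect becomes large when messages go out at scale.
messages_delivered = 5_000_000   # hypothetical voters reached with tailored messages
effect = 0.03                    # 3 percentage points, mid-range of the 2-4% cited above
voters_moved = messages_delivered * effect

state_turnout = 5_000_000        # hypothetical statewide turnout
margin = 0.02                    # a 2% margin, like the close races mentioned
margin_votes = state_turnout * margin

print(f"Voters moved by messaging: {voters_moved:,.0f}")
print(f"Votes separating the candidates in a 2% race: {margin_votes:,.0f}")
```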
[0:12:57] David Beckemeyer: Yeah, I suppose there are also people on certain issues who would be kind of fence sitters. I mean, you noted that there are places where people aren’t necessarily fence sitters, like something like an assault weapons ban, but there may be other things where they’re a little bit more on the fence. But just to reiterate, you did find that there was movement even on something like an assault weapons ban. That was one of your polarized policy issues, right?
[0:13:24] Max Bai: Yeah, I think in total we used six policies, and the assault weapon ban was one of the more divisive ones. Carbon tax, that one was also, I would say, a little more politically divided, and we found an effect on that one as well. There are a couple of other ones too.
[0:13:45] David Beckemeyer: Yeah. So that’s interesting. So you had some persuasion experts on the team.
[0:13:52] Max Bai: Yeah. So the most senior advisor, my PI, Robb Willer, he’s a persuasion expert. And second, Alder Young, he is also deep in the literature, has developed a lot of interventions, and works a lot on discovering what can help people change their minds on things. So these two, they’re definitely the experts right there.
[0:14:23] David Beckemeyer: That’s cool.
[0:14:24] David Beckemeyer: They were somewhat surprised by these findings?
[0:14:27] Max Bai: Yeah, in many ways they were. Well, of course, before we did it, we had a hunch this would work, and that’s why we started doing it in the first place. So it was still within expectation that it would change something, but still, looking at that result and seeing that there is an actual difference, we were just quite impressed.
[0:14:53] David Beckemeyer: Yeah, that is pretty crazy. And for clarity now, ChatGPT wasn’t necessarily better than humans, it was about the same. Is that right?
[0:15:05] Max Bai: Yeah.
[0:15:08] David Beckemeyer: Interesting.
[0:15:09] Max Bai: Yeah. So maybe on that, another thing I want to say, the thing we want to highlight here, is that this thing is quite powerful and can be easily abused in many ways. And the concern is not just that it is as good as humans at performing many other things, as I’ve seen reported on Twitter and by news agencies, but that even creatively generated content like this can be produced very cheaply, very fast, on a massive scale.
[0:15:44] Max Bai: With humans writing it, people will still spend a lot of time, and you have to have people who are actually motivated to do this. GPT produces the content within seconds. It can also be customized based on who you are. If I just know a little bit about you, it can change the content within seconds, and it can be easily distributed based on what we know about the targeted audience. So that part is what I find to be, how do I say it, critical to why this is something we need to think more about and potentially worry about, for the implications for society, politics, election integrity, and all of those.
[0:16:32] David Beckemeyer: Right. So when I first saw that ChatGPT was made generally available, and given my background in internet and security threats, this is where my head went right away: not necessarily just politics, but all the different ways it could be abused. And one of the first things I thought of was scamming and things like that. And we’ve already seen some reports that it is being used for that kind of stuff. And boy, it scares me that we’re this early with a general model.
[0:17:02] David Beckemeyer: I think if you made a special model specifically for doing this kind of thing, it could probably be even better. But spending time in that space can be sort of soul crushing and cause you to lose hope a little bit, because the bad guys out there are quick to jump on this kind of technology and really use it. And I would almost guarantee that it’s already being used in this way at some level, just because of how fast they latch onto new things.
[0:17:29] David Beckemeyer: People are using it for search engine optimization a lot, which is practically only a small step away from a disinformation campaign to some degree, because often you’re just trying to get your site ranked higher in the search engines even if the quality isn’t very good. And so it can be pretty close to disinformation. I know this thing is already being used for that, and it’s kind of a small leap from there to a full-on misinformation or disinformation campaign.
[0:17:58] David Beckemeyer: And I agree with you that it just needs to be looked at, but when we talk about doing something, I get concerned, because we’re kind of back to the same issue of doing anything about any other disinformation campaign: how do we do this? In many cases this stuff is propagated on social media networks, and it’s kind of up to the private companies’ policies and how those align with their incentives and so on.
[0:18:25] David Beckemeyer: Did your team come up with any specific suggestions for ways to maybe mitigate this?
[0:18:30] Max Bai: Yeah, on the point you mentioned earlier, people using this for search engine optimization. Actually, we chatted quite a lot about this between members of our team. How exactly should we treat this AI as an entity? How should people think about the regulation of it? Because in some ways, what it is producing could just be perceived or treated as a customized version of an encyclopedia, in the same way as Wikipedia.
[0:19:10] Max Bai: Maybe it is just another version of something that is already there on the web, because the training data is whatever is there. It is a more efficient way of summarizing information for you. And if that is the case, okay, should we really be worried that much? It’s just increasing the efficiency of getting information to the person who is asking a question.
[0:19:35] Max Bai: But then on the other hand, we have some preliminary data, and we have seen similar data from other scholars, on people’s evaluation and judgment of AI-generated content using the latest AI technology versus human-generated content, and most people cannot tell the difference. Well, if you are able to pass off AI-generated content as if it were human, and quite successfully, then that starts to be concerning.
[0:20:11] Max Bai: What if it starts to pretend to be human and tries to have a conversation with you with the intention of changing your mind, and in the end you find out that’s not the case? Or what if, when you’re trying to write an essay or a paper that is supposed to be written by you, a significant proportion is generated? It’s the same concern behind all this use of GPT for homework, in higher ed or even high school writing.
[0:20:41] Max Bai: It’s the same underlying concern there. In terms of guardrails, this is really quite a new challenge. Sure, we just discovered that AI-generated content can perform as well as humans at persuading people, and maybe two days after we publish it, someone has already started to use it. The policymakers, I cannot imagine how fast they would have to act to counter it. For us, we thought, okay, maybe at least let’s see: if you identify the AI-generated content as AI-generated, does it tone down the effect or not?
[0:21:22] Max Bai: In our preliminary data, we didn’t see any of that. And we also saw some other scholars’ work. Well, if you tell people this is AI-generated content and it doesn’t really change anything about how people evaluate the work, then that is another scary part. If you fully inform your audience about who wrote this and it doesn’t really change the effect of it, then it will be hard to counter just from the psychological, individual perspective. Then what you have to do has to be on the back end, either on the institutional side, deciding what kind of content is appropriate to generate, or whatever.
[0:22:04] Max Bai: Right now there are policies like, okay, you cannot generate stuff about violence, or sexually explicit content, or whatever, and perhaps things that are related to misinformation have to be part of that regulation domain as well. And a lot more effort, I think, has to come from the organizations that are behind those large language models. They have to do a lot of work on making sure their models are not misused.
[0:22:38] Max Bai: And again, time is moving so fast on all of these things. Two months ago, well, maybe it’s March now, so three months ago, OpenAI was just releasing the chat version of GPT. They started to have their first users on it, and today I don’t know how many millions of people are using it right on their desktop. Anyhow, I kind of got sidetracked on a couple of offshoots, and hopefully that helps answer the question.
[0:23:13] David Beckemeyer: Yeah, I mean, I think it raises the challenge that we face here. Like you’re mentioning, the speed with which this is going to change from here on out versus the speed at which our normal government and regulatory processes go. I don’t see how that’s going to play out. And of course, if we have rules, like you have to label it or something like that, the bad guys aren’t going to follow those rules anyway. If you’re really a bad actor, you’re going to do it until you’re caught.
[0:23:45] David Beckemeyer: And how are you going to be caught? That’s an arms race as well, right? If you’re trying to build AI that can detect the AI, the AI that’s generating the content keeps getting better and the detection has to keep getting better. It’s an ongoing challenge. Today, talking about the main operators or providers of these LLM services, that’s fine when it’s sort of a white hat company providing it, and they at least have to pay some attention to social concerns and the backlash of doing otherwise. That’s fine when they control it that way, and maybe they can put in, and they are putting in, these kinds of guardrails and such. But what worries me is that it’s probably not too far away, at the rate Moore’s Law moves and how fast prices for this kind of stuff come down.
[0:24:37] David Beckemeyer: Today it’s still kind of out of the reach of most bad actors to create their own large language model. But there is open source, and there are tools to do it if you have the resources. And over the next few years that’s going to start to cross over to where it is affordable. Now, it might cost, I don’t know, probably a million dollars to do it, maybe less, but that’s going to be a lot less in a few years, and there are a lot of people who can come up with that much money.
[0:25:08] David Beckemeyer: So now you have a bad actor operating the thing, creating the language model, and probably tuning it to do bad things. Right. So I don’t know, that’s a whole other question: how do you regulate that?
[0:25:20] Max Bai: Yeah, I think for anything like this, it’s just like any other contemporary major social issue. There is this new challenge rising, say, like misinformation. The solution is not going to be easy. It has to be multifaceted. The government has to do something, the big institutions have to do something, the platforms have to do something, and the citizens have to do something: get educated, get better at recognizing it, and not share it when they see it.
[0:25:57] Max Bai: The solution for any challenge coming from the rise of artificial intelligence, I think, is the same thing. I cannot imagine a stratum of society that is not going to be impacted by AI. And I think everybody, everywhere, has to be more cautious and think more about how our lives are going to be impacted by it and what we have to do to counter it.
[0:26:23] David Beckemeyer: Right. And as you say, I think what’s interesting, for lack of a better term, or potentially impactful here, is that, like you’re saying, it was just a few months ago when we first started getting our accounts and could play with this thing in the open. And now it’s being used all over the place. And that’s just within a few months. So you can imagine this is going to explode even more, not just in the number of users, but in the types of applications people find for it.
[0:26:53] Max Bai: Yeah, totally.
[0:26:57] David Beckemeyer: When you think of that rate of change, that’s pretty concerning, because we’re not used to dealing with that. If we have to come up with new laws and things, that’s going to be a big challenge to get anything done in that kind of time frame. But there is an awareness aspect too. I think you note in your paper that people are underestimating the way this could impact things, and it probably already is impacting things.
[0:27:22] David Beckemeyer: We don’t even know it; it’s happening out there in so many ways. And how do you inoculate yourself? I guess it’s not that different from how you inoculate yourself against any other kind of attack like this. But on the scamming example, a lot of the things we’ve learned to look for in scams are things like poor spelling or weird grammar. Well, these LLMs are really good. I can enter my prompt in whatever my native language is, the one I’m fluent in, and ask it to output English, and it’ll have good grammar; it’ll look like well-written English, and it’ll be very convincing, or at least it’ll appear authentic or genuine.
[0:28:10] David Beckemeyer: And I have a lot of confidence, sort of a high confidence factor, in the language. So I think that changes the scamming aspect quite a bit, and it takes one of the tools we had to try to recognize a scam off the list. We don’t get to use that one anymore.
[0:28:26] Max Bai: Yeah, that’s true. Based on what we see so far, it seems to be the case that within the OpenAI community they’re doing a pretty good job at guardrailing their own content. Every time you open ChatGPT you see like 15 different warnings: okay, this is not to be used for this, don’t trust it for numerical operations, and so on. And at one point, if you asked it to generate problematic content it would do it, and then two days later it doesn’t do it anymore.
[0:28:59] Max Bai: Of course, when it was incorporated into Bing, a lot of those filters somehow didn’t come with the Bing implementation, but it sounds like they’re working on it. The bottom line is that regulators do need to do something, but entirely relying on them is just not going to cut it. Unfortunately, I think a lot more needs to be done on the back end, from the creators’ side, and from OpenAI’s own practices it looks like it is possible for them to do it.
[0:29:38] Max Bai: They are able to do this, although as I’m saying this, I’m reminded of all the tech layoffs that happened at the end of last year. A lot of the people who were laid off are the social scientists, the people who work on election integrity, the people who monitor particular kinds of misinformation and make sure they don’t explode. And these are the people who got laid off. So I do feel kind of concerned, because for the corporations, a lot of their goal is still about generating profit, and you see that when it comes to profitability, these are the people who get laid off first.
[0:30:19] Max Bai: So that’s quite sad. And so of course, we cannot just entirely rely on them. It’s kind of like a dance, I guess, in many ways, right? Both parties need to do something; at different stages and in different circumstances, one’s role is bigger than the other’s.
[0:30:39] David Beckemeyer: Do you have any follow-up research on this? Are you going to pursue more in this field? What are you looking at?
[0:30:46] Max Bai: Yeah, so what we’ve done so far is about demonstrating its ability to persuade people on political issues. But what else can it do? Some of the things we’re thinking about: can this thing be used for really generating and promoting social goods, social cohesion? Can it be used to, say, counter misinformation? Or can it be used to promote mental health in some way as part of a conversation? Those are some of the directions we have been thinking about. But we’re also aware other scholars are working on this too, and we’re looking forward to finding out, either through our own research or other people’s work, exactly how we can really leverage this novel technology for the betterment of society.
[0:31:32] David Beckemeyer: That’s cool. Yeah. Okay, well, I really appreciate your time. I don’t know if there was anything else you wanted to add, but I do really appreciate you coming on, and I enjoyed speaking with you, Max.
[0:31:42] Max Bai: Yeah. Thank you so much again. Thank you so much. This is a really wonderful opportunity, and I appreciate that you reached out and are helping promote my work to the audience following your work as well. I will definitely let you know if I have anything new. Okay, awesome.
[0:32:01] David Beckemeyer: Yeah, keep me in the loop. I’d love to find out what else you’re looking at. Thank you so much. It’s been my pleasure to speak with you.
[0:32:08] Max Bai: All right, thank you very much.
[0:32:09] David Beckemeyer: All right, thanks a lot.
[0:32:10] Max Bai: Bye bye. Bye.
[0:32:25] David Beckemeyer: That is it for this episode of the Outrage Overload podcast. For links to everything we talked about on this episode, go to outrageoverload.net. I’m asking you, good listener, to join our Facebook listeners group. It’s a great place to get actionable ideas and resources. To join, visit outrageoverload.net/join. The sooner you do it, the sooner your ideas can help make the show better.
[0:32:49] David Beckemeyer: I hope to see you there in the Facebook group. Okay. Watch for a new episode in a few weeks.