# Will AI Populism Decide the 2028 Election? | Jasmine Sun

*Author: David Hoffman, Ryan Sean Adams*
*Published: May 13, 2026*
*Source: https://www.bankless.com/podcast/ai-populism-warning-shots-before-2028*

---

**TRANSCRIPT**

David: [0:02] Bankless Nation, we are here with Jasmine Sun. She writes about AI,

David: [0:05] technology, and politics. She is a contributing writer at The Atlantic and recently has a New York Times opinion piece on AI and the permanent underclass, a phrase we are all too familiar with here in the world of crypto. She's also the author of the AI Populism series on her Substack, jasmine.news. Jasmine, welcome to Bankless.

Jasmine: [0:24] Thanks so much for having me.

David: [0:26] Jasmine, you put together a definition for AI populism. You wrote that it's "a worldview in which AI is viewed not only as a normal technology, but as an elite political project to be resisted." This is really what we want to explore here with you today on the show. Maybe we can start with this: how big is AI populism as a political issue domestically here in the United States? And we want to get to, do we think AI populism will be a relevant issue in the 2028 election? So maybe we can start with that first question. Just how big do you think AI populism is in the world of politics?

Jasmine: [1:04] Yeah, thanks for asking. I've been thinking about AI populism a lot over the last few months. I've been noticing this mass movement that is sort of growing around the AI backlash, and in particular, noticing how very different interest groups and very different factions, different sides of the aisle, are coming together to protest AI. And so, you know, when I'm in Washington, D.C., I'll notice that there are family-first conservatives sitting with antitrust people, sitting with environmentalists, people who would never be working side by side, but who have united in order to push for AI regulation.
Jasmine: [1:33] And that was sort of what really got me thinking about AI populism.

Jasmine: [1:36] In terms of how big of a force it is in the U.S. right now, I would say that it's not a primary force in American politics yet, but it is rising extremely quickly. Some of the best polling that's been done on this topic is from David Shor's Blue Rose Research. And what he's shown in his polling is that among a list of roughly 40 different issues that American voters might care about, AI ranks 29th out of 39. So it's not super high, but it has risen in salience faster than any other issue over the last year. And so in terms of how quickly it is entering the broader political conversation, I think AI is rising really fast. And the other thing that I'm starting to notice is that AI is not just a separate issue. Most people who are thinking about AI may not have particular opinions on, you know, which model is the best or should we do chip export controls? They're really seeing AI as part of these broader conversations around affordability, economic mobility, geopolitics. And those are issues that do rank very high on Americans' list of concerns. And so if AI is seen as, you know, a bogeyman, or is very tied to conversations around land use in their neighborhoods, around economic mobility and whether you're going to have a job, then AI will be a much bigger part of the political conversation than we would otherwise expect.

Ryan: [2:52] So it's rising fast, but it's still 29th on the list of issues. So the top five issues have got to be like the economy, jobs, inflation, that type of thing. And yet we see some of the most savvy politicians that we have in the U.S. seem to be doubling down on AI populist messaging, maybe tripling down even. Bernie Sanders, it seems like, has made it sort of a cornerstone piece for him.
In other words, he's kind of betting heavily on the topic of AI populism and putting a lot of his chips in. Why? If it's only 29th, wouldn't people rather hear about inflation

Ryan: [3:29] and jobs and other things that are core to the Sanders message? Why is he betting so hard?

Jasmine: [3:34] I mean, because they're tied, right? I think it's because they're tied together. I have the Blue Rose research pulled up next to me. The top five issues, like you say, are cost of living, the economy, corruption, inflation, and healthcare, right? And that's roughly the set of issues we'd expect. My guess is that those five issues probably haven't changed all that much over the past, you know, 20, 30 years. My guess would be two things. One is, if AI is a thing that you can blame for the economy, for the cost of living, for corruption, inflation, healthcare, then you're able to tie it into the issues that Americans do really care about. Right. And when, you know, you have these AI CEOs saying AI is going to take all the jobs, when you have these questions about whether we're in a bubble, or the fact that, I think it was something like a huge fraction, 30 percent or something, of US GDP growth in 2025 was from data center and AI-related investments, then it means that your questions about cost of living and the economy are very tied to AI. And then the other thing that I think is going on with Bernie is I think there is an element of opportunism, right? And you don't just see that from Bernie. You also see it from other politicians. You may have been saying the same message on cost of living and the economy and the billionaires year after year after year. But now you have this new force that's showing up, and, you know, its leaders are also promising it's going to change everything. It's going to take all the jobs.
"We are like the only thing that matters in the economy now." And so maybe if you feel like your messaging wasn't resonating before in terms of getting people to support universal health care or a higher minimum wage or whatever,

Jasmine: [4:59] AI is a brand new shiny reason to sort of build support for the policies that you might have already wanted to pass. You see that from people like Bernie. You also see that from folks who want to, say, increase speech regulation and censorship or content moderation for tech platforms, where there are folks who were already very interested in applying stronger age and kids' safety laws or stronger speech regulations on tech platforms. And now that AI has shown up, it's become kind of an extra reason to push for the thing that people were already pushing for. So I do think that AI matters, but I also think that a lot of politicians are being pretty opportunistic about, you know, pointing to the shiny new thing and saying, maybe this is a reason to do what I've been saying all along.

Ryan: [5:37] I kind of wonder if this opportunism is actually going to stick in the hearts and minds of the American people, though, right? There was something similar that we recall, at least Bankless listeners will recall, with crypto: Elizabeth Warren and some politicians tried something similar with kind of an anti-crypto policy. This was in 2023, 2024.

Ryan: [5:57] Our listeners will recall kind of a campaign slogan,

Ryan: [6:00] something she promoted: Elizabeth Warren is building the anti-crypto army.

David: [6:05] Right after the fall of FTX, which was a highly opportune time to broadcast that message.

Ryan: [6:11] Yeah, it was, you know, Sam Bankman-Fried, and you have kind of the corrupt crypto bros and this weird technology that no one really understands. And there was a bubble and there's NFTs and everyone hates it anyway.
And so there seemed to be an effort that was somewhat contrived, an opportunistic effort to lump all of these things together and have kind of a theory-of-everything message around populism in some of the campaign messages for Elizabeth Warren. But it didn't seem to really stick or hold. Obviously, the crypto people didn't enjoy that she was building an anti-crypto army. But I think the normal people just looked at it and were like, huh? Anti-crypto army? I care about jobs and the economy and inflation. What are you talking about? And that messaging didn't really stick. And I'm wondering if that will be a repeat story with this AI populist opportunism

Ryan: [7:05] that we're seeing, where the politicians are trying to group things that just don't

Ryan: [7:09] exactly belong together in a voter's mind.

Jasmine: [7:10] Yeah, I mean, I could see why you would think that. And I do think that looking at, you know, parallels to crypto is definitely interesting. I think that AI has some pretty distinct differences. One is it's just a far bigger part of the economy than crypto ever was. Crypto was not driving like 30 or 40% of GDP growth over the course of a year. Yes, there are Bitcoin mining operations, but these are not showing up in neighborhoods as much as data centers are. Most people at their work are not being forced or encouraged to use crypto as part of their jobs, nor was there as high of consumer adoption, even from people's own volitional use of crypto. That was always a niche thing. Crypto was very hard and confusing to use, and my guess is that most Americans never really got in the habit of using crypto on a regular basis, whereas ChatGPT is like the fastest-growing app in human history. Right. And so I think that in terms of the salience of AI to a lot of normal people, it does feel like a more relevant thing. There are also other differences.
I think the AI leaders have been very different from crypto leaders in their messaging. The way that you describe the Warren dynamic, which is not something I'm personally familiar with, I didn't follow crypto quite as closely, but it sounds like Elizabeth Warren was forging one narrative and people in the crypto industry and maybe many crypto advocates had another perspective.

Jasmine: [8:24] In AI, one thing that's really interesting, and that has always been really interesting about AI, is that the risks that the populists are talking about are many of the same risks that people in the industry are talking about, right? Like Dario Amodei is out there saying that 50% of entry-level white-collar jobs are going to go away by 2030. And so that adds a lot of credibility to the message when the people building the technology are saying, actually, that's true. This stuff is going to hurt you. It is going to take your job.

David: [8:49] There's something dislocated to me when you tell me that AI is 29th in terms of importance in politics, yet... We have, I mean, maybe this is just cherry-picking or just picking out a few bits of data, but you have entire communities showing up at town halls to tell people that they don't want data centers in their communities' backyards. That doesn't ring like a 29th most important political issue. And maybe it's something, as you just said, like the AI tech leaders, Dario, Sam, they're saying, oh yeah, we're going to completely rewrite the social fabric. And well, what does the social fabric do as a result of those statements? It kind of gets scared, kind of gets offended, decides to show up where they know how to show up, which is in their communities. And so there's something uniquely galvanizing about AI. And so when I hear the 29th most important political subject matter, I feel like that's a lagging indicator.

Jasmine: [9:48] Yeah, and I'm watching the trend lines.
Like I'm looking at the fact that it is number one for fastest-rising issue, with number two being the war in the Middle East. So this is as of February, to be clear.

David: [9:58] Yeah, and then there's one more thing I'd like to introduce: there's actually been violence on the table. Sam Altman's home has been the target of two attacks, one with a Molotov cocktail, another one with some bullets. I think there are others. And then this one's not related to AI, but for some reason, the Luigi, I don't know how to pronounce his last name, but the individual who killed the—

Ryan: [10:18] Mangione, David.

David: [10:19] He's all over the place. Yeah, the healthcare CEO, like the political assassination. And then we have, you know, people showing up with Molotov cocktails at Sam Altman's. As a political topic, it's just far more galvanizing and motivating than any other. What do you make of how some people really feel motivated to do big, drastic things when it comes to AI, and what that means for

David: [10:45] the future of the 2028 election and domestic politics?

Jasmine: [10:49] Yeah, I mean, I think that, again, AI has sort of become almost this political bogeyman. I think in some ways it reminds me of the way that China showed up in the discourse over the last decade, where everything was "because China." We need to do AI because China. We need to reinvest in manufacturing because China. We need to educate our kids better because China. The specter of China competition and China eating America's lunch on the economy, on geopolitics and whatever, was sort of used as an all-purpose justification in Washington, D.C. And I think that sometimes this is fair. Again, I think some of the AI risks are really real. I think that China competition is a real thing.
But I think it also comes from the sense that when there's a big other force in the world, this big alien force, whether that's another country like China that's very foreign to people, or whether it's this specter of superintelligence, and people don't really understand it, but it promises to change everything, and it seems very powerful, and there's a lot of money behind it, it becomes very easy to sort of blame it and tie it into a really wide range of issues. But yeah, I think that this opportunism is probably going to accelerate going into the 2028 primary season. It's going to be a crowded primary, most likely, on both the Republican and the Democratic sides. And we're already starting to see some of the likely candidates picking this up as part of their campaign messaging. It's very notable to me that, you know, Ro Khanna and Mark Kelly, both of whom are expected to put themselves in the running, have been doing these big AI action plans. And Josh Hawley on the right, for example, has also been especially active in AI legislation on kids' safety, on jobs.

Jasmine: [12:22] I've heard from other folks who haven't necessarily introduced plans yet, but who are expected to do so. And I think, again, it's because you always need a galvanizing new thing that's going on in order for these politicians to justify why they are the unique ones to sort of meet the moment with their plans. And AI can also, interestingly, be kind of distorted to fit any of these plans. I think another thing is that it collides with pre-existing populist sentiments in America, right? We're already seeing rising distrust of institutions, rising distrust of elites, distrust of billionaires, corporations. That's been a growing sentiment in the U.S., a growing resentment, long before AI.
And with how wealthy these AI billionaires are, with how much revenue the companies are making, Anthropic hitting a $30 billion run rate recently, with the scale of these data center investments, I think that AI is a very good target for a lot of this anti-billionaire, anti-corporate sentiment.

Jasmine: [13:22] And so, you know, even when I talk to accelerationists, even when I talk to people who are very pro-AI, or when I talk to AI executives, they understand that they are very unsympathetic, right? Most Americans do not relate to Sam Altman. They do not find him relatable. They know that they personally are getting no piece of this pie. Remember, these are private companies. And so most people have no way of sharing in the wealth of this thing. And so it's very easy to blame the AI billionaires, because they're kind of culturally weird. They're really far away from you. They're not sharing their wealth in any way. And they are kind of transforming the whole economy and society. And so I think that they're also a politically convenient target, and one that I expect is going to get more ire and more hatred over the next couple of years

Jasmine: [14:06] as the presidential primary really kicks into gear. One crypto contrast I think is really interesting, for example, is

Jasmine: [14:13] the super PACs that the industry has created, right? During the crypto era, Chris Lehane, who now works for OpenAI, was one of the critical people shaping the Fairshake PAC, which lobbied for pro-crypto legislation. And he was really effective with a lot of that. They went after candidates who really wanted to crack down on crypto. This scared off more candidates from doing the same. And for the most part, a lot of potentially onerous crypto legislation was avoided, and Fairshake mostly flew under the radar for normal people.
Whereas the same playbook was tried for AI, and is being tried, with the Leading the Future PAC, also shaped by Chris Lehane, as well as some other AI venture capitalists and executives. They went after Alex Bores in New York for, you know, pushing New York state AI regulation. And actually the opposite thing happened, where Alex Bores was ranking like number three in the polls. He was kind of an irrelevant guy who was going to lose. The AI billionaires go after him, start running attack ads. He starts running his own ads being like, "AI billionaires hate me," shoots up to number one in the polls, or like neck and neck, number one, number two. And he now has a much better chance of winning now that Leading the Future, this AI super PAC, has gone after him. I've seen the same thing happen in other districts, where when Leading the Future, the AI super PAC, endorses a candidate, the other person in the race will say, thank God I haven't been endorsed by the AI billionaires. And so you have enough of this populist sentiment that it's actually a bit of a political liability to be partnered too closely with the AI industry.

Ryan: [15:35] In so many ways, I think some things that happened with crypto were really a dress rehearsal for AI. I do want to get on this thread of the violent attacks, though, because that is somewhat new in American life. And I'm curious about this thread. I think you called some of these things, like the attack on Sam Altman, the warning shots. The Molotov cocktail thrower was a 20-year-old. He was part of a PauseAI Discord group. In some of his writings, he said, sociopaths, psychopaths are gambling with your future and with the lives of your children. I'm wondering, between this and Luigi Mangione's murder of the UnitedHealthcare CEO, Brian Thompson, are those, the attacks on Sam Altman and the murder of a healthcare CEO, all part of the same movement?
Or is there a particular thread that is targeted towards kind of the tech leaders and the AI leaders that's separate from the attack on healthcare executives?

Jasmine: [16:38] Yeah, super interesting.

Jasmine: [16:39] I think my argument would be that they are not part of the same movement.

Jasmine: [16:44] They have different motivations for their attacks. For example, the attacker of Sam Altman, the guy who threw the Molotov, he had written some blog posts about existential risk in particular and his, you know, Eliezer Yudkowsky-style fears about how AI was going to kill us all. So he definitely had some AI-specific fears. The thing that feels really similar to me, when you look at a lot of the recent assassination attempts or successful assassinations that have happened over the past few years, is that a lot of them are committed by very online young people who spend a lot of their time in Discords and in these very niche online communities that often tend to develop more extreme beliefs.

Jasmine: [17:23] Charlie Kirk's murderer did the same thing. He was also a Discord lurker, very young as well. And I think that it also reflects the fact that political violence in the U.S. has become more prominent. And that's something that political science researchers have found too, both when they look at the incidence of political violence and when they poll the public on, do you ever think assassination is justified? Do you think that violence is justified? And now, whether you poll for right-wing figures or left-wing figures, you get numbers like 10 to 20% of Americans thinking that assassination attempts are justified when they're directed at people who you think are bad people, whether that is Nancy Pelosi's husband or whether that is Donald Trump or whether that is the UnitedHealthcare CEO.
And so the thing that I notice with Sam Altman's attackers, as with the other attackers, is that these are young people who have developed a pretty nihilistic politics, whose views might be increasingly extreme as a result of participating in online communities where people reinforce each other's beliefs really quickly in this cycle, and who also believe that they have no other outlet but political violence. When I think about the resentment that people feel, or

Jasmine: [18:31] why crazy things like this happen, like, I am no fan of political violence, you know, why would someone do something like this? What I really see is that these people no longer believe that the democratic system works. They do not believe they have any other channel to, quote unquote, have a voice, or to shape the direction of what happens to politics, what happens to the economy. And they see direct action, in this case direct violent action, as the only way of making their voice heard in order to stop some of the changes they think are coming. I see this at a lesser scale with things like data center protests. A lot of my friends in the AI industry, for example, think that the data center protests are really stupid. They're like, data centers are the wrong target. If you are worried about AI safety, you should pursue regulation or something. But I'm like, do normal people have any channel to pursue regulation, or to shape how these models are trained or what the products look like? They don't. They don't know anybody who works in AI policy. They don't know anyone who works at an AI lab. If they feel like they are being forced to use AI in particular ways that they don't like, or that it's threatening their job or their kid's safety, they do not actually have a lot of channels to express that discontent. It's not something you can vote on. It's not democratically governed.
And when people are really nihilistic and very distrustful of these companies, which is how a lot of folks feel, they are going to go for things like grassroots protests, or even, in the extreme cases, political violence. And so that's one of the things that I notice when I see more incidents of violence, whether it's against healthcare executives or AI executives. It's people saying, I don't like the way our healthcare system works. I don't like the way that

Jasmine: [20:01] AI is affecting my life. And I have no idea what to do about it. And I feel like I have nothing to lose anyway, because I feel very bleak about the future. And so I might as well shoot someone. I think that's really scary.

David: [20:12] Yeah. Yeah. AI is definitely in this moment of time in which there is convergence on a number of different things. Wealth inequality is at all-time highs. The last tech boom, social media, you know, promised global connectivity to all of our friends, and we all understand that we're being fed something completely different. And everyone is kind of disgruntled about that. And so, like you said, distrust is at all-time highs, just being chronically online is probably at all-time highs. And then all of a sudden we have this AI industry, which, as you're kind of alluding to, is a pretty good bogeyman to express a lot of our frustrations in society upon. It's kind of this blank slate. What are you upset about? Well, you can point it at AI in some particular way. And to your point, a basic psychological principle is, if you thwart any individual human's goals, what are they going to do? They're going to lash out. You back someone into a corner like a dog, and they have no choice but to bite. And I think with wealth inequality, you have a growing number of people who probably feel something like that.
It's like, I don't know how to improve my circumstances, and then here we have a new wave of technology. And you have the CEOs, the leaders of that technology, really not doing themselves any favors.

Jasmine: [21:31] No.

David: [21:31] Like Sam Altman and Dario are both like, yeah, we're going to do a mass job wipeout and it's going to be sick.

Ryan: [21:38] Sam has changed his messaging.

David: [21:41] Sam has changed his messaging.

Jasmine: [21:42] But the thing is, A, he changed his messaging after the New York Times permanent underclass piece. I think it was pretty clearly responsive to that. B, if you read his tweets closely, he says in the second tweet, I think people will be more fulfilled than ever, but we're going to have some painful transitions along the way. And that's the thing that really bothers me. It's what they all say. They say that in the future, 20 years, 50 years, whatever, down the line, we're going to have this amazing utopia where AI does all the work, all the diseases are cured, consumer goods are really cheap, housing's cheap, whatever's cheap, and life is going to be perfect. But they all talk about this transition period, where it's kind of a euphemism. They'll say it really quickly, like, yeah, there'll be a bit of transitional friction, but it's going to be okay. And what do people hear? What do they mean by transitional friction? What they mean is that if you are a current worker, not somebody 50 years in the future, if you work as an illustrator or a copywriter or a young software engineer, you are kind of screwed. And so even in Sam's new sort of approach to this, he's still admitting that a lot of people working right now are going to be screwed over on the way to the utopia. And so when people hear that, they're like, man, I don't want to be screwed over.

David: [22:43] Yeah, yeah.
There's that stat that something like 80% of the American labor force is one unexpected medical bill away from poverty. And when you hear Sam Altman say that, oh yeah, there's going to be a painful transition, well, a painful transition counts as an unexpected medical bill.

David: [23:00] And so this is probably making things feel a little too real or threatening to the average worker. I want to know what you think people like Sam Altman or Dario, the leaders, and also people at these companies, think publicly versus what they think privately. Like, maybe there's a gap between off mic versus on mic. This is a quote from your article in the New York Times: "Tech industry sources expressed more extreme concern about the labor market impacts of AI in private conversation, but suddenly became optimists once I turned on the microphone." And so I kind of want to understand, give us a take about what you think people believe behind closed doors, all the people inside of the AI elite Silicon Valley circles.

Jasmine: [23:45] Yeah, I mean, the reason I wrote this New York Times opinion piece was in large part that I felt like people were saying things behind closed doors that they were not willing to say on the record. And I felt like, because I had at least heard some of these conversations and I was aware of the sentiment, I could piece it together and sort of lay out a case, with publicly available information and a couple anonymous quotes, as to what people really expected.

Jasmine: [24:08] And even when I was reporting the article, I noticed this happen, where there might be a person who I talked to just as part of my normal, you know, life living in San Francisco. We'd just chat about AI, and they'd say something like, yeah, I think the median person is screwed. I don't know what I would do if I was 17 and I didn't have a lot of money. I don't think I could go to college.
I have no idea. And then I'd ask that person, hey, would you mind doing this interview for the piece? I'm trying to make a case for managing disruption better. That same person would say, sure, I'll do the interview. But then in the interview, they'd focus on stuff like, well, you know, I think AI can help people start a lot of small businesses. And they would be super reluctant to say any of the things that they had said maybe an hour or a day before to me, the same person. And this actually freaked me out more, because it wasn't just that people had these bleak predictions about what was going to happen to the economy and to workers, but that they were changing their tune as soon as I turned on the mic and asked them to go on the record. And I noticed this happened with multiple people. Some people wouldn't go on the record at all. And one person, a high-powered venture capitalist, told me, a lot of my executives are telling me that they want to lay off their workers with AI. But to be honest, Jasmine, I don't think they're going to talk to you for your piece, because they don't want to be the bad guys. They know that they're going to get backlash for saying that.

Jasmine: [25:24] And so I feel really frustrated when people say things like, you know, Dario is just trying to do marketing and hype for his company, and the reason that he's predicting these crazy things is that he doesn't actually believe it. It's just marketing. I'm like, hey, he does actually believe it. I feel pretty certain he actually believes it. That doesn't mean he's correct about the way it's going to play out, but he at least believes he is correct. And second of all, it makes him look worse. It makes people more anti-AI when he says that. So it doesn't make sense as a marketing strategy.
Jasmine: [25:51] And then third of all, the vast majority of AI leaders, researchers, and executives who hold the exact same belief as Dario are not willing to say it out loud, because they don't want to be the one targeted for laying off their workers with AI or for building the worker-replacing technology. And so I actually do think that the belief that there will be, at minimum, mass job displacement or a near-term disruption is super, super common. I think people differ on, like, will there be jobs in the far-off future? The permanent underclass belief is more niche, the idea that everyone is permanently screwed. But I think that the belief that AI will exceed the abilities of basically every human, and that this will cause mass job disruption in the near to medium term, is pretty common among folks who I talk to in the AI industry.

Ryan: [26:34] So you think when Dario says 20% unemployment, you think he really means it. You think he actually thinks that's what's going to happen. And so this is a warning for the world to get ready for that.

Jasmine: [26:45] Yeah, I do think so.

Ryan: [26:46] Let's talk about whether he's right or not, because there is significant pushback on those unemployment numbers. People say, people like Dario and Sam, they're not economists. One of the sources of that pushback is Marc Andreessen, who I think enjoys pushing back on a lot of your work, Jasmine. So, I mean, he'll point to the lump of labor fallacy, right?

Ryan: [27:09] They call this classic zero-sum economics,

Ryan: [27:12] the idea that there's only a fixed amount of work in the economy and then you have to sort of split it up. Well, that's not really true. That's the lump of labor fallacy, of course. We can have grow-the-pie types of gains, productivity gains, new industries, new demand. The classic case of the lump of labor fallacy that everyone cites is ATMs. There was a time when people thought ATMs were going to kill the jobs of bank tellers, in the 70s and 80s.
What actually ended up happening in the decades that followed was we got more bank tellers. Their numbers actually grew because demand increased. And we've seen the same thing with radiologists. AI was supposed to wipe out radiology jobs, and radiologist jobs are growing. Even programmers right now, maybe not entry level, but the demand for programmers, at least by some measures, is increasing as a result of AI. They'll also point to deflation benefits to labor. So they'll say AI is a deflationary force. It's making everything cheaper, in particular services. So we want better healthcare services, time with a doctor. Well, you have a doctor on an app in your phone with doctor-level intelligence, or a therapist, or a psychologist, or a lawyer, or name your thing that you want to make more affordable. This is all a deflationary effect, and that will benefit labor as well. And then lastly, Ryan: [28:32] Someone like Marc will dismiss all of the things that even people like Dario are saying as kind of a particular lens on the world, maybe like a doomer socialist type of take: that you're taking your worldview and applying it to AI and saying, you know, here it is. You're being politically opportunistic about things. I'm not saying you in particular, of course, Jasmine, I know you are reporting on these things, but this is the pushback on the unemployment numbers: that's not actually how it's going to play out. And even if Dario believes that's how it's going to play out, we've had technological revolutions throughout history, and they've led to more productivity. They've led to more positive-sum games for more people. So why wouldn't it play out this way? What do you think is actually going to happen here? Jasmine: [29:18] Yeah, I mean, that was a lot. I don't know, do you want me to just say what I believe, or do you want me to make the steel man for Dario's case? Because those are not the same, because I don't agree with Dario either. 
Ryan: [29:27] First, why don't you give the steel man for Dario's case? And then I would be interested in your own opinion, because I know you've spent a lot of time here and given it considerable thought. Jasmine: [29:36] So, yeah, like you mentioned, I think the most common critique of jobs doomers, which Marc Andreessen and other folks have made, is the lump of labor fallacy and Jevons paradox. Or Jevons' paradox, I don't know how to pronounce it. They basically say that if something is cheaper, then actually demand can go up. And so if software is really cheap, more people will want software. If therapy is really cheap, even more people can access therapy, and demand for therapy will go up. And there will always be new forms of work to do. People's desires are infinite, not limited. It's not like once you satisfy one desire, they won't want a new thing. And we see this where, you know, there are now yoga studios, and maybe 100 years ago we weren't spending our money on yoga studios or something like this. And I think in general, historically, this has been a really good argument, and it has held true through history. Jasmine: [30:21] The thing that I think Dario would say as to why AI would be different is that both of those arguments, Jevons paradox and lump of labor, assume that more labor equals more humans. So what they're saying is that demand is unlimited and that the amount of labor to do in the economy will always go up. But they also assume that there is an inherent link between productivity, labor, and humans, right? Whereas the thing that AI promises to do, particularly AGI, like fully human-replacing AI, is that you can have labor without having humans. So you can produce software without having humans. So yes, maybe demand for software goes up, but AIs are making all the software. Or yeah, maybe demand for therapists goes up, but AIs can do the therapy. Lots of people are already using AI for therapy. 
Of course, we're not yet in that world, because AI is very jagged. It can't do everything yet, and so humans remain complements to AI. Right now, humans are augmented by AI. For a lot of things like radiology, you need both a human and an AI together, and so if demand goes up, you still need human labor. Jasmine: [31:22] But AI is generalizing really fast. It's improving really fast. And Dario believes that in the next two, three years, we're going to get AI that can produce infinite amounts of software, therapy, or whatever it is, without the requirement of having any humans. So let's take your software engineer example. Right now we see that overall demand for software engineers is going up, but the junior engineers are affected, right? So if you're a new-grad engineer, you are actually struggling to get work, because you're not really that much better than Claude Code. But if you're a senior engineer, you're totally fine. Lots of demand for senior engineers. The thing is, if you look at the way that AI models have progressed on software benchmarks year after year after year, they are improving really, really fast. And so right now, maybe AI can only replace a junior engineer, but it seems totally feasible to me that next year, AI will be able to replace a mid-level engineer, and maybe the year after that, it will be able to replace a senior engineer. And if that continues, then we will no longer need human engineers to make more software, right? And so the argument that people like Dario would make here is that AI breaks the necessary tie between humans and labor. And that's the thing that people like Marc Andreessen are failing to consider. Jasmine: [32:30] That would be me making the steel man for Dario's case. Ryan: [32:33] But then even there, on Dario's case, let's say AI automates all of the labor types of tasks in the economy. 
Isn't it the case that humans still have this insatiable demand for kind of status types of games? So you think about something like yoga or, you know, a personal trainer or something like this. This is just about fulfillment, I suppose, in life, or maybe there's some idea of a status game that's being played. You know, it's like, I can get stronger, I can get more fit, something like this. And so maybe all the software developers become personal trainers and, you know, they spend their time on more fulfilling tasks. And isn't it fantastic that all of these more labor-intensive, boring types of jobs get automated, and as long as humans are around, we'll just replace all of that with other games that we play, like status types of games? Jasmine: [33:24] Yeah, so this is the argument that, like, Alex Imas, the economist, has made, right? It's, what will become scarce? Oh, relational goods, like you said. Therapists, you know, personal trainers will become scarce. Party hosts, I think there will be a lot of party hosts after AGI, event planners, whatever it is. And to be clear, I personally am quite sympathetic to this argument. But the argument that Dario would make here, or when I'm feeling pessimistic, the argument that I would make here, is: actually, AIs are really good at emotional and relational labor too. And even a lot of wealthy people choose that stuff. So before, maybe if you wanted to be entertained, you might have to go see a live play. You'd have to go see live theater, and you'd need like 50 people to make that production of live theater. Now you have Netflix and TikTok. And increasingly in the future, we're going to have Netflix and TikTok with AI avatars and AI storylines. You just need way fewer humans to produce the same entertainment. 
Jasmine: [34:13] And even people who are really rich sometimes prefer to watch Netflix and TikTok versus going to the theater, even though they can also afford to go to the theater. A lot of people prefer asking ChatGPT for medical advice, or what they should do about their relationship problems, over asking a human therapist, even if they can afford the human therapist. So we see people make choices that prioritize the convenience and quality of technology over the status good of talking to a human, over and over and over. We see that happen all the time. And I think it is true, in my opinion, that there might be some niche areas where people really want another human there, but that pool might actually be a lot smaller than people think. People pay more for Waymos than they do for Ubers, for example. Even people who could afford a black-car driver will often prefer the Waymo instead. And so... Jasmine: [35:01] I think AIs are actually quite good at doing a lot of these relational tasks and will continue to get better at them. I also think that one of the things you want to look at in terms of demand is how many people can afford to produce demand. So I spent some time in China recently. One of the problems that China has is that it's had white-collar unemployment for quite a long time, for non-AI-related reasons, and one of the reasons for that is that household spending is very low. And so you don't have as big of a services economy, because there's not as big of a middle class. You have some very rich people. You have a lot of very poor people. And middle-class spending is really necessary to drive consumer demand, because rich people only have so many hours in a day. They only have so many wants, right? 
And so if you have a world that's very unequal, which is something that we expect with AI, because there are going to be more returns to capital, those rich people may be able to hire a few party planners and a few personal trainers, but they've got 24 hours like the rest of us. And so you're just not going to have as much demand in a very unequal economy compared to one where there's a really strong middle class and everybody is buying a lot of services and goods all the time. So those are some arguments that I would consider making if I were trying to make the more extreme case. Jasmine: [36:08] But once again, I just want to say that my own beliefs are a little bit more moderate. Ryan: [36:12] So let's zero in. What are your beliefs on the unemployment rate? Jasmine: [36:16] Yeah, what do I think is going to happen? I lay some of this out in the New York Times piece. I do expect the near-term labor disruption. I think that there are certain categories of jobs that are way easier to automate than others. And this is where a lot of my disagreement with people like Dario comes from. Software engineering is super easy to automate because code is verifiable, all the context is in a code base, and you have this open-source data on the internet that you can go train on. Most jobs are not like that. Software engineering is a really weird type of job. There are a few jobs like software engineering: maybe digital marketing, copywriting and freelance digital illustration, maybe accountants, management consultants. Let's call it like 5% or 10% of the U.S. economy is jobs that are very, very easy to automate for some slate of reasons like this. Those I do think are going to get disrupted pretty quickly, because financial incentives are just going to make bosses choose to use AI over hiring humans, especially when a human gets laid off or quits their job. 
You're just not going to replace them if an AI can do a good job. So I do think we will see some labor impacts, even though I don't think it's going to be all of the jobs, because physical-world jobs, relational jobs, jobs that are protected by regulation, like doctors: that stuff I think is going to take a long, long time to automate. So I see these near-term disruptions. I also think that retraining is usually overestimated by economists. So folks who believe in stuff like lump of labor, economists, Jasmine: [37:41] They tend to say that people are just going to move to other jobs. So during deindustrialization in the U.S., when a lot of factory jobs were automated, these economists predicted that the laid-off factory workers would just move to different geographies to work in different factories, or that they would learn, like, digital skills, learn to code. And I think we all kind of laugh at that now, because we see over the past 10, 20 years that these steel workers did not learn to code. They also did not move. They often got addicted to opioids and had a really, really bad time. And we are still living out the political and the social consequences of deindustrialization, even though it wasn't that many workers. And actually it created more jobs total, but the new jobs that were created by factory automation were all, like, software jobs in San Francisco and not jobs in Buffalo, New York, right? And so just because you have new jobs elsewhere in the economy does not mean that the people who are laid off are going to be able to retrain into those new jobs, even with income support, even with access to school, because these people might be, like, 50 years old. They just don't have the brain elasticity anymore, they don't have the motivation anymore, to go and learn something brand, brand new. 
Jasmine: [38:41] And so I think that even if it's, let's say, 5% of jobs that are going to be automated by AI, and it's not all of the jobs immediately, I think a lot of these folks are going to really struggle to retrain. I don't think that they're all going to easily switch into a new job. I think they're going to build a lot of political resentment. And so this is where it sort of connects to my interest in AI populism. This time, maybe instead of right-wing resentment, the kind that drove Trump, it might be more like left-wing resentment that blames the AI billionaires. I think we already see that. I think some of the biggest critics and skeptics of AI are people like creatives whose jobs have already been impacted by AI. And so I think we're going to get a lot of populist backlash that results from people's jobs being threatened, even in small numbers. Jasmine: [39:22] And I also think that on the macro scale, even in a world with full employment, you still might get a declining labor share of the economy, which is something to worry about. Which is, right, this idea that, yes, maybe everyone still has a job, but overall, wealth is accruing to capital owners who have the ability to rent infinite robot labor. And wealth inequality can cause its own kinds of problems, like these political imbalances, resentment at elites, things like that. And so that's something I worry about even in a world where people mostly retain their work. So I tend to think job displacement is not going to happen all at once. It's not going to be this sort of apocalypse. It's going to affect some narrow categories of people, but those people are going to be really, really mad about it, and it's going to really, really suck for them. 
And I kind of want our policymakers to be more proactive, to tell people: if your job is automated by AI through no fault of your own, like you spent decades learning some skill and now it goes poof because of AI, Jasmine: [40:14] I do think we should support those people. I don't think it's their fault. David: [40:17] What do you think about the whole concept of the capitalism end game, which is that it's just game over for labor? You take superintelligence, and then not too long afterwards you get super robots. You smash those things together, and the whole concept of being a human is just obsolete and redundant. And then this invokes the idea of the permanent underclass, where there are just people who are stuck down there. And then you zoom forward a few decades and you get movies like Elysium, where all the elites escape to their super fortress in space and all the permanent underclass are stuck on Earth. And it's just entrenched that way. What do you think about this? Ryan: [40:59] Well, yeah, to flesh that out a bit more and add to that, David, it's the idea that capital no longer needs labor to function. For all of its history, capital has had to hire labor in order to get jobs done, get work done. And now it has AI tokens to substitute for human labor, so it doesn't need labor any longer. David: [41:22] And this is the extreme version of what might happen. Yeah. Right, right. Ryan: [41:25] There was an essay called The Intelligence Curse. I don't know if you read that. We had the authors on. And also Garrison Lovely is coming out with a book called Obsolete, I think, which delves into this thesis. Basically, labor becomes obsolete in this world. Jasmine: [41:39] Yeah, I mean, I think that is one of the versions of the things that people like Dario, or even more extreme than Dario, do believe. 
That's what they're worried about, right? AI will be a one-to-one substitute for labor. It will be able to do literally everything, and capital will discard people. And that's where I would start to make arguments like the ones you've been making, Ryan, where I'm like, well, actually, if human labor is scarce, some people will want their human party planners. And so I do think there will be some jobs available in the relational economy. I think it also requires believing in full automation. Technology has to advance so much that it's not just replacing cognitive jobs, but also, like, jobs in the physical world. Jasmine: [42:18] Do I think that could happen someday? Like, maybe, probably. Robotics is improving, but we're pretty, pretty far away from that. I think we are going to have a lot more problems to deal with in the next decade before we get to the point where full automation is even worth considering. Even folks who do map out and care about these full automation scenarios, like the economist Phil Trammell, who wrote a Capital in the 22nd Century essay making a version of this argument. He called it the 22nd century because his very rough, low-confidence estimate was 100 years in the future. And again, we may never arrive there if the relational sector of the economy is big enough. Ryan: [42:51] Wait, 100 years in the future, what happens? Jasmine: [42:54] Labor will go to zero. So his prediction was full automation, labor goes to zero. If it's plausible, it's going to be, like, 100 years in the future or something like that. David: [43:03] Even if it's this drastic... Jasmine: [43:05] We have time. Yeah, I think we have a lot of time, and I think there are a lot of things that could happen between now and 100 years from now. And so maybe personally, I'm more focused on these near-term scenarios. But I do think it is worth considering that capital relies on labor right now. 
And if it doesn't require humans as much, I don't expect governments or corporations to be as generous in terms of things like welfare, or to care as much about what people think about how things should go, because they have robot alternatives. And so those political dynamics might start showing up. Ryan: [43:34] Yeah. I mean, that's the argument of The Intelligence Curse, basically: that it breaks the social contract between labor and capital, and between governments and their citizens. And so a new social contract has to be created. Jasmine: [43:46] Yeah. And I think that's why people are turning to things like violence, frankly. If you as a worker, or as a normal person, have no leverage as a result of doing work, because that's one of the traditional ways you have leverage, where do you have leverage? You can do violent acts, do terrorism, and riot in the streets. And so I think people are recognizing that one of their few channels of leverage, when you lose everything else, is to do violence. And so that's why I think that even if I were a totally self-interested capitalist who doesn't care about people at all, I would be pretty concerned about making sure that not too many people end up unemployed and disempowered, because I do not want to face these violent threats from people who have been deprived Jasmine: [44:29] of every other channel for leverage except for violence. For sure. Ryan: [44:33] But, OK, does it seem a little early for that? We don't know how this is going to end up yet. What is unemployment in the U.S.? Is it something like 5%? Jasmine: [44:43] Yeah, I mean, I don't think it's too early to plan for scenarios, right? I don't think that we have to institute a UBI right now. I would not support that. Ryan: [44:52] No, to be clear, not to plan for scenarios. But why are people getting violent and angry already? It hasn't happened yet at some level. That's what I find somewhat curious. 
Jasmine: [45:02] Because they feel like they can stop it, right? You know, like, David: [45:05] That's what they're trying to do. Ryan: [45:07] Okay, but is it a little Ted Kaczynski-ish? That's pretty strong, David. Jasmine: [45:11] I mean, I think these are extreme people, right? These are not normal people. Most people are not engaging in violent attacks. But the thing is, if you genuinely believe that this thing is going to come for you and your family and your community, then these people believe that if they do enough violence, they can stop the thing. Again, I do not endorse violence. I think this is super bad. Jasmine: [45:26] It's not that many people, but you can see how one would arrive at that. Ryan: [45:30] What I'm trying to find out, the violent actions aside, is: all of this tech populism, this vitriol against big tech, how much of it is vibes versus reality? We don't actually know what is going to happen yet. It hasn't hit us yet. So much of this is narrative and vibe. Yeah. And it might turn out. David: [45:50] Narrative which the AI leaders are fostering. Ryan: [45:54] Some of them, yes. David: [45:55] Which gives the vibe a lot of credibility. No one is saying the other side of the vibe other than, like, Marc Andreessen. Jasmine: [46:01] And Marc Andreessen is investing in lots of companies whose value proposition is to replace workers. So I know that Marc Andreessen is tweeting different things, but if you look at his portfolio of companies, many of them have a core value proposition of replacing workers, right? And so I see why people would be skeptical of Marc Andreessen's public statements. David: [46:20] Right. Right. And plus, Marc Andreessen has just kind of politically aligned himself, and so now he's kind of shoehorned into that sort of political camp. 
The other thing, Ryan, I think is kind of worth highlighting: did you read the statement that the recent attempted Trump assassin left behind? In his manifesto, there was this whole question, answer, question, answer format, where he was answering his own questions. Like, why are you the one to do this? And then he would answer it. He basically rationalized, for himself and for anyone curious enough to read his manifesto, why he thought he was justified in attempting to assassinate Donald Trump. And this is clearly a guy who's chronically online. He was in Reddit communities. And it looks like just kind of hyper-rationalism. And I think these are the same people who are doing political violence against Sam Altman. This is why I said it's a little Ted Kaczynski-ish: they think that they are stopping this future Terminator, Skynet-type thing that is going to happen, and they just have to do the right thing in the now to solve that future problem. So, again, no one on this podcast is supporting political violence, but I can kind of see the logic. Ryan: [47:39] Well, you only need a very... If you think the stakes are this high, right? David: [47:42] Yeah. But the tech leaders are saying the stakes are this high. Jasmine: [47:45] It's like, I don't know what Dario's p(doom) is, right? But I think he probably has a p(doom) that's, like, 30% or something, is my guess. It's probably quite high relative to most people. He is clearly very worried about the prospect that AI could kill everybody or leave the world a very bad place. And I can see that if you believe that, which I, again, personally do not happen to believe. 
But if you thought that these tech leaders were actually gambling with your future, that they were actually going to do two coin flips and there's a 25% chance that you're going to end up, if not dead, then in the permanent underclass, you might think, just like you've got to kill baby Hitler, you've got to do this kind of violence. It's a kill-baby-Hitler thing. You know? And they've done too many thought experiments. This is the whole thing about the hyper-rationalism, being online too much. I'm like, you have done too many thought experiments. Go read some virtue ethics. Ryan: [48:31] You need to touch grass. Yes, please, virtue ethics. Let's re-inject that. Please. David: [48:37] I want to talk about the political map that comes out of this. There are a few ways to divide up how the future politics of AI looks. There's left versus right. There's labor versus capital. There's Silicon Valley versus Washington, D.C. How do you think the lines are going to get drawn here? Clearly, Bernie Sanders is on one side, and I think AOC would join him. I don't know necessarily who the pro-AI politicians are, but when we see factions joining together and political lines being drawn, how do you map this out? Jasmine: [49:10] Yeah, I think this is super interesting. I mean, like you said, there's a million ways it could break. One that I worry about, when I'm freaked out by all this, is that it's going to be these techno-capitalist elites from both sides of the aisle, sort of centrist, pro-neoliberalism, pro-technology folks, against everybody else, whether they're right, left, whatever, people who don't like technology. Some people have articulated it as: friends of the future will be one camp, and then everyone who's trying to stop technology and stop change will be another camp. 
Jasmine: [49:39] I don't know that it will be that, but it sometimes feels really plausible, especially when I notice that a lot of these very anti-AI factions are very bipartisan. They have people from a lot of different political camps: creatives, labor unions, environmentalists, states' rights people, family-first people, religious people. So many different interest groups are coming together, all because they think that AI is going to alter the existing environment, existing jobs, people's existing social circles and their way of life. And then there are people who are more interested in economic growth or the long-run future, or are just a little bit more pro-technology in general. And this freaks me out, because personally, I am someone who really likes technology. I like using AI. I love the Internet. I feel like it's added so much to my life. I believe in economic growth. I just want to distribute the benefits of growth equally. I just think that we should care about the distribution, but I generally am pretty pro-technology. And it really freaks me out to think about this kind of thing. You know, I wonder how Ezra Klein and Derek Thompson, the abundance folks, feel, because they are people who try to make the case to the Democrats that they should embrace technology more, right? Jasmine: [50:45] Actually, if we think about the way that AI might bring down the cost of pharmaceuticals or unlock scientific discoveries or make work less onerous, that could be an amazing thing for Democrats and for anyone who cares about broad public well-being. And I was a pretty big fan of Abundance. I'm pretty sympathetic to that argument. Jasmine: [51:03] But I don't think that's the way the current Democratic Party is going to go, because they're the ones whose voter base of youngish, college-educated, white-collar people is the most impacted by AI. They are very scared. 
And we have a lot of distrust of the technology companies right now. People think, yeah, maybe there's going to be a cure for cancer, but I'm not going to get it. That's kind of the way people feel. Maybe there's going to be, you know, a therapist, teacher, whatever in your pocket for everyone, but I can't pay the 200 bucks a month to get the best models, and I'm going to be left on the other side of that divide. And so with such low levels of social trust right now, trust in companies, trust in the government, trust in each other, I would not be surprised to see an increasing split along these lines of: are you part of this broad populist group, or are you on the side of the techno-capitalist elites or whatever? Ryan: [51:54] Wait, wait. So the way you broke it out, right, your fear, the thing you hope doesn't happen but that you're seeing take shape, is some sort of binary between the futurists and the Luddites, or the technophiles and the anti-tech people. The e/accs, the accelerationists, and the decels. I'm very much seeing that too. And I think that is the worst possible outcome, because there are a lot of people who are more in the middle, who are like, hey, technology, if it's good and if it helps people and there are ways to marshal it towards that. We can't just be anti-tech, and also we can't just be pro-tech no matter what the technology is. It's kind of a guided-tech type of theory. And those people who are caught in the middle will have to pick a side. I think probably Derek Thompson and Ezra Klein are among those who would have to pick a side. And I'm wondering, if those are the two sides, at least for this election cycle, where do you think that splits among party lines? It seems like the left is going more in kind of a decel-type direction than the right, though there are factions of the populist right. 
The right is... David: [53:00] Not inherently pro-AI either. Ryan: [53:02] Well, but they seem to be, at least more than the left. And so if that's the break, are we going to get Democrats who have to be decel and Republicans who have to be accelerationist? Jasmine: [53:14] Yeah. I mean, my sense is that in the 2028 election, unless things get really, really crazy with AI, it'll probably still be Republicans versus Democrats. But between those party lines, I think it is more likely the Democrats will be the decels, which... As someone who is personally closer to a Democrat than to a Republican, and also closer to a pro-technology person than an anti-technology person, I'm like, oh, I really don't like this. But yeah, I think that, again, AI impacts the voter base of the Democrats more than it does the Republican voter base, for the most part. The job threat is one of those concerns. But I think Democrats these days tend to be more concerned about, Jasmine: [53:57] I don't know, things like protecting labor, protecting the environment, protecting creatives. A lot of these particular concerns that AI introduces are more aligned with the Democratic voter base. And I think even in this current political environment, the fact that Trump was mostly an accelerationist and mostly a pro-AI person really prevented a lot of Republican Congresspeople who wanted to pursue AI regulation from doing so, because they knew that Trump, or one of the aligned PACs or something, was going to go after them if they tried to introduce too onerous an AI regulation. And so my guess is that the Democrats would be the more decelerationist party. But then again, you do have folks like Gavin Newsom, who is the current Democratic front-runner, who is pretty pro-tech and has aligned himself with Silicon Valley a lot. 
So I'm not sure about that either, just because you do have, you know, people like Gavin Newsom or Jon Ossoff, who recently did a fundraiser in San Francisco with Chris Lehane, the OpenAI lobbyist, right? So you do see a few Democrats going for the pro-AI lane. I wonder if that's going to work. In a money-versus-the-people battle, in a world of increasing populism and resentment against tech, does having this super PAC behind you, does having Silicon Valley money behind you, win you the primary against other people who are like, screw the AI billionaires? I have no idea, but it'll be interesting to watch. Ryan: [55:20] If the left or the Democrats do go in that direction, which is kind of like anti-tech, a moratorium on data centers, the Bernie Sanders type of approach on Ryan: [55:28] this, doesn't this kill the Ezra Klein, Derek Thompson abundance agenda entirely? Because maybe you have abundance with housing, if you can even get there. But that means you don't have abundance of intelligence. And intelligence, as we were just discussing, can mean cheaper healthcare, cheaper therapy. It can be a deflationary force on, in theory, everything. I mean, if Dario is even a small percentage correct, then that can be a massive supply shock, in a good way, to our entire economy. And it's essentially a progressive policy to give healthcare intelligence to every citizen of the United States. We could do that if we have an abundance agenda for intelligence. But it seems if you go full decel and you just do moratoriums on things, then you don't get that. Jasmine: [56:23] I mean, I think if the Bernie moratorium camp takes over the Democratic Party... like, right now most Democrats are not backing the moratorium, but if they all decided to go that way, I do think the Dems would be the party of the decels, you know?
Like, I think it would signal a big shift for the Democratic Party if a moratorium got majority support among Democrats. I will say that if I were to steelman Ezra Klein and Derek Thompson, because I think they actually talked about AI populism in their one-year retro on Abundance recently. I heard that, yeah. Yeah. And I think one argument that you could make if you were them is that the thing that's blocking healthcare provision and housing and all that is not really more intelligence, that it's either a political issue or something to do with manufacturing or stuff in the physical world. I mean, we've seen Baumol's cost disease, where the cost of digital services goes down, but a lot of healthcare is still, like, surgery, or housing requires building in the real world, or the U.S. has lost a lot of manufacturing capacity compared to places like China. And so one could make an argument that one is pro-technology in the sense of physical things like drug development and manufacturing, things that deliver these broad-based benefits, even if we don't max out on intelligence or something. And so I could imagine an argument something like that. But I do broadly think that if the Democrats become a firmly anti-tech party, that would be a blow to the Abundance-style progressive movement. Ryan: [57:42] Yeah, like in the New York State Senate, there was a bill being considered, Senate Bill S-7263. And this would basically prohibit AI chatbots from impersonating licensed professionals for therapy or healthcare advice or that sort of thing, which of course Ryan: [58:00] Drives up the cost. Ryan: [58:01] It doesn't decrease the cost of providing those services if someone wants to get those inside of an AI or a chatbot, right? So that does seem to be part of the decelerationist agenda seeping into politics. I don't know if that'll pass or not. Jasmine: [58:15] Yeah, I don't think it will. I mean, if it does, I think it'd be really stupid.
I think it's a stupid bill. Most people like using chatbots for medical advice. That's one area where, I would say, people do not have populist sentiments: most people find their chatbots quite useful for doing these kinds of little tasks, giving them advice. And I think taking that consumer surplus away from people would be a bad thing. Similarly, look at the Waymo battles, right? Waymos are safer than human drivers. I think the research is pretty clear on that fact. They feel safe when you're in them. I love taking Waymos. I do think I would like to see either Google or, you know, governments think about how to transition cab drivers into other roles if Waymo does expand in a city, because again, it's not those cab drivers' fault that they invested decades in a career that may go away. But I do want to see Waymos rolled out eventually. I think the world will be better if we have technologies that make us safer. And so to me, the question is just how do we navigate that transition in a way that is empathetic to the people who lost out on the technology because it devalued their skills? But I definitely would like to see a vision where we are still spreading Jasmine: [59:22] technologies that do make us safer, make us healthier, whatever. Ryan: [59:25] I was kind of wondering, an undercurrent of this whole AI populism conversation, and our discussion today, has been growing wealth inequality. And I sort of wonder if AI populism is just a proxy battle in some way, or a bundling of the greater problem of wealth inequality. And as I look at something like wealth inequality, I'm wondering what the problems inside of that actually are. So if everyone is getting wealthier, but the top are getting wealthier at a faster pace, at some level you look at that system and you'd say, okay, what's the problem, as long as we're all getting wealthier?
But then sometimes I wonder if, we call it wealth inequality, but it's really more about power inequality. And it's more about a concern that a certain group of elites are able to translate that wealth into coercive, direct power, and they begin to become kind of the rulers. Jasmine: [1:00:26] I don't know if Ryan: [1:00:26] You've given any thought to that, but what is the driver behind this backlash to wealth inequality? Is this really all just kind of a proxy battle for power? Is that what's really in contention in the American political system? Jasmine: [1:00:45] Yeah, I think that's a good diagnosis. I think that a lot of inequality is a proxy battle for power, right? I think that's why people are not that excited about certain ideas like a UBI, because it feels like being on permanent welfare and relying on handouts from the people who actually do have all the money and power. And even if they're keeping you around so that you can pay your rent and pay for food, you don't really have a say, because you're still reliant on them, right? With UBI, say, you're dependent on the state for doling out those welfare benefits. Or, you know, you look at corruption as a top-five issue for what voters care about, and you look at a lot of the corruption that's going on with the current administration. You look at the way that Elon Musk got into politics, basically, by spending a ton of money. And not only did he spend all that money, but a lot of things the Trump admin did basically went his way. He was allowed to do DOGE, which cared about the issues that Elon wanted to care about. He basically spent his way into political power. And people see that. People see that when you have money, you can influence policy. You can influence the physical world. You can buy yourself a lot of freedoms that other people don't have. And I think that's where the real frustration comes from.
Because like you said, if people can pay their bills and pay for healthcare and pay for food, which again, not everyone can, but that's a different question. That's not the same as, oh yeah, what's the point of my vote when Elon Musk can just buy his way into power, right? Right. David: [1:02:04] Jasmine, you're clearly very sharp and informed about all these subjects, so I've definitely appreciated getting your wisdom and your takes on the podcast today. David: [1:02:11] When it comes to actual policy positions, what are your recommendations? What do you think people should do? If you were the lady behind the policy machine, do you have any ideas or concepts that you think would actually be effective interventions here, that would smooth out the hard edges on both sides? Jasmine: [1:02:29] Yeah. I mean, oh man, this is the hard question, right? I should say I'm not a policy wonk, and I didn't focus most of my research on policy solutions. I've talked about them with a lot of people, but it's not something that I feel really confident in my prescriptions for. I will also say it's something that I don't think anybody feels very confident about knowing what to do, because, like Ryan said, a lot of the impacts haven't played out yet. We are going to need a different policy situation depending on whether we see slow and gradual job displacement, versus we actually do get this big apocalypse or job shock, or maybe we get no job shock at all. Maybe everything's fine, and then we shouldn't, you know, do anything crazy. But I do think we should be planning for those different scenarios. Pretty likely, it seems to me, is that we're going to need some tax-and-redistribute, like corporate and capital gains taxes.
If it is true that a ton of money basically flows to these AI infrastructure companies, for example, and they get way, way, way bigger than everything else in the economy, finding the right way to do tax and redistribution is pretty important. What do you spend on if you're going to redistribute? Some options are things like longer unemployment insurance. Right now in California, where I live, you get six months of unemployment insurance. If we start to see a lot of AI displacement of these longtime jobs, people generally need more than six months to learn a new skill. Maybe you need 12 months or two years of unemployment insurance. Jasmine: [1:03:44] Things like universal healthcare start to become relevant, because one thing that I expect in an AI world is you're going to have more entrepreneurship and small business capital and more freelancers and small business people, right? It's less like you have a giant firm that employs, not millions, but tens of thousands or thousands of people. You're going to have more one-person companies, people doing startups, people doing small businesses, that person with a yoga studio or their event planning thing or whatever. Those folks are going to need healthcare. And right now, I think the economy and the benefit system are wired for a place where most people are in these normal W-2 jobs. But actually, what does it look like if you have a lot more small business owners and freelancers? We are going to need to think differently about benefits and healthcare and things like that. I also think education is going to look really different, right? Right now we have this four-year college system that, not everyone, but a lot of government effort has been spent pushing people through, this four-year liberal arts college system. I am a little bit pessimistic about how long that's going to last.
I think there have been a lot of cracks in this four-year college system for a long time, a lot of problems with it. This idea that you study history for four years and you get handed an accounting job at the end, or whatever it is, has always been a broken promise; your skills are not really tied to any of the classes that you went to. People are going into tons of debt, and now they're not even getting a job at the other end of it. And so maybe we need apprenticeships. Maybe we need national service programs. Some countries have national military service; maybe we do national public service, and you work some kind of job, whether it's cleaning up parks or working in administration, Jasmine: [1:05:13] Learn some actual on-the-job real skill that we need, and you actually take that and convert it into job skills, instead of taking philosophy courses that you, like, ChatGPT your way through, which is basically what's going on right now. So the way that I'm sort of thinking about it is: what are the ways we expect the economy to change? I expect less white-collar IC work. I expect more small businesses. I expect more relational-sector work. I expect more people who go through these periods of losing their job and needing to find a new thing to do. And how do we plan policies that are going to train people, that are going to give people a little bit of a cushion, so that it doesn't ruin Jasmine: [1:05:48] your life if you're in this period of vast technological change? Ryan: [1:05:52] So say we give them a cushion, right? But say on the other side of that, Dario is more right than everybody else, and there's actually no real job on the other side. Then do we get to a UBI? What do you think about that? And there are some other interesting ideas, like a tax per AI token generated, where you're just taxing AI at the source of consumption.
Or there's the idea of creating kind of a sovereign wealth fund, almost the way resource-rich countries in oil and natural gas do. We take a percent of AI and we create a sort of sovereign wealth fund that all citizens own. Are any of these ideas appealing? Are they too radical to think about right now? Jasmine: [1:06:38] I think we should think about them. I think that researchers should start to plan out what that would look like if it looks like we're moving more toward a Dario world, which, again, I would say right now we are not on that path. But if it seems like we're ticking toward that path, I would prefer if the research had already been done. I mean, you know, Sam Altman did that UBI pilot a while ago, right? He tried just giving a bunch of people money and running a randomized controlled trial to see what people did with that money. One thing I often wonder is, what is the next version of that? Do we need to do a pilot of a jobs guarantee? Do we need to do a pilot of some of these other programs, so that if we hit a world with truly mass unemployment, we know what the better options are? I think a public wealth fund is interesting. I think the Norway model is pretty interesting. Jasmine: [1:07:23] Shorter work weeks are one that I think about a lot, because, again, lump of labor fallacy: in a world where humans are always necessary, you don't want to do that. But if humans are able to do fewer and fewer tasks because machines can literally just do the vast majority of tasks you could ever imagine, because they're just smarter and more capable in all dimensions, I think it would be better to shorten the work week so that people still have jobs. It's not like 10% of the people have jobs and 90% are unemployed. I would personally rather have a world where 90% are employed, but they have maybe a two-day work week.
They have a 15-hour work week, because, again, that still gives you a little bit of leverage. When you care about these political issues, do you actually have leverage? Is there some reason that capital or the government has to care about you? You have some role in the economy. You also have some purpose. I think it's better for people to feel like they have a purpose in life. So I think about shortening the work week: maybe we go from a 40-hour work week to a 30-hour work week to a 20-hour work week as the number of tasks AI can do expands and human capabilities, in a comparative sense, decrease. Shortening the work week is one that I think most people would support, because, again, I think people want purpose. They just want a relatively easy and chill job to do. Ryan: [1:08:34] Jasmine, I think one of my biggest fears is something you said earlier, which is that AI populism wins out to such an extent that the decelerationists kind of win the day and we just kill this technology. We say not in our town, not in our county, not in our state, not in our country. And then it moves somewhere else. Maybe it moves offshore and we lose the benefits of it. We lose the productivity gains. We lose the labor enhancement. Maybe another country gets these instead. And I hope we don't go too far in that direction. Maybe the direction, you know, critics would say Europe has gone in, in some areas, in some ways. Germany with nuclear, for example, a moratorium on nuclear power generation, and so there are no more nuclear power plants. Ryan: [1:09:23] But at the same time, we can't just have the tech-optimist vision without any regard to how wealth gets distributed to the rest of the population. So if you were to think through some sort of a, maybe like a grand bargain, where you're mediating these two parties, and there's Bernie Sanders on one side and maybe Marc Andreessen on the other side.
What kind of a grand bargain would you propose to have a meeting of the minds? Think about the way the U.S. government worked in the 1990s, where the right and the left were all like, okay, maximize the pie. The left just wanted to tax it higher in order to pay for social programs, for instance. Now we're of the mindset of either full accelerationist or full just-hit-the-brakes. But is there some kind of grand bargain we can strike? What would that look like? Jasmine: [1:10:15] It's a hard and a big question, but I'm asking the same one. It seems to me that there has to be some kind of grand bargain. I mean, I think that the original New Deal and rewriting of the social contract, with the introduction of workweek regulations, the minimum wage, union bargaining power, was that. I was having a conversation with a friend earlier about why, during the 20th century, the United States experienced a ton of mechanization and automation, and some people's jobs were displaced in that process, but you didn't see mass political violence. You didn't see a Luddite-style backlash. And there are different theories for why this is true. But one of the strongest theories is that in the 20th century, automation mostly affected factories that had strong unions, which basically sat down at the bargaining table and worked with the automators to figure out, okay, we're going to have wage guarantees for the people who keep their jobs. We want workers' wages to go up if productivity goes up, so let's tie workers' wages to productivity gains. And also you had the expansion of federal-level welfare in order to, again, reassure people that Jasmine: [1:11:17] The jobs would be better jobs, that they would be taken care of, that they would share in the gains. I think most people want to live in a growing economy. Most people want to be more productive. They just want to know that they are going to get a piece of that.
And if their company ends up making more money because technology increases productivity, they want to get some part of that as well. And I think that's the part that's broken. I don't know that today unions are the right people to be doing that bargaining. For one, we're not affecting unionized industries anymore; we're talking about software engineers and marketers and whatever, and most of these people are not in unions. But what the bargaining table is, that's the thing I now think about. I think when there's not a legitimate channel to have those kinds of conversations, that's when you see things like political violence, or you see these data center moratoriums, because you don't have a place where you can actually negotiate. So with things like Waymo, I'm like, is there a way for the cab drivers and Waymo to come to the table and figure out some kind of arrangement where Google, which is a very profitable company and is going to make even more money in a world where Waymos are everywhere, can somehow share some of that with the cab drivers who are affected, fund training programs? I don't know what it is, but these are the conversations that I'm really interested in. And I hope that policymakers and political candidates start to think about what their role in this looks like. Maybe they're sitting down with the AI executives and saying, where are you seeing impacts on jobs? What do you think we should do there? Because Jasmine: [1:12:42] My belief, maybe naively, is that if you can come to a deal, if you can get to a bargain, we're going to be able to preserve the gains of technology, Jasmine: [1:12:49] the growth that you get from technology, without this kind of mass populist backlash. David: [1:12:53] One way to ensure that people get their share of the pie, I think, is to also kind of stay ahead of the curve and use AI to the best of their ability.
The way me and Ryan talk about this, when we're optimizing our Claudes and our Claude coworkers, is how do we get our Claudes to produce more valuable tokens? What do we need to do? What prompts do we need to write? What data do we need to give it to make the tokens that come out of our Claude more valuable? And Jasmine, you are also a content producer. We're all content producers here. You do a lot of writing on one of the fastest-growing Substacks, which we will link in the show notes if listeners want to subscribe. But maybe this is just a personal question. How do you use AI to do your work better? And what do you have to teach both myself, Ryan, and also the listeners? Ryan: [1:13:36] Yeah, we don't want to become NPCs. Jasmine: [1:13:38] Me neither. Ryan: [1:13:39] High agency only. Yes. Jasmine: [1:13:43] Oh my gosh. I mean, I feel like you guys are probably masters at this, so I don't know that I have any crazy tips, you know? I pay for the best models. Every few months, I run my own personal eval. Mine is something like: if I feed an AI 10 interview transcripts and one paragraph about the kind of article I want to write, can it just spit out a reported article? I never copy-paste these, to be clear. I do not actually use them, but that's the eval that I measure them on, because I want to know at what point they will be able to do that kind of work. And if they do start getting pretty good, I also want to know where my comparative advantage is going to be. The way that I think about this, and I think this is what most economists would advise as well, is: technology is going to get better, but so long as humans have a comparative advantage, then you're going to be okay, right? As long as you're a complement to the technology. And so I am actually often more interested in what it is the tech can't do yet.
The only way to find out what the tech can't do yet is to constantly be playing with AI, so that you know, right? Because if AI is way better than you at something, you should use it for that thing. I use ChatGPT for research, for transcript generation. Jasmine: [1:14:46] Sometimes I'll ask for feedback, like all the time. If AI is better than you at something, I think that oftentimes you should use it and take advantage of that. But the other thing is, when I experiment a lot with AI, I really see the jagged edges. I see the things it's not good enough at yet. It cannot do a podcast like this. It cannot have a conversation. It cannot build trust with an interview source and get them to share their feelings about stuff. It can't go places in the physical world and describe what it is like to be in a place. A lot of my writing is kind of scene-based and, quote unquote, anthropological. And I think that is more interesting to people in a world where AI can just get facts off the internet. Anyone can read the facts off the internet, but what I can do is actually stand next to a data center and hear what it sounds like, and interview the people around it and say, what do you think of this thing? And so I would probably spend a lot of time not just experimenting, but also asking: what is my personal comparative advantage as a human against the AI, and how can I really invest in that? Because that's what is going to be robust as AI gets better and better. David: [1:15:44] Jasmine, thank you so much for coming on the show. This was a fantastic episode. Ryan: [1:15:48] Thank you both. Jasmine: [1:15:48] Yeah, loved the conversation. David: [1:15:50] You write at Substack, substack.com slash at Jasmine. You're also on Twitter. Where else do you want readers or listeners to go to find you? Jasmine: [1:15:58] That's great. Yeah, Twitter and jasmine.substack.com are the best places to find me.
David: [1:16:02] Thanks so much. We'll get all those in the show notes. Bankless Nation, you guys know the deal. We didn't really talk about crypto, we talked about AI, but nonetheless, it's risky. Either way, you can lose what you put in, but we are headed west. This is the frontier. It's not for everyone, but we are glad you're with us on the Bankless journey. Thanks a lot.