Josherich's Blog

Tech Talk: AI Supremacy, TikTok’s Fate, and Crypto Decadence

24 Jan 2025

Hey folks, today’s episode was recorded just before the Trump administration announced a new initiative codenamed Stargate to build out hundreds of billions of dollars of artificial intelligence infrastructure. We did not talk about that on this show, but we did talk about what that announcement means. Namely, that artificial intelligence is going to be one of the more important technologies of this decade, that it relies on very expensive infrastructure that both the private and public sectors want to site in the U.S., and that that infrastructure is incredibly energy-intensive and will require us to build out a lot of energy-generating technology.

So we do not name-check Stargate itself, but we absolutely talk about the motivation behind it. And on with the show. Today, tech talk with an old friend of the pod, Kevin Roose of the New York Times and the host of the Hard Fork podcast. So this is a show about everything, and I want it to be a show about everything.

You know, our last episode was literally about the assassination of James Garfield, and before that, the health effects of moderate drinking. I like that we’re all over the place because I’m all over the place. I’m interested in a little bit of so many different subjects and disciplines and storylines. But I do recognize that one cost of this sort of purposeful lack of narrow focus is that sometimes I fail to communicate the gravity of the most important things that are happening in the world.

At the moment, I think you can make a very strong case that the most important stories in the world right now are happening in technology, and more specifically in the relationship between government and technology, a relationship that is closer now than it’s been in many decades. We begin with TikTok, the most popular source of news for Gen Z in America and the most downloaded mobile app in the world in 2024.

Last year, a bipartisan bill signed by Joe Biden required the parent company of TikTok, which is the Chinese firm ByteDance, to sell its American business or else face a ban. Well, they didn’t sell their American business and they were banned. Today, TikTok is legally banned in America by the letter of the law. But it’s also in broad use because Donald Trump, the man who called for the ban of the app in 2020, also saved the app in January 2025 by essentially declaring that he won’t uphold the law and he won’t require the Department of Justice to uphold the letter of the law.

It’s hard to say exactly why Trump reached this conclusion. Some folks say that Trump changed his mind when one of the ByteDance investors, Jeff Yass, became one of his biggest donors. Some say it’s also the fact that Donald Trump enjoyed huge popularity on TikTok during the election. But whatever the reason, TikTok is in this bizarre, as Kevin calls it, you know, quantum state of uncertainty where de jure, it is banned and de facto, it is in broad use.

We then spend most of this episode talking about the crescendo of predictions from Silicon Valley that the field of artificial intelligence is nearing a seismic breakthrough. In the last few weeks, members of OpenAI, Anthropic, and other frontier labs have claimed that they are less than three years away from building AI agents that are, to borrow their language, better than humans at everything. I ask Kevin what that means, how widespread these predictions are, whether we should believe them, what it would mean if these predictions are right, why they might be wrong, what the biggest bottlenecks are still standing in their way, and maybe above all, why it’s so hard for the news media to report responsibly on a story like this, where we’re essentially asked to take seriously the economy-shifting and life-shifting potential of a technology that we can’t actually truly report on because it doesn’t actually exist yet.

And then finally, because I’m completely bewildered by the bonfire of corruption that is erupting in crypto land, we close on crypto. I’m Derek Thompson. This is Plain English.

Kevin Roose, welcome back to the show.

Thanks for having me.

It’s been a while since we had you on, and probably for that very reason, it’s been a while since we had a proper tech news update. So I wanted to hit TikTok. I want to hit the spooky murmurings coming out of AI land. And then if we have time, maybe I’ll take your thoughts on crypto too. But let’s start on TikTok. A brief recap. In 2020, Trump said he wanted to ban the app. Joe Biden signed a piece of legislation that did ban TikTok if they failed to sell their domestic business to a firm domiciled in the U.S. TikTok did not do so. TikTok sued. They lost in court. The company was or is, by the letter of the law, illegal in the U.S. now.

Yet when my wife tried on Sunday to open TikTok on her phone, she found that in fact she could, because the president has signed an executive order that essentially says that, a little bit like cannabis, TikTok is technically federally illegal, but that’s just not going to be enforced in broad swaths of the U.S. or pretty much anywhere. And so people are going to continue to use it. I’m very interested in the argument over whether or not TikTok belongs in America. I wonder if you can help me understand the smartest arguments for and against banning Chinese ownership of the app.

Kevin, let’s start here. What is the case for banning TikTok?

The case for banning TikTok is that it is owned by ByteDance, a Chinese company, that Chinese companies we know are subject to the data use laws of the Chinese government, that they are in fact inextricably linked to the Chinese government and that the Chinese government has pretty much carte blanche to do with Chinese-based internet companies what it wants. There’s a sort of less sophisticated version of this argument that goes something like TikTok is a tool of Chinese espionage that is being used to spy on Americans and track their movements and steal their data.

The more sophisticated version of that argument is that we just don’t know what China is doing or will do with TikTok given how important it is to millions of Americans. It could be used to covertly push propaganda or pro-Chinese or anti-American viewpoints into the American cultural bloodstream. And we have laws restricting foreign ownership of U.S. media companies right now.

If a Chinese billionaire wanted to buy the New York Times or the Atlantic or the Wall Street Journal, they would not be allowed to do that without special dispensation. And so we have long recognized as a country the importance of having sovereignty over the media outlets and platforms where Americans get their news and information. TikTok, we know, is, despite what it may claim, controlled by ByteDance, which is a Chinese company and subject to all the same rules and restrictions that Chinese internet companies are.

The metaphor that I’ve always found persuasive is that the U.S. would never, in 1971, allow the Soviet Union to buy CBS or the New York Times. So by the same token, it seems like we have a vested national security interest in ensuring that the people who own the most popular source of news for young Americans are not our geopolitical adversaries or under the thumb of our geopolitical adversaries in the Chinese Communist Party.

Before we touch on the case against banning TikTok, I want to recircle the question of espionage. So JP Morgan’s Michael Cembalest, who was my guest a few weeks ago on this show talking about economics, he pointed out that Chinese espionage has surged in the last few years to the extent that intelligence services around the world now believe that thousands of individuals associated with American military supply chains have been or are being spied on by the Chinese government. Are you persuaded by the claim that TikTok makes it easier for the CCP to spy on Americans?

Yeah, I mean, I’m definitely persuaded that there’s Chinese espionage happening and attempts to hack into various forms of information on Americans. I’m just not persuaded that TikTok is the best way that you would do that if you were the CCP looking to spy on Americans. I think about the amount and the quality of data that is shared with TikTok by the people who use it. I mean, I’m just trying to think: if a CCP espionage agent were to try to spy on me through my TikTok algorithm, they would learn that I like cooking videos, they would learn that I’m into tennis, they would learn that I have insomnia, but they wouldn’t get my social security number or my credit card number or anything that could really be used to compromise me. There are much more effective ways of doing that if you are a Chinese spy. So my argument is not about the motivation to do it. It is simply about the efficacy of doing it through TikTok if you are the CCP.

Let’s talk about the case against. I think there’s a lot of interesting arguments against banning TikTok. There’s a free speech argument that says that this is fundamentally speech. Yes, the speech is ranked by an algorithm that’s owned by a Chinese company, but the underlying substrate is free speech. What about the fact that the US and China don’t want to enter into some kind of war and allowing TikTok to operate here could be interpreted as a kind of olive branch, a sort of opportunity for detente with China? Or maybe someone’s listening and saying, you guys sound paranoid as hell. It’s ridiculous to assume that the Chinese Communist Party is going to use TikTok as mind control over Gen Z.

Any of these or any other arguments that you consider the strongest counter argument, the case against banning TikTok?

Yeah, the strongest counter argument that I’ve heard is the free speech one. I mean, I’ve heard this from people who are not financially invested in TikTok, do not have, you know, sort of financial incentives to want it to stay operating in America. But there are lots of First Amendment scholars who will say that there is protected speech happening on this platform. And, you know, that would range from teens speaking their mind about political issues. We know that this was a big issue for lawmakers who looked at TikTok last year and said, we don’t like how much pro-Palestinian content is going viral on TikTok.

So the argument for keeping TikTok, for not banning it on free speech grounds is like, this is protected speech. If you want to take this platform away, you are effectively chilling the speech rights of Americans and that this could start a domino effect where every time there’s a platform where speech is happening that the administration or the legislative branch doesn’t like, they’re going to pass a law to ban it. There’s a sort of international diplomacy argument I can imagine, um, that goes something like, um, we don’t want an internet that is splintered into national apps.

Part of the beauty of the internet is that it is global. The more that we allow politicians to shut down apps they don’t like, the more we end up in a fractured, um, more nationalistic internet that may not be good for us in the long term.

It should be said that the TikTok ban has been litigated. It’s gone through the circuit court system. My understanding is that the circuit court has decided that the national security interests outweigh the potential free speech implications, and that’s why legally the ban was not struck down. It’s taken an executive order from Donald Trump which essentially says, I’m not going to enforce the ban.

This has allowed server companies like Oracle to bring the app back up in the U.S. Do you come down strongly on this? It’s important for us to like, understand the argumentative landscape here, both the case for and the case against. Do you have a strong opinion here? I know that I do.

Yeah. My opinion on this has actually shifted. I was opposed to a TikTok ban when the idea was first floated during the first Trump administration, um, because I thought it was sort of an odd singling out of a platform based on these sort of pretextual factors and these vague national security concerns. But as time went on, I started to shift my views based somewhat on the behavior of ByteDance and TikTok, which never seemed to be operating on the level.

They were doing things like snooping on journalists through their TikTok apps. They were obscuring the ties between them and China. And there were some very real and persuasive arguments that I heard, like the case you mentioned, that we wouldn’t have let a Chinese conglomerate or a Soviet billionaire control a major American media property at any other point in recent history.

Why would we allow TikTok, with 170 million users, to be controlled by our biggest political adversary? Um, and so my own view has shifted somewhat. I am now, I would say, weakly in favor of a TikTok ban. It is not something I feel a ton of conviction and urgency around, but I think on balance, forcing it to divest is the right approach, and divestment is really what the bill was. People forget this. It was not, uh, structured as an outright ban. It was simply saying you must sell. You, ByteDance, must sell TikTok to a domestic company.

The fact that ByteDance, frankly, didn’t even seem interested in pursuing that, um, and that there were reports that it was being blocked from even considering an acquisition or divestment by the Chinese government just furthered my suspicion that something is up here, that TikTok is not actually operating independently of the Chinese government.

The last point is so important—it’s really hard to simultaneously claim that TikTok is a normal independent company while its business decisions in this case look exactly like a firm whose behavior is subservient to the interests of the CCP. The one thing I would add is that attention is an incredibly powerful resource in the 2020s, and it matters who owns the ecosystems of attention. Elon Musk buying Twitter made Elon Musk more powerful, full stop.

And it’s not a dramatic leap to say that allowing the CCP to own TikTok is to cede a pretty extraordinary amount of power to our geopolitical adversary, and I’m not sure that’s wise. So, Kevin, how do you think this is going to play out?

I have no idea. I mean, TikTok is, as you mentioned, sort of in this like weird quantum state right now where it both exists and doesn’t exist—like Schrodinger’s TikTok. If you have TikTok on your phone already, you can continue to use it. But if you go to the Google or Apple app stores to try to download it, you can’t. That is because Apple and Google have decided that it is too dangerous to offer the TikTok app in their app stores since this law, which has not been overturned, has been sort of delayed by this executive order that Donald Trump signed for 75 days.

But they are potentially liable for billions and billions of dollars in fines if the law were to be enforced as it exists on the books today. So, the reason you can use TikTok right now is because Oracle and Akamai and other service providers, uh, that are sort of necessary to keep TikTok operating on people’s phones have made the calculated decision that they trust Donald Trump not to enforce the law that is on the books.

Yet, there is nothing in that law, which stands, it’s good law, the courts have upheld it, it has not been overturned, that prevents retroactive application of these fines if Donald Trump were to change his mind somewhere down the road. So essentially, the state of play is that this app exists in limbo.

We will see in the next 75 days if Donald Trump wants to do something to try to save it, but the law was supposed to be president-proof, right? The law was written in a way that did not make it trivially easy for a president to just snap their fingers and say this law no longer applies. And so I think there are some real questions about what TikTok could do in the next 75 days to avert a shutdown.

Donald Trump has said he wants to make a deal. He sees maybe the U.S. owning as much as half of TikTok. I’m not sure what that would even mean. But he thinks there’s a deal to be made, and I presume that he and his advisors are working hard at that as we speak. But I just do not feel comfortable predicting what is going to happen because I tried to predict that a week ago and I got it wrong.

I’ve made a series of predictions over the last couple of years about what would happen to TikTok and have not felt confident in any of them because it’s just moving so fast.

Moving so quickly. And it’s in this quantum space. That’s why I called it digital marijuana, because marijuana also exists legally within Schrodinger’s proverbial box. Cannabis is simultaneously illegal at the federal level by federal statute and legal at the local and state level, but also illegal to transport across state lines.

In some places, it’s sort of the same way that TikTok is, uh, legal to own and open, but not legal to download. Like you can smoke it, but you can’t sell it is the rule in some states where it’s just a total jumble of misunderstanding. But there’s also a general understanding that the law on the books is not the law as it’s applied. And I think we’re going to be in that sort of quantum marijuana space for a while with TikTok, where it is illegal by law. And it’s going to get weirder, like, because for the next 75 days or until a deal is reached, TikTok cannot update its app. So the app is just going to degrade. They can’t fix bugs. They like, they can’t do it. You can’t push a new update to people on their phones if you, if the Apple and Google app stores refuse to serve you. And so the app may degrade.

Also, Instagram is releasing an editing tool that’s designed to compete with TikTok. So it is just one of the fastest-moving and most confusing stories that I’ve ever covered in my career as a tech journalist.

Yeah. It’s interesting. Everyone’s stuck with sort of December 2024-vintage TikTok indefinitely. And it’s just going to sour inside of their phones over time until it’s no longer useful.

It’s fascinating. I want to shift to artificial intelligence, because there are rumblings in this space right now that are fascinating and spooky and potentially seismic. Um, Axios reported over the weekend that OpenAI believes it’s on the cusp of building what they call super agents with PhD-level intelligence. OpenAI, you know, sometimes they say things that in fact come true. Sometimes they say things that, um, I think are a little bit hyped.

A company that’s not quite as known for hyping things is Anthropic. The CEO of Anthropic, Dario Amodei, said in an interview over the weekend, I believe from Switzerland, that in two to three years, he believes that AI systems will be, quote, better than humans at almost everything. And a few years later, they’ll be better than all humans at everything. This is one of those statements where one of two things has to be true. Either Dario is out of his mind, or the news media, and all of us, are failing to pay attention to the dawn of a technological breakthrough that’s essentially without precedent in human history.

So let me get at the question here this way: is the timeline that Dario is describing, which is basically artificial general intelligence in two to three years, unique to Sam Altman and Dario at Anthropic, or is this an idea that’s fairly widely shared among the frontier players in this space?

I would say that is a consensus view in the AI world. And by AI world, I mean like, you know, the sort of maybe 10,000 people in and around San Francisco who work at the frontier labs directly on this stuff.

I’ve talked to many of those people. I spend a lot of time with people like Dario Amodei, and I’ve talked to Dario Amodei himself. This is not a fringe radical view. In fact, it’s more conservative than some of the views that I’ve heard about when we’re going to get these very powerful AI systems. Now, we should say, like, they might be wrong, right? People in 2010 thought self-driving cars were five years away. They ended up being 10 or 12 years away.

But they are sincere. These people are not just hyping their startups. I mean, people with no financial stake in this are saying largely the same things, just looking at the trend lines and expecting them to continue.

Where I think your answer is going to bump up against a lot of listeners’ experience, and certainly mine to a certain extent, is that when we play around with AI like ChatGPT, the experience for many people is interesting without seeming like, oh yeah, this is a product that’s 24 months away from changing everything about the economy and the world, right? Like, I think one comprehension gap here is that the frontier AI companies are working on an underlying technology that would power a rather different product, like something that isn’t just a chatbot in conversation with us, but rather what they call an agent.

Help me understand what they mean when they talk about agents.

We started off with these conversational chatbots where you had to ask a question and get a response. The vision of AI agents is one where you could essentially have a remote co-worker. Think of it as a remote co-worker who you could, you know, onboard, in quotes: give it a bunch of context about your life or your work or some task that you’re trying to work on, and then it could go off and accomplish that task.

Whether it is, you know, open a bank account, or build me a piece of software that does something I need, or generate a report drawing on all these kinds of different sources. All the frontier labs right now are working on how to build agents that can go out and actually do things on your behalf, using the internet the way that a human would.

So it’s the year 2027. I tell my ChatGPT agent, build me a website with all the transcripts from every Plain English episode and make it searchable. Something like that.

Yeah. And right now these systems exist mostly in demos and prototypes. I’ve used a couple of them. Google is testing out this thing called Project Mariner, where they basically have an AI agent that can, you know, look at a recipe on one website and then go to another tab in Chrome and put all the ingredients for that recipe in the cart and then, you know, prepare it. So all you have to do is like click the button to check out. Anthropic also has a computer use AI agent that can actually click around and go visit websites and things like that as if they were a human.
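
To make the idea a bit more concrete, here is a minimal, hypothetical sketch of the loop such an agent runs: the model looks at the task, requests a tool (a search, a browser action), observes the result, and repeats until it decides it is done. The call_model stub and the tool names here are assumptions for illustration, not any specific lab's actual API.

```python
# Hypothetical sketch of an agent loop; not any specific lab's real API.
# The model repeatedly looks at the state of a task, picks a tool
# (a search, a browser action), acts, and stops when it decides it is done,
# roughly the way a remote co-worker would work through a to-do item.

def call_model(history):
    """Assumed stand-in for a frontier-lab model call. It would return either
    a tool request like {"tool": "browser", "action": "open recipe page"}
    or a final answer like {"done": True, "result": "..."}."""
    raise NotImplementedError  # placeholder: wire up a real model here

# Toy tools standing in for real browser/search integrations.
TOOLS = {
    "search": lambda action: f"search results for: {action}",
    "browser": lambda action: f"page state after: {action}",
}

def run_agent(task, max_steps=20):
    # The "onboarding": give the agent context about the task up front.
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_model(history)
        if decision.get("done"):
            return decision["result"]
        # Act with the chosen tool and feed the observation back to the model.
        observation = TOOLS[decision["tool"]](decision["action"])
        history.append({"role": "tool", "content": observation})
    return "stopped after max_steps without finishing"
```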

These things are still very early. They are not reliable for most sort of mission-critical tasks yet. But I think the investment is there, the research is there, and the people I’m talking to are fairly confident that in a year or two, these things will actually become quite capable of doing things that you might give to a coworker or a really smart intern.

The implications of this are pretty astounding. The Goldman Sachs CEO said that they believe 95% of an S-1 filing, which is the legal filing when you have an initial public offering, could be completed by AI applications in just a few minutes, compared to six banker analysts spending two weeks drafting documents.

Uh, that’s a quote from a recent JP Morgan report. I was thinking about that, and thinking about a sort of off-the-record talk that I saw an AI founder give, where he sort of jokingly talked about how startups are going to have what they call an H2R ratio that they talk to their investors about: a human-to-robot ratio that describes the amount of work that’s done at the company by human labor versus the amount of work that’s done at the company by robot labor.

And you’ve written a lot about the economic and sociological implications of automation. I want to focus us on this one particular implication. When you think about the work of six banker analysts spending two weeks drafting documents for an S-1 or an IPO document, you’re talking about the typical white-collar work that’s available to 22- to 25-year-old graduates of a top 50 university or college in America. Like, this is the bottom rung of white-collar work in this country, and what Dario is saying and what Goldman is saying and what you’re saying is that we’re on the verge of building a machine that can nearly automate all of it.

I mean, think about the implications for the labor force. You take this enormous tranche of white-collar work that maybe lots of listeners recognize, all this stuff that these, you know, university and college graduates were doing for the first few years of their working lives: Excel and research and putting together PowerPoints and just going over documents and rereading and summarizing. The idea that all of this can be done by AI agents in a few years is unbelievably profound macroeconomically. When you start to think about the implications of agents run amok in the white-collar economy, what are some of the most emergency-level questions that leap to your mind?

I think you’re right. And I should clarify, not all of this is two to three years away. Some of this is happening now. Google’s CEO, Sundar Pichai, just recently said that in the last quarter, 25% of all the new code generated at Google was generated by AI. And of course, to be fair, he has a reason to exaggerate that, but I also believe that it could be directionally accurate, right? It tracks with what I am hearing from companies across the tech industry. It’s not like they’re firing all their engineers and replacing them with AI. But if you were starting a startup 10 years ago, you might have needed 20 engineers to get a sufficient product cobbled together. You may be able to do the same thing now with two or three engineers in a really good AI development environment.

So that kind of thing is happening today. And I expect that these things will escalate as these tools get more and more capable and more agentic. The things that I am thinking about are what happens to the first rung on the white collar career ladder or the first three rungs. You know, if you are in law, you start by doing things that a first-year associate would do at a big law firm: doc review, contract review, um, you know, redlining things, writing briefs.

Um, those tasks can be done largely by AI today. And one of the questions that I have is like, yes, there is a lot of wasted work and grunt work, and maybe you can free those first-year associates up to do higher-value tasks, but there is a skill-building component of that. It is not just make-work. I remember, for example, when I started as a journalist, I was covering corporate earnings. And one of the things I had to do was learn how to read a balance sheet and a 10-K.

And that was not particularly glamorous or fun work, but it did teach me a skill that I have continued to use throughout my career that I don’t know I would have developed if I had just been able to feed those documents into an AI model and say, pull out all the most important statistics. So there’s a skill atrophy issue. There’s a sort of career development. And then to extend that a couple of years into the future, where do you get the leaders of tomorrow in these industries if people are not going through this formative skill-building process early in their careers?

So if you have a kind of top-heavy organization where you have managers and then a bunch of AIs under the managers, where do you get the next generation of managers from? That is something that I’m just starting to hear companies think about.

Yeah, it’s a good reminder that the way companies choose to use technology is just as important as what the underlying technology can do. The implementation is more important in many cases than the capacities of the underlying invention.

And to that point, I wonder if you can tell us about a use case for generative AI that’s outside the mainstream that you’re very interested in. Like, everybody knows this stuff is being used to amplify code and research projects and to cheat on high school English essays. What’s an application that you think has been undercovered?

One area that I consistently come back to that I don’t think people are paying enough attention to is what’s happening in medicine and drug discovery with AI.

There are now drugs that were designed by AI that are going into clinical trials. Now, that may take a while to get to market because clinical trials take a while. But within the field of biology, biotech, basic pharmaceutical research, the advent of AI that actually works for doing things like discovering new drugs has been just a seismic shift. People tell you, like, the field looks nothing like it did five or ten years ago. And I expect that will be a surprise when these things actually start making it to market.

Let’s see. Any other areas where I feel like this is being underappreciated relative to its impact?

One thing that I wonder is, like, you see all of these videos that have, like, a video game quality to them on Twitter or Instagram of people saying, you know, oh, surprise, surprise, I made this with AI. And it’s always, like, a little bit hyper-real in that very specific way that is AI-ish. But I’m sometimes impressed by them.

And I wonder if you know if Hollywood or the video game industry is folding these technologies into either their VFX divisions or their construction of video game worlds.

Yes, they are. I’ve spoken to several filmmakers and people who work in Hollywood about this. And they say, you know, it’s slow, not because of the tools, but because of the human resistance. You know, if you grew up doing visual effects one way and were trained on one set of systems, you’re not necessarily going to shift overnight to using a different set of systems.

Also, the tools are not quite where they need to be. The text-to-video generators are not quite as good as the text-to-image generators, or certainly the pure text models. But they’re getting there. And I think people who, you know, make music videos or background footage or special effects are already starting to use this stuff.

So I think this is around the point where a skeptical listener is going to request a light pumping of the brakes here. Like, no technology enjoys a swift and frictionless curve from invention to implementation to world changes for good. Like, there are barriers and bottlenecks and disappointments for everything. When you think about the most important bottlenecks the folks in AI are paying the closest attention to, is it training? Is it data? Is it raw compute? Is it the energy to power all of the above? What’s the King Kong bottleneck here?

I think energy comes up a lot. At least with today’s style of language models, these systems require massive data centers with massive numbers of GPUs. You’ve got to power those things somehow. That’s starting to put a strain on the grid in some places. And we do need new sources of energy. This is why you see people like Sam Altman investing in fusion startups, or, you know, companies striking deals. Microsoft just signed a deal to bring back Three Mile Island, basically to power data centers for AI.

And so you’re starting to see companies sort of taking steps to lock down the energy that they believe they will need. The other bottleneck that I hear a lot about is just human refusal. Dario of Anthropic wrote a great essay several months ago about the sort of optimistic vision that he saw for AI. And one of the things that he said in it, which I also strongly believe, is that there will be just a contingent of people who don’t want anything to do with AI.

Whether it’s because they think it’s stealing from creatives, or, you know, they feel threatened by it, or they feel like it makes too many mistakes. There’s just going to be kind of an opt-out contingent, as Dario called it. And, you know, in some industries, there will probably be regulations saying you can’t use AI for certain things. And so we will just sort of slow down AI; not the progress of the systems themselves, but their infusion into society and into institutions will be slowed by just human resistance.

Yeah, I think the energy question is going to sneak up on us. We have to build out an unfathomable amount of energy generation to meet the demands of AI without an energy shortage driving up the price of everything for regular Americans. And that’s a place where you could say, you know, you may not care about AI policy, but AI policy cares about you. Because if these data centers suck up all the electricity and drive up the cost of heating or cooling your home, you’re going to notice.

Another example of that principle is that if the AI optimists are right, this technology is going to be very good at breaking encryption, at hacking foreign governments, right? A swarm of agents dispatched to bring down an opponent’s energy grid. That is a massive national security threat. Do you think people are paying enough attention to AI as a near-term national security emergency?

I would say, yes, people are paying attention. I don’t know if enough people are paying attention. I mean, there are ongoing talks between all of the major U.S. AI labs and the U.S. government. The Department of Defense, you know, has some AI that it’s using already and will certainly have more in the years ahead. But yes, this is something that people are paying attention to. One of the major categories of risk that people believe AI models could pose is around chemical and biological weapons.

You know, could it make it trivially easy to develop a novel pathogen using an AI model in the same way that it makes it trivially easy to develop new drugs to cure disease? Could you also create something to cause disease? There’s also military uses of this stuff already. And there’s a lot of interest among some of the Trump administration folks in bringing AI closer to the U.S. military. And then just the sheer fact that we are racing against other countries to develop this technology quickly.

There was just an update over the last couple of days where a very sort of flying under the radar Chinese AI lab called DeepSeek released a series of models that are quite good, according to folks that I’ve talked to in my own experiments. They’re not quite as good as the best U.S. AI models, but I think there were a lot of people in the U.S. AI community who assumed that China was several years behind the frontier of where the U.S. AI companies were.

And it now appears that, you know, that’s not true, that despite all of the steps that we’ve taken to, say, restrict the export to China of high-end GPUs, the chips that are used to train AI systems, they have been able to build models that are quite competitive with the ones that the AI labs in America are putting out. And so I think there’s a growing contingent of people who do see the geopolitical destabilization that could result from the development of very powerful AI models.

I mean, if you really do have AI super agents that are capable of doing sort of PhD-level tasks on their own, a lot of countries would be interested in deploying those for military or defensive purposes. And so there are a lot of people who feel strongly that we need to get there first.

I just realized there’s a tie-in to TikTok here. Sometimes government policy has nothing to do with Silicon Valley. In fact, for much of the 2010s, I think you could say there was no government department devoted to making sure that Uber and Facebook worked well.

But what we’re seeing with TikTok and with AI is that Washington is moving closer to Silicon Valley at the same time that I suppose you could literally look at the inauguration and say that Silicon Valley is moving closer to Washington. You’ve got all those tech CEOs sitting up near Trump this past Tuesday or Monday. It didn’t really matter who was president in 2013 for, like, Facebook’s fortunes. But executive and legislative policy really does matter for the future of TikTok. And it really does matter for the future of artificial intelligence. Trump is, whatever you make of him, a very, very hard person to pin down ideologically, which is a bit nerve-wracking when you think that his policy decisions could shape the future of the most important technology of our time, right?

I agree.

And I actually have a question for you, which is, like, to me, living in the Bay Area, spending time inside the AI bubble, it is very confusing why AI and AI policy gets so little attention on the national stage relative to other issues, despite, you know, the possibility that we could have very, very powerful, transformatively powerful AI systems within the next two to three years, within the next presidential term. Why do you think this gets so little attention relative to other issues?

I think it gets so little attention because people are paying attention to the wrong things. They’re paying attention to controversy that they can see, rather than potential that they can’t see. Controversy is what makes great news. Conflict is what makes for great news. And so conflict is what the news is very good at paying attention to, especially negative conflict. It’s hard to see what the most interesting negative conflict implication is from a comment like we might have superintelligent agents created in the next two to three years. It’s just hard to see the contours of that.

I mean, in many ways, as I’ve said before, I think maybe in conversations with you, one of the hardest things about AI is that the way some people talk about this technology is as the last invention. How do you possibly wrap your head around something that is described by definition as being unprecedented? There’s no analog to draw on. So I think that’s sort of the big-picture conceptual reason why less attention is paid to this: AI, while it clearly exists, its potential is a possibility rather than a reality.

I think a lot of people, frankly, don’t feel artificial intelligence making a huge difference in their lives. ChatGPT exists. Sometimes they use it. It’s a faster Google. Sometimes they don’t use it, and their day is really no worse for its absence. And so it’s hard to make contact with a mere promise that a tech CEO makes in Davos.

And then finally, I think that some of the more important bottlenecks that you mentioned are just very complicated. Understanding that AI lives in GPUs, in data centers that are sited in places like Virginia, that require an enormous amount of energy that has to be built in order to power and cool those centers, and that that is the hardware guts of this technology. There’s a lot of complicated stuff there in terms of energy generation and technology siting. And so I think some of the bottlenecks are complex enough that they don’t get the attention that they deserve, even though you could absolutely frame them, I think, as you sort of have, as like the most important bottleneck to the most important technology of the decade.

Like, that’s an incredibly profound framing. So, you know, I have a conceptual answer, which is that negative conflict that exists is what gets attention, and everything else pales in comparison. But there might be reasons that I can’t even think of.

Yeah, I buy that.

What do you think? You’ve lived in New York, you’re a creature of the Bay Area, but you travel around. You’re a broad-minded person. This is a great question to ask you. If this is the most important story of the decade, why isn’t it receiving 10 times more attention?

I think there is a media failure. I think that we have not collectively done a good job of communicating to people, not just the timelines that people who are working on this stuff see, but the sincerity with which they hold these beliefs. I think there’s still a strong narrative out there that, you know, AI is just the next crypto, the next metaverse, something that is, you know, promises to change everything and then doesn’t really materialize or materializes in some much worse form down the line.

So I think people are sort of pattern matching against those previous tech trends. I also think that we’re very bad at thinking in probabilities and thinking about the sort of high-consequence, low- or moderate-probability thing. I think that if there’s even a 10% chance that Dario Amodei and his peers are correct, that we have only two or three years before things fundamentally start to change as a result of AI, that would make it one of the most important issues in the country, one that should be debated on every Sunday show by every politician and talked about at every dinner table.

So I guess that’s my job and your job, to try to update people, to convince them that this is something worth their attention. But yeah, I think there’s a profound gap. I mean, where I live in the Bay Area, people are already starting to alter their daily routines based on the presumption that AGI is coming. I have met people who stopped saving for retirement a year or two ago because they fundamentally believe that money will have no meaning or that we might not live to retirement.

But if we do, money will be so—like, the robots will be taking care of all of our material needs. Now, I’m not saying that’s, like, a consensus view that people are withdrawing from their 401ks en masse, but there are people who are starting to adjust their daily routines and their sort of short-term and long-term planning based on the assumption that all of this is going to be as big a deal as people believe.

So, like, I do not have a good reason for why that is, except for the old line about, like, the future is already here, it’s just not evenly distributed. I think there will be parts of the country and sectors of the economy where this stuff takes off first, and it won’t necessarily be where we expect.

Yeah, I know for a fact that some people are going to listen to the show and they’re going to say, thank you for telling me to focus on this stuff. I’ve read a little bit about AI, and it is really fascinating what’s happening at the frontier. And I can absolutely promise you that people are going to listen to the show and think that we’re totally out of our minds.

And that when they look at their lives and at the newspaper, and even at the macroeconomic data of, you know, hiring and growth rates, it’s just hard to see where the most important technology in human history is, right? Like, little things are changing. Maybe productivity ticks up a little bit here. Maybe the hiring rate in the information sector declines here.

But, you know, we are talking in, like, very, very grand gestures about something that is, frankly, quite difficult to see in the data. And that’s what makes this story, I think, very hard to report on, is that I believe that generative intelligence, generative AI, is unbelievably significant. And I also think that sometimes the story reads to people as, like, people saying that a comet is coming out of the sky, but they can’t produce, like, really, really clear evidence of, like, what that comet looks like.

They can’t quite see what the thing is that’s going to smash into the earth, where it’s going to smash, and what it would do. It’s like, we’re back to this. It’s like a quantum comet. It’s very, very difficult to actually imagine and, like, place in time and space. And I do think that that is one of the reasons why this story is really, really hard to fully get our hands around. A lot of it exists in that space between possibility and reality. And yet it’s being talked about as, like, the most important thing that’s ever happened. That’s a weird combination of descriptors.

Sure is. Kevin, let’s close on crypto. My feeling about crypto is that I’m torn between two instincts. And instinct one is that I try to be, like, naturally positive about new technology. And I want to have conversations with people who are in crypto and see the possibility of actual progress there.

And then instinct two is that, like, I haven’t fully been moved off my spot of thinking that crypto is basically an unregulated global casino and that its accomplishments for humanity are not so distinct from the accomplishments of an unregulated global casino, which is to say money goes in, money comes out, written and unwritten laws and rules of decency are avoided or violated. And then some people get very, very rich.

What do you find interesting about the world of crypto right now?

So I have been interested in crypto for a long time, but as you have, I have been disillusioned and periodically, you know, less interested because it just seems so scammy. The reason that I am growing more interested right now in what’s happening in crypto is not that it’s becoming less scammy. It’s actually that it’s becoming more scammy and more untethered from sort of financial fundamentals.

So the biggest story in crypto right now, which I’m sure you’ve heard about, and maybe your listeners have too, is these Trump meme coins, right? So, last week, just before inauguration, Donald Trump launches an official Trump coin. This is a meme coin on the Solana blockchain. Basically, it is a speculative financial instrument, most of which is held and controlled by entities affiliated with Donald Trump. So he has his own official coin. It zooms to, you know, more than $10 billion in market cap pretty much instantly. It’s since come back down.

Melania Trump then launches her own meme coin, which also becomes worth billions of dollars and then crashes. And we’ve seen this now with a number of sort of micro-internet celebrities. I’m sure you’re familiar with the Hawk Tuah coin that briefly was worth many millions of dollars earlier this year, based on a meme that I will not explain to listeners of your podcast. But basically, we have now created with the crypto economy a way to not just bet on assets the way you might bet on the stock market, but to monetize attention: to take something that is happening, whether it is as fleeting as a meme on TikTok or as durable as the U.S. presidency, and to create a market around nothing tangible, nothing physical, just the idea.

Crypto has created a way to financialize everything. And I just find that fascinating. I don’t think it’s a good trend, to be clear. I think a lot of people are going to lose a lot of money on these meme coins. But with the rise of prediction markets, with the rise of meme coins, with what people expect will be a dramatic deregulation of crypto under the Trump administration, I think we are already a nation of gamblers, and it is only going to become more so over the next four years. And I think that’s going to have a lot of knock-on effects.

Let’s talk about the knock-on effects at the highest level. Americans love gambling in casinos, on their phones, on sports, on stocks, one could argue. I suppose a defender of crypto would say, what’s the difference, right? Sportsbooks financialize sports, stocks financialize corporate cash flow, crypto financializes attention. What’s wrong with that? What is wrong with that?

Yeah, I mean, I should say, like, I enjoy gambling from time to time. I do not do it. I have not put four parlays on FanDuel in the time that we’ve been recording this podcast, but I do enjoy, you know, my annual trip to the casino. I think for consenting adults, that kind of thing should be legal.

Um, but there have been a bunch of studies done by reputable researchers about what happened after sports betting was legalized, looking at the knock-on effects of that. And they found pretty much across the board that in areas that had legalized sports betting, there was lower household savings. People tended to spend a significant chunk of their money on sports betting, and the effects were most pronounced in lower-income households.

So I think, you know, the old knock on the lottery is that it’s a tax on people who are bad at math. I think the rise of legalized gambling and meme coins and speculative, you know, trading, and just putting a casino in your pocket, feels very different to me qualitatively than having a space that is regulated and well-lit that you can choose to enter into or not. It just strikes me as a completely different animal when everyone on the planet can have, you know, their own personal casino in their pocket at all times.

Yeah. I’m inclined to agree. I had Austin Campbell on the show a few weeks ago to make the responsible narrow case for positive crypto use cases. And while I think he might be right about those narrow responsible use cases, those gemstones really are surrounded by a ton of shit. And I wonder if at some point retail investors just get sick of the hype machine and the rug pulls.

Right now, there’s a narrative that crypto is in high demand among the public, but under Biden, it was held back by the government. Is it possible that over the next four years that narrative could flip, in a way where there’s a public backlash to the excesses of crypto the same way there have been public backlashes to other social vices?

Crypto temperance. Yeah, crypto temperance, the idea that crypto’s excesses could end up eating itself. Totally. I mean, I think, um, I enjoy going to the casino once a year. I’m glad I don’t live in one. You know, I’m glad that I don’t have to pass a casino on my way to work every day. I think people do understand that these things have a cost on society.

And that’s to say nothing of the arguments about political corruption. I mean, we haven’t even touched on that, but one of the things that is worrying people about these Trump meme coins is that they have effectively opened up a way to enrich the president without running afoul of any laws. I mean, you can buy millions of dollars’ worth of Trump coin. Effectively, Donald Trump pockets that, and that can become a way to, you know, influence the president.

So I think there’s a whole other angle when it comes to elected officials having their own meme coins. But just on the basic societal level, I do think there’s a possibility that this just becomes financially ruinous to so many people that there is a backlash. We’ve seen that at many points with various vices throughout history: the pendulum swings and then it swings back. And I think, you know, we could see in a couple of years that people just say, I know too many people who have, you know, lost their shirts betting on sports or meme coins or gambling on their phones. And, um, I just think it’s too much.

Yeah. I think you’re right that, you know, your first framing of crypto, which I think is really smart, is that crypto financializes attention. But from the political standpoint, I’m very compelled by the framing that it un-gates corruption, because rather than having to buy a hotel room at the Trump hotel in Washington, DC, which to a certain extent, you know, requires that someone put down a credit card, and the exchange might be auditable, you can buy 10 million, a hundred million dollars of Trump coin.

And that is not as easily traceable. Is that right? And so the corruption is both un-gated and anonymous. And so it’s hard to tell whether the people who are buying the Trump coin are, you know, tech executives or, you know, Chinese officials trying to get a quid pro quo. It really does open up an incredibly nauseous horizon that unfortunately makes our politics really reminiscent of the Gilded Age. And I really want to do a show actually on that.

Well, there’s only one appropriate response to this, which is that you have to launch Derek coin. Yeah, clearly. Derek coin will be launched. Thank you for getting ahead of the announcement. All right. Well, sign me up for 10 million.

All right. Fantastic. You see, this kind of hurts the anonymity of the corruption for us to announce it on the podcast. But, uh, look, as long as I get my 10 mil, I’m happy either way. Kevin, thank you very much.

Thanks for having me.

Many thanks to Kevin Roose of the New York Times and the wonderful Hard Fork podcast. One thing to remember for this episode is the idea that Washington is moving closer to Silicon Valley at the same time that Silicon Valley is moving closer to Washington. And here’s what I mean by that.

For the last few decades, Washington DC and big tech were on opposite coasts in more ways than one. It’s not just that the federal bureaucracy was in the East and the startups were in the West. It’s that Facebook and Google didn’t need special legislation from the government. What they wanted fundamentally was to be left alone. I think we’re in a new age now.

Artificial intelligence training and artificial intelligence use, which in the industry is called inference, requires energy at a scale that demands participation from federal and state governments. And the technology I think could be just years away, not decades, but years away from building tools that will be widely seen as potential digital weapons and therefore as fitting in under the auspices of the Pentagon or being filed as a national security risk or national security asset.

And that means I think we’re looking at a much bigger role for government in tech policy. Maybe another way of saying this is that it’s not an accident that folks like Mark Zuckerberg and Jeff Bezos, to say nothing of Elon Musk, are cozying up to Trump, not to mention the CEO of TikTok, who was there at the inauguration.

In the next few years, federal policy is going to be critical for the advancement of the most important technologies, not just AI, but also rockets, which Musk and Bezos are currently competing on, right? Department of Energy policy was not mission-critical to Uber, DoorDash, or Facebook, but we’re in a new age for tech policy now.

Silicon Valley is moving closer to Washington and Washington is moving closer to Silicon Valley. And I think that’s worth paying attention to. Thank you for listening. And we’ll talk to you next week.