DeepSeek Panic, US vs China, OpenAI $40B?, and Doge Delivers with Travis Kalanick and David Sacks
All right everybody, welcome back to the All In podcast. We’ve got an incredible crew today. Don’t forget to go to our YouTube, blah blah blah, subscribe, and make sure you check out Friedberg’s surprise drop with his hero Ray Dalio, live on all platforms today. How did that come about, Friedberg? Little surprise drop?
Just great! I was talking with Ray about his new book, which he just published on how countries go broke. Obviously, which country is going broke now? I think he talks a lot about the historical context of what’s gone on with the debt cycles in different countries. Basically, at the end of the book, he has a pretty, I think, important recommendation to try and get the U.S. to roughly three percent of GDP as our net deficit, net of all expenses including interest expense. So that’s the recommendation to the administration. I think it’s pretty timely with the change in administration. Anyway, great topics to talk through and really important book. Awesome! Well done!
And we are super delighted to have in the red throne, Travis Kalanick. He is the co-founder and CEO of Cloud Kitchens. He also worked in the cab business for a little bit, co-founder and former CEO of Uber, and uh yeah we had a great interview at the All In Summit last year. He’s back from his media hiatus; he’s been in the lab working on Cloud Kitchens. How you doing, brother?
I’m doing really well! I gotta say, just like at the summit, Jason, I’m—yes, it’s an honor to be in the presence of such a prominent Uber investor. Absolutely! I mean, finally, somebody has recognized my contribution to the greatness of J-Cal. Absolutely—I’ll mention it three or four times; we’ll appreciate it. I’ll give you the props; you don’t have to do it for yourself anymore. Thank you! I appreciate it.
Give everybody a little overview of Cloud Kitchens and the business and how it’s going because people are obviously addicted to ordering food at home, and it’s quite a trend.
Yeah, I mean the high level for it, the way to think about it is it’s about the future of food. What does the future of food look like? You go, well, in a hundred years—start way out there—in a hundred years you’re going to have very high-quality food, very low cost, that’s incredibly convenient, and there are going to be machines that make it. There are going to be machines that get it to you, and it’s going to be exactly tailored to your dietary preferences, your food preferences, etc. And it just comes to you, and it’s so inexpensive that it approaches or has surpassed the cost of going to the grocery store. That’s more of like a today analogy. So you go, a hundred years—of course, that’s the thing. Nobody’s going to be making food. What about 20? What about 10?
The company is real estate, software, and robotics that’s all about the future of food. If you can get the quality there and you can get that cost down to start approaching the cost of going to the grocery store, you do to the kitchen what Uber did to the car. That’s the thing. And it’s like a grind; it’s like a lot of, you know, bits and atoms in the Uber world. This is like five times more atoms per bit. This is like heavy-duty industrial stuff, probably more along the lines of like, you know, where Elon goes in some of his companies. They’re super interesting tech, but you’ve got to grind out those atoms.
Do you see people actually cooking in the future, or does it become a centralized service? Is it optimized to people’s health? What do you think the implications to the food supply are if your vision holds? How do you think about all those things?
Look, people will cook in the future as a hobby. I make a joke at the office; I’m like, I like horses—I love horses—but I don’t ride a horse to work. It’s going to be a little bit like that. Whereas you can cook, it’s a soulful thing to do; it’s very human. But, you know, it’s late—mom gets home late from the office, needs to get the kids a nutritious meal—she doesn’t have to cook it now. She won’t have to cook it, and she won’t have to go to McDonald’s either. It will be high quality and convenient and low cost all at the same time. Yes, dietary preference and everything because it’ll be hyper-personalized, like the way the internet is in content. Plus, plus, plus in terms of your specific preferences for what you want.
I mean, you’ve got these computers rocking—oh, these robots rocking. I think in Philly somewhere, in the lab where they’re making bowls. Yeah, I mean, we’re out of the lab at this point; we have our machine. So we have a machine called a bowl builder that basically makes different cuisine types with bowls. So like think of like Sweetgreen—like what they… yeah, we’re not working with these brands specifically, but it’s a good analogy—like think of Chipotle or Cava or Sweetgreen, or you get the idea. We created test brands that were like those things and built the machine at the same time as we were building an actual restaurant. We built that restaurant to prove that the machine works.
Then we have our customers now touring, checking out—we’re rolling out with five customers in April that are using the machine. The way it’s going to go down is they will come into—and of course, we have the real estate, so we have kitchens—tens of thousands of kitchens around the world. They will come into one of our kitchens in a facility; it’s a delivery-only restaurant. They’ll prep the food in the morning, and then they will leave. Customers will, if you will, order online—DoorDash, Uber Eats, etc.—they’ll order online the way they do today, build your own bowl exactly as you want, and the bowl gets all the ingredients dispensed—hot or cold, sauce, etc.—gets lidded; the bowl goes into a bag, the utensils go into the bag, the bag is sealed, and then it comes out on a conveyor belt.
The machine gets the bag; it goes to the front of the facility, gets put into a locker. That locker then is sitting there; the DoorDash driver comes, waves their phone with an app in front of a camera, and it pops open the locker that has the food that you’re supposed to… That’s so cool! So, if you’re a restaurateur, the grind of the on-demand meal—which is the restaurant world—goes away. You basically prep, and that’s asynchronous from when people order food. The machine does the final assembly or what’s known as plating.
Essentially, do you think there’s a service in the future where my physiology, I can share that with you, with Cloud Kitchens, and you guys can just always be optimizing my food based on what I know is good or bad for me?
So first, what we do is we serve the restaurants. What would happen, Chamath, is you’ll be sharing your dietary preferences with Uber Eats or DoorDash or Sweetgreen or somebody. We like our customer promise at our company: we serve those who serve others. Or put another way, infrastructure for better food. So we are either the AWS or the Nvidia or whatever you want to call it, but for food, if that makes sense. We’re behind the scenes; we’re the infrastructure.
So you’ll give your preferences, right? It should be a brand—like then Sweetgreen or whomever—Chipotle that says, “Hey guys, share with me like a—yes—an encrypted hash of your dietary restrictions, needs, whatever your lipid panel—and I’ll customize this thing.” Then you enable that. It’s pretty close, Chamath, right? You can do that, authenticate your Apple… That’s really awesome. Just authenticate Apple Health. When these bowls come off the line—and see how I talk, it’s like an assembly line—when these bowls come off the line, on the label on the bowl is how many grams of every ingredient is in it, plus a picture of what it was before we put the lid on.
That can be sent to the person while the bowl’s on its way via courier.
What do you think, Travis, about this whole MAHA movement and just the food supply itself? So then what? How does that change? Do restaurants embrace more farm-to-table stuff?
I think, look, what we see with supply chains in a bunch of different industries is it’s just going to get super wired up. So right now, we’re at the point of manufacturing, but what happens? So you go, okay, we’re doing assembly—then you go, okay, what about prep? Then you go further upstream and you’re like, what about supply chain? Like Sysco, US Foods—and then you go further up and you’re like, well how does the mechanization occur on farms and in agriculture? Then how does that all get wired up to serve the customer and sort of what they’re looking for?
You can really know exactly what kind of wheat was put into that food, whether it was organic for real or not. Like, what was the actual field that it came from? Things like this. You can imagine really getting tight about supply chains as it relates to dietary stuff and as it relates to, like, MAHA. Like, hell to the yes! I mean, I ordered a couple different… I went to RFK Jr.’s website, and they have merch; he has MAHA merch. I have the green MAHA merch hat. I should have worn it today. I’m all about it. Get the onesie; that’s amazing!
It was crazy.
Your Bowl Builder, Friedberg, you tried to do this, right? It’s—Diego saw it; he actually visited it when we built it. We designed the system around a canister mechanism, so all the food prep was done in a similar sort of like a commissary model. Then it was loaded in bulk and then put into little canisters, and there were 30 slots in the canister. Yeah, Spencer—and then the canister would move down the device, open up, and you could assemble bowls with rice and beans and all sorts of stuff. The whole thing was automated, and we were in the process of building out our first automated store when I actually took a medical leave of absence from Eatsa, and ultimately, the company did not get it into production. But it—we had great working demos and it was a very—
Yeah, I mean, it was just definitely a no-brainer. So you must love this; you love this.
And at the time, we were—we actually had, I’ll tell you guys this—we actually had a term sheet with Chipotle. This was nine years ago to actually put this into Chipotle stores. And then we were in the early conversations with Sweetgreen at the time as well, and obviously, Jonathan and team have gone on to develop their own system. But, you know, basically you can reduce so much of like QSR down to this bowl-based system and automated as Travis is doing. So it’s just a no-brainer, and it’s certainly necessary in a time when there’s either a labor shortage or labor price inflation that’s causing a real issue with the ability.
And yeah, this is the original automat in New York, in the early 20th century. I love this, but yeah, they had a commissary behind that wall, and they made plates of food. You put in there, you put a quarter in, you turn the knob, and you get your meal out; it’s the classic.
That’s the classic artificial intelligence. Right, this is like the mechanical Turk thing. I mean, look, here’s the thing, here’s the little nuance that’s super interesting about automation in QSR restaurants. They have an existing brick and mortar that’s built a certain way; that layout is meant for humans. And for those humans to work in certain processes in exact and very specific ways, every square inch of that kitchen and that space is dialed. When you go and put a machine like this in, it changes the whole thing. Just to get going, you’ve got to—like, if you’re to replace the front line at Chipotle, you got to take out that front line, you got to demo it, you got to put in a new machine—that’s the challenge that they all had. And so now, it’s like a huge amount of CAPEX; my store’s down for two to three months, and the economics start to not work. And by the way, I still have to have humans in that brick-and-mortar.
So, you know, look, we have a different take; we’re in that delivery-only model. So these are—it’s true infrastructure for making food behind the scenes for delivery, so you don’t have these issues. Of course, our setup—our infrastructure—these kitchens are designed for these kinds of machines to be in them, and vice versa. We’ve designed the machine to be in them.
When we did this early at Eatsa, it was like food delivery was very early. We built these Eatsa restaurants that were smaller footprint. We had an 800-square-foot restaurant that was doing three million a year in revenue, and it had a handful of people working in it. But we were putting about 800 people an hour during the lunch rush through that restaurant ordering custom bowls.
This was by one market. Right, one market exactly. And so, by the way, did you guys notice that J-Cal was plugging his product there in the background even though it has absolutely nothing to do with what Travis was saying? Oh, welcome back to the show. Nothing’s changed. Sacks is here. No one else even noticed that. I just heard this voice from above; it was the czar of AI and crypto. I was like, wow. That’s all—sit back and listen; the czar is back.
Sacks, any anecdotes you want to share about life in D.C.? How exciting has it been in the administration in the first week?
It’s been amazing! I mean, it’s hard to believe it’s only been a week, right? So you’re in the White House or that building next to it. Do you have an office—I mean, the Treasury building? Somebody was talking about there’s a building next to it or something; I don’t know. I have an office in the Old Executive Office Building, otherwise known as the Eisenhower Building, and then I have a pass where I can just walk over to the West Wing if I want to walk over to it. There’s kind of a whole White House complex behind the gates that the West Wing is part of it, and the Eisenhower Building and there’s a couple of other buildings in that complex.
It’s really cool! It is really neat to show up for work at the White House; just saying, it’s awesome! It’s like being in a movie or something, or a TV show. It is really cool!
Any interesting meetings you can talk about? I mean, I know we are here today to talk about DeepSeek, but any interesting meetings or anecdotes from just the vibes and walking around? What’s the coffee like? Is there like a commissary? Do you run into anybody interesting?
There is a commissary actually in the White House called the Navy Mess. I think they’re just opening up for business now. That is one of the cooler things you could do is you can take people to lunch at the Navy Mess. Oh, look forward to it! Jay Cal just invited himself!
I look forward to it; I look forward to taking Chamath and Friedberg there. I’ll wear my MAGA hat. All right, well, let’s get started. You’re here because you—we have a very specific—he’s here because the world is ending, Jason. The Western world is—
Okay, the Western world’s ending, and David Sacks is going to save it. But we had a little bit of a freak-out the last week regarding this DeepSeek. If you don’t know, that’s a Chinese AI startup; they released a new language model called R1 and it’s on par basically with some of the best models in production in the West, like OpenAI’s O1 model. But they claim—and listen, you can trust claims coming out of China, you know, for what it’s worth—they claim to have done this all for six million dollars. For comparison, OpenAI spent reportedly 80 to 100 million to train GPT-4, which you’re all using now, and Sam claims they’re going to spend a billion dollars training GPT-5.
So, that’s about seven percent of the cost of GPT-4. Obviously, there are export restrictions on Nvidia H100s to China, so there’s a big debate as to whether they actually have H100s or not. Monday was a bloodbath in the stock market. Nvidia had the worst day in the history of the stock market in terms of total dollar amount of market cap lost. It was down 17%, which is 600 billion dollars. TSMC was down, ARM was down, Broadcom was down.
So I guess everybody’s asking the question: How did they do this? Did they do it? And then there’s a bunch of debate on whether they stole—which is kind of rich coming from OpenAI, which got caught red-handed stealing everybody else’s content, and now they’re crying foul that the Chinese stole or trained, did what’s called distillation of their model in order to build theirs.
Sacks, obviously you are the czar of AI. I’m curious what your take on all this is, and thanks for coming!
Well, I think one of the really cool things about this job is just that when something like this happens, I get to kind of talk to everyone, and everyone wants to talk. I feel like I’ve talked to—maybe not everyone, but it feels like most of the top people in AI. There’s definitely a lot of takes all over the map on DeepSeek, but I feel like I’ve started to put together a synthesis based on hearing from the top people in the field. It was a bit of a freak-out. I mean, it’s rare that a model release is going to be a global news story or cause a trillion dollars of market cap decline in one day.
So it is interesting to think about why was this such a potent news story. I think it’s because there are two things about that company that are different. One is that obviously it’s a Chinese company rather than an American company, and so you have the whole China versus U.S. competition. The other is it’s an open-source company, or at least it open-sourced the R1 model. So you’ve kind of got the whole open-source versus closed-source debate, and if you take either one of those things out, it probably wouldn’t have been such a big story. But I think the synthesis of these things got a lot of people’s attention.
A huge part of TikTok’s audience, for example, is international; some of them like the idea that the U.S. may not win the AI race, that the U.S. is kind of getting a comeuppance here, and I think that fueled some of the early attention on TikTok. Similarly, there’s a lot of people who are rooting for open-source, or they have animosity towards OpenAI, and so they were kind of rooting for this idea—oh, there’s this open-source model that’s going to give away what OpenAI has done at one-twentieth the cost.
So I think all of these things provided fuel for the story. Now I think the question is okay, well, what should we make of this? I mean, I think there are things that are true about the story and then things that are not true or should be debunked. I think that, let’s call it a true thing here, is that if you had said to people a few weeks ago that the second company to release a reasoning model along the lines of O1 would be a Chinese company, I think people would have been surprised by that. So I think there was a surprise, and just to kind of back up for people, there are two major kinds of AI models now. There’s the base LLM model, like ChatGPT-4o, or the DeepSeek equivalent, V3, which they launched a month ago. That’s basically like a smart PhD; you ask a question, it gives you an answer.
Then there’s the new reasoning models, which are based on reinforcement learning—a sort of a separate process as opposed to pre-training. O1 was the first model released along those lines, and you can think of a reasoning model as like a smart PhD who doesn’t give you a snap answer but actually goes off and does the work. You can give it a much more complicated question, and it’ll break that complicated problem into a subset of smaller problems, and then it’ll go step by step to solve the problem and that’s called chain of thought, right?
So the new generation of agents that are coming are based on this type of idea of chain of thought—that an AI model can sequentially perform tasks and figure out much more complicated problems. So OpenAI was the first to release this type of reasoning model; Google has a similar model they’re working on called Gemini 2.0 Flash Thinking; they’ve released kind of an early prototype of this called Deep Research 1.5; Anthropic has something, but I don’t think they’ve released it yet. So other companies have similar models to O1 either in the works or in some sort of private beta, but DeepSeek was really the next one after OpenAI to release the full public version of it. Moreover, they open-sourced it, and so this created a pretty big splash, and I think it was legitimately surprising to people that the next big company to put out a reasoning model like this would be a Chinese company. Moreover, that they would open-source it, give it away for free, and I think the API access is something like 1/20th the cost.
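For a concrete feel of the base-model-versus-reasoning-model distinction being drawn here, a minimal sketch using an OpenAI-style chat client; the model names and the question are placeholders, not a claim about what any particular lab exposes:

```python
# Illustrative only: contrasts a "snap answer" base model with a reasoning model
# that works through intermediate steps before answering. Model names are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "A train leaves at 3:40pm and the trip takes 2h 35m. When does it arrive?"

# Base LLM: one-shot answer, no extended internal reasoning.
base = client.chat.completions.create(
    model="gpt-4o",  # placeholder for any base chat model
    messages=[{"role": "user", "content": question}],
)

# Reasoning model: internally decomposes the problem into sub-steps
# (chain of thought) before producing the final answer.
reasoning = client.chat.completions.create(
    model="o1-mini",  # placeholder for any reasoning-class model
    messages=[{"role": "user", "content": question}],
)

print(base.choices[0].message.content)
print(reasoning.choices[0].message.content)
```

The practical difference is that the second call spends extra inference-time compute breaking the problem into smaller steps, which is the chain-of-thought behavior described above.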
So all of these things really did drive the news cycle, and I think for good reason because I think that if you’d asked most people in the industry a few weeks ago how far behind is China on AI models, they would say six to 12 months. And now I think they might say something more like three to six months, right? Because O1 was released about four months ago, and R1 is comparable to that. So I think it’s definitely moved up people’s time frames for how close China is on AI.
Now, let’s take the claim that they only did this for six million dollars. On this one, I’m with Palmer Luckey and Brad Gerstner and others, and I think this has been pretty much corroborated by everyone I’ve talked to, that that number should be debunked. So first of all, it’s very hard to validate a claim about how much money went into the training of this model. It’s not something that we can empirically discover, but even if you accept it at face value that six million dollars was for the final training run, then the media is hyping up these stories saying that this Chinese company did it for six million and these dumb American companies did it for a billion.
It’s not an apples to apples comparison, right? I mean, if you were to make the apples to apples comparison, you would need to compare the final training run cost by DeepSeek to that of OpenAI or Anthropic, and what the founder of Anthropic said, and what I think Brad has said being an investor in OpenAI and having talked to them, is that the final training run cost was more in the tens of millions of dollars, about nine or ten months ago. So, you know, it’s not six million versus a billion. The billion-dollar number might include all the hardware they bought, the years of work put into it—a holistic number as opposed to the training-run number.
Yeah, it’s not just the training run; it’s not fair to compare, let’s call it, a soup-to-nuts number, a fully loaded number by American AI companies, to the final training run by the Chinese company.
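As a rough sense of the like-for-like math being described, a back-of-the-envelope sketch; the GPU-hour count and $2/hour rate are the figures DeepSeek itself reported for its final run, and the "tens of millions" range is the comparison mentioned above, so treat all of these as claims rather than audited numbers:

```python
# Back-of-the-envelope: final-training-run cost = GPU-hours * assumed hourly rate.
# Figures below are as reported/claimed, not independently verified.

deepseek_gpu_hours = 2_788_000   # H800 GPU-hours DeepSeek reported for the V3 run
assumed_rate_per_hour = 2.00     # USD per GPU-hour assumed in their paper

deepseek_final_run = deepseek_gpu_hours * assumed_rate_per_hour
print(f"DeepSeek V3 final run (claimed): ${deepseek_final_run/1e6:.1f}M")  # ~$5.6M

# The apples-to-apples comparison is against a US lab's *final run*
# (described above as "tens of millions"), not the fully loaded cost of
# hardware, R&D, and every prior experiment.
us_final_run_low, us_final_run_high = 20e6, 100e6
print(f"Comparable US final-run range (rough): "
      f"${us_final_run_low/1e6:.0f}M-${us_final_run_high/1e6:.0f}M")
```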
But real quick, Sacks, you’ve got an open-source model, and they did—the white paper they put out there is very specific about what they did to make it and sort of the results they got out of it. I don’t think they give the training data, but you could start to stress test what they’ve already put out there and see if you can do it cheap essentially. Like I said, I think it is hard to validate the number. But let’s just assume that we give them credit for the six million number. My point is less that they couldn’t have done it, but just that we need to be comparing likes to likes.
So, if for example, you’re going to look at the fully loaded cost of what it took Deep Seek to get to this point, then you would need to look at what has been the R&D cost to date of all the models and all the experiments and all the training runs they’ve done, right? And the compute cluster that they surely have.
Dylan Patel, who’s a leading semiconductor analyst, has estimated that DeepSeek has about 50,000 Hoppers. Specifically, he said they have about 10,000 H100s, they have 10,000 H800s, and 30,000 H20s.
Now, the cost of a—Sacks, sorry, is that DeepSeek or is it DeepSeek plus the hedge fund? DeepSeek plus the hedge fund, but it’s the same founder, right? And by the way, that doesn’t mean they did anything illegal, right? Because the H100s were banned under export controls in 2022; then they did the H800s in 2023. But this founder was very farsighted; he was very ahead of the curve. Through his hedge fund, he was using AI to basically do algorithmic trading, so he bought these chips a while ago. In any event, you add up the cost of a compute cluster with 50,000-plus Hoppers, and it’s going to be over a billion dollars. So this idea that you’ve got this scrappy company that did it for only six million? Just not true. They have a substantial compute cluster that they use to train their models, and frankly that doesn’t count any chips beyond the 50,000 that they might have obtained in violation of export restrictions, which obviously they’re not going to admit to.
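To put that cluster estimate in dollar terms, a rough sketch; the per-chip prices and the infrastructure multiplier are ballpark assumptions, not quoted figures:

```python
# Rough cluster-cost math for the estimated 50,000 Hopper-class GPUs.
# Per-chip prices are ballpark assumptions; real pricing varies by volume and vintage.
estimated_chips = {
    "H100": (10_000, 30_000),   # (count, assumed USD per chip)
    "H800": (10_000, 25_000),
    "H20":  (30_000, 12_000),
}

gpu_capex = sum(count * price for count, price in estimated_chips.values())

# Networking, storage, power, and facilities typically add a large multiple
# on top of the GPU bill; 1.4x here is an illustrative assumption.
fully_built_out = gpu_capex * 1.4

print(f"GPU capex alone: ~${gpu_capex/1e9:.2f}B")            # ~$0.91B
print(f"With infra overhead: ~${fully_built_out/1e9:.2f}B")  # comfortably over $1B
```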
I think that part of the story got overhyped. It’s hard to know what’s fact and what’s fiction. Everybody who’s on the outside guessing has their own incentive, right? Like, so if you’re a semiconductor analyst that effectively is massively bullish Nvidia, you want it to be true that it wasn’t possible to train on six million dollars. Obviously, if you’re the person that makes an alternative that’s that disruptive, you want it to be true that it was trained on six million dollars.
All of that I think is all speculation. The thing that struck me was how different their approach was, and TK just mentioned this, but if you dig into not just the original white paper of DeepSeek but they’ve also published some subsequent papers that have refined some of the details, I do think that this is a case—and Sacks, you can tell me if you disagree—but this is a case where necessity was the mother of invention.
So, I’ll give you two examples where I just read these things, and I was like, man, these guys are really clever. The first is, as you said, let’s put a pin in whether they distilled O1, which we can talk about in a second. But at the end of the day, these guys were like, well, how am I going to do this reinforcement learning thing? They invented a totally different algorithm. There was the orthodoxy, right? This thing called PPO, that everybody used, and they were like, no, we’re going to use something else called—I think it’s called GRPO or something. It uses a lot less computer memory and it’s highly performant.
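For readers who want the gist of that GRPO idea: instead of PPO's separate learned value (critic) network, a group of sampled answers to the same prompt is scored against the group's own mean and spread, which is roughly where the memory savings come from. A simplified illustration of the advantage step, not DeepSeek's actual code:

```python
import statistics

def grpo_advantages(group_rewards: list[float]) -> list[float]:
    """Group-relative advantages: each sampled answer to the SAME prompt is
    scored against the group's mean/std instead of a learned value network
    (the critic PPO would need, which is where the extra memory goes)."""
    mean = statistics.mean(group_rewards)
    std = statistics.pstdev(group_rewards) or 1.0  # guard against zero spread
    return [(r - mean) / std for r in group_rewards]

# Example: 4 sampled answers to one math prompt, rewarded 1.0 if the final
# answer checks out and 0.0 otherwise (simple rule-based rewards).
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))  # [1.0, -1.0, 1.0, -1.0]

# Correct answers get a positive advantage, wrong ones negative; each answer's
# tokens are then reinforced in proportion to that advantage with a PPO-style
# clipped update, but with no critic network to hold in memory.
```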
So, maybe they were constrained, Sacks, practically speaking by some amount of compute that caused them to find this, which you may not have found if you had just a total surplus of compute availability. And then, the second thing that was crazy is everybody is used to building models and compiling through CUDA, which is Nvidia’s proprietary language—I’ve said a couple of times it’s their biggest moat—but it’s also the biggest threat vector for lock-in. And these guys worked totally around CUDA, and they did something called PTX, which goes right to the bare metal and it’s controllable, and it’s effectively like writing assembly.
Now, the only reason I’m bringing these up is we—meaning the West—with all the money that we’ve had, didn’t come up with these ideas. And I think part of why we didn’t come up with them is not that we’re not smart enough to do it, but we weren’t forced to because the constraints didn’t exist. And so I just wonder how we make sure we learn this principle, meaning when the AI company wakes up and rolls out of bed and some VC gives them two hundred million dollars, maybe that’s not the right answer for a series A or a seed. And maybe the right answer is two million, so that they do these DeepSeek-like innovations—and constraint makes for great art. What do you think, Friedberg, when you’re looking at this?
I think it also enables a new class of investment opportunity given the low cost and the speed. It really highlights that maybe the opportunity to create value doesn’t really sit at that level in the value chain but further upstream. Somebody made a comment on Twitter today that was pretty funny, and I think about this a lot. He’s like, turns out the wrapper may be the moat, which is true at the end of the day. If model performance continues to improve, gets cheaper, and it’s so competitive that it commoditizes much faster than anyone even thought, then the value is going to be created somewhere else in the value chain.
Maybe it’s not the wrapper; maybe it’s with the user. And maybe, by the way, here’s an important point, maybe it’s further out in the economy. You know, when electricity production took off in the United States, it’s not like the companies making all the electricity made all the money; it’s the rest of the economy that accrued a lot of the value.
Well, you’re about to see a big test of this, because OpenAI is raising $40 billion at $340 billion—100 percent—that just hit the wire. The underwriting logic at $340 billion is exactly what you just said, Friedberg: it is the wrapper, meaning ChatGPT is the next killer app. It’s getting to a billion-plus MAU, hundreds of millions of DAU; it’s competing for consumer usage. That’s the model—consumer usage—which puts them on a collision course with Meta. That’s the only company that could really impact that, because it’s the only company right now that has billions of DAUs using it every day. And by the way, Zuck said this in his earnings release. He’s like, there’s only going to be one company that brings AI to a billion-plus people, and it will be us—some version of that quote is in his earnings release yesterday. And then Microsoft showed weakness in the cloud, and Microsoft’s down six percent today. And you know, I think it’s a window for OpenAI to say, we’re going to go up against Meta. This is it; we’re going to be the players. And everyone’s kind of ignoring Google. What do you guys think is happening right now between OpenAI and Microsoft? Because if it’s true that this distillation thing actually happened, well, there’s only one place where you could have distilled the O1 model: it’s on Azure. So what the hell is going on over there?
Well, and there are—one is supported on—explain distillation real quick.
When you have a big, large-parameter model, the way that you get to a smaller, more usable model along the lines of what Sacks mentioned is through this process called distillation, where the big model feeds the little model. So the little model is asking questions of the big model, and you take the answers, and you refine. And by the way, you can see this—Nick, I sent you a clip; you guys can see this. There’s clearly distillation happening. Nick, can you show the clip of the DeepSeek run where it shows the China answer and then deletes it?
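A toy sketch of what "the big model feeds the little model" looks like in practice; the teacher call and the fine-tuning step are stubbed placeholders, since a real pipeline would hit the large model's API and use an actual training framework:

```python
# Toy sketch of distillation: harvest (prompt, answer) pairs from a big
# "teacher" model, then fine-tune a smaller "student" on those pairs.

def query_teacher(prompt: str) -> str:
    # Stub standing in for a call to the large model's API.
    return f"[teacher's detailed, step-by-step answer to: {prompt}]"

prompts = [
    "Prove that the sum of two even numbers is even.",
    "Explain why the sky is blue.",
]

# 1) Collect supervised pairs from the teacher.
training_pairs = [(p, query_teacher(p)) for p in prompts]

# 2) Fine-tune the student on the teacher's outputs (framework call stubbed).
def fine_tune_student(pairs: list[tuple[str, str]]) -> None:
    for prompt, target in pairs:
        # In a real pipeline: tokenize, compute the student's loss against the
        # teacher's text, backpropagate, repeat over a much larger corpus.
        print(f"training on: {prompt[:40]}... -> {target[:40]}...")

fine_tune_student(training_pairs)
```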
What was Winston’s job in 1984, right? And it sort of starts to go through this whole summary, and then the person says, are there any actual states that currently do that? Hold on; here it goes. It says, oh, Korea. Wait. It goes, China! And then, wait! Watch this, boom! So, the reason why this is happening is like you’re seeing this chain of thought; you’re seeing the several layers, and then it’s catching it after the fact. So we know that this is distilled from some other model, and my only point there—it’s the little tongue-in-cheek—is right now when you go and use OpenAI, you’re using it sitting in an Azure instance somewhere, right?
So this is Microsoft’s cloud infrastructure that runs it. So it begs the question; it’s not that it’s O1’s fault or OpenAI’s fault, but this distillation happened. And I’m not trying to assign blame, but typically if this were to happen, you’d look to your cloud provider and say, how are you letting this happen? And I don’t think anybody’s had a good answer for that.
Well, and the cloud provider is hosting R1 now, so they’re literally undercutting their partner OpenAI and pushing people to a cheaper model.
Well, whatever. I mean, look, Amazon’s going to host their own version of R1. Groq has a version of R—
Yeah! We have one! Cerebras just rolled out; it’s open source now!
Who has R1 on his laptop? You know? Yeah, exactly! But if it was—if it was stolen, as Sam is claiming—that would be like, you’d think he’d be able to call up Satya and say, hey, can you not put the stolen IP on your server and promote it to everybody at a lower cost? It just shows Microsoft has no loyalty to OpenAI.
Yeah, and they have—but you think they would have loyalty; they have no loyalty!
What it would take to distill O1, like brute force—it wouldn’t be like, oh, jeez, I can’t believe it was distilled. It would be such a massive number of calls against an API or against something that it wouldn’t go unnoticed.
Well, they did actually—came out and said they blocked some suspicious activity recently.
Yeah, no, but they’re always doing that; that’s constant! You’re always doing that; that’s like, you know, the old school. Let me—you know, go ahead, Sacks. Let me address the distillation point. So, I mentioned this a few days ago on Fox News, that I thought it was likely or possible that distillation had occurred, and there was some evidence for this. And it became like a news story—I didn’t even realize that saying that would be news, because it’s kind of an open secret in Silicon Valley. Everyone I talked to, they’re doing some level of distillation.
Yeah, because you need to test your model against theirs anyways!
Yeah! And every single person I’ve talked to basically has agreed that there was some distillation here from OpenAI. Now, that doesn’t mean it was the only thing going on here. I mean, to be sure, the DeepSeek team is very smart, and there were some innovations, but also there was some distillation. And really, this wasn’t even a fresh news story, I think, from the point of view of Silicon Valley, because a month ago we had a press cycle in Silicon Valley, when DeepSeek’s V3 model came out, about V3 self-identifying as ChatGPT when you would ask it, who are you—like what model are you? Five out of eight times V3 would tell you that it was ChatGPT-4, and there are lots of videos and examples of this online that have been posted. Right? The point is that we knew a month ago that V3 had been trained on a substantial amount of ChatGPT output. Obviously, because V3 was self-identifying as ChatGPT! And there are two ways that that could have happened.
So let’s call it the innocent explanation: DeepSeek had crawled the web, found lots of published output from ChatGPT, and trained on that, which wouldn’t be a violation of OpenAI’s terms of service or their IP.
Or the other explanation would be that they used the API from OpenAI and basically went to town! Yeah, went to town! And there’s no way, I think, based on what we know, to prove that one way or another. But I know what most people think happened. And at the end of the day, OpenAI can probably figure it out, and they’ve indicated that they think there was some improper distillation here. But yeah, in the Financial Times, it says OpenAI says it has found evidence that Chinese artificial intelligence startup DeepSeek used the U.S. company’s proprietary models to train its own open-source competitor. Right? That’s what I’m referring to. So they say—they’ve been very clear about this, by the way. You have to be sympathetic, I think, to OpenAI in this, because if you’re building a startup, you’re trying to raise money.
We’ve all gone through this cycle, guys, where it’s like there’s momentum; we celebrate internally the momentum—that’s what gets you the funding, right? The energy to push your team even further and harder. And then all of a sudden, it turns out that some portion of that—like Travis said well, there’s probably a chart inside of OpenAI’s offices showing how many times these APIs are getting hit, how many times these endpoints are getting hit. It all looks positive, and then you realize that some portion of it was actually bad and trying to undercut your value. It’s a hard pill to swallow, and then you have to course correct very quickly. You have to lock down.
This is one area where security is critical. We have not talked about this. You have to lock these models down now; you have to lock the endpoints down. Look, in the Biden administration, if this had happened, the first conversation would have been, “We need to KYC the people that use these models.” And it’s like, what are you talking about? We don’t KYC the cloud. If you’re trying to use an EC2 endpoint or an S3 bucket, you don’t have to prove who you are—you just use a credit card and go. That’s the whole point of why proliferation can happen so quickly.
But if we take the wrong takeaways from this period, there’s going to be a bunch of people that will clamor to lock these folks down and make innovation go much slower. I don’t think that would be a good outcome. Here’s the other side, and I totally agree, Chamath, but here’s the other side. You go through the white paper; you see what it is they did, what they innovated on—the science behind it, the thoroughness—and you’re like, these guys are badass. It doesn’t feel or sound like somebody who took something. Just when you get through it, it could be that OpenAI wrote the white paper for them. I’m just putting it out there, but it’s real innovative. I agree with that—real innovation, strong tech—you’re like, this is legit.
I agree with that, but in that paper, while they’re fairly transparent about everything else they did, they are not really clear about where the data is coming from. Specifically, they say that to get from V3, which is the base model, to R1, which is the reasoning model, they had about 800,000 samples of reasoning. They were quite unclear about where those reasoning samples came from. By the way, it is remarkable that you can get from a base model to an R1 with just 800,000 samples, but this is the problem: we, meaning the Western AI community, have been trudging around on this path where we’ve had a very orthodox approach. The only way you can do reinforcement learning is through PPO, okay? But is that true?
It turns out that if you’re a really smart team that has no other choice, you move away and invent your way out of it. So we have to get that example, too. I think it’s technically brilliant, some of the things they’ve done, but they also use constraint as very much a feature, not a bug, and the Western AI economy has been the opposite so far. I think the best part of this is the fact that Sam Altman was supposed to be doing open source; he made it a closed source company, he stole everybody’s data, got caught red-handed, and he’s being sued by The New York Times for all that.
Now the Chinese have come and open-sourced all the stuff he stole, and he’s got a real competitor on the original mission of what OpenAI was supposed to do. I have zero sympathy for him or the team over there. I’m glad that it’s all going open-source. It should have been open-source, and it’s better for humanity. The fact that the Chinese did it to Sam Altman after he stole everybody else’s content—that’s my opinion, okay? You have it. But I don’t have strong opinions on it. It’s hilarious! Does nobody see the irony in this? He was supposed to be doing open source.
Well, it is interesting because, J-Cal, I will say the models are closed. You’re right, there was the fight with Scarlett Johansson over using her voice even when she said no. There’s the real question with The New York Times, and then there’s now the question about YouTube data being used to train the video models. So there’s a lot of scrutiny; they’re on their heels a little bit. I definitely see your point about stealing.
I think the pressure right now is on Meta because I think Meta has to show up with the next iteration of Llama that beats and exceeds Gemini—that exceeds R1—and I think that is going to be crucial for us to have a counterweight to whatever China is going to put out after this. But I mean, Chamath, it’s open source. Does it not?
This is my point: embrace and extend. Meta has to embrace and extend everything that these guys have shown, meaning Meta’s buying tens of thousands of NVIDIA GPUs—great. But what did this show? This shows that actually CUDA and high-level languages in general—I think we’ve all known that they suck, but we’ve all been going along thinking that it’s the right thing to do. DeepSeek throws it out the window; they use something called PTX. What Meta does now is critical to understand. They need to embrace this stuff, and this is where I think, again, apologies to the Invidiables, but it’s going to create a more heterogeneous environment.
The reason is that there’s too much money and risk on the line to go through a single point of failure—a chip, a high-level framework to get to that chip—that’s nuts. So I think that kind of emperor has no clothes moment is upon us. Well, let me ask you another question. Let’s assume that we start the world of AI today. So there’s no legacy of the last three years, and you wake up today and there’s this open source model that’s 670 billion parameters. You can run it on your desktop computer; it’s completely available. Everything’s completely transparent.
I ask you the question: forget about all the big companies that are involved in everyone’s strategy historically. What’s the model today to build value here? Where do you build equity value as a business if you’re going to start a company or if you’re going to invest as an investor?
The first is you have to build a shim. I think the reason a shim is really critical is that there’s so much entropy at the model level. What this should show you is, you can’t pick any model. The problem is that the people that manipulate these models—the machine learning engineers and whatnot—they become too oriented to understanding how to get output of high quality using one thing. It shouldn’t have been the case that we have engineers that can only use Sonnet, right? That’s the Anthropic model, right? It shouldn’t be the case that people can only use OpenAI or people can only use Llama. Right now, that is kind of what we have.
You don’t have the flexibility to hot swap as models change. So if you’re starting a company today, the first technical problem I would want to solve for is that because tomorrow if it’s R2 or Alibaba’s model or Llama, I would want to be able to rip it out, put it back in, and have everything work. Right now, we can’t do that. The answer to your question is: the application layer. Because this is all going to become storage—it’s like YouTube being built on top of storage or Uber being built on top of GPS.
All these innovations are being commoditized, and this one is happening faster than all the rest. Do you want to be in the storage business, or do you want to be in the YouTube business? Do you want to be in the Uber business, or do you want to be in the GPS chip business? I mean, they’re both decent businesses, but Gavin Baker came on this podcast and said the fastest-depreciating asset in the world was a large language model. He’s been proven right—they’re not worth anything. They’re all going to be open-source; they’re all going to be commoditized, and that’s for the best of humanity.
Now we’re going to be on the application level, the hardware level, with robots, and I think that’s where the opportunity is. Travis, what do you do? What company do you start today if you start a company today, given where the world is at, given the open-source models? Like, what do you do?
Oh, I’m getting so excited! Look, I think the first, the first degree out is: what’s a wrapper company, okay? So, of course, maybe those companies already exist. And then is there a tools company? Right? So in a funny way, even though Facebook could be the wrapper, they have a tools business that DeepSeek is basically challenging. They’re going full open-source and putting something out there that’s really good. What has to happen is Meta has to decide, “We are going to embrace and extend this. We’re going to make sure that all the developers come to us, that all the cool applications get built here.”
So I think it’s like there’s a tools business and then there’s the wrapper business. When AI gets cheap, you know what’s going to happen, guys? There’s going to be a lot more AI, right? I don’t think— I think the price elasticity on this one is actually positive. So as the price goes down, that’s right—revenue, usage, everything’s going to go up through the roof. This is the history of tech forever, since like Bill Gates said, “I don’t know what to do with more than 640 kilobytes of memory.”
The question is—think cheap oil. Cheap oil in the United States drove the industrial revolution, right? When we started discovering oil, suddenly we were able to build factories and make stuff that we never imagined possible. So then you’re like, okay, AI is like that—it’s going to get cheap, it’s going to be oil, but it’s also going to be specialized for different tasks, right? You’re going to start getting into nuances of like, what does the investor AI look like? What does the autonomous car AI look like? What does the Google search AI look like? What does the lawyer AI look like?
Yeah, so you could go vertical and siloed—siloed, air quotes—but you understand what I’m saying. Yeah, there’s a thing called Jevons paradox which kind of speaks to this concept. Satya actually tweeted about it. It’s an economic concept where, as the cost of a particular resource goes down, the aggregate demand for all consumption of that thing goes up.
So the basic idea is that as the price of AI gets cheaper and cheaper, we’re going to want to use more and more of it. So you might actually get more spending on it in the aggregate. That’s right, because more and more applications will become economically feasible. Exactly, yeah. That is, I think, a powerful argument for why companies are going to want to continue to innovate on frontier models.
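A tiny numeric illustration of that Jevons-style argument; the elasticity figure is an assumption purely for illustration:

```python
# Jevons-style illustration: the price per "unit" of AI falls 10x, but if usage
# is elastic enough, total spend still rises. The elasticity is an assumption.

price_before, price_after = 1.00, 0.10     # cost per unit of AI (10x cheaper)
usage_before = 1_000_000                   # units consumed at the old price

price_elasticity = 1.5  # assumed: a 1% price drop lifts usage by 1.5%
usage_after = usage_before * (price_before / price_after) ** price_elasticity

spend_before = price_before * usage_before
spend_after = price_after * usage_after

print(f"usage: {usage_before:,.0f} -> {usage_after:,.0f}")
print(f"total spend: ${spend_before:,.0f} -> ${spend_after:,.0f}")  # spend goes UP
```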
You guys are taking a very strong point of view that open source is definitely going to win, that the leading model companies are going to get commoditized and therefore there’ll be no return on the capital to basically continue to innovate on the frontier. I’m not sure that’s true. You know, for one thing, the R1 model is basically comparable to O1, which OpenAI released four months ago and was training internally, call it, nine or ten months ago.
So OpenAI is on O3 now; its frontier is ahead of where R1 is. Anthropic and Google, I think, have things in the works, and even Meta, that may be ahead of where R1 is. So I think R1—or DeepSeek—has done a good job being a fast follower here, but it’s not clear that this is the frontier. And those frontier model companies—now having seen what might have happened with distillation—have a pretty strong incentive to make sure that doesn’t happen again. And they’re going to be taking countermeasures.
I mean, there’s a question of like how much you can do to stop it, but I think it’s a little premature to conclude that there’s no reward for being at the frontier. Does anybody have any other questions for Sacks before we drop him off to go back to serving the American people?
Before we drop him off, one final point on the whole open source versus closed source: look, I’m not going to take sides in that, but I think that it’s a mistake to just view what happened here as, oh, it’s this like plucky upstart that’s doing the community a huge service out of the goodness of its heart. You know, it’s basically open sourcing, oh, they stole it, they stole it, it’s the Chinese.
Come on, you still have this huge geopolitical aspect to it, right? DeepSeek is a Chinese company, and they’re trying to catch up. If you’re behind and trying to catch up, then open source is a strategy that actually really makes sense for you, and you know, they’re trying to basically undercut the leading American companies. I don’t think they did it with six million dollars—I mean, they have massive resources behind them. So I think some of the pro-DeepSeek vibes are a little bit naive.
In Silicon Valley, it’s like that’s only the people who worked for Sam previously and quit who feel that way. I think there’s a lot of support for DeepSeek, yeah, in Silicon Valley because, again, people think that they’re doing this huge service for the community. And I think it’s a little bit more self-interested than that. It could be both, right? I mean, there is a theory that they’re trying to undercut and neuter the lead, and at the same time there are a bunch of people who believe in open source and nobody should control this—and certainly Sam Altman shouldn’t be the person who controls it.
So two things could be true at the same time. David, thank you so much for coming on. We appreciate it, and thank you for all—thank you for coming on your own podcast, David. Thank you, David. I know that this is—and now we’re going to talk about a bunch of other crazy stuff. You’re a scholar and a gentleman, David. Yes, thank you.
Alright, thanks to David Sacks for coming in. Let’s open up the aperture here and talk a little bit about relations with China. We’re obviously in a bit of a cold war with them. We have tariffs, we have Taiwan, and then we have the sort of trade war going on here with exports of H100s. Where do we want to start, gentlemen?
Travis, you’ve got some deep—you’re one of probably five American entrepreneurs who ran an at-scale business with Uber and the Didi relationship in China, so you have a unique position of understanding business in this along with maybe Tim Cook and Elon—are the only other two people who’ve really had an at-scale business there—maybe Disney; they have Disneyland there.
What’s your take on the relationship and what’s going on here? How is China going to operate differently than the US? Travis, from your experience and your point of view, tell us a little bit about the culture and business ethics in China, particularly as it relates to AI.
Okay, so look, I had this thing—this is—I’m going back almost 10 years here. Uber China, and I cannot—there’s no way I could express the frenetic intensity of copying that they would do on everything that we would roll out in China. It was so epically intense that I basically had a massive amount of respect for their ability to copy what we did. I just couldn’t believe it.
We would do real hard work, make it, we’d dial it, and it would be epic, it would be awesome. We’d roll it out, and then like two weeks later—boom—they’ve got it. A week later—boom—they’ve got it. Of course, I used that to drive our team, and there are so many great stories. I mean, we had like 400 Chinese nationals in Silicon Valley at our offices in San Francisco. We had a whole floor for the China growth team, and it was primarily Chinese nationals. We had billboards on the 101 in Silicon Valley in Chinese—Uber billboards to join our team in Chinese to serve the homeland, right?
It was like an all-out war. It was really epic. And by the way, when you went to that floor in our office, you were in China. They rolled China style; the desks were literally smaller, like the density of the space. It was China. But what happens is when you get really, really good at copying, and that time gets tighter and tighter and tighter and tighter, you eventually run out of things to copy. And then it flips to creativity—into creativity and innovation.
Now at the beginning, you know, it’s sort of all over the place; like the kind of innovation when it was new was like—what? You know, you’re like really? But as they exercise that muscle, it gets better and better and better. So if you want to know about the future of food—like online food delivery—you don’t go to New York City; you go to Shanghai.
If you went to offices, like let’s say, Shanghai, Beijing, any of the major cities—Hangzhou, etc.—the office buildings have hundreds of lockers around their perimeter so that everything that you get, whether it be food or anything else—especially food—is just the couriers drop them off in these lockers at the office buildings, and then there are a whole other set of people that are sort of like inter-office runners—runners—that then bring it to your office.
As an example, when you see it, you’re like, “What the heck?” It’s epically efficient, and they’re taking advantage of their economics on labor and things like this. It wouldn’t exactly work that way here, but a lot of the innovation you will see coming out on Uber Eats or DoorDash—the stuff that’s coming out now—is stuff that existed three years ago, four years ago in China, maybe longer.
So eventually you cross that threshold of copying, and you are innovating, and then you’re leading, and I think we see that in a whole bunch of different places.
Yeah, here’s a look at these smart lockers that you can see available for sale when you go online. But yeah, these things are crazy, and you’ve experimented with those as well. Didn’t you have a commissary concept in DTLA?
Well, look, we got a couple of things. In every one of our facilities—and we’ve got hundreds of them—we’ll have lockers there. So the courier then waves their phone in front of a camera; the right locker pops open, they get the food from there, and they go. The courier pickup is asynchronous from production of food. You never—you don’t have lines anymore. There are no more lines, which then speeds up delivery, shortens the amount of time, and reduces how much money you spend on couriers.
We’ve got a whole other thing—this doesn’t work and it probably wouldn’t work in China for a lot of reasons—but let me explain what it is. It’s called Picnic, where if you are in an office building, you order food. You go to a website, you order whatever it is from a hundred different restaurants. Those restaurants happen to be in my facilities, and there’ll be one courier that goes to one of our facilities and picks up 50 orders at a time, brings it to an office, and puts it—there’s a shelf on every floor.
You get notified when your food arrives, and it arrives the same time every day, and you just go to the shelf, get it on your floor, and dip right back into your meeting—saving people time at the office, giving them selection on food, especially in food deserts. But even so—there’s a Sweetgreen right downstairs in my current office building right now. I could save 20 minutes by just using our own service versus doing that, and you get it at the same price because of the courier economics—the courier is delivering 50 orders at a time—so courier costs go basically to zero.
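Rough math on why batching 50 orders into one run collapses the per-order courier cost; the per-trip cost here is an assumed number for illustration:

```python
# Why batched delivery to one office building changes the economics:
# a single courier trip is amortized across every order in the batch.
assumed_cost_per_trip = 12.00  # USD for one courier run (illustrative assumption)

for orders_per_trip in (1, 10, 50):
    per_order = assumed_cost_per_trip / orders_per_trip
    print(f"{orders_per_trip:>2} orders/trip -> ${per_order:.2f} courier cost per order")
# 1 order -> $12.00, 10 orders -> $1.20, 50 orders -> $0.24 (approaches zero)
```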
What do we think of the export controls here? Should we maybe ban more H100s or other chips that are going there? Or is that futile? I don’t know the answer to that, and I think that Sacks and the president made a good decision. But here’s the curious case of the export controls: Nick, I sent you a couple of tweets if you want to bring this up.
So the first thing that people are claiming is that DeepSeek is getting access to a bunch of NVIDIA GPUs using Singapore as a backdoor. So essentially, you create a Singaporean shell company, you place an order with NVIDIA, and NVIDIA fulfills that into Singapore, and then the chips go someplace. There are a bunch of examples where people are saying that you’re talking about up to a quarter of all NVIDIA revenue going into Singapore, and the speculation right now is that 100 percent of those then go into China, which is an enormous claim because that’s a huge amount of NVIDIA’s revenue.
Now, the interesting thing is if you actually try to understand, well, maybe that’s not true and maybe it’s sitting inside of Singapore. This is where that kind of unravels. To be clear, Singapore is about 250 or 260 square miles—like it’s a small, small place. Also the TikTok headquarters.
I tried to find out how many data centers are in Singapore and it’s about a hundred. And so you would think that, okay, well, what does that mean? A hundred could mean anything, but then you look at the energy, and they publish that, and all of those hundred data centers consume about 876 megawatts. So these are small data centers, right? The entire industry is like a one-and-a-half to two billion dollar revenue business.
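A rough sanity check of why those power numbers matter; the spend, per-chip price, and per-GPU power figures are ballpark assumptions, not reported data:

```python
# Sanity check: could chips worth a large slice of NVIDIA's revenue plausibly be
# running inside Singapore's ~876 MW of total data-center capacity?
# All inputs are ballpark assumptions for illustration.

singapore_dc_capacity_mw = 876       # total, across ~100 data centers, all workloads
assumed_gpu_spend = 20e9             # USD/yr routed through Singapore (illustrative)
assumed_price_per_gpu = 30_000       # USD per H100-class accelerator
watts_per_gpu_installed = 1_200      # chip + host + networking + cooling overhead

gpus = assumed_gpu_spend / assumed_price_per_gpu
required_mw = gpus * watts_per_gpu_installed / 1e6

print(f"GPUs implied: ~{gpus:,.0f}")
print(f"Power required: ~{required_mw:,.0f} MW vs {singapore_dc_capacity_mw} MW total capacity")
# ~667k GPUs would need ~800 MW -- essentially the island's entire existing
# data-center footprint, which is already serving everything else.
```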
I do think that Sacks and the administration are going to have to dig into this and figure out what their opinion should be, but there is clearly a ton of these chips going into Singapore. I don’t think anybody knows where they end up, and the question is what does America think about that?
Why did we implement these export controls in the first place? If there’s a simple back door, how do you want to react? If the U.S. finds a path—I mean, let’s talk about what happened with sanctions in Russia and other prior sanctioning efforts around the world—as you kind of close the floodgates and close access, the buyer or the receiver of those goods or that capital is going to look elsewhere.
They’re going to look to create a market somewhere else, and so if we do cut off access to NVIDIA chips, we do cut off access to U.S. exports, are we not kind of recognizing that the second-order effect of that is that China will take IP that they’ve stolen, copies that they’ve made, to Travis’s point, and develop and build out their own fabs? They’ll find ways to copy ASML technology. At the end of the day, there’s a lot to put together and I know it’s deeply technically complex, but if ever there were a group of people in the history of human civilization to pull it off, it’s probably the modern Chinese to be able to say, let’s go build our own infrastructure.
This is a great point, but it’s worse than that—the models today are capable of designing chips for you that don’t rely on the most complicated technologies that ASML creates. I mean, look, one of the luckiest things that happened to Groq is that we designed our chip at 14 nanometers, which is effectively older-generation technology, in the spectrum of VHS and Beta.
So we chose a simple technology stack, versus building toward the latest cutting-edge chips at like two nanometers that use these complicated ASML machines. It’s not clear that the yield there is actually that good, so why would you spend all that money? If China is forced to engineer its way around it—yeah, Freeberg, the answer to your question is they’ll use these models to design chips that can be manufactured in simple ways, and they’ll make simple stuff.
So this is not—sure, it solves the problem, is my point. Well, it doesn’t, and this is why I think it doesn’t solve the real problem, which is: how do we incentivize people in America to really out-engineer and out-innovate the competition, so that AI ushers in an era of extraordinary abundance? That abundance ultimately reduces the drive for conflict, and things are better off. The other version as well is that China could just bear the cost, as a central authority, of building an incredibly great model, right?
They will spend all the money, and then they will tell the Chinese companies you can distill from this model for free because we have a golden vote and a seat on your board anyways—which is effectively what happens if you get big enough in China. So there’s that possibility as well, where one central authority bears the CapEx of creating something that then everybody else can draft off of.
Let’s talk a little bit about OpenAI. They’re in Washington asking for money now. Is that the concept now—is that our government should back it? The rumor today was they’re raising $40 billion at a $340 billion pre-money valuation with Masa potentially being the lead.
I would love to get Travis’s read on this because Travis has taken large money from Masa in the past and has been through this. How does he think about and make this decision? Obviously, we all know—and I mentioned to you guys the meeting I had with him last summer where he basically kicked me out of the room because my company is not generative AI.
Someone said you should go meet with Masa, so I’m like, sure, I’ll sit down with him and start talking, and he just looked at me and said, “This is not generative AI. I only do generative AI. I think your company will be very successful. You will be very successful. Goodbye.” And he just walked out.
So great. Well, that’s all he’s doing now, so okay, so I need to bust a myth: I did not take money from Masa. So he begged me to take money for years, and we did not take it because he is a—he’s, what’s the word I’m looking for?—he’s a promiscuous investor. So once he invests in you, you should probably count on him using your information and investing in all of your competitors. At least that’s historically what he’s done.
I didn’t go there, but then he just kept investing in all of my competitors and they kept subsidizing these markets, and then I’m like, maybe I should have just saturated, soaked up the money that was there. So one of the things you should think about, like when you look at like, “Oh, is OpenAI taking a lot of money from a Masa-type situation?” is it’s a little bit of like a double-edged sword.
If you don’t take that money, it goes somewhere else, but if you do take that money, just know that whatever intelligence they get when they go through the process of giving you the money and maybe hanging around the board, or who knows what, will be used to do other things. And that is the nature of the Masa machine.
So you’re damned if you do, damned if you don’t, but you gotta pick. If the money’s going and it’s flowing, and access to capital is a strategic competitive weapon or advantage, you must play ball. Now, we were able to—we did stuff with the Saudis before even Vision Fund existed. They stroked a $3.5 billion check when that was like the biggest thing that ever happened. So we were okay with not having the Masa money, but that Masa money then went to all of our competitors—DoorDash.
In this OpenAI context, Travis, I mean, knowing what you know about AI, is this going to be a competitive advantage for Sam to raise $40 billion? Where does it go when he’s up against— we don’t know what in China, Microsoft, Alphabet, and Meta?
Well, look, I think this goes to some of the things that Chamath is saying, which is: if constraint is the mother of invention, or whatever that aphorism is—if that’s the case, you get into a real weird spot when you get over-capitalized.
In the Uber model, the war was subsidizing rides for market share, essentially being the wrapper for transportation, to use the parlance from earlier in this discussion. So it was necessary; you’re screwed if you don’t. The question is, do you get to this place of over-capitalized, too big, too bureaucratic, too loose, too weak, too soft? And an open-source model that’s very smart, with a thousand flowers blooming—lots of innovation happening everywhere—could be an overwhelming force.
Now I think there are going to be different sectors treated different ways. Going full stack in certain industry sectors is going to matter, and in other places having a very sort of chaotic, everyone-does-a-little-slice approach is going to be okay. And I think we could probably spend dozens of hours just talking about the nuances there.
Well, it seems like there’s some degree of relationship between the Stargate announcement, with Masa and Sam standing up there with Larry and then Satya showing up in the conversation as well, and this raise, and the idea that more hardware and more infrastructure, faster, creates a moat. I guess that’s the real thing you have to believe, which becomes harder to believe in the context of what happened in the last week.
I personally think that these models—and I’ve said this for a while—it doesn’t make sense to have one large do-everything model. With this mixture-of-experts architecture, you can kind of think about taking a large model, making two copies of it, and then shrinking each copy down to whatever is necessary, so that you run each of the two models less frequently.
Meaning that that combination of two models uses less power and takes less time, and then you do the same thing again and shrink it down to four and then 12. Eventually, you have lots of smaller models, some of which in some cases are experts at one thing, like doing mathematics or reading or writing. But the reality is we don’t know whether humans have thought about the world the right way.
The AI may resolve to smaller expert models where we don’t really understand why each one is the expert on something, but you have a network of very small things that work together, and that ultimately leads to commoditization—not just in model cost, development, and runtime, but also in what’s needed.
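For readers who want the mechanical picture behind "mixture of experts," here is a minimal toy routing sketch (not DeepSeek's or anyone's actual architecture): a router scores each token, and only the top-k expert sub-networks run for it, so most of the model sits idle on any given input.

```python
# Toy mixture-of-experts layer: route each token to its top-k experts only.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

router_w = rng.normal(size=(d_model, n_experts))                           # router projection
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]  # toy expert weights

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (d_model,) token embedding -> (d_model,) output, computed by only top_k experts."""
    logits = x @ router_w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                           # softmax over experts
    chosen = np.argsort(probs)[-top_k:]            # indices of the top-k experts
    weights = probs[chosen] / probs[chosen].sum()  # renormalize their routing weights
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (16,): produced by 2 of the 8 experts; the other 6 never ran
```

The compute saving is exactly the point being made above: the total parameter count can keep growing while the work done per token stays roughly constant.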
Do you really create much of an advantage by having all these data centers? Yeah, that’s the key. This is the key point, I think, Freeberg: you’re not going to get an advantage by having more H100s at a certain point, and the actual advantage is going to be in the IP and owning content.
The really smart thing to do would be for somebody to go buy Reddit, Quora, The New York Times, The Washington Post, and Disney, and take all that IP and then not allow other people to use it—sue the hell out of them every time they try. Take Washington Post off that list—
But yes, but I’ll say The New York Times comes off the list too.
Well, whatever! I mean, all those archives are definitely going to be—what would be great about those is you could then act like a patent troll: anybody else who has historically absorbed New York Times stories or Disney content, you could just sue the hell out of them. And then you’ve got the best, most proprietary one.
You’re just describing text, so you’re describing text content which is a fraction of where this is important. Video, I think you can recognize that Google’s YouTube content library is probably 100 to 200 times larger than the rest of the internet combined.
But they don’t have the right to do it.
Well, they do actually. So, you’re such an old-school copyright guy, you’re such an old-school media guy—by the way, sorry! I believe in artists and their right to content.
Yeah, we’ve had a series of conversations that I feel very confident to tell you that they do have the right in a good chunk of that content—not in a lot of the copyrighted content that the big media companies have given them, but a lot of user-generated content they do have the right, and they are using it, and they’re legally doing it.
Then there’s the separate kind of body of content, which I think comes, for example, from Tesla. Tesla has an extraordinary advantage in that they were really prescient to put cameras on everything years ago, and that gives them this ability to build models that do self-driving.
So I think that there’s a lot more data advantage that arises in certain industry segments than others, and that’s where the moat will lie, and that moat will allow you to actually build better products that get you a more persistent advantage in gathering more data. That’s ultimately where I think this resolves to—it may not necessarily be about who’s got the biggest data center network.
Yeah, I mean, here’s the thing, guys: at some point, the amount of data becomes the long pole in the tent. At some point, the quality of the algorithms becomes the long pole in the tent, and more compute is not going to change that. I don’t think we’re there yet; that’s the one thing that counters the "cheap AI means more AI" argument—is there enough data, and are the algorithms good enough, to make all that additional AI actually work?
I do agree with the siloing it and getting expert and getting better in these ways, but I think this is an interesting sort of trade-off between some of these variables.
I just got offered $2,500 to put my book Angel in, because HarperCollins did a deal with Microsoft, and so I’m doing it: $500 per year, I think for three years, is the license. They just did this blanket license for every book. They didn’t look at each book individually; they didn’t look at how desirable it was; it was just a blanket deal. Everybody gets $2,500 per book for three years, and I think I’m going to just do it, just to support proper licensing so that people can start going down this path.
But let’s get into DOGE. I think we’re ten days into this administration, and Trump formally established DOGE—the Department of Government Efficiency—in an executive order. Apparently, Elon’s been spending a lot of time at the offices. DOGE is claiming a bunch of wins on the interwebs, saying it’s saving American taxpayers around a billion dollars a day. That’s three dollars for every American every day—about a thousand dollars a year in savings for each U.S. citizen.
They claim they can triple this, so for a family of five, that’d be about fifteen thousand dollars a year, maybe sixty thousand dollars over Trump’s second term. We’ve got thirty-six trillion dollars in debt, so have fun with those numbers if you like. But the key announcement was very similar to the Twitter execution: the ability for people to resign in a very kind way, with eight months of severance-ish being offered to federal workers. They expect five to ten percent of federal workers to take this buyout, and this could be something like a hundred billion dollars in savings.
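Taking the claimed figures at face value (none of them are independently verified here), the arithmetic behind those round numbers works out roughly like this:

```python
# Quick check of the claimed DOGE savings arithmetic, using the figures as stated.
claimed_daily_savings = 1e9   # "around a billion dollars a day" (claimed, not verified)
us_population = 330e6         # rough U.S. population

per_person_per_day = claimed_daily_savings / us_population
per_person_per_year = per_person_per_day * 365
family_of_five_at_3x = per_person_per_year * 3 * 5
print(f"${per_person_per_day:.2f} per person per day")          # ~ $3
print(f"${per_person_per_year:,.0f} per person per year")       # ~ $1,100
print(f"${family_of_five_at_3x:,.0f} per year for a family of five at triple the rate")  # ~ $16,600
```

So the "three dollars a day," "about a thousand a year," and "roughly fifteen thousand for a family of five" figures are consistent with each other, rounded down a bit.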
Eight months of severance is not actually a legal concept you can just apply here, so these are some sort of buyouts, and there’s obviously some hand-wringing about it, but I think they’re off to a good start. They’ve also been canceling leases, as we talked about pre-election. There is so much space not being used that the federal government is terminating a ton of leases, planning to sell stuff it owns, and consolidating folks.
At the same time, all of this is happening, everybody has to return to office. Who wants to go first here with you know, the sort of first ten days of DOGE? I see some eggplant emojis in the group chat.
First ten days—how do I get on this group chat? What’s that about?
I’m adding you right now. Literally, every time one of these hits the group chat, it’s just hilarious—eggplants. People are like, oh my god, we’re not burning taxpayer dollars! And the eggplant always comes from Freeberg first. I’m outing him as an eggplant!
I’m a big eggplant! Doge eggplant guy oh so much eggplant. Uh, so Freeberg, tell us about how much eggplant you love this. There’s nothing that I would say is particularly surprising in the first week. A lot of this was kind of talked about leading up to the inauguration. Vivek and Elon published their piece in the Wall Street Journal a couple of weeks ago. They talked about the mechanisms of action that they could utilize to kind of drive reduction in cost, one of which was coming back to the office. Another one of which is, you know, giving people a buyout offer. And by the way, the buyout offer is not new; Bill Clinton did the same thing during his presidency.
Yeah, if you guys remember when he tried to balance the budget and get to a surplus, which he did successfully, his intention was to actually reduce U.S. debt to zero by the year 2013. He had a very specific economic and fiscal plan for doing that, which he put into place. Incredible era. I think we’re seeing them take the actions that they said they would take: they said they would demand that federal employees come back to the office, and they assumed some degree of attrition from that. And now the buyout offer, and we’ll see how far things go with the courts with respect to their ability to stop legislatively or statutorily mandated spending.
There’s a big question mark here on how much authority the executive branch has in stopping spending and how much they’re not allowed to stop because it’s dictated by laws passed by Congress. And so that’s going to be the big test over the next couple of months. A lot of lawsuits will fly; the courts will ultimately adjudicate, and we’ll see how far the Doge intention can take things.
Then there’s a separate set of efforts around legislative action here. There’s about a two trillion dollar annual deficit right now in the United States federal government—two trillion a year. If you look at the Dalio book on why countries go broke, there’s a pretty simple arithmetic in there, which is not complicated. It’s just, at the end of the day, the U.S. needs to get our federal deficit down below three percent of GDP, which means we’ve got to cut about a trillion to a trillion one of spending. If we can do that, then we’re in a more economically sound place.
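To make that cut explicit, here is the arithmetic; the GDP figure below is a rough assumption rather than a number from the episode.

```python
# Deficit arithmetic behind the ~3%-of-GDP target (GDP here is a rough assumption).
gdp = 29e12                  # assumed U.S. GDP, roughly
current_deficit = 2.0e12     # "about a two trillion dollar annual deficit"
target_deficit = 0.03 * gdp  # Dalio's ~3%-of-GDP recommendation

cut_needed = current_deficit - target_deficit
print(f"Target deficit: ${target_deficit/1e12:.2f}T")  # ~ $0.87T
print(f"Required cut:   ${cut_needed/1e12:.2f}T")      # ~ $1.1T, i.e. "a trillion to a trillion-one"
```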
By the way, an important point which is in the Dalio interview: as you cut spending, interest rates will come down. Right now, there is a significant sell-off in treasuries and a lot of risk associated with the U.S.’s ability to deliver on its debt obligations over the next 30 years, which is why 30-year treasuries are at five percent right now, even though the Federal Reserve is cutting rates. The rate on treasuries is going up; people are still selling off treasuries. That’s also inflationary, Dave.
That’s right, for sure. As we cut spending, we will also see that there will be less inflation and the U.S.’s ability to pay back its debt obligations over the next 30 years goes up. So the rates will come down, and there’s actually a cyclical effect as these cuts start to materialize. The rate at which you can make the cuts actually affects the number of cuts you have to make.
The faster you make the cuts, the less you have to cut. That’s a really key principle going into this, and I think we should expect a big whirlwind of cutting in the next couple of months or an attempt to do so. The courts will adjudicate what needs to be legislated, and then they’re going to go to Congress and start trying to get some of these cuts in. But I will tell you, once again, after our visit to D.C. last week, there was not a single member of Congress that I spoke with who views cutting to be a mandate for them in the laws that they’re trying to pass. They all have a very different agenda than Doge.
This is really one of those interesting things: the difference between the legislature and the executive branch, and Doge is really bringing it to life. What powers and controls does the executive branch have to spend and not to spend, especially to not spend when it’s been legislated to spend? This is where the action is. There’s no law that says you can give a bunch of folks eight months of severance, and they’re gone, and you don’t replace them.
There’s no law that explicitly says the executive branch can do this, and again, I don’t know all the laws or the rules about how they go about it, but let’s say, presumably, there’s some legal backing behind it. They just go and do it, and now they’re not spending money. If it was really hard to hire people, could they make it even harder to hire people? Do they fight bureaucracy with bureaucracy? They could make it harder to procure certain things that you’re supposed to spend money on. You can reduce the spend through a lot of very interesting, nuanced friction rules that they control.
Some friction could slow things down. They’re talking about putting competency tests in; they’re talking about giving people reviews, and maybe they have to hit some standards. The general riff is: when you force people to come back to the office, you’re going to lose five to ten percent of people, and if ten percent take the buyout, then all of a sudden we’re saving real money. It’ll be interesting to see if it’s five or ten percent on RTO.
It could be a lot more. What I’m hearing about these buildings is that they are super, super empty—like next-level empty. Let’s just say I’m really glad I’m not an owner holding a bunch of leases to the federal government right now. The interesting thing about those leases: I was talking to the team at Density, which does people counting in buildings. They’re obviously very interested in this. The government is such a reliable client that they’re all on one-year leases, so landlords don’t do what they do with startups, which is force them into five or ten years because they know, hey, this company could go out of business.
They’re just like, yeah, we’re just on a rolling year-over-year lease, so you can actually just cut these. It’s going to flood the market.
Chamath, your thoughts on also the stopping of payments because they’re obviously going for it. They stopped all payments, which is part of the playbook I saw on Twitter up close and personal, which is, hey, let’s turn off subscriptions and see if anybody’s using these subscriptions. Obviously, a judge got involved in that, but aid going to other countries, you know, we’re just starting to look at what we’re actually sending to other countries and for what purpose.
Then there’s a naming and shaming, and maybe appealing to the public through social media and saying, hey, do you want this money going here when we have tragedies in our own country that need to be solved? We have healthcare, we have houses burned down, we have infrastructure. Maybe you could talk a little bit about hearts and minds and winning those and what your general take is so far.
I think that we have to remember that we’re only nine or ten days into Doge. The fact that we’re already at a billion dollars a day is really incredible, and there has really been no discernible impact. There have been a lot of fissures of fake news and misinformation, but the real impacts have been negligible to none since they started making those cuts.
I think that Doge is a three-layer onion. So layer one is the people: we have now given a pretty generous offer to folks. I think Elon said it was basically the maximum allowed by these contracts, but they tried to do a very good thing there. The second layer of the onion, as you guys just said, is going to be the infrastructure—all the buildings, all the physical plants that the government owns and operates that may be empty or idle—and getting them back into private hands so that they can be repurposed. That’s going to save a ton of money.
But both of these will pale in comparison to the third layer of this onion, which is the IT and the services and the spend. What I mean by that is when you read how the department is set up, at the center and nucleus of every single one of these Doge teams is an engineer. I think the reason is that they can get into these systems of record and start to trace where the money is going.
I think when you start to uncover, through forensic analysis, where these dollars are going and how it’s spent, that’s probably how you’re going to close the gap from a trillion to, I suspect, to be honest, it could be more than two trillion dollars when it’s all said and done. That is an enormous amount of waste and it’s unproductive.
So I’m very excited for what happens over this next little while. Just the transparency is going to be incredible. Guys, just for kicks, check this out: if we took 2019 spend, the year before COVID, and put it up against 2024 revenues, we’re looking at a 500 billion dollar surplus. Wow. That’s versus the one-and-a-half trillion dollar deficit. So that’s a two trillion dollar swing on a four trillion dollar budget; that’s all waste.
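Just to spell out what "swing" means there, using only the figures as stated in the conversation (neither is verified here):

```python
# The "two trillion dollar swing": hypothetical surplus plus the cited actual deficit.
hypothetical_surplus = 0.5e12  # 2019-level spending run against 2024 revenues, as claimed
cited_deficit = 1.5e12         # the deficit figure cited in the episode
swing = hypothetical_surplus + cited_deficit
print(f"Swing: ${swing/1e12:.1f}T")  # the gap attributed here to post-2019 spending growth
```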
Well, a lot of it. Remember, we’ve got a trillion dollars a year of interest payments now. I mean, guys, this is the thing: there are two deflationary things that we need. One is Doge, and two is where AI is going to take us if it really does its thing. That will keep us in an okay spot economically, but this spend has to go, or we’re in sort of Greek territory, if that makes sense.
I think this is where the popular support for this is pretty incredible. I’ll just go through a couple of numbers with you, looking at what people agree and disagree with in what Trump’s doing early on. Obviously, we talked about it last week, Chamath: pardoning the January 6 protesters and ending requirements for government employees to report gifts—that’s sort of like the Supreme Court thing. These are tremendously unpopular.
Then if you go and you look at downsizing the federal government, imposing a hiring freeze, and requiring all federal employees to return to an office, these are incredibly popular. Elon tweeted these graphs out as well. So right now, you have Trump at the apex of his political popularity, and you have these issues specifically in a very polarized time as incredibly popular.
He’s also done an incredible job with the border; that’s another consensus-based issue. Trump now has downsizing the government and controlling immigration and getting rid of violent immigrants as incredibly popular parts of his mandate, and that’s the big win for him. If you look at his popularity, Trump is massively more popular than he was the first time around—he’s at 49 compared to last time 44.
He’s still the historically least popular president ever, so my point in all of this is, when you see Trump doing things like his meme coin or, you know, taking on Pete Buttigieg yesterday, all that kind of Trump 1.0 negativity, grifting—that’s the stuff that’s going to derail this. But the stuff that’s not going to derail it is focusing on the Trump 2.0 agenda. That is, as somebody who was a never-Trumper, as you all know in the audience, and now someone who is supporting him relentlessly, that margin—that extra 10 percent of people who support him right now is me and other folks who are looking at the people he puts around him.
He has to stay with the 2.0 agenda, as hard as it is, and stay away from the Steve Bannon agenda and the grifting. Those are the things that will take this all apart. So that’s my appeal to them. I told everybody I’d give a letter grade; I give him a B so far. Could do better, but pretty good. Less of the meme coin, less of the drama, and we have to make sure that we’re not dragging dishwashers and teachers and people who’ve been here 20 years out of the country.
It’s going to be a very deft and important approach here if this is going to be sustained. I think it’s a coin toss if he will be able to maintain his popularity. What he did today with this—like, I don’t know if you saw the Pete Buttigieg thing—he was attacking him over this tragedy. That’s the kind of stuff people don’t want. Less of that, please; more of the Doge. That’s my little rant.
Can we talk to Travis about Waymo now? Travis, can I ask, have you taken a production Waymo? Yes. What do you think about it, and do you think that’s the future of transportation? How does Uber play into the self-driving car business now? I mean, look, it’s funny because, as you guys know, back in the day—2015, 16, 17—we had our own autonomous vehicles out there and I remember the first one of ours that I took.
I got in the back, and all I had was a stop button—a big red stop button that I could push if things got weird. And I remember this is in Pittsburgh, where we had our robotics division and autonomy division at Uber. I got out of that car, and literally it’s like I got off a roller coaster ride. My legs were, I could not stand straight; I was a little wobbly because I was so freaked out, and the adrenaline was pumping.
You get in a Waymo today, and you’re not even thinking twice; it’s all good. You just get in, you get out. Now, part of it’s just the normalization. It’s just working, and that normalization matters in terms of the psychology around it. We’re just there, so it just works now. Is it an optimized experience for ride-sharing? No. The Cybercab is sort of the ultimate destination for what it means to be transported across a city in a vehicle that is not meant for a human to drive—no steering wheel, folks potentially facing each other, just a whole bunch of different formats.
The technology works; we know that there are different ways to get to the technology. I think one of the most interesting things to think about is that cheap AI makes cheap autonomy. So as cheap AI gets out there, proliferates, and gets broadly distributed, we should expect autonomy to get easier and easier. You see some of the stuff that’s happening with Tesla and FSD—their new models, I think in a three-month period, went up like 10x in terms of performance, meaning the number of miles per human intervention.
That’s the thing Elon’s seeing right now: cheap, good AI makes cheap, good autonomy, and that’s a thing we need to connect the dots on. Then you go one level past that: okay, there’s the possibility that autonomy just gets easy and commoditized, similar to what’s happening with AI. The next part is the hardware—you’re like, okay, manufacturing is hard. That’s interesting; that could be the long pole in the tent, and I think that could be a place where Tesla, of course, has a huge advantage.
Then you look at who Waymo’s partners are; are they getting set up to do the right kind of manufacturing and get scale of cars out there? But then there’s this dark horse that nobody’s talking about, which is electricity—power. All these vehicles are electric vehicles, and I just did some quick back-of-the-envelope calculations: if all of the miles in California went to EV ride-sharing, you would need to double the energy capacity of California.
Let’s not even talk about what it would take to double the energy capacity in the grid and things like that in California. Let’s not even go there. Even getting 20 percent more, 10 percent more, is going to be a gargantuan five to ten-year exercise. You know, I live in L.A.—it’s a nice area—and we have power outages all the freaking time because the grid is kind of messed up, and they’re sort of upgrading it as things break. That’s literally where we’re at in L.A., one of the most affluent neighborhoods in L.A. That’s just where we are.
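One way to set up Travis's back-of-the-envelope question looks like the sketch below. Every input is an assumption chosen for illustration (California's vehicle miles, EV efficiency, deadhead mileage, and current consumption are not figures from the conversation), the answer swings a lot with those inputs, and peak charging load, rather than annual energy, is arguably the real constraint.

```python
# Back-of-the-envelope: how much electricity would all-EV ride-sharing in California need?
annual_vehicle_miles_ca = 330e9  # assumed annual vehicle miles traveled in California
kwh_per_mile = 0.30              # assumed EV efficiency
deadhead_multiplier = 1.4        # assumed extra empty miles from ride-share repositioning
ca_consumption_twh = 280         # assumed current annual CA electricity consumption, TWh

ev_twh = annual_vehicle_miles_ca * kwh_per_mile * deadhead_multiplier / 1e9
print(f"Added EV demand: ~{ev_twh:.0f} TWh per year")
print(f"Relative to current consumption: +{100 * ev_twh / ca_consumption_twh:.0f}%")
```

Whether the right headline is "roughly half again" or "double," the directional point stands: even a fraction of that buildout is the gargantuan five-to-ten-year grid exercise he describes.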
So I think the dark-horse hot take is combustion-engine AVs, because I don’t know how you go really, really fast getting AVs out there at massive scale with the electric grid as it is. What do you think about regulation in this regard? Because obviously there was the Cruise incident: a person got hit by a regular car, the Cruise vehicle then dragged them, and the whole thing imploded. We had the tragedy at Uber in Arizona, where somebody was playing Candy Crush when they were supposed to be the safety driver.
What is your outlook as this stuff rolls out and someone gets hurt, across the tens of thousands of cities that you brought Uber to? How receptive are they going to be toward this, and what do you think the regulatory framework will look like? You know, I think it’s similar to how things get normalized: you’re used to getting in a car, it’s normalized psychologically and in the public mindset, and you get used to it.
We’re getting to a place where these vehicles are provably safer than human-driven vehicles. So yes, there are mistakes, but they’re just provably safer. People are just getting used to it, and that’s a big part of the cycle. So, I think we’re getting out of the hysteria and we’re getting into like, yeah, it’s great. Talk to people who are using it, and they feel safer—of course, like, I feel like we’re going to get in fewer accidents; but also, I feel safer because there’s less chance of an interpersonal problem that does happen, especially, you know, late at night, when people are out partying.
There is a level of safety on many different aspects for the drivers. Yeah, for the drivers—no, it’s for the—yeah, there’s safety aspects across the board. Sure. What do you think about BYD? And like you sort of mentioned, everybody getting to autonomy at the same time. Obviously, Waymo’s got the biggest lead; Tesla’s behind them; BYD and about ten other providers are out there doing this.
Do ten players get there at the same time, and then it’s just who can incorporate these into their network? What do you think of the strategy that Uber’s doing, of hey, we’ve got these eight partners; we’ll take everybody into the network and we’ll manage people vomiting in the back of cars, cleaning them, and charging them?
Look, I think the big issue you have with anything Chinese is: will you be allowed to bring it into the U.S.? Just period. Like, you maybe kind of can now; what happens with tariffs? Will there be blocks in bringing this kind of technology into the U.S.? What happens there?
I think that’s a whole thing. The bet that Uber is making, whether consciously or subconsciously, is: will cheap, democratized AI happen? If so, does that make for cheap, democratized autonomy? Then you’ve got to line up your physical hardware partners, the car manufacturers. Then you’ve got to say, okay, is the electricity where it needs to be, and are there other bets to make to make sure I can charge my cars?
So there is a huge real estate play here, and a fleet management play: how do I electrify these plots of land known as parking lots, and also set them up so that robots can clean cars in a very, very efficient way? When you talk about that, that’s super interesting, Travis. It’s almost like the idea we all talk about today with data centers, where data centers need their own power substation in order to meet their power demands.
But if we do see a world of robotics automation generally, and we’ve got these kind of moving robotic systems in our world, they need to have a similar sort of power demand met that probably looks like: hey, they all go into their recharge building and they get recharged—whether they’re a car or a humanoid robot or a food delivery robot on the sidewalk or whatever, you know—and they just kind of get recharged.
Robots need actuators; do you know what you need for an actuator? A permanent magnet. You know what you need for a permanent magnet? Rare earths. Who’s the rare earth king? X. China. Greenland. Let’s go!
So guys, I think there are a couple of interesting things. One of them is going to be: how are these companies thinking about real estate, electrifying that real estate in urban environments, and roboticizing that real estate so that they can do the servicing and maintenance, etc.? Look, I guess it could be manual for a while.
Can I put you on the spot? Just go one level above it because—merge the last two concepts together. You talked about—we talked about the federal government, Doge, etc. Isn’t there the potential for just a complete surplus of physical inventory that exists in America?
Oh, yeah! Big time! So what does that mean for commercial real estate? Like, how do you navigate around that? Because you gotta evade the falling knives first.
Okay, so let’s just go down the ride-sharing lane—so autonomous ride-sharing lane. You go down that lane: car ownership, which is already dropping, drops like a knife all the way down. There’s this thing in cities which takes up 20 to 30 percent of all the land—it’s called parking.
It’s no longer necessary, because the cars that exist on the roads are getting utilized 15 times more per car than they were before. So you need, hypothetically, one-fifteenth the number of cars—maybe you could say one-fifth or one-tenth, because you want to be able to surge for rush hour and things like that. It depends on what kind of carpooling and so on is going on.
Let’s just call it 10x fewer cars—one-tenth the land necessary for parking, at least one-tenth; maybe it’s less than that. Okay, so now you’re opening up 20 percent of the land in a city that just goes fallow. But what should we do with that, and is there a demand for that land?
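Putting rough numbers on that parking arithmetic: the utilization figure and the parking share come from the conversation, while the surge buffer below is an assumed, illustrative value.

```python
# Sketch of the fleet-size and parking-land arithmetic described above.
utilization_gain = 15              # "cars... getting utilized 15 times more" per car
surge_headroom = 1.5               # assumed buffer so the fleet can cover rush hour
parking_share_of_city_land = 0.25  # "20 to 30 percent of all the land" is parking

fleet_fraction = surge_headroom / utilization_gain        # ~1/10th of today's cars
land_freed = parking_share_of_city_land * (1 - fleet_fraction)
print(f"Fleet needed: ~{fleet_fraction:.0%} of today's cars")
print(f"City land potentially freed from parking: ~{land_freed:.0%}")
```

That is roughly where the "call it 10x fewer cars" and "opening up 20 percent of the land in a city" figures come from.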
Well, look, I mean maybe it should be housing, you know? And then don’t we have to reevaluate all of the city planning today? Because city planning today, to your point, works backwards from all these constraints that are 1.0 constraints. Here are the traffic flows; here are the traffic patterns. Those don’t exist theoretically anymore, or they would exist in a totally different way, right?
Yeah, I mean we’ve got a massive amount of creativity to say, what can I do with that land with a high ROI, right? Some people are like, you’re gonna have farms, you know, hydroponic farms in urban environments. I’m like, uh, you know, that’s not a bad idea if you want to have farm-to-table healthy food. It’s literally farm-to-table; it’s like a mile away from you, yeah?
So there are some interesting ideas. The land price has to really come crashing down, and there are interesting ramifications if it were to do that. You could imagine that—a bit of a scary thing to think about.
That’s what I wanted you to say; not to try to get you there, but you’re leading the witness. Well, that seems like the crazy thing that nobody is thinking about, which is that in this push, this physical built inventory has so much value built up, from the 401(k)s of individuals to the balance sheets of huge pension funds, and that value could end up being very different.
But the crazy part is, electricity production and electric capacity on the grid could just be the gating factor that makes it a slow burn, potentially. I’m just riffing here, guys.
Right, right, right, right; makes total sense. And if you want to see what happens when you have unlimited land, if you live in Austin and you see the distance between San Antonio, Houston, and Dallas—and Austin in that triangle, you know, you get 30 minutes outside of the city centers, there’s just unlimited land and there’s less regulation.
You know what’s happened? Housing prices and rents have come down two or three years in a row. So this could happen in other major cities, and if Doge brings less regulation, you can build more. It could be amazing for Americans to actually be able to afford homes again, and maybe we convert some of this space into energy storage, electric grid upgrades, sort of modular energy capacity upgrades and production.
These are going to be very, very important. We deal with this all the time; we have, of course, facilities in every major city in the U.S., and really around the world. Utility upgrades are the long pole in the tent for construction and development in a lot of our cities—not all cities, but a lot of them.
The Fed held rates; they’re getting close to the goal of two percent. I guess we’re at 2.4, 2.9 in terms of inflation. Any thoughts on where we’re at with the Fed deciding to not cut? You put it on the docket here, Chamath; any wider thoughts there?
I would just say that the long end of the yield curve is basically telling us that there’s still a chance for inflation. So I think that the question is these next 30 or 60 days from the administration, I think, are basically critical. I think if Doge gets to the three billion a day number quicker than people thought, there’s going to be a lot of room for, I think, the president to make a very valid argument that rates are too high for where they are and that we’re going to be able to have a lot more cost control on the expenses, which means there’ll be less need to spend.
It doesn’t solve the problem that Yellen created—Yellen and Biden on the way out the door. The biggest problem is that they put America in a very difficult position because they issued so much short-term paper that is extremely expensive, and as all of that rolls off, we have to go and finance a ton of this debt at now five percent. Still, nearly thirty percent of the debt is going to get refinanced this year.
Then it’s like, what are these auctions going to look like? Guys, we all saw the last auction; it barely had 2x coverage. I think that could take a lot of the energy out of the market. Watch the Dalio interview, because this is exactly the topic he covers: as we end up needing to refinance this debt, the rates climb, the appetite isn’t there, and it becomes a spiral.
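For a sense of the magnitudes in that refinancing point, here is a rough sketch using only the figures as stated in the conversation (roughly $36 trillion of debt, about 30 percent rolling over, and rates around five percent); none of these are verified here.

```python
# Rough magnitude check on this year's refinancing, using the figures as stated.
total_debt = 36e12
rollover_share = 0.30  # "nearly thirty percent of the debt is going to get refinanced this year"
rate = 0.05            # "finance a ton of this debt at now five percent"

rolled = total_debt * rollover_share
print(f"Debt repriced this year: ~${rolled/1e12:.1f}T")
print(f"Annual interest on that slice at 5%: ~${rolled * rate / 1e9:.0f}B")
print(f"Saved per year for each 1% the rate comes down: ~${rolled * 0.01 / 1e9:.0f}B")
```

That last line is the arithmetic behind "cut fast so rates come down": every point shaved off the rate on the rolling slice is on the order of a hundred billion dollars a year.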
That’s why we have to cut fast in terms of the deficit, to basically attract the market now. The market’s moved a little bit, right? On January 13th, the 30-year treasury peaked at exactly five percent, and it’s come down; today it’s at 4.77. So there’s been a little bit of relief since that peak as the administration has come into office and actually taken action.
But as more of this action is realized—if people do appreciate it, and Doge is successful, and the courts’ adjudication does allow a reduction in spending, which I think is the intention—I think we could see this rate drop from 4.78 much more significantly than where it is, and that’ll create a great deal of relief.
And Dave, it either does that, or it really, really doesn’t. Yeah, or it does, like, the exact super nasty, really bad thing. That’s right. I got a text from someone who is pretty senior in capital markets who thinks this is going to go to five and a half percent before it goes down.
So they think there’s going to be a bit more of a turbulent run ahead. But that whole thing of getting to five and a half before it comes down, by the way, spirals on itself. You’ve got to print money to get to that place, and then the printing drives it further, if you get into that spiral.
The problem is, if we go to five and a half percent, that’s not just 80 basis points. What you really need to think about is the total tonnage of actual dollars that need to get paid back. And if you look backwards, that’s effectively like 10 percent rates from 2000.
Could you imagine what the economy would have done if you had brought rates to 10 or 11 percent 20 years ago? It would have crippled the economy. So we don’t have a lot of room here where you can walk rates up to five and a half, six percent without a lot of things starting to break.
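A sketch of that comparison: because the debt stock is much larger relative to the economy than it was in 2000, a given rate today carries an interest burden comparable to a much higher rate back then. The debt and GDP inputs below are rough, assumed figures, and under them the equivalent rate lands in the low double digits, roughly the "10 or 11 percent" neighborhood being described.

```python
# Interest burden comparison: 5.5% on today's debt vs. the equivalent rate in ~2000.
debt_2000, gdp_2000 = 5.7e12, 10.3e12  # assumed gross federal debt and GDP around 2000
debt_now,  gdp_now  = 36e12,  29e12    # "thirty-six trillion dollars in debt"; GDP assumed

rate_now = 0.055                                # the hypothetical 5.5% discussed above
burden_now = rate_now * debt_now / gdp_now      # interest as a share of GDP
equivalent_rate_2000 = burden_now / (debt_2000 / gdp_2000)
print(f"Interest burden at 5.5% today: {burden_now:.1%} of GDP")
print(f"Rate in 2000 carrying the same burden: ~{equivalent_rate_2000:.1%}")
```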
This is why I actually think Doge will be successful because as people internalize all of these things where every single congressperson, Freeberg, that may have wanted their own benefit for their community, will have to take a step back because the broader optimization for America just needs to take priority.
But Chamath, it just doesn’t work like that, man. My thing is—I agree with the notion, but I just don’t believe that any individual congressperson will take responsibility in this way. They won’t. They won’t. But the question is, can they block it?
Yeah, or put another way, the executive branch can slow-roll spend in a lot of different ways. Except you cannot with Medicare and Social Security; discretionary spending is like 20 percent. The mandatory spending, Social Security, Medicare, Medicaid—these are the larger outlays. And this is where we come back to the fact that this will never, I hear you, get addressed until it has to be, because of the political suicide that arises.
I just think this is where I think Elon’s fame can be helpful, and I mean very specifically this following idea. You know that famous Sputnik comment where NASA spent millions of dollars trying to engineer a pen that could write upside down, and it turned out that in Sputnik, the Russians just took a pencil? That is what we need to do to the U.S. government, because I suspect even though there’s a lot of mandated spend, the real question that nobody knows the answer to is, is that spend useful?
So even though it’s appropriated by Congress, there has to be a feedback loop that says you can just use a pencil; you don’t need the upside-down-writing pen. I think that if there’s anybody who can broadcast that to the world, it’s him. This is where I think Trump gets enormous leverage by having Elon in the West Wing, but nobody else could give him that—the rest of us would just be chirping into the darkness.
Yeah, this is the naming and shaming of government waste that’s actually going to work. And the Doge account on Twitter is doing it; they’re basically saying, hey, we’re giving foreign aid for this project, for that project. Is it going to be perfect every time? No, but you show an empty office space, you show people not coming to work, you show people wasting money.
Well, yeah, if that’s even real. You know, there’s going to be a bunch of back and forth here, but overall you keep naming and shaming each of these projects. And then, you know, they were talking about blockchain, whatever, and reportedly Elon is at the government building working on leases at the moment. Like, this stuff is going to be extraordinarily popular because you can just take the number of 330 million Americans, take whatever you just saved, divide it by that number, and tell every American how much less they just paid in taxes or how much they just saved individually.
The naming, shaming, and doing the back-of-the-envelope math for every American is going to work. Do we want to wrap with a little bit on this tragedy in D.C.? Okay, what are your thoughts? We were talking with our friend Sky Dayton, who is very involved in aviation; he’s got a lot of blog posts he’s done recently, and he’s got a company he invested in that does pilot training.
I’ll share two things. One is anonymous; it’s from a friend of mine who gave it to me and said I could share it. He’s a commercial pilot, and I posted this, so I’ll just read it: “Honestly, DCA is the sketchiest airport we fly into. I feel like the controllers there play fast and loose; hence the periodic runway incursions. I’ve said to every first officer in my threat briefings that we both need to be on red alert at all times. DCA calls out helo traffic, helicopter traffic, and vice versa all the time, but it’s borderline impossible to see them when you’re bombing along at 150 miles per hour.”
I mean, that’s from a pilot; he has no incentive to sugarcoat things. Then I just wanted to read a message from Brian Yutko, who’s the CEO of Wisk, who’s building a lot of these autonomous systems. He said, first, automatic traffic collision avoidance systems do exist right now, but these aircraft will not take control from the pilot to save the aircraft, even if the software and systems on the aircraft know they’re going to collide.
That’s the bit flip that needs to happen in aviation: automation can actually kick in and take over, even in piloted aircraft, to prevent a crash. That’s the minimum of where we need to go. Some fighter jets have something called automatic ground collision avoidance systems that do exactly this when fighter pilots pass out. It’s possible for commercial aviation.
Then the second thing, he said, is we need better ATC—air traffic control—software and automation. Right now, we use VHF radio communications for safety and for critical instructions, and that’s kind of insane. We should be using data links, etc. The whole ATC system runs on 1960s technology; they deserve better software and automation in the control towers.
It’s totally ripe for change; the problem is that attempts at reform have failed. So I just wanted you guys to have those: one from the commercial pilot and two from Brian Yutko, who I think understands this issue really well. There’s so much opportunity here to make this better; this should never have happened.
Sky has also been pushing really hard for the U.S. government to do advanced pilot training. One of the things he says constantly is that a lot of the pushback is just union rhetoric around what they perceive to be the right thing for their constituency. Hopefully this starts the conversation, because I think guys like Sky and guys like Brian are working on the next level of autonomous solutions that can make flying totally, totally safe.
The crazy stat is that we haven’t had a commercial airline disaster in the United States in almost 25 years. I think it was 15. Yeah, it’s looking like pilot error here, and there also seems to be some question of why these Apaches are flying around this really crowded airspace. It seems like they’re shuttling politicians around, and maybe that’s not the best idea in this really dense area, as your pilot friend was referring to, Chamath.
So, thoughts and prayers and all that stuff for the families of the people who died; it’s just a terrible tragedy—terrible tragedy. Yeah, it’s just—this is an area to invest money and use the private sector and all this incredible innovation that’s available to upgrade these systems and infrastructure.
This has been another amazing episode of the All-In Podcast. Thanks, Travis, for joining us. Thank you, TK; that was a lot of fun, guys. First time—this is my first time on a podcast ever! Yes! You guys get right in; you were great. Come back anytime; you were great, man. You were great; appreciate it.
Appreciate it, very based—that’s what’s gonna like it. Tell us what you think, and we’ll see you all next time. Love you boys! Bye! Bye! Bye! We’ll let your winners ride, Rain Man, David Sacks. I’m going all in.
And it said we open-sourced it to the fans, and they’ve just gone crazy with it. Love you, besties! I’m going all in!
We should all just get a room and just have one big huge orgy because they’re all just useless—it’s like this sexual tension, but they just need to release somehow.
I’m going all in.