Josherich's Blog


Cognition CEO Scott Wu on acquiring Windsurf, AI replacing engineers, and the Moneyball-ification of everything

27 Aug 2025


And this is stuff like if I ask you what’s 694 squared? It is 481,636.

I have 163 - one, six, three. I have shuffled the cards, I am not collaborating, and we give them to Scott.

So now you have six cards and you’re trying to make 163, right?

And one way that you could do that here is:

  • 2 times 8 is 16
  • 9 divided by 3 is 3
  • 3 plus 16 is 19
  • 12 times 12 is 144
  • 144 plus 19 is 163
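
(For the curious, here is a minimal brute-force sketch of that kind of "make the target from the cards" search; the six card values are the ones from the example above, and the code is purely illustrative.)

```python
from itertools import combinations
from fractions import Fraction

def reachable(cards):
    """Return every value reachable by combining subsets of `cards`
    with +, -, *, / (a classic Countdown-style brute force)."""
    seen = {}

    def solve(state):
        if state in seen:
            return seen[state]
        results = set(state)
        for i, j in combinations(range(len(state)), 2):
            a, b = state[i], state[j]
            rest = tuple(v for k, v in enumerate(state) if k not in (i, j))
            candidates = {a + b, a - b, b - a, a * b}
            if a != 0:
                candidates.add(b / a)
            if b != 0:
                candidates.add(a / b)
            for c in candidates:
                results |= solve(tuple(sorted(rest + (c,))))
        seen[state] = results
        return results

    return solve(tuple(sorted(Fraction(c) for c in cards)))

if __name__ == "__main__":
    cards = [2, 8, 9, 3, 12, 12]       # the six cards from the example
    print(163 in reachable(cards))     # True: (12*12) + (2*8) + (9/3) = 163
```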

And so almost all combinations can be-

But you’re probably thinking like, “I could have done that, that’s too easy.”

So, for this guy, you can just flip it upside down like that. And for this guy, you can just flip it upside down like that.

Very good. Very good. Very good. Very good.

Scott Wu is the co-founder and CEO of Cognition, which makes Devin, the AI coding agent.

Scott is a triple IOI gold medal winner, kind of famous for being a math whiz, and now he's at the cutting edge of agentic software development.

Cheers.

Tell me about your upbringing and all the math stuff.

Yeah. I feel like you’re known for the math stuff these days.

Yeah, yeah.

So, I grew up, I’m from Baton Rouge. My parents were both chemical engineers and so they immigrated from China for grad school and then naturally when they were looking for jobs they were doing like air emissions permitting and things like that.

And, you know, Louisiana has a lot of oil and gas.

A lot of the air emissions too, check.

Yeah, yeah, yeah.

And so, that’s how we ended up there.

I always loved math as a kid. I had an older brother named Neil.

Super, super close. The whole way through.

And Neil was about five years older than me.

Neil started doing math competitions when he was in middle school.

And so, he would have been in like sixth grade and I was in first grade at the time.

And naturally, I as a little brother would go and, you know, just watch what he was doing and try to learn some of the same math too.

And that’s kind of how I first got into math.

And then, you know, I found that I really enjoyed math competitions and going and competing and doing these things.

And this is stuff like if I ask you, “what’s 694 squared?”

I think it’s probably not quite things of that nature.

It is 481,636.

But it’s things like, you know, yeah, like math puzzles, things like, you know, the frog that’s like going up and then every night falls down the well and how many days.

You know, these kinds of things where you get to-

The ants on the log.

Yeah, yeah, yeah, yeah, yeah, like where you kind of get to do the critical thinking and come up with interesting ideas and stuff like that.

So, I started doing math competitions in second grade.

I remember there was a contest at the local college that I went to which was for like middle schoolers and high schoolers.

And so I competed in the seventh grade math division as a second grader, and that competition was like my first time doing any of these.

I just really liked math and stuff.

And then they were calling out like third place, second place, first place, and none of them were me.

And I still just remember I was just-I was so upset.

That’s your supervillain origin story?

Yeah, yeah, exactly. That’s how it all began, basically.

And so then I trained a bunch.

The next year I was in third grade and I competed in like Algebra 1 or something.

And like I won that year and then I basically kept doing math competitions from there.

My last year of high school, which would have been my junior year, I left a year early.

But I did IOI, the programming olympiad.

Yeah, yeah, yeah.

I did IOI three times and I got gold.

Yeah.

Where'd you go to school?

No, I went-I took a year off, actually.

So I left high school a year early.

I wasn’t that good at school, I guess.

I left high school a year early.

Sorry, that’s surprising.

You weren’t that good at school.

Well, I just, you know, I wasn’t that good at finishing school, you know.

I have a middle school degree, but, you know, I didn’t really make it through high school or college.

So I left high school a year early.

I spent a year actually in the Bay working at a company called Addepar.

Sure, yeah.

And I did that as a software engineer.

That was back in 2014.

Yeah, wow.

And then-yeah, yeah, it was a while ago.

And then after that, I decided, OK, I will go try out college after all and see what that’s like.

I went to Harvard for two years and then I dropped out.

How did you end up at Addepar?

And that’s very forward thinking of them, obviously.

They took on a high school-aged high school dropout.

Yeah, yeah.

It was a fun group.

You know, funnily enough, there were four of us who started at the same time as high schoolers.

And it was myself, Alexander Wang was actually another one. We started on the same day.

Eugene Chen, who’s now running Phoenix Dex.

And then Srinath Ravichandran, who’s most recently at Sandbox as the CDO.

Wait, sorry. This is a real small group theory moment. So you and Alex were in the same- That’s right. So we knew each other. We met in middle school.

Alex now of Meta. Yeah. Now of Meta, that’s right. MSL, I guess. Yeah, and so we met in sixth grade. He was from New Mexico. I was from Louisiana.

But we met in this math competition called Math Counts. We were both at the national competition. And then we started talking. Google Hangouts was the thing at the time.

It turns out there’s some math and AI. Yeah, this may be an inspiration. Yeah, yeah, it’s a fun thing.

Well, a lot of the folks, as it turns out, from our vintage ended up being- I think there’s a real infectiousness of being entrepreneurial, too.

I think Alex deserves a lot of credit for, I’d say, being the first of our group. Alex Wang got you into the idea of starting a company. Yeah, somehow, I think there’s definitely a bunch of that involved, for sure. Yeah.

But also, a lot of folks-

  • Johnny Ho, who’s one of the co-founders of Perplexity, for example.
  • Demi Guo, who started Pika.
  • Jesse Zhang, who started Decagon.

A lot of us were actually competing in these math and programming competitions in the same year. And we all knew each other.

OK, so this gets to something I was wondering. You know, there’s this topic that people talked about a while back of, where are the young founders?

There always used to be kind of people in their early 20s working on breakout companies. You know, Michael Dell was 19 when he started Dell, 23 when he took it public. Obviously, you know, Mark Zuckerberg was very young when he started working on Facebook. And when it was like a real breakout, you know, he was still very young.

And there was a period where there was no young founders. And now there’s many, many more, like a whole bunch of people that you mentioned.

You’re 28.

  1. running Cognition.

Is the presence of young people as founders of leading companies a biomarker for industry vibrancy? Where, you know, Michael Dell was young during the takeoff of the PC era. And, you know, Mark Zuckerberg was young during the takeoff of social networking. And now we’re in the takeoff of kind of AI coding tools. Yeah.

I was going to say, I appreciate you calling me young. I mean, I think relative to being 18 or 19, you know, it’s still a long way. The test is like in your 20s.

So I have, I have a take on this actually. And I’m curious to hear yours on this. I’ve been thinking about this question as well.

And my take is actually just that overall, being a founder has just gotten harder. And that’s probably like the biggest, like the highest order bit.

I think the reason that young founders who were just really sharp and really determined did very well is because, at the end of the day, being a good first-principles thinker does beat experience, you know, and just a lot of being a founder is doing something that has never existed before and coming to your own conclusions.

The thing is now there’s a lot of people who have both, you know, the first principles thinking and the experience. And I think things have gotten a lot more, you know, call it mature as a space.

And so it’s like, you know, but basically it’s, it’s gotten harder, you know, and so, so there are fewer that are literally coming out of college. I think now they’re-

It feels hard to make the claim that, you know, it was easy to start a leading business in prior eras, you know, Facebook faced lots of competition. It’s not like Dell was the only PC maker. And so I don’t think they had it easy by any stretch of the imagination.

However, I think you are getting at something where clearly all the large companies these days, they're very aware, they're very connected with the ecosystem. If you look at a Satya or a Mark Zuckerberg, they are very aware of everything that's going on in AI and they're paying a lot of attention to it.

And so, yeah, maybe there aren’t giant opportunities that are just being left on the ground by the big established companies.

Yeah. And maybe harder is not the right word. It’s more just that the space is a bit more mature and there’s more of a playbook and like more existing knowledge.

You know, there’s obviously something unique with every business, but a lot of the details of, you know:

- Here's how you should structure equity.
- Here's how you should figure out, you know, fundraising.
- Here's how you should hire your initial team.

You know, many of these things I think do carry over a lot with experience, whereas in previous eras the book almost wasn't written at all.

And so it really just came down to how sharp you were and how good you were at making your own decisions. I think now there's a lot more experience to draw from. Maybe that's part of it. I also do kind of just have a theory of, I guess I would call it, the Moneyball-ification of everything.

So, to give a few examples, one of the things that I do casually for fun is playing poker, and poker is a very fun game. It’s actually much more mathematical than a lot of people realize.

It’s very, of course, people kind of think of it as-people know that like the poker solvers and the odds tables and everything like that- “Is it more mathematical than that?” No, no, I think that’s right. I think that’s right.

Well, I think there’s like a first order impression of, you know,

  • it’s all about just knowing what you got.
  • play the person on the other side.

And it obviously is much more mathematical than that.

But the one thing that’s kind of interesting is you see it in the evolution of the top players in the space as well.

Back in the day in the 80s or 90s, the top pros-again, I don’t think the idea is that it’s less competitive-but the skills that made someone a really great poker player were just like, you know, really great intuition.

Like they, I think, understood a lot of the mathematical concepts, but just at a very System 1 level of just being able to kind of think about them.

And obviously they had just a good feel for the game and a good sense of how they should be able to kind of improve their own play.

And now it’s just all math nerds, you know. It’s basically like at some point when the space gets mature enough that, you know,

  • you know what I mean?
  • for a less mature space, when people don’t know what the right questions to ask are, or how to even kind of think about it, like what is the right frame of reference.

Then I think there’s something about having a really sharp intuition and coming to your own conclusions.

And at some point, as these things get more mature, the conclusion of it kind of is math, you know?

And I feel like that’s been the case in a lot of different fields.

I feel like it’s happening a little bit for startups as well. I see more and more spaces have kind of resolved to their underlying, like a chess engine just deciding that the position is, you know, major 41 or something.

Yeah. And chess is totally the same way, by the way, which is like, you know, back in the 1800s, people played what was called the romantic style of play.

Yeah, exactly. The romantic style of play.

And now it’s kind of like, yeah, like there’s a right sequence of moves and you are just seeing how close you are to that optimum.

What are other domains for the Moneyball-ification of everything?

One of my other hobbies, which I played at least before the advent of Cognition, was a game called Super Smash Brothers.

I used to play tournaments for Smash, and you saw very much the same pattern with the game called Melee in particular.

I don’t know if you’ve played Smash Melee.

Okay. It’s for the GameCube, which came out in 2001. So it’s a very old game, but people just still keep playing the same game.

Yes.

For the first six to eight years of the game, the personality was very much really wily, sharp thinkers, people who are quick on their feet and coming up with ideas.

And now it’s just all math, you know.

The people who play and do really well now are like that. I think some of the RTS players are a little bit that way as well.

Yeah. It’s gotten less creative as people have gotten better at that.

Yeah. And it’s a funny thing where there’s a lot of beauty in the nerd side of it too.

It’s just like a difference in what skills get most selected for-that’s maybe the way I’d describe it.

Okay. I’m getting distracted from asking you about Cognition.

What is Cognition? What does it do?

So, we’re building the AI software engineer.

We’ve been building Devin for the last year and a half, and most recently just acquired Windsurf.

Devin is the agent, Windsurf is the IDE, but at a high level, we really want to build the future of software engineering.

Is it confusing for people that you have two brands, you have Cognition, the company, and then Devin, the slightly anthropomorphized instantiation of it?

We’ve been talking about that. Now there’s Windsurf as well, so now there’s a third thing. But I think some consolidation is probably good. Okay. And so people are maybe familiar with the, you know, the GitHub Copilot or the IDE style paradigm where you’re there writing code in your, in your IDE and it helps you autocomplete it, or you can give some instructions in the IDE.

That is not the Cognition Devin paradigm. Instead, with Devin, you're in a Slack channel with Devin and you're prompting it to, like, “go off and build me an X or a Y,” but you're talking to it as you would a coworker in Slack.

That’s right. Yeah. And so you can call it from Slack or Linear or Jira, or, you know, you can call it from your IDE as well, but you don’t have to. Right. But, but yeah, I think that’s exactly right.

You know, there’s been this paradigm, you know, in the past, I would say GitHub Copilot was really the biggest kind of like the most well-known originator of it, of IDEs.

And I would describe it as basically when you are typing at the keyboard as an engineer, making you a little bit faster at it and giving you the tools and the shortcuts and everything to do that faster.

And Devin is a very different paradigm of what I would call like an async experience, right? Where you have an agent and you delegate a task. And so Devin naturally operates a little bit more like at a ticket level or a project level or something like that.

You have some issue in GitHub or something and you tag Devin and then Devin gets to work on it.

Yep. Yep. And what level of task is Devin doing a good job of today?

Yeah. We like to call Devin a junior engineer today. There are some things that an AI, of course, is way, way better than all of us at, especially encyclopedic knowledge and just pulling facts and things like that.

There are some things that it still makes terrible decisions on. But I think that’s the right average overall.

And what we see folks typically using it for are things like:

  • Bugs
  • Simple feature requests
  • Fixes

where you’re talking about an issue and you and your team are figuring out what you should do.

And you’re just like, “Hey, Devin, go do this.”

On the other hand, there are a lot of the more repetitive, tedious tasks that come up often in engineering work.

So that’s often:

  • Migrations
  • Modernizations
  • Refactors
  • Version upgrades

It’s crazy how much testing and documentation consume software engineers’ time. It’s crazy how much of their time is spent more on things like going and fixing your Kubernetes deploy than on things like building and coming up with really good dependency management. Yeah. All that kind of stuff.

What metrics can you share on where the business is at?

Yeah. So Devin is deployed in thousands of companies all over the world. We work with some of the biggest banks in the world, like Goldman Sachs and Citibank, all the way down to startups with two or three people.

In general, a lot of how we look at things is in terms of merged pull requests, and getting Devin to the point where it is a significant percentage of the merged pull requests in an org.

Typically, in a successful org, Devin is merging something in the range of 30 to 40% of all the pull requests that come through.
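
(As a rough illustration of that merged-PR metric, a minimal sketch; the repo, the token, and the agent's bot login are placeholders, and the real account name may differ.)

```python
import requests

GITHUB = "https://api.github.com/search/issues"
HEADERS = {"Authorization": "Bearer <token>"}   # placeholder token
REPO = "your-org/your-repo"                     # placeholder repo
AGENT = "devin-ai-integration[bot]"             # assumed bot login

def merged_pr_count(query: str) -> int:
    """Count merged PRs matching a GitHub search query."""
    r = requests.get(GITHUB, params={"q": query, "per_page": 1}, headers=HEADERS)
    r.raise_for_status()
    return r.json()["total_count"]

total = merged_pr_count(f"repo:{REPO} is:pr is:merged")
by_agent = merged_pr_count(f"repo:{REPO} is:pr is:merged author:{AGENT}")
print(f"Agent share of merged PRs: {by_agent / max(total, 1):.0%}")
```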

And you talked about this async model, but isn't it the case that, as I look at the others, you know, the GitHub Copilots and the Cursors and everything like that, I mean, they are not fully synchronous, because you now prompt them and they go off and do something.

So are these distinctions a moment-in-time thing? Do they kind of go away, where everyone is synchronous in the cases when they can do it instantly and asynchronous in the cases where they can't? Or is this a durable distinction?

It’s a good question. I think the two experiences continue to exist for the next while. And then I actually think that figuring out the shared experience between them actually is the really interesting thing.

And that’s a lot of recently with Windsurf and things like that. It’s something that we’ve already been thinking about and now are pretty excited to ship some things in the near future on.

Do you know the concepts of essential complexity and accidental complexity? Have you heard about this?

Okay. Yeah. And I think there’s a real thing where maybe one way to describe it is the ethos of a software engineer. What it means to be a software engineer in my mind is basically just somebody who solves problems in the context of code, right?

It is somebody who tells the computer what to do and makes all these decisions of, you know, it can be big decisions like:

- What is the right architecture that we want to use for all of this?

Or it can be like a lot of these micro decisions like, “by the way, there’s a case where this balance is less than zero.” And what do we want to do here? Should we show an error or should we request this or whatever.

And all of these decisions are what people typically call the essential complexity of what is all of the actual underlying logic of the decisions of what the software is doing.

The accidental complexity is basically everything else, like all the things that you have to do to support things as they scale, or all of your standard features. For example, anytime you have a class, you probably have all the standard CRUD features along with that as well, where everyone knows that you need to have that in your class, but there’s no real decision that needs to be made in terms of going and doing that.
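
(To make the CRUD example concrete, here is a minimal sketch of that kind of boilerplate: an in-memory store written purely for illustration.)

```python
import uuid

class CrudStore:
    """Generic create/read/update/delete boilerplate: the 'accidental
    complexity' that almost every entity needs but that involves no
    real product decisions."""

    def __init__(self):
        self._rows = {}

    def create(self, **fields):
        row_id = str(uuid.uuid4())
        self._rows[row_id] = {"id": row_id, **fields}
        return self._rows[row_id]

    def read(self, row_id):
        return self._rows.get(row_id)

    def update(self, row_id, **fields):
        self._rows[row_id].update(fields)
        return self._rows[row_id]

    def delete(self, row_id):
        return self._rows.pop(row_id, None)

# The essential complexity lives elsewhere, e.g. "what should happen when a
# balance would go below zero?" -- that is a product decision, not boilerplate.
accounts = CrudStore()
acct = accounts.create(owner="alice", balance=0)
```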

There’s an interesting thing, which is, up until AI coding has come along, I feel like the meat of software engineering has been in making the decisions. Yet, you spend 80 or 90% of your time doing more of the latter, just going and doing the routine implementation and so on.

I think this merged experience that comes up is basically something where, for anything that actually needs you in the loop, where you can make the decision and you're looking at the high-level strategy or deciding what you want to build, you're involved and you're doing that synchronously.

Then for all the parts that are rote execution, you are able to hand that off asynchronously.

The interesting thing is that for an individual project, there are typically long stretches that actually are one or the other, and it alternates between both of them.

What that will effectively look like is:

  • The synchronous experience is the IDE where you are looking at the code directly and you see each of these things.
  • The asynchronous experience is the agent that will go off and do each of these things.

But you're able to go back and forth between your IDE and the agent. You want the engineer to be interacting with the agent as it's working, but at the high-impact moments of important choices as opposed to all the grunt work.


How do you get large enterprises comfortable with giving Devin sufficient permissions to be effective?

So, like, you talk about the migration use case - super boring. You change the table and get it pointing to the new table and then eventually you delete the old table.

And that last step is kind of scary.

People still have fear of the model making something up and doing it, although models hallucinate way less than they did before.

So, how do you get people comfortable with giving it enough power to be effective?

  • We strongly recommend that people using Devin don’t give it production database access, for example.
  • I don’t know of any instances where it has been an issue, but you’d rather not take that chance.

The framing I would give is,

“We have processes for these things because humans make mistakes too. That’s why we have pull requests and reviews, continuous integration (CI), and all these things already.”

Devin naturally slots neatly into all of these things.

Typically, the way folks will work with Devin is they’re doing some big code migration and will break up the task. Maybe they have 50,000 files that all need to be upgraded from one version of Angular to another. Devin will do each one and make pull requests.
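
(A sketch of that fan-out workflow; the AgentClient class and its run_task method are hypothetical stand-ins, not Devin's actual API, and the Angular versions are placeholders.)

```python
from pathlib import Path

class AgentClient:
    """Hypothetical stand-in for an agent API; not Devin's real interface."""
    def run_task(self, prompt: str) -> str:
        raise NotImplementedError("wire this up to your agent of choice")

def migrate_in_reviewable_chunks(files: list[Path], agent: AgentClient) -> None:
    """Fan a large migration out into one small, human-reviewable PR per file,
    so the existing review and CI process stays in the loop."""
    for f in files:
        agent.run_task(
            f"Upgrade {f} from Angular 14 to Angular 17, run the test suite, "
            f"and open a pull request containing only this file's changes."
        )
        # A human reviews and merges each PR; nothing lands without review.
```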

So you review the code and make sure things look correct, but there's still a human in the loop.

It’s back to your point of incidental complexity: the reason a migration is time-consuming is not the actual single deletion step; all the time costs come in other places.

In practice, what we see especially in enterprise migrations is when folks measure internally, they see something like an 8 to 15x gain for a lot of use cases with Devin.

Because, as you’re saying, you’re just reviewing the code. You’re not writing every single line or going through every single reference yourself.


Let’s talk about that because all organizations around the world are trying to figure out the productivity impact of AI coding.

Everyone sees engineers want access to AI tools for coding.

What’s not totally obvious is the impact on metrics like PRs per developer or what’s happening in that space. Generally, you see some increase there. But, of course, it’s not clear how good even a pull request per dev metric is. And then maybe you can say that there is some ongoing maintenance cost if you’re shipping low quality code or something like that.

Yeah. And so I feel like everyone right now is looking for some slam dunk productivity data on what the impact is - you know, there's probably some CTOs looking for the slam dunk data to justify the spend to their CFO.

So what’s your view on how big is the productivity impact? Is it actually measurable?

Yeah, for sure. Yeah. So I think this is something where this gradual shift towards agents will actually help a lot, as it turns out. If anything, to be honest, I think IDE productivity is often underrated because, you know, how do you measure it, to your point, right?

Like you look at the numbers and it’s, you know, of our engineering org on average, people took the tab completion 238 times this week. It seems quite clear that that should be worth something and it should make you faster, but how much faster does it make you, it’s a bit harder to say.

On the other hand, with agents, a lot of the workflow obviously is going and doing the task for you, right? And so if it’s a Jira ticket or something or a migration or things like that where you typically do have a good sense of:

  • How many engineering hours are going to be needed for this
  • What’s going on

And because it’s doing the whole thing end to end, it’s a lot more clear of like, “yeah, you didn’t have to do this migration anymore. You reviewed the PR in five minutes and like that’s all done.”

And I think as time goes on, I think these things will become more and more clear.

There is a view that some people have out there that coding tools are a moment in time thing that get run over by increasing model performance, you know, GPT-6 or GPT-7.

Yeah. Presumably you do not hold this view.

Yeah. How do you avoid getting run over by the labs?

Yeah, yeah, for sure. So look, I think the labs are obviously, like, I think they're incredible businesses. As best as I understand it, I would kind of describe this view as, call it, the nihilist computer use take, which is just: of course, all of these different things that we do in the world, you know, in knowledge work, just involve using a computer.

And the AI is going to get better and better and better at using the computer until someday there is nothing left except just the AI going and using your computer and doing your work for you is to the best of my understanding kind of the argument there.

I see the wisdom of it. This is the kind of thing that's very hard to disprove. But I think that in practice, what we've seen in the space is that naturally there is a lot of contextual knowledge. There's a lot of industry detail. And so, you know, as we were saying, like going and doing some Angular migration - it's not to say that these things can't get better.

In fact, I think they will continue to get much better. But I think that the way that we make models better and better at them is by giving it the right data of like, you know:

How good can you be at Angular migrations if you've never seen Angular, if you've never done an Angular migration yourself, right?

And there’s this kind of a cap on that. And obviously, there are all sorts of these things of, you know, using your datadog to go and debug errors.

I think the biggest thing I would just say here is software engineering in the real world is so messy, and there’s all sorts of these things that come up. And I think in practice, most disciplines look like this. And I would say the same thing about law or medicine or and so on.

And so while the general intelligence will continue to get smarter and smarter, I think there is still a lot of work to do in making something both, you know, on the capability side really good for your particular use cases, but also in actually going and delivering a product experience and bringing that to customers of how that actually happens in the real world.

So it’s not a general intelligence task. It’s a specific intelligence of, you know, working in the Stripe code base requires some general intelligence, but requires a bunch of context, requires working within the workflows we have and everything like that.

And you think that persists as an area where you need to specialize?

Yeah, exactly. Maybe one way to put it is I think the argument is something like a super intelligence. And I think in some sense, yes, I think we are part of, you could consider us short super intelligence.

I think what we’re getting to with RL as this thing is improving and improving, like, and we see more and more of the gains and people are developing the techniques. You know, I think of RL and this paradigm of AI as basically the platonic ideal of it is the ability to solve any benchmark, right?

You have exactly a data set of here are the things that you want and here’s how we measure success and here’s how we do that. And whatever that benchmark is, it can be the hardest thing ever, you know, it can be like unsolved math problems or whatever.

Someday we want to get to the point where we can just take that and train a model that will just get 100% on it. And I think, frankly, we’re moving towards that idea a lot faster than most folks would have expected.

I think we’re really, I mean, there’s been some pretty crazy developments like the IMO gold medal or like, you know, the scores on SweetBanch or things like that. The thing is, when that happens, I don’t think what we end up with is just pure ASI, end of humanity, human knowledge work or whatever.

I think the thing that we end up in is basically a point where the hard question is, all right, now what is the benchmark, right? And I think defining the benchmark in all of these spaces is kind of like a lot of the practical, real messiness of the world, right?

And so for a software engineer, obviously, you know, it’s like, yeah, like what are all the tools that you interact with on a day-to-day basis? How do you use those tools? You know, what does it mean to build a representation of the code base over time? How do you decide whether shipping the feature was successful or not successful? You know, all of these various things and creating the right environments around them.

And so can there be a good benchmark for a model's performance on the kinds of things that Devin wants to do? Or is it just that Devin's business model and, you know, Devin's revenue is the benchmark, essentially?

“Yeah, yeah, it’s a good question.”

From our perspective, we have a lot of benchmarks internally. You know, the biggest is one that we call junior dev, which we might need to upgrade to senior dev pretty soon. But it’s basically the ability to do a variety of just random real-world junior dev tasks.

And so, you know, we’ve shared some of the examples. Obviously, we don’t publish the whole benchmark because then it would, you know, get obviated. But a lot of the tasks are things like:

  • Fix this Grafana dashboard and get this going
  • Pull up the results

And, you know, this is a very common thing that a software engineer does, right? And the thing that’s hard about it is perhaps not some algorithmic coding thing itself, but it’s like, turns out on the setup, actually, the server that’s hosting this is running the wrong version of some package.

And so you have to go through the errors and figure out what happened and then say, okay, I need to downgrade the package to this other one, which is actually the right dependency for this thing. And then I need to run it and pull this up and make sure the numbers look correct. You know, things like that, which are basically as close as we can make them to what real software engineers spend their time on.
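
(A task like that might be encoded roughly as follows; this structure is an assumption for illustration, not the actual format of Cognition's internal benchmark.)

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchTask:
    """One real-world-style task: an environment to set up, an instruction,
    and a programmatic check of whether the outcome is actually correct."""
    name: str
    setup: Callable[[], None]   # e.g. boot a VM with the wrong package version pinned
    prompt: str                 # what the agent is asked to do
    check: Callable[[], bool]   # e.g. does the dashboard render the right numbers?

def grafana_dashboard_is_up() -> bool:
    # Placeholder success check; a real one might hit the dashboard URL
    # and verify the expected panels and numbers render.
    return False

task = BenchTask(
    name="fix-grafana-dashboard",
    setup=lambda: None,         # placeholder for provisioning the broken environment
    prompt=(
        "The Grafana dashboard is broken. Work through the errors, fix the "
        "dependency issue on the host, and get the dashboard rendering again."
    ),
    check=grafana_dashboard_is_up,
)
```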

So how have the newly released Claude 4.1 and GPT-5 done on this benchmark?

“Yeah. I mean, both of them are, the two of them are better at this benchmark than any of the models that we’ve seen before this week.”

As you think about the AI business and industry over the next five to ten years, you can think about all the different layers of the stack:

  • The data centers
  • The labs
  • The application layers, such as yourself

“Yeah.”

Who benefits? Like, what gets more competitive? What gets less competitive? Are all these just classic competitive oligopolies?

“Yeah. Yeah. What’s the market structure?”

So everyone always makes fun of me whenever I say this, but I think all the layers are going to do very well. There’s just going to be a lot of AI.

I think the prices are too cheap everywhere. I've been saying this at least for the last six to 12 months, and I think we've seen prices go up a decent bit across all of these.

But no, at a high level, yeah, first of all, there's going to be a lot of AI. It can't be overstated, in the sense that, like, I think we're kind of coming off of a decade of a lot of various, you know, B2B SaaS.

And so, you know, I think there was the internet, obviously, in the 90s and early 2000s. And then there was the mobile phone and cloud, which were kind of late 2000s, early 2010s, right? And those were some of the biggest things in the last 30 years.

Over the last 10 years or so, I think there was a real time where most of the stuff that was being built was a lot more incremental, basically, right? Like, each next thing and building for a particular niche or for a small part of the workflow and making that more efficient. And AI now, I think, is the total opposite of that in the sense that, now we’re talking about the entirety of knowledge work and perhaps the entirety of physical work as well, depending on what happens with robotics, right?

And so, first thing is there’s just going to be a lot of AI. Yes.

And the second thing about where does the value accrue, my honest answer on that is the simple thing is value accrues wherever there’s meaningful differentiation in the layer, right? You know, simple, like, if there’s NVIDIA and there’s TSMC and, you know, for as long as NVIDIA needs to work with TSMC and for as long as TSMC needs to work with NVIDIA, of course, there will be some rubbing up on each other’s shoulders. But, like, they will continue to do great, right?

And you kind of see this down the stack as well, right? I would argue that the problems that are being solved in all these different layers are very, very different problems that have pretty meaningful differentiation, right?

  • You’re saying this prevents too much vertical integration, basically, where you get the layers kind of keep each doing their own thing?
  • Exactly.
  • Yeah, yeah.

And I think there’s a real difference where, yeah, as soon as you go from hardware to, obviously, foundation model training is its whole own can of worms and very much, like, the DNA of the companies is finding exceptionally strong researchers, giving them as many GPUs as you can afford to give them, and setting up a culture that kind of orients around that, right?

And then the application layer, I would say, is really focused, I would say, obviously, it has a lot of the elements of research as well, but I think in particular is really, really focused on just figuring out how to make one use case work.

For us, for example, like, the only thing that we care about is making, you know, building the future of software engineering.

And maybe one thing I would call out is, like, you know, people often talk about AI code abstractly in a vacuum. I think there are a lot of companies that think about code, you know, in the foundation model layer or things like that.

Like, I think we uniquely really think about software engineering, right, and all of the messiness that that comes with, and all the product interface, and all of the delivery, and the usage model, and, of course, like, a lot of these particular capabilities that come with that.

So, I think there’s, like, a real, you know, everyone has their own DNA, and everyone has their own things that they do best.

That makes sense.

We at Stripe have been thinking a lot about building the economic infrastructure for AI, and what is required. You can have an agent acting on behalf of a person, and you want to be able to just be prompting or doing stuff in your app, and part of the tool use that your AI can engage in is going off and conducting commerce in the real world.

And so, we’re building infrastructure for that.

And then we notice that because of the economics of AI, everyone has usage-based models, right, per token, per what have you.

And so, we’re building out, you know, usage-based billing infrastructure.

And, again, we find the billing systems people are building on Stripe, they’re very different from the classic SaaS per seat pricing, whereas, again, everything in AI is per unit consumed.
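
(For what it's worth, that per-unit model maps fairly directly onto metered billing. A minimal sketch using Stripe's Python library and its classic usage-record call, with a placeholder API key and subscription item; newer integrations may use Billing Meters instead.)

```python
import time
import stripe

stripe.api_key = "sk_test_..."   # placeholder key

# Report the tokens (or agent-minutes, or tasks) a customer consumed against a
# metered subscription item; Stripe aggregates these reports into the invoice.
stripe.SubscriptionItem.create_usage_record(
    "si_placeholder",            # metered subscription item for this customer
    quantity=125_000,            # e.g. tokens consumed since the last report
    timestamp=int(time.time()),
    action="increment",
)
```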

How you can get into how the agents engage in commerce with each other, where there’s, you know, no human in the loop.

So, there are all these ways in which our product roadmap is being formed.

But I’m curious what you think the economic infrastructure for AI needs to look like. Are there things that we should be keeping in mind?

Yeah, yeah, for sure. Yeah, seat-based to usage-based, big, big, big, big one for sure. I think on both sides, right, from the perspective of one, seats don’t really make sense when the AI themselves are arguably seats as well, you know, like, they’re doing a lot of the labor, too.

And then on the other side, I think, you know, usage obviously just goes so naturally with the COGS themselves because a lot of this is, you know, effectively GPU spend on how much you’re spinning the models, basically.

And so, I think that makes a ton of sense.

The other big one which comes to mind, obviously, is just for there to be an entire agent economy as well, right?

And so, I think today, I would say, is, you know, still probably more of a talking point than reality.

But I think things are pretty rapidly changing and getting to the point where your agents are, you know, funnily enough, we use Devin.

Devin is obviously entirely focused towards software engineering.

But, like, we order our DoorDash on Devin.

You know, we order our Amazon packages with Devin.

And it’s, like, there are pieces of that that turn out to work nicely anyway.

So, you order your Amazon packages with Devin?

Yeah.

So, you're just in Slack and you ask it to buy something for you?

Yeah, yeah. Like, just “@Devin, can you go buy some more whiteboards for us” or something like that. Yeah.

At a certain point, do the real-world things you ask Devin to do run into blockers with sites trying to block bot activity? You know, a lot of Devin working really well obviously relies on Devin being able to do these things and get through. But some of these things, you know, I think are quite natural with the model, which is, you know, you often have API keys or secrets or things like that that you want Devin to be able to hold on to.

And so, that works for credit card numbers as well.

And, obviously, there’s a lot of work of, you know, real-world software engineering doesn’t involve a lot of just going and browsing the web and finding different sites and clicking around on that. You know, even if you’re just testing your own front end or putting in documentation or something.

And so, good browser use, I think, is an important piece of that as well.

And I think it’s just kind of something that’s…

So, shouldn’t you build a consumer app? Like, doesn’t everyone want this magic wand app where you can just have your virtual assistants? Like, there’s a million virtual assistant startups. It seems like none of them have really gotten to any scale.

Yeah, it’s a fun question. I think from our perspective, like, I think, on the one hand, like, it’s fun seeing Devin go and do these DoorDash things.

At the same time, we also just know that, you know, our team is so small. We just don’t have the kind of, you know, focus to be able to do that in addition to doing software engineering.

You’re pulling up Devin and you’re seeing this. And then, on the other side, there’s, like, the IDE there. But, like, you know, Devin’s just going on DoorDash or something. You know, it’s a very, like, fish-out-of-water experience. And I think it’s fine for us to keep it.

But, you know, the way a lot of product development works is it follows from people noticing how a product is being used. Like, emergent patterns.

Exactly, and these emergent patterns. Like, Twitter especially, you know, people started linking to photos off-site. So, they built in, you know, native image support. Or the hashtag was invented by the community.

So, similarly, you know, you’re checking the Devin logs and you notice people are buying a lot of DoorDash. Like, maybe that’s a suggestion on the product side of things.

Yeah, yeah, it’s funny. Well, to be fair, it’s mostly just ourselves. I know, it’s still… It’s still emerging product usage. I agree, I agree. It’s a fun one, yeah.

That’s funny. I love that.

Yeah, we had a fun one where Devin was, Walden had a flight that got canceled and was trying to, you know, use Devin to go and, like, negotiate with the airline to get the refund for it.

And Devin went to the site and, naturally, the site forwards you to their agent to have the conversation. And then Devin was kind of, like, explaining these things and, like, wasn’t making progress. And then, at some point, Devin said,

“this is not working. I need to speak to a human right now.”

And did it? It did, it did, yeah.

So, it got to the human, and then, you know, the human got on the line, and then it sent some, like, the link to, like, the airline contract of, like,

section 22 says this, this, and that

and then Walden actually did get it.

But, sorry, Devin was speaking? Devin was chatting with the human. I see, yeah, yeah, yeah. He basically made it past the robot agent equivalent and then got to a human.

And did it successfully get the flight refund? It got the refund, yeah.

Okay. Well, again, the people want this.

Going back to the economic infrastructure for AI, the other thing that, you know, we think about is that it feels like trust is going to become a much bigger deal online with AI.

Yeah. I don’t quite know what form that takes, because, obviously, you know, it’s been a big, bad internet for a long time.

Yeah. There’s a lot of scams out there. There’s a lot of hacking.

But, I don’t know, the hacking attempts become more sophisticated, the deep fakes and everything.

And so, having a good sense of who is a trusted individual, who is a trusted business. Yeah. Just seems to become much more important in this world.

Yeah, yeah. Like, related to that, too, I also think one of these things, you know - I feel like the Cloudflare thing with agents and everything is a hot topic.

Explain the Cloudflare issue.

Oh, yeah, yeah, yeah, of course. So, you know, there’s a lot more agents browsing the web these days. And there’s been certain things, you know, protections set up to not give agents access to websites. And I think the paradigm, up until now, the paradigm for a lot of this stuff, I mean, there’s robots.txt and all these things, has often been basically almost like, there are tons of things which you are not allowed to do as a non-human.

And I think what we will probably need to see a lot more of over time is basically, like, delegating access, if that makes sense. It’s like making it more clear that an agent can do something on your behalf. And, you know, in some sense, you’re attaching some of your reputation to it, too.

There’s a monetary question of how this works out, but there’s also just actions that the agent takes are attributable to you and on your behalf.

“That’s a great point.”

Right now we have, like, bots versus no bots - you know, clankers versus no clankers allowed - whereas instead it needs to be: bots allowed if you sign for them.

Yeah, as I was going to say, the simple version is just, like, if you're signed into your Google account in Chrome and you have a verified address, then you can have an agent run in that browser window and do things. But all of it, you know, you're responsible for the work that it does.

Yes. Yeah, it’s sort of like API key permissions, but at a mass consumer scale across everything and all websites and everything.

“I like that.”
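
(One very rough shape that kind of delegated, signed-for access could take: a user-signed, scoped token the agent presents with each request, so its actions stay attributable to the person behind it. The claim names and scopes below are invented for illustration, and PyJWT is assumed as the signing library.)

```python
import time
import jwt  # PyJWT, assumed as the signing library

SIGNING_KEY = "user-held-secret"          # placeholder; real systems would use asymmetric keys

def mint_delegation_token(user_id: str, agent_id: str, scopes: list[str]) -> str:
    """Sign a short-lived token saying 'this agent acts on my behalf, within these scopes'."""
    claims = {
        "sub": user_id,                   # the human ultimately responsible
        "act": agent_id,                  # the agent doing the acting (illustrative claim name)
        "scope": scopes,                  # e.g. ["browse", "purchases:under_100_usd"]
        "exp": int(time.time()) + 3600,   # expires in an hour
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

token = mint_delegation_token("user_123", "agent_devin", ["browse", "purchases:under_100_usd"])
# The agent would send this as, say, an Authorization header; the site can verify it,
# rate-limit it, and attribute every action back to user_123.
```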


And how does the existence of Devin affect your own hiring of engineers?

Yeah, I mean, from our perspective, we’ve always loved keeping the core engineering team very tight and very elite.

What’s tight, like 30 people?

Yeah, so up until a few weeks ago, our whole team is about 35 people of whom…

Across all roles?

Across all roles, yeah. Of whom, I mean, almost everyone actually is an engineer by background, funnily enough. But what we call core engineering was about 19.

Yep.

With Windsurf, obviously, the team count has grown a lot, but actually, with core engineering itself, it hasn’t actually gotten all that much bigger. It’s gone from 19 to something in the range of 30 to 35.

Okay, so you keep the engineering team smaller.

And are the engineers themselves different versus a company being built 20 years ago?

Yeah, so it’s a pretty different profile of the work that we have to do in the sense that there is a lot of execution and implementation that has to be done. But Devin does that so that humans don’t need to.

And so what we typically look for - our whole interview process, for example - is basically just having people:

- build their own Devin in eight hours
- see how far they get with it

Sorry, build their own version of Devin or build stuff with Devin?

Build their own version, their own agent, their own full end-to-end agent in like eight hours or six hours or whatever.
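
(For a sense of scale, the skeleton of an end-to-end agent like that can be quite small. Here is a minimal sketch; call_llm is a placeholder for whichever model API a candidate plugs in, and running model-proposed shell commands like this is only sensible in a sandbox.)

```python
import subprocess

def call_llm(messages: list[dict]) -> str:
    """Placeholder: swap in whichever model API you're using."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 20) -> None:
    """Tiny agent loop: the model proposes a shell command, we run it,
    and feed the output back until it declares the goal done."""
    messages = [{"role": "system",
                 "content": "You are a coding agent. Reply with exactly one shell "
                            "command per turn, or DONE when the goal is finished."},
                {"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = call_llm(messages).strip()
        if action == "DONE":
            return
        result = subprocess.run(action, shell=True, capture_output=True, text=True)
        messages.append({"role": "assistant", "content": action})
        messages.append({"role": "user",
                         "content": f"stdout:\n{result.stdout}\nstderr:\n{result.stderr}"})
```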

Yeah, I think what we find is, and I think we’ll see this trend generally in software engineering, which is:

  • Memorizing all the facts
  • Knowing all the little details
  • Being really good at the syntax of some language

Are going to be less important.

And what’s going to be more important are a lot of the high-level decision-makings or understanding the technical concepts really well.

Yes.

You know, having a good sense of products and just having a good intuitive sense of what to build and what to do. And being like a self-owner that way, too.

Yes.

And so, yeah, a lot of our team actually are specifically former founders, which is kind of a fun one. Like, of our initial kind of 35, I think 21 of us have founded a company before. And so it’s been a very high density of that.

Wow.


When will you hire your last engineer?

It’s a good question.

I’ll make a distinction here, which is I think that there will come a point, and my guess on this point is probably in the neighborhood of, let’s say, two, three, four years from now where we stop using code as the main interface.

And basically being a software engineer really is just instructing your computer and telling your computer what to do and saying, oh, like, you know, you’re looking at your own product and you’re saying, hey.

“You think two to four years from now, software engineers are not really looking at code in their day-to-day just like they don’t look at assembly today.”

Exactly.

Yeah.

Yeah.

And so that’s going and looking at your own product and deciding:

  • Oh, yeah, we need to make a new page here.
  • By the way, all this data, let’s save this this way.
  • Let’s index this according to X, Y, and Z, you know, because here are the things that lookups that we need to do or whatever.

Making a lot of these architectural decisions but not looking at the code themselves, at least in the majority of circumstances.
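
(A small concrete version of that "index according to X, Y, and Z" decision, using Python's built-in sqlite3 purely for illustration; the table and query are made up.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, "
             "status TEXT, created_at TEXT)")

# The architectural decision: our dominant lookup is "recent orders for a user,
# filtered by status", so index (user_id, status, created_at) in that order.
conn.execute("CREATE INDEX idx_orders_user_status_created "
             "ON orders (user_id, status, created_at)")

# The query that decision is serving:
conn.execute("SELECT id FROM orders WHERE user_id = ? AND status = ? "
             "ORDER BY created_at DESC LIMIT 20", (42, "open"))
```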

I think at that point, obviously, the jobs change a lot. Funnily enough, I think, if anything, we will have way more software engineers, not fewer. And just because the interface is not code anymore doesn't mean that the core skills of software engineering go away.

Yes.

People often ask us, like, “my son or daughter is in high school or has just started college. Should they even be studying computer science?” And my answer is always absolutely yes.

If anything, you know, funnily enough, I feel like university computer science always had the opposite sin of teaching you too much of the concepts - what programming was about and what computer science was about - and not enough of, like, “all right, here's the syntax that you need to use, and here's what it means to get a React app set up” and whatever.

I think we’ll get to a point where those theoretical concepts and that high-level understanding of, you know, maybe in one line, like, “the model of a computer and how to make decisions, you know, problem solve with the computer as a tool.” That is what programming will be.

And if anything, there’s going to be a lot more software engineers.

I think one of the nice things is everyone talks about Jevons paradox and how it relates to AI. You know, I think there’s nowhere that it’s more true than software because, you know, we really never seem to run out of demand for more code and more software. You can just run a lot of software.

You know, the half-joking way to say this is despite how many software engineers there are in the world, you know, we all know this, there are so many products out there that are still so bad:

  • You’re logging into your bank
  • You’re dealing with checkout and retail
  • Healthcare platforms where clicking around to find your thing is a struggle

All these things are still, like, super outdated, super buggy.

You know, We haven’t finished writing all the software yet.

Yeah.

Isn’t it shocking that the UIs haven’t changed at all? So we still…

We talk to Siri, which has the same, I mean, button placement and the same branding on the iPhone as pre-transformer models.

We…

You prompt Devin via Slack.

Yeah.

You know, we use our AI tools in, you know, in a web browser.

Yeah.

And we enter them into a text box like, you know, we’re playing Zork in the 1980s or whenever that came out.

And so…

70s, maybe?

I don’t know how old Zork is.

Do you know what Zork is?

I don’t.

I actually don’t.

And it was like the original text-based adventure game.

Oh, I see.

I see.

Yeah, yeah, yeah.

But, you know, when are we going to see AI UIs? Because it’s very retro right now.

Yeah.

My high-level thought on this is, you know, you always see this with new waves of technology. Like, I think mobile phone is a great example where, you know, the initial apps kind of just look like basically websites but in a smaller box, you know? And over time, you know, you can still get a lot of value out of those. Your core value prop of the phone was already there.

But of course, over time, we built a lot of cool touch interfaces or we, you know, developed a lot of the science of what makes a good app UX.

Yeah, but with no multi-touch, no rubber banding.

Yeah, yeah.

I think we are, you know, I think we are entering that phase now where, you know, for a few years, it was just kind of like replacing existing flows and just using AI to do that better.

And now we’re starting to think about a bit more of these kind of like various generative flows.

I mean, maybe the simplest example that comes to mind is a lot more products now have the little chat box at the bottom where, you know, rather than having to click through all the menus yourself, you can just kind of ask the chat box and find that, which is one very, very simple version of that.

But, you know, I think there’s way more innovation to do.

Yeah.

Yeah.

One framing I was thinking about with this is, you know, it became clear shortly after the invention of the transistor and the microchip that everything would have a microchip in it, right? You know, everything could benefit from having a small computer in it and, you know, your car would have a small computer in it and your dishwasher would have a small computer in it and, you know, everything.

And there’s some equivalent where everything will pass through a transformer model before it’s consumed.

Yeah.

One of my thoughts on this, too, is I think AI is, I'd say, uniquely different from some of these previous waves in an important way, which is, you know, personal computer or internet or mobile phone - all of these had one of two things, or often both.

One was a big hardware component of, like, yeah, you had to just go ship modems to everybody and you had to get people on the internet and you had to give everyone a phone first, right? And then two was, like, a very core critical mass effect or, like, you know, empty room effect or whatever, you know, network effect, whatever you want to call it, where the internet was great and all, obviously, but, like, it doesn’t really get that useful until all your friends are on the internet, too.

And, like, the restaurant that you’re looking up is on the internet, too, and, you know, various other things as well, right?

AI actually has neither of those problems.

And as a result, what you kind of see is, like, as soon as the tech works for somebody, you know, it’s pure software, it can work single player and give you a ton of value directly.

It kind of works for everyone.

I think there’s been a few things that we’ve seen as a result of that. One is, you know, there’s a new person posting that they’re the fastest company from 1 million to 100 million every, you know, every couple weeks because, you know, AI is just so much faster.

As soon as it works, it works for everyone.

Yes.

But I think the other part of that is, I think, to your point, I think there’s actually a bit of lag with product, I would say, where, you know, I think you could freeze all the capabilities today and have no new models and no new research come out, and there would still be, like, a whole decade of product progress to make.

Whereas, I think before, you know, the product progress kind of tracked alongside the distribution itself, now it’s been much more sudden where it’s, like, you know, two years total where everyone’s been thinking about it.

And honestly, if we factor in a lot of the more recent capabilities, agentic capabilities, things like that, it’s, like, arguably less than one year for a lot of these.

And we are all kind of grappling with that all of a sudden and trying to figure out what the right new product experiences are, right?

And so it’s just taking a bit more time.


What are your AGI timelines?

Yeah, I think we have AGI.

Okay.

Now?

Well, so I was going to say, you know, there’s this joke that people talk about, which is, you know, back in 2017, if you ask, you know, do we have AGI? The answer is no.

And today, obviously, if you ask if we have AGI, you know, the first thing everyone always says,

“well, you have to go define AGI.”

Yeah, yeah, this hemming and hawing.

Yeah, yeah, yeah.

And I think it’s kind of true in some sense of…

Devin will order your DoorDash for you.

Sounds like AGI to me.

Yeah, yeah.

And so, obviously, a bit of a facetious answer, but my honest opinion is, I think there is some, you know, rapid singularity, superintelligence thing that people kind of talk about.

I would guess, it’s very hard to say, you know, nothing’s impossible, but I would guess that that’s not something that happens in the immediate future, especially because, you know, as we said, a lot of the work to do is going and collecting all the real world.

Like, what are the problems that you want to solve? What are the…

How do you define success for all these things?

With that said, I think, yeah, I mean, we’re going to just keep…

Like, I think it’s not so binary, basically.

I think we’re just going to keep rolling out more and more improvements, and these things are going to be more and more capable.

But I don’t know that we have some sudden shift, at least for the next few years.

Yeah.

No, that makes a lot of sense.


We’ve got to talk about Windsurf.

Oh, yeah.

It played out so quickly.

So give us the play-by-play.

So we heard the news that it was going to be Google buying Windsurf, or I guess not technically buying, this whole deal that was happening.

That Friday, at the same time everyone else did.

Okay.

So this is not something that played out in advance.

The Friday when the news came out. It was basically just as sudden for us.

We heard some rumors maybe the night before.

Devin was scrolling Twitter for you.

Yeah, exactly.

Yeah, Devin came back and said,

“hey, you guys should check this out. We probably should look at this.”

And so we heard the news then, and actually that afternoon we were kind of talking about it, thinking about, like, is there something that we should do off of this?

Yes.

You know, it’s not uncommon that there is some crazy news that happens in AI, you know.

But this is especially, I think, you know, in our space.

We talked about this idea.

We reached out to them cold that evening and got to meet the new Windsurf leadership, you know, Jeff and Graham and David, that evening.

And as we were kind of both talking about it, I think we kind of came to this conclusion together, which is,

if there is something to do here at all, then it has to be ready to go by Monday morning.

You know, because everyone, all the customers were reeling.

The whole team was like,

  • Do I have a job?
  • Do I not have a job?

It was a melting ice cube.

Exactly. And so it’s like, if it even waited until Thursday, instead of Monday, people were going to cancel their contracts, people were going to be interviewing at other places.

And so we said, okay, this is what this means: if we want to explore this, we have to just spend the entire weekend on this nonstop. A lot of fun moments there. I mean, we got to kind of the handshake agreement that Saturday, and then obviously there’s all the legal and everything to figure out.

You know, we all pulled an all-nighter that Sunday night with a very optimistic plan that we were going to get signed.

Did you also pull an all-nighter that Saturday night, or did you get some sleep? We got a couple hours of sleep on Saturday. I mean, a huge shout-out to Jeff, Graham, and Kevin especially, because they had had a pretty rough few days before as well, actually. They were already pretty sleep-deprived coming into it.

We were going through it. We had this optimistic view that we were going to get it signed on Sunday night, and so then we could go and focus on filming and figuring out how we address the team and everything. Obviously, that did not happen, and we got it signed on Monday at around 9 a.m. because us and the lawyers were up all night basically just sorting out all these things.

We luckily filmed the Windsurf video in the Windsurf studio. We said, okay, we should just film it anyway. You realized you weren’t going to announce an acquisition without a video. “Yeah, yeah, yeah. It’s always nice to have one.”

And then as soon as we got things signed, we were up in front of the whole team and giving them the update and sharing that publicly pretty soon after. It was a lot of fun. I live for these moments, honestly.

So, you read the news on Friday. And you signed the deal and announced it on Monday. But that means that you decided more or less instantaneously that you wanted to buy the remaining part of Windsurf. Yeah. So, I think we talked it through on Friday evening.

I think from our perspective, there are a few things that were nice about this:

  • First of all, obviously, we know the space very well. So, in that sense, we didn’t really have to diligence the product or the customers because we knew that.
  • As we were kind of understanding the pieces of what happened exactly with the team, how many of the folks were still there and who had left, we found that there was a very nice synergy, in the sense that there was a core research and product engineering team that went to Google.
  • All of the other functions were entirely intact, which includes:
    • Enterprise engineering
    • Infrastructure
    • Deployed engineering
    • Go-to-market
    • Marketing
    • Finance
    • Operations
    • All these various things.

Funnily enough, with Cognition, for better or for worse, I think we had done a good job of building out this core research and product engineering team. But we were a little bit behind on growing all the other functions.

We found a very natural fit there as well. And as we were just talking, it’s like they had JP Morgan and we had Goldman Sachs, and there were all of these very natural ways to fit in. So, from our perspective, yeah, we knew there was something really interesting there and we wanted to do it.

A lot of the rest was just figuring out the details.

So, you got to acquire a bunch of people who have lots of familiarity with the space. They have a product offering that is in an adjacent but not identical place to Devin. And so, you get to accelerate the go-to-market efforts and broaden out the product portfolio.

That’s how you think about it? “Yeah, yeah, yeah, absolutely.”

And then, of course, the products themselves. Funny enough, we were thinking about what the interaction of an async product like Devin looks like with a more synchronous product. We had some ideas for certain synchronous things that we wanted to build.

We weren’t going to build an entire IDE, because it felt like there were a couple of players in town already. But as it turns out, having the IDE, there actually were a lot of natural synergies with a lot of the synchronous stuff that we had thought about.

And, you know, a very simple thing: we shipped Wave 11 a few days after we closed that deal. There are a lot of these basic things like:

- Being able to access your deep wiki in your IDE
- Using all of the Devin codebase representation in search
- Spinning up the agent there

All of these things, I think, we just felt were a lot of natural complements. And so from there, it kind of felt like, you know, if there was a right person to work with and do this with, you know, it would be them.

So in six months, do I buy Devin and I get Windsurf bundled? Do I separately buy Windsurf and I can buy Devin? Yeah. How will it work?

Yeah, a lot to figure out still. We certainly want to keep each of the product philosophies the same. Like I mentioned, I think there will still continue to be both sync and async products. But I think making the integration between them much stronger and much easier is going to be really nice.

And so certainly a lot that will be much easier from the customer perspective. But if for some reason they really wanted to use one of the two, I’d imagine that they would still be able to do that.

It’s obviously been an interesting aspect of the AI space that there have been a number of these 49% licensing-type deals to avoid the risk of an acquisition being blocked. Companies buy a license to the IP and then hire the talent that they want to be sure comes over.

Do you think that stays as a thing in the AI space?

“Like it’s a funny moment in time thing, right?”

Yeah. I certainly don’t feel like I’m the expert on this one. And it’s the thing that I find funny: there’s one new bell or whistle each time. You know, with all the legal and contractual stuff, you look at:

  • Adept
  • Inflection
  • Character.AI
  • Scale AI

You see there’s a new one each time, oh, and now we do, you know, this licensing deal. And so I think the metagame around that is certainly developing.

There is some amount of polarity at the top level of AI as a space, in the sense that, you know, these things do scale with resources. And so I think basically the games get bigger, I guess, is one way to put it.

And I think for most companies, the question is basically whether they think they will get there themselves or whether they want to work with another company.

You’re saying you would expect more M&A, whether it be like classical M&A or this new model of M&A, because there are scale benefits in this game.

Yeah, maybe one of my hot takes: of course, there will be many medium-sized outcomes in AI. But I think this space, a little bit more so than previous ones, is polarized towards:

  • You become a hyperscaler
  • Or bust

And so, you know, for some companies, that is the trajectory and the moonshot that they want to go for, and that’s one thing. But for others, you know, working with someone else is something that people do.

And so now, as you’re bringing the Windsurf team on board, Cognition has this very intense culture. You know, you guys work on the weekends, you all work out of this house. And so you’re doing this buyout offer.

Yeah. Yeah. Yeah.

I think for us, you know, most folks have been really excited to come in and do it, and only a small fraction have taken the buyout. But I think from our perspective, we just want to make sure it’s an opt-in situation for everyone. Because, you know, let’s be honest, it isn’t for everyone. And I think it is a very kind of intentional thing there.

Was it the intensity you wanted people to opt into? What did you want people to opt into?

Opt into the intensity and the new culture. And, yeah, we’re going to be going after some very ambitious goals. You know, I think, by revenue standards or by, you know, whatever you want to call it, folks might call us a mid or later stage company.

But from our perspective, you know, we are still very much early stage in terms of the profile of what happens next and how much more there is to build and how much more there is to do.

And obviously, at an early stage, yeah, you know, we do all have to be signing up for the uncertainty and the willingness to just go and take on a different challenge every week and to put in a lot of hours and to have that culture. That was a big piece of it.

Obviously, regardless of what happens, we wanted to make sure people were well taken care of.

Every day, Cognition is the largest company you’ve ever run. It was true of me with Stripe as well, to be clear: you’re speed-running learning how to run a company. I’m curious, how do you learn this stuff? How do you learn more broadly?

Yeah, yeah. No, I mean, I’ve got a lot to learn still, for sure. If anything, like I mentioned, we have under-invested in a lot of these functions, maybe because they’re not as top of mind for us as they should be, and now that’s something that we’re pretty actively working to do more of. I don’t believe in a professional coach or career coach in the literal sense, but I think, obviously, you learn a lot from your peers and your friends who are doing similar things.

So having a lot of close friends who are running companies. People you went to math camp with, apparently. Yeah, learning from all these different folks, and I do think, as an entrepreneur, it helps a lot to have a close group of friends.

And you can just be very honest and say,

“This thing is totally messed up, and I have no idea what we’re going to do, and please tell me if you have done anything like this before,”

or things like that, which has been really helpful.

You know, I think Eric and Karim from Ramp, for example, or all these various folks from math competitions, or my previous co-founder, Vlad, from Lunchclub. You know, a lot of different folks that I talk to for advice, and I think it really does help a lot.

Last question. I’m curious, what is your information diet in terms of how you learn about the world?

Yeah, a lot of Twitter. I feel like Twitter is really, for tech news, the place to be. We share a lot of things there.

Do you not find there’s too much video in the algorithm these days?

I think there is. Like, it’s kind of become TikTok. There is a lot of video, but then I just don’t watch the videos, for the most part, or you just see the first few seconds.

Which is an interesting thing to think about, as people who are making videos, too, of

“Make sure you can convey your point with no sound and with the first three seconds.”

Like, as much as you can do that, I think there’s still another, like, 5x of users you reach who are in that camp.

The Twitter algorithm is the extent of how AI affects my information diet.

But that’s you on the receiving end of AI, as opposed to you using AI as a tool.

It’s a good point. It’s a good point.

I mean, I should have Devin, you know, just through GitHub Actions, do the morning report, like Zazu.

You have a cron job, basically, where Devin just goes and does the morning report and gets that. There’s a lot of optimization to do still.
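
A minimal sketch of what that could look like, assuming a cron-scheduled GitHub Actions workflow that runs a small Python script against the GitHub REST API each morning; the repository name, token variable, and report format below are illustrative assumptions, not Cognition’s actual setup:

```python
# Hypothetical sketch: a script a cron-scheduled GitHub Actions workflow could run
# each morning to assemble a short "morning report" of repository activity.
# The repo name, token variable, and output format are illustrative assumptions.
import os
from datetime import datetime, timedelta, timezone

import requests

GITHUB_API = "https://api.github.com"
REPO = "example-org/example-repo"  # hypothetical repository
TOKEN = os.environ.get("GITHUB_TOKEN", "")


def fetch(path: str, **params) -> list[dict]:
    """Call the GitHub REST API and return the parsed JSON payload."""
    headers = {"Accept": "application/vnd.github+json"}
    if TOKEN:
        headers["Authorization"] = f"Bearer {TOKEN}"
    resp = requests.get(f"{GITHUB_API}{path}", headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()


def morning_report() -> str:
    since = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%SZ")
    open_prs = fetch(f"/repos/{REPO}/pulls", state="open", per_page=50)
    recent = fetch(f"/repos/{REPO}/issues", state="open", since=since, per_page=50)
    # The issues endpoint also returns pull requests; keep only true issues.
    issues_only = [i for i in recent if "pull_request" not in i]

    lines = [f"Morning report for {REPO}", f"Open pull requests: {len(open_prs)}"]
    for pr in open_prs[:5]:
        lines.append(f"  PR #{pr['number']}: {pr['title']}")
    lines.append(f"Issues touched in the last 24h: {len(issues_only)}")
    for issue in issues_only[:5]:
        lines.append(f"  Issue #{issue['number']}: {issue['title']}")
    return "\n".join(lines)


if __name__ == "__main__":
    print(morning_report())
```

In practice the workflow would just have an `on: schedule` cron trigger and post the output wherever the team reads it; the “Zazu” morning report is mostly a small amount of plumbing.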

The president’s daily briefing.

Yeah.

Well, Scott, thank you. This was awesome.

Thank you so much for having me.