Q: As AI advances, do you think it's more important for humans to develop the tools or learn how to use the tools better? Like how can we position ourselves to remain essential in a world where intelligence is becoming democratized?
I feel like AGI has been overhyped, so for a long time there will be many things that humans can do that AI cannot. And I think that in the future, the most powerful people will be those who can make computers do exactly what they want them to do. So stay on top of the tools—some of us will build tools sometimes, but there are a lot of other tools others will build that we can just use—and people who know how to use AI to get computers to do what they want will be much more powerful. I don't worry about people running out of things to do; rather, people who can use AI will be much more powerful than people who don't.
Q: Thank you so much. I have huge respect for you and I think you are a true inspiration for a lot of us. My question is about the future of compute. So as we move towards more powerful AI, where do you think that compute is heading? I mean we see people saying let's ship GPUs to space. Some people talking about nuclear power data centers. What do you think about it?
There's something I was debating whether to say in response to the last question about AGI, so maybe I'll answer this question and the last one a little bit together. It turns out there's a framework you can use for deciding what's hype and what's not. Over the last two years, a handful of companies have hyped up certain things for promotional, PR, fundraising, or influence purposes. And because AI was so new, they got away with saying almost anything without anyone fact-checking them, because the technology was not well understood.
So one of my mental filters is that certain hype narratives that make these businesses look more powerful have been amplified. For example, the idea that AI is so powerful it might accidentally lead to human extinction—that's just ridiculous. But it is a hype narrative that made certain businesses look more powerful, and it got ramped up and actually helped certain businesses' fundraising goals.
"AI is so powerful that soon no one will even have a job anymore." Just not true, right? But again, that narrative made these businesses look more powerful, so it got hyped up. "We are so powerful that by training a new model, we will casually wipe out thousands of startups." Also not true. Yes, Jasper ran into trouble, and a small number of companies got wiped out. But it's not that easy to casually wipe out thousands of startups.
"AI needs so much electricity that only nuclear power is good enough—that wind and solar stuff won't do." That's just not true. So a lot of this—GPUs in space, you know, I don't know; go for it, but I think we still have a lot of room to run for terrestrial GPUs. Some of these hype narratives have been amplified in ways that distort what will actually be done.
Q: There's a lot of hype in AI, and nobody's really certain how we're going to build the future with it. What are some of the most dangerous biases or overhyped narratives that you've seen people get poisoned by and end up running with? What should we try to avoid, or be more aware of, so we can have a more realistic view as we build this future?
So I think the dangerous-AI narrative has been overhyped. AI is a fantastic tool, but like any other powerful tool—electricity, say—there are lots of ways to use it for beneficial purposes and also some ways to use it harmfully. I find myself not using the term "AI safety" that much—not because I think we should build dangerous things, but because safety is not a function of the technology; it's a function of how we apply it.
Take an electric motor: its maker can't guarantee that no one will ever use it for unsafe downstream tasks. An electric motor can be used to build a drill, a machine, or an electric vehicle; it can also be used to build a smart bomb. But the electric motor manufacturer can't control how it's used downstream.
So safety is not a function of the electric motor; it's a function of how you apply it. The same is true for AI: AI is neither safe nor unsafe—it is how you apply it that makes it safe or unsafe. So instead of thinking about AI safety, I often think about responsible AI, because it is how we use it—responsibly, hopefully, or irresponsibly—that determines whether what we build with AI technology ends up being harmful or beneficial.
And sometimes really weird corner cases get hyped up in the news. Just a day or two ago there was a Wall Street Journal article about losing control of AI or something, and I feel that article took corner-case experiments run in a lab and sensationalized them in a way that was really disproportionate to the experiments actually being run.
And unfortunately, technology is hard enough to understand that many people don't know better, and so these hype narratives do keep on getting amplified. And I feel like this has been used as a weapon against open source software as well, right? Which is really unfortunate.
Q: Thank you for your work; I think your impact is remarkable. My question is: as aspiring founders, how should we think about business in a world where anything can be disrupted in a day? Whatever great model, product, or feature you have can be replicated by competitors using AI-written code in basically hours.
It turns out that when you start a business, there are a lot of things to worry about: the go-to-market channel, competitors, technology, moat—all of that is important. But if I were to have a singular focus on one thing, it is this: are you building a product that users really want? Until you solve that, it's very difficult to build a valuable business. After you solve that, the other questions come into play.
Do you have a channel to get to customers? What is the long-term pricing? What is your moat? I find that moats tend to be overhyped; more often, businesses start off with a product and eventually evolve a moat. For consumer products, brand is somewhat more defensible, and if you have a lot of momentum it becomes harder to catch you. For enterprise products, a moat is maybe more of a consideration when the channels into enterprises are hard to get into.
So—sorry—when AI Fund looks at businesses, we actually do a fairly complex analysis of these factors and write a two- to six-page narrative memo before we decide whether or not to proceed. All of these things are important, but at this moment in time, the number of opportunities—meaning the amount of stuff that is possible but that no one in the world has built yet—seems much greater than the number of people with the skill to build them.
So definitely at the application layer, it feels like there's a lot of white space for new things you can build that no one else seems to be working on. I would say: focus on building a product that people want, that people love, and then figure out the rest along the way—although it is important that you do figure it out along the way.
Q: Hi professor, thanks for your wonderful speech. I'm an AI researcher from Stanford, and I found the metaphor in your speech very interesting. You said current AI tools are like bricks that can be accumulated and built upon. However, so far it's difficult to see cumulative functional growth from integrating AI tools, because they are often stacked according to intent distribution and come with dynamic problems of token and time overhead, which is different from static engineering. So what do you think the prospects are for a possible agent-accumulation effect in the future?
Hey, just some quick remarks on that. You mentioned agents and token cost. My most common advice to developers is, to a first approximation, just don't worry about what tokens cost. Only a small number of startups are lucky enough to have users use so much of the product that the cost of tokens becomes a problem. It can become one—I've definitely been on teams where users liked our product, and when we looked at our GenAI bills they were climbing in a way that really became a problem.
But it's actually really difficult to get to the point where your token costs are a problem. And on the teams where we were lucky enough that users made our token costs a problem, we often had engineering solutions to bend the curve back down—through prompting, fine-tuning, RAG, or whatever.
And then I'm seeing a lot of agentic workflows that integrate many different steps. For example, to build a customer service chatbot, we'll often use prompting, maybe optimize some of the results with DSPy, build evals, build guardrails; maybe the chatbot needs RAG along the way to get information to feed back to the user.
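The kind of multi-step workflow described above can be sketched roughly as follows. Everything here is hypothetical and stubbed—a toy keyword "retrieval" step, a toy banned-phrase "guardrail," and a stand-in for the model call—just to show how the steps compose, not how a production system would do it.

```python
"""Rough sketch of a customer-service pipeline: retrieval (RAG),
prompting, and a guardrail check. All pieces are toy stand-ins."""

# Hypothetical knowledge base for the toy retrieval step.
KB = {
    "refund": "Refunds are processed within 5 business days.",
    "shipping": "Standard shipping takes 3-7 days.",
}

def retrieve(question: str) -> str:
    """Toy RAG step: pull any KB entry whose key appears in the question."""
    hits = [text for key, text in KB.items() if key in question.lower()]
    return " ".join(hits) or "No relevant documents found."

def build_prompt(question: str, context: str) -> str:
    """Prompting step: combine retrieved context with the user question."""
    return f"Context: {context}\nQuestion: {question}\nAnswer helpfully."

def guardrail_ok(answer_text: str) -> bool:
    """Toy guardrail: block answers containing a banned phrase."""
    return "guaranteed" not in answer_text.lower()

def answer(question: str, llm) -> str:
    """Run the full pipeline; `llm` is any callable standing in for a model."""
    draft = llm(build_prompt(question, retrieve(question)))
    return draft if guardrail_ok(draft) else "Let me connect you with a human agent."
```

Each stage (retrieval, prompting, guardrails) is a separate function, which is also what makes it easy to add evals per stage or swap one stage's implementation later.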
So I actually do see these things grow. But one tip for many of you: I will often architect my software to make switching between different building-block providers relatively easy. For example, I have a lot of products built on top of OpenAI, but sometimes you point at a specific product and ask me which OpenAI model we're using, and I honestly don't know—because we built up evals, and when a new model is released, we quickly run the evals to see if it's better than the old one. If it does better on the evals, we just switch to it.
And so the model we use week by week—sometimes our engineers will change it without even bothering to tell me, because the evals show the new model works better. It turns out the switching cost for foundation models is relatively low, and we often architect our software for that—AI Suite is an open-source package my friends and I worked on to make switching easier.
Switching cost for the orchestration platforms is a little bit harder. But I find that preserving that flexibility in your choice of building blocks often lets you go faster even as you're building more and more things on top of each other. So hope that helps.
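The eval-based switching described in this answer can be sketched like this. All the names (`EvalCase`, `run_evals`, `pick_best_model`) are hypothetical, and the "models" are stubbed callables so the sketch is self-contained; in a real system each callable would wrap a provider call behind a thin abstraction layer so swapping providers is a one-line change.

```python
"""Sketch: score candidate models on a shared eval set, keep the winner."""
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EvalCase:
    prompt: str
    expected_keyword: str  # crude pass criterion: answer must contain this

# A "model" is just a callable from prompt -> answer, hiding the provider.
Model = Callable[[str], str]

def run_evals(model: Model, cases: List[EvalCase]) -> float:
    """Return the fraction of eval cases the model passes."""
    passed = sum(
        1 for c in cases if c.expected_keyword.lower() in model(c.prompt).lower()
    )
    return passed / len(cases)

def pick_best_model(models: Dict[str, Model], cases: List[EvalCase]) -> str:
    """Score every candidate on the same evals; return the best model's name."""
    scores = {name: run_evals(m, cases) for name, m in models.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    cases = [
        EvalCase("Capital of France?", "paris"),
        EvalCase("2 + 2 = ?", "4"),
    ]
    # Stub callables standing in for two provider/model versions.
    models = {
        "old-model": lambda p: "Paris" if "France" in p else "5",
        "new-model": lambda p: "Paris" if "France" in p else "4",
    }
    print(pick_best_model(models, cases))  # the stub "new-model" passes both evals
```

Because the eval set stays fixed while models vary, a new release can be evaluated and swapped in without touching any application code—which is the low switching cost being described.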
Q: Thank you so much. In AI for education there are mostly two paradigms. One is that AI makes teachers more productive—automating grading, automating homework. The other school of thought is that there will be a personal tutor for every student, so every student can get feedback and personalized questions from an AI. How do you see these two paradigms converging, and what will education look like in the next five years?
I think everyone feels a change is coming in edtech, but I don't think the disruption is here yet; a lot of people are experimenting with different things. Coursera has Coursera Coach, which actually works really well. DeepLearning.AI, which is more focused on teaching AI, also has some built-in chatbots, and a lot of teams are experimenting with autograding. There's even an avatar of me on the DeepLearning.AI website you can talk to if you want.
For some things, like language learning with Duolingo, some of the ways AI will transform them have become clearer. For the broader educational landscape, the exact ways AI will transform it—I see a lot of experimentation. What Khan Academy, which I've been doing some work with, is doing is very promising for K-12 education. But frankly, what I'm seeing is tons of experimentation, and the final end state is still not clear.
I do think education will be hyper-personalized. But is the workflow an avatar? A text chatbot? What is the workflow? I feel like the hype from a couple of years ago—that with AGI coming soon, it would all be so easy—was just that, hype. The reality is that work is complex: teachers, students, people run really complex workflows, and for the next decade we'll be looking at the work that needs to be done and figuring out how to map it to agentic workflows.
And education is one of the sectors where this mapping is still underway; it's not yet mature enough for the end state to be clear. So I think we should all just keep working on it.
Q: All right, thank you so much, Andrew. I think AI has a lot of potential for good, but there's also a lot of potential for bad consequences, such as exacerbating economic inequality. A lot of our startups here, while doing a lot of great things, will also—just by virtue of their products—contribute to some of those negative consequences. So how do you think we, as AI builders, should balance product building with the potential societal downsides of AI products? Essentially, how can we both move fast and be responsible, as you mentioned in your talk?
Look in your heart, and if you don't think what you're fundamentally building will make people at large better off, don't do it, right? I know it sounds simple, but it's actually really hard to do in the moment. At AI Fund, we've killed multiple projects not on financial grounds but on ethical grounds—projects where the economic case was very solid, but we said, "You know what, we don't want this to exist in the world," and we killed them on that basis.
So I hope more people will do that. And then I worry about bringing everyone with us. One thing I'm seeing is that people in all sorts of non-engineering roles are much more productive if they know AI than if they don't. For example, the marketers on my team who knew how to code were frankly running circles around the ones who didn't. So then everyone learned to code, and they all got better.
But I feel like trying to bring everyone with us, to make sure everyone is empowered to build with AI—that'll be an important part of what all of us do, I think.
Q: I'm one of your big fans—thank you for your online courses, which have made deep learning much more accessible to the world. My question is also about education. As AI becomes more powerful and widespread, there seems to be a growing gap between what AI can actually do and what people perceive it can do. How important do you think it is to educate the general public about deep learning—not only technical people—so that people better understand what AI really does and how it works?
I think that knowledge will diffuse. At DeepLearning.AI, we want to empower everyone to build with AI, so we're working on it; many of us are. I'll just tell you what I think the main dangers are—there are maybe two. One is if we don't bring people with us fast enough; I hope we'll solve that.
There's one other danger. It turns out that if you look at the mobile ecosystem—mobile phones—it's actually not that interesting. One of the reasons is that there are two gatekeepers, Android and iOS, and unless they let you do certain things, you're not allowed to try them on mobile. I think this hampers innovators.
These supposed dangers of AI have been used by certain businesses trying to shut down open source, because a number of businesses would love to be gatekeepers to large-scale foundation models. Hyping up supposed, false dangers of AI to get regulators to pass laws—like the proposed SB 1047 in California, which thank goodness we shut down—would have put in place really burdensome regulatory requirements that don't make anyone safer but would make it really difficult for startups to release open-source and open-weight software.
So one of the dangers for inequality as well is these awful regulatory approaches—I've been in the room where some of these businesses said things to regulators that were just not true. So the danger with some of these arguments is that if these regulatory proposals succeed and end up as stifling regulations, leaving us with a