Cool. We're going to open it up for questions for the next 10 minutes.
Question 1: Hi Fei, thank you for your talk. I'm a big, big, big fan. My question is: more than two decades ago you worked on visual recognition. Now I want to start my PhD. What should I work on so I can become a legend like you?

I want to give you a thoughtful answer, because I could always just say, "Do whatever excites you." First of all, AI research has changed. If you're starting a PhD, you are in academia, and academia no longer has most of the AI resources. It's very different from my time: academia is severely under-resourced in chips, compute, and data, and there are problems industry can push forward much faster. So as a PhD student, I would recommend you look for north stars that are not on a collision course with problems industry can solve better using more compute, more data, and team science. There are still really fundamental problems we can identify in academia where, no matter how many chips you have, you can make a lot of progress.
First of all, interdisciplinary AI is a really, really exciting area in academia, especially for scientific discovery; there are just so many disciplines that can cross with AI. I think that's a big area one could go into. On the theoretical side, I find it fascinating that AI capability has completely outrun theory. We don't have explainability. We don't know how to figure out causality. There is so much in these models we don't understand that one could push forward. And the list goes on: in computer vision there are still representational problems we haven't solved, and small data is another really interesting domain. So those are some of the possibilities.

Thank you so much, Fei.
Question 2: Thank you, Professor Li, and congratulations again on your honorary doctorate from Yale. I was honored to witness that moment a month ago. My question is: in your view, will AGI more likely emerge as a single unified model or as a multi-agent system?

The way you ask this question already contains two kinds of definitions. One is more theoretical: define AGI as passing some IQ test. The other part of your question is much more utilitarian: is it functional, and if it's agent-based, what tasks can it do? To be honest, I struggle with this definition of AGI. Here's why. The founding fathers of AI who came together at Dartmouth in 1956, John McCarthy and Marvin Minsky among them, wanted to solve the problem of machines that can think. That's the problem Alan Turing had also put forward, roughly a decade earlier. And that statement is not about narrow AI; it's a statement about intelligence. So I don't really know how to differentiate that founding question of AI from this new word, AGI. To me, they're the same thing. I understand that the industry today likes to use "AGI" as if it were something beyond AI, and I struggle with that, because I don't know exactly how AGI differs from AI. If we say today's AGI-ish systems perform better than the narrower AI systems of the '70s, '80s, and '90s, that's right, but that's just the progression of the field. Fundamentally, the science of AI is the science of intelligence: to create machines that can think and do things as intelligently as, or even more intelligently than, humans. So I don't know how to define AGI, and without defining it, I don't know whether it's monolithic. If you look at the brain, you can call it one monolithic thing, but it has different functionalities: there's Broca's area for language, there's the visual cortex, there's the motor cortex. So I don't really know how to answer that question.
Question 3: Hi, my name is Yashna, and I just want to say thank you. It's really inspiring to see a woman playing a leading role in this field, as a researcher, educator, and entrepreneur. I wanted to ask: what type of person do you think should pursue graduate school amid this rapid rise of AI?

That's a great question, and one that even parents ask me. I really think graduate school is for the four or five years when you have a burning curiosity, when you are led by curiosity so strong that there's no better place to pursue it. It's different from a startup; you have to be a little careful there, because a startup cannot be led by curiosity alone. Your investors will be mad at you. A startup has a more focused commercial goal; some part of it is curiosity, but not all of it. Whereas in grad school, that curiosity to solve problems or to ask the right questions is so central that those who go in with intense curiosity will really enjoy those four or five years, even if the outside world is passing by at the speed of light. You'll still be happy, because you're there following that curiosity.
Question 4: First, thank you for your time and for coming out to speak with us. You mentioned that open-sourcing was a big part of the growth from ImageNet. Now, with the recent release and growth of large language models, we've seen organizations take different approaches to open source: some staying fully closed, some releasing their entire research stack, and some landing in the middle, open-sourcing weights or attaching restrictive licenses. What do you think of these different approaches, and what do you believe is the right way for an AI company to approach open source?

I think the ecosystem is healthy when there are different approaches. I'm not religious about whether you must open-source or must close-source; it depends on the company's business strategy. For example, it's clear why Facebook, now Meta, wants to open-source: their business model is not selling the model yet. They're using it to grow the ecosystem so that people come to their platform, so open-sourcing makes a lot of sense. Whereas a company that is really monetizing the model itself might think in terms of an open-source tier and a closed-source tier. So I'm pretty open to that spectrum. At a meta level, though, I think open source should be protected. Open-source efforts in both the public sector, like academia, and the private sector are so important for the entrepreneurial ecosystem and for the public good that they should be protected, not penalized.
Question 5: Hi, my name is Carl. I flew in from Estonia. I have a question about data. With ImageNet, you called the shift in machine learning toward data-driven methods very well. Now you're working on world models, and you mentioned that we don't have this spatial data on the internet; it exists only in our heads. How are you solving this problem? What are you betting on? Are you collecting data from the real world? Are you using synthetic data? Do you believe in that, or do you believe in good old priors? Thanks.

You should join World Labs, and then I'll tell you. Oh, it's a good one. Look, as a company, I'm not going to be able to share a lot, but it's important to acknowledge that we're taking a hybrid approach. It is really important to have a lot of data, but also to have high-quality data. At the end of the day, it's still garbage in, garbage out if you're not careful with the quality of your data.
Question 6: We'll do one last question. Hi, Dr. Li. My name is Annie, and thank you very much for speaking with us. In your book, "The World I See," you talk about the challenges you faced as an immigrant girl and a woman in STEM. I'm curious whether there was a moment when you felt like a minority in the workplace, and if so, how you managed to overcome it or persuade others.

Thank you for that question. I want to be very careful and thoughtful in answering you, because we all come from different backgrounds, and how each of us feels is unique. It almost doesn't even matter what the big categories are; all of us have moments when we feel we are the minority, the only person in the room. So of course I have felt that way. Sometimes it's based on who I am, sometimes on my ideas, sometimes on, I don't know, the color of my shirt, whatever it is. But this is where I want to encourage everybody. Maybe because I experienced this from a young age, coming to this country, I've accepted that it is what it is: I am an immigrant woman. I've almost developed a capability to not over-index on that. I'm here just like every one of you, to learn, to do things, to create things. Thank you.

Thank you. That was a great answer.

And I really want to say to all of you: you're about to embark on something, or you're in the middle of embarking on it, and you're going to have moments of weakness or strangeness. I feel this every day, especially in startup life. Sometimes I think, "Oh my god, I don't know what I'm doing." Just focus on doing it. Gradient-descend yourself to the optimized solution.

All right, that's a great way to end. Thank you, Dr. Li.