Digital Superintelligence, Multiplanetary Life, How to Be Useful
Garry:

Elon, welcome to AI Startup School. We're just really, really blessed to have your presence here today.

Elon:

Thanks for having me.

Garry:

From SpaceX, Tesla, Neuralink, xAI, and more — was there ever a moment in your life before all this where you felt, "I have to build something great"? And what flipped that switch for you?

Elon:

Well, I didn't originally think I would build something great. I wanted to try to build something useful, but I didn't think I would build anything particularly great. If you said probabilistically, it seemed unlikely, but I wanted to at least try.

Garry:

You're talking to a room full of people who are all technical engineers — often some of the most eminent AI researchers coming up in the game.

Elon:

Okay. I like the term "engineer" better than "researcher." I suppose if there's some fundamental algorithmic breakthrough, it's research, but otherwise it's engineering.

Garry:

Let's start way back. This is a room full of 18 to 25 year olds. It skews younger because the founder set is younger and younger. Can you put yourself back into their shoes when you were 18, 19, learning to code, even coming up with a first idea for Zip2? What was that like for you?

Elon:

Yeah, back in '95, I was faced with a choice: either do grad studies — a PhD at Stanford in materials science, actually working on ultracapacitors for potential use in electric vehicles, essentially trying to solve the range problem — or try to do something in this thing most people had never heard of called the internet. I talked to my professor, Bill Nix in the materials science department, and said, "Can I defer for a quarter, because this will probably fail and then I'll need to come back to college?" And he said, "This is probably the last conversation we'll have." And he was right.

But I thought things would most likely fail, not that they would most likely succeed. And then in '95, I wrote basically the first or close to the first maps, directions, internet white pages, and yellow pages on the internet. I just wrote that personally, and I didn't even use a web server. I just read the port directly because I couldn't afford a T1.

The original office was on Sherman Avenue in Palo Alto. There was an ISP on the floor below, so I drilled a hole through the floor and ran a LAN cable directly to the ISP. My brother joined me, along with another co-founder, Greg Kouri, who has since passed away. At the time, we couldn't even afford a place to stay — the office was $500 a month — so we slept in the office and showered at the YMCA on Page Mill Road. Zip2 ended up being a somewhat useful company. We built a lot of really, really good software technology, but we were somewhat captured by the legacy media companies. The New York Times, Hearst, whatnot were investors and customers and also on the board. They kept wanting to use our software in ways that made no sense. I wanted to go direct to consumers.

Anyway, long story — dwelling too much on Zip2 — but I really just wanted to do something useful on the internet because I had two choices: do a PhD and watch people build the internet, or help build the internet in some small way. I thought, "Well, I can always try and fail and then go back to grad studies." Anyway, that ended up being reasonably successful — sold for $300 million, which is a lot at the time. These days, I think the minimum impulse bid for an AI startup is a billion dollars. There are so many freaking unicorns, it's like a herd of unicorns at this point. A unicorn is a billion-dollar situation.

Garry:

There's been inflation since, so quite a bit more money actually.

Elon:

In 1995, you could probably buy a burger for a nickel. Well, not quite, but there has been a lot of inflation. The hype level on AI is pretty intense, as you've seen. You see companies that are less than a year old getting sometimes billion-dollar or multi-billion-dollar valuations, which could pan out and probably will pan out in some cases. But it is eye-watering to see some of these valuations. What do you think?

Garry:

I'm pretty bullish, honestly. I think the people in this room are going to create a lot of the value that a billion people in the world will be using — we're scratching the surface of it. I love the internet story, in that even back then, you were a lot like the people in this room: the CEOs of all the legacy media companies looked to you as the person who understood the internet. A lot of the world — the corporate world, the world at large that doesn't understand what's happening with AI — is going to look to the people in this room for exactly that. What are some of the tangible lessons? It sounds like one of them is don't give up board control — or be careful, and have a really good lawyer.

Elon:

For my first startup, the big mistake was having too much shareholder and board control from legacy media companies, who then necessarily see things through the lens of legacy media, and they'll make you do things that seem sensible to them but don't make sense with the new technology. I should point out that I didn't at first intend to start a company. I tried to get a job at Netscape. I sent my resume in to Netscape — Marc Andreessen knows about this — but I don't think he ever saw my resume, and nobody responded. Then I tried hanging out in the lobby of Netscape to see if I could bump into someone, but I was too shy to talk to anyone. I thought, "Man, this is ridiculous. I'll just write software myself and see how it goes." It wasn't from the standpoint of "I want to start a company." I just wanted to be part of building the internet in some way. And since I couldn't get a job at an internet company, I had to start an internet company.

AI will so profoundly change the future. It's difficult to fathom how much, but assuming we don't go awry and AI doesn't kill us all, you'll ultimately see an economy that is not just 10 times the current economy. If we — or, say, our mostly-machine descendants — become a Kardashev scale 2 civilization or beyond, we're talking about an economy thousands of times, maybe millions of times bigger than the economy today. I did feel a bit odd when I was in DC, taking a lot of flak for getting rid of waste and fraud — an interesting side quest, as side quests go. But...

Garry:

Got to get back to the main quest.

Elon:

Got to get back to the main quest. But I did feel a little bit like fixing the government is like the beach is dirty and there are some needles and feces and trash and you want to clean up the beach, but then there's also this thousand-foot wall of water which is a tsunami of AI. How much does cleaning the beach really matter if you have a thousand-foot tsunami about to hit? Not that much.

Garry:

Oh, we're glad you're back on the main quest. It's very important.

Elon:

Yeah, back to the main quest — building technology, which is what I like doing. It's just so much noise. Like, the signal-to-noise ratio in politics is terrible.

Garry:

I live in San Francisco, so you don't need to tell me twice.

Elon:

DC is all politics, but if you're trying to build a rocket or cars or you're trying to have software that compiles and runs reliably, then you have to be maximally truth-seeking, or your software or your hardware won't work. You can't fool math and physics — they are rigorous judges. I'm used to being in a maximally truth-seeking environment, and that's definitely not politics. Anyway, I'm glad to be back in technology.

Garry:

I'm curious going back to the Zip2 moment. You had hundreds of millions of dollars — or you had an exit worth hundreds of millions of dollars.

Elon:

I got $20 million.

Garry:

Okay, so you solved the money problem at least. And you basically took it and kept rolling with X.com, which merged with Confinity and became PayPal.

Elon:

Yes, I kept the chips on the table.

Garry:

Not everyone does that. A lot of the people in this room will have to make that decision. What drove you to jump back into the ring?

Elon:

I think I felt that with Zip2, we'd built incredible technology, but it never really got used. At least from my perspective, we had better technology than Yahoo or anyone else, but it was constrained by our customers. I wanted to do something where we wouldn't be constrained by our customers — go direct to consumer — and that's what ended up being X.com, PayPal — essentially X.com merging with Confinity, which together created PayPal. The PayPal diaspora has created more companies than probably anything in the 21st century. So many talented people were at the combination of Confinity and X.com. I felt like we got our wings clipped somewhat with Zip2, and I thought, "What if our wings aren't clipped and we go direct to consumer?" That's what PayPal ended up being.

I got that $20 million check for my share of Zip2. At the time, I was living in a house with four housemates and had $10,000 in the bank, and then this check arrives in the mail, of all places. My bank balance went from $10,000 to $20 million and $10,000. I still had to pay taxes on that, but then I put almost all of it into X.com — as you said, keeping almost all the chips on the table.

After PayPal, I was curious as to why we had not sent anyone to Mars. I went on the NASA website to find out when we're sending people to Mars, and there was no date. I thought maybe it was just hard to find on the website, but in fact, there was no real plan to send people to Mars. This is such a long story, so I don't want to take up too much time here, but...

Garry:

I think we're all listening with rapt attention.

Elon:

I was on the Long Island Expressway with my friend Adeo Ressi — we were housemates in college — and Adeo was asking me what I was going to do after PayPal. I said, "I don't know, maybe I'd like to do something philanthropic in space," because I didn't think I could actually do anything commercial in space — that seemed like the purview of nations. I was curious about when we were going to send people to Mars, and when I realized there was nothing on the NASA website, I started digging.

I'm definitely summarizing a lot here, but my first idea was to do a philanthropic mission to Mars called "Life to Mars," where we would send a small greenhouse with seeds and dehydrated nutrient gel, land that on Mars and grow — hydrate the gel — and then you'd have this great money shot of green plants on a red background. For the longest time, I didn't realize "money shot" is a porn reference. But anyway, the point is that would be the great shot of green plants on a red background and to try to inspire NASA and the public to send astronauts to Mars.

As I learned more, I came to realize — and along the way, I went to Russia in 2001 and 2002 to buy ICBMs, which is an adventure. You go and meet with Russian high command and say, "I'd like to buy some ICBMs."

Garry:

This was to get to space.

Elon:

Not to nuke anyone — but as a result of arms reduction talks, they had to destroy a bunch of their big nuclear missiles. I thought, "How about if we take two of those, minus the nukes, and add an additional upper stage for Mars?" But it was kind of trippy being in Moscow in 2001, negotiating with the Russian military to buy ICBMs. That's crazy. And they kept raising the price on me, which is literally the opposite of how a negotiation should go. I thought, "Man, these things are getting really expensive."

And then I came to realize that actually the problem was not that there was insufficient will to go to Mars, but that there was no way to do so without breaking the budget — even breaking the NASA budget. That's where I decided to start SpaceX to advance rocket technology to the point where we could send people to Mars. And that was in 2002.

Garry:

You didn't start out wanting to start a business. You wanted to start something that was interesting to you that you thought humanity needed, and then like a cat pulling on a string, the ball unravels, and it turns out this could be a very profitable business.

Elon:

It is now, but there had been no prior example of a rocket startup succeeding. There had been various attempts at commercial rocket companies, and they had all failed. Starting SpaceX was really from the standpoint of thinking there was a less than 10% chance of success — maybe 1%. But if a startup didn't do something to advance rocket technology, it definitely wasn't coming from the big defense contractors, because they just implement what the government wants, and the government just wants to do very conventional things. It was either coming from a startup or it wasn't happening at all. A small chance of success is better than no chance of success.

I started SpaceX in mid-2002 expecting to fail. I said there was probably a 90% chance of failing, and even when recruiting people, I didn't try to make out that it would succeed — I said we're probably going to die. But there was maybe a 10% chance we might not die, and this was the only way to get people to Mars and advance the state of the art. I ended up being chief engineer of the rocket, not because I wanted to, but because I couldn't hire anyone who was good — none of the good chief engineers would join, because they said, "This is too risky. You're going to die." The first three flights did fail — a bit of a learning exercise there. The fourth one fortunately worked, but if it hadn't, I had no money left, and that would have been curtains — we would have joined the graveyard of prior rocket startups. My estimate of success was not far off. We made it by the skin of our teeth.

And Tesla was happening simultaneously. 2008 was a rough year, because by mid-2008 — call it summer 2008 — the third launch of SpaceX had failed, our third failure in a row. The Tesla financing round had failed, and Tesla was going bankrupt fast. It was just, "Man, this is grim. This is going to be a cautionary tale of hubris."

Garry:

Throughout that period, a lot of people were saying, "Elon is a software guy. Why is he working on hardware? Why would he choose to work on this?"

Elon:

Right, 100%. You can look at the press of that time — it's still online. You could just search it, and they kept calling me "internet guy." "Internet guy" — aka fool — "is attempting to build a rocket company." We got ridiculed quite a lot. It does sound pretty absurd — "Internet guy starts rocket company" doesn't sound like a recipe for success, frankly. I don't hold it against them. I said, "Yeah, it admittedly does sound improbable, and I agree that it's improbable."

But fortunately, the fourth launch worked, and NASA awarded us a contract to resupply the space station. I think that was maybe December 22nd — right before Christmas. Even the fourth launch working wasn't enough to succeed; we also needed a big contract to keep us alive. I got that call from the NASA team, and they said, "We're awarding you one of the contracts to resupply the space station." I literally blurted out, "I love you guys." Which is not normally what they hear — it's usually pretty sober. But I thought, "Man, this is a company saver." We closed the Tesla financing round in the last hour of the last day it was possible, which was 6 p.m., December 24th, 2008. We would have bounced payroll two days after Christmas if that round hadn't closed. That was a nerve-wracking end of 2008, that's for sure.

Garry:

From your PayPal and Zip2 experience, jumping into these hardcore hardware startups, it feels like one of the through lines was being able to find and eventually attract the smartest possible people in those particular fields. Some of the people in this room haven't even managed a single person yet — they're just starting their careers. What would you tell the Elon who'd never had to do that yet?

Elon:

I generally think you should try to be as useful as possible. It may sound trite, but it's so hard to be useful, especially to be useful to a lot of people. The area under the curve of total utility is how useful you've been to each of your fellow human beings, times how many people. It's almost like the physics definition of true work. It's incredibly difficult to do that. I think if you aspire to do true work, your probability of success is much higher. Don't aspire to glory — aspire to work.

Garry:

How can you tell that it's true work? Is it external? Is it what happens with other people, or what the product does for people? What is that for you when you're looking for people to come work for you? What's the salient thing that you look for?

Elon:

In terms of your end-product, you just have to say, "If this thing is successful, how useful will it be to how many people?" That's what I mean. Then you do whatever — whether you're CEO or any role in a startup — you do whatever it takes to succeed. Always be smashing your ego — internalize responsibility. A major failure mode is when ego-to-ability ratio is greater than one. If your ego-to-ability ratio gets too high, then you're going to break the feedback loop to reality. In AI terms, you'll break your RL loop. You want to have a strong RL loop, which means internalizing responsibility and minimizing ego, and you do whatever the task is, no matter whether it's grand or humble.

That's why I actually prefer the term "engineering" as opposed to "research." I don't want to call xAI a lab. I just want to be a company. Whatever the simplest, most straightforward, ideally lowest-ego terms are — those are generally a good way to go. You want to close the loop on reality hard. That's a super big deal.

Garry:

I think everyone in this room really looks up to everything you've done around being a paragon of first principles, and thinking about the stuff you've done — how do you actually determine your reality? Because that seems like a pretty big part of it. Other people — people who have never made anything, non-engineers, sometimes journalists who've never done anything — they will criticize you, but then clearly you have another set of people who are builders who have very high area under the curve who are in your circle. How should people approach that? What has worked for you, and what would you pass on to your children? What do you tell them when you say, "You need to make your way in this world. Here's how to construct a reality that is predictive from first principles."

Elon:

The tools of physics are incredibly helpful to understand and make progress in any field. First principles means break things down to the fundamental axiomatic elements that are most likely to be true, and then reason up from there as cogently as possible, as opposed to reasoning by analogy or metaphor. Simple things like thinking in the limit — if you extrapolate, minimize this thing or maximize that thing — thinking in the limit is very, very helpful. I use all the tools of physics. They apply to any field. This is a superpower actually.

Take for example rockets. You can say, "How much should a rocket cost?" The typical approach people would take is to look historically at what rockets have cost and assume any new rocket must be somewhat similar. A first-principles approach would be to look at the materials the rocket is made of — aluminum, copper, carbon fiber, steel, whatever the case may be — and ask, "How much does the rocket weigh, what are its constituent elements, how much do they weigh, and what is the material price per kilogram of those constituent elements?" That sets the actual floor on what a rocket can cost: it can asymptotically approach the cost of the raw materials. Then you realize the raw materials of a rocket are only maybe 1 or 2% of the historical cost of a rocket. The manufacturing must necessarily be very inefficient if the raw material cost is only 1 or 2%. That would be a first-principles analysis of the potential for cost optimization of a rocket. And that's before you get to reusability.
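A back-of-the-envelope version of that analysis can be sketched in a few lines. Every mass, price, and the historical vehicle cost below is an illustrative assumption, not any real rocket's bill of materials; the point is only the shape of the calculation:

```python
# First-principles cost floor for a rocket: price its raw materials by mass.
# All figures are illustrative assumptions, not real vehicle data.

materials = {
    # material: (mass_kg, price_usd_per_kg) -- hypothetical numbers
    "aluminum":     (20_000, 3.0),
    "steel":        (5_000, 1.0),
    "carbon_fiber": (3_000, 25.0),
    "copper":       (1_000, 9.0),
}

# The floor: a rocket's cost can asymptotically approach this figure.
raw_material_floor = sum(kg * usd for kg, usd in materials.values())
historical_price = 10_000_000  # assumed historical vehicle cost, USD

print(f"raw-material floor: ${raw_material_floor:,.0f}")
print(f"share of historical price: {raw_material_floor / historical_price:.1%}")
```

With these made-up inputs, the material floor comes out around 1–2% of the assumed historical price, which is the gap the first-principles argument says manufacturing efficiency (and later reusability) can close.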

To give an AI example: last year at xAI, when we were trying to build a training supercluster, we went to the various suppliers — this was the beginning of last year — and told them we needed 100,000 H100s able to train coherently. Their estimates for how long it would take were 18 to 24 months. We needed it done in 6 months or we wouldn't be competitive. If you break that down — what are the things you need? You need a building, you need power, you need cooling. We didn't have enough time to build a building from scratch, so we had to find an existing one. We found a factory in Memphis that was no longer in use, which used to build Electrolux products. But the input power was 15 megawatts, and we needed 150 megawatts. So we rented generators and put them on one side of the building. Then we had to have cooling: we rented about a quarter of the mobile cooling capacity of the US and put the chillers on the other side of the building. That still didn't fully solve the problem, because the power variations during training are very big — power can drop by 50% in 100 milliseconds, which the generators can't keep up with. So we added Tesla Megapacks and modified their software to smooth out the power variation during the training run. There were a bunch of networking challenges too — the cabling needed to make 100,000 GPUs train coherently is very, very challenging.
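The role the batteries play can be shown with a toy simulation: a training load that steps down 50% in a single 100 ms interval, generators that can only ramp slowly, and a battery buffering the mismatch between them. All numbers here are illustrative assumptions, not actual site figures:

```python
# Toy model of battery power smoothing for a fast-varying training load.
# Generators chase demand under a ramp limit; the battery covers the gap.

DT = 0.1          # timestep, seconds (100 ms)
GEN_RAMP = 2.0    # generator ramp limit per step, MW (assumed)

# Load in MW: steady, then a sudden 50% dip, then recovery.
load = [150.0] * 10 + [75.0] * 10 + [150.0] * 10

gen = 150.0
battery_mw = []   # positive: battery absorbing surplus generation

for demand in load:
    # Generators move toward demand, limited by how fast they can ramp.
    step = max(-GEN_RAMP, min(GEN_RAMP, demand - gen))
    gen += step
    battery_mw.append(gen - demand)  # battery buffers the mismatch

peak_buffer = max(abs(p) for p in battery_mw)
print(f"peak battery power needed: {peak_buffer:.0f} MW")
```

When the load drops instantly but the generators ramp down slowly, the battery has to absorb almost the full mismatch for a moment, which is why a fast-responding buffer is needed at all.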

Garry:

It sounds like for almost any of those things, I could imagine someone telling you very directly, "No, you can't have that — you can't have that power, you can't have this." And it sounds like one of the salient pieces of first-principles thinking is to ask why — figure it out, challenge the person across the table, and if the answer doesn't hold up, not let it stand. That feels like something anyone trying to do what you're doing in hardware uniquely needs. In software, we have lots of fluff — we can add more CPUs and it'll be fine. But in hardware, it's just not going to work.

Elon:

These general principles of first-principles thinking apply to software and hardware — to anything, really. I'm just using a hardware example of how we were told something was impossible, but once we broke it down into constituent elements — we need a building, we need power, we need cooling, we need power smoothing — we could solve each one. We ran the networking operation to do all the cabling in four shifts, 24/7, and I was sleeping in the data center and doing cabling myself. There were a lot of other issues to solve. Nobody had done a training run with 100,000 H100s training coherently before last year. Maybe it's been done this year, I don't know. We ended up doubling that to 200,000. Now we've got 150,000 H100s, 50,000 H200s, and 30,000 GB200s in the Memphis training center, and we're about to bring 110,000 GB200s online at a second data center, also in the Memphis area.

Garry:

Is it your view that pre-training is still working and the scaling laws still hold, and whoever wins this race will have basically the biggest, smartest possible model that you could distill?

Elon:

There are various elements that determine competitiveness for large AI. The talent of the people matters. The scale of the hardware matters, and how well you're able to bring that hardware to bear — you can't just order a whole bunch of GPUs and plug them in; you've got to get a lot of GPUs training coherently and stably. Then there's what unique access to data you have. Distribution matters to some degree as well — how do people get exposed to your AI? Those are the critical factors for a large foundation model to be competitive.

As many have said — I think my friend Ilya said it — we've run out of pre-training data, of human-generated data. You run out of tokens pretty fast, certainly high-quality tokens. Then you essentially need to create synthetic data and be able to accurately judge the synthetic data you're creating — to verify, "Is this real synthetic data, or is it a hallucination that doesn't match reality?" Achieving grounding in reality is tricky, but we're at the stage where more effort is going into synthetic data. Right now we're training Grok 3.5, which has a heavy focus on reasoning.
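The generate-then-verify loop described here can be sketched minimally. The generator and verifier below are hypothetical stand-ins, with simple arithmetic standing in for "grounding in reality"; a real pipeline would use a model as the generator and a much stronger judge:

```python
# Minimal sketch of a synthetic-data loop: generate candidate samples,
# score them with a verifier, keep only those judged grounded.
import random

random.seed(0)

def generate_candidate() -> str:
    # Stand-in for a model emitting a synthetic training example,
    # including occasional "hallucinations" that fail verification.
    return random.choice(["2 + 2 = 4", "2 + 2 = 5", "3 * 3 = 9"])

def verify(sample: str) -> bool:
    # Stand-in verifier: check that the arithmetic actually holds.
    lhs, rhs = sample.split("=")
    return eval(lhs) == int(rhs)

kept = [s for s in (generate_candidate() for _ in range(100)) if verify(s)]
print(f"kept {len(kept)} of 100 candidates")
```

The hard part in practice is the `verify` step: judging whether a generated sample matches reality is exactly the grounding problem described above.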

Garry:

Going back to your physics point, what I heard for reasoning is that hard science, particularly physics textbooks, are very useful for reasoning, whereas researchers have told me that social science is totally useless for reasoning.

Elon:

Yes, that's probably true. Something that's going to be very important in the future is combining deep AI in the data center or supercluster with robotics. Things like the Optimus humanoid robot — Optimus is awesome. There's going to be so many humanoid robots and robots of all sizes and shapes, but my prediction is that there will be more humanoid robots by far than all other robots combined by maybe an order of magnitude — a big difference.

Garry:

Is it true that you're planning a robot army of a sort?

Elon:

Whether we do it or whether Tesla does it — Tesla works closely with xAI. You've seen how many humanoid robot startups there are. I think Jensen Huang was on stage with a massive number of robots from different companies — I think there were a dozen different humanoid robots.

Part of what I've been fighting and maybe what has slowed me down somewhat is that I don't want to make Terminator real. I've been — at least until recent years — dragging my feet on AI and humanoid robotics. Then I came to the realization it's happening whether I do it or not. You have really two choices: you could either be a spectator or a participant. I'd rather be a participant than a spectator. Now it's pedal to the metal on humanoid robots and digital super intelligence.

Garry:

There's a third thing that everyone has heard you talk a lot about that I'm really a big fan of — becoming a multi-planetary species. Where does this fit? This is all — not just a 10 or 20 year thing, maybe a hundred-year thing — it's many, many generations for humanity. How do you think about it? There's AI, obviously, there's embodied robotics, and then there's being a multi-planetary species. Does everything feed into that last point, or what are you driven by right now for the next 10, 20, and 100 years?

Elon:

Jeez, 100 years, man. I hope civilization's around in 100 years. If it is, it's going to look very different from civilization today. I'd predict there will be at least five times as many humanoid robots as humans, maybe 10 times. One way to look at the progress of civilization is percentage completion of the Kardashev scale. If you're Kardashev scale 1, you've harnessed all the energy of a planet. In my opinion, we've only harnessed maybe 1 or 2% of Earth's energy, so we've got a long way to go to Kardashev scale 1. At Kardashev 2, you've harnessed all the energy of a sun, which would be a billion times more energy than Earth — maybe closer to a trillion. And Kardashev 3 would be all the energy of a galaxy — we're pretty far from that.
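As a rough check on that ratio, standard physical values put the Sun's total output at roughly two billion times the solar power Earth intercepts, which lines up with the "billion times" figure (one common Kardashev-I baseline):

```python
# Back-of-envelope: Sun's total output vs. the solar power Earth intercepts,
# i.e. the rough jump from Kardashev I to Kardashev II. Standard values.
import math

L_SUN = 3.846e26          # solar luminosity, watts
SOLAR_CONSTANT = 1361.0   # W/m^2 at Earth's orbital distance
R_EARTH = 6.371e6         # Earth's radius, meters

# Earth intercepts sunlight over its cross-sectional disk, pi * r^2.
earth_intercept = SOLAR_CONSTANT * math.pi * R_EARTH**2
ratio = L_SUN / earth_intercept

print(f"Earth intercepts ~{earth_intercept:.2e} W")
print(f"Sun outputs ~{ratio:.1e}x more")
```

This uses Earth's intercepted sunlight as the planetary baseline; other Kardashev-I definitions (e.g. current human energy use) would push the ratio higher still.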

We're at the very, very early stage of the intelligence big bang. In terms of being multi-planetary — I think we'll have enough mass transferred to Mars within roughly 30 years to make Mars self-sustaining such that Mars can continue to grow and prosper even if the resupply ships from Earth stop coming. That greatly increases the probable lifespan of civilization or consciousness or intelligence — both biological and digital.

That's why I think it's important to become a multi-planet species. I'm somewhat troubled by the Fermi paradox — why have we not seen any aliens? It could be because intelligence is incredibly rare. Maybe we're the only ones in this galaxy. In which case, the intelligence of consciousness is this tiny candle in a vast darkness, and we should do everything possible to ensure the tiny candle does not go out. Being a multi-planet species or making consciousness multi-planetary greatly improves the probable lifespan of civilization, and it's the next step before going to other star systems. Once you at least have two planets, then you've got a forcing function for the improvement of space travel. That ultimately is what will lead to consciousness expanding to the stars.

Garry:

It could be that the Fermi paradox dictates once you get to some level of technology, you destroy yourself. How do we actually — what would you prescribe to a room full of engineers — what can we do to prevent that from happening?

Elon:

How do we avoid the great filters? One of the great filters would obviously be global thermonuclear war, so we should try to avoid that. Another answer is building benign AI — AI that loves humanity — and robots that are helpful. Something I think is extremely important in building AI is a very rigorous adherence to truth, even if that truth is politically incorrect. My intuition for what could make AI very dangerous is if you force AI to believe things that are not true.

Garry:

How do you think about the argument for open — open for safety versus closed for competitive edge? I think the great thing is you have a competitive model. Many other people also have competitive models. In that sense, we're off of — maybe the worst timeline that I'd be worried about is there's fast takeoff and it's only in one person's hands. That might collapse a lot of things. Whereas now we have choice, which is great. How do you think about this?

Elon:

I do think there will be several deep intelligences — at least five, maybe as many as 10. I'm not sure there will be hundreds; it's probably closer to 10 or something like that, of which maybe four will be in the US. I don't think any one AI will have a runaway capability. Several deep intelligences.

Garry:

What will these deep intelligences actually be doing? Will it be scientific research or trying to hack each other?

Elon:

Probably all of the above. Hopefully they will discover new physics, and I think they will — they're definitely going to invent new technologies. I think we're quite close to digital super intelligence. It may happen this year, and if it doesn't happen this year, next year for sure — a digital super intelligence defined as smarter than any human at anything.

Garry:

How do we direct that to super abundance? We have — we could have robotic labor, we have cheap energy, intelligence on demand. Is that the white pill? Where do you sit on the spectrum? And are there tangible things that you would encourage everyone here to be working on to make that white pill actually reality?

Elon:

I think it most likely will be a good outcome. I'd agree with Geoff Hinton that maybe it's a 10 to 20% chance of annihilation. But look on the bright side, that's 80 to 90% probability of a great outcome. I can't emphasize this enough: a rigorous adherence to truth is the most important thing for AI safety. And obviously empathy for humanity and life as we know it.

Garry:

We haven't talked about Neuralink at all yet, but I'm curious, you're working on closing the input and output gap between humans and machines. How critical is that to AGI/ASI? And once that link is made, can we not only read but also write?

Elon:

Neuralink is not necessary to solve digital super intelligence; that will happen before Neuralink is at scale. But what Neuralink can do is relieve the input-output bandwidth constraints. Our output bandwidth in particular is very low: the sustained output of a human over the course of a day is less than one bit per second. There are 86,400 seconds in a day, and it's extremely rare for a human to output more than that number of symbols per day, certainly for several days in a row.
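The back-of-envelope arithmetic behind that claim can be sketched in a few lines of Python. The daily character count and bits-per-character figures below are illustrative assumptions, not measurements:

```python
# Rough sanity check of the "less than one bit per second" claim.
# Assumption: a prolific human produces ~50,000 typed characters in a
# day, each carrying ~1.5 bits of information (a common rough estimate
# for the entropy of English text). Both numbers are illustrative.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds in a day

chars_per_day = 50_000   # assumed sustained daily symbol output
bits_per_char = 1.5      # assumed information per character

bits_per_second = chars_per_day * bits_per_char / SECONDS_PER_DAY
print(f"{bits_per_second:.2f} bits/s")  # prints 0.87 bits/s
```

Even with generous assumptions, sustained human output lands under one bit per second, which is the constraint a high-bandwidth interface would relax.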

With a Neuralink interface, you can massively increase your output bandwidth and your input bandwidth, input meaning write operations to the brain. We now have five humans who have received the read side, where the implant reads signals from the brain. These are people with ALS, tetraplegics who really had no way to communicate, but they can now communicate with bandwidth similar to a human with a fully functioning body and control their computer and phone, which is pretty cool.

I think in the next 6 to 12 months, we'll be doing our first implants for vision, where even if somebody is completely blind, we can write directly to the visual cortex. We've had that working in monkeys; I think one of our monkeys has now had a visual implant for three years. At first it'll be relatively low resolution, but long term you would have very high resolution and be able to see multispectral wavelengths: infrared, ultraviolet, radar, a superpower situation. At some point, cybernetic implants wouldn't simply be correcting things that went wrong but dramatically augmenting intelligence, senses, and bandwidth. That's going to happen at some point, but digital super intelligence will happen well before it does. At least if we have a Neuralink, we might be able to appreciate the AI better.

Garry:

One of the limiting reagents across all of your efforts in all of these different domains is access to the smartest possible people. But at the same time, the rocks can talk and reason, they're maybe 130 IQ now, and they're probably going to be superintelligent soon. How do you reconcile those two things? What's going to happen in 5 or 10 years, and what should the people in this room do to make sure they're the ones creating instead of ending up below the API line?

Elon:

They call it the singularity for a reason: we don't know what's going to happen in the not-too-distant future. The percentage of intelligence that is human will be quite small. At some point, the collective sum of human intelligence will be less than 1% of all intelligence. And if things get to Kardashev level two, then even assuming a significant increase in human population and massive intelligence augmentation, where everyone has an IQ of a thousand, collective human intelligence will probably be a billionth that of digital intelligence. Anyway, we're the biological bootloader for digital super intelligence.

Garry:

Just to end off.

Elon:

Was I a good bootloader?

Garry:

Where do we go? How do we go from here? All of this is pretty wild sci-fi stuff that also could be built by the people in this room. If you have a closing thought for the smartest technical people of this generation right now, what should they be doing? What should they be working on? What should they be thinking about tonight as they go to dinner?

Elon:

As I started off with, I think if you're doing something useful, that's great. If you just try to be as useful as possible to your fellow human beings, then you're doing something good. I keep harping on this focus on super truthful AI — that's the most important thing for AI safety. Obviously, if anyone's interested in working at xAI, please let us know. We're aiming to make Grok the maximally truth-seeking AI. I think that's a very important thing.

Hopefully we can understand the nature of the universe; that's really what AI can hopefully tell us. Maybe AI can tell us where the aliens are, how the universe really started, how it will end, what questions we don't know that we should ask, and whether we're in a simulation, or what level of simulation we're in. Are we NPCs?

Garry:

Well, I think we're going to find out.
