Why did the pre-training scaling paradigm not get us to AGI? If you look back just two years ago, this was the standard dogma. Everybody was saying it, and today almost no one believes it anymore. So what happened?
Does test-time adaptation get us to AGI this time? And if that's the case, maybe AGI is already here. Some people believe so.
Besides test-time adaptation, what else might be next for AI?