So good to see you.

Great to see you too.

Last time, I was in San Francisco.

Yes.

I know, you are all over the place. I'm curious if this year feels different than the last time you were in Davos. Gemini 3 is out. We've heard that OpenAI called a code red internally. Do you feel like Google got its mojo back?

Well, I'm not sure if that's for me to say, but I feel like we had a very good year. It's been hard work, really hard work, getting our technology and the models sort of back to state of the art. I think we did that with Gemini 3 especially, and Nano Banana, our image model. And then I think we've also really adapted to this new world of shipping very fast, kind of bringing a startup energy to what we do.

Do you think people underestimated Google or got something wrong?

Yes, maybe. I'm not sure. I mean, I think we always had the ingredients, you know, to be at the forefront of this. Obviously, we've got the long history in it. I think, you know, over the last decade, Google and DeepMind, between us, have invented most of the breakthroughs that the modern AI industry relies on now. Obviously Transformers, most famously, but AlphaGo, deep reinforcement learning, these things. And we have these incredible billion-user services that are natural fits for AI, actually, from search to email to Chrome. But it's just getting all of that together and organized in the right way, and I think we've done that in the last couple of years. There's still a lot more work to do, but I think we're starting to see the fruits of that.

If you think you have an advantage, how big do you think your advantage is? How long does it last?

Well, I think everything starts with research, in my opinion, and the models especially being state-of-the-art on all the different benchmarks. That's what we focused on first when we put Google and DeepMind together, and with the Gemini series, we're very happy with how that's going. There's a lot more work to do there, but I think we're the only organization that has the full stack: the TPUs and the hardware, the data centers, the cloud business, the frontier lab, and all of these amazing products that are, you know, natural fits for AI. So really structurally, from first principles, we should be doing very well. And I think there's actually a lot more headroom to come from here.

I wonder what a day in the life of the CEO of a leading frontier AI lab is like. You know, I've read that you do most of your thinking between 1 and 4 a.m.

Yes, that's right.

Is it ever not a code red inside? Like, do you ever feel comfortable?

No, you never feel comfortable. I mean, we try, you know, code reds are for very special circumstances, but it's always, I mean, for the last, I would say, three, four years, it's been unbelievably intense. And, you know, 100-hour weeks, 50 weeks a year, that's the norm. I think that's what you have to do at the forefront of this unbelievably fast-moving technology. It's ferociously competitive out there, maybe the most intense competition there's ever been in technology, and the stakes are incredibly high: AGI and, you know, all that that can mean, commercially and scientifically. And then if you add all the excitement of what we're doing, and, you know, my passion, as you know, is exploring scientific problems with AI, accelerating scientific discovery itself. This is what I've always dreamed about, and I've worked my whole life on AI towards this moment.
And so it's sort of hard to sleep, because there's so much work to do, but also there are so many exciting things to look into and to push forward.

I mean, I know you're very focused on AI driving scientific progress, discovering new materials. We've even seen Gemini being integrated into humanoid robots now. Is the AlphaFold moment for the physical world and AI here? What is that, and what does that look like?

Yeah, I spent a lot of the last year actually looking very carefully into robotics. I do think we're on the cusp of a kind of breakthrough moment in physical intelligence. I still think we're about 18 months, two years away; we need to do more research. But I think the foundation models like Gemini show the way forward. I mean, from the beginning, we made Gemini multimodal so it could understand the physical world, for multiple reasons. One was so we could build a universal assistant, maybe one that exists on your glasses or your phone, that understands the world around you. But, of course, a second use of that would be for robotics.

So what does that moment for the physical world look like?

I think it's having robots, you know, reliably do useful tasks out in the world. And I think there are a few things holding that back still. Part of it is the algorithms are still not quite there; they need a little bit more robustness. They have to work with less data than you get for the LLMs or the models that work in the purely digital realm, where you can sort of create synthetic data. It's a lot harder to make that kind of data for the physical world. And there are still some problems in the hardware that are not solved, specifically things like the arm and the hand. Actually, when you look into robotics very carefully, you get a newfound appreciation, at least I did, for the human hand and how exquisitely evolution has designed it. It's incredible, and it's very hard to match the reliability, the strength, and the dexterity that the human hand has. So there are still quite a lot of pieces, in my opinion, to be put together. But there are very exciting things. I mean, we just announced a new deep collaboration with Boston Dynamics. They've got some very exciting robots, and with Hyundai we're actually applying it to automotive manufacturing. And we'll see over the next year how that goes in sort of prototype form. And then maybe in a year or two, we'll have some really impressive demonstrations that we can scale up.

A year ago, DeepSeek seemed cataclysmic for the West. Now, a year later, it's quiet. China seems to have been quieter.

Yes.

Has your opinion on competition from China changed?

Not really. I mean, I didn't think it was cataclysmic in the first place; I think it was a massive overreaction in the West. It was impressive, and I think it shows that the Chinese are very capable, the leading companies. I think companies like ByteDance, actually, I would say are the most capable, and they're maybe only six months behind, not one or two years behind the frontier. So I think that's what DeepSeek showed. Some of the claims were overdone, about the amount of compute they used being so minimal and so on, because they relied on some Western models and also fine-tuned on the outputs of some of the leading Western models. So it wasn't sort of de novo. And the other thing that is so far yet to be seen is: can the Chinese companies actually innovate beyond the frontier themselves? You know, they're gaining; they're very good at kind of catching up to where the frontier is, and increasingly capable of that.
But I think they've yet to show they can innovate beyond the frontier.

You helped define AGI. You have said we have a 50% chance of getting there by 2030. Is that still your timeline?

It is.

And is AGI still a useful target for you?

I think so. At least it's still my timeline, and it's a little bit longer than some others that you hear.

It is longer than some others.

But my bar is quite high. It's, you know, a system that exhibits all the cognitive capabilities humans have, and I think we're still clearly quite far from that. That means things like scientific creativity: not just solving a conjecture or solving a problem in science, but actually coming up with the hypothesis or the problem in the first place. And as any scientist knows, finding the right question is often way harder than finding the answer. So, you know, it's not clear that these systems have that capability; in fact, they definitely don't right now. I think they will eventually, but it's not clear what's still needed. And there are things like continual learning, you know, online learning, going beyond what they've been trained for. Right now they're static out in the world; they need to learn on the fly. So there are quite a few, in my view, missing capabilities that are quite critical to what I would regard as an AGI system.

Google's a major investor in Anthropic. Dario was here earlier today. Did you agree or disagree with his prediction that AI will wipe away 50% of entry-level white-collar jobs in five years?

My timelines and my view on that would also be a lot longer. I mean, I think we're starting to see maybe the beginnings of that this year, in terms of entry-level jobs or internships, those types of things. But I think we would have to solve a lot more of this consistency that AI doesn't have right now. I call it jagged intelligence: the current systems are very good at certain things and very poor at others. And if you want to offload or delegate an entire task to, say, an agent, rather than having what we have today, which are more like assistive programs, you're going to need a lot more consistency across the board. It's no good for it to be good at 95% of the task; you need it to be good at the whole task for you to be able to actually just sort of fire and forget. So I think there's still quite a lot more to be done before we'll see that kind of disruption.

But that kind of disruption will happen?

I think eventually, sure. I mean, in the limit, with AGI, you know, if you have systems like that, I think that changes the whole economy, actually. It's way beyond the question of jobs. Potentially, if we build it right, we're in a post-scarcity world, where we solve some of the kind of fundamental root-node problems of the world, like energy: new, clean, renewable, basically free energy sources, say we solve fusion, something like that, with the help of AI; new materials. I think, you know, five, ten years past AGI, we'll be in a radically abundant world. And so what does that mean for the economy and how society works, actually?

Before we get to a post-scarcity world, though, if we get there, there is so much anxiety about what happens in between.
You know, I'm a mom; I know you have kids. What scares you most for them? What do you talk to them about? What do you tell them is coming? I mean, you know, I've just heard so many "oh my gosh, college graduates are going to have such a hard time."

Well, I don't know about that. Look, I think it's going to be an age of disruption, just like the industrial revolution was, maybe 10x that, which is kind of unbelievable to think about, and 10 times faster. So I usually describe it as: it's going to be 10 times bigger and 10 times faster than the industrial revolution.

To your kids?

No, 100x.

Okay, so 100x of it.

No, I say this to everyone. But I think that comes with huge opportunities. And I'm also just a big believer in human ingenuity. We're extremely adaptable because our minds are so general. You know, the human mind is very general. We've adapted; look at the modern world around us. Our hunter-gatherer minds have managed to build modern civilization. So I think we'll adapt again. I think it's a little bit unprecedented because of the speed of it; usually it takes one or two generations for a transformation like this to happen, given the magnitude of the transformative power of this technology. But for the kids today, you know, I'd be encouraging them to get incredibly proficient with these new tools, and native with them. It's almost the equivalent of giving them superpowers. You know, in the creative arts, you could probably do the job of what would have taken 10 people on your own. And I think that means, you know, if you're entrepreneurial, if you're creative with game design, films, projects, you can probably get a lot more done and break into those industries a lot more easily than you could in the past as, you know, a new up-and-coming talent.

Some folks have advocated for a pause, to give regulation time to catch up, to give society time to sort of adjust to some of these changes. In a perfect world, if you knew that every other company would pause, if every country would pause, would you advocate for that?

I think so. I mean, I've been on record saying what I'd like to see happen. It was always my dream, the kind of roadmap I had, at least, when I started DeepMind 15 years ago and started working on AI, you know, 25 years ago now, that as we got close to this threshold moment of AGI arriving, we would maybe collaborate in a scientific way. I sometimes talk about setting up an international CERN equivalent for AI, where all the best minds in the world would collaborate together and do the final steps in a very rigorous, scientific way, involving all of society, maybe philosophers and social scientists and economists as well as technologists, to kind of figure out what we want from this technology and how to utilize it in a way that benefits all of humanity. And I think that's what's at stake. Unfortunately, it kind of needs international collaboration, because even if one company, or even one nation, or even the West decided to do that, it has no use unless the whole world agrees at least on some kind of minimum standards. And, you know, international cooperation is a little bit tricky at the moment. So that's going to have to change if we want to have that kind of rigorous scientific approach to the final steps to AGI.

So if AGI comes in, let's say, 2030, and we don't have the regulation set up yet, are we destined for something difficult?
Well, then, you know, I think I'm still optimistic that enough of the leading players will kind of communicate together and hopefully collaborate, at least on things like safety and security protocols. There's a lot of that already; we work quite closely with Anthropic, for example, on those things. And that would be needed then, maybe kind of more peer-based cooperation, if we can't get that international thing to work. But, you know, that would be a lot more pressure.

Elon and Sam, to cooperate with you?

Potentially. You know, I think I'm on pretty good terms with pretty much all the other leaders of all the leading labs. I mean, I think if the stakes are high enough, and a lot of it is understanding what's at stake and what the risks are. I think that will become clearer to everyone in the next two, three years.

So let's talk about the technology and the next curve. Yann LeCun has said he doesn't think transformers and LLMs alone will get us to AGI. Do you agree or disagree? And, you know, if they're dead ends, then what are we doing?

No, I disagree that they're dead ends; I think that's clearly wrong. I mean, they're so amazingly useful already. But the way I'd put it is that it's an empirical question, a scientific question, whether they're going to be enough on their own. I think it's 50-50 that just scaling up existing methods with some tweaks will be enough. It might be, and you have to do that. And I think that's useful work, because at a minimum, the way I look at it, these LLMs will be a component, a massively important component, of the final AGI system. The only question in my mind is: is it the only component?

Right.

And I could imagine there are one or two breakthroughs, maybe a small handful, less than five, that are still needed from here.

Less than five.

Yes. And those might be things like world models. That's something Yann has talked about. We're working on that; in fact, we have the best world model currently, which is Genie, our Genie system. I work on that directly, and I think it's very important. But also things like continual learning, and having consistent systems that don't have these jagged edges, things they're good at and not good at. A general system shouldn't have that. So better reasoning, more long-term planning; there are quite a few capabilities that are still missing. And it's an open question whether a new architecture or new breakthrough is needed, or more of the same. From my point of view, from Google DeepMind's point of view, we're pushing as hard as possible on both of those things: both inventing new things and scaling up existing things.

So slightly different but related: Ilya Sutskever said the era of scaling, where making bigger models yields improvements, is nearly over. Is that something you agree with?

No, I don't agree. I think his exact quote was, "We're back to the age of research." And, you know, I love Ilya, and we're very good friends, and we agree on a lot of things. But my view is we never left the age of research, at least from the point of view of DeepMind. We've always invested. In my view, we've always had the deepest and broadest bench, actually, Google and DeepMind together. If you look over the last decade, we've invented about 90% of the breakthroughs that the modern industry relies on. Of course Transformers, most famously, but also deep reinforcement learning, AlphaGo, these kinds of reinforcement learning techniques. We pioneered all of that.
So if some new breakthroughs are required in the future, I would back us, just like in the past, to be the ones to make those breakthroughs.

Last agree or disagree: Elon says we have entered the singularity.

No, I think that's very premature. You know, I think the singularity is another word for, you know, full AGI arriving, and I explained earlier why I think we're still, you know, nowhere near that. I think we will get there, and, you know, five years, even five years, is not a long amount of time if you think about what that is. But I think there's still a lot of work to be done before we have anything that looks like the singularity.

So talk to us a little bit about the culture inside Google right now, you know, to win this race but do it right. The leadership: how involved are Larry and Sergey right now? How often do you talk to them, and what are their priorities?

Yeah, they're very involved. You know, Larry is more on the strategic side; I see him at board meetings and other times when I visit the Valley. Sergey is more hands-on. He's involved in the coding on the Gemini team specifically, more in the algorithmic details. And it's great having them both energized around where we are, and who wouldn't be at this moment? It's an absolutely incredible moment for computer science. So just from a scientific point of view, and both of them are scientists, it's an incredibly exciting moment in history, human history, really. And so, of course, everyone wants to be hands-on and heavily involved in that. So that's great. And for us, just as an entity, you know, I'm trying to combine the best of many worlds: the startup energy of shipping things fast and taking risks, which I think you're seeing the benefits of; the big-company resources, which are amazingly useful; but also protecting the space for long-term, exploratory research, not just researching what we'll deliver in, you know, three months in a product. I think that would be a mistake, too. So I'm trying to balance all of those different factors together. And, you know, I think in the last year things have been going well. I think we can still do better, and I think we will do better this year. But I'm very happy with our trajectory; I think it's the steepest trajectory of improvement and progress of anyone in the industry.

You are a Nobel laureate, and I know how obsessed you are with, you know, AI powering scientific research. If AI itself, let's say, makes a Nobel-worthy discovery, who should get the prize, the AI or the human?

I think still the human, I would say, because, I mean, it depends what you mean by "completely on its own," right? For now, these are still tools, and I view them as maybe the ultimate scientific tool, but better versions of telescopes and microscopes. We've always built tools so that we can investigate the natural world better. We're tool-making animals, basically; that's what distinguishes humans from the other animals, and that's our superpower. And I include computers in that, of course, with AI being the ultimate expression of that. So in some ways I think of AI, and I've always thought of it, as the ultimate tool to do science.
And I think for the foreseeable future, that's going to be a collaboration: top scientists putting in the creative ideas and maybe the hypotheses, with these amazing tools that enhance the data processing and pattern matching and the exploration part of science.

You obviously could have sold DeepMind to anyone, and I think all of these companies are asking a lot of us, to trust you, especially if regulation doesn't keep up with technology, which history shows it probably won't. Why should we trust you? And why do you think Google, which I assume you implicitly believe, is the place that we should believe in the most when it comes to something that feels so risky?

Yes, I think you need to judge these companies by their actions, and also look into, you know, the motivations, I would say, of the leaders involved in those endeavors. For me and for us, there are several reasons I picked Google as the right home for DeepMind. The main one being that the founders of Google, and the way Google was set up by them, was as a scientific company. You know, many people forget Google itself was a PhD project, right? It was Larry and Sergey's PhD project. So I felt an immediate affinity with them, and Larry led the acquisition, but also with the board they collected. You have John Hennessy, the chair, who's a Turing Award winner himself, and Frances Arnold, another Nobel Prize winner. These are unusual people to have on a corporate board. So the whole environment is very scientific, research-led and engineering-led, and that's deeply ingrained in the culture. And doing science at the highest level means being really rigorous, being thoughtful, and applying the scientific method everywhere you can, not just to the technology but also to the way you operate as an organization. So I feel like, you know, we try to be very thoughtful and responsible, and to have as much foresight as possible over the technologies we put out in the world. It doesn't mean we'll get everything right, because it's so complex and so new and nascent and so transformative, this technology. But we hope to course-correct as quickly as possible if something does go wrong. And then the final thing I would say is, I was just attracted by the types of things Google tries to do in the world. Organizing the world's information, I think, is a very noble goal, which is obviously Google's mission statement. And I think it fitted very well with DeepMind's mission statement of solving intelligence and using it to solve everything else. Those two mission statements are a natural fit: AI and organizing the world's information naturally go together. And the types of products Google's well known for, from Maps to Gmail to obviously Search, are genuinely useful products in the world, and it's easy to see how AI fits with those products, enhancing them for everybody to use in their everyday lives. I think that's a great thing for the world, so, you know, I'm happy to be contributing to that.

OK, so post-scarcity world: we're there, people no longer have jobs. What do you personally plan to do with your time once you have achieved all your technical goals and the research is just automating itself?

Right. Well, what I would love to do post the singularity is use it for exploring the limits of physics.
I think that was my favorite subject at school: the big questions. You know, what is the fabric of reality? What's the nature of reality? What about the nature of consciousness? The answer to the Fermi paradox, all of these things. What is time? What is gravity? I'm amazed that, you know, we just go around our daily lives not really thinking about these massive questions, which for me are always almost screaming at me: what is the answer to these things, these deep mysteries? And I would like to use AI to explore all of those things, maybe travelling to the stars as well, with the help of new energy sources and materials and other things unlocked by AI.

Will we all have meaning and purpose if we don't have work?

Well, to be honest with you, that's the thing I worry about more than the economics. I think the economics is almost a political question: when we get all of these extra benefits and productivity, can we make sure it's shared for the benefit of everyone? And obviously that's what I believe in. But the bigger question than that is: what about the purpose and meaning that a lot of us get from our jobs and scientific endeavors? How will we find that in the new world? And I think, you know, we all need some new great philosophers, in my opinion, to help with that and think it through. Maybe we'll, you know, be getting much more sophisticated with our art, and the exploration that we do, and things like, you know, extreme sports. There are many things we do today that aren't just for economic gain, and perhaps we'll have very esoteric versions of those things in the future.

All right. So everyone in the room is wondering what they should be doing. Like, what should I do about AI? What do I do? Sitting here in Davos in 10 years, what is the biggest mistake you think people in this room will have made about AI?

Well, look, there are two things I would say. One is for the younger generation, our kids and so on: the only thing we're certain of is that there's going to be a huge amount of change. So in terms of learning skills, learning to learn is the most important thing; how quickly can you adapt to new situations and absorb new information using the tools that we have? For the CEOs and business folks in the room, I think the most important thing to do now is this: there are many providers of leading models and leading services, and there'll be more. You know, pick the partners that you feel are approaching it in the right way. And so, you know, kind of partner with those that are making the changes and approaching this technology in the way that you would like to see in the world. And I think together we can kind of build the future that we all want with the AI coming down the line.