The following is an AI-generated summary and article based on a transcript of the video "What Is an AI Anyway? | Mustafa Suleyman | TED". Given the limitations of AI, please verify the accuracy of the content for yourself.
00:04 | I want to tell you what I see coming.
00:07 | I've been lucky enough to be working on AI for almost 15 years now. |
00:12 | Back when I started, to describe it as fringe would be an understatement. |
00:17 | Researchers would say, “No, no, we’re only working on machine learning.” |
00:21 | Because working on AI was seen as way too out there. |
00:25 | In 2010, just the very mention of the phrase “AGI,” |
00:29 | artificial general intelligence, |
00:31 | would get you some seriously strange looks |
00:34 | and even a cold shoulder. |
00:36 | "You're actually building AGI?" people would say. |
00:40 | "Isn't that something out of science fiction?" |
00:42 | People thought it was 50 years away or 100 years away, |
00:45 | if it was even possible at all. |
00:47 | Talk of AI was, I guess, kind of embarrassing. |
00:51 | People generally thought we were weird. |
00:54 | And I guess in some ways we kind of were. |
00:56 | It wasn't long, though, before AI started beating humans |
00:59 | at a whole range of tasks |
01:01 | that people previously thought were way out of reach. |
01:05 | Understanding images, |
01:07 | translating languages, |
01:09 | transcribing speech, |
01:10 | playing Go and chess |
01:12 | and even diagnosing diseases. |
01:15 | People started waking up to the fact |
01:17 | that AI was going to have an enormous impact, |
01:21 | and they were rightly asking technologists like me |
01:23 | some pretty tough questions. |
01:25 | Is it true that AI is going to solve the climate crisis? |
01:29 | Will it make personalized education available to everyone? |
01:32 | Does it mean we'll all get universal basic income |
01:35 | and we won't have to work anymore? |
01:37 | Should I be afraid? |
01:38 | What does it mean for weapons and war? |
01:41 | And of course, will China win? |
01:43 | Are we in a race? |
01:45 | Are we headed for a mass misinformation apocalypse? |
01:49 | All good questions. |
01:51 | But it was actually a simpler |
01:53 | and much more kind of fundamental question that left me puzzled. |
01:58 | One that actually gets to the very heart of my work every day. |
02:03 | One morning over breakfast, |
02:05 | my six-year-old nephew Caspian was playing with Pi, |
02:09 | the AI I created at my last company, Inflection. |
02:12 | With a mouthful of scrambled eggs, |
02:14 | he looked at me plain in the face and said, |
02:17 | "But Mustafa, what is an AI anyway?" |
02:21 | He's such a sincere and curious and optimistic little guy. |
02:25 | He'd been talking to Pi about how cool it would be if one day in the future, |
02:29 | he could visit dinosaurs at the zoo. |
02:32 | And how he could make infinite amounts of chocolate at home. |
02:35 | And why Pi couldn’t yet play I Spy. |
02:39 | "Well," I said, "it's a clever piece of software |
02:42 | that's read most of the text on the open internet, |
02:44 | and it can talk to you about anything you want." |
02:48 | "Right. |
02:49 | So like a person then?" |
02:54 | I was stumped. |
02:56 | Genuinely left scratching my head. |
03:00 | All my boring stock answers came rushing through my mind. |
03:04 | "No, but AI is just another general-purpose technology, |
03:07 | like printing or steam." |
03:09 | "It will be a tool that will augment us
03:11 | and make us smarter and more productive. |
03:14 | And when it gets better over time, |
03:16 | it'll be like an all-knowing oracle |
03:18 | that will help us solve grand scientific challenges." |
03:22 | You know, all of these responses started to feel, I guess, |
03:25 | a little bit defensive. |
03:28 | And actually better suited to a policy seminar |
03:30 | than breakfast with a no-nonsense six-year-old. |
03:33 | "Why am I hesitating?" I thought to myself. |
03:37 | You know, let's be honest. |
03:39 | My nephew was asking me a simple question |
03:43 | that those of us in AI just don't confront often enough. |
03:48 | What is it that we are actually creating? |
03:51 | What does it mean to make something totally new, |
03:55 | fundamentally different to any invention that we have known before? |
04:00 | It is clear that we are at an inflection point |
04:03 | in the history of humanity. |
04:06 | On our current trajectory, |
04:08 | we're headed towards the emergence of something |
04:10 | that we are all struggling to describe, |
04:13 | and yet we cannot control what we don't understand. |
04:19 | And so the metaphors, |
04:21 | the mental models, |
04:22 | the names, these all matter |
04:25 | if we’re to get the most out of AI whilst limiting its potential downsides. |
04:30 | As someone who embraces the possibilities of this technology, |
04:33 | but who's also always cared deeply about its ethics, |
04:37 | we should, I think, |
04:38 | be able to easily describe what it is we are building. |
04:41 | And that includes describing it to the six-year-olds.
04:44 | So it's in that spirit that I offer up today the following metaphor |
04:48 | for helping us to try to grapple with what this moment really is. |
04:52 | I think AI should best be understood |
04:55 | as something like a new digital species. |
05:00 | Now, don't take this too literally, |
05:02 | but I predict that we'll come to see them as digital companions, |
05:07 | new partners in the journeys of all our lives. |
05:10 | Whether you think we’re on a 10-, 20- or 30-year path here, |
05:14 | this is, in my view, the most accurate and most fundamentally honest way |
05:19 | of describing what's actually coming. |
05:22 | And above all, it enables everybody to prepare for |
05:26 | and shape what comes next. |
05:29 | Now I totally get, this is a strong claim, |
05:31 | and I'm going to explain to everyone as best I can why I'm making it. |
05:36 | But first, let me just try to set the context. |
05:39 | From the very first microscopic organisms, |
05:42 | life on Earth stretches back billions of years. |
05:45 | Over that time, life evolved and diversified. |
05:49 | Then a few million years ago, something began to shift. |
05:54 | After countless cycles of growth and adaptation, |
05:57 | one of life’s branches began using tools, and that branch grew into us. |
06:04 | We went on to produce a mesmerizing variety of tools, |
06:08 | at first slowly and then with astonishing speed, |
06:12 | we went from stone axes and fire |
06:16 | to language, writing and eventually industrial technologies. |
06:21 | One invention unleashed a thousand more. |
06:25 | And in time, we became Homo technologicus.
06:29 | Around 80 years ago, |
06:30 | another new branch of technology began. |
06:33 | With the invention of computers, |
06:35 | we quickly jumped from the first mainframes and transistors |
06:39 | to today's smartphones and virtual-reality headsets. |
06:42 | Information, knowledge, communication, computation. |
06:47 | In this revolution, |
06:49 | creation has exploded like never before. |
06:53 | And now a new wave is upon us. |
06:55 | Artificial intelligence. |
06:57 | These waves of history are clearly speeding up, |
07:00 | as each one is amplified and accelerated by the last. |
07:05 | And if you look back, |
07:06 | it's clear that we are in the fastest |
07:08 | and most consequential wave ever. |
07:11 | The journeys of humanity and technology are now deeply intertwined. |
07:16 | In just 18 months, |
07:18 | over a billion people have used large language models. |
07:21 | We've witnessed one landmark event after another. |
07:25 | Just a few years ago, people said that AI would never be creative. |
07:30 | And yet AI now feels like an endless river of creativity, |
07:34 | making poetry and images and music and video that stretch the imagination. |
07:39 | People said it would never be empathetic. |
07:42 | And yet today, millions of people enjoy meaningful conversations with AIs, |
07:47 | talking about their hopes and dreams |
07:49 | and helping them work through difficult emotional challenges. |
07:53 | AIs can now drive cars, |
07:55 | manage energy grids |
07:57 | and even invent new molecules. |
07:59 | Just a few years ago, each of these was impossible. |
08:03 | And all of this is turbocharged by spiraling exponentials of data |
08:09 | and computation. |
08:10 | Last year, Inflection 2.5, our latest model,
08:16 | used five billion times more computation |
08:20 | than the DeepMind AI that beat the old-school Atari games |
08:24 | just over 10 years ago. |
08:26 | That's nine orders of magnitude more computation. |
08:30 | 10x per year, |
08:31 | every year for almost a decade. |
08:34 | Over the same time, the size of these models has grown |
08:37 | from first tens of millions of parameters to then billions of parameters, |
08:41 | and very soon, tens of trillions of parameters. |
08:45 | If someone did nothing but read 24 hours a day for their entire life, |
08:50 | they'd consume eight billion words. |
08:53 | And of course, that's a lot of words. |
08:55 | But today, the most advanced AIs consume more than eight trillion words |
09:01 | in a single month of training. |
09:03 | And all of this is set to continue. |
09:05 | The long arc of technological history is now in an extraordinary new phase. |
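As a rough sanity check of the figures quoted above (a back-of-the-envelope editorial aside, assuming a reading speed of about 200 words per minute and roughly 75 years of nonstop reading, figures not given in the talk), the numbers do hang together:

```latex
% Rough sanity checks of the figures quoted in the talk.
% Assumptions (not from the talk): ~200 words/minute, ~75 years of nonstop reading.

% A five-billion-fold increase in compute is about 9.7 orders of magnitude,
% i.e. roughly a decade of 10x annual growth:
\[
  \log_{10}\!\left(5 \times 10^{9}\right) \approx 9.7
  \quad\Longrightarrow\quad
  \text{about ten consecutive years of } 10\times \text{ growth.}
\]

% A lifetime of round-the-clock reading lands near eight billion words:
\[
  200 \,\tfrac{\text{words}}{\text{min}} \times 60 \times 24 \times 365 \times 75 \,\text{years}
  \approx 7.9 \times 10^{9} \ \text{words}.
\]
```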
09:12 | So what does this mean in practice? |
09:15 | Well, just as the internet gave us the browser |
09:18 | and the smartphone gave us apps, |
09:20 | the cloud-based supercomputer is ushering in a new era |
09:24 | of ubiquitous AIs. |
09:27 | Everything will soon be represented by a conversational interface. |
09:32 | Or, to put it another way, a personal AI. |
09:35 | And these AIs will be infinitely knowledgeable, |
09:38 | and soon they'll be factually accurate and reliable. |
09:42 | They'll have near-perfect IQ. |
09:44 | They’ll also have exceptional EQ. |
09:47 | They’ll be kind, supportive, empathetic. |
09:53 | These elements on their own would be transformational. |
09:55 | Just imagine if everybody had a personalized tutor in their pocket |
09:59 | and access to low-cost medical advice. |
10:02 | A lawyer and a doctor, |
10:04 | a business strategist and coach -- |
10:06 | all in your pocket 24 hours a day. |
10:08 | But things really start to change when they develop what I call AQ, |
10:13 | their “actions quotient.” |
10:15 | This is their ability to actually get stuff done |
10:18 | in the digital and physical world. |
10:20 | And before long, it won't just be people that have AIs. |
10:24 | Strange as it may sound, every organization, |
10:27 | from small business to nonprofit to national government, |
10:30 | each will have their own. |
10:32 | Every town, building and object |
10:35 | will be represented by a unique interactive persona. |
10:39 | And these won't just be mechanistic assistants. |
10:42 | They'll be companions, confidants, |
10:46 | colleagues, friends and partners, |
10:48 | as varied and unique as we all are. |
10:52 | At this point, AIs will convincingly imitate humans at most tasks. |
10:57 | And we'll feel this at the most intimate of scales. |
11:00 | An AI organizing a community get-together for an elderly neighbor. |
11:04 | A sympathetic expert helping you make sense of a difficult diagnosis. |
11:09 | But we'll also feel it at the largest scales. |
11:12 | Accelerating scientific discovery, |
11:14 | autonomous cars on the roads, |
11:16 | drones in the skies. |
11:18 | They'll both order the takeout and run the power station. |
11:22 | They’ll interact with us and, of course, with each other. |
11:26 | They'll speak every language, |
11:28 | take in every pattern of sensor data, |
11:31 | sights, sounds, |
11:33 | streams and streams of information, |
11:35 | far surpassing what any one of us could consume in a thousand lifetimes. |
11:40 | So what is this? |
11:42 | What are these AIs? |
11:46 | If we are to prioritize safety above all else, |
11:51 | to ensure that this new wave always serves and amplifies humanity, |
11:56 | then we need to find the right metaphors for what this might become. |
12:01 | For years, we in the AI community, and I specifically, |
12:06 | have had a tendency to refer to this as just tools. |
12:11 | But that doesn't really capture what's actually happening here. |
12:14 | AIs are clearly more dynamic, |
12:17 | more ambiguous, more integrated |
12:19 | and more emergent than mere tools, |
12:22 | which are entirely subject to human control. |
12:25 | So to contain this wave, |
12:28 | to put human agency at its center |
12:31 | and to mitigate the inevitable unintended consequences |
12:33 | that are likely to arise, |
12:35 | we should start to think about them as we might a new kind of digital species. |
12:41 | Now it's just an analogy, |
12:42 | it's not a literal description, and it's not perfect. |
12:46 | For a start, they clearly aren't biological in any traditional sense, |
12:50 | but just pause for a moment |
12:52 | and really think about what they already do. |
12:55 | They communicate in our languages. |
12:58 | They see what we see. |
13:00 | They consume unimaginably large amounts of information. |
13:04 | They have memory. |
13:06 | They have personality. |
13:09 | They have creativity. |
13:12 | They can even reason to some extent and formulate rudimentary plans. |
13:16 | They can act autonomously if we allow them. |
13:20 | And they do all this at levels of sophistication |
13:22 | that are far beyond anything that we've ever known from a mere tool.
13:27 | And so saying AI is mainly about the math or the code |
13:32 | is like saying we humans are mainly about carbon and water. |
13:37 | It's true, but it completely misses the point. |
13:42 | And yes, I get it, this is a super arresting thought |
13:46 | but I honestly think this frame helps sharpen our focus on the critical issues. |
13:52 | What are the risks? |
13:55 | What are the boundaries that we need to impose? |
13:59 | What kind of AI do we want to build or allow to be built? |
14:04 | This is a story that's still unfolding. |
14:06 | Nothing should be accepted as a given. |
14:09 | We all must choose what we create. |
14:12 | What AIs we bring into the world, or not. |
14:18 | These are the questions for all of us here today, |
14:21 | and all of us alive at this moment. |
14:24 | For me, the benefits of this technology are stunningly obvious, |
14:28 | and they inspire my life's work every single day. |
14:33 | But quite frankly, they'll speak for themselves. |
14:37 | Over the years, I've never shied away from highlighting risks |
14:40 | and talking about downsides. |
14:43 | Thinking in this way helps us focus on the huge challenges |
14:46 | that lie ahead for all of us. |
14:48 | But let's be clear. |
14:50 | There is no path to progress |
14:52 | where we leave technology behind. |
14:55 | The prize for all of civilization is immense. |
15:00 | We need solutions in health care and education, and to our climate crisis.
15:03 | And if AI delivers just a fraction of its potential, |
15:07 | the next decade is going to be the most productive in human history. |
15:13 | Here's another way to think about it. |
15:15 | In the past, |
15:17 | unlocking economic growth often came with huge downsides. |
15:21 | The economy expanded as people discovered new continents |
15:25 | and opened up new frontiers. |
15:28 | But they colonized populations at the same time. |
15:32 | We built factories, |
15:34 | but they were grim and dangerous places to work. |
15:38 | We struck oil, |
15:39 | but we polluted the planet. |
15:42 | Now because we are still designing and building AI, |
15:45 | we have the potential and opportunity to do it better, |
15:49 | radically better. |
15:51 | And today, we're not discovering a new continent |
15:53 | and plundering its resources. |
15:56 | We're building one from scratch. |
15:58 | Sometimes people say that data or chips are the 21st century’s new oil, |
16:03 | but that's totally the wrong image. |
16:06 | AI is to the mind |
16:08 | what nuclear fusion is to energy. |
16:12 | Limitless, abundant, |
16:14 | world-changing. |
16:17 | And AI really is different, |
16:20 | and that means we have to think about it creatively and honestly. |
16:24 | We have to push our analogies and our metaphors |
16:27 | to the very limits |
16:29 | to be able to grapple with what's coming. |
16:31 | Because this is not just another invention. |
16:34 | AI is itself an infinite inventor. |
16:38 | And yes, this is exciting and promising and concerning |
16:42 | and intriguing all at once. |
16:45 | To be quite honest, it's pretty surreal. |
16:47 | But step back, |
16:49 | see it on the long view of glacial time, |
16:52 | and these really are the very most appropriate metaphors that we have today. |
16:57 | Since the beginning of life on Earth, |
17:00 | we've been evolving, changing |
17:03 | and then creating everything around us in our human world today. |
17:08 | And AI isn't something outside of this story. |
17:11 | In fact, it's the very opposite. |
17:15 | It's the whole of everything that we have created, |
17:18 | distilled down into something that we can all interact with |
17:21 | and benefit from. |
17:23 | It's a reflection of humanity across time, |
17:27 | and in this sense, |
17:28 | it isn't a new species at all. |
17:31 | This is where the metaphors end. |
17:33 | Here's what I'll tell Caspian next time he asks. |
17:37 | AI isn't separate. |
17:39 | AI isn't even, in some senses, new.
17:43 | AI is us. |
17:45 | It's all of us. |
17:47 | And this is perhaps the most promising and vital thing of all |
17:50 | that even a six-year-old can get a sense for. |
17:54 | As we build out AI, |
17:55 | we can and must reflect all that is good, |
17:59 | all that we love, |
18:00 | all that is special about humanity: |
18:03 | our empathy, our kindness, |
18:05 | our curiosity and our creativity. |
18:09 | This, I would argue, is the greatest challenge of the 21st century, |
18:14 | but also the most wonderful, |
18:16 | inspiring and hopeful opportunity for all of us. |
18:20 | Thank you. |
18:21 | (Applause) |
18:26 | Chris Anderson: Thank you Mustafa. |
18:28 | It's an amazing vision and a super powerful metaphor. |
18:32 | You're in an amazing position right now. |
18:34 | I mean, you were connected at the hip |
18:35 | to the amazing work happening at OpenAI. |
18:38 | You’re going to have resources made available, |
18:40 | there are reports of these giant new data centers, |
18:44 | 100 billion dollars invested and so forth. |
18:48 | And a new species can emerge from it. |
18:52 | I mean, in your book, |
18:53 | you did, as well as painting an incredible optimistic vision, |
18:56 | you were super eloquent on the dangers of AI. |
19:00 | And I'm just curious, from the view that you have now, |
19:04 | what is it that most keeps you up at night? |
19:06 | Mustafa Suleyman: I think the great risk is that we get stuck |
19:09 | in what I call the pessimism aversion trap. |
19:11 | You know, we have to have the courage to confront |
19:14 | the potential of dark scenarios |
19:16 | in order to get the most out of all the benefits that we see. |
19:19 | So the good news is that if you look at the last two or three years, |
19:23 | there have been very, very few downsides, right? |
19:26 | It’s very hard to say explicitly what harm an LLM has caused. |
19:31 | But that doesn’t mean that that’s what the trajectory is going to be |
19:34 | over the next 10 years. |
19:35 | So I think if you pay attention to a few specific capabilities, |
19:39 | take for example, autonomy. |
19:41 | Autonomy is very obviously a threshold |
19:43 | over which we increase risk in our society. |
19:46 | And it's something that we should step towards very, very carefully.
19:49 | The other would be something like recursive self-improvement. |
19:52 | If you allow the model to independently self-improve, |
19:56 | update its own code, |
19:57 | explore an environment without oversight, and, you know, |
20:01 | without a human in control to change how it operates, |
20:04 | that would obviously be more dangerous. |
20:06 | But I think that we're still some way away from that. |
20:09 | I think it's still a good five to 10 years before we have to really confront that. |
20:12 | But it's time to start talking about it now. |
20:15 | CA: A digital species, unlike any biological species, |
20:17 | can replicate not in nine months, |
20:19 | but in nine nanoseconds, |
20:21 | and produce an indefinite number of copies of itself, |
20:24 | all of which have more power than we have in many ways. |
20:28 | I mean, the possibility for unintended consequences seems pretty immense. |
20:33 | And isn't it true that if a problem happens, |
20:35 | it could happen in an hour? |
20:37 | MS: No. |
20:38 | That is really not true. |
20:40 | I think there's no evidence to suggest that. |
20:42 | And I think that, you know, |
20:44 | that’s often referred to as the “intelligence explosion.” |
20:47 | And I think it is a theoretical, hypothetical maybe |
20:51 | that we're all kind of curious to explore, |
20:53 | but there's no evidence that we're anywhere near anything like that. |
20:56 | And I think it's very important that we choose our words super carefully. |
21:00 | Because you're right, that's one of the weaknesses of the species framing, |
21:03 | that we will design the capability for self-replication into it |
21:08 | if people choose to do that. |
21:09 | And I would actually argue that we should not, |
21:12 | that would be one of the dangerous capabilities |
21:14 | that we should step back from, right? |
21:16 | So there's no chance that this will "emerge" accidentally. |
21:19 | I really think that's a very low probability. |
21:22 | It will happen if engineers deliberately design those capabilities in. |
21:26 | And if they don't make enough effort to deliberately design them out.
21:30 | And so this is the point of being explicit |
21:32 | and transparent about trying to introduce safety by design very early on. |
21:39 | CA: Thank you, your vision of humanity injecting into this new thing |
21:45 | the best parts of ourselves, |
21:46 | avoiding all those weird, biological, freaky, |
21:49 | horrible tendencies that we can have in certain circumstances, |
21:52 | I mean, that is a very inspiring vision. |
21:54 | And thank you so much for coming here and sharing it at TED. |
21:58 | Thank you, good luck. |
21:59 | (Applause) |