The following is an AI-generated summary and article based on a transcript of the video "How to Govern AI — Even If It’s Hard to Predict | Helen Toner | TED". Due to the limitations of AI, please take care to verify the accuracy of the content.
00:03 | When I talk to people about artificial intelligence, |
00:07 | something I hear a lot from non-experts is “I don’t understand AI.” |
00:13 | But when I talk to experts, a funny thing happens. |
00:16 | They say, “I don’t understand AI, and neither does anyone else.” |
00:21 | This is a pretty strange state of affairs. |
00:24 | Normally, the people building a new technology |
00:28 | understand how it works inside and out. |
00:31 | But for AI, a technology that's radically reshaping the world around us, |
00:36 | that's not so. |
00:37 | Experts do know plenty about how to build and run AI systems, of course. |
00:42 | But when it comes to how they work on the inside, |
00:45 | there are serious limits to how much we know. |
00:48 | And this matters because without deeply understanding AI, |
00:52 | it's really difficult for us to know what it will be able to do next, |
00:56 | or even what it can do now. |
00:59 | And the fact that we have such a hard time understanding |
01:02 | what's going on with the technology and predicting where it will go next, |
01:06 | is one of the biggest hurdles we face in figuring out how to govern AI. |
01:12 | But AI is already all around us, |
01:15 | so we can't just sit around and wait for things to become clearer. |
01:19 | We have to forge some kind of path forward anyway. |
01:24 | I've been working on these AI policy and governance issues |
01:27 | for about eight years, |
01:28 | first in San Francisco, now in Washington, DC. |
01:32 | Along the way, I've gotten an inside look |
01:35 | at how governments are working to manage this technology. |
01:39 | And inside the industry, I've seen a thing or two as well. |
01:45 | So I'm going to share a couple of ideas |
01:49 | for what our path to governing AI could look like. |
01:53 | But first, let's talk about what actually makes AI so hard to understand |
01:57 | and predict. |
01:59 | One huge challenge in building artificial "intelligence" |
02:03 | is that no one can agree on what it actually means |
02:06 | to be intelligent. |
02:09 | This is a strange place to be in when building a new tech. |
02:12 | When the Wright brothers started experimenting with planes, |
02:15 | they didn't know how to build one, |
02:17 | but everyone knew what it meant to fly. |
02:21 | With AI on the other hand, |
02:23 | different experts have completely different intuitions |
02:26 | about what lies at the heart of intelligence. |
02:29 | Is it problem solving? |
02:31 | Is it learning and adaptation? |
02:34 | Are emotions, |
02:36 | or having a physical body, somehow involved? |
02:39 | We genuinely don't know. |
02:41 | But different answers lead to radically different expectations |
02:45 | about where the technology is going and how fast it'll get there. |
02:50 | One example of this confusion is how we used to talk |
02:53 | about narrow versus general AI. |
02:55 | For a long time, we talked in terms of two buckets. |
02:59 | A lot of people thought we should just divide everything into narrow AI, |
03:03 | trained for one specific task, |
03:05 | like recommending the next YouTube video, |
03:08 | versus artificial general intelligence, or AGI, |
03:12 | that could do everything a human could do. |
03:15 | We thought of this distinction, narrow versus general, |
03:18 | as a core divide between what we could build in practice |
03:22 | and what would actually be intelligent. |
03:25 | But then a year or two ago, along came ChatGPT. |
03:31 | If you think about it, |
03:33 | you know, is it narrow AI, trained for one specific task? |
03:36 | Or is it AGI, able to do everything a human can do? |
03:41 | Clearly the answer is neither. |
03:42 | It's certainly general purpose. |
03:44 | It can code, write poetry, |
03:47 | analyze business problems, help you fix your car. |
03:51 | But it's a far cry from being able to do everything |
03:54 | as well as you or I could do it. |
03:56 | So it turns out this idea of generality |
03:58 | doesn't actually seem to be the right dividing line |
04:01 | between intelligent and not. |
04:04 | And this kind of thing |
04:05 | is a huge challenge for the whole field of AI right now. |
04:08 | We don't have any agreement on what we're trying to build |
04:11 | or on what the road map looks like from here. |
04:13 | We don't even clearly understand the AI systems that we have today. |
04:18 | Why is that? |
04:19 | Researchers sometimes describe deep neural networks, |
04:22 | the main kind of AI being built today, |
04:24 | as a black box. |
04:26 | But what they mean by that is not that it's inherently mysterious |
04:29 | and we have no way of looking inside the box. |
04:33 | The problem is that when we do look inside, |
04:35 | what we find are millions, |
04:38 | billions or even trillions of numbers |
04:41 | that get added and multiplied together in a particular way. |
04:45 | What makes it hard for experts to know what's going on |
04:47 | is basically just, there are too many numbers, |
04:50 | and we don't yet have good ways of teasing apart what they're all doing. |
04:54 | There's a little bit more to it than that, but not a lot. |
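To make the "too many numbers" point concrete, here is a toy sketch of what a neural network actually computes: stored numbers (weights) that get multiplied and added with the input. The two-layer network and its random weights below are purely illustrative, not any real model; real systems do the same kind of arithmetic with billions or trillions of weights.

```python
# Toy two-layer neural network: nothing but stored numbers (weights)
# being multiplied and added with the input. Purely illustrative --
# real models repeat this pattern with billions of weights.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first layer's numbers
W2 = rng.normal(size=(2, 4))   # second layer's numbers

def forward(x):
    h = np.maximum(0.0, W1 @ x)   # multiply, add, drop negatives (ReLU)
    return W2 @ h                 # multiply and add again

print(forward(np.array([1.0, 0.5, -0.2])))
```

Every answer a network gives comes out of arithmetic like this; the difficulty researchers describe is that with billions of weights, no one can easily say what any particular number is contributing.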
04:58 | So how do we govern this technology |
05:01 | that we struggle to understand and predict? |
05:04 | I'm going to share two ideas. |
05:06 | One for all of us and one for policymakers. |
05:10 | First, don't be intimidated. |
05:14 | Either by the technology itself |
05:16 | or by the people and companies building it. |
05:20 | On the technology, |
05:21 | AI can be confusing, but it's not magical. |
05:24 | There are some parts of AI systems we do already understand well, |
05:27 | and even the parts we don't understand won't be opaque forever. |
05:31 | An area of research known as “AI interpretability” |
05:34 | has made quite a lot of progress in the last few years |
05:38 | in making sense of what all those billions of numbers are doing. |
05:42 | One team of researchers, for example, |
05:44 | found a way to identify different parts of a neural network |
05:48 | that they could dial up or dial down |
05:50 | to make the AI's answers happier or angrier, |
05:54 | more honest, |
05:55 | more Machiavellian, and so on. |
05:58 | If we can push this kind of research further, |
06:01 | then five or 10 years from now, |
06:03 | we might have a much clearer understanding of what's going on |
06:06 | inside the so-called black box. |
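The "dial up or dial down" idea can be sketched in a few lines. The sketch below assumes, hypothetically, that some direction in a network's hidden activations tracks a trait like happiness, and that adding a scaled copy of that direction steers the behavior. The vectors and the `happy_direction` name are invented for illustration; the interpretability research the talk refers to finds such directions inside real language models with considerably more machinery.

```python
# Hedged sketch of activation steering: if some direction in a
# network's hidden activations tracks a trait, adding a scaled copy
# of it "dials" that trait up or down. All numbers here are invented.
import numpy as np

hidden = np.array([0.2, -1.0, 0.7, 0.1])            # one layer's activations
happy_direction = np.array([0.5, 0.1, -0.3, 0.8])   # hypothetical trait direction

def steer(activations, direction, strength):
    # strength > 0 dials the trait up; strength < 0 dials it down
    return activations + strength * direction

print(steer(hidden, happy_direction, +2.0))  # nudged toward "happier"
print(steer(hidden, happy_direction, -2.0))  # nudged toward "angrier"
```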
06:10 | And when it comes to those building the technology, |
06:13 | technologists sometimes act as though |
06:14 | if you're not elbows deep in the technical details, |
06:18 | then you're not entitled to an opinion on what we should do with it. |
06:22 | Expertise has its place, of course, |
06:24 | but history shows us how important it is |
06:26 | that the people affected by a new technology |
06:29 | get to play a role in shaping how we use it. |
06:32 | Like the factory workers in the 20th century who fought for factory safety, |
06:37 | or the disability advocates |
06:39 | who made sure the world wide web was accessible. |
06:42 | You don't have to be a scientist or engineer to have a voice. |
06:48 | (Applause) |
06:53 | Second, we need to focus on adaptability, not certainty. |
06:59 | A lot of conversations about how to make policy for AI |
07:02 | get bogged down in fights between, on the one side, |
07:05 | people saying, "We have to regulate AI really hard right now |
07:08 | because it's so risky." |
07:10 | And on the other side, people saying, |
07:12 | “But regulation will kill innovation, and those risks are made up anyway.” |
07:16 | But the way I see it, |
07:17 | it’s not just a choice between slamming on the brakes |
07:20 | or hitting the gas. |
07:22 | If you're driving down a road with unexpected twists and turns, |
07:26 | then two things that will help you a lot |
07:28 | are having a clear view out the windshield |
07:31 | and an excellent steering system. |
07:34 | In AI, this means having a clear picture of where the technology is |
07:39 | and where it's going, |
07:40 | and having plans in place for what to do in different scenarios. |
07:44 | Concretely, this means things like investing in our ability to measure |
07:49 | what AI systems can do. |
07:51 | This sounds nerdy, but it really matters. |
07:54 | Right now, if we want to figure out |
07:56 | whether an AI can do something concerning, |
07:58 | like hack critical infrastructure |
08:01 | or persuade someone to change their political beliefs, |
08:05 | our methods of measuring that are rudimentary. |
08:08 | We need better. |
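At its very simplest, "investing in our ability to measure" means running a system over a fixed set of test prompts and scoring the answers. The `dummy_model` and the single test case below are hypothetical stand-ins; real capability evaluations use large, carefully designed test suites and much subtler scoring.

```python
# Minimal sketch of a capability evaluation: run a model over test
# prompts and report the pass rate. Model and tests are placeholders.
def evaluate(model, test_cases):
    passed = 0
    for prompt, check in test_cases:
        answer = model(prompt)
        if check(answer):          # task-specific pass/fail check
            passed += 1
    return passed / len(test_cases)

# Hypothetical usage with a placeholder "model":
dummy_model = lambda prompt: "42"
tests = [("What is 6 * 7?", lambda answer: "42" in answer)]
print(evaluate(dummy_model, tests))  # 1.0
```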
08:10 | We should also be requiring AI companies, |
08:12 | especially the companies building the most advanced AI systems, |
08:16 | to share information about what they're building, |
08:19 | what their systems can do |
08:21 | and how they're managing risks. |
08:23 | And they should have to let in external AI auditors to scrutinize their work |
08:29 | so that the companies aren't just grading their own homework. |
08:33 | (Applause) |
08:38 | A final example of what this can look like |
08:40 | is setting up incident reporting mechanisms, |
08:44 | so that when things do go wrong in the real world, |
08:46 | we have a way to collect data on what happened |
08:49 | and how we can fix it next time. |
08:51 | Just like the data we collect on plane crashes and cyber attacks. |
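As a sketch of what an incident reporting mechanism could collect, here is a minimal record structure, loosely patterned on how aviation and cybersecurity incident databases are organized. Every field name is an illustrative assumption, not an official schema.

```python
# Minimal sketch of an AI incident record; field names are
# illustrative assumptions, not any official reporting schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIIncident:
    reported_on: date
    system: str                # which AI system was involved
    description: str           # what went wrong, in plain language
    harm: str                  # observed real-world impact
    mitigations: list[str] = field(default_factory=list)

report = AIIncident(
    reported_on=date(2024, 1, 15),
    system="example-chat-model",   # hypothetical system name
    description="Gave confidently wrong advice in a safety-critical setting.",
    harm="User acted on the advice before catching the error.",
    mitigations=["Added the failure as an evaluation case"],
)
print(report.system, "-", report.description)
```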
08:57 | None of these ideas are mine, |
08:58 | and some of them are already starting to be implemented in places like Brussels, |
09:03 | London, even Washington. |
09:06 | But the reason I'm highlighting these ideas, |
09:08 | measurement, disclosure, incident reporting, |
09:12 | is that they help us navigate progress in AI |
09:15 | by giving us a clearer view out the windshield. |
09:18 | If AI is progressing fast in dangerous directions, |
09:22 | these policies will help us see that. |
09:25 | And if everything is going smoothly, they'll show us that too, |
09:28 | and we can respond accordingly. |
09:33 | What I want to leave you with |
09:35 | is that it's both true that there's a ton of uncertainty |
09:39 | and disagreement in the field of AI, |
09:42 | and that companies are already building and deploying AI |
09:46 | all over the place anyway in ways that affect all of us. |
09:52 | Left to their own devices, |
09:53 | it looks like AI companies might go in a similar direction |
09:56 | to social media companies, |
09:58 | spending most of their resources on building web apps |
10:01 | and competing for users' attention. |
10:04 | And by default, it looks like the enormous power of more advanced AI systems |
10:08 | might stay concentrated in the hands of a small number of companies, |
10:12 | or even a small number of individuals. |
10:15 | But AI's potential goes so far beyond that. |
10:18 | AI already lets us leap over language barriers |
10:21 | and predict protein structures. |
10:23 | More advanced systems could unlock clean, limitless fusion energy |
10:28 | or revolutionize how we grow food |
10:30 | or 1,000 other things. |
10:32 | And we each have a voice in what happens. |
10:35 | We're not just data sources, |
10:37 | we are users, |
10:39 | we're workers, |
10:41 | we're citizens. |
10:43 | So as tempting as it might be, |
10:46 | we can't wait for clarity or expert consensus |
10:51 | to figure out what we want to happen with AI. |
10:54 | AI is already happening to us. |
10:57 | What we can do is put policies in place |
11:00 | to give us as clear a picture as we can get |
11:03 | of how the technology is changing, |
11:06 | and then we can get in the arena and push for futures we actually want. |
11:11 | Thank you. |
11:12 | (Applause) |