Article by AI based on YouTube video transcript: How to Govern AI — Even If It’s Hard to Predict | Helen Toner | TED

The following is a summary and article generated by AI from the transcript of the video "How to Govern AI — Even If It’s Hard to Predict | Helen Toner | TED". Due to the limitations of AI, please verify the accuracy of the content.

Summary by AI based on YouTube video transcript

In this video, Helen Toner, an expert in AI policy and governance, addresses the widespread confusion and lack of understanding surrounding artificial intelligence (AI), even among experts. She explains that this uncertainty makes it challenging to predict AI's future capabilities and to develop effective governance strategies. Toner discusses the difficulty in defining intelligence and the limitations of understanding AI's complex inner workings. She proposes two key ideas for governing AI: overcoming intimidation by the technology and its builders, and focusing on adaptability rather than certainty in policy-making. She also emphasizes the importance of AI interpretability research, better measurement of AI capabilities, transparency from AI companies, and incident reporting mechanisms to guide the future of AI development.

Article by AI based on YouTube video transcript

Understanding and Governing Artificial Intelligence

When I talk to people about artificial intelligence (AI), something I hear a lot from non-experts is, “I don’t understand AI.” But when I talk to experts, a funny thing happens. They say, “I don’t understand AI, and neither does anyone else.” This is a pretty strange state of affairs. Normally, the people building a new technology understand how it works inside and out. But for AI, a technology that's radically reshaping the world around us, that's not so.

Experts do know plenty about how to build and run AI systems, of course. But when it comes to how they work on the inside, there are serious limits to how much we know. And this matters because without deeply understanding AI, it's really difficult for us to know what it will be able to do next, or even what it can do now. The fact that we have such a hard time understanding what's going on with the technology and predicting where it will go next is one of the biggest hurdles we face in figuring out how to govern AI.

The Challenge of AI Governance

But AI is already all around us, so we can't just sit around and wait for things to become clearer. We have to forge some kind of path forward anyway. I've been working on these AI policy and governance issues for about eight years, first in San Francisco, now in Washington, DC. Along the way, I've gotten an inside look at how governments are working to manage this technology. And inside the industry, I've seen a thing or two as well. So I'm going to share a couple of ideas for what our path to governing AI could look like.

The Complexity of Defining Intelligence

But first, let's talk about what actually makes AI so hard to understand and predict. One huge challenge in building artificial "intelligence" is that no one can agree on what it actually means to be intelligent. This is a strange place to be in when building a new technology. When the Wright brothers started experimenting with planes, they didn't know how to build one, but everyone knew what it meant to fly. With AI, on the other hand, different experts have completely different intuitions about what lies at the heart of intelligence. Is it problem-solving? Is it learning and adaptation? Are emotions, or having a physical body, somehow involved? We genuinely don't know.

But different answers lead to radically different expectations about where the technology is going and how fast it'll get there. An example of how confused we are is how we used to talk about narrow versus general AI. For a long time, we talked in terms of two buckets. A lot of people thought we could just divide everything between narrow AI, trained for one specific task, like recommending the next YouTube video, and artificial general intelligence, or AGI, which could do everything a human could do.

The Evolution of AI Understanding

We thought of this distinction, narrow versus general, as a core divide between what we could build in practice and what would actually be intelligent. But then a year or two ago, along came ChatGPT. If you think about it, is it narrow AI, trained for one specific task? Or is it AGI, able to do everything a human can do? Clearly the answer is neither. It's certainly general purpose. It can code, write poetry, analyze business problems, help you fix your car. But it's a far cry from being able to do everything as well as you or I could do it.

So it turns out this idea of generality doesn't actually seem to be the right dividing line between intelligent and not. And this kind of thing is a huge challenge for the whole field of AI right now. We don't have any agreement on what we're trying to build or on what the road map looks like from here. We don't even clearly understand the AI systems that we have today.

The Black Box of AI Systems

Why is that? Researchers sometimes describe deep neural networks, the main kind of AI being built today, as a black box. But what they mean by that is not that it's inherently mysterious and we have no way of looking inside the box. The problem is that when we do look inside, what we find are millions, billions or even trillions of numbers that get added and multiplied together in a particular way.

What makes it hard for experts to know what's going on is basically just that there are too many numbers, and we don't yet have good ways of teasing apart what they're all doing. There's a little bit more to it than that, but not a lot.
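To make the "too many numbers" point concrete, here is a minimal sketch, in Python with NumPy, of the arithmetic a single neural-network layer performs. The layer size, weights, and input are illustrative placeholders, not taken from any real model.

```python
import numpy as np

# A toy "layer" of a deep neural network. All of the model's learned
# knowledge lives in arrays of ordinary numbers (weights and biases).
rng = np.random.default_rng(0)
weights = rng.normal(size=(1024, 1024))  # real models hold billions or trillions of such numbers
biases = rng.normal(size=1024)

def layer(inputs):
    # Everything the layer "does" is multiply, add, and clip at zero (a ReLU).
    return np.maximum(inputs @ weights + biases, 0.0)

activations = layer(rng.normal(size=1024))
# Staring at these numbers tells us almost nothing about which concepts,
# if any, they represent. That is the sense in which the system is a
# "black box": we can look inside, but what we see is hard to interpret.
```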

Governing AI: Overcoming Intimidation and Seeking Adaptability

So how do we govern this technology that we struggle to understand and predict? I'm going to share two ideas. One for all of us and one for policymakers.

First, don't be intimidated, either by the technology itself or by the people and companies building it. On the technology: AI can be confusing, but it's not magical. There are some parts of AI systems we do already understand well, and even the parts we don't understand won't be opaque forever.

An area of research known as “AI interpretability” has made quite a lot of progress in the last few years in making sense of what all those billions of numbers are doing. One team of researchers, for example, found a way to identify different parts of a neural network that they could dial up or dial down to make the AI's answers happier or angrier, more honest, more Machiavellian, and so on.
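The talk doesn't name the specific method, but one published line of interpretability work, often called activation steering, operates roughly as sketched below: estimate a direction in the network's hidden-state space from contrasting examples, then add or subtract a scaled copy of that direction. This toy NumPy version is purely illustrative; the random arrays stand in for real model activations.

```python
import numpy as np

# Toy sketch of activation steering: find a direction in hidden-state space
# associated with a trait (say, honesty), then dial it up or down by adding
# a scaled copy of that direction to the model's activations.
rng = np.random.default_rng(1)
hidden_size = 512

# Stand-ins for hidden states collected from contrasting prompts.
honest_acts = rng.normal(loc=0.2, size=(100, hidden_size))
dishonest_acts = rng.normal(loc=-0.2, size=(100, hidden_size))

# The "honesty direction" is the difference between the two averages.
direction = honest_acts.mean(axis=0) - dishonest_acts.mean(axis=0)

def steer(hidden_state, strength):
    # strength > 0 dials the trait up; strength < 0 dials it down.
    return hidden_state + strength * direction

steered = steer(rng.normal(size=hidden_size), strength=2.0)
```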

If we can push forward this kind of research further, then five or 10 years from now, we might have a much clearer understanding of what's going on inside the so-called black box. And when it comes to those building the technology, technologists sometimes act as though if you're not elbows deep in the technical details, then you're not entitled to an opinion on what we should do with it.

Expertise has its place, of course, but history shows us how important it is that the people affected by a new technology get to play a role in shaping how we use it. Like the factory workers in the 20th century who fought for factory safety, or the disability advocates who made sure the world wide web was accessible. You don't have to be a scientist or engineer to have a voice.

Second, we need to focus on adaptability, not certainty. A lot of conversations about how to make policy for AI get bogged down in fights between, on the one side, people saying, "We have to regulate AI really hard right now because it's so risky." And on the other side, people saying, "But regulation will kill innovation, and those risks are made up anyway."

But the way I see it, it’s not just a choice between slamming on the brakes or hitting the gas. If you're driving down a road with unexpected twists and turns, then two things that will help you a lot are having a clear view out the windshield and an excellent steering system. In AI, this means having a clear picture of where the technology is and where it's going, and having plans in place for what to do in different scenarios.

Concrete Steps for AI Policy

Concretely, this means things like investing in our ability to measure what AI systems can do. This sounds nerdy, but it really matters. Right now, if we want to figure out whether an AI can do something concerning, like hack critical infrastructure or persuade someone to change their political beliefs, our methods of measuring that are rudimentary. We need better ones.
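As a rough picture of what measuring AI capabilities involves at its simplest, here is a sketch of an evaluation loop. The test cases, the stand-in model, and the substring-matching grader are all hypothetical placeholders; real capability evaluations require far more careful design.

```python
from typing import Callable

# Hand-written test cases; real benchmarks contain thousands of
# carefully designed and validated items.
test_cases = [
    {"prompt": "What is 17 * 24?", "expected": "408"},
    {"prompt": "What is the capital of Australia?", "expected": "Canberra"},
]

def evaluate(model: Callable[[str], str]) -> float:
    # Run the model on each prompt and grade by simple substring matching.
    correct = 0
    for case in test_cases:
        answer = model(case["prompt"])
        if case["expected"].lower() in answer.lower():
            correct += 1
    return correct / len(test_cases)

# Example with a trivial stand-in "model" that always gives the same answer.
score = evaluate(lambda prompt: "I think the answer is 408.")
print(f"Accuracy: {score:.0%}")  # prints: Accuracy: 50%
```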

We should also be requiring AI companies, especially the companies building the most advanced AI systems, to share information about what they're building, what their systems can do and how they're managing risks. And they should have to let in external AI auditors to scrutinize their work so that the companies aren't just grading their own homework.

A final example of what this can look like is setting up incident reporting mechanisms, so that when things do go wrong in the real world, we have a way to collect data on what happened and how we can fix it next time. Just like the data we collect on plane crashes and cyber attacks.
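To illustrate what an incident reporting mechanism might collect, here is a sketch of a structured report record, loosely modeled on how aviation and cybersecurity incidents are logged. The field names and example values are hypothetical, not drawn from any existing reporting standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentReport:
    # Illustrative fields only; not taken from any existing standard.
    reported_on: date
    system: str                # which AI system was involved
    deployment_context: str    # where and how it was being used
    description: str           # what went wrong
    harm_observed: str         # who or what was affected, and how
    mitigations: list[str] = field(default_factory=list)

report = IncidentReport(
    reported_on=date(2024, 6, 1),
    system="example-chat-model",
    deployment_context="customer-support chatbot",
    description="Gave confidently incorrect medication dosage advice.",
    harm_observed="User was misled; no lasting harm reported.",
    mitigations=["Added dosage topics to refusal policy", "New evaluation cases"],
)
```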

None of these ideas are mine, and some of them are already starting to be implemented in places like Brussels, London, even Washington. But the reason I'm highlighting these ideas (measurement, disclosure, incident reporting) is that they help us navigate progress in AI by giving us a clearer view out the windshield.

If AI is progressing fast in dangerous directions, these policies will help us see that. And if everything is going smoothly, they'll show us that too, and we can respond accordingly.

The Importance of Active Participation

What I want to leave you with is that it's both true that there's a ton of uncertainty and disagreement in the field of AI, and that companies are already building and deploying AI all over the place anyway, in ways that affect all of us. Left to their own devices, it looks like AI companies might go in a similar direction to social media companies, spending most of their resources on building web apps and competing for users' attention.

And by default, it looks like the enormous power of more advanced AI systems might stay concentrated in the hands of a small number of companies, or even a small number of individuals. But AI's potential goes so far beyond that. AI already lets us leap over language barriers and predict protein structures. More advanced systems could unlock clean, limitless fusion energy or revolutionize how we grow food or 1,000 other things.

And we each have a voice in what happens. We're not just data sources, we are users, we're workers, we're citizens. So as tempting as it might be, we can't wait for clarity or expert consensus to figure out what we want to happen with AI. AI is already happening to us. What we can do is put policies in place to give us as clear a picture as we can get of how the technology is changing, and then we can get in the arena and push for futures we actually want.

Thank you.

Notes

That concludes the content of the transcript for the video 'How to Govern AI — Even If It’s Hard to Predict | Helen Toner | TED'. AI was used to organize the transcript content and write the summary.
