Transcript of YouTube Video: How AI Will Step Off the Screen and into the Real World | Daniela Rus | TED

00:04

When I was a student studying robotics,

00:06

a group of us decided to make a present for our professor's birthday.

00:11

We wanted to program our robot to cut a slice of cake for him.

00:16

We pulled an all-nighter writing the software,

00:19

and the next day, disaster.

00:22

We programmed this robot to cut a soft, round sponge cake,

00:27

but we didn't coordinate well.

00:29

And instead, we received a square hard ice cream cake.

00:34

The robot flailed wildly and nearly destroyed the cake.

00:38

(Laughter)

00:39

Our professor was delighted, anyway.

00:41

He calmly pushed the stop button

00:45

and declared the erratic behavior of the robot

00:48

a control singularity.

00:50

A robotics technical term.

00:52

I was disappointed, but I learned a very important lesson.

00:56

The physical world,

00:58

with its physics laws and imprecisions,

01:01

is a far more demanding space than the digital world.

01:05

Today, I lead MIT's Computer Science and AI lab,

01:09

the largest research unit at MIT.

01:11

This is our building, where I work with brilliant and brave researchers

01:16

to invent the future of computing and intelligent machines.

01:21

Today in computing,

01:22

artificial intelligence and robotics are largely separate fields.

01:27

AI has amazed you with its decision-making and learning,

01:31

but it remains confined inside computers.

01:34

Robots have a physical presence and can execute pre-programmed tasks,

01:39

but they're not intelligent.

01:42

Well, this separation is starting to change.

01:45

AI is about to break free from the 2D computer screen interactions

01:50

and enter a vibrant, physical 3D world.

01:54

In my lab, we're fusing the digital intelligence of AI

01:58

with the mechanical prowess of robots.

02:01

Moving AI from the digital world into the physical world

02:03

is making machines intelligent

02:06

and leading to the next great breakthrough,

02:08

what I call physical intelligence.

02:11

Physical intelligence is when AI's power to understand text,

02:16

images and other online information

02:18

is used to make real-world machines smarter.

02:21

This means AI can help pre-programmed robots do their tasks better

02:27

by using knowledge from data.

02:31

With physical intelligence,

02:32

AI doesn't just reside in our computers,

02:37

but walks, rolls, flies

02:39

and interacts with us in surprising ways.

02:42

Imagine being surrounded by helpful robots at the supermarket.

02:47

The one on the left can help you carry a heavy box.

02:51

To make it happen, we need to do a few things.

02:54

We need to rethink how machines think.

02:57

We need to reorganize how they are designed and how they learn.

03:03

So for physical intelligence,

03:05

AI has to run on computers that fit on the body of the robot.

03:09

For example, our soft robot fish.

03:13

Today's AI uses server farms that do not fit.

03:17

Today's AI also makes mistakes.

03:20

This AI system on a robot car does not detect pedestrians.

03:25

For physical intelligence,

03:27

we need small brains that do not make mistakes.

03:31

We're tackling these challenges using inspiration

03:34

from a worm called C. elegans.

03:37

In sharp contrast to the billions of neurons in the human brain,

03:42

C. elegans has a happy life on only 302 neurons,

03:47

and biologists understand the math of what each of these neurons do.

03:53

So here's the idea.

03:54

Can we build AI using inspiration from the math of these neurons?

04:01

We have developed, together with my collaborators and students,

04:05

a new approach to AI we call “liquid networks.”

04:10

And liquid networks result in much more compact

04:13

and explainable solutions than today's traditional AI solutions.

04:17

Let me show you.

04:19

This is our self-driving car.

04:21

It's trained using a traditional AI solution,

04:24

the kind you find in many applications today.

04:28

This is the dashboard of the car.

04:30

In the lower right corner, you'll see the map.

04:32

In the upper left corner, the camera input stream.

04:35

And the big box in the middle with the blinking lights

04:38

is the decision-making engine.

04:40

It consists of tens of thousands of artificial neurons,

04:44

and it decides how the car should steer.

04:48

It is impossible to correlate the activity of these neurons

04:51

with the behavior of the car.

04:53

Moreover, if you look at the lower left side,

04:57

you see where in the image this decision-making engine looks

05:01

to tell the car what to do.

05:03

And you see how noisy it is.

05:04

And this car drives by looking at the bushes and the trees

05:09

on the side of the road.

05:10

That's not how we drive.

05:11

People look at the road.

05:13

Now contrast this with our liquid network solution,

05:16

which consists of only 19 neurons rather than tens of thousands.

05:21

And look at its attention map.

05:23

It's so clean and focused on the road horizon

05:26

and the side of the road.

05:28

Because these models are so much smaller,

05:30

we actually understand how they make decisions.

05:34

So how did we get this performance?

05:38

Well, in a traditional AI system,

05:41

the computational neuron is the artificial neuron,

05:44

and the artificial neuron is essentially an on/off computational unit.

05:48

It takes in some numbers, adds them up,

05:50

applies some basic math

05:52

and passes along the result.

05:54

And this is complex

05:55

because it happens across thousands of computational units.
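
For a concrete picture, here is a minimal Python sketch of the kind of on/off unit described here: it takes in some numbers, adds them up with weights and a bias, applies a simple nonlinearity and passes along the result. The specific numbers and the ReLU-style activation are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of a standard artificial neuron: weighted sum plus bias,
# passed through a simple nonlinearity. Values are illustrative only.
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """Take in some numbers, add them up, apply some basic math, pass along the result."""
    weighted_sum = np.dot(weights, inputs) + bias  # add up the weighted inputs
    return max(0.0, weighted_sum)                  # ReLU-style on/off output

# Example: one unit with three inputs.
x = np.array([0.5, -1.2, 0.3])
w = np.array([0.8, 0.1, -0.4])
print(artificial_neuron(x, w, bias=0.2))  # prints 0.36
```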

05:59

In liquid networks,

06:01

we have fewer neurons,

06:02

but each one does more complex math.

06:05

Here's what happens inside our liquid neuron.

06:08

We use differential equations to model the neural computation

06:12

and the artificial synapse.

06:14

And these differential equations

06:16

are what biologists have mapped for the neural structure of the worms.

06:22

We also wire the neurons differently to increase the information flow.
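
To make this more concrete, here is a rough sketch of a continuous-time, ODE-based neuron in the spirit of liquid networks, advanced with a simple Euler integration step. The talk does not give the exact equations, constants or wiring of the lab's models, so everything below is an illustrative assumption.

```python
# Rough sketch of an ODE-based ("liquid") neuron state, advanced with a
# simple Euler integration step. Equations and parameters are assumptions.
import numpy as np

def liquid_neuron_step(x, u, dt, tau=1.0, w=1.0, w_in=1.0, b=0.0, a=1.0):
    """Advance the neuron state x by one time step of size dt."""
    f = np.tanh(w * x + w_in * u + b)    # nonlinear synaptic activation
    dxdt = -(1.0 / tau + f) * x + f * a  # the gate changes both the time constant and the drive
    return x + dt * dxdt

# Drive one neuron with a time-varying input signal.
x = 0.0
for t in np.arange(0.0, 2.0, 0.1):
    u = np.sin(2 * np.pi * t)            # changing input, e.g. a sensor reading
    x = liquid_neuron_step(x, u, dt=0.1)
print(f"final state: {x:.3f}")
```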

06:27

Well, these changes yield phenomenal results.

06:31

Traditional AI systems are frozen after training.

06:34

That means they cannot continue to improve

06:36

when we deploy them in the physical world, in the wild.

06:40

We just wait for the next release.

06:43

Because of what's happening inside the liquid neuron,

06:46

liquid networks continue to adapt after training

06:49

based on the inputs that they see.

06:51

Let me show you.

06:53

We trained traditional AI and liquid networks

06:56

using summertime videos like these ones,

06:59

and the task was to find things in the woods.

07:02

All the models learned how to do the task in the summer.

07:06

Then we tried to use the models on drones in the fall.

07:10

The traditional AI solution gets confused by the background.

07:14

Look at the attention map; it cannot do the task.

07:17

Liquid networks do not get confused by the background

07:20

and very successfully execute the task.

07:24

So this is it.

07:26

This is the step forward:

07:27

AI that adapts after training.

07:31

Liquid networks are important

07:33

because they give us a new way of getting machines to think

07:38

that is rooted into physics models,

07:40

a new technology for AI.

07:43

We can run them on smartphones, on robots,

07:46

on enterprise computers,

07:48

and even on new types of machines

07:50

that we can now begin to imagine and design.

07:53

The second aspect of physical intelligence.

07:56

So by now you've probably generated images using text-to-image systems.

08:02

We can also do text-to-robot,

08:04

but not using today's AI solutions because they work on statistics

08:08

and do not understand physics.

08:11

In my lab,

08:12

we developed an approach that guides the design process

08:16

by checking and simulating the physical constraints for the machine.

08:21

We start with a language prompt,

08:23

"Make me a robot that can walk forward,"

08:26

and our system generates the designs including shape, materials, actuators,

08:32

sensors, the program to control it

08:35

and the fabrication files to make it.

08:37

And then the designs get refined in simulation

08:41

until they meet the specifications.

08:44

So in a few hours we can go from idea

08:48

to controllable physical machine.
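
To make that loop concrete, here is a schematic Python sketch of the generate, simulate and refine cycle described here. Every function, field and threshold below is a placeholder assumption, not the lab's actual system.

```python
# Schematic sketch of a prompt-to-robot loop: generate a candidate design,
# measure it in simulation, refine until it meets the specification.
# All names and numbers are placeholder assumptions.
from dataclasses import dataclass

@dataclass
class RobotDesign:
    shape: str
    materials: list
    actuators: list
    sensors: list
    controller: str             # program that controls the robot
    fabrication_files: list     # files needed to make it
    forward_speed: float = 0.0  # filled in by simulation

def generate_design(prompt: str) -> RobotDesign:
    # Stand-in for a generative model conditioned on the text prompt.
    return RobotDesign("quadruped", ["silicone"], ["servo"] * 4,
                       ["imu"], "gait_controller_v0", ["body.stl"])

def simulate(design: RobotDesign) -> RobotDesign:
    # Stand-in for a physics simulation that checks the design's behavior.
    design.forward_speed = 0.1 * len(design.actuators)
    return design

def refine(design: RobotDesign) -> RobotDesign:
    # Stand-in for an optimization step that edits the design.
    design.actuators.append("servo")
    return design

def text_to_robot(prompt: str, min_speed: float = 0.6) -> RobotDesign:
    design = simulate(generate_design(prompt))
    while design.forward_speed < min_speed:  # refine until the spec is met
        design = simulate(refine(design))
    return design

robot = text_to_robot("Make me a robot that can walk forward")
print(robot.forward_speed, len(robot.actuators))
```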

08:51

We can also do image-to-robot.

08:53

This photo can be transformed into a cuddly robotic bunny.

08:58

To do so, our algorithm computes a 3D representation of the photo

09:03

that gets sliced and folded, then printed.

09:08

Then we fold the printed layers, we string some motors and sensors.

09:12

We write some code, and we get the bunny you see in this video.
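
As a toy illustration of the slicing step only, the sketch below groups a 3D point cloud into horizontal layers. The real photo-to-3D-to-fold-pattern pipeline is far more involved; this code is an assumed stand-in for the idea, not the lab's algorithm.

```python
# Toy illustration of slicing a 3D shape into flat layers by height.
import numpy as np

def slice_into_layers(points: np.ndarray, layer_height: float):
    """Group 3D points into horizontal layers by their z coordinate."""
    z = points[:, 2]
    layer_index = np.floor((z - z.min()) / layer_height).astype(int)
    return [points[layer_index == i] for i in range(layer_index.max() + 1)]

# Example: a random blob standing in for the bunny's 3D representation.
cloud = np.random.rand(1000, 3)
layers = slice_into_layers(cloud, layer_height=0.1)
print(f"{len(layers)} layers; largest has {max(len(l) for l in layers)} points")
```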

09:16

We can use this approach to make almost anything

09:20

from an image, from a photo.

09:23

So the ability to transform text into images

09:28

and to transform images into robots is important,

09:31

because we are drastically reducing the amount of time

09:35

and the resources needed to prototype and test new products,

09:39

and this is allowing for a much faster innovation cycle.

09:45

And now we are ready to even make the leap

09:48

to get these machines to learn.

09:50

The third aspect of physical intelligence.

09:54

These machines can learn from humans how to do tasks.

09:57

You can think of it as human-to-robot.

09:59

In my lab, we created a kitchen environment

10:02

where we instrument people with sensors,

10:05

and we collect a lot of data about how people do kitchen tasks.

10:09

We need physical data

10:11

because videos do not capture the dynamics of the task.

10:15

So we collect muscle, pose, even gaze information

10:18

about how people do tasks.

10:21

And then we train AI using this data

10:24

to teach robots how to do the same tasks.
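
As a minimal sketch of learning from such demonstrations, the code below trains a small network to map recorded sensor features (standing in for muscle, pose and gaze data) to actions, in a behavior-cloning style. The dimensions, data and model are assumptions, not the lab's setup.

```python
# Minimal behavior-cloning sketch: map human sensor recordings to actions.
# Shapes, data and architecture are illustrative assumptions.
import torch
import torch.nn as nn

SENSOR_DIM = 32   # e.g. concatenated muscle + pose + gaze features
ACTION_DIM = 7    # e.g. robot arm joint targets

class DemoPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SENSOR_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM),
        )

    def forward(self, sensors):
        return self.net(sensors)

# Stand-in demonstration data: random tensors in place of real recordings.
sensors = torch.randn(256, SENSOR_DIM)
actions = torch.randn(256, ACTION_DIM)

policy = DemoPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(policy(sensors), actions)  # imitate the demonstrated actions
    loss.backward()
    optimizer.step()
```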

10:28

And the end result is machines that move with grace and agility,

10:34

as well as adapt and learn.

10:36

Physical intelligence.

10:39

We can use this approach to teach robots

10:42

how to do a wide range of tasks:

10:44

food preparation, cleaning and so much more.

10:49

The ability to turn images and text into functional machines,

10:54

coupled with using liquid networks

10:56

to create powerful brains for these machines

10:59

that can learn from humans, is incredibly exciting.

11:02

Because this means we can make almost anything we imagine.

11:07

Today's AI has a ceiling.

11:09

It requires server farms.

11:11

It's not sustainable.

11:12

It makes inexplicable mistakes.

11:15

Let's not settle for the current offering.

11:18

When AI moves into the physical world,

11:20

the opportunities for benefits and for breakthroughs are extraordinary.

11:26

You can get personal assistants that optimize your routines

11:31

and anticipate your needs,

11:33

bespoke machines that help you at work

11:36

and robots that delight you in your spare time.

11:40

The promise of physical intelligence is to transcend our human limitations

11:45

with capabilities that extend our reach,

11:48

amplify our strengths

11:50

and refine our precision

11:52

and grant us ways to interact with the world

11:55

we've only dreamed of.

11:58

We are the only species so advanced, so aware,

12:02

so capable of building these extraordinary tools.

12:06

Yet, developing physical intelligence

12:09

is teaching us that we have so much more to learn

12:12

about technology and about ourselves.

12:16

We need human guiding hands over AI sooner rather than later.

12:20

After all, we remain responsible for this planet

12:23

and everything living on it.

12:26

I remain convinced that we have the power

12:29

to use physical intelligence to ensure a better future for humanity

12:34

and for the planet.

12:36

And I'd like to invite you to help us in this quest.

12:39

Some of you will help develop physical intelligence.

12:43

Some of you will use it.

12:45

And some of you will invent the future.

12:48

Thank you.

12:49

(Applause)