Navigating the New Era of AI

Generative AI is evolving rapidly and is being used in a growing number of areas, but it poses risks such as disinformation and copyright infringement. We discuss how the humans who use it should respond.

Moderator
Takao Minori
NHK WORLD-JAPAN

Panelists
Yoshua Bengio
Professor, University of Montreal

Andrew Ng
Founder, DeepLearning.AI

Meredith Whittaker
President, Signal

Oshiba Kojin
Co-Founder, Robust Intelligence

Transcript

00:08

Artificial intelligence is taking the world by storm.

00:14

The game changer was ChatGPT.

00:18

It can generate sentences as if written by a human being.

00:24

There are more than 100 million users worldwide and the number is growing.

00:31

AI can also produce images that closely resemble reality.

00:39

However, there are serious concerns surrounding the rapid evolution of AI.

00:46

Top industry leaders and experts released this statement in May,

00:50

warning of the risk of extinction posed by the technology.

00:57

Even the CEO of the company that developed ChatGPT called for the need to regulate AI.

01:05

If this technology goes wrong, it can go quite wrong.

01:09

And we want to be vocal about that.

01:11

We want to work with the government to prevent that from happening.

01:15

What benefits and risks will AI bring to our society?

01:28

Welcome to GLOBAL AGENDA from New York.

01:30

Over the past few months,

01:32

the development of ChatGPT has sparked fresh hype and renewed debate about artificial intelligence technology.

01:39

Ideas that were the stuff of science fiction movies just a few years ago are becoming reality,

01:44

including exciting breakthroughs and potential threats to society.

01:50

How could rapidly accelerating AI technology shape our future?

01:55

Well, our guests today will help us find answers to that question.

02:00

We are first joined here by Yoshua Bengio.

02:02

He is one of the world's most renowned pioneers in the field of deep learning.

02:07

He teaches computer science at the University of Montreal.

02:10

Thank you for joining us, Yoshua.

02:11

You are known as a godfather of AI.

02:14

This technology has been your life work.

02:17

Did you expect AI to develop so quickly and in the way that it has?

02:23

I did not expect it to develop so quickly and not in that way.

02:29

I thought that we would make progress on like lower-level forms of intelligence,

02:34

for example, perception and action before we make progress in things like language.

02:39

But for reasons that we're starting to understand, it's the other way around right now.

02:44

Are you surprised?

02:46

Yes.

02:48

Well, in the studio with me is Meredith Whittaker,

02:50

who has advised the White House, the Federal Trade Commission, the European Parliament, just to name a few organizations about AI.

02:58

Now, Meredith is the president of Signal,

03:00

a nonprofit that has developed a messaging app with a high priority on protecting privacy.

03:05

You worked once at Google, a major proponent of this technology.

03:10

How do you view the current buzz around generative AI and the role of humans in this advancement?

03:18

Great question.

03:19

I think, you know, I would...

03:23

I look at this from a longer-term perspective.

03:26

And I think, you know, when I look at AI, I think we need to recognize this field is not new.

03:31

It's over 70 years old.

03:33

So, the question that we need to then answer is:

03:36

why are we suddenly talking so much about it now?

03:39

And what really changed around 2010 wasn't necessarily new approaches to the AI algorithms,

03:46

the development in AI research and science, so much as the sudden presence of huge amounts of data

03:53

and huge amounts of computational power that hadn't been available before.

03:58

So, what I look at when I look at AI is the concentration of those resources in the hands of a handful of tech companies

04:06

which have propelled this wave of hype around AI, and the dependence of AI on those concentrated resources.

04:15

Well, talking about companies.

04:16

We go now to Oshiba Kojin, who joins us from Tokyo.

04:20

You are the cofounder of Robust Intelligence, a San Francisco startup that's creating software to ensure that AI is safe to use.

04:29

Now, Kojin, your clients range from medical companies to the travel industry to online banks.

04:35

As a young business leader,

04:37

how do you feel about the growing debate over AI's advantages versus the dangers that it presents?

04:44

Yeah, so since the advent of ChatGPT, we've had a lot of companies reaching out to us for help.

04:51

I would say in the first three months after the arrival of ChatGPT,

04:55

I'd call it a honeymoon period that companies had with generative AI,

05:00

but now people are starting to realize the risks of introducing these models into production,

05:06

that are actually making real-time, high-risk decisions in front of customers.

05:10

And so, this is a serious risk that even in the short term, companies are facing right now.

05:16

And solving that problem requires both technical solutions like, the ones we're developing,

05:22

as well as collaboration with people who bring ethical and legal perspectives.

05:29

Andrew Ng is the founder of DeepLearning.AI,

05:33

a company that uses AI to create educational and training tools for workforce development.

05:39

Andrew, you also teach at Stanford University.

05:42

You're the former head of AI strategy at Baidu, the Chinese search engine company.

05:48

As a prominent figure in both business and education,

05:51

do you view the growing popularity of generative AI with excitement or with caution?

06:00

I think that it's very exciting.

06:02

While AI has risks, such as bias, fairness issues, concentration of power, and maybe more catastrophic ones as well,

06:10

I think on balance, AI's making society much richer.

06:13

And what has happened in the last couple of years, maybe last six months or a year,

06:18

is that with the rise of generative AI, the set of things that we can do with AI has suddenly expanded again,

06:24

and just creates a lot of opportunities for everyone in society to use AI to process their own data.

06:30

Whether you're a giant tech company with massive amounts of data,

06:33

or an SMB, a small or medium business, with much more modest amounts of data,

06:38

the number of opportunities for everyone to embrace and use this to make everyone's lives better is even greater than it's ever been.

06:47

One thing that we cannot deny is the sudden surge in speculation surrounding ChatGPT and

06:52

other forms of generative AI and its revolutionary impact on businesses.

07:05

This travel agency in the US is trying out ChatGPT to draft travel plans.

07:13

Enter a trip to Paris for 3 people for a budget of $5,000, and the plan appears.

07:21

It includes highlights such as visiting Notre Dame and the Eiffel Tower.

07:27

To fit within the trip's budget, it suggests having meals at local bakeries.

07:33

It could affect some parts of the industry.

07:37

It could affect some travel agents in a way that they will feel obsolete.

07:44

Some people in the film industry are experimenting with AI in more sophisticated ways.

07:50

AI wrote most of the script for this seven-minute short film.

07:55

It depicts a world dominated by AI.

07:59

In the story, only one person in a family can escape to the safe zone, and three siblings argue over who will go.

08:09

Strong man like me!

08:11

I'm the strong one here!

08:12

You know what, I'm the most resourceful one.

08:14

Maybe I should go!

08:17

AI generated this story idea in less than a minute.

08:21

It also directed the actor's facial expressions, decided on the type of camera lens, and even how to shoot the scenes.

08:30

This technology is becoming so advanced that it could pose a threat to highly skilled jobs like writing and directing.

08:38

I definitely think that there is going to be AI-generated films, TV shows, plays, scripts.

08:48

It's just, it's inevitable, because it can.

08:52

Meanwhile, generative AI is also creating new jobs.

08:57

One of them is called a "prompt engineer,"

09:00

a professional who specializes in crafting good questions to help AI tools achieve the best answers.

09:08

I think we need prompt engineers because ChatGPT is so powerful but it's only powerful if we're using it correctly.

09:16

The demand for prompt engineers is growing, and some posts offer more than $280,000 annually.
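For readers curious what "crafting a good question" looks like in practice, here is a minimal sketch of sending a structured prompt, like the Paris trip described earlier, to a chat model. It assumes the OpenAI Python client with an API key in the environment; the model name is illustrative, and the same idea applies to any chat-style model.

# pip install openai
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A prompt engineer spells out the task, constraints, and output format
# instead of asking a vague question.
prompt = (
    "Plan a trip to Paris for 3 people with a total budget of $5,000. "
    "Include daily highlights (e.g., Notre Dame, the Eiffel Tower), "
    "estimated costs per day, and budget-friendly meal suggestions."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name, not the one used in the program
    messages=[
        {"role": "system", "content": "You are a careful travel-planning assistant."},
        {"role": "user", "content": prompt},
    ],
)
print(response.choices[0].message.content)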

09:25

How will generative AI impact our society and who will benefit?

09:36

So, AI itself is not new.

09:40

You know, we've had this technology.

09:42

What exactly, then, is so different now compared to what we had in years past?

09:49

Yoshua, what is it that's so different?

09:53

Actually, it's interesting you're asking because at the level of the underlying principles, the science behind, say,

10:01

GPT-4 and so on, as far as we know (we don't know all the details, unfortunately),

10:07

there is nothing very new. There's probably a lot of significant engineering

10:15

and the scale of both the data sets and the training.

10:21

There are some combinations of things we knew before, like fine-tuning with reinforcement learning.

10:27

But the most surprising effect is that when you train these really, really large neural nets on really, really large datasets,

10:39

like 10,000 or 20,000 times more than a human could read in their lifetime,

10:47

it works surprisingly well.

10:49

In fact, so well that such a machine can pass for a human, you know, for some time at least,

10:56

if you're not expert at asking all the tricky questions.

10:59

And yeah, so what do we do with that as a society? I think this is, uh, very important.

11:06

Currently, I'm not sure that, as you know, the direction in which we're going has more benefits than risks.

11:16

And we should be cautious.

11:18

Andrew, what do we mean when we say that, you know, things are happening now,

11:24

even though the technology hasn't changed so much?

11:27

Like why then is everyone fussing so much about this now?

11:32

Well, as Yoshua said, the scaling of these algorithms has allowed them

11:36

to demonstrate levels of intelligence that were not possible before.

11:40

And to me, this is very exciting because AI is a general-purpose technology.

11:46

And this is true, both for generative AI and for the older deep-learning based methods from 5, 10, 15 years ago.

11:53

But what that means is it's not just useful for one thing, like travel or online advertising.

11:59

It's useful for a lot of different things across the entire economy, and this opens up opportunities for everyone.

12:06

Looking around all sectors of the economy, I think adding more intelligence to the world seems like a good thing,

12:12

and adding more intelligence to all sectors of the economy creates a lot of opportunity for everyone.

12:18

Meredith, how concerned should we be about AI and our jobs?

12:25

Well, let's sort of roll it back a little.

12:27

I agree with Yoshua and Andrew.

12:30

You know, really what is new here is, you know, scale, size.

12:36

But, and this is really important, they are developed and controlled by a handful of interested actors

12:43

who will ultimately design and deploy them to meet their incentives.

12:48

And what are the incentives of the modern corporation? Profit and growth going up forever.

12:53

So, these are not incentives that we can argue will lead to social benefit for everyone.

12:58

These are incentives that are going to benefit these corporations.

13:02

They're going to benefit the companies that use them.

13:05

So, if we look at the film industry right now, there is a large strike going on by writers.

13:11

The Writers Guild of America is a powerful union in the United States

13:15

representing Hollywood writers, and they are withdrawing their labor, saying,

13:19

no, you cannot introduce AI into our creative process and use that to degrade our working conditions.

13:27

I agree with Meredith that until now, AI has been relatively concentrated.

13:33

So, if we look around society and ask, why isn't AI more widely adopted yet?

13:38

You've been kind of talking about it for 10, 15 years. Right? Many of us.

13:43

I think that this is part of the problem,

13:45

which is if you were to list all current and potential AI projects one could do in the world,

13:50

there are the, you know, let's call it billion-dollar projects, like improving the ad system at a big tech company like Google.

13:56

They would figure it out.

13:57

I mean, my collaborators and I, we would figure out a recipe, hire dozens or hundreds of engineers,

14:02

write one piece of software that you then apply to a billion users, and that generates massive economic value.

14:08

So, we know how to do that.

14:10

But these days, for example,

14:12

I've been working with a pizza factory that needs to take pictures of the pizza

14:17

to make sure the cheese is spread evenly.

14:19

That's like a $5 million project, not a billion-dollar project.

14:22

An "all-recipe," of having dozens or even hundreds of engineers to work on a $5 million project, that doesn't make sense.

14:29

So, one of the things I'm excited about is that with the new AI technologies, like prompting and other things,

14:36

the AI tools are getting easier to use, and that makes it easier for the IT department of,

14:40

say, the pizza maker, to go and execute these projects.

14:44

And I think this will be a critical part of the recipe for how to push AI beyond consumer software and the internet,

14:49

which has been incredibly lucrative, to really all of the economy.

14:55

Kojin, your company, though, in a way,

15:00

does benefit from the fact that the spread of generative AI

15:06

has companies concerned about the safety of their businesses.

15:13

How do you see this whole new trend?

15:17

Yeah, so I see a lot of analogy to, for example,

15:22

the cyber security or the IT risk industry that has existed for probably decades.

15:29

When we actually started the company in 2019,

15:31

there was no generative AI, and I guess deep learning was the hype back then.

15:36

But what we thought about then was that if there are so many companies that exist

15:43

to help other companies to protect themselves from cybersecurity attacks and IT risk,

15:49

there should be something similar for AI as well.

15:52

So yes, it is true that we're benefiting as a company from more people using AI and being exposed to AI risk.

16:02

But I feel like the work that we're doing

16:05

is doing good for the companies that are trying to build AI in a responsible and safe way.

16:14

Yoshua, should we be concerned,

16:16

that AI will take away people's jobs or that it will add jobs for industries that perhaps did not exist before?

16:26

Oh, both things are going to happen, obviously.

16:29

But I think what we should worry about is the transition.

16:32

And in some countries, there is a social safety net, which is reasonable; in others, it's nonexistent.

16:42

I'm thinking, for example, of developing countries.

16:46

A lot of the things that are, you know, done with cheap labor in these countries might be automated at some point.

16:55

The other thing, where I want to agree with Meredith, is that

16:59

when you have powerful tools, and humans have been building tools, you know, since we became humans,

17:07

these tools can, of course, you know, bring benefits to humanity.

17:13

But in a kind of law-of-the-jungle system, or a free capitalism system,

17:21

often what happens is concentration of power,

17:23

because the people who have more power will be able to design and use those tools.

17:29

Think about the Industrial Revolution, for example.

17:31

And it took governments to intervene in order to redistribute the wealth that came out of the Industrial Revolution.

17:39

I think here something similar may be needed as we build more wealth with AI.

17:47

We need governments to intervene to make sure it benefits everyone.

17:52

And as I'll talk about later, we need to worry about the harms and risks as well.

17:58

And that's also governments' roles.

18:01

Yeah, I absolutely agree with Yoshua there and I want to actually build on some of the examples

18:06

that Andrew gave because I think this gives us a good sense of what we're working with.

18:11

So, you know, absolutely there are ways that a pizza place can apply AI, can apply derivatives of these systems.

18:20

But if we looked carefully at what they're doing, I think it would be really important to be concrete.

18:26

Because it's absolutely certain that a place with a $5 million IT budget is not running their own infrastructure.

18:34

They're very likely to be licensing AI from one of the large companies.

18:39

Perhaps they're doing a little bit of tuning on top.

18:42

But what we're not talking about is the ability for companies outside of these large organizations

18:49

to create these large AI systems from start to finish.

18:53

And so, the way these systems operate, what systems are even available,

18:59

and you know, what terms those systems are available on will be dictated by these large companies.

19:05

Even if many others can license systems from them and use them for one purpose or another.

19:13

Anything we choose to use should come with an acknowledgment of our own responsibilities,

19:18

that means being aware of the possibilities, but also of the risks and the potential consequences.

19:34

AI-generated fake images are now everywhere on the Internet.

19:39

This is a fake image of former President Trump being apprehended by police.

19:46

This photo of the Pope wearing a luxury puffer jacket was also reportedly created by AI.

19:55

In this deepfake video, Hillary Clinton is endorsing Ron DeSantis, a Republican candidate in the US presidential election.

20:04

You know, people might be surprised to hear me say this, but I actually like Ron DeSantis, a lot.

20:11

The number of deepfake videos posted online has tripled this year compared to the same period last year,

20:19

according to AI platform company DeepMedia.

20:25

AI images are created by typing in a description

20:28

which generates a picture using masses of data from the Internet and other resources.
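As an illustration of that text-to-image workflow, here is a minimal sketch using the open-source Hugging Face diffusers library with a Stable Diffusion checkpoint; this is not the specific system shown in the program, and the model name and prompt are assumptions for illustration.

# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available text-to-image model (assumed checkpoint name).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # a GPU is assumed; on CPU, load with float32 and .to("cpu")

# The typed description ("prompt") is all the user provides; the model
# generates the picture from patterns learned from large training datasets.
image = pipe("a photo of a firefighter at work").images[0]
image.save("firefighter.png")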

20:37

However, this technology is causing another problem:

20:41

gender and racial biases in the system can reinforce discrimination.

20:48

A study by OpenAI, the company that created ChatGPT,

20:53

showed that typing the word "firefighter" resulted in only images of white men,

20:59

while the word "teacher" only produced images of white women.

21:05

OpenAI took steps to fix this last year.

21:09

If you typed in the word "CEO", it used to be all men, but after the adjustment,

21:15

images now include women and people of various races.

21:20

How should we deal with these new risks?

21:27

Well, you know, as the Internet grew to become part of our daily lives,

21:31

people warned of its potential spread of misinformation.

21:36

Similar concerns have been raised about social media.

21:38

You know, when we start getting these new technologies in our hands,

21:42

then everyone starts talking about the risks and the dangers.

21:46

So, is the level of risk different this time? Yoshua, can we go to you?

21:54

Yes. Many of us in the AI community have been talking about the social impact of AI for almost a decade now,

22:05

and things like discrimination and bias and power concentration have been evoked.

22:12

But there are new risks now.

22:13

So, in the short term, as the video alluded to,

22:18

I think democracy is at stake, and the trust that people have in each other and in the media.

22:26

It's the generated images, videos, sounds and texts that can fool us.

22:36

And if we're not careful, the level of misinformation which already exists is just going to be greatly increased.

22:47

And we don't know how this is going to affect the politics, but it's not something we want.

22:56

Yoshua, you actually used the word existential when it comes to the threat,

23:01

not just a threat, but existential, and that's very strong.

23:04

What are those worst-case scenarios?

23:08

Well, there are a number of scenarios that worry me.

23:13

First, the simplest is a person or an organization, let's say in 5, 10 years from now,

23:21

when the technology is superior to human intelligence and is accessible fairly widely.

23:30

And such a person or organization could be, you know, military,

23:34

could be, you know, a terrorist, it could be, let's say, conspiracy theorists.

23:45

You can imagine all kinds of things.

23:47

And then if you take these systems,

23:52

what's important for people to realize is that you can change their goals on the fly,

23:56

if you put as goals something that could be highly destructive.

24:01

And if the machine has access to the Internet, which is already possible, as part of the set of actions that it can take.

24:09

It can destabilize democracies, it can have, you know,

24:12

military impact and could be highly destructive on the scale of, like, nuclear war.

24:18

Then you have the even more worrisome risk,

24:21

which I'd call the Frankenstein sort of tendency that we have, the desire to build machines that are like us.

24:31

And if we do that, I think it'll be a terrible mistake,

24:34

because that means we'll build machines that have their own self-interests.

24:38

And if those machines are smarter than us and they have their own self-interest, self-preservation,

24:44

it would be like creating a species smarter than us.

24:49

They, you know, they could do R&D, they could hack our cyber infrastructure.

24:54

You know, they might decide to do things for their own interests that are good for us or bad for us.

24:59

It's very hard for us to know, but we would lose control.

25:03

And that is very worrisome.

25:06

Andrew, you had your hand up.

25:08

It is true that we can't perfectly control AI today.

25:13

When I look at the rise of technology, another technology you can't perfectly control is airplanes, right?

25:18

They're quite safe today.

25:19

We can get them down, but airplanes get blown around by the winds

25:23

and no one can get the airplane to point exactly where you want.

25:26

What happened with the rise of aviation is there were a lot of plane crashes.

25:31

Many people died. It was completely tragic.

25:33

But over time, we learned to control airplanes better and better, to the point where we can now get in an airplane,

25:41

and almost all the time it is safe and, I think, massively benefits society.

25:45

There is one other challenge that I worry about significantly, which is what Meredith mentioned,

25:52

which is the concentration of power and job loss.

25:54

And in fact, if I were to paint the most dystopian, worst-case scenario that I can think of,

26:00

we've seen that a lot of countries that are natural-resource rich, say oil rich, have the worst human rights records.

26:07

If you search online there's something called the "resource curse,"

26:10

where countries that rely a lot on primary natural resources often are less democratic.

26:16

And that's because if people aren't important for the economy,

26:20

then the dictator of a country rich in natural resources doesn't need to take care of the people.

26:26

I think that could lead to a degradation in democracy.

26:30

And I think keeping humans important is key to making sure that governments work to bring these amazing benefits fairly to all.

26:38

I think that's the important thing we need to do even as we navigate the rise of AI.

26:43

Kojin, yeah, you had your hand up while we've been talking.

26:48

Well, most of these tech companies are based in the US and, you know, places like, for example,

26:54

Asia are not necessarily, you know, as advanced in that way compared to the United States.

27:02

How do you see this whole situation?

27:06

Yeah, I think it's certainly problematic that the most powerful models are concentrated primarily in the US and

27:13

there's no OpenAI analog in Japan or any other part of Asia, or even in Europe.

27:22

And I think this is where, for the US, there are existing large tech companies that can be such model builders,

27:32

but these other countries should take other potential measures, such as government funding of

27:38

the nationwide R&D of the development of these large language models.

27:44

There is a concentration among a small set of companies that are developing AI.

27:48

But even within those companies,

27:50

there's a concentration of certain roles that are involved in the development of AI,

27:56

and those groups tend to be very biased as well.

27:59

So concretely, for a long time, AI has largely, as you can imagine,

28:06

been driven by engineers and data scientists who all have PhDs and come from very similar backgrounds,

28:13

and from a certain race and gender,

28:15

and when they are the ones deciding what kind of ethical decisions AI should make,

28:26

that might not result in the most kind of fair or responsible outcomes.

28:32

And it's not their fault.

28:34

It's just the nature that when you have technical stakeholders involved,

28:38

you should also have other stakeholders that are experts in ethics,

28:44

that are experts in compliance and risk.

28:48

And we should involve more of those people into these developments.

28:52

And that would already ensure, to some extent, the safety and reliability of these models.

29:00

How dangerous is it that the data that we have does seem to concentrate on

29:05

certain populations and certain people, or that it directs us in certain ways?

29:10

And this includes not just race, but gender as well.

29:13

Yeah, it's certainly dangerous.

29:16

And it, I believe, is also certainly unavoidable given the type of data we have.

29:21

This data comes from the Internet and it reflects a world

29:26

that has also a history of racism and misogyny and other discriminations.

29:32

And so, you know, when we talk about this as a technical problem, I think we need to re-shift our frame.

29:37

This is a social problem.

29:39

And the tools we build by scraping these data will reflect these dynamics unless we change the social dynamics,

29:48

unless we actually fix the patterns that recreate and reproduce racism and misogyny.

29:55

So, you know, I think an example here could be useful as we talk about these de-biasing techniques:

30:00

we saw these examples of OpenAI showing all white men and then, you know,

30:05

adding some nonwhite men and some women into the suite of CEOs.

30:10

Well, how is that accomplished?

30:12

There's a term called reinforcement learning with human feedback,

30:15

which is a fancy term that basically means a number of low-paid workers are tasked

30:21

with calibrating these models after they're designed.

30:26

They have to sit there and continually show them examples that de-bias them, so to speak.

30:32

And this work is often very traumatic.

30:34

These workers have to look at examples of bias, examples of violence,

30:38

examples of harm, examples of racism over and over and over again until the model conforms

30:47

to a more polite version of the world we want to see.
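To make the mechanism Meredith describes concrete, here is a minimal, self-contained sketch of the preference-learning step behind reinforcement learning with human feedback: a small reward model is trained so that responses annotators preferred score higher than rejected ones. The toy encoder, random token data, and hyperparameters are assumptions for illustration, not any company's actual pipeline.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy reward model: embeds a response and maps it to a scalar score."""
    def __init__(self, vocab_size=50_000, dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # stand-in for a real language model
        self.score = nn.Linear(dim, 1)

    def forward(self, token_ids):
        return self.score(self.embed(token_ids)).squeeze(-1)

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# Stand-in for human feedback: annotators marked one response in each pair
# as preferred ("chosen") and the other as rejected.
chosen = torch.randint(0, 50_000, (4, 32))    # token ids of preferred responses
rejected = torch.randint(0, 50_000, (4, 32))  # token ids of rejected responses

# Bradley-Terry preference loss: push chosen responses to score higher than rejected ones.
loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
loss.backward()
optimizer.step()
# The learned reward model is then used to fine-tune the base model
# (e.g., with a policy-gradient method) toward human-preferred outputs.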

30:51

Yoshua, you had your hand up.

30:54

How do you feel we should approach this...

30:56

Well, I just want...

30:57

I agree, and I'd like to add that there are also things we can do in terms of that.

31:04

For the technical people around the table right now, in computer science

31:09

there is essentially zero training in terms of ethics, social sciences, you know, social impact and so on.

31:17

And yet computer scientists like changing the world with AI and other things.

31:23

So, this is something that, you know, needs to be taken at the source during training of these people.

31:29

And also, of course, the diversity issue is something many people in our community are concerned with.

31:37

How do we make sure we bring in, you know, a more diverse group of people on the technical side?

31:43

And, of course, we need to work with people with the skills in terms of social impact and ethics.

31:50

And we should be careful that it's not something that is just, you know, for display, to show that we are a good company.

32:00

I don't believe in self-regulation.

32:02

I think this is something that governments need to, you know, put rules around.

32:08

Andrew, you have your hand up.

32:10

Let me share what I see, because I think sometimes the outside view is different

32:14

from what I think is actually happening inside.

32:15

Which is I see my friends at Google or my friends at OpenAI,

32:19

really debating and discussing things and trying to do the right thing.

32:23

I'm not saying that people will always make the right decision.

32:25

Definitely companies, big and small, make mistakes.

32:28

But I see, you know, for the most part, good people really try to make the right decisions.

32:33

But sometimes when the right decision is ambiguous, they will discuss and debate,

32:37

bring in diverse stakeholders and still not entirely know what's the right decision.

32:42

But it's not for lack of trying.

32:44

My teams at AI Fund, we do routinely, you know, kill projects that we assess

32:50

to be financially sound, on ethical grounds.

32:52

We've done that multiple times, and we keep on doing so.

32:54

So, I think there is a stereotype of this AI cowboy that just does whatever.

32:58

I think that stereotype, for the most part, is not true.

33:01

To be candid, the one exception I've seen in tech

33:04

is when there's a very strong financial incentive at the company level.

33:08

For example, if a social media company has a strong financial incentive to just drive engagement,

33:14

you know, maximizing user time on site over all other things.

33:16

Sometimes that's difficult for the engineers to overcome.

33:19

But almost all the engineers I know are very ethical,

33:22

really trying to do the right thing and agonize over these discussions.

33:27

Meredith—

33:28

I want to agree with Andrew, that you know, absolutely not everyone in these companies is,

33:35

you know, out to be a cowboy or just, you know, break everything.

33:38

It's a very old mode in tech and I do know many thoughtful people in these companies who agonize over these decisions.

33:45

That's absolutely true.

33:47

However, that's not enough.

33:49

And I think I want to use the example of OpenAI here.

33:52

You know, I think, I've talked to people at OpenAI.

33:55

I know, you know, there's a number of very ethical people.

33:58

But ultimately, you know, they don't have the expertise on outside domains that many do.

34:04

They're also not Microsoft.

34:06

And of course, OpenAI is effectively a subsidiary of Microsoft now.

34:10

So, the decisions are often not going to be made by the engineers or the developers.

34:15

And so, you're looking at a structural bias, a structural racial bias that exists in the fact,

34:22

that those who get to deploy these systems, those who profit from them,

34:27

are generally whiter and more male than those who are subject to them.

34:31

Those on whom the systems are deployed.

34:35

And I think we really have to keep this in mind,

34:37

in analyzing the incentive structures of the handful of corporations that deploy them.

34:43

It doesn't necessarily matter how the model performs.

34:47

It matters whose interests it's performing in and how they decide to use it.

34:53

Well, even those in the generative AI business have pointed to the need for regulation.

34:59

Do we need to slow things down? And who's responsible for making that happen?

35:12

Last May, the CEO of OpenAI, the company that developed ChatGPT,

35:18

urged lawmakers at a congressional hearing to regulate AI technology.

35:24

If this technology goes wrong, it can go quite wrong.

35:27

And we want to be vocal about that.

35:29

We want to work with the government to prevent that from happening.

35:33

US President Biden also expressed concern when he met with AI experts in June.

35:41

In seizing this moment, we need to manage the risks to our society, to our economy and our national security.

35:50

Meanwhile, Europe is already working on new legislation.

35:56

The European Parliament agreed to revise a draft bill, known as the AI Act, to set tighter rules on generative AI.

36:06

Here there is one thing that we will not compromise on:

36:09

any time technology advances, it must go hand in hand with our fundamental rights and democratic values.

36:17

The draft bill prohibits the use of live facial recognition systems in public places to protect people's privacy.

36:26

It also requires the disclosure of copyright information if copyrighted data is used to train AI.

36:34

AI-generated texts, images, and audio must be identified clearly to make them detectable.

36:41

This helps to distinguish deepfake images from real ones.

36:48

These regulations are still being discussed and questions remain about whether they will be effective.

36:59

What do you think of this, these regulatory movements, Yoshua?

37:05

The movement in Europe, for example, is really a step in the right direction.

37:10

I'm very happy they're moving in that direction, but it has to be something that we also discuss globally.

37:16

One thing I want to say about regulation is that we need to set up agile regulatory bodies.

37:26

It's not enough to have a law that says, okay, you can do this, you cannot do that, and so on.

37:31

Because what, you know, governments need to do in order to intervene and protect the public is going to evolve.

37:38

The technology changes, and the good and bad uses are also going to evolve.

37:43

Some we don't think about right now,

37:46

some threats to democracy we haven't really thought about, and we'll need new rules.

37:50

And, you know, these laws take years to pass.

37:53

So that's not good enough.

37:55

In general, what we need from these, first of all, is oversight, audits.

38:01

We need to...

38:02

maybe not everyone, because I understand companies want to protect their secrets,

38:06

but people who represent the government, civil society, scholars that don't have conflicts of interest

38:15

need to be part of this oversight.

38:18

And I'm talking about conflicts of interest.

38:20

It's important because right now in those discussions, for example, at the US government,

38:26

we've mostly heard from people working for companies that have an interest in the development of AI.

38:33

And, you know, maybe they're, you know, I'm sure they're speaking in good faith,

38:37

but I would feel better if we see more academics that don't have ties to companies, that don't have an interest,

38:46

at least playing an important role in those discussions.

38:50

- Meredith-
- Yeah.

38:50

I think disinterested engagement is absolutely essential, and

38:56

I'm heartened by what's going on in Europe.

38:58

There are many flaws in that regulation as well.

39:01

And of course, in Europe in particular, enforcement matters.

39:04

So, whether and how and who's going to enforce these laws is still an open question.

39:08

And as with GDPR, enforcement has been very patchy there.

39:13

So, I think we can't count on Europe to save us.

39:16

We can't count on the US coming up with a robust set of laws,

39:19

particularly given the amount of lobbying that's happening by these companies in each.

39:24

And, throughout history, what you see is actually not often automation or mechanization replacing workers.

39:32

It's mechanization and automation being brought in by bosses to justify paying workers less,

39:38

even if the automation or mechanization doesn't do what they say it does.

39:43

So, I think here we need to keep our eye on the ball, and we need to look to regulatory efforts from the people

39:48

who are most at risk of harm from the deployment of these technologies.

39:53

Kojin?

39:55

One thing that worries me about when I read the AI Act is that

39:59

it's extremely demanding, and that worries me because,

40:06

I suspect, there are certain sets of companies that will be able to comply with these regulations,

40:14

but there might be smaller, mid-sized companies that want to do good but might not have the energy, the resources,

40:22

the bandwidth to have all the controls in place.

40:27

And there's a serious concern about hindering innovation at small to midsize companies.

40:33

One potential solution, which I don't think is emphasized enough in the AI Act,

40:39

for example, is something related to what Yoshua mentioned: leveraging the ecosystem.

40:45

So, not framing the regulation as something that's government versus big tech.

40:51

But there are other entities in the entire AI ecosystem.

40:54

There are the academics, and there are also companies like us, for example,

40:58

that are not building AI models but are solely built for the purpose of auditing.

41:04

And if you look at accounting, for example,

41:07

you see a lot of accounting firms that do financial audits of big enterprises as well as smaller ones.

41:14

And so, I think it's very important to look at this regulation from the standpoint of

41:19

how can we leverage the entire AI ecosystem in order to not just enforce certain controls,

41:26

but at the same time encourage new innovation and balance those trade-offs.

41:34

Andrew—

41:36

You know, I'm fully on board with the importance of transparency and audits so we can better figure out how to move things forward.

41:43

But one missing piece of the regulatory puzzle is, I think in addition to mitigating harms,

41:49

governments have an important role to play to enable value creation.

41:53

So, for example, I think there is a possible path to putting a, you know,

41:57

personalized tutor in every child's pocket, or maybe having some sort of healthcare thing in everyone's pocket, because of AI.

42:05

But we need the government's help to make more research investments in universities and other nonprofit research organizations.

42:13

I think that some of the regulations need to change in order to enable new forms of healthcare and education

42:18

and financial services that are now possible that would create a lot of value.

42:22

And then, with things like the strike at the Writers Guild of America,

42:26

I feel like there will be significant job loss and then job displacement and the role of government is

42:32

to invest in upskilling and reskilling of workers as well as creating a safety net so that...

42:37

You know, we don't see a lot of elevator operators around anymore because now elevators are automatic.

42:44

You don't need a human to operate elevators.

42:46

Those jobs went away.

42:47

But I think the role of government is to invest in education and reskilling, and a safety net, so that society can create new value,

42:53

so that, frankly, people whose jobs are affected are well taken care of.

42:57

I think it's important that even as we create value, we try to take care of everyone in society.

43:03

Yeah, I think that's a great point.

43:06

And, you know, I would love to share your optimism, Andrew.

43:11

You know, on the comment on the writers' strike, I think,

43:15

you know, it's certainly true that what counts as a job shifts over time.

43:19

And in fact, the construct of waged work as a way that we redistribute or distribute our resources

43:25

is itself fairly recent in human history.

43:28

But I don't know that I can sit comfortably with the idea that the process of creative human storytelling

43:36

will somehow be eliminated and we will instead defer to machines.

43:41

I think that, you know, there are some things where we do want the writers to win.

43:45

We want the unions to win.

43:46

That's the solution.

43:47

Not upskilling writers to edit AI content, or otherwise allowing ourselves to be displaced

43:55

by narratives of intelligence that are being sold by a handful of companies.

44:00

So, you know, I do think we need to draw some lines and not accept these narratives as inevitable.

44:06

You also...

44:07

go ahead, Andrew.

44:10

I'm actually very sympathetic to a lot of what you said.

44:14

For what it's worth, when I write software code,

44:17

the way I code has changed dramatically because of GPT-4, much more than I would have realized.

44:21

And I feel like, for a lot of job roles, people that use AI will replace people that don't use AI,

44:28

but navigating that, and where humans and AIs together can do better than pure replacement,

44:34

I think it's a complicated puzzle.

44:36

There'll be different pieces of the economy.

44:38

And government has a large constructive role that it should play in that.

44:43

- Yoshua—
- Yes.

44:45

Companies are going to be doing things that are profitable with AI.

44:49

But there are many other things that are not sufficiently profitable for companies to really invest in.

44:54

So, what I would like to see and others have been talking about is

44:58

the creation of an international organization, which may have research centers in different countries, that is not for profit,

45:11

has a mandate to work towards these beneficial applications,

45:15

socially beneficial applications that don't get the attention that they should receive.

45:19

For example, using AI to prepare for future pandemics,

45:24

developing solutions to help with biodiversity and climate change.

45:30

And also do the necessary research to understand the risks, the harms that are currently going on,

45:36

the harms that, you know, we are concerned about in the coming years with democracy, or later with loss of control.

45:45

We need like massive investments in protecting the public.

45:50

That is not really happening right now.

45:53

We need a better alignment here.

45:54

We can't rely fully on companies to do these things.

46:07

I'd like to go to the last question for all of you actually.

46:11

You've all devoted years to AI, undoubtedly, because you originally found it

46:17

to be a fascinating field of work and research and you're still leading the way.

46:22

So then what, in your view, is the best way for us to really coexist with this technology? Meredith.

46:30

Well, I think we need to coexist on our terms.

46:33

We can't be willing to buy the narratives of companies that are self-interested,

46:37

and we need to ask the hard, basic questions about who this technology serves and how.

46:44

Andrew.

46:46

When I was in high school, to make a bit of extra money,

46:49

I got a job as an office assistant where I remember just doing a lot of photocopying

46:54

and a highlight was using this shredder which was much more exciting.

46:58

So even back then as a teenager I thought, boy, if only I could build something to do all this photocopying for me,

47:03

maybe I could spend my time doing something more worthwhile.

47:06

I think that AI technologies have advanced to the point

47:09

where we have the potential to free a lot of humanity from mental drudgery,

47:15

to free us all up, to do even more valuable, more exciting, more meaningful things.

47:20

And I think it's important that we democratize access to AI, that everyone learn it,

47:24

that everyone have access to its tools, so that the benefits are fairly and widely shared.

47:31

Kojin—

47:34

Yeah, actually I was looking back on my time in college, just about five years ago.

47:39

I actually received one of the AI ethics-type educations that started that year,

47:45

and that's actually what got me into this field.

47:49

And just thinking back about that,

47:50

I think the first thing that is extremely important is education and awareness for everyone,

47:56

researchers as well as consumers, who are either developing or being subject to these technologies.

48:04

But obviously that is not sufficient.

48:06

And I think what we need to do, in addition to the longer-term discussion,

48:10

the discussion about the longer-term consequences of AI,

48:12

is to address the AI technologies that are exposing risks to us as individuals right now.

48:18

And so, taking small steps one at a time to develop the right toolkits,

48:23

the organizations and the products that are necessary to address these risks.

48:29

Yoshua, again, we know you've devoted, you know, decades of research to this field.

48:35

What do you think would be the best way for us to balance this technology?

48:41

Well, we shouldn't let things go as they are.

48:45

We should realize that we have agency, individually and collectively,

48:51

to change the course of the development of this technology for the protection of human rights,

48:58

protecting the public, protecting democracies, protecting humanity.

49:04

And it could feel like, you know,

49:07

like people trying to do something about climate change, where sometimes we can feel desperate,

49:11

but there's always something we can do to move the needle in the right direction.

49:17

Well, thank you all very much for your insight today.

49:21

Well, the number of technological tools at our disposal is always increasing,

49:26

providing ever more opportunities, and hazards.

49:29

Generative AI appears to be here to stay,

49:31

and that is why it's imperative that we understand where it's going and be prepared to respond.

49:38

Thank you very much for watching this edition of GLOBAL AGENDA from New York.