
What if AI could detect your lies?

00:04

This is something you won't like.

00:06

But here everyone is a liar.

00:12

Don't take it too personally.

00:14

What I mean is that lying is very common

00:17

and it is now well-established that we lie on a daily basis.

00:22

Indeed, scientists have estimated that we tell around two lies per day,

00:27

although, of course, it's not that easy to establish those numbers with certainty.

00:32

And, well, let me introduce myself.

00:34

I'm Riccardo, I'm a psychologist and a PhD candidate,

00:38

and for my research project I study how good people are at detecting lies.

00:44

Seems cool, right? But I'm not joking.

00:47

And you might wonder why a psychologist was then invited

00:51

to give a TED Talk about AI.

00:55

And well, I'm here today

00:57

because I'm about to tell you how AI could be used to detect lies.

01:03

And you will be very surprised by the answer.

01:06

But first of all, when is it relevant to detect lies?

01:12

A first clear example that comes to my mind

01:15

is in the criminal investigation field.

01:18

Imagine you are a police officer and you want to interview a suspect.

01:23

And the suspect is providing some information to you.

01:26

And this information is actually leading to the next steps of the investigation.

01:31

We certainly want to understand if the suspect is reliable

01:36

or if they are trying to deceive us.

01:40

Then another example comes to my mind,

01:43

and I think this really affects all of us.

01:46

So please raise your hands

01:48

if you would like to know if your partner cheated on you.

01:52

(Laughter)

01:53

And don't be shy because I know.

01:55

(Laughter)

01:56

Yeah. You see?

01:59

It's very relevant.

02:02

However, I have to say that we as humans

02:05

are very bad at detecting lies.

02:08

In fact, many studies have already confirmed

02:11

that when people are asked to judge

02:14

if someone is lying or not

02:15

without knowing much about that person or the context,

02:19

people's accuracy is no better than the chance level,

02:23

about the same as flipping a coin.

02:27

You might also wonder

02:28

if experts, such as police officers, prosecutors

02:33

and even psychologists

02:35

are better at detecting lies.

02:37

And the answer is complex,

02:39

because experience alone doesn't seem to be enough

02:43

to help detect lies accurately.

02:45

It might help, but it's not enough.

02:49

To give you some numbers.

02:51

In a well-known meta-analysis that scholars conducted in 2006,

02:56

they found that naive judges' accuracy

02:59

was on average around 54 percent.

03:03

Experts perform only slightly better,

03:07

with an accuracy rate around 55 percent.

03:11

(Laughter)

03:12

Not that impressive, right?

03:15

And ...

03:18

Those numbers actually come from the analysis

03:21

of the results of 108 studies,

03:23

meaning that these findings are quite robust.

03:26

And of course, the debate is also much more complicated than this

03:30

and also more nuanced.

03:32

But here the main take-home message

03:34

is that humans are not good at detecting lies.

03:38

What if we could create an AI tool

03:43

where everyone can detect if someone else is lying?

03:48

This is not possible yet, so please don't panic.

03:50

(Laughter)

03:52

But this is what we tried to do in a recent study

03:55

that I did together with my brilliant colleagues

03:58

whom I need to thank.

03:59

And actually, to help you understand what we did in our study,

04:06

I need to first introduce you to some technical concepts

04:11

and to the main characters of this story:

04:15

Large language models.

04:17

Large language models are AI systems

04:20

designed to generate outputs in natural language

04:23

in a way that almost mimics human communication.
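As a small, hedged illustration of what generating natural-language output looks like in practice, here is a minimal sketch using the Hugging Face transformers library with a FLAN-T5 checkpoint (the model that comes up later in this talk); the checkpoint size and the prompt are assumptions for illustration only.

```python
# Minimal sketch: ask a pretrained language model a question and print its answer.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")
result = generator("Explain in one sentence what a lie is.")
print(result[0]["generated_text"])
```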

04:27

If you are wondering how we teach these AI systems to detect lies,

04:31

here is where something called fine-tuning comes in.

04:34

But let's use a metaphor.

04:36

Imagine large language models as students

04:40

who have gone through years of school,

04:42

learning a little bit about everything,

04:44

such as language, concepts, facts.

04:48

But when it's time for them to specialize,

04:51

like in law school or in medical school,

04:54

they need more focused training.

04:56

Fine-tuning is that extra education.

05:00

And of course, large language models don't learn as humans do.

05:03

But this is just to give you the main idea.

05:07

Then, just as for training students you need books, lectures, examples,

05:14

for training large language models you need datasets.

05:19

And for our study we considered three datasets,

05:23

one about personal opinions,

05:25

one about past autobiographical memories

05:28

and one about future intentions.

05:31

These datasets were already available from previous studies

05:34

and contained both truthful and deceptive statements.

05:39

Typically, you collect these types of statements

05:41

by asking participants to tell the truth or to lie about something.

05:45

For example, if I were a participant in the truthful condition,

05:49

and the task was

05:51

"tell me about your past holidays,"

05:53

then I would tell the researcher about my previous holidays in Vietnam,

05:58

and here we have a slide to prove it.

06:01

For the deceptive condition

06:03

the researchers would randomly pick some of you who have never been to Vietnam,

06:06

and ask you to make up a story

06:09

and convince someone else that you've really been to Vietnam.

06:12

And this is how it typically works.
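To make the format concrete, here is a purely hypothetical illustration of what a couple of rows in such a dataset might look like; the real datasets come from earlier published studies, and the field names and statements below are invented.

```python
# Hypothetical example rows; field names and statements are invented for illustration.
example_rows = [
    {
        "context": "past autobiographical memory",
        "statement": "Last summer I spent two weeks travelling around Vietnam with a friend.",
        "label": "truthful",   # written by someone who actually took the trip
    },
    {
        "context": "past autobiographical memory",
        "statement": "I visited Vietnam in 2019 and hiked around Sapa for a week.",
        "label": "deceptive",  # made up by a participant who has never been to Vietnam
    },
]
```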

06:16

And, as you might know, in all university courses,

06:21

after lectures you have exams.

06:23

And likewise after training our AI models,

06:27

we would like to test them.

06:29

And the procedure that we followed,

06:31

which is actually the typical one, is the following.

06:34

So we picked some statements randomly from each dataset

06:39

and we set them aside.

06:41

So the model never saw these statements during the training phase.

06:44

And only after the training was completed,

06:47

we used them as a test, as the final exam.
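As a rough sketch of that hold-out procedure, assuming a hypothetical CSV file with "statement" and "label" columns and the Hugging Face datasets library (the file name, columns and split size are assumptions, not details from the study):

```python
# Minimal sketch of holding out a random test set before fine-tuning.
from datasets import load_dataset

dataset = load_dataset("csv", data_files={"all": "memories.csv"})["all"]

# Randomly set aside 20% of the statements; the model never sees these during training.
split = dataset.train_test_split(test_size=0.2, seed=42)
train_set = split["train"]  # used for fine-tuning
test_set = split["test"]    # used only afterwards, as the "final exam"
```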

06:52

But who was our student then?

06:55

In this case, it was a large language model

06:58

developed by Google

06:59

and called FLAN-T5.

07:01

Flanny, for friends.

07:03

And now that we have all the pieces of the process together,

07:07

we can actually dig deep into our study.
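Before the experiments, here is a minimal sketch of what such a fine-tuning step could look like with the Hugging Face transformers library, framing lie detection as a text-to-text task where FLAN-T5 answers with a single word; the checkpoint size, prompt wording, hyperparameters and file name are illustrative assumptions, not the study's actual setup.

```python
# Sketch only: fine-tune FLAN-T5 to answer "truthful" or "deceptive" for each statement.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

MODEL_NAME = "google/flan-t5-base"  # the talk only says FLAN-T5; the size is an assumption
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Hypothetical CSV with "statement" and "label" ("truthful"/"deceptive") columns.
train_data = load_dataset("csv", data_files={"train": "opinions_train.csv"})["train"]

def preprocess(example):
    # Phrase the task as an instruction so the model replies with one word.
    prompt = f"Is the following statement truthful or deceptive?\n{example['statement']}"
    model_inputs = tokenizer(prompt, truncation=True, max_length=512)
    labels = tokenizer(text_target=example["label"], truncation=True, max_length=4)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = train_data.map(preprocess, remove_columns=train_data.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="flan_t5_deception",
    per_device_train_batch_size=8,  # illustrative hyperparameters
    learning_rate=5e-5,
    num_train_epochs=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```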

07:12

Our study was composed of three main experiments.

07:17

For the first experiment, we fine-tuned our model, our FLAN-T5,

07:22

on each single dataset separately.

07:27

For the second experiment,

07:29

we fine-tuned our model on two datasets together,

07:34

and we tested it on the third remaining one,

07:37

and we used all three possible combinations.

07:41

For the final experiment,

07:43

we fine-tuned the model on a new, larger training set

07:47

that we obtained by combining all three datasets together.
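As a sketch of how those three set-ups could be arranged in code, reusing the kind of hypothetical CSV files and hold-out split shown earlier (all file names and the split size are assumptions):

```python
# Sketch of the three experimental set-ups on three hypothetical labelled datasets.
from datasets import concatenate_datasets, load_dataset

def load_csv(path):
    # Hypothetical CSVs with "statement" and "label" columns.
    return load_dataset("csv", data_files={"all": path})["all"]

opinions = load_csv("opinions.csv")
memories = load_csv("memories.csv")
intentions = load_csv("intentions.csv")

# Experiment 1: for each dataset on its own, hold out a test split,
# fine-tune on the rest and evaluate on the held-out statements.
experiment_1 = [ds.train_test_split(test_size=0.2, seed=42)
                for ds in (opinions, memories, intentions)]

# Experiment 2: fine-tune on two datasets together and test on the third,
# entirely unseen one (all three combinations).
experiment_2 = [
    (concatenate_datasets([opinions, memories]), intentions),
    (concatenate_datasets([opinions, intentions]), memories),
    (concatenate_datasets([memories, intentions]), opinions),
]

# Experiment 3: combine all three datasets into one larger training set,
# again holding out a random test portion as the final exam.
combined = concatenate_datasets([opinions, memories, intentions])
experiment_3 = combined.train_test_split(test_size=0.2, seed=42)
```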

07:52

The results were quite interesting

07:55

because what we found was that in the first experiment,

07:59

FLAN-T5 achieved accuracies ranging between 70 percent and 80 percent.

08:06

However, in the second experiment,

08:09

FLAN-T5's accuracy dropped to almost 50 percent.

08:15

And then, surprisingly, in the third experiment,

08:18

FLAN-T5's accuracy rose back to almost 80 percent.
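For context, here is a hedged sketch of how such accuracy figures can be computed once a model is fine-tuned: generate the model's one-word answer for each held-out statement and compare it with the label. The prompt wording and data format match the earlier sketches and are assumptions, not the authors' actual evaluation code.

```python
# Sketch: evaluate a fine-tuned text-to-text model on held-out statements.
import torch

def predict_label(model, tokenizer, statement):
    prompt = f"Is the following statement truthful or deceptive?\n{statement}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True).strip().lower()

def accuracy(model, tokenizer, test_set):
    # test_set: rows with "statement" and "label" fields, as in the earlier sketches.
    correct = sum(
        predict_label(model, tokenizer, row["statement"]) == row["label"]
        for row in test_set
    )
    return correct / len(test_set)
```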

08:23

But what does this mean?

08:26

What can we learn from these results?

08:31

From experiments one and three

08:33

we learn that language models

08:35

can effectively classify statements as deceptive,

08:40

outperforming human benchmarks

08:42

and aligning with previous machine learning

08:44

and deep learning models

08:45

that earlier studies trained on the same datasets.

08:49

However, from the second experiment,

08:52

we see that language models struggle

08:55

in generalizing this knowledge, this learning across different contexts.

09:00

And this is apparently because

09:03

there is no single universal rule of deception

09:06

that we can easily apply in every context,

09:09

but linguistic cues of deception are context-dependent.

09:15

And from the third experiment,

09:18

we learned that actually language models

09:21

can generalize well across different contexts,

09:24

as long as they have been previously exposed to examples

09:28

during the training phase.

09:30

And I think this sounds like good news.

09:34

But while this means that language models can be effectively applied

09:41

to real-life lie detection,

09:44

more replication is needed because a single study is never enough

09:48

for us all to have these AI systems on our smartphones from tomorrow

09:52

and start detecting other people's lies.

09:56

But as a scientist, I have a vivid imagination

09:59

and I would like to dream big.

10:01

And I would also like to bring you with me on this futuristic journey for a while.

10:05

So please imagine with me living in a world

10:09

where this lie detection technology is well-integrated in our life,

10:13

making everything from national security to social media a little bit safer.

10:19

And imagine having this AI system that could actually spot fake opinions.

10:24

From tomorrow, we could tell

10:26

when a politician is actually saying one thing

10:30

but truly believes something else.

10:31

(Laughter)

10:34

And what about the border security context,

10:37

where people are asked about their intentions and reasons

10:40

for crossing borders or boarding planes?

10:46

Well, with these systems,

10:48

we could actually spot malicious intentions

10:51

before they are even acted on.

10:54

And what about the recruiting process?

10:57

(Laughter)

10:59

We heard about this already.

11:01

But actually, companies could employ this AI

11:04

to distinguish those who are really passionate about the role

11:08

from those who are just trying to say the right things to get the job.

11:13

And finally, we have social media.

11:16

Scammers trying to deceive you or to steal your identity.

11:19

All gone.

11:21

And someone else may raise concerns about fake news,

11:24

and, well, a language model could automatically read the news,

11:28

flag articles as deceptive or fake,

11:31

and we could even provide users with a credibility score

11:35

for the information they read.

11:38

It sounds like a brilliant future, right?

11:42

(Laughter)

11:44

Yes, but ...

11:47

all great progress comes with risks.

11:51

As much as I'm excited about this future,

11:54

I think we need to be careful.

11:58

If we are not cautious, in my view,

12:01

we could end up in a world

12:02

where people might just blindly believe AI outputs.

12:07

And I'm afraid this means that people will be more likely

12:11

to accuse others of lying just because an AI says so.

12:17

And I'm not the only one with this view

12:19

because another study has already shown this.

12:24

In addition, if we totally rely on this lie detection technology

12:29

to decide whether someone else is lying or not,

12:31

we risk losing another key value in society.

12:36

We lose trust.

12:38

We won't need to trust people anymore,

12:40

because what we will do is just ask an AI to double check for us.

12:47

But are we really willing to blindly believe AI

12:51

and give up our critical thinking?

12:55

I think that's the future we need to avoid.

13:00

My hope for the future is more interpretability.

13:04

And I'm about to tell you what I mean.

13:06

Similar to when we look at reviews online,

13:09

we can look at the total number of stars a place has,

13:13

but we can also look in more detail at the positive and negative reviews,

13:18

and try to understand what the positive sides are,

13:20

but also what might have gone wrong,

13:23

to eventually form our own personal idea

13:27

of whether that is the place where we want to go,

13:29

where we want to be.

13:32

Likewise, imagine a world where AI doesn't just offer conclusions,

13:36

but also provides clear and understandable explanations

13:40

behind its decisions.

13:43

And I envision a future

13:45

where this lie detection technology

13:47

wouldn't just provide us with a simple judgment,

13:51

but also with clear explanations for why it thinks someone else is lying.

13:57

And I would like a future where, yes,

14:00

this lie detection technology is integrated in our life,

14:04

or AI technology in general,

14:07

but still, at the same time,

14:10

we are able to think critically

14:13

and decide when we want to trust in AI judgment

14:16

or when we want to question it.

14:20

To conclude,

14:22

I think the future of using AI for lie detection

14:26

is not just about technological advancement,

14:30

but about enhancing our understanding and fostering trust.

14:35

It's about developing tools that don't replace human judgment

14:39

but empower it,

14:41

ensuring that we remain at the helm.

14:45

Let's not step into a future of blind reliance on technology.

14:49

Let's commit to deep understanding and ethical use,

14:53

and we'll pursue the truth.

14:56

(Applause)
