Is A.I. the Problem? Or Are We?


If you talk to many of the people working on the cutting edge of artificial intelligence research, you’ll hear that we are on the cusp of a technology that will be far more transformative than simply computers and the internet, one that could bring about a new industrial revolution and usher in a utopia — or perhaps pose the greatest threat in our species’s history.

Others, of course, will tell you those folks are nuts.

One of my projects this year is to get a better handle on this debate. A.I., after all, isn’t some force only future human beings will face. It’s here now, deciding what advertisements are served to us online, how bail is set after we commit crimes and whether our jobs will exist in a couple of years. It is both shaped by and reshaping politics, economics and society. It’s worth understanding.

Brian Christian’s recent book “The Alignment Problem” is the best book on the key technical and moral questions of A.I. that I’ve read. At its center is the term from which the book gets its name. “Alignment problem” originated in economics as a way to describe the fact that the systems and incentives we create often fail to align with our goals. And that’s a central worry with A.I., too: that we will create something to help us that will instead harm us, in part because we didn’t understand how it really worked or what we had actually asked it to do.

So this conversation is about the various alignment problems associated with A.I. We discuss what machine learning is and how it works, how governments and corporations are using it right now, what it has taught us about human learning, the ethics of how humans should treat sentient robots, the all-important question of how A.I. developers plan to make profits, what kinds of regulatory structures are possible when we’re dealing with algorithms we don’t really understand, the way A.I. reflects and then supercharges the inequities that exist in our society, the saddest Super Mario Bros. game I’ve ever heard of, why the problem of automation isn’t so much job loss as dignity loss and much more.

Mentioned:

“Human-level control through deep reinforcement learning”

“Some Moral and Technical Consequences of Automation” by Norbert Wiener

Recommendations:

"What to Expect When You're Expecting Robots" by Julie Shah and Laura Major

"Finite and Infinite Games" by James P. Carse

"How to Do Nothing" by Jenny Odell

If you enjoyed this episode, check out my conversation with Alison Gopnik on what we can all learn from studying the minds of children.

You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein.

Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

“The Ezra Klein Show” is produced by Annie Galvin, Jeff Geld and Rogé Karma; fact-checking by Michelle Harris; original music by Isaac Jones; mixing by Jeff Geld; audience strategy by Shannon Busta. Special thanks to Kristin Lin.

