#431 – Roman Yampolskiy: Dangers of Superintelligent AI

2:22:39
 
Content provided by Lex Fridman. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Lex Fridman or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described here: https://pt.player.fm/legal.

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
Yahoo Finance: https://yahoofinance.com
MasterClass: https://masterclass.com/lexpod to get 15% off
NetSuite: http://netsuite.com/lex to get a free product tour
LMNT: https://drinkLMNT.com/lex to get a free sample pack
Eight Sleep: https://eightsleep.com/lex to get $350 off

Transcript: https://lexfridman.com/roman-yampolskiy-transcript

EPISODE LINKS:
Roman’s X: https://twitter.com/romanyam
Roman’s Website: http://cecs.louisville.edu/ry
Roman’s AI book: https://amzn.to/4aFZuPb

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
– Check out the sponsors above; it's the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click a timestamp to jump to that time.
(00:00) – Introduction
(09:12) – Existential risk of AGI
(15:25) – Ikigai risk
(23:37) – Suffering risk
(27:12) – Timeline to AGI
(31:44) – AGI Turing test
(37:06) – Yann LeCun and open source AI
(49:58) – AI control
(52:26) – Social engineering
(54:59) – Fearmongering
(1:04:49) – AI deception
(1:11:23) – Verification
(1:18:22) – Self-improving AI
(1:30:34) – Pausing AI development
(1:36:51) – AI Safety
(1:46:35) – Current AI
(1:51:58) – Simulation
(1:59:16) – Aliens
(2:00:50) – Human mind
(2:07:10) – Neuralink
(2:16:15) – Hope for the future
(2:20:11) – Meaning of life
