Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://pt.player.fm/legal.
“AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work” by Rohin Shah, Seb Farquhar, Anca Dragan

18:39
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

We wanted to share a recap of our recent outputs with the AF community. Below, we fill in some details about what we have been working on, what motivated us to do it, and how we thought about its importance. We hope that this will help people build off things we have done and see how their work fits with ours.
Who are we?
We’re the main team at Google DeepMind working on technical approaches to existential risk from AI systems. Since our last post, we’ve evolved into the AGI Safety & Alignment team, which we think of as AGI Alignment (with subteams like mechanistic interpretability, scalable oversight, etc.), and Frontier Safety (working on the Frontier Safety Framework, including developing and running dangerous capability evaluations). We’ve also been growing since our last post: by 39% last year [...]
---
Outline:
(00:32) Who are we?
(01:32) What have we been up to?
(02:16) Frontier Safety
(02:38) FSF
(04:05) Dangerous Capability Evaluations
(05:12) Mechanistic Interpretability
(08:54) Amplified Oversight
(09:23) Theoretical Work on Debate
(10:32) Empirical Work on Debate
(11:37) Causal Alignment
(12:47) Emerging Topics
(14:57) Highlights from Our Collaborations
(17:07) What are we planning next?
---
First published: August 20th, 2024
Source: https://www.lesswrong.com/posts/79BPxvSsjzBkiSyTq/agi-safety-and-alignment-at-google-deepmind-a-summary-of
---
Narrated by TYPE III AUDIO.
