
Episode 36: About That 'Dangerous Capabilities' Fanfiction (feat. Ali Alkhatib), June 24, 2024

Duration: 1:02:00
 

When is a research paper not a research paper? When a big tech company uses a preprint server to dodge peer review -- in this case, of its wild speculations on the 'dangerous capabilities' of large language models. Ali Alkhatib joins Emily to explain why a recent Google DeepMind document hunting for evidence that LLMs might intentionally deceive us was bad science -- and why it is nonetheless still influencing the public conversation about AI.

Ali Alkhatib is a computer scientist and former director of the University of San Francisco’s Center for Applied Data Ethics. His research focuses on human-computer interaction, and why our technological problems are really social – and why we should apply social science lenses to data work, algorithmic justice, and even the errors and reality distortions inherent in AI models.

References:

Google DeepMind paper-like object: Evaluating Frontier Models for Dangerous Capabilities

Fresh AI Hell:

Hacker tool extracts all the data collected by Windows' 'Recall' AI

In NYC, ShotSpotter calls are 87 percent false alarms

"AI" system to make callers sound less angry to call center workers

Anthropic's Claude 3.5 Sonnet evaluated for "graduate level reasoning"

OpenAI's Mira Murati says "AI" will have 'PhD-level' intelligence

OpenAI's Mira Murati also says AI will take some creative jobs -- jobs that maybe shouldn't have been there in the first place

You can check out future livestreams at https://twitch.tv/DAIR_Institute.
Subscribe to our newsletter via Buttondown.

Follow us!
Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.
