
Content provided by Center for Humane Technology, Tristan Harris, Aza Raskin, and The Center for Humane Technology. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Center for Humane Technology, Tristan Harris, Aza Raskin, and The Center for Humane Technology or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://pt.player.fm/legal.

The Promise and Peril of Open Source AI with Elizabeth Seger and Jeffrey Ladish

38:44
 

As AI development races forward, a fierce debate has emerged over open source AI models. So what does it mean to open-source AI? Are we opening Pandora’s box of catastrophic risks? Or is open-sourcing AI the only way we can democratize its benefits and dilute the power of big tech?

Correction: When discussing the large language model Bloom, Elizabeth said it functions in 26 different languages. Bloom is actually able to generate text in 46 natural languages and 13 programming languages - and more are in the works.

RECOMMENDED MEDIA

Open-Sourcing Highly Capable Foundation Models

This report, co-authored by Elizabeth Seger, clarifies open-source terminology and offers a thorough analysis of the risks and benefits of open-sourcing AI.

BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B

This paper, co-authored by Jeffrey Ladish, demonstrates that it's possible to effectively undo the safety fine-tuning of Llama 2-Chat 13B for less than $200 while retaining the model's general capabilities.

Centre for the Governance of AI

Supports governments, technology companies, and other key institutions by producing research and guidance on how to respond to the challenges posed by AI.

AI: Futures and Responsibility (AI:FAR)

Aims to shape the long-term impacts of AI in ways that are safe and beneficial for humanity.

Palisade Research

Studies the offensive capabilities of today's AI systems to better understand the risk of losing control to AI systems forever.

RECOMMENDED YUA EPISODES

A First Step Toward AI Regulation with Tom Wheeler

No One is Immune to AI Harms with Dr. Joy Buolamwini

Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?

The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_


124 episodes
