
Content provided by Dev and Doc. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Dev and Doc or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, follow the process described here: https://pt.player.fm/legal.

#07 A conversation on safety and risks of AI models | AI Safety Summit 2023

51:36
 
Manage episode 428686722 series 3585389
As the AI Safety Summit nears in England, the UK positions itself as a leader in AI safety, but what does this mean? Is AI safety all about preventing doom for the human race? 🤖Dev and Doc👨🏻‍⚕️ are here to break down this fascinating topic.

Dev and Doc is a podcast where developers and doctors join forces to deep dive into AI in healthcare. Together, we can build models that matter.

👨🏻‍⚕️Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/
🤖Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr

Hey! If you are enjoying our conversations, reach out and share your thoughts and journey with us. Don't forget to subscribe while you're here :)

00:00 Intro
01:28 Start
06:30 AI safety definition
10:21 AI safety vs AI regulation
13:05 The UK positions itself as a leader with the AI Safety Summit (featuring Rishi Sunak)
15:50 AI Safety Summit - responsible scaling
16:50 What does this mean for an AI researcher? When should we slow down research?
19:40 Yann LeCun: we will get nowhere by simply scaling; the transformer architecture will NOT lead to AGI
25:30 Tackling AI safety with model evaluation, red teams, and ethical hackers
33:36 Trusts sharing data, federated learning platforms, over-regulation
38:57 Google already has all of your data
41:49 There is a lack of research on AI safety

The podcast 🎙️
🔊 Spotify: https://open.spotify.com/show/3QO5Lr3w4Rd6lqwlfKDaB7?si=e7915d844994403e
📙 Substack: https://aiforhealthcare.substack.com/
🎞️ Editor - Dragan Kraljević - https://www.instagram.com/dragan_kraljevic/
🎨 Brand design and art direction - Ana Grigorovici - https://www.behance.net/anagrigorovici027d

24 episodes


