Building trust in AI with Carol Smith

S02E11 (#321). How do we know when to trust a system? Carol Smith leads the Trust Lab team at Carnegie Mellon University, where they conduct research into making trustworthy, human-centered, and responsible AI systems. Our conversation highlights the importance of guardrails and ethical considerations in AI development, as well as the need to ask the right questions and to be critical of the work we are doing – in order to make the best systems we can for the people who are using them or who will be affected by them.

“If the system is providing the right kind of evidence of how it’s making decisions, how it’s making recommendations, if it is a situation where the people understand the capabilities of that system in that particular context, and also know what the edges are – it can’t handle this type of situation, or it will perform poorly in this type of situation – then they can begin to build what is called calibrated trust.”

– Carol Smith

(Listening time: 35 minutes, transcript)

This conversation was recorded at UXLx 2023.

The post Building trust in AI with Carol Smith appeared first on UX Podcast.
