Content provided by MLSecOps.com. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by MLSecOps.com or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, follow the process described at https://pt.player.fm/legal.

A Holistic Approach to Understanding the AI Lifecycle and Securing ML Systems: Protecting AI Through People, Processes & Technology; With Guest: Rob van der Veer

Duration: 29:25
 
Manage episode 376231457 series 3461851

Joining us for the first time as a guest host is Protect AI's CEO and founder, Ian Swanson. Ian is joined this week by Rob van der Veer, a pioneer in AI and security. Earlier this year, Rob gave a presentation at Global AppSec Dublin called "Attacking and Protecting Artificial Intelligence," which was a major inspiration for this episode. In it, Rob discusses how AI production systems often lack the security considerations and processes found in traditional software development, and the unique challenges of building security into AI and machine learning systems.

In this episode, Ian and Rob dive into practical threats to ML systems, the transition from MLOps to MLSecOps, the [upcoming] ISO 5338 standard on AI engineering, and what organizations can do to mature their AI/ML security practices.
This is a great dialogue and exchange of ideas between two highly knowledgeable people in the industry. Thank you to Ian and Rob for joining us on The MLSecOps Podcast this week.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform


40 episodes

