
Content provided by SquareX. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by SquareX or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, follow the process described here: https://pt.player.fm/legal.

Using LLMs for Offensive Cybersecurity | Michael Kouremetis | Be Fearless Podcast EP 11

Duration: 9:46
 

In this DEF CON 32 special, Michael Kouremetis, Principal Adversary Emulation Engineer at MITRE, discusses the Caldera project, his research on LLMs, and their implications for cybersecurity. If you’re interested in the intersection of AI and cybersecurity, this is one episode you don’t want to miss!
0:00 Introduction and the story behind Caldera
2:40 Challenges of testing LLMs for cyberattacks
5:05 What are indicators of LLMs’ offensive capabilities?
7:46 How open-source LLMs are a double-edged sword
🔔 Follow Michael and Shourya on:
https://www.linkedin.com/in/michael-kouremetis-78685931/
https://www.linkedin.com/in/shouryaps/
📖 Episode Summary:
In this episode, Michael Kouremetis from MITRE’s Cyber Lab division shares his insights into the intersection of AI and cybersecurity. Michael discusses his work on the MITRE Caldera project, an open-source adversary emulation platform designed to help organizations run red team operations and simulate real-world cyber threats. He also explores the potential risks of large language models (LLMs) in offensive cybersecurity, offering a glimpse into the research he presented at Black Hat on how AI might be used to carry out cyberattacks.
Michael dives into the challenges of testing LLMs for offensive cyber capabilities, emphasizing the need for real-world, operator-specific tests to better understand their potential. He also discusses the importance of community collaboration to enhance awareness and create standardized tests for these models.

🔥 Powered by SquareX
SquareX helps organizations detect, mitigate, and threat-hunt web attacks targeting their users in real time. Find out more about SquareX at https://www.sqrx.com/


Chapters

1. Introduction and the story behind Caldera (00:00:00)

2. Challenges of testing LLMs for cyberattacks (00:02:40)

3. What are indicators of LLMs’ offensive capabilities? (00:05:05)

4. How open-source LLMs are a double-edged sword (00:07:46)

28 episodes

