Using LLMs for Offensive Cybersecurity | Michael Kouremetis | Be Fearless Podcast EP 11

9:46
Content provided by SquareX. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by SquareX or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://no.player.fm/legal.

In this DEF CON 32 special, Michael Kouremetis, Principal Adversary Emulation Engineer at MITRE, discusses the Caldera project and his research on LLMs and their implications for cybersecurity. If you’re interested in the intersection of AI and cybersecurity, this is one episode you don’t want to miss!
0:00 Introduction and the story behind Caldera
2:40 Challenges of testing LLMs for cyberattacks
5:05 What are indicators of LLMs’ offensive capabilities?
7:46 How open-source LLMs are a double-edged sword
🔔 Follow Michael and Shourya on:
https://www.linkedin.com/in/michael-kouremetis-78685931/
https://www.linkedin.com/in/shouryaps/
📖 Episode Summary:
In this episode, Michael Kouremetis from MITRE’s Cyber Lab division shares his insights into the intersection of AI and cybersecurity. Michael discusses his work on the MITRE Caldera project, an open-source adversary emulation platform designed to help organizations run red team operations and simulate real-world cyber threats. He also explores the potential risks of large language models (LLMs) in offensive cybersecurity, offering a glimpse into the research he presented at Black Hat on how AI might be used to carry out cyberattacks.
Michael dives into the challenges of testing LLMs for offensive cyber capabilities, emphasizing the need for real-world, operator-specific tests to better understand their potential. He also discusses the importance of community collaboration to enhance awareness and create standardized tests for these models.
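💻 Try it yourself: for listeners curious about the Caldera platform mentioned above, below is a minimal sketch (not from the episode) of querying a locally running MITRE Caldera server's REST API to list the abilities it can emulate. The server address, the /api/v2/abilities endpoint, the KEY header, and the ADMIN123 key are assumptions based on Caldera's public documentation and default configuration, and may differ in your deployment.

# Minimal sketch: list the abilities a local MITRE Caldera server can emulate.
# Assumptions: Caldera 4.x running on localhost:8888 with the default
# red-team API key "ADMIN123"; endpoint path and "KEY" header follow
# Caldera's documented v2 REST API and may differ in your deployment.
import requests

CALDERA_URL = "http://localhost:8888"   # assumed local server
API_KEY = "ADMIN123"                    # assumed default red-team key

def list_abilities():
    resp = requests.get(
        f"{CALDERA_URL}/api/v2/abilities",
        headers={"KEY": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    # The endpoint is expected to return a JSON array of ability objects;
    # each ability maps to an ATT&CK technique Caldera can run in an operation.
    for ability in resp.json():
        print(ability.get("technique_id"), "-", ability.get("name"))

if __name__ == "__main__":
    list_abilities()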

🔥 Powered by SquareX
SquareX helps organizations detect, mitigate, and threat-hunt web attacks targeting their users in real time. Find out more about SquareX at https://sqrx.com/


Chapters

1. Introduction and the story behind Caldera (00:00:00)

2. Challenges of testing LLMs for cyberattacks (00:02:40)

3. What are indicators of LLMs’ offensive capabilities? (00:05:05)

4. How open-source LLMs are a double-edged sword (00:07:46)
