Artwork

Innhold levert av Kashif Manzoor. Alt podcastinnhold, inkludert episoder, grafikk og podcastbeskrivelser, lastes opp og leveres direkte av Kashif Manzoor eller deres podcastplattformpartner. Hvis du tror at noen bruker det opphavsrettsbeskyttede verket ditt uten din tillatelse, kan du følge prosessen skissert her https://no.player.fm/legal.
Player FM - Podcast-app
Gå frakoblet med Player FM -appen!

AI Security and Agentic Risks Every Business Needs to Understand with Alexander Schlager

27:42
 
Del
 

Manage episode 506108095 series 2922369
Innhold levert av Kashif Manzoor. Alt podcastinnhold, inkludert episoder, grafikk og podcastbeskrivelser, lastes opp og leveres direkte av Kashif Manzoor eller deres podcastplattformpartner. Hvis du tror at noen bruker det opphavsrettsbeskyttede verket ditt uten din tillatelse, kan du følge prosessen skissert her https://no.player.fm/legal.

In this episode of Open Tech Talks, we delve into the critical topics of AI security, explainability, and the risks associated with agentic AI. As organizations adopt Generative AI and Large Language Models (LLMs), ensuring safety, trust, and responsible usage becomes essential. This conversation covers how runtime protection works as a proxy between users and AI models, why explainability is key to user trust, and how cybersecurity teams are becoming central to AI innovation.

Chapters

00:00 Introduction to AI Security and eIceberg 02:45 The Evolution of AI Explainability 05:58 Runtime Protection and AI Safety 07:46 Adoption Patterns in AI Security 10:51 Agentic AI: Risks and Management 13:47 Building Effective Agentic AI Workflows 16:42 Governance and Compliance in AI 19:37 The Role of Cybersecurity in AI Innovation 22:36 Lessons Learned and Future Directions

Episode # 166

Today's Guest: Alexander Schlager, Founder and CEO of AIceberg.ai

He's founded a next-generation AI cybersecurity company that's revolutionizing how we approach digital defense. With a strong background in enterprise tech and a visionary outlook on the future of AI, Alexander is doing more than just developing tools — he's restoring trust in an era of automation.

What Listeners Will Learn:

  • Why real-time AI security and runtime protection are essential for safe deployments
  • How explainable AI builds trust with users and regulators
  • The unique risks of agentic AI and how to manage them responsibly
  • Why AI safety and governance are becoming strategic priorities for companies
  • How education, awareness, and upskilling help close the AI skills gap
  • Why natural language processing (NLP) is becoming the default interface for enterprise technology

Keywords:

AI security, generative AI, agentic AI, explainability, runtime protection, cybersecurity, compliance, AI governance, machine learning

Resources:

  continue reading

177 episoder

Artwork
iconDel
 
Manage episode 506108095 series 2922369
Innhold levert av Kashif Manzoor. Alt podcastinnhold, inkludert episoder, grafikk og podcastbeskrivelser, lastes opp og leveres direkte av Kashif Manzoor eller deres podcastplattformpartner. Hvis du tror at noen bruker det opphavsrettsbeskyttede verket ditt uten din tillatelse, kan du følge prosessen skissert her https://no.player.fm/legal.

In this episode of Open Tech Talks, we delve into the critical topics of AI security, explainability, and the risks associated with agentic AI. As organizations adopt Generative AI and Large Language Models (LLMs), ensuring safety, trust, and responsible usage becomes essential. This conversation covers how runtime protection works as a proxy between users and AI models, why explainability is key to user trust, and how cybersecurity teams are becoming central to AI innovation.

Chapters

00:00 Introduction to AI Security and eIceberg 02:45 The Evolution of AI Explainability 05:58 Runtime Protection and AI Safety 07:46 Adoption Patterns in AI Security 10:51 Agentic AI: Risks and Management 13:47 Building Effective Agentic AI Workflows 16:42 Governance and Compliance in AI 19:37 The Role of Cybersecurity in AI Innovation 22:36 Lessons Learned and Future Directions

Episode # 166

Today's Guest: Alexander Schlager, Founder and CEO of AIceberg.ai

He's founded a next-generation AI cybersecurity company that's revolutionizing how we approach digital defense. With a strong background in enterprise tech and a visionary outlook on the future of AI, Alexander is doing more than just developing tools — he's restoring trust in an era of automation.

What Listeners Will Learn:

  • Why real-time AI security and runtime protection are essential for safe deployments
  • How explainable AI builds trust with users and regulators
  • The unique risks of agentic AI and how to manage them responsibly
  • Why AI safety and governance are becoming strategic priorities for companies
  • How education, awareness, and upskilling help close the AI skills gap
  • Why natural language processing (NLP) is becoming the default interface for enterprise technology

Keywords:

AI security, generative AI, agentic AI, explainability, runtime protection, cybersecurity, compliance, AI governance, machine learning

Resources:

  continue reading

177 episoder

Alle episoder

×
 
Loading …

Velkommen til Player FM!

Player FM scanner netter for høykvalitets podcaster som du kan nyte nå. Det er den beste podcastappen og fungerer på Android, iPhone og internett. Registrer deg for å synkronisere abonnement på flere enheter.

 

Hurtigreferanseguide

Copyright 2025 | Personvern | Vilkår for bruk | | opphavsrett
Lytt til dette showet mens du utforsker
Spill