EP185 SAIF-powered Collaboration to Secure AI: CoSAI and Why It Matters to You
Manage episode 433820956 series 2892548
Guest:
David LaBianca, Senior Engineering Director, Google
Topics:
The universe of AI risks is broad and deep. We’ve made a lot of headway with our SAIF framework: can you a) give us a 90-second tour of SAIF, b) share how it’s gotten so much traction, and c) talk about where we go next with it?
The Coalition for Secure AI (CoSAI) is a collaborative effort to address AI security challenges. What are Google's specific goals and expectations for CoSAI, and how will its success be measured in the long term?
Something we love about CoSAI is that we involved some unexpected folks, notably Microsoft and OpenAI. How did that come about?
How do we plan to work with existing organizations, such as Frontier Model Forum (FMF) and Open Source Security Foundation (OpenSSF)? Does this also complement emerging AI security standards?
AI is moving quickly. How do we intend to keep up with the pace of change when it comes to emerging threat techniques and actors in the landscape?
What do we expect to see out of CoSAI work and when? What should people be looking forward to and what are you most looking forward to releasing from the group?
We have proposed projects for CoSAI, including developing a defender's framework and addressing software supply chain security for AI systems. How can others use them? In other words, if I am a mid-sized bank CISO, do I care? How do I benefit from it?
An off-the-cuff question: how do you do AI governance well?
Resources: