Building AI Explanation Capability for the AI-Powered Organization

14:34
 
Content provided by MIT CISR. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by MIT CISR or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://no.player.fm/legal.

Ida Someh reads MIT CISR's July 2022 research briefing, which she co-authored with Barb Wixom and Cynthia Beath. See the text version and related content at https://cisr.mit.edu/publication/2022_0701_AIX_SomehWixomBeath. Abstract: Four characteristics of AI—unproven value, model opacity, model drift, and mindless application—make it challenging to get stakeholders to trust AI solutions. As a result, organizations that strive to become AI-powered adopt practices to produce AI solutions that are trustworthy. Over time, these practices build AI Explanation (AIX) capability: an emerging enterprise capability that arises from practices AI teams use to build stakeholder confidence in AI solutions. In this briefing, we first describe AIX capability and four sets of practices used to build it. We then draw on a case study about the AI journey of Microsoft to illustrate examples of the practices that company has leveraged.

80 episodes
