
Content provided by re:verb, Calvin Pollak, and Alex Helberg. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by re:verb, Calvin Pollak, and Alex Helberg or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://no.player.fm/legal.

E97: re:joinder - OI: Oprahficial Intelligence

1:27:57
 

On today’s show, we once again fire up our rhetorical stovetop to roast some dubious public argumentation: Oprah Winfrey’s recent ABC special, “AI and the Future of Us.” In this re:joinder episode, Alex and Calvin listen through and discuss audio clips from the show featuring a wide array of guests - from corporate leaders like Sam Altman and Bill Gates to technologists like Aza Raskin and Tristan Harris, and even FBI Director Christopher Wray - and dismantle some of the mystifying rhetorical hype tropes that they (and Oprah) circulate about the proliferation of large language models (LLMs) and other “AI” technologies into our lives. Along the way, we use rhetorical tools from previous episodes, such as the stasis framework, to show which components of the debate around AI are glossed over, and which are given center stage. We also bring our own sociopolitical and media analysis to the table to help contextualize (and correct) the presenters’ claims about the speed of large language model development, the nature of its operation, and the threats - both real and imagined - that this new technological apparatus might present to the world.

We conclude with a reflection on the words of novelist Marilynne Robinson, the show’s final guest, who prompts us to think about the many ways in which “difficulty is the point” when it comes to human work and developing autonomy. Meanwhile, the slick and tempting narratives promoting “ease” and “efficiency” with AI technology might actually belie a much darker vision of “the future of us.”

Join us as we critique and rejoin some of the most common tropes of AI hype, all compacted into one primetime special. In the spirit of automating consumptive labor, we watched it so you don’t have to!

Works & Concepts cited in this episode:

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? 🦜 In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).

Rather, S. (2024, 25 Apr.). How is one of America's biggest spy agencies using AI? We're suing to find out. ACLU. [On the potential harms of the US NSA implementing AI into existing dragnet surveillance programs domestically & internationally]

Robins-Early, N. (2024, 3 Apr.). George Carlin’s estate settles lawsuit over comedian’s AI doppelganger. The Guardian.

re:verb Episode 12: re:blurb - Stasis Theory

re:verb Episode 17: re:blurb - Dialogicality

re:verb Episode 75: A.I. Writing and Academic Integrity (w/ Dr. S. Scott Graham)

re:verb Episode 91: Thinking Rhetorically (w/ Robin Reames) [Episode referencing the importance of following the stasis categories in public debates]


