Content provided by Gus Docker and Future of Life Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and delivered directly by Gus Docker and Future of Life Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://no.player.fm/legal.

Special: Jaan Tallinn on Pausing Giant AI Experiments

1:41:08
 

Manage episode 368109392 series 1334308
Innhold levert av Gus Docker and Future of Life Institute. Alt podcastinnhold, inkludert episoder, grafikk og podcastbeskrivelser, lastes opp og leveres direkte av Gus Docker and Future of Life Institute eller deres podcastplattformpartner. Hvis du tror at noen bruker det opphavsrettsbeskyttede verket ditt uten din tillatelse, kan du følge prosessen skissert her https://no.player.fm/legal.
On this special episode of the podcast, Jaan Tallinn talks with Nathan Labenz about Jaan's model of AI risk, the future of AI development, and pausing giant AI experiments.

Timestamps:
0:00 Nathan introduces Jaan
4:22 AI safety and Future of Life Institute
5:55 Jaan's first meeting with Eliezer Yudkowsky
12:04 Future of AI evolution
14:58 Jaan's investments in AI companies
23:06 The emerging danger paradigm
26:53 Economic transformation with AI
32:31 AI supervising itself
34:06 Language models and validation
38:49 Lack of insight into evolutionary selection process
41:56 Current estimate for life-ending catastrophe
44:52 Inverse scaling law
53:03 Our luck given the softness of language models
55:07 Future of language models
59:43 The Moore's law of mad science
1:01:45 GPT-5 type project
1:07:43 The AI race dynamics
1:09:43 AI alignment with the latest models
1:13:14 AI research investment and safety
1:19:43 What a six-month pause buys us
1:25:44 AI passing the Turing Test
1:28:16 AI safety and risk
1:32:01 Responsible AI development
1:40:03 Neuralink implant technology

218 episodes




 
