
Content provided by Daniel Filan. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Daniel Filan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://no.player.fm/legal.

33 - RLHF Problems with Scott Emmons

1:41:24
 

Reinforcement Learning from Human Feedback, or RLHF, is one of the main ways that makers of large language models make them 'aligned'. But people have long noted that there are difficulties with this approach when the models are smarter than the humans providing feedback. In this episode, I talk with Scott Emmons about his work categorizing the problems that can show up in this setting.
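For readers unfamiliar with the setup being discussed, here is a minimal sketch (in PyTorch, with illustrative names and dimensions; not code from the episode or the paper) of the pairwise preference-learning stage of RLHF: a reward model is fit to human choices between trajectory segments, and the human's limited view of those segments is exactly where the problems covered below can arise.

```python
import torch
import torch.nn as nn

# Illustrative sketch of reward learning from pairwise human preferences
# (Bradley-Terry objective). All names and dimensions are assumptions.

class RewardModel(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Sum per-step rewards to score a whole trajectory segment.
        return self.net(obs).sum(dim=1).squeeze(-1)

def preference_loss(rm, seg_a, seg_b, human_prefers_a):
    # P(a preferred over b) = sigmoid(R(a) - R(b)); maximize the likelihood
    # of the human's choices. If the human only partially observes outcomes,
    # this objective can reward hiding bad outcomes rather than avoiding them.
    logits = rm(seg_a) - rm(seg_b)
    return nn.functional.binary_cross_entropy_with_logits(
        logits, human_prefers_a.float()
    )

# Toy usage with random data: 8 segment pairs, 10 steps each, 4-dim observations.
rm = RewardModel(obs_dim=4)
seg_a, seg_b = torch.randn(8, 10, 4), torch.randn(8, 10, 4)
prefs = torch.randint(0, 2, (8,))
loss = preference_loss(rm, seg_a, seg_b, prefs)
loss.backward()
```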

Patreon: patreon.com/axrpodcast

Ko-fi: ko-fi.com/axrpodcast

The transcript: https://axrp.net/episode/2024/06/12/episode-33-rlhf-problems-scott-emmons.html

Topics we discuss, and timestamps:

0:00:33 - Deceptive inflation

0:17:56 - Overjustification

0:32:48 - Bounded human rationality

0:50:46 - Avoiding these problems

1:14:13 - Dimensional analysis

1:23:32 - RLHF problems, in theory and practice

1:31:29 - Scott's research program

1:39:42 - Following Scott's research

Scott's website: https://www.scottemmons.com

Scott's X/Twitter account: https://x.com/emmons_scott

When Your AIs Deceive You: Challenges With Partial Observability of Human Evaluators in Reward Learning: https://arxiv.org/abs/2402.17747

Other works we discuss:

AI Deception: A Survey of Examples, Risks, and Potential Solutions: https://arxiv.org/abs/2308.14752

Uncertain decisions facilitate better preference learning: https://arxiv.org/abs/2106.10394

Invariance in Policy Optimisation and Partial Identifiability in Reward Learning: https://arxiv.org/abs/2203.07475

The Humble Gaussian Distribution (aka principal component analysis and dimensional analysis): http://www.inference.org.uk/mackay/humble.pdf

Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!: https://arxiv.org/abs/2310.03693

Episode art by Hamish Doodles: hamishdoodles.com
