21 - Interpretability for Engineers with Stephen Casper

Duration: 1:56:02
 

Lots of people in the field of machine learning study 'interpretability', developing tools that they say give us useful information about neural networks. But how do we know if meaningful progress is actually being made? What should we want out of these tools? In this episode, I speak to Stephen Casper about these questions, as well as about a benchmark he's co-developed to evaluate whether interpretability tools can find 'Trojan horses' hidden inside neural nets.

Patreon: patreon.com/axrpodcast

Ko-fi: ko-fi.com/axrpodcast

Topics we discuss, and timestamps:

- 00:00:42 - Interpretability for engineers

- 00:00:42 - Why interpretability?

- 00:12:55 - Adversaries and interpretability

- 00:24:30 - Scaling interpretability

- 00:42:29 - Critiques of the AI safety interpretability community

- 00:56:10 - Deceptive alignment and interpretability

- 01:09:48 - Benchmarking Interpretability Tools (for Deep Neural Networks) (Using Trojan Discovery)

- 01:10:40 - Why Trojans?

- 01:14:53 - Which interpretability tools?

- 01:28:40 - Trojan generation

- 01:38:13 - Evaluation

- 01:46:07 - Interpretability for shaping policy

- 01:53:55 - Following Casper's work

The transcript: axrp.net/episode/2023/05/02/episode-21-interpretability-for-engineers-stephen-casper.html

Links for Casper:

- Personal website: stephencasper.com/

- Twitter: twitter.com/StephenLCasper

- Electronic mail: scasper [at] mit [dot] edu

Research we discuss:

- The Engineer's Interpretability Sequence: alignmentforum.org/s/a6ne2ve5uturEEQK7

- Benchmarking Interpretability Tools for Deep Neural Networks: arxiv.org/abs/2302.10894

- Adversarial Policies Beat Superhuman Go AIs: goattack.far.ai/

- Adversarial Examples Are Not Bugs, They Are Features: arxiv.org/abs/1905.02175

- Planting Undetectable Backdoors in Machine Learning Models: arxiv.org/abs/2204.06974

- Softmax Linear Units: transformer-circuits.pub/2022/solu/index.html

- Red-Teaming the Stable Diffusion Safety Filter: arxiv.org/abs/2210.04610

Episode art by Hamish Doodles: hamishdoodles.com
