Worst-Case Thinking in AI Alignment
Alternative title: “When should you assume that what could go wrong, will go wrong?”

Thanks to Mary Phuong and Ryan Greenblatt for helpful suggestions and discussion, and Akash Wasil for some edits.

In discussions of AI safety, people often propose the assumption that something goes as badly as possible. Eliezer Yudkowsky in particular has argued for the importance of security mindset when thinking about AI alignment. I think there are several distinct reasons that this might be the right assumption to make in a particular situation. But I think people often conflate these reasons, and I think that this causes confusion and mistaken thinking. So I want to spell out some distinctions.

Throughout this post, I give a bunch of specific arguments about AI alignment, including one argument that I think I was personally getting wrong until I noticed my mistake yesterday (which was my impetus for thinking about this topic more and then writing this post). I think I’m probably still thinking about some of my object-level examples wrong, and hope that if so, commenters will point out my mistakes.
Original text:
https://www.lesswrong.com/posts/yTvBSFrXhZfL8vr5a/worst-case-thinking-in-ai-alignment
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
---
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.
Chapters
1. Worst-Case Thinking in AI Alignment (00:00:00)
2. My list of reasons to maybe use worst-case thinking (00:01:26)
3. Differences between these arguments (00:09:07)