An Investigation of Model-Free Planning
The field of reinforcement learning (RL) is facing increasingly challenging domains with combinatorial complexity. For an RL agent to address these challenges, it is essential that it can plan effectively. Prior work has typically utilized an explicit model of the environment, combined with a specific planning algorithm (such as tree search). More recently, a new family of methods has been proposed that learn how to plan, by providing the structure for planning via an inductive bias in the function approximator (such as a tree-structured neural network), trained end-to-end by a model-free RL algorithm. In this paper, we go even further and demonstrate empirically that an entirely model-free approach, without special structure beyond standard neural network components such as convolutional networks and LSTMs, can learn to exhibit many of the characteristics typically associated with a model-based planner. We measure our agent’s effectiveness at planning in terms of its ability to generalize across a combinatorial and irreversible state space, its data efficiency, and its ability to utilize additional thinking time. We find that our agent has many of the characteristics that one might expect to find in a planning algorithm. Furthermore, it exceeds the state of the art in challenging combinatorial domains such as Sokoban and outperforms other model-free approaches that utilize strong inductive biases toward planning.
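
As a rough illustration of the kind of architecture the abstract describes, here is a minimal sketch in PyTorch: a convolutional encoder feeding an LSTM core, with policy and value heads suitable for training by a model-free actor-critic algorithm. This is an illustrative assumption, not the paper's actual architecture or code; all sizes, names, and hyperparameters below are hypothetical placeholders.

import torch
import torch.nn as nn

class ConvLSTMAgent(nn.Module):
    """Hypothetical model-free agent: conv encoder -> LSTM -> policy/value."""

    def __init__(self, in_channels=3, grid=10, num_actions=4, hidden=256):
        super().__init__()
        # Convolutional encoder over the observation (e.g. a Sokoban grid).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        # Recurrent core: the hidden state carried across steps is the only
        # place where planning-like computation can accumulate.
        self.core = nn.LSTMCell(32 * grid * grid, hidden)
        self.policy_head = nn.Linear(hidden, num_actions)  # action logits
        self.value_head = nn.Linear(hidden, 1)             # state-value estimate

    def forward(self, obs, state):
        h, c = self.core(self.encoder(obs), state)
        return self.policy_head(h), self.value_head(h), (h, c)

# One step on a batch of eight 10x10 RGB observations.
agent = ConvLSTMAgent()
obs = torch.zeros(8, 3, 10, 10)
state = (torch.zeros(8, 256), torch.zeros(8, 256))
logits, value, state = agent(obs, state)

Note that nothing here models the environment: any planning-like behavior must emerge in the recurrent state, and running the core for extra ticks on the same observation is one way such a network could make use of the "additional thinking time" the abstract mentions.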
Source:
https://arxiv.org/abs/1901.03559
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
---
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.
Chapters
1. An Investigation of Model-Free Planning (00:00:00)
2. Abstract (00:00:17)
3. Section 1: Introduction (00:01:41)