Content provided by The Gradient. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Gradient or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://no.player.fm/legal.

Sasha Rush: Building Better NLP Systems

54:03

In episode 113 of The Gradient Podcast, Daniel Bashir speaks to Professor Sasha Rush.

Professor Rush is an Associate Professor at Cornell University and a Researcher at HuggingFace. His research aims to develop natural language processing systems that are safe, fast, and controllable. His group focuses primarily on tasks that involve text generation, studying data-driven probabilistic methods that combine deep learning-based models with probabilistic controls. He is also interested in open-source NLP and deep learning, and develops projects that make deep learning systems safer, clearer, and easier to use.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (01:47) Professor Rush’s background

* (03:23) Professor Rush’s reflections on prior work—importance of learning and inference

* (04:58) How much engineering matters in deep learning, the Rush vs. Frankle Bet

* (07:12) On encouraging and incubating good research

* (10:50) Features of good research environments

* (12:36) 5% bets in Professor Rush’s research: State-Space Models (SSMs) as an alternative to Transformers (a minimal SSM sketch follows this outline)

* (15:58) SSMs vs. Transformers

* (18:53) Probabilistic Context-Free Grammars—are (P)CFGs worth paying attention to?

* (20:53) Sequence-level knowledge distillation: approximating sequence-level distributions (a sketch of the method also follows this outline)

* (25:08) Pruning and knowledge distillation — orthogonality of efficiency techniques

* (26:33) Broader thoughts on efficiency

* (28:31) Work on prompting

* (28:58) Prompting and In-Context Learning

* (30:05) Thoughts on mechanistic interpretability

* (31:25) Multitask prompted training enables zero-shot task generalization

* (33:48) How many data points is a prompt worth?

* (35:13) Directions for controllability in LLMs

* (39:11) Controllability and safety

* (41:23) Open-source work, deep learning libraries

* (42:08) A story about Professor Rush’s post-doc at FAIR

* (43:51) The impact of PyTorch

* (46:08) More thoughts on deep learning libraries

* (48:48) Levels of abstraction, PyTorch as an interface to motivate research

* (50:23) Empiricism and research commitments

* (53:32) Outro
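The episode itself contains no code, but as context for the SSM discussion (12:36, 15:58), here is a minimal NumPy sketch of the linear state-space recurrence these models are built on. All parameter names and shapes are illustrative assumptions, not code from any paper mentioned in the episode.

```python
# Minimal sketch of a discrete-time linear state-space model (SSM),
# the kind of sequence model discussed as an alternative to Transformers.
# Shapes and parameters are illustrative, not from any specific paper.
import numpy as np

def ssm_scan(A, B, C, u):
    """Run the recurrence x_t = A @ x_{t-1} + B @ u_t, with readout y_t = C @ x_t."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:               # sequential (recurrent) view: O(L) steps
        x = A @ x + B @ u_t     # state update
        ys.append(C @ x)        # readout
    return np.stack(ys)

# Toy usage: a 1-D input sequence of length 8 through a 4-D hidden state.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)             # stable dynamics
B = rng.normal(size=(4, 1))
C = rng.normal(size=(1, 4))
u = rng.normal(size=(8, 1))
print(ssm_scan(A, B, C, u).shape)  # (8, 1)
```

Because the recurrence is linear and time-invariant, the same input-to-output map can also be computed as a convolution, which is what lets SSM layers train in parallel like a Transformer while decoding step by step like an RNN.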
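Similarly, as context for the sequence-level knowledge distillation segment (20:53): the core idea of the method is to approximate the teacher's sequence-level distribution by its decoded mode and train the student on those outputs. A minimal sketch, where `teacher_decode` and `train_step` are hypothetical stand-ins for real model code:

```python
# Minimal sketch of sequence-level knowledge distillation: instead of matching
# the teacher's per-token distribution, relabel the training set with full
# sequences decoded from the teacher and train the student on them.
# `teacher_decode` and `train_step` are hypothetical stand-ins.

def sequence_level_kd(teacher_decode, train_step, source_corpus, epochs=1):
    # 1. Run the teacher (e.g. with beam search) over every source sentence
    #    and keep its output as the new target.
    distilled = [(src, teacher_decode(src)) for src in source_corpus]
    # 2. Train the student with ordinary cross-entropy on the distilled pairs,
    #    exactly as if the teacher outputs were gold references.
    for _ in range(epochs):
        for src, tgt in distilled:
            train_step(src, tgt)

# Toy usage with dummy stand-ins:
sequence_level_kd(
    teacher_decode=lambda s: s.upper(),  # pretend "teacher translation"
    train_step=lambda s, t: None,        # pretend gradient step
    source_corpus=["hello world", "goodbye"],
)
```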

Links:

* Research

  * Early work / PhD

    * Dual Decomposition and LP Relaxations

    * Vine Pruning for Efficient Multi-Pass Dependency Parsing

    * Improved Parsing and POS Tagging Using Inter-Sentence Dependency Constraints

  * Interpretable and controllable natural language generation

    * Compound Probabilistic Context-Free Grammars for Grammar Induction

    * Multitask Prompted Training Enables Zero-Shot Task Generalization

  * Deep generative models

    * A Neural Attention Model for Abstractive Sentence Summarization

    * Learning Neural Templates for Text Generation

    * How Many Data Points Is a Prompt Worth?

  * Efficient algorithms and hardware for speech, translation, and dialogue

    * Sequence-Level Knowledge Distillation

* Open-source work

  * NamedTensor

  * Torch Struct


Get full access to The Gradient at thegradientpub.substack.com/subscribe