
Deploying TinyML Models at Scale: Insights on Monitoring and Automation with Alessandro Grande of Edge Impulse

20:34
 
Content provided by EDGE AI FOUNDATION. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and delivered directly by EDGE AI FOUNDATION or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://no.player.fm/legal.

Send us a text

Unlock the secrets of deploying TinyML models in real-world scenarios with Alessandro Grande, Head of Product at Edge Impulse. Curious about how TinyML has evolved since its early days? Alessandro takes us through a journey from his initial demos at Arm to the sophisticated, scalable deployments we see today. Learn why continuous model monitoring is not just important but essential for the reliability and functionality of machine learning applications, especially in large-scale IoT deployments. Alessandro shares actionable insights on how to maintain a continuous lifecycle for ML models to handle unpredictable changes and ensure sustained success.
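The episode treats monitoring as a built-in part of the model lifecycle rather than an afterthought. As a loose illustration of that idea (not something prescribed in the episode or tied to Edge Impulse's APIs; the window size, threshold, and helper names below are assumptions), a fielded device might track a rolling average of prediction confidence and phone home when it degrades:

```python
from collections import deque
from time import time


class ConfidenceMonitor:
    """Rolling check of prediction confidence; flags possible drift when the
    windowed average falls below a floor. Window size and threshold here are
    illustrative placeholders, not recommendations from the episode."""

    def __init__(self, window: int = 200, floor: float = 0.70):
        self.scores = deque(maxlen=window)
        self.floor = floor

    def observe(self, confidence: float) -> None:
        self.scores.append(confidence)

    def drifting(self) -> bool:
        if len(self.scores) < self.scores.maxlen:
            return False  # wait until the window is full
        return sum(self.scores) / len(self.scores) < self.floor


def monitoring_loop(samples, infer, report):
    """`infer(sample)` is assumed to return (label, confidence) from whatever
    TinyML runtime the device uses; `report(event)` sends telemetry upstream."""
    monitor = ConfidenceMonitor()
    for sample in samples:
        _label, confidence = infer(sample)
        monitor.observe(confidence)
        if monitor.drifting():
            # Signal that the model may need retraining or redeployment.
            report({"event": "confidence_drift", "ts": time()})
```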
Delve into the intricacies of health-related use cases with a spotlight on the HIFE AI cough monitoring system. Discover best practices for data collection and preparation, including identifying outliers and leveraging Generative AI like ChatGPT 4.0 for efficient data labeling. We also emphasize the importance of building scalable infrastructure for automated ML development. Learn how continuous integration and continuous deployment (CI/CD) pipelines can enhance the lifecycle management of ML models, ensuring security and scalability from day one. This episode is a treasure trove of practical advice for anyone tackling the challenges of deploying ML models in diverse environments.
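On the CI/CD point, one common pattern (sketched below purely as an illustration; the metric names and thresholds are assumptions, not anything taken from the episode or from Edge Impulse's tooling) is a promotion gate: a retrained model only ships if an automated evaluation shows it meets an accuracy floor and does not regress against the model currently in production.

```python
import sys

# Illustrative values only; a real pipeline would tune these per use case.
ACCURACY_FLOOR = 0.92      # assumed minimum accuracy before a model ships
REGRESSION_MARGIN = 0.01   # candidate may not trail production by more than this


def gate(candidate_metrics: dict, production_metrics: dict) -> bool:
    """Return True only if the candidate model is safe to promote."""
    cand = candidate_metrics["accuracy"]
    prod = production_metrics["accuracy"]
    return cand >= ACCURACY_FLOOR and cand >= prod - REGRESSION_MARGIN


if __name__ == "__main__":
    # In practice these numbers would come from an automated evaluation step
    # run on a held-out test set as part of the pipeline.
    candidate = {"accuracy": 0.94}
    production = {"accuracy": 0.93}
    if not gate(candidate, production):
        sys.exit("Candidate model rejected; keeping current deployment.")
    print("Candidate model passed the gate; proceeding to deployment step.")
```

Running the gate as a pipeline step means a failed check stops the deployment job, so only models that clear the bar ever reach devices in the field.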

Support the show

Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org


Chapters

1. Deploying TinyML Models at Scale: Insights on Monitoring and Automation with Alessandro Grande of Edge Impulse (00:00:00)

2. Model Monitoring in Real-World Deployment (00:00:05)

3. Health Workflow and Data Collection (00:11:26)

4. Automated Model Deployment in Production (00:18:14)

24 episodes
