Content provided by The Mad Botter. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and delivered directly by The Mad Botter or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://no.player.fm/legal.

636: Red Hat's James Huang

Duration: 20:53
 

Links
James on LinkedIn
Mike on LinkedIn
Mike's Blog
Show on Discord

Alice Promo

  1. AI on Red Hat Enterprise Linux (RHEL)

Trust and Stability: RHEL provides the mission-critical foundation needed for workloads where security and reliability cannot be compromised.

Predictive vs. Generative: Acknowledging the hype of GenAI while maintaining support for traditional machine learning algorithms.

Determinism: The challenge of bringing consistency and security to emerging AI technologies in production environments.

  2. RamaLama & Containerization

Developer Simplicity: RamaLama helps developers run local LLMs easily without being "locked in" to specific engines; it supports Podman, Docker, and various inference engines like Llama.cpp and Whisper.cpp.

Production Path: The tool is designed to "fade away" after helping package the model and stack into a container that can be deployed directly to Kubernetes.

Behind the Firewall: Addressing the needs of industries (like aircraft maintenance) that require AI to stay strictly on-premises.
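The workflow described above might look something like the following, a minimal sketch assuming the RamaLama CLI (github.com/containers/ramalama) is installed with Podman or Docker available; the model name and transport prefix are illustrative and may vary by version:

```shell
# Pull a model and chat with it locally. RamaLama detects a container
# runtime (Podman or Docker) and a suitable inference engine
# (e.g. llama.cpp) so the developer isn't tied to a specific stack.
ramalama pull ollama://tinyllama
ramalama run ollama://tinyllama

# Serve the same model over a local REST endpoint instead of an
# interactive chat, e.g. for testing an application against it.
ramalama serve ollama://tinyllama
```

The "fade away" idea is that once the model and stack are packaged into an OCI container, the container itself is what gets deployed to Kubernetes, and the tool is no longer in the loop.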

  3. Enterprise AI Infrastructure

Red Hat AI: A commercial product offering tools for model customization, including pre-training, fine-tuning, and RAG (Retrieval-Augmented Generation).

Inference Engines: James highlights the difference between Llama.cpp (for smaller/edge hardware) and vLLM, which has become the enterprise standard for multi-GPU data center inferencing.
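As a rough illustration of the vLLM side of that contrast, a single-node deployment of its OpenAI-compatible server might look like this; the model name, port, and GPU count are illustrative assumptions, not details from the episode:

```shell
# Start vLLM's OpenAI-compatible server (requires `pip install vllm`
# and a CUDA-capable GPU; add e.g. --tensor-parallel-size 2 to shard
# the model across multiple GPUs in a data-center setting).
vllm serve Qwen/Qwen2.5-0.5B-Instruct

# Query it like any OpenAI-style endpoint (default port 8000):
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-0.5B-Instruct",
       "prompt": "Hello",
       "max_tokens": 16}'
```

Llama.cpp, by contrast, targets CPU and smaller edge hardware, which is why the two engines end up serving different deployment tiers.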

584 episodes

Coder Radio

1,182 subscribers

