Content provided by HackerNoon. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by HackerNoon or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://no.player.fm/legal.

I Fine-Tuned an LLM With My Telegram Chat History. Here’s What I Learned

10:53
 
Manage episode 423584197 series 3474148

This story was originally published on HackerNoon at: https://hackernoon.com/i-fine-tuned-an-llm-with-my-telegram-chat-history-heres-what-i-learned.
Pretending to be ourselves and our friends by training an LLM on Telegram messages
Check more stories related to machine-learning at: https://hackernoon.com/c/machine-learning. You can also check exclusive content about #fine-tuning-llms, #ai-model-training, #training-ai-with-telegram, #personalized-ai-chatbot, #russian-language-ai, #mistral-7b-model, #lora-vs-full-fine-tuning, #hackernoon-top-story, and more.
This story was written by: @furiousteabag. Learn more about this writer by checking @furiousteabag's about page, and for more stories, please visit hackernoon.com.
I fine-tuned a language model on my Telegram messages to see if it could replicate my writing style and conversation patterns. I chose the Mistral 7B model for its performance and experimented with both LoRA (low-rank adaptation) and full fine-tuning. I extracted all my Telegram messages, totaling 15,789 sessions over five years, and first tested a generic conversation-fine-tuned Mistral model as a baseline. For LoRA, training on an RTX 3090 took 5.5 hours and cost $2, improving style mimicry but struggling with context and grammar. Full fine-tuning, using eight A100 GPUs, improved language quality and context retention but still produced some errors. Overall, while the model captured my conversational style and common topics well, its responses often lacked context.
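The LoRA approach mentioned above avoids updating the full weight matrices of the model: it freezes each pretrained weight W and learns two small low-rank matrices instead. A minimal numpy sketch of that idea follows; the dimensions and initialization scheme here are illustrative assumptions, not details from the episode.

```python
import numpy as np

# LoRA sketch: instead of training the full (d_out x d_in) matrix W,
# train A (r x d_in) and B (d_out x r) with rank r << min(d_out, d_in),
# and use W_eff = W + (alpha / r) * B @ A in the forward pass.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init: no change at start

def lora_forward(x):
    """Forward pass with the low-rank update applied on top of the frozen W."""
    return x @ (W + (alpha / r) * B @ A).T

x = rng.standard_normal((4, d_in))
y = lora_forward(x)

# Parameter savings per layer: full fine-tuning updates d_out * d_in values,
# LoRA updates only r * (d_out + d_in).
full_params = d_out * d_in          # 4096
lora_params = r * (d_out + d_in)    # 1024
```

Because B starts at zero, the adapted model is identical to the base model before training, and the update B @ A can never exceed rank r, which is why LoRA trains so much more cheaply (the $2 RTX 3090 run above) than full fine-tuning on eight A100s.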


236 episodes




 

Copyright 2024 | Sitemap | Privacy | Terms of Use | Copyright