Search results for "0a01aeea43bea9b63f15fd3353922297" in md5 (5)

/g/ - /lmg/ - Local Models General
Anonymous No.106502954
VibeVoice seems to ignore punctuation...

What do?
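
(Not from the thread; a hedged workaround sketch, not a VibeVoice API. If the model flattens pauses, one common trick is to split the script at punctuation, generate each chunk separately, and stitch the audio together with explicit silence. synthesize() below is a placeholder for whatever inference call is actually being used.)

# Split text on punctuation, synthesize per chunk, insert explicit pauses.
import re
import numpy as np

SAMPLE_RATE = 24_000  # assumed output sample rate

def synthesize(chunk: str) -> np.ndarray:
    # placeholder: plug in the actual TTS call here
    raise NotImplementedError

def tts_with_pauses(text: str, pause_s: float = 0.4) -> np.ndarray:
    chunks = [c.strip() for c in re.split(r"(?<=[.!?,;])\s+", text) if c.strip()]
    silence = np.zeros(int(SAMPLE_RATE * pause_s), dtype=np.float32)
    pieces = []
    for c in chunks:
        pieces.append(synthesize(c))
        pieces.append(silence)
    return np.concatenate(pieces)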
/g/ - /lmg/ - Local Models General
Anonymous No.106355943
>Serious question about Fine-tuning

What is the rule of thumb regarding batch size? Does it make any sense to try to fill up the entire VRAM? I know that I will have to increase the number of steps/epochs anyway if I go for bigger batches.

As of now I'm just trying the default settings found in some dubious Colab notebooks.
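
(Not from the thread; a minimal sketch assuming a Hugging Face-style training setup. The usual knob is effective batch size = per-device batch size × gradient accumulation steps: push the per-device batch as high as VRAM allows, then use accumulation to reach the effective batch you actually want. The specific numbers are placeholders.)

# Effective batch size = per_device_train_batch_size * gradient_accumulation_steps.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,   # largest value that still fits in VRAM
    gradient_accumulation_steps=4,   # effective batch = 8 * 4 = 32
    num_train_epochs=3,
    learning_rate=2e-4,
    logging_steps=10,
)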
/g/ - /lmg/ - Local Models General
Anonymous No.106289671
This is my first time fine-tuning an LLM.

Because I'm retarded, I will go with the Python scripts provided by the unsloth brothers, and at first try something small like gemma-3-270m.

Related question: LoRA vs. full model tuning

Is LoRA just as valid for LLMs as it is for video/image generation?

Or should I go for full model tuning?
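
(Not from the thread, and not the unsloth script itself; an illustrative LoRA setup with Hugging Face PEFT, just to show how an adapter wraps the base model and trains only a small fraction of its parameters. The model id is assumed, swap in whatever checkpoint you actually pull.)

# LoRA adapter on top of a frozen base model via PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-270m")

lora_cfg = LoraConfig(
    r=16,                     # adapter rank: bigger = more capacity, more VRAM
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # typically a few percent of the full model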
/g/ - /lmg/ - Local Models General
Anonymous No.106231698
How can we prevent China from winning the AI arms race?
/g/ - /lmg/ - Local Models General
Anonymous No.106203492
I can get a laptop for cheap which includes 64GB DDR4 RAM and an RTX A5000 with 16GB of VRAM.

Is it worth bothering? What SOTA models would I be able to run on this pile of shit?

>Proud RAM1TB/VRAM24GB enjoyer
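
(Not from the thread; a rough back-of-the-envelope calculation for what fits in 16GB of VRAM at different quantization levels. Weights only, it ignores KV cache and runtime overhead, so treat the numbers as approximate.)

# Weight footprint in GB = params * bits_per_weight / 8.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, params in [("8B", 8), ("14B", 14), ("24B", 24), ("32B", 32)]:
    for bits in (16, 8, 4):
        print(f"{name} @ {bits}-bit: ~{weight_gb(params, bits):.1f} GB")

# Roughly: ~14B at 4-5 bpw fits with room for context, ~24B at 4 bpw is a
# tight squeeze, and anything larger needs offloading to the 64GB of system RAM.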