Search Results
6/24/2025, 7:09:53 AM
https://jerryliang24.github.io/DnD/
>By leveraging a lightweight text encoder and a cascaded hyperconvolutional decoder, DnD produces task-specific LoRA matrices from unlabeled task prompts in seconds. It achieves up to 12,000× lower overhead than full fine-tuning, outperforms the strongest training LoRAs by up to 30% on zero-shot common-sense reasoning, math, coding, and multimodal benchmarks, and generalizes robustly across domains, all while requiring only unlabeled data prompts.
damn, do you think it could be used to make diffusion LoRAs as well?
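For intuition, the quoted abstract describes a hypernetwork: encode the task prompts into a condition vector, then decode that vector directly into the low-rank LoRA factors, with no gradient steps on the target model. Here is a toy NumPy sketch of that prompt-to-LoRA shape of computation. Everything in it (the hashing "encoder", the single linear "decoder", all dimensions) is an illustrative stand-in, not the paper's actual architecture.

```python
import numpy as np

# Toy sketch of prompt -> LoRA weight generation. All names, shapes, and
# components here are illustrative assumptions, not DnD's real modules.
rng = np.random.default_rng(0)

EMB_DIM = 32   # assumed prompt-embedding width
HIDDEN = 512   # assumed target layer width
RANK = 4       # LoRA rank

def encode_prompts(prompts):
    """Stand-in for the lightweight text encoder: hash tokens to
    fixed vectors and mean-pool over tokens, then over prompts."""
    prompt_vecs = []
    for p in prompts:
        tok_vecs = [np.sin(np.arange(EMB_DIM) * (hash(t) % 1000 + 1))
                    for t in p.lower().split()]
        prompt_vecs.append(np.mean(tok_vecs, axis=0))
    return np.mean(prompt_vecs, axis=0)  # one condition vector per task

# Stand-in for the decoder: a fixed linear map from the condition vector
# to the flattened LoRA factors A (RANK x HIDDEN) and B (HIDDEN x RANK).
W_dec = rng.normal(scale=0.02, size=(EMB_DIM, RANK * HIDDEN * 2))

def generate_lora(prompts):
    cond = encode_prompts(prompts)
    flat = cond @ W_dec
    A = flat[: RANK * HIDDEN].reshape(RANK, HIDDEN)
    B = flat[RANK * HIDDEN:].reshape(HIDDEN, RANK)
    return A, B

A, B = generate_lora(["solve this math word problem",
                      "compute the answer step by step"])
delta_W = B @ A  # low-rank update added to a frozen weight matrix
print(delta_W.shape)  # (512, 512), but rank at most RANK
```

The cheapness claim falls out of the structure: one forward pass through a small generator replaces an entire fine-tuning run, and the update it emits is rank-`RANK`, so it is exactly the object a trained LoRA would produce. Whether such a generator transfers to diffusion-model LoRAs is an open question the reply above is asking, not something this sketch settles.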