[R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention arxiv.org Submitted by floppy_llama t3_1266d02 on March 30, 2023 at 12:46 AM in MachineLearning 47 comments 233
floppy_llama t1_j9opzwx wrote on February 23, 2023 at 2:22 PM Reply to [D] Model size vs task complexity by Fine-Topic-6127 Unfortunately, a lot of ML is just trial and error. Permalink 7