Submitted by floppy_llama t3_1266d02 in MachineLearning
-_1_2_3_- t1_je8p5xx wrote
Reply to comment by dreaming_geometry in [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
They are using GPT-4 to accelerate their work.
idontcareaboutthenam t1_je8rz5a wrote
Can you elaborate?
drizel t1_je8v9cj wrote
GPT-4 can parse millions of papers and help uncover new optimizations or other improvements much faster than would be possible without it. Not only that, but you can also brainstorm ideas with it.
Swolnerman t1_jead4wo wrote
How can it do that with a context window of 32k tokens?
On top of that, I don't think GPT-4 can make informed decisions when picking between academic research papers as of yet.
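For what it's worth, the context-window objection is more about workflow than a hard limit: nobody feeds millions of papers into one prompt; you process them one at a time, chunking any paper that exceeds the window and merging the per-chunk summaries. A minimal sketch of that loop, assuming the pre-1.0 openai Python client, GPT-4 API access, and a hypothetical load_papers() helper:

```python
import openai

openai.api_key = "YOUR_KEY"  # assumption: caller supplies their own API key

MODEL = "gpt-4"
CHUNK_CHARS = 12_000  # rough character budget per chunk (~3k tokens), an approximation

def ask(prompt):
    """Send a single chat completion request and return the reply text."""
    resp = openai.ChatCompletion.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp["choices"][0]["message"]["content"]

def summarize_paper(text):
    """Map-reduce over one paper: summarize each chunk, then merge the partial summaries."""
    chunks = [text[i:i + CHUNK_CHARS] for i in range(0, len(text), CHUNK_CHARS)]
    partial = [ask(f"Summarize the key methods and results:\n\n{c}") for c in chunks]
    if len(partial) == 1:
        return partial[0]
    return ask(
        "Merge these partial summaries of a single paper into one coherent summary:\n\n"
        + "\n\n".join(partial)
    )

# papers = load_papers()  # hypothetical helper: yields one full-text string per paper
# summaries = [summarize_paper(p) for p in papers]
```

So each call stays well under the window; the window only caps how much the model can reason over at once, not how many papers you can push through it.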