Submitted by floppy_llama t3_1266d02 in MachineLearning
hailfire27 t1_je8l7id wrote
Reply to comment by ghostfaceschiller in [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
I think he's talking about how a conversation involves different cognitive levels. You're basically having a conversation with yourself about what to say and what to bring up next, while at the same time keeping track of the context of the situation, such as the environment or the activity you're doing.
So he's asking whether a model like this could be tuned to take that kind of context into account and give better answers in a conversation.
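Just to make the idea concrete, here's a minimal sketch (not from the paper) of how you might frame those "cognitive levels" as fine-tuning data: each example pairs the situational context and an internal "what should I say?" scratchpad with the final reply, and a parameter-efficient method like LLaMA-Adapter could then be tuned on text serialized this way. The field names and prompt template are my own assumptions, purely for illustration.

```python
# Illustrative sketch only: formatting conversational turns with explicit
# context and inner-reasoning fields for supervised fine-tuning.
# Field names and the prompt template are assumptions, not from LLaMA-Adapter.

from dataclasses import dataclass


@dataclass
class DialogueTurn:
    context: str        # environment / activity the speakers are in
    history: list[str]  # prior utterances in the conversation
    scratchpad: str     # the speaker's internal reasoning about what to say
    reply: str          # the utterance actually produced


def to_training_text(turn: DialogueTurn) -> str:
    """Serialize a turn into a single prompt/target string for fine-tuning."""
    history = "\n".join(turn.history)
    return (
        f"Context: {turn.context}\n"
        f"Conversation so far:\n{history}\n"
        f"Inner thoughts: {turn.scratchpad}\n"
        f"Reply: {turn.reply}"
    )


example = DialogueTurn(
    context="Two friends cooking dinner together",
    history=["A: Should we add more garlic?", "B: Maybe, how much is in already?"],
    scratchpad="They seem unsure; the recipe said two cloves and we've added three.",
    reply="We're already past the recipe's two cloves, so I'd hold off.",
)
print(to_training_text(example))
```

Whether training on data like this actually improves conversational answers is an open question; the point is just that the "inner dialogue plus situational context" idea can be written down as ordinary supervised examples.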