Submitted by balthierwings t3_122q3h7 in MachineLearning
light24bulbs t1_jdtgrjb wrote
Reply to comment by endless_sea_of_stars in [P] Using ChatGPT plugins with LLaMA by balthierwings
Based on how much LangChain struggles to use tools and gets confused by them, I'd bet on fine-tuning. I asked a contact to reveal what they're injecting into the prompt, but it's not public information yet, so I couldn't get it.
endless_sea_of_stars t1_jdtik00 wrote
It is mostly public information. The API developer is required to make a specification document that describes the API. This gets injected into the prompt. They may transform it from JSON to something the model better understands, and may also inject some other boilerplate text.
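To illustrate the idea, here's a minimal sketch of that transformation step. The spec below and the `spec_to_prompt` helper are hypothetical, not OpenAI's actual format or wording; it just shows how a JSON API description might be rendered into compact text for prompt injection:

```python
# Hypothetical minimal OpenAPI-style spec, as a plugin developer might supply.
spec = {
    "info": {"title": "Todo Plugin", "description": "Manage a todo list."},
    "paths": {
        "/todos": {
            "get": {"summary": "List all todos"},
            "post": {"summary": "Add a todo", "parameters": ["text"]},
        }
    },
}

def spec_to_prompt(spec):
    """Render the JSON spec as compact text suitable for a system prompt."""
    lines = [
        f"Plugin: {spec['info']['title']} - {spec['info']['description']}",
        "Endpoints:",
    ]
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            params = ", ".join(op.get("parameters", []))
            suffix = f" (params: {params})" if params else ""
            lines.append(f"- {method.upper()} {path}: {op['summary']}{suffix}")
    return "\n".join(lines)

print(spec_to_prompt(spec))
```

The real system presumably also prepends boilerplate instructions telling the model when and how to call the endpoints; that wording is the part that isn't public.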
light24bulbs t1_jdtiq9w wrote
I'm aware of that part. The wording of the text that's injected is not public. If it were, I'd use it in my LangChain scripts.
Again, I really expect there's fine-tuning; we'll see eventually, maybe.
alexmin93 t1_jdup63s wrote
Do you have GPT-4 API access? AFAIK plugins run on GPT-4, which even in its current state is far better at following formal rules. But it's likely that they've indeed fine-tuned it to make decisions about when to use tools.
light24bulbs t1_jduuuep wrote
I do, still struggling with it