pilibitti
pilibitti t1_jc56vv5 wrote
Reply to comment by disgruntled_pie in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Hey, do you have a link for how one might set this up?
pilibitti t1_j1aimsd wrote
Reply to comment by judasblue in [D] When chatGPT stops being free: Run SOTA LLM in cloud by _underlines_
I think they charge per generated token in their other products? If so, there should be a way to make ChatGPT less verbose out of the box.
Also, this will be a lot more popular than their other products, but I assume the hardware capacity isn't really there to serve that kind of demand at the old prices. So it might end up a bit more expensive than their other offerings.
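For illustration, a minimal sketch of what per-generated-token billing implies for verbosity; the rate used here is a made-up placeholder, not OpenAI's actual pricing:

```python
# Sketch of simple per-generated-token billing: a verbose answer costs
# proportionally more than a terse one, which is why less-verbose output
# would matter under this pricing model.

PRICE_PER_1K_TOKENS = 0.02  # hypothetical USD rate, purely an assumption


def response_cost(generated_tokens: int,
                  price_per_1k: float = PRICE_PER_1K_TOKENS) -> float:
    """Cost of one response billed per generated token."""
    return generated_tokens / 1000 * price_per_1k


# A verbose 800-token answer vs. a terse 150-token one:
print(f"verbose: ${response_cost(800):.4f}")  # $0.0160
print(f"terse:   ${response_cost(150):.4f}")  # $0.0030
```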
pilibitti t1_j1ai82j wrote
Reply to comment by londons_explorer in [D] When chatGPT stops being free: Run SOTA LLM in cloud by _underlines_
It can be crowdsourced once we have something up and running. This stuff will be commoditized eventually.
pilibitti t1_jc5was5 wrote
Reply to comment by disgruntled_pie in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Thank you!