Queue_Bit t1_j9jt52q wrote
Reply to comment by Ylsid in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
All it takes is one smart, slightly motivated person to make a free option that's "good enough"
Ylsid t1_j9jtgcg wrote
More than one. It takes a lot of skill, time and money, which are hard to come by if you aren't a megacorp. That isn't to say it can't happen, but that it's much more difficult than you may expect.
WithoutReason1729 t1_j9jx2u1 wrote
Language models seem to have a way steeper difficulty curve though. The difference between Stable Diffusion and the image generators from a few years before it is big, but the older models are still good enough to often produce viable output. The gap between a huge language model and a large open-source one is much bigger, because even getting small things wrong can lead to completely unintelligible sentences that were clearly written by a machine.
Queue_Bit t1_j9jyddi wrote
Yeah, for sure, but as technology improves it's just going to get easier and easier. And this technology is likely to get so good that, to a normal person, the difference between the best in the world and "good enough for everyday life" is likely huge.