Submitted by simpleuserhere t3_11usq7o in MachineLearning
Taenk t1_jctdmvi wrote
Reply to comment by starstruckmon in [Research] Alpaca 7B language model running on my Pixel 7 by simpleuserhere
I haven’t tried the larger models, unfortunately. However, I wonder how the model could be “shockingly bad” despite having almost three times the parameter count.
starstruckmon t1_jcte34d wrote
🤷
Sometimes models just come out crap. Like BLOOM, which has almost the same number of parameters as GPT-3 but is absolute garbage in any practical use case. Like a kid from two smart parents who turns out dumb. Just blind chance.
Or they could be wrong. 🤷