Submitted by Vegetable-Skill-9700 t3_121a8p4 in MachineLearning
farmingvillein t1_jdntw7b wrote
Reply to comment by Sorry-Balance2049 in [D] Do we really need 100B+ parameters in a large language model? by Vegetable-Skill-9700
Pure marketing.

Not even weights... presumably due to the ToS issues with the fine-tuning set.
austintackaberry t1_jdrau92 wrote
Yes, that's correct.
>[@matei_zaharia] The code is at https://github.com/databrickslabs/dolly. You can also contact us for weights, just want to make sure people understand the restrictions on the fine tuning data (or you can get that data from Stanford and train it yourself).
https://twitter.com/matei_zaharia/status/1639357850807054336?s=20