TemperatureAmazing67 t1_jbzcn6a wrote
Reply to comment by hebekec256 in [D] Is anyone trying to just brute force intelligence with enormous model sizes and existing SOTA architectures? Are there technical limitations stopping us? by hebekec256
> extensions of LLMs (like PALM-E) are a heck of a lot more than an abacus. I wonder what would happen if Google just said, "screw it", and scaled it from 500B to 50T parameters. I'm guessing there are reasons in the architecture that it would
The problem is that we have scaling laws for neural networks, and under those laws we simply don't have the training data to feed a 50T-parameter model. We'd need to get that data from somewhere, and answering that question is expensive.
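As a rough sanity check, here's a back-of-the-envelope sketch using the Chinchilla rule of thumb (~20 training tokens per parameter, per Hoffmann et al., 2022); the constant and the corpus-size comparisons in the comments are approximations, not exact figures:

```python
# Back-of-the-envelope data requirement under the Chinchilla rule of thumb
# (~20 training tokens per parameter; Hoffmann et al., 2022).
def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal number of training tokens for a model size."""
    return n_params * tokens_per_param

for n_params in (500e9, 50e12):  # 500B (PaLM-scale) vs. the hypothetical 50T
    print(f"{n_params:.0e} params -> ~{chinchilla_optimal_tokens(n_params):.0e} tokens")

# 5e+11 params -> ~1e+13 tokens  (roughly the scale of today's largest text corpora)
# 5e+13 params -> ~1e+15 tokens  (far more text than is known to be publicly available)
```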
Co0k1eGal3xy t1_jbzi8wc wrote
- Double descent: past the classical overfitting regime, models with MORE parameters are often *more* data-efficient, not less.
- Most of these LLMs barely complete one epoch over their training data, so there is no concern about overfitting currently; see the sketch below.
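To make the one-epoch point concrete, here's a minimal sketch (the tiny stand-in model and streaming reader are hypothetical, not any actual LLM training code) where every batch is drawn from the corpus exactly once, so no example is ever seen twice:

```python
# Sketch of single-epoch training: the corpus is streamed once, start to
# finish, so the model never revisits an example and can't memorize it.
import torch
from torch import nn

vocab, dim = 100, 16
model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def stream_batches(n_batches=10, batch=8, seq=32):
    # Stand-in for a streaming corpus reader: each batch is produced once
    # and then discarded -- exactly one pass over the data.
    for _ in range(n_batches):
        yield torch.randint(0, vocab, (batch, seq))

for tokens in stream_batches():
    logits = model(tokens[:, :-1])  # next-token prediction
    loss = loss_fn(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```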