CosmicVo t1_ix2q8f1 wrote
Reply to comment by michael_mullet in 2023 predictions by ryusan8989
Scale is indeed not all we need. In fact, GPT-4 reportedly has fewer parameters than GPT-3. Or the same. Idk. Anyway, the focus is shifting toward training data and hyperparameters (e.g. learning rate, batch size, sequence length). They're trying to find optimal models instead of just bigger ones. Full hyperparameter tuning is infeasible for the largest models, but it can yield a performance increase equivalent to doubling the number of parameters.
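To illustrate what "finding optimal models" looks like in practice, here's a minimal hyperparameter grid-search sketch. Everything here is made up for illustration: the objective is a toy function standing in for a real training run, and the "optimal" values (3e-4, 256, 1024) are arbitrary, not anyone's actual GPT settings.

```python
import itertools
import math

# Toy stand-in for a training run: returns a "validation loss" for a given
# hyperparameter setting. A real sweep would launch actual training jobs;
# this hypothetical smooth objective just makes the search loop runnable.
def train_and_evaluate(lr, batch_size, seq_len):
    # Pretend the optimum sits at lr=3e-4, batch_size=256, seq_len=1024.
    return (
        (math.log10(lr) - math.log10(3e-4)) ** 2
        + 0.1 * (math.log2(batch_size) - 8) ** 2
        + 0.05 * (math.log2(seq_len) - 10) ** 2
    )

# Grid over the hyperparameters mentioned above.
grid = {
    "lr": [1e-4, 3e-4, 1e-3],
    "batch_size": [128, 256, 512],
    "seq_len": [512, 1024, 2048],
}

# Evaluate every combination and keep the one with the lowest loss.
best = min(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda cfg: train_and_evaluate(**cfg),
)
print(best)  # → {'lr': 0.0003, 'batch_size': 256, 'seq_len': 1024}
```

The catch is exactly the one the comment points at: this grid already needs 27 full training runs, which is why exhaustive tuning stops being practical at large model sizes and cheaper proxies (tuning on small models, then transferring) become attractive.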
CosmicVo t1_ix2poxk wrote
Reply to comment by overlordpotatoe in 2023 predictions by ryusan8989
Interesting new dilemmas as to what is real and what's not in the virtual space. Information warfare will accelerate.
CosmicVo t1_ja84zk1 wrote
Reply to comment by Zermelane in Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
True, but also (when I put my doomer hat on) totally in line with the argument that this tech will be shitting gold until the first superintelligence goes beyond escape velocity, and we can only hope it aligns with our values...