Submitted by [deleted] t3_115ez2r in MachineLearning
KPTN25 t1_j95kx5j wrote
Reply to comment by overactor in [D] Please stop by [deleted]
Reproducing language is a very different problem from true thought or self-awareness; that's why.
LLMs are no more likely to become sentient than a linear regression or random forest model. Frankly, they're no more likely than a peanut butter sandwich to achieve sentience.
Is it possible that we've bungled our study of peanut butter sandwiches so badly that we may have missed some incredible sentience-granting mechanism? I guess, but the possibility is so absurd and infinitesimally small that it's not worth entertaining in practice.
The black box argument is intellectually lazy. We have a better understanding of what is happening in LLMs and other models than most clickbaity headlines imply.
overactor t1_j95oem0 wrote
Your ridiculous hyperbole is not helping your argument. It's entirely possible that sentience is an instrumental goal for achieving a certain level of text prediction, and I don't see why a sufficiently large LLM definitely couldn't achieve it. It could be that another few paradigm shifts will be needed, but it could also be that all we need to do is scale up. I think anyone who claims to know whether LLMs can achieve sentience is either ignorant or lying.