Maleficent_Refuse_11 t1_jdji46u wrote

How do you quantify that? Is it the downdoots? Or is it the degree to which I'm willing to waste my time discussing shallow inputs? Or do you go by gut feeling?

On a serious note: have you heard of Brandolini's law? The asymmetry it describes has been shifted by several orders of magnitude by generative "AI". Unless we start using the same models to argue with people on the net (i.e. chatbot output), we will have to choose much more carefully which discussions we involve ourselves in, don't you think?

−5

Maleficent_Refuse_11 t1_jdgmml7 wrote

I get that people are excited, but nobody with a basic understanding of how transformers work should give this any room. The problem is not just that it is auto-regressive and lacks an external knowledge hub. At best it can recreate latent patterns in the training data. There is no element of critique and no element of creativity. There is no theory of mind; there is just a reproduction of what people have said when prompted about how other people feel. Still, I get the excitement. I'm excited, too. But hype hurts the industry.
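To make the "recreates latent patterns" point concrete: here is a deliberately tiny sketch, not a transformer, just an autoregressive bigram sampler over a toy corpus. The corpus, the `generate` helper, and all names are made up for illustration; the point is that every transition the model emits was, by construction, present in its training data.

```python
import random
from collections import defaultdict

# Toy training data (hypothetical example, not from any real dataset).
corpus = "people say what other people said about people".split()

# "Train" by counting next-word frequencies: a bigram model is
# autoregressive in the same minimal sense, each token is sampled
# conditioned only on what came before.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length, rng=random.Random(0)):
    """Sample a continuation one token at a time."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no observed continuation: the model has nothing to say
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights)[0])
    return out

sample = generate("people", 5)
```

Every adjacent pair in `sample` is guaranteed to occur somewhere in `corpus`: the sampler can remix observed transitions but can never emit one it hasn't seen. Real transformers generalize far more richly than this caricature, but the autoregressive structure is the same.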

36