
dookiehat t1_j1gtna8 wrote

Reply to comment by Ortus12 in Hype bubble by fortunum

LLMs, while undeniably useful and interesting, do not have intentions; they only respond to input.

Moreover, it is important to remember that large language models are trained only on text data. There is no other modality to contextualize what the model is talking about. As a user of a large language model, you see coherent “thoughts” and then fill in the blanks of meaning with your own sensory knowledge.

So an iguana eating a purple apple on a Thursday means nothing to a large language model except the words’ probabilistic relationships to one another. Even if this is merely reductionist thinking, I am still CERTAIN that a large language model has no visual “understanding” of the words. It has only contextual relationships within its model, with no grounded content it can reference to understand meaning.
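To make the “only probabilistic relationships” point concrete, here is a deliberately tiny toy sketch (a bigram counter, nothing like a real transformer): all it ever stores is how often one word follows another, with no referents or imagery attached to any word.

```python
from collections import Counter, defaultdict

# Toy "language model": the only knowledge it has is word-follows-word
# counts from its training text -- no senses, no visuals, no meaning.
corpus = "the iguana ate the purple apple on a thursday".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each word following `word`, purely from counts."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'iguana': 0.5, 'purple': 0.5}
```

The model “knows” that “purple” can follow “the”, but that fact is just a ratio of counts; nothing in the table says what purple looks like.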
