TemperatureAmazing67 t1_jbzc8cc wrote
Reply to comment by Username912773 in [D] Is anyone trying to just brute force intelligence with enormous model sizes and existing SOTA architectures? Are there technical limitations stopping us? by hebekec256
'require input to generate an output and do not have initiative' - just use random input, or another network's output, as the input (see the sketch below).
Also, the argument about next-token prediction is off. For a lot of tasks, a perfectly predicted next token is everything you need.
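A minimal sketch of that idea, assuming the HuggingFace `transformers` text-generation pipeline and a small model (`gpt2` is used here purely for illustration): seed the model with random text, or swap in another network's output, then loop its own output back in so it keeps generating without any external prompt after the start.

```python
# Sketch (hypothetical setup): give an LLM a "self-driven" loop by seeding it
# with random text or another model's output and feeding its output back in.
import random
from transformers import pipeline  # assumes the HuggingFace transformers library is installed

generator = pipeline("text-generation", model="gpt2")  # small model, for illustration only

# Seed: a few random words (another network's output could be used here instead)
seed = " ".join(random.choices(["tree", "idea", "machine", "question", "plan"], k=5))

text = seed
for _ in range(3):
    # Feed the model's previous output back as its next input,
    # so no external prompt is required after the initial seed.
    out = generator(text, max_new_tokens=30, do_sample=True)
    text = out[0]["generated_text"]

print(text)
```

Whether this counts as "initiative" is exactly what the reply below disputes, but mechanically nothing stops the input from being random or self-generated.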
Username912773 t1_jbze0ug wrote
That’s not a solution. That doesn’t make LLMs sentient; it just makes them a cog in a larger machine.

Logic, task performance, and sentience are different things.