limitless__ t1_jda7lh3 wrote
This was guaranteed to happen, but it's great that it's happening now. OpenAI, Google, Microsoft etc. are going to have no option but to build the concept of trusted sources into their AI models so that they learn to recognize misinformation, irony etc.
Right now it's chaos, and that's expected behaviour. AI is already in HEAVY use in technical circles and it is scarcely believable how much of a time-saver it is. Issues like this just scare away the troglodytes which, right now, is a good thing.
marumari t1_jda9kwq wrote
Even with entirely trustworthy sources, AIs are not deterministic. They can easily spit out false information having only ingested the truth.
erasmause t1_jdar95d wrote
Pretty sure they are deterministic but chaotic
pm_me_wet_kittehs t1_jdbycpx wrote
nope, they are unequivocally nondeterministic.
simple proof? submit the exact same prompt twice. you don't get the same output. You would if the system was merely chaotic, because the input is exactly the same.
Therefore there must be an element of randomness in there
erasmause t1_jdcqz2h wrote
I'll admit, I don't know a ton about the internals of these particular models. Is that truly non-determinism, or is it some sort of complex feedback (essentially, "remembering" previous responses by updating weights or something)?
marumari t1_jdas22s wrote
If you can’t predict what will come out then it’s not particularly deterministic. Maybe within a certain broad range of behaviors but that’s about it.
erasmause t1_jdaxed1 wrote
Deterministic just means that the process isn't stochastic, which aligns with my understanding of AI models (at least after they've been trained). Chaotic means the output is highly sensitive to small changes in the input, which admittedly also isn't a great description of what's going on, but does capture the difficulty of predicting the output without resorting to non-determinism.
marumari t1_jdb0gk6 wrote
It's possible we are using different semantics for "deterministic," I am mostly meaning that given the same input the AI will produce the same output. This is not what happens, although from a mathematics determinism standpoint you are correct.
born-out-of-a-ball t1_jdbj1fx wrote
OpenAI's GPT model is deterministic, but for ChatGPT they deliberately add variation to the user's input to get more creative answers.
erasmause t1_jdcr6ps wrote
Strictly speaking, a deterministic system only needs a feedback mechanism to generate different responses to the same input.
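A minimal sketch of that point (a made-up toy class, not anything from an actual model): a fully deterministic system with internal state can answer the same prompt differently each time, while still being perfectly reproducible across runs.

```python
# Toy illustration: determinism + feedback can look like randomness.
class EchoBot:
    def __init__(self):
        self.calls = 0  # feedback: remembers how many prompts it has seen

    def respond(self, prompt):
        self.calls += 1
        # Same prompt, different output -- but no randomness anywhere.
        return f"{prompt} (reply #{self.calls})"

bot = EchoBot()
print(bot.respond("hello"))  # hello (reply #1)
print(bot.respond("hello"))  # hello (reply #2)
```

Re-running the whole script always produces the same sequence, so "same prompt twice gives different answers" doesn't by itself prove non-determinism.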
jdm1891 t1_jdb1hqe wrote
It is not deterministic; it picks randomly, with the distribution skewed towards more probable words. That is what makes them so good - the difference between a deterministic and a stochastic AI doing this is the difference between GPT and the predictive text on your phone. Predictive text always picks the most probable word, leading to run-on and repetitive sentences. GPT has the ability to pick a less likely word, which allows it to write much more varied and niche things.
It being stochastic is also the reason you can have it regenerate responses.
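The contrast can be sketched in a few lines (toy probabilities and function names invented for illustration; real models sample over tens of thousands of tokens, not four words):

```python
import math
import random

# Made-up next-word distribution for illustration.
probs = {"the": 0.5, "a": 0.3, "quantum": 0.15, "aardvark": 0.05}

def greedy(probs):
    # "Predictive text" behaviour: always the single most likely word.
    return max(probs, key=probs.get)

def sample(probs, temperature=1.0, rng=random):
    # GPT-style behaviour: draw from the distribution. Temperature reshapes
    # it: lower values approach greedy, higher values flatten it out.
    weights = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # guard against floating-point underflow

print(greedy(probs))            # always "the"
print(sample(probs, 0.8))       # usually "the", occasionally a rarer word
```

Greedy decoding is fully deterministic; the sampler is where the "regenerate response" variety comes from.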
Human-Concept t1_jdbur3i wrote
Doesn't help if information is contextual. E.g. if I say "the sun rises in the east," that's a statement with the implicit context "on Earth, as humans observe it from Earth." The sun doesn't rise at all if the context is changed to "in space."
We can solve the "trustworthy sources" issue. We can't solve contextual errors - that's why legalese is so long-winded. At least for now, context will always be an issue for these AI programs. Maybe some day we will figure out how to encode context, but it is definitely not happening right now.