Peleton011 t1_jdvtqq0 wrote
Reply to comment by SkinnyJoshPeck in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
Unless I'm wrong somewhere, LLMs work with probabilities: they output the most likely response based on their training data.
They could certainly report how likely a given response is, and since the real papers would be part of the training set, answers the model is less sure of are, statistically, going to be less likely to be true.
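The idea above can be sketched: a causal LM assigns each token a conditional probability, and summing the log-probabilities gives how likely the whole response is under the model. A toy illustration (the per-token probabilities below are made up for the example, not taken from a real model):

```python
import math

def sequence_log_prob(token_probs):
    """Sum of log-probabilities of each token given its prefix.

    token_probs: hypothetical conditional probabilities P(token_i | tokens_<i),
    the kind of values a language model's softmax produces per token.
    """
    return sum(math.log(p) for p in token_probs)

# Hypothetical per-token probabilities for two candidate citations.
real_paper = [0.9, 0.8, 0.85, 0.9]     # tokens well supported by training data
made_up_paper = [0.4, 0.3, 0.5, 0.35]  # tokens with little training support

# The citation the model is surer about gets a higher sequence log-probability.
print(sequence_log_prob(real_paper) > sequence_log_prob(made_up_paper))  # True
```

In practice this is what per-token log-probabilities from an LLM API would let you compute; low sequence likelihood is a signal, not proof, of a hallucinated reference.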
Peleton011 t1_jdxolt1 wrote
Reply to comment by RageOnGoneDo in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
I mean, I said LLMs definitely could do that; I never intended to convey that that's what's going on in OP's case, or that ChatGPT specifically is able to do so.