agm1984
agm1984 t1_je5gc8n wrote
Reply to comment by AthKaElGal in Aggregate measure of financial misreporting for nearly 2,000 companies in the U.S. suggests that the collective probability of fraud across major companies is the highest in over 40 years by marketrent
Earnings Before I Tricked Dumb Auditors (EBITDA)
agm1984 t1_jdcoyl5 wrote
Reply to comment by cas13f in Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow by altmorty
I agree with you, but it also represents the interface between human and machine, so it must be accurate.
The issue I am highlighting is minor but might be revealing some unfortunate aspects. For example, if you can adopt a mathematical approach to deciding what words to use, there is a kind of latent space in which the answer to your question draws an octopus tentacle of sorts. The shape of the tentacle is analogous to the chosen words.
My argument is that the tentacle can be deformed at parts of a sentence related to the word 'is' (which is comically an equals sign) because it misrepresents the level of precision the system is aware of. For me this is a huge problem because it means one or both of the following: the "AI" is not extrapolating the correct meaning from the lowest common denominator of cumulative source materials, or the source materials themselves are causing the "AI" to derive a bum value in the specific context with the problem.
My example of gravity 'at all scales' is interesting because there is no world where a scientist can assert such a concrete clause. In actual English terms, it's more like a restrictive clause because the statement hinges on the context around it. Maybe there is a sentence that says "currently" or "to the best of our knowledge", or maybe there is an advanced word vector such as "has been" that helps indicate that gravity is solved here at the moment but might not be in the future.
It's very minor, but my warning extends to a time when a human is reading that and taking fragments at face value because they feel like the "AI" is compiling the real derived truth from the knowledge base of humankind. My warning also extends to a time when a different "AI" is receiving a paragraph from ChatGPT and for the exact same reasons misinterprets it due to these subtle errors of confidence. There's something uncanny about it, and this is where I see an issue currently if you want to use it as an interface. Maybe my side point is that it doesn't make sense to use it as an AI-to-AI interface because you lose so much mathematical accuracy and precision when you render the final output into fuzzy english. Other AIs need to know the exact angle and rotation between words and paragraphs.
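To make the "exact angle between words" idea concrete: in a vector-space model, the angle between two word embeddings is measured by cosine similarity. This is a minimal sketch with made-up 3-dimensional embeddings (real models use hundreds of dimensions, and these particular vectors are invented for illustration):

```python
import math

def cosine(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings for the verb phrases being discussed.
is_vec   = [0.9, 0.1, 0.0]
has_been = [0.8, 0.3, 0.2]

print(round(cosine(is_vec, has_been), 3))  # -> 0.944
```

The two phrases sit close together in this toy space, which is exactly the problem being described: the geometry that distinguishes "is" from "has been" is subtle, and it is lost entirely once the output is rendered into fuzzy English.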
agm1984 t1_jd9dejk wrote
Reply to comment by ObligatoryOption in Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow by altmorty
I find a lot of problems so far with the false equivalence of following commonly associated words (i.e., filling in the blanks with the highest-probability combination).
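A toy sketch of what "highest-probability combination" means (this is an assumed simplification for illustration, not ChatGPT's actual decoding implementation): the model picks the most probable continuation, with no notion of how well-supported the resulting claim is.

```python
# Invented probabilities for continuations of
# "general relativity explains gravity ___".
next_token_probs = {
    "at all scales": 0.62,    # common phrasing in source material
    "at large scales": 0.25,  # more defensible, but less frequent
    "approximately": 0.13,
}

def fill_in_the_blank(probs):
    """Greedy choice: take the most probable continuation,
    ignoring its epistemic status entirely."""
    return max(probs, key=probs.get)

print(fill_in_the_blank(next_token_probs))  # -> "at all scales"
```

The defensible phrasing loses simply because it appears less often, which is the false equivalence in a nutshell.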
For example, ChatGPT will make a statement that says general relativity explains gravity at all scales (in some specific context), and you are left wondering whether the peer-reviewed articles actually make such a concrete, unhedged claim. Are the majority of peer-reviewed articles indicating that gravity is explained at "all scales"?
First of all, it does not explain physics at or below the quantum scale, so immediately we have a boundary problem. ChatGPT is asserting that the boundary of general relativity extends below where it is logically supported in a rigorous manner.
Second of all, this is likely highlighting a problem that more rigorous AIs will have to solve. How do they know which clauses are sufficiently proven if the body of science says gravity is explained at all scales? How long until the AI discovers that boundary problem and searches smaller scales?
I've noticed other times too when ChatGPT says "is" instead of "has been". To me it implies a level of certainty that I don't appreciate as a proponent of the scientific method.
To expand slightly more, the problem reminds me in some ways of a composition of math functions, some of which are failing to consider inflection points or second derivative tests in order to selectively change "is" to "has been" or "is" to "is likely" or "is" to "is according to significant authority sources". ChatGPT hits the "is" but fails to give the precise pointer.
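The "is" → "is likely" → "has been" idea can be sketched as a simple mapping from an internal confidence score to a hedged verb phrase. Everything here is hypothetical (the thresholds, the phrase choices, the very existence of such a score as a usable scalar); it only illustrates the kind of pointer ChatGPT currently fails to give:

```python
def qualify(subject, predicate, confidence):
    """Attach a verb phrase whose strength matches the confidence score.
    Thresholds are invented for illustration."""
    if confidence > 0.99:
        verb = "is"             # near-certain: concrete clause allowed
    elif confidence > 0.75:
        verb = "is likely"      # well-supported, but hedged
    else:
        verb = "has been, according to current sources,"
    return f"{subject} {verb} {predicate}"

print(qualify("gravity", "explained by general relativity", 0.8))
# -> "gravity is likely explained by general relativity"
```

The hard part, of course, is everything this sketch skips: deriving a trustworthy confidence value per clause, which is exactly the higher-order analysis being gestured at.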
Side note: I use a crazy term like "second derivative test", which is not to be taken literally. Think of it more as highlighting a scenario where a higher-order analysis might be needed to resolve loose-to-strict or strict-to-loose equality of a given fragment. Implicit context is a nasty thing when you must recurrently analyze micro and macro closure as each new token or group of tokens is added, since each affects the overall accurate and precise meaning. My goal is not to provide a solution area but rather to provide a specific example of an issue type.
agm1984 t1_jeahicy wrote
Reply to It's becoming increasingly clear that fintech has a fraud problem by marketrent
The trick is to unpack fintech to its true symbol: currency transmission siphoning