ObligatoryOption t1_jd99cw3 wrote
Soon, common knowledge will be entirely made up.
agm1984 t1_jd9dejk wrote
I've found a lot of problems so far with the false equivalences that come from following commonly associated words (i.e., filling in the blanks with the highest-probability combination).
For example, ChatGPT will make a statement that general relativity explains gravity at all scales (in some specific context), and you are left wondering whether that means the peer-reviewed literature treats the claim as concrete, with zero caveats. Do the majority of peer-reviewed articles actually indicate that gravity is explained at "all scales"?
First of all, general relativity does not explain gravity at or below the quantum scale, so we immediately have a boundary problem. ChatGPT is asserting that the boundary of general relativity extends below where it is rigorously, logically supported.
Second, this likely highlights a problem that more rigorous AIs will have to solve: how do they know which clauses are sufficiently proven if the body of science says gravity is explained at all scales? How long until the AI discovers that boundary problem and searches smaller scales?
I've noticed other times, too, when ChatGPT says "is" instead of "has been". To me that implies a level of certainty I don't appreciate as a proponent of the scientific method.
To expand slightly: the problem reminds me in some ways of a composition of math functions, some of which fail to consider inflection points or second-derivative tests, and so never selectively change "is" to "has been", "is" to "is likely", or "is" to "is, according to significant authority sources". ChatGPT hits the "is" but fails to give the precise pointer.
Side note: I use a crazy term like "second derivative test" that is not to be taken literally. Think of it more as a way of highlighting scenarios where a higher-order analysis might be needed to resolve loose-to-strict or strict-to-loose equality of a given fragment. Implicit context is a nasty thing when you must recurrently analyze micro and macro closure as each new token or group of tokens is added, because each one affects the overall accuracy and precision of the meaning. My goal is not to propose a solution but rather to give a specific example of an issue type.
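To make the "is" vs "has been" idea slightly more concrete, here is a toy Python sketch. Everything in it is made up for illustration — the support score, the thresholds, the scope flag — and real models expose nothing like this; it just shows the kind of hedging step I wish sat between the statistics and the final sentence:

```python
# Toy sketch only: made-up support scores and thresholds, not anything a real
# language model exposes. The idea is that the verb phrase attached to a claim
# could be softened based on how well the claim's full scope is supported.

def hedge(claim: str, support: float, scope_fully_tested: bool) -> str:
    """Pick a verb phrase from a hypothetical support score in [0, 1]."""
    if support > 0.95 and scope_fully_tested:
        verb = "is"                            # the strict "equals sign" is earned
    elif support > 0.80:
        verb = "has been shown to be"          # strong but historical evidence
    elif support > 0.50:
        verb = "is likely"
    else:
        verb = "is, according to some sources,"
    return f"Gravity {verb} {claim}."

# General relativity is extremely well supported, but "all scales" is not a
# fully tested scope (quantum scales are the boundary problem), so the bare
# "is" never fires here.
print(hedge("explained by general relativity at all scales",
            support=0.90, scope_fully_tested=False))
```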
jayzeeinthehouse t1_jdafx37 wrote
I'm going to call this the Neil deGrasse Tyson problem, because the model can be confident about its core knowledge, like validated articles, but it also confidently provides incorrect information outside of that bubble, and users don't know any better. Let's wait for advertisers to muddy that even more. Accurate information is going to become so hard to come by that I think the internet will eat itself.
cmfarsight t1_jdb8c0k wrote
It's just a massive Dunning-Kruger effect machine. Sure, it can look at and sort a massive data set, but it doesn't actually understand any of it, so it will serve up rubbish with the same confidence as the truth.
cas13f t1_jdcbtkv wrote
The real issue is people expecting a language model, which is all it is, to be an "AI" that "knows everything".
It can write. That is what it is for. It does not have any intelligence, regardless of being a "language AI". The purpose of the model is to generate text in a generally grammatically correct manner when prompted, which is why it has been known to simply make up citations when asked to include them: as far as the model is concerned, a citation looks a certain way and follows a specific grammatical configuration, so it just needs to produce that in relation to the prompted words.
It's also why it can't do math. It wasn't designed to do so, and the model was not trained to do so. It can only use word association to write something that looks grammatically relevant.
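Here's a crude Python toy of what "a citation looks a certain way" means in practice. This is just slot-filling with invented names and numbers, not how the model is actually implemented, but the output has the right shape while referring to nothing:

```python
import random

# Toy illustration of "a citation looks a certain way": fill the slots with
# plausible-looking pieces and the result is well-formed but entirely invented.
surnames = ["Smith", "Chen", "Garcia", "Okafor"]
journals = ["Journal of Applied Research", "Annals of Computational Studies"]

def fake_citation(topic: str) -> str:
    author = f"{random.choice(surnames)}, {random.choice('ABCDEF')}."
    year = random.randint(1995, 2022)
    volume = random.randint(3, 40)
    first_page = random.randint(100, 900)
    return (f"{author} ({year}). {topic.title()}: A Review. "
            f"{random.choice(journals)}, {volume}, {first_page}-{first_page + 15}.")

print(fake_citation("gravity at all scales"))  # grammatical, confident, nonexistent
```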
agm1984 t1_jdcoyl5 wrote
I agree with you, but it also represents the interface between human and machine, so it must be accurate.
The issue I am highlighting is minor but might reveal some unfortunate aspects. For example, if you adopt a mathematical approach to deciding which words to use, there is a kind of latent space in which the answer to your question traces an octopus tentacle of sorts. The shape of the tentacle is analogous to the chosen words.
My argument is that the tentacle gets deformed at the parts of a sentence related to the word 'is' (which is, comically, an equals sign), because the model misrepresents the level of precision it is aware of. For me this is a huge problem, because it means either (or both) of the following: the "AI" is not extracting the correct meaning from the lowest common denominator of its cumulative source materials, or the source materials themselves are causing the "AI" to derive a bum value in that specific context.
My example of gravity 'at all scales' is interesting because there is no world in which a scientist can assert such a concrete clause. In actual English terms, it's more like a restrictive clause, because the statement hinges on the context around it. Maybe there is a sentence that says "currently" or "to the best of our knowledge", or maybe there is an advanced word vector such as "has been" that helps indicate that gravity is considered solved here at the moment but might not be in the future.
It's very minor, but my warning extends to a time when a human reads that output and takes fragments at face value because they feel like the "AI" is compiling the real, derived truth from the knowledge base of humankind. My warning also extends to a time when a different "AI" receives a paragraph from ChatGPT and, for the exact same reasons, misinterprets it due to these subtle errors of confidence. There's something uncanny about it, and this is where I currently see an issue if you want to use it as an interface. Maybe my side point is that it doesn't make sense to use it as an AI-to-AI interface, because you lose so much mathematical accuracy and precision when you render the final output into fuzzy English. Other AIs need to know the exact angle and rotation between words and paragraphs.
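To show what I mean by "angle and rotation between words", here's a toy Python sketch with made-up 3-dimensional vectors standing in for real embeddings (real models use hundreds or thousands of dimensions, and these numbers are invented). The point is only that the geometry carries graded information that disappears once everything is flattened into prose:

```python
import math

# Made-up 3-d "embeddings" standing in for real word vectors, purely to show
# the geometric information (the angle between words) that fuzzy English loses.
vectors = {
    "is":        [0.9, 0.1, 0.0],
    "has_been":  [0.7, 0.6, 0.1],
    "is_likely": [0.5, 0.3, 0.8],
}

def angle_deg(a, b):
    """Angle between two vectors in degrees (derived from cosine similarity)."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return math.degrees(math.acos(dot / norms))

print(round(angle_deg(vectors["is"], vectors["has_been"]), 1))   # ~35 degrees: close relatives
print(round(angle_deg(vectors["is"], vectors["is_likely"]), 1))  # ~58 degrees: weaker claim
```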
Sea-Strategy-8314 t1_jdahtby wrote
It pretty much already is
petwalrus t1_jdanpeo wrote
Yup. Common being a more important factor than knowledge.
SidewaysFancyPrance t1_jd9dxb0 wrote
What will art look like in 100 years, when it's just AI copying AI copying AI copying AI... copying AI copying artists from today? Many potential artists will eventually stop learning art, because people will stop paying for it once AI drops the value they place on it. Sure, some artists will keep their crafts alive, since actual human art will be prized by the wealthy, but the number of paying art jobs will fall over time. Back to the old days of patronage.
Super_Capital_9969 t1_jdlm2rm wrote
What do you mean, soon? Wikipedia has been around for years.