Submitted by Wagamaga t3_10ssa4j in technology
Comments
Chief_Beef_ATL t1_j73d6kh wrote
Yes. Yes it could.
fitzroy95 t1_j738kmc wrote
it will do whatever its programmers tell it to do, based on the data and models it is supplied with.
Whatever it returns will be based on whatever source data it's been fed, and the accuracy thereof. Which leaves it open to bias of any sort, and to spreading misinformation, etc., if that has been included in its source feed, whether deliberately or unwittingly.
However, if there is an effort to sanitise all of those data feeds, there is also a risk of introducing bias from that sanitisation process, based on the chosen sources.
[deleted] t1_j73aoq4 wrote
[deleted]
defcon_penguin t1_j73n9mm wrote
Bots spreading fake news, misinformation, spam, and/or phishing links will become much smarter and more convincing
nuisanceCreator t1_j745cnm wrote
Who decides what is a false narrative?
anti-torque t1_j74crla wrote
Is that what he's been doing since he retired from baseball?
Marchello_E t1_j74d5g2 wrote
With a bit of critical thinking, most manipulative narratives can be seen from 'miles away'. Yet we've seen with COVID deniers, QAnon manipulations, and a certain ex-president spouting nonsense each and every day that a large group just follows the narrative.
So what's a false narrative? One that causes avoidable harm. A narrative that spreads fear, uncertainty and doubt. A narrative that attacks a person to win an argument but doesn't provide a solution...
nuisanceCreator t1_j74lc5x wrote
This sounds as if it came straight from ChatGPT.
For example, here's an answer from ChatGPT to the question "Are trans women women?"
> Trans women are individuals who identify as women and live as women. Gender identity is a personal and deeply held sense of one's own gender, and can be different from the sex assigned at birth. People have the right to define their own gender identity and to live in accordance with that identity.
Is that a true narrative?
Marchello_E t1_j74olui wrote
No, it did not. Basically, whether a narrative is false is decided by the fallacies in the context it's used in, or by its manipulative use.
As for your question, there is no definitive answer: it depends on the context.
ayleidanthropologist t1_j77hwoh wrote
I found ChatbotGPT
Marchello_E t1_j738ab0 wrote
ChatGPT is already able to convince people that they have the right information. It is trained to find statistical correlations in language, not truths. For now it gets its information from non-AI sources, but who actually knows what those sources are. The more often certain information gets repeated, the more likely it is to end up as a source of training data. When more and more articles in the near future get written by an AI (not necessarily the same one), the validity of the constructed narrative will start to spiral downwards at an alarming rate as the "source" simply gets reinforced by its own wackiness.
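The feedback loop described above can be sketched with a toy simulation. This is not how any real model is trained; it just assumes (hypothetically) that each generation of AI-written articles amplifies whichever claim is currently most common in the corpus, and that this output flows back into the next round of training data:

```python
from collections import Counter

def simulate_feedback(corpus, generations, articles_per_gen):
    """Toy model of AI-written text feeding back into training data.

    Each generation, a hypothetical model 'trained' on the corpus writes
    new articles that all repeat whichever claim is currently most common,
    a crude stand-in for a model amplifying its majority signal.
    """
    counts = Counter(corpus)
    history = []
    for _ in range(generations):
        majority = counts.most_common(1)[0][0]
        counts[majority] += articles_per_gen  # AI output re-enters the corpus
        history.append(counts[majority] / sum(counts.values()))
    return history

# 6 repeats of a catchy-but-wrong claim vs. 5 of the accurate one:
shares = simulate_feedback(
    ["catchy-but-wrong"] * 6 + ["accurate"] * 5,
    generations=5,
    articles_per_gen=10,
)
```

Even with a near-even starting split, the slightly more common claim's share of the corpus grows every generation, which is the "reinforced by its own wackiness" spiral in miniature.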