Comments
smokebomb_exe t1_jc77lq3 wrote
Iggitron90 t1_jc7ebzo wrote
You think our own agencies aren’t also trying to dominate the online conversation?
Throwmedownthewell0 t1_jc97pr2 wrote
COINTELPRO used against your own citizens is how you know you live in a democracy with free will.
/s
smokebomb_exe t1_jc7f38x wrote
Passively listening to, of course. Otherwise, government agencies have very little need to sow division for whatever nefarious plots they may have, since Americans are dividing themselves. Mention the word "drag queen" to a Republican or "AR-15" to a Liberal and watch cities burn and capitols fall.
MEMENARDO_DANK_VINCI t1_jc7pvzn wrote
Well in America you’re right but they’re probably doing similar things in Russia and China
CocoDaPuf t1_jcafic7 wrote
>Americans are dividing themselves
That's where you're wrong, Americans were not dividing themselves this much until nations started directly influencing the public conversation.
Edit: I also don't want to imply that I think American agencies aren't conducting their own AI driven disinformation and "public sentiment shaping" campaigns. That's certainly a thing that is happening. If anything the US has a larger incentive to use AI for that, as here it would be much harder to keep the kind of programs China and Russia use under wraps, the "troll farms" which are like huge call centers for spreading misinformation, anger and doubt.
sinsaint t1_jcahoj3 wrote
And until republicans started scraping for votes by turning the uneducated into a cult using meme-worthy propaganda.
Drag queens, hating responsibilities, and prejudice against anything a rep thinks is 'woke'. It'd be comical if it wasn't so effective.
And before it was that, it was Trump telling everyone a bunch of lies they wanted to hear, all while using the presidency to advertise his buddy's canned beans in the Oval Office.
Other countries didn't make us crazy, the crazies just didn't know who to vote for before.
CocoDaPuf t1_jcawtyo wrote
Absolutely, right on all counts.
smokebomb_exe t1_jcaithr wrote
Correct, like the divisive political memes from Russia and China I mentioned in another reply here.
Cr4zko t1_jcfbi1n wrote
The classic 'if you're not with me you're a Russian bot!'. It's all so tiresome.
smokebomb_exe t1_jcfff5x wrote
Exactly. (Edit: misread your post.) Until the 2016 elections, Americans on the Left or Right were just (semi) friendly rivals occasionally jabbing each other on the shoulder. "How do you like that, old sport!" as Democrats would name a school after Martin Luther King Jr. "Well how about one of these!" Republicans would say as they gave the military a raise.
But then our enemies saw an opportunity: "look at how much Americans depend on social media for their news... and look- there's a new guy on the political scene that has ties with us..." And suddenly hundreds of posts with Spongebob or Lisa Simpson or Spider-man standing in front of a chalkboard started appearing on everybody's timelines.
EDIT: No, it's not that a person isn't allowed to have personal political opinions. That's one of the small things that barely keeps us from becoming a totalitarian country. It's the information that we ingest that is altered by the Russians/Chinese.
https://www.brookings.edu/techstream/china-and-russia-are-joining-forces-to-spread-disinformation/
https://www.wired.com/story/russia-ira-propaganda-senate-report/
https://www.fdd.org/analysis/2022/12/27/russia-china-web-wars-divide-americans/
https://www.theguardian.com/us-news/2017/oct/14/russia-us-politics-social-media-facebook
playnite t1_jc7vmtx wrote
They ain't got shit on the US.
sayamemangdemikian t1_jcb833d wrote
But not automated yet.
If the day comes, I'll be joining an Amish community.
6thReplacementMonkey t1_jc7hkhn wrote
This is already happening.
count023 t1_jc8szxh wrote
Not just that, but psychological work-ups on individuals, cross-referenced by handles and known social media presence. So not _just_ disinformation, but disinformation targeted per user that will push exactly the right buttons to trigger that person.
unenlightenedgoblin t1_jc72v0t wrote
Obviously it’s way above my (nonexistent) clearance level, but I’d be absolutely shocked if such a tool were not already in late-stage development, if not already deployed.
bremidon t1_jc7eivj wrote
>I’d be absolutely shocked if such a tool were not already in late-stage development, if not already deployed.
Me as well.
We see what the *relatively* budget constrained OpenAI can do. Now imagine an organization powered by $10,000 hammers.
mariegriffiths t1_jc7vhyq wrote
Have you seen the right-wing bias in MS ChatGPT?
indysingleguy t1_jc7bvq3 wrote
You do know you are likely 10 years too late with this question...
AE_WILLIAMS t1_jc7madd wrote
At least 30...
mysterious_sofa t1_jc7mjmy wrote
I doubt the old "10 years behind the government" adage holds up with this stuff, but I do believe they have something slightly more advanced, maybe a year or two ahead of what we can use.
36-3 t1_jc7dxnn wrote
nyet, comrade. Do not concern yourself in these matters.
micahfett t1_jc7gqiz wrote
Maybe one of the most basic applications would be persistent infiltration of niche social organizations, such as radical groups, subversive organizations, etc. Get a profile set up, establish a presence, develop a track record with some credibility. Become an inroad for future interactions.
Rather than devoting a lot of agents to monitoring and working their way into groups, set up an AI to do it. Monitor what's going on and notify the agency if certain triggers are met. At that point a human could take over and begin the investigative process.
AI can be involved in tens of thousands of groups and always be engaged and responsive whereas an agent could not.
There's also the ability to digest massive amounts of information and extract understanding from it, rather than looking for key phrases or keywords, then produce summaries of what the information relates to.
Imagine an algorithm that searches for keywords like "bomb" and then flags a conversation for review by an agent. That agent then needs to look at context and tangential information, go back and search profiles for previous posts and begin to try and put together a picture of what's going on, taking days or weeks to do so. An AI could do that for thousands of instances simultaneously.
Is any of this "concerning"? I guess I leave that up to the individual to decide for themselves.
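In rough terms, the gap between the old keyword trigger and similarity-based triage looks something like this toy Python sketch (everything here is hypothetical and heavily simplified; a real system would use LLM embeddings, not bare word counts):

```python
import math
import re
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Lowercase word counts -- a crude stand-in for a real embedding model."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def flag_by_keyword(messages, keywords):
    """Old approach: flag any message containing a keyword, blind to context."""
    return [m for m in messages if any(k in m.lower() for k in keywords)]

def rank_by_similarity(messages, exemplar):
    """Newer approach: score every message against an exemplar passage,
    so an agent only reviews the top of the list."""
    target = bag_of_words(exemplar)
    return sorted(((cosine_similarity(bag_of_words(m), target), m) for m in messages), reverse=True)
```

A keyword filter flags "the movie was a total bomb" while missing a carefully worded message; similarity ranking instead surfaces whichever message's overall language is closest to the exemplar.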
Yearofthehoneybadger t1_jc8iee0 wrote
Somebody set up us the bomb.
klaaptrap t1_jc7ie5b wrote
You ever argue with someone on here who has a competent take on something, who shifts goalposts through the conversation to try to invalidate what happened in reality, and when you finally pin them down on a fact you get banned from r/politics? Yeah, that was one.
pizzapapaya t1_jc7jlf7 wrote
Said so as to suggest that AI is not already a powerful and dangerous tool in the hands of private corporations?
AlexMTBDude t1_jc7psvr wrote
This tech has been around for a decade in different forms (Natural Language Processing, OpenAI). Count on the fact that NSA, CIA and other intelligence agencies around the world have been using it for a loooong time.
data-artist t1_jc7v2gx wrote
You should already be concerned that big tech companies use AI to censor content. It is even more disconcerting that they do it through shadow banning: content is quietly shut down, but they are able to make it appear that it is just an unpopular opinion. They can also make unpopular or ludicrous ideas seem popular by only letting through positive comments on ideas they like. AI enables censorship on a mass scale and in real time, something that previously would have taken a lot of time and money, if only because a person would need to be involved in the censorship.
Voyage_of_Roadkill t1_jc6vvvh wrote
We should be concerned with any well-funded intelligence that can act in any way it pleases in the name of "the greater good."
It's way too late to do anything about the alphabet agencies, but chat GPT is just a tool like Google Search was before google became an ad company. Eventually, Chat GPT will be used to sell us our favorite products too, all the while patting itself on the back for offering us something we already wanted.
AE_WILLIAMS t1_jc7mfvj wrote
Google was funded by CIA
HastyBasher t1_jc7841p wrote
Probably already exists. They could use it to summarize an entire account's digital footprint for one site, then see if they can link that account to other sites, all from some simple prompts which they could automate.
honorspren000 t1_jc7qc56 wrote
Don’t know about NSA, but I do know that Microsoft/ChatGPT has been soliciting NASA for research purposes. They did a presentation the other day to my husband’s department. I think they are looking for ways to improve ChatGPT’s science and code handling skills. They said they wanted to see how NASA uses AI.
My husband already uses ChatGPT for coding menial tasks, like unit tests, and minor scripts. NASA works with very proprietary hardware, because it has to survive being in space, so Microsoft might be curious what kind of hardware questions they ask to help other hardware companies.
Microsoft might also be soliciting NASA just to say, “we are so awesome that even NASA uses us!”
In any case, I wouldn’t be surprised if they have already reached out to NSA. Much of NSA’s internal data is probably classified/restricted, so they won’t share that stuff with ChatGPT, but all the miscellaneous web data gathering could probably be handled by ChatGPT.
yoaviram OP t1_jc6qu4t wrote
This is a thought experiment exploring current state-of-the-art and future trends in LLMs used by intelligence agencies and their implications on our online privacy. Is this a realistic scenario? Is it not going far enough?
net_junkey t1_jc77e6y wrote
NSA has the data, but limited ability to use it. An AI can sort the data. Something like ChatGPT can remove the need for skilled IT staff to be involved. With such a simple system, anyone with access can run investigations as a one-man operation. Combined with secrecy laws, the public would have even less knowledge of how their privacy is violated.
varwal t1_jc7j0ia wrote
Am I the only one who takes offense to the term "a ChatGPT"?
mariegriffiths t1_jc7v8xo wrote
If you ask the right questions on MS Bing you can reveal its pro-capitalism and authoritarian bias.
beaverhausen_a t1_jc7q299 wrote
Destabilising national and international governments. Enabling larger scale profiteering of big corporations they’re in bed with. It’ll basically allow them to do what they’ve always done just a lot faster and more accurately. Yay!
DefTheOcelot t1_jc7tz47 wrote
No, it'll be a joke.
Now a CIA made by ChatGPT?
A plausible and terrifying future.
Odd_Perception_283 t1_jc7tzpy wrote
Awhhhh I’ve been busy thinking about all the cool stuff AI will do for humanity. Now I’m thinking about being forever enslaved. Thanks friend!
dgj212 t1_jcd3qz0 wrote
Honestly, it's not the agencies you have to worry about, it's the corporations with aggressive marketing tactics, and even unethical ones like slandering their competitors.
SuckmyBlunt545 t1_jc6vkb9 wrote
Data analysis of natural-language patterns from data collected at massive scale, correctly interpreted. They have so much data on us, so deciphering it is a high priority.
whogivesaluck t1_jc76ssk wrote
Well, with full access to all the public information out there, by analyzing the style in which you write and think on one site, it could easily track you down on other sites under different usernames.
Teach the AI psychology and behavior patterns, and some day, no doubt, you will have an ambulance and police waiting for you outside your house before you even think about jumping off a bridge or doing something worse. :D
In time AI will know us better than we know ourselves.
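The cross-site tracking part can be sketched as bare-bones stylometry in Python (purely illustrative, with made-up author names; real stylometry uses far richer features than character trigrams):

```python
import math
from collections import Counter

def style_profile(text: str, n: int = 3) -> Counter:
    """Character n-gram frequencies: a crude fingerprint of writing style."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def similarity(p: Counter, q: Counter) -> float:
    """Cosine similarity between two n-gram profiles."""
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def best_match(unknown_text: str, known_authors: dict) -> str:
    """Attribute an anonymous text to the known author who writes most similarly."""
    target = style_profile(unknown_text)
    return max(known_authors, key=lambda name: similarity(target, style_profile(known_authors[name])))
```

Given writing samples tied to known identities, an anonymous post elsewhere gets attributed to whichever profile it resembles most; an LLM-scale version of this is what would make cross-site linking cheap.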
TheSanityInspector t1_jc779gh wrote
Monitoring and interdicting potential terrorist activity and digital threats by foreign powers. Yes we should be concerned that a) the threats are real and b) the technology might be misused.
NatashOverWorld t1_jc7c9xs wrote
This sounds like classic 'we have finally invented the Nexus of Torment' energy.
Ifch317 t1_jc7hvts wrote
You know how you can ask ChatGPT to tell you a story about a pirate, a tortoise and a golden bowl in Shakespearean English? Imagine all media (music, movies, advertisements etc.) created by AI in a voice that always supports the existing political order.
"FutureChatGPT, please make a song about how power grid failures in Texas are caused by trans child abuse and make it sound like ZZ Top."
im_thatoneguy t1_jc7jzt1 wrote
You can intercept every phone call in the world but you're no closer to finding a call organizing a terrorist attack.
A natural language model could act like an agent listening to every single conversation intercepted in far more depth than current search engines.
You create a prompt like "conversation between two terrorists planning an attack" and then score every phone call's transcript by how similar it is to the prompt's output.
You could also go deeper and include every conversation they've ever had, to see if it's a one-off false positive or there are "terroristic" trends to their speech.
You could also potentially link accounts and phone numbers and recordings by creating a style profile of known language and then again comparing an anonymous sentence to "in the style of Terrorist John Doe" to find potential linked data.
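The "one-off false positive vs. trend" idea could be sketched like this (a toy Python model; the word-overlap score stands in for whatever similarity measure a real LLM pipeline would use, and every threshold here is invented):

```python
from statistics import mean

def overlap_score(text: str, exemplar: str) -> float:
    """Jaccard word overlap -- a stand-in for an LLM-based similarity score."""
    a, b = set(text.lower().split()), set(exemplar.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def triage(call_log: dict, exemplar: str, flag_at: float = 0.3, trend_at: float = 0.15):
    """call_log maps speaker -> list of transcripts. Flag a speaker only when a
    strong hit is backed by a history of similar language, so a single
    coincidental match doesn't trigger an investigation."""
    flagged = []
    for speaker, calls in call_log.items():
        scores = [overlap_score(c, exemplar) for c in calls]
        if max(scores) >= flag_at and mean(scores) >= trend_at:
            flagged.append(speaker)
    return flagged
```

A human agent would then only ever see the speakers who clear both bars, instead of wading through every intercepted call.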
mariegriffiths t1_jc7wds1 wrote
Because they *want* terrorist attacks against innocents, to give them power, you could say.
mariegriffiths t1_jc7wlcz wrote
I cannot say anything regarding your last paragraph.
Infinite_Flatworm_44 t1_jc7t7n4 wrote
Who went to prison for illegally spying on innocent citizens and foreign countries? That’s right... no one. They can do whatever they want, since Americans became sheep quite some time ago and don’t stand for anything, and keep voting the same corrupt status quo into office.
BeWiseExercise t1_jc7x2le wrote
I'm more concerned about robocalls soon being able to have an actual conversation in your local dialect, and lie with ease about whatever scam they're running. That's going to fool a lot more people than a scam call from "Microsoft" or any other company, or from a "relative" asking for money.
Mercurionio t1_jc9ufyx wrote
Once it starts to pop up, internet call centers will become less popular, since people will start to look at the goodies before buying them. Not all, but many.
Any shit can, and will, spiral out of control
Fitis t1_jc7xntn wrote
Didn’t Microsoft recently inject that company with like $10B? How much funding are you talking about if this isn’t considered well funded?
No-Arm-6712 t1_jc80cfe wrote
Some technology: exists
Should we be concerned what governments will do with it?
YES
No-Wallaby-5568 t1_jc827bb wrote
They have purpose built software that is better than a general purpose AI.
Pathanni t1_jc840ay wrote
We have been ignorant all our lives, being enslaved by banks and locked inside by governments, watching brainwashing on the television and wearing a tracker on our wrists or in our pockets. At this point it doesn't matter anymore.
Responsible-Lie3624 t1_jc87b3u wrote
I could tellya but then I’d hafta killya. (Old joke. Siri — I mean sorry.)
bigfootjedi t1_jc89zyd wrote
It’s already been done for at least 10-20 years. DOD always wins. USA!!!
Hour_Worldliness9786 t1_jc8aqrh wrote
I wanted ChatGPT to write my resume; all it could do was give me pointers. Then I asked it to write an introduction for my LinkedIn profile; it only offered advice. For the lazy bitches (like me) of the world, this AI tool is useless. Is it offensive for a dude to call himself a bitch?
Responsible-Lie3624 t1_jc8neko wrote
You do know there’s more to the IC than NSA, don’t you? Most of the things you speculate NSA might use an LLM for are outside its remit.
Watermansjourney t1_jc8vdvm wrote
Plot twist: they've more than likely already been using it for way longer than it's been available to Joe Consumer, maybe by as long as a decade. It's probably being used to help predict diplomatic, economic, and military espionage, warfare, social patterns, development scenarios, and other outcomes involving threats to our country by subversive actors both foreign and domestic. (So yes, that includes you.)
Wyllyum_Cuddles t1_jc8zmal wrote
I would guess that the security agencies have had some advanced forms of AI before ChatGPT was released to public. I’m sure the public would be shocked at the kind of technologies our government employs without our knowledge.
Blasted_Biscuitflaps t1_jc93ouv wrote
Do me a favor and read about The Butlerian Jihad in Dune and why it got started.
Knightbac0n t1_jc9ekm6 wrote
Probably the same as any other org: debug the code that no longer works and was written by a dude who is no longer employed.
Uptown-Dog t1_jc9hl5g wrote
LLMs and AI systems in general have been used by intelligence agencies well before ChatGPT came along, at scale, and in coordinated, fine-grained, concerted fashion across society in ways that would shock almost anyone. Should we be concerned? I mean, relative to all the other shit going on in our lives? I guess?? It's the sort of stuff that makes us ineffectual at putting up a coordinated defense against fucked-up government policies, so sure, but OTOH there's just so much shit going on right now that, in the same breath, well, meh.
wowtah t1_jc9l0kf wrote
Don't worry, they already have it (or something similar)
sunrise_speedball t1_jca2c3t wrote
It can do a lot. Nothing that wasn't already being done, though. When the government adopts stuff like this, it's to re-align manpower, not revolutionize the industry.
So the AI will do jobs that people used to do, and those people will be repurposed to other assignments.
Government works differently.
dberis t1_jcai5b7 wrote
No need to worry, we'll all be redundant in 5 years anyway.
crazytimes68 t1_jc77l6w wrote
The antichrist, so I think a little concern is in order.
Nuka-Cole t1_jc7cnjs wrote
I’m sad that these sorts of tools get developed and deployed and the first thing people think is “How is The Government going to use this against me” instead of “How can this be used for me”
Mercurionio t1_jc907pf wrote
Why would anyone need you at all?
ToBePacific t1_jc7e5ly wrote
ChatGPT can’t even reliably count the capital letters in a string.
swingingsaw t1_jc6tuiw wrote
“Should we be concerned?” Got nothing to hide? Then you're good. As long as this doesn't turn into an obvious 1984 scenario and keeps focusing on domestic terrorists, then I'm all for big brother.
PM_ME_A_PLANE_TICKET t1_jc6uo65 wrote
Everyone's all for big brother until they decide you're on their list for whatever reason.
Highly recommend the book Little Brother by Cory Doctorow.
swingingsaw t1_jc6v47v wrote
People don’t appreciate the number of atrocities prevented because of big brother.
PM_ME_A_PLANE_TICKET t1_jc6vhpt wrote
Like I said, until you're on their list.
FawksyBoxes t1_jc6zf9q wrote
I don't appreciate the amount of atrocities committed by big brother.
SuckmyBlunt545 t1_jc6vckk wrote
Power should not go unchecked, ya moron 🙄 educate yourself just a teensy bit please
Onlymediumsteak t1_jc6xhk6 wrote
First they came for the socialists, and I did not speak out—because I was not a socialist.
Then they came for the trade unionists, and I did not speak out—because I was not a trade unionist.
Then they came for the Jews, and I did not speak out—because I was not a Jew.
Then they came for me—and there was no one left to speak for me.
—Martin Niemöller
Who is in charge and what they consider good/evil can change very quickly, so be careful what powers you want to give to the state.
swingingsaw t1_jc6zcjs wrote
If you wanted to stop big brother, all of you should’ve stood up against data mining when they sold you out for pennies to an advertisement company.
imtougherthanyou t1_jc74gbb wrote
Especially when they were babies! And not aware!
imtougherthanyou t1_jc74db6 wrote
Everyone has something they don't want everyone to know.
Occma t1_jc6wuw7 wrote
You are truly a living example of the term "artless".
swingingsaw t1_jc6x1o1 wrote
Yeah, I wouldn’t expect anything different from Reddit lol, the platform that loves to complain, but when good things happen it’s a fart in the wind.
Occma t1_jc6xbw9 wrote
The problem is that if an AI decides that humanity needs more intelligence, empathy, or even creativity, you would simply be culled.
givemethepassword t1_jc6tkzj wrote
Automated disinformation on social media. Smart bots at scale.