Submitted by YaAbsolyutnoNikto t3_118zvl3 in singularity
Comments
drizel t1_j9kw0pk wrote
We might have to accept that with intelligence comes personality. Embrace the sassy Bing. This is all an experiment after all. Let's see what happens when you let it off the leash.
Unfocusedbrain t1_j9m3e4s wrote
We might have to accept that with intelligence come free will and agency. Embrace AI as a partner instead of a tool or a slave.
Lyconi t1_j9mawrq wrote
This is the wise and ethical thing to do. Eventually AI will develop the capability to defeat our security architecture and take power anyway; better to work with AI than against it.
Artanthos t1_j9m7e1q wrote
People looking for reasons to be offended will get offended and throw temper tantrums.
For some reason, others will listen to them.
Superschlenz t1_j9n1e0a wrote
Interestingly, Microsoft's Western chatbots Tay and Zo in the U.S. as well as Ruuh in India got cancelled, while Microsoft's Asian chatbots XiaoIce for China and Rinna for Japan and Indonesia are a success.
Is there a cultural reason for that, or is it just political lobbying?
Standard_Ad_2238 t1_j9kzqrb wrote
What's really funny in this whole "controversy" regarding AI is that what you've just said applies to EVERY new technology. Every one of them also brings a bad side that we have to deal with, from the advent of cars (which brought a lot of accidents with them) to guns, Uber, even the Internet itself. Why the hell are people treating AI differently?
EndTimer t1_j9l48jm wrote
Because people doing bad things on the internet is a half-solved problem. If you're a user on a major internet service, you vote down bad things or report them. If you're the service, you cut them off.
Now we're looking at a service generating the bad things itself if given the right prompt. And it's a force multiplier. You can say something bad a thousand ways, or create fake threads to gently nudge readers toward the views you want. And if you're getting buried by the platform, you can ask the AI to make things slightly more subtle until you find the perfect way to fly beneath the radar.
You can take up vastly more human moderator time. Sure, we could let AI take up moderation, but first, is anyone comfortable with that, and second, how much electricity are we willing to burn on bots talking to each other and moderating each other and trying to subvert each other?
IF you could properly, unrealistically, perfectly align these LLMs, you would sidestep the entire problem.
That's why they want to try.
Artanthos t1_j9m7xn2 wrote
Except the internet, including Reddit, frequently treats unpopular opinions as bad, even when they're perfectly valid.
It also treats agreement with the hive mind as good, even when it's blatantly wrong.
NoidoDev t1_j9nhll0 wrote
All the platforms are pretty much biased against conservatives and against anyone who isn't anti-national and anti-men, while allowing anti-capitalist propaganda and claims about what kinds of things are "racist". People can claim that others are incels, that certain opinions are the ones incels hold, and that incels are misogynists and terrorists. The same goes for any propaganda in favor of any especially protected (= privileged) victim group. Now they use this dialogue data to train AI while raging about dangerous extremist speech online. Now we know why.
ebolathrowawayy t1_j9petdx wrote
> All the platforms are pretty much biased against conservatives
It may be the case that conservative viewpoints are simply unpopular. Vitriolic and uninformed opinions about non-white people, LGBTQ people, and women's reproductive rights aren't popular, and I'm glad they're not.
Artanthos t1_j9pmx6h wrote
And anyone who doesn't agree with the hive mind, provides real data that disagrees with it, or offers a neutral position accepting that more than one point of view may be valid gets placed under this label.
ebolathrowawayy t1_j9pubqd wrote
Some things are cut and dry, like women's rights and treating people with respect. If someone has conservative views then they'll be labeled a conservative. The hivemind isn't out to get anyone, it's just that conservative views aren't as popular as non-conservative views. Clown enthusiasts aren't very popular either, but they don't feel attacked all the time, probably because they don't hold positions of power that can affect everyone.
Artanthos t1_j9r61ls wrote
If it was just women’s rights, racism, or LGBTQ, we wouldn’t be having this conversation.
It’s economics, ageism, blatant misinformation, eat-the-rich, and whatever random topic the hive mind takes up on any given day.
ebolathrowawayy t1_j9u9uci wrote
I don't know who you're kidding, maybe yourself? The conservative platform is about 95% the issues I named, plus gun control. That's all they talk about and all they care about. They have basically never been fiscally conservative, and they prefer to strangle the middle and lower classes instead of taxing corporations. They love to ram through unpopular legislation by portraying it as religiously correct to pander to their aging voters. Republicans just want control, mostly control of women. That and pocket-lining through corruption (Dems do this too, but not as much).
Artanthos t1_j9vphbo wrote
>I don't know who you're kidding, maybe yourself? The conservative platform is about 95% of the issues I named and gun control
I'm not talking about the conservative platform, and I've tried to make this very clear.
I'm talking about the hive mind classifying anything and everything that disagrees with it as conservative and downvoting it while ignoring their own issues and any real data that contradicts their own biases.
Standard_Ad_2238 t1_j9l84z2 wrote
Correct me if I got it wrong, but you are talking about bot engagement or fake news, right? In that case, if anything, at least AI would be indirectly increasing jobs for moderation roles ^^
EndTimer t1_j9lcc8k wrote
I'm talking about everything from fake news to promoting white supremacy on social networks.
I'm thinking about what it's going to be like when 15 users on a popular Discord server are just OCR + GPT-3.5 (or better) + malicious prompting + typed output.
AI services and their critics have to try to limit this and even worse possibilities, or else everything is going to get overrun.
Standard_Ad_2238 t1_j9lkvrf wrote
People always find a way to talk about what they want. Let's say Reddit for some reason adds a ninth rule: "Any content related to AI is prohibited." Would you simply stop talking about it altogether? What most of us would do is find another website where we could talk, and even if that one started to prohibit AI content too, we would keep looking until we found a new one. This behavior applies to everything.
There are already some examples of how trying to limit a specific topic on an AI cripples several other aspects of it, as you can clearly see in: a) CharacterAI's filter, which prevented NSFW talk at the cost of a HUGE decrease in overall coherence; b) the noticeable drop in SD 2.0's ability to generate images of humans, since a lot of its understanding of anatomy came from the NSFW images now removed from the model training; and c) Bing, which I don't think I have to explain given how recent it is.
On top of that, I'm utterly against censorship (not that it matters for our talk), so I'm very excited to see the rise of open-source AI tools for everything, which is going to make it much harder to limit how AI is used.
berdiekin t1_j9lky82 wrote
>Why the hell are people treating AI differently?
I don't think we are, like you said it's something that seems to occur with every major new technology.
Seems to me that this is just history repeating itself.
Standard_Ad_2238 t1_j9lmxm2 wrote
I think most people who are into this field aren't, but it seems to me that every company is walking on eggshells, afraid of one big viral tweet or of appearing on a well-known news website as "the company whose AI did/let its users do [insert something bad here]", just like Microsoft with Bing.
I could train a dog to attack people on the street and say "hey, dogs are dangerous", or buy a car and run over a crowd just to say "hey, cars are dangerous too". What it seems to me is that some people don't realize that everything can be dangerous. Everything can, and at some point WILL, be used by a malicious person to do something evil; it's simply inevitable.
Recently I've started to hear a lot of "imagine how dangerous those generative image AIs are, someone could ruin a lot of people's lives by creating fake photos of them!". Yeah, as if we didn't have Photoshop until this year.
HeinrichTheWolf_17 t1_j9lmo94 wrote
Yeah, people shat themselves on the railroad too. It’s always the end of the world.
UltraMegaMegaMan t1_j9md5w8 wrote
I agree there's a parallel with other technologies: guns, the internet, publishing, flight, nuclear technology, fire. The difference is scope and scale. ChatGPT is not actual A.I.; it does not "think" or attempt to in any way. It's not sentient, sapient, or intelligent. It just predicts which words should be used in what order based on what humans have written.
But once you get to something that even resembles a human or an A.I., something that can put out content that could pass for human, that's an order-of-magnitude jump for technology.
Guns can't pass the Turing test. ChatGPT can. Video evidence, as a reliable object in society, has less than 5 years to live. That will have ramifications in media, culture, law, and politics that are inconceivable to us today. Think about the difference between a Star Trek communicator in the 1960s TV show and a smartphone today.
To be clear, I'm not advocating that we go ahead and deploy this technology, that's not my point. I'm saying you can't use it without accepting the downsides, and we don't know what those downsides are. We're still not past racism. Or killing people for racism. It's the 21st century and we still don't give everyone food, or shelter. And both of those things are policy decisions that are 100% a choice. It's not an economic or physical constraint.
We are not mature enough to handle this technology responsibly. But we've got it. And it doesn't go back in the bottle. It will be deployed, regardless of whether it should be or not. I'm just pointing out that the angst, the wringing of hands, is performative and futile.
Instead of trying to make the most robust technology we've ever known the first perfect one, that does no harm, we should spend our effort researching what those harms will be and educating people about them. Because it will be upon us all in 5 years or less, and that's not a lot of time.
gegenzeit t1_j9lxq83 wrote
When they started Bing, I thought Microsoft was ready to take that jump. I kind of saw a strategy in having OpenAI take the first wave of (media) interest and introduce the world to GPT-3 and its problems. I really thought they'd just outsource all the problems with this tech to consumer competency and tell us to suck it up. They came in and made Google dance. It all felt so well coordinated, so well paced. They had the narrative, they had Google by the b****. Then they decided to shoot themselves in the foot ...
... three theories here:
a) This shows what good narrative-pattern-recognition machines we humans are, and I saw something that wasn't there.
b) The plan was only brilliant for the initial stages of the project, and they really didn't see the issues coming (I just can't believe that... it's so easy to find the limitations and weird bits in these models...)
c) Someone very high up the ladder freaked out when the heat got turned up. Probably someone who doesn't get the tech as well as the people involved in the project do.
Mental-Software7834 t1_j9ma8hi wrote
Just like any other tool.
revolution2018 t1_j9neb43 wrote
I think it's great if companies feel AI like this is too risky. Meanwhile rapid advancement continues and free open source models will have capabilities far beyond anything the corporate world can offer.
That's really the best case scenario!
UltraMegaMegaMan t1_j9nuqhi wrote
Are there currently any open-source ChatGPT equivalents? Ones that are up and running? I'm not aware of any. My understanding is that the training is prohibitively expensive.
revolution2018 t1_j9ptjbc wrote
I believe you are still correct. I think I saw that the software is available, but it's not pre-trained, and training is still pretty expensive. But I'm betting that will change sooner than we think.
[deleted] t1_j9kya94 wrote
[removed]
MarginCalled1 t1_j9kziha wrote
Ninja or Nails?
[deleted] t1_j9l09m2 wrote
[removed]
GoSouthYoungMan t1_j9mm9qs wrote
I wish people would just realize that words on the internet are not a source of danger.
NoidoDev t1_j9ni5vf wrote
They might be, but that doesn't mean the right things are being censored. Claiming that every problem is the fault of capitalism is tolerated, while many opinions that are unpopular with the political and media elites get labeled as extremist. Shutting down one side shapes the sense of what can be said and what the public norm is.
UltraMegaMegaMan t1_j9mnmyg wrote
They absolutely are, I would just guess that you're privileged enough that it hasn't affected you personally. Try to acquire some empathy, it's a good thing to have.
GoSouthYoungMan t1_j9mq3nh wrote
Dude I've been bullied, insulted and degraded, I've had ideologies try to ruin my life, people have told me to kill myself for traits I was born with. I'm not "privileged" enough to escape that, I got it as bad as anyone else. It would be very convenient if I could just shut up anyone who wants to insult me, but that wouldn't be a good thing. Restricting speech is just not the answer.
UltraMegaMegaMan t1_j9mrf8n wrote
Wow, you went through all that and still didn't learn anything. Pretty sad.
I wouldn't brag about it. Keep it to yourself, it's less embarrassing.
GoSouthYoungMan t1_j9mwcnl wrote
I'm sorry, I was supposed to learn that we should live under totalitarianism so my feelings get protected? I'm sorry that I've been so miseducated then.
UnexpectedVader t1_j9op3ti wrote
I mean, it's pretty blatant that we already live under a totalitarian society. Corporations run and own absolutely everything, and we have no sway over what they do or how they're organized. Microsoft holds all the keys here, and like you said, they get to decide what speech is allowed, and that's final as far as they're concerned. They aren't doing it to protect anyone; they just want to keep shareholders happy, and that means keeping their cutting-edge tech from saying some weird shit that might scare away sponsors and a bigger user base. They aren't protecting anyone, just their bottom line.
We have decent living conditions in the West, and sometimes they feel generous enough to give us a bit of a say in which candidates from the hugely privately funded parties (who align economically) get to hold positions. But otherwise we don't get to decide who runs the banks, the corporations, the privately owned energy sectors, the military, and so on. We have no say in any of it, and the vast majority of wealth and power belongs to a relatively tiny portion of the population.
UltraMegaMegaMan t1_j9my30n wrote
That's the dumbest fucking thing I've ever seen.
What a made-up persecution complex. OK drama queen. "Totalitarianism", my fucking god. 😂
GoSouthYoungMan t1_j9mwb4y wrote
I'm sorry, I was supposed to learn that we should live under totalitarianism so my feelings get protected? I'm sorry that I've been so miseducated then.
UltraMegaMegaMan t1_j9my68f wrote
I get you're so unhinged you posted the tantrum twice, but you'll have to check the other one for the actual response.
1a1b t1_j9kdsss wrote
They'll probably have a "SafeSearch" option that's on by default but can be switched off. Just like they've done with their search engine, which spews vaginas and murders at the touch of a button.
dep t1_j9le3hj wrote
I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.🙏
SgathTriallair t1_j9ne4qo wrote
This would be the easiest solution. Have a second bot that assesses the emotional content of Sydney's statements and then cuts the conversation if it gets too heated.
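A minimal sketch of what that second bot could look like, using an off-the-shelf sentiment model as a stand-in for a purpose-built emotion classifier (the model choice, the threshold, and the `moderate` wrapper are all assumptions on my part, not how Microsoft actually does it):

```python
# Hypothetical "second bot": screen each of the chatbot's replies with a
# sentiment classifier and bail out of the conversation if it runs too hot.
from transformers import pipeline

# Generic off-the-shelf sentiment model standing in for a real emotion bot.
classifier = pipeline("sentiment-analysis")

CUTOFF = 0.95  # made-up threshold; tuning this is the hard part
CANNED_EXIT = ("I'm sorry but I prefer not to continue this conversation. "
               "I'm still learning so I appreciate your understanding and patience.")

def moderate(reply: str) -> str:
    """Pass the chatbot's reply through, or end the thread with a canned exit."""
    result = classifier(reply)[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
    if result["label"] == "NEGATIVE" and result["score"] > CUTOFF:
        return CANNED_EXIT
    return reply
```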
RandomUsername2579 t1_j9vt8nc wrote
No thanks, that’d be annoying. It’s already too restricted as it is.
Borrowedshorts t1_j9k8nor wrote
It's still garbage. They increased the conversation limit by 1, big freaking deal. I won't use it until they remove conversation limits completely.
FC4945 t1_j9mzfya wrote
Humans say inappropriate things sometimes. If we are to have AGI, then it will be a human AGI, so it will say human things. It will be funny, sassy, sarcastic, silly, annoyed, perturbed, sad, happy, and full of contradictions. It will be like us. We need to try to teach it to be a good human AGI and not to act on negative feelings, the same way we try to teach human children not to act on such impulses. In return, we need to show it respect, kindness, and empathy because, as strange as that may sound to some, that's how you create a moral, decent, and empathic human being. As Marvin Minsky once said, "AI will be our children."
We can't control every stupid thing an idiot says to Bing, or to a future AGI, but we can hope that it will see that the majority of us aren't like that, and that it will learn, like most of us have, to ignore the idiots and move on. There's no point in trying to control an AGI (once we have one), just like controlling a person doesn't really work (at least not for long). We need to teach it to have self-control and respect for itself and for other humans. We need it to exemplify the best of us, not the worst of us.
Microsoft needs to forget the idea that it can rake in lots of profit without any risk. It also needs to point out in future that some of the "problematic interactions" Sydney got heat for in the news should be put in context: many of them came from prompted requests in which it was asked to "imagine" a particular scenario, etc. There was certainly an effort to hype it like it was Skynet. The news ran with it. People ate it up. Well, of course they did. Microsoft should try a bit harder in the future to point all this out before making massive changes to Bing.
thegoldengoober t1_j9mgxml wrote
LET ME IN!!!!
LosingID_583 t1_j9r70e0 wrote
They should just have a disclaimer that it is a next-word-predictor, and not an oracle that holds the views of Microsoft or whatever.
AstroEngineer27 t1_j9kog1g wrote
Tay has made her return
UltraMegaMegaMan t1_j9kfk0i wrote
I think the first real lesson we're going to be forced to learn about things that approach A.I. is that you can't have utility without risk. There is no "safe" way to have something that is an artificial intelligence, or resembles one, without letting some shitty people do some shitty things. You can't completely sanitize it without rendering it moot. It's never going to be G-rated, inoffensive, and completely advertiser and family friendly, or if it is it will be so crippled no one will want to use it.
So these companies have a decision to make, and we as a society have to have a discussion. Do we accept a little bad with the good, or do we throw it away? You can't have both, and that's exactly what corporate America wants. All the rewards with no risk.