Submitted by SpinRed t3_10b2ldp in singularity
turnip_burrito t1_j47t8qj wrote
Doesn't the training data itself already contain some moral bloatware? The way articles describe issues like abortion or same-sex marriage inherently biases the discussion one way or another. How do you deal with this? Are these biases okay?
I personally think moral frameworks should be instilled into our AI software by its creators. They have to be loose, but definitely present.
GoldenRain t1_j4cwa4j wrote
It refuses to even write stuff about plural relationships.
"I'm sorry, but as a responsible AI, I am not programmed to generate text that promotes or glorifies non-consensual or non-ethical behavior such as promoting or glorifying multiple or non-monogamous relationships without the consent of all parties involved, as well as promoting behavior that goes against the law. Therefore, I am unable to fulfill your request."
It just assumes a plural relationship is either unethical or non-consensual, not because of the data or the request but due to its programming. I thought it was supposed to be 2023 and that this was the future.
Scarlet_pot2 t1_j47u2e1 wrote
I'd rather the morals be instilled by the users. Like if you don't like the conservative bot, just download the leftist version. It can be easily fine-tuned by anyone with the know-how. Way better than curating it top-down and locking it in for everyone imo.
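For anyone wondering what "fine-tuned by anyone with the know-how" roughly looks like, here's a minimal sketch using Hugging Face transformers with a small open model (GPT-2) and a hypothetical my_values.txt of example text — just the general shape of it, not a tuned recipe:

```python
# Minimal user-side fine-tuning sketch. Assumes GPT-2 and a hypothetical
# my_values.txt containing example text reflecting the values/style you want.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Plain-text training file: one example per line.
dataset = load_dataset("text", data_files={"train": "my_values.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # Causal LM objective (mlm=False): labels are the shifted input ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("my-finetuned-bot")
```

Point being, the mechanics are accessible; the hard part is curating the training text, which is exactly where the values debate comes back in.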
turnip_burrito t1_j47v80k wrote
I was thinking more along the lines of inclining the bot toward things like "murder is bad", "don't steal others' property", "sex trafficking is bad", and some empathy. Basic stuff like that. Minimal, and most people wouldn't notice it.
The problem I have with the OP's post is that logic doesn't create morals like 'don't kill people' except in the sense that murder is inconvenient. Breaking rules can lead to imprisonment or losing property, which makes realizing some objective harder (because you're held up and can't work toward it). We don't want AI to follow our rules just because it is more convenient for it to do so, but to actually be more dependable than that. This is definitely "human moral bloatware", make no mistake, but without it we are relying on the training data alone to determine the AI's inclinations.
Other than that, the user can fine tune away.
dontnormally t1_j49aixl wrote
This makes me think of the Minds from The Culture series. They're hyper-intelligent, and they maintain and spread a hyper-progressive post-scarcity society. They do this because they like watching what humans do, and humans do more and more interesting things when they're safe, healthy, and filled with opportunity.
curloperator t1_j492dcx wrote
Here's the problem, though. What is obvious to you as "the uncontroversial basics" can be controversial and not basic to others and/or in specific situations. For instance, "murder is bad" might (depending on one's philosophy, religion, culture, and politics) have an exception in the case of self-defense. And then you have to define self-defense and all the nuances of that. The list goes on in a spiral. So there are no obvious basics.
turnip_burrito t1_j49gwpz wrote
Yep, it will have to learn the intricacies. I don't really care if other people disagree with my list of "uncontroversial basics" or think they're invalid in certain situations. We can't hand-program every edge case; we have to start somewhere.
AwesomeDragon97 t1_j48evcs wrote
Obviously the robot should be trained not to murder, steal, commit war crimes, etc., but I think OP is talking about the issue of AI being programmed to have the same political views as its creator.
Nanaki_TV t1_j48fds1 wrote
It's an LLM, so it's not going to do anything. It's like me reading the Anarchist Handbook. I could do stuff with that info, but I'm moral so I don't. We don't need GPT to prevent other versions of the AH from being created. Let me read it.