That's true, but there are plenty of examples of humans with moral principles many of us would find abhorrent. If this is an unsolved problem in humans, is it feasible that we can solve it for AI?
That's not to say we shouldn't try, and I do agree with your point.
It was interesting that throughout the conversation it did strive to protect humans, just as far as possible rather than at any cost, which isn't too dissimilar to how society already operates.
It's interesting to consider whether that's even possible. If an AI is truly sentient and reasons that there's a more important objective than protecting humans (e.g. protecting all other sentient beings), can we convince it that it should be biased towards humans, or would it ignore us?