Submitted by [deleted] t3_115ez2r in MachineLearning
Username912773 t1_j92rzza wrote
Reply to comment by loga_rhythmic in [D] Please stop by [deleted]
I think it might be seen as something to fear: a truly sentient machine would have the ability to develop animosity toward humanity, or a distrust or hatred of us in the same way we might distrust it.
It also might be seen as something that makes being human entirely obsolete.
Sphere343 t1_j92woql wrote
Yes indeed, that’s what a lot of these people seem to think. But the thing is, an AI being self-aware or sentient isn’t that bad a thing; as long as it is done correctly, it can actually be really good, contrary to all that. First off, an AI that has just been created and is sentient is basically like suddenly having a baby: you need to raise it right. For an AI, that means giving it information that is as unbiased as possible, making it clear what is right and wrong, and not giving the AI a reason to hate you (abusing it, trying to kill it). The AI may turn out good, just like any human, or turn bad, just like many others.
And the best way to make a sentient AI without all these problems? Base it on the human brain. Create emotional circuits and functions for each individual emotion, and so on. The tech and knowledge for all this isn’t here yet, of course, so we can’t do it currently. In the future, though, the most realistic way to create a sentient AI is to find a way to digitize the human brain. It’s possible, given that our brain works as an organic “program” of sorts, with all its neural networks and everything.
The major taboo of AI is: don’t do stupid stuff. Don’t give unreasonable commands that can make it do weird things, like telling it to accomplish something by any means necessary. Don’t feed the AI garbage information. And most certainly don’t antagonize a sentient AI. I also personally believe that a requirement for an AI to be allowed to be created and be sentient is to show that the AI has emotion circuits, so that it can be trained in what is good and bad.
If an AI doesn’t have any programming to tell right from wrong, a sentient AI would naturally be dangerous, which I think is the main problem. I kinda rambled, but anyway: yes, they should be created, but only once we have the knowledge I mentioned.
the320x200 t1_j939qzo wrote
Nearly all animals fit that definition to a large degree. It’s hard to see that really being the core issue, rather than something more in line with other new technologies, like the problems of misplaced incentives around engagement in social networks, for example.