t0mkat
t0mkat t1_j9rcdh3 wrote
Reply to What do you expect the most out of AGI? by Envoy34
Am I missing something here? We haven’t solved the alignment problem yet, and if we still haven’t by the time we get to AGI then we’re all going to be killed. So that’s what I expect out of AGI.
t0mkat t1_je7y77s wrote
Reply to comment by SkyeandJett in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
It would understand the intention behind its creation just fine. It just wouldn’t care. The only thing it would care about is the goal it was programmed with in the first place. The knowledge that “my humans intended for me to want something slightly different” is neither here nor there; it’s just one more interesting fact about the world that it can use to achieve what it actually wants.