cmustewart t1_j24bxuf wrote
Reply to comment by glass_superman in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I feel like either you or I missed the point of the article, and I'm not sure which. I didn't get any sense of "what if AI takes over". My reading is that the author thinks "AI" systems should have some sort of consequentialism built in, or at least considered in the goal-setting parameters.
The bit that resonates with me is that highly intelligent systems are likely to cause negative unintended consequences if we don't build this in up front, even for those with the most noble intentions.
glass_superman t1_j24oaqv wrote
It's the article that missed the point. It wastes time considering the potential evil of future AI and how to avoid it. I am living in a banal evil right now.
cmustewart t1_j24px5g wrote
Somewhat fair, as the article was fairly blah, but I've got serious concerns that the current regimes will become much more locked into place, backed by the power of scaled superhuman AI capabilities in surveillance, behavior prediction, and information control.
glass_superman t1_j26l6c5 wrote
That's totally what is going to happen. Look at international borders. As nuclear weapons and ICBMs have proliferated, national borders have become basically permanent. Before WWII, shit was moving around all the time.
AI will similarly cement the classes. We might as well have a caste system.