
cmustewart t1_j24bxuf wrote

I feel like either you or I missed the point of the article, and I'm not sure which. I didn't get any sense of "what if AI takes over". My read is that the author thinks "AI" systems should have some sort of consequentialism built in, or at least considered in the goal-setting parameters.

The bit that resonates with me is that highly intelligent systems are likely to cause negative unintended consequences if we don't build this in up front, even when they're built by people with the most noble intentions.

45

glass_superman t1_j24oaqv wrote

It's the article that missed the point. It wastes time considering the potential evil of future AI and how to avoid it. I am living in a banal evil right now.

−7

cmustewart t1_j24px5g wrote

Somewhat fair, as the article was fairly blah, but I've got serious concerns that the current regimes will become much more locked into place, backed by the power of scaled superhuman AI capabilities in surveillance, behavior prediction, and information control.

15

glass_superman t1_j26l6c5 wrote

That's totally what is going to happen. Look at international borders. As nuclear weapons and ICBMs have proliferated, national borders have become basically permanent. Before WWII, shit was moving around all the time.

AI will similarly cement the class structure. We might as well have a caste system.

1