Submitted by SchmidhuberDidIt t3_11ada91 in MachineLearning
A recent podcast interview with EY has gone a bit viral, and in it he claims that researchers have dismissed his views without seriously engaging with his arguments, which are described here in relative detail.
I'm aware of ongoing AI safety and interpretability research, but the term "AI safety" is used to mean both something close to AI ethics and something close to preventing an existential threat to humanity. As a layperson, that makes it hard to distinguish the goals of, say, Anthropic, and the extent to which they consider the latter a serious concern.
I haven't personally found EY's arguments to be particularly rigorous, but I'm not the best-suited person to evaluate their validity. Any thoughts are appreciated. Thanks in advance!
Additional-Escape498 t1_j9rq3h0 wrote
EY tends to go straight to superintelligent AI robots making you their slave. I worry about problems that'll happen a lot sooner than that. What happens when we have semi-autonomous infantry drones? How much more aggressive will US/Chinese foreign policy get when China can invade Taiwan with Big Dog robots with machine guns attached? What about when ChatGPT is combined with Toolformer and can write to the internet instead of just reading from it, and starts doxxing you when it throws a temper tantrum? What about when rich people can use something like that to flood social media with bots that spew disinformation about a political candidate they don't like?
But part of the lack of concern about AGI among ML researchers is that during the last AI winter we rebranded to "machine learning" because "AI" was such a dirty word. I remember that as recently as 2015 at ICLR/ICML/NIPS you'd get side-eye for even bringing up AGI.