Submitted by Unfrozen__Caveman t3_1271oun in singularity
Today Lex Fridman posted an interview with Eliezer Yudkowsky in which they discuss the dangers of AGI. Lex references this blog post from 2022 and it's very interesting.
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
///---///
Personally, I believe Yudkowsky makes some critical points. AI alignment and safety should be the top priorities for everyone involved in the AI world right now. I also agree that a pause on development is completely unrealistic at this point (something he echoed in the Lex Fridman interview). Without spoiling things too much, I'll just say that Yudkowsky is about as pessimistic as anyone can be about our future. Maybe you're one of those people who see no downside to AGI/ASI, but I believe some of the things he brings up need to be taken very seriously.
Have you seen the interview or read the post above? If so, what are your thoughts?
pls_pls_me t1_jec4np6 wrote
I'm much, much more of an AI optimist than a doomer, but please don't downvote this, everyone! It's a great post, and hopefully it facilitates constructive discussion here in r/singularity.