
Jq4000 t1_irbvo5v wrote

I think you outline one possible outcome. There's also the possibility that strong AI could be cracked in such a way that the AI doesn't have a "grow at all costs" imperative putting it on a ballistic trajectory of growth.

There's also the possibility that strong AI comes online in tandem with humans developing neural nets, in such a way that humans aren't left behind by an AI going asymptotic.

I agree with Kurzweil's thesis that we'll likely be facing AIs that pass the Turing Test by 2030. The point where things get serious for me is when machines pass the Turing Test in perpetuity rather than for just a few hours. That's the point where we may be dealing with more than our equals.

I'm not ready to commit to the idea that the world beyond 2030 is a black haze of singularity just yet. What I will say is that if we have machines passing the Turing Test at that point, then we should be buckling up for an eventful set of decades to follow.

2

izumi3682 OP t1_irc3wx1 wrote

>There's also the possibility that strong AI comes online in tandem with humans developing neural nets, in such a way that humans aren't left behind by an AI going asymptotic.

Yes, I agree with this. I have placed it as occurring roughly 5 years after the initial TS (technological singularity), which, as you eloquently put it, may not be "a black haze of singularity".

https://www.reddit.com/r/Futurology/comments/vpoopq/we_asked_gpt3_to_write_an_academic_paper_about/ielpj4d/

2

Enoughisunoeuf t1_ircolig wrote

We're already watching cults form in real time due to propaganda. AI is going to be disastrous.

1