Submitted by Cool-Particular-4159 t3_1072vgn in singularity
I know that this may seem like a rash thought at first, but think about it. We all here tend to agree that increasingly sophisticated and intelligent AI will only lead to increasingly sophisticated and intelligent AI, which will only lead to... and so on; indeed, this is the hypothetical runaway path of uncontrollable development. But in recognising that, won't the time taken to make scientific discoveries - or really discoveries of any kind - begin to tend to zero? At some point, there will be topics simply incomprehensible to the limited human mind, and everything that we could potentially figure out 'on our own' (so to speak) would have already been established by such superintelligent agents. Eventually, I'm pretty sure the only real 'purpose' of humans will be to just live out our lives and then die - and I don't mean that in a nihilist sort of way, so much as in an 'everything that human society has striven to achieve will just be taken up as aims of ASI itself' sort of way.
This just doesn't seem like any other period of technological development in human history; it's unlike anything before it, with effects we've simply never experienced. I don't want to delve into politics here, but it seems rather obvious - inevitable, perhaps - that current ideas of UBI, and perhaps even less accepted ones like technocommunism, will be the only 'right' approaches to ensuring humanity's united, well... happiness. But I think that's another discussion to be had.
World_May_Wobble t1_j3kkm8w wrote
It's not obvious that there's a finite number of things to learn.
>I'm pretty sure the only real 'purpose' of humans will be to just live out our lives and then die
That has been (and frankly continues to be) the human experience for the vast majority of people.