Submitted by [deleted] t3_10cd4by in singularity
PhilosophusFuturum t1_j4fbmzi wrote
OP’s username is OldWorldRevival, and he advocates for technological regression and an academic reintroduction of theism. So I assume the angle here is that Transhumanists are attempting to use Transhumanism as a stand-in for religious fulfillment. Edit: his account was also first made to complain about AI art, and he made a sub protesting it. I assume his resistance to AI art is what drew him to resist Singularitarianism and Transhumanism.
For some that could possibly be true. But the idea of Transhumanism-as-religion is fundamentally flawed. Transhumanism and religion share some similar ideas, like immortality and creating the best possible existence, but that’s where the similarities end. Religions make metaphysical claims: the existence of gods, the creation of the earth, etc. Transhumanism makes none of these claims because it is an intellectual school of philosophy, not a religion.
As to why people follow Transhumanism, most Transhumanists are very staunch Humanists, Futurists, and Longtermists. Transhumanists see the vague concept of “technological development” as a way to achieve things like superintelligence, omnipotence, immortality, and supereudaimonia.
As for “the beauty of life”, most Transhumanists tend to be existentialist and cosmist. Many Transhumanists believe in the beauty of Humanity’s existential capacity to achieve great heights, and in our very specific place in Humanity’s history. As a result, Transhumanists often have a strong fascination with things most people overlook, like everyday scientific progress, while ignoring “distractions” like elected politicians.
[deleted] OP t1_j4fcz8f wrote
> OP’s username is OldWorldRevival, and he advocates for technological regression and an academic reintroduction of theism. So I assume that the angle here is that Transhumanists are attempting to use Transhumanism as a stand-in for religious fulfillment.
Start out with a strawman, eh? You think you understand my username?
What if you're the technological regressive? What if you'd haphazardly create the tools that enable the destruction that prevents the greatest of tools from ever being made?
Maybe you think we should have just felt the intense heat of thermonuclear progress?
Stephen Hawking warned about AI - was he a Luddite, in your view?
PhilosophusFuturum t1_j4fdpf8 wrote
I assume by that you’re referencing the idea that we might accidentally create a tool that could destroy civilization. Transhumanists care deeply about preventing that; many of the researchers working on the Control Problem are Transhumanists.
The Control Problem (aka the alignment problem) is the problem of making sure a superintelligent AI is aligned to Human interests.
If AGI is eventually going to happen (and most Transhumanists believe it will), then it’s imperative we solve the Control Problem instead of trying to prevent the development of AGI. In this framing, it’s Transhumanists who are engaging with the reality of the danger, whereas everyone else is playing with fire by ignoring it.
[deleted] OP t1_j4fe2xd wrote
I don't think that's the right framing of the problem.
Transhumanism carries its own innate risks and is not a real solution to the control problem on a practical level, in my view.
PhilosophusFuturum t1_j4fep0j wrote
From their worldview of an inevitable singularity, it makes perfect sense: if we cannot stop AGI, we need to find a way to align it to our interests. It’s the practical approach. As to why Transhumanists often believe AGI to be inevitable:
-Game Theory: Many countries (the US, China, the UK, India, Israel, Japan, etc.) are all researching Machine Learning, and AGI is seen as absolutely crucial to national security. A ban on ML research is therefore entirely unrealistic: since every country understands that such a ban won’t hold, each would continue to research ML even under an international ban (a toy payoff sketch follows after this list).
-The inevitability of progress: Transhumanists often believe in AI-eventualism, or the idea that Humanity is on the inevitable path to creating ASI, and we can only slow down or accelerate that path.
-The upward trajectory of progress: Building on the last point, most Transhumanists believe that technological progress only ever increases, and that every attempt to permanently stop a society-changing innovation has failed and will always fail. So focusing on adapting to the new reality of progress is better than resisting it, which has a 100% failure rate.
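To make the game-theory point concrete, here’s a minimal Python sketch. All payoff numbers are invented purely for illustration (they aren’t drawn from any real analysis): model the ban as a two-country game where each side can comply or defect, with defection against a complying rival yielding a decisive security edge. Defecting is then the best response to either move by the rival, so the ban unravels.

```python
# Toy 2-country "AI research ban" game. The payoff numbers below are made up
# purely to illustrate the structure: defecting against a complying rival
# yields a decisive security edge, and mutual defection still beats being
# the only country that complies.
COMPLY, DEFECT = "comply", "defect"

payoffs = {  # (row player's payoff, column player's payoff)
    (COMPLY, COMPLY): (3, 3),   # ban holds, risk is shared
    (COMPLY, DEFECT): (0, 5),   # rival gets the AGI edge: worst outcome
    (DEFECT, COMPLY): (5, 0),   # you get the edge
    (DEFECT, DEFECT): (1, 1),   # arms race, but no unilateral disadvantage
}

def best_response(rival_move: str) -> str:
    """Return the move that maximizes the row player's payoff
    against a fixed move by the rival."""
    return max((COMPLY, DEFECT),
               key=lambda my_move: payoffs[(my_move, rival_move)][0])

for rival in (COMPLY, DEFECT):
    print(f"If the rival plays {rival!r}, the best response is {best_response(rival)!r}")
# Prints 'defect' in both cases: researching strictly dominates,
# so an international ban is not a stable equilibrium.
```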
[deleted] OP t1_j4ff8ay wrote
> -Game Theory: Many countries (the US, China, the UK, India, Israel, Japan, etc.) are all researching Machine Learning, and AGI is seen as absolutely crucial to national security. A ban on ML research is therefore entirely unrealistic: since every country understands that such a ban won’t hold, each would continue to research ML even under an international ban.
Glad you brought game theory into this, because it's exactly why I do not view transhumanism as a solution. Heh. My opinion on this topic is a little esoteric, even around these parts.
Basically... the Pareto principle is why we have wealth distributions that are hugely unequal, and even more unequal at the top. It's why we have kings. It's why India and China are far more populous than the rest of the world.
This is also what will happen with AI agents: certain AI or human agents will gain significantly more control, and that control will snowball. We already see this with giant tech companies dominating the landscape.
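Here's a minimal rich-get-richer simulation of what I mean. Everything in it (the agent count, the 1.5 exponent, the number of rounds) is an arbitrary illustration, not a model of real AI development: if each new unit of control goes to an agent with probability proportional to a superlinear function of the control it already holds, even a perfectly symmetric starting field collapses toward a few dominant agents.

```python
import random

# Toy "snowballing control" simulation (purely illustrative assumptions).
# 100 agents start with one unit of control each. Each new unit is won with
# probability proportional to control**1.5, i.e. existing control makes the
# next gain more likely, so small early leads compound.
random.seed(42)
NUM_AGENTS = 100
ROUNDS = 50_000

control = [1.0] * NUM_AGENTS
for _ in range(ROUNDS):
    weights = [c ** 1.5 for c in control]  # superlinear: rich get richer, faster
    winner = random.choices(range(NUM_AGENTS), weights=weights, k=1)[0]
    control[winner] += 1.0

total = sum(control)
shares = sorted((c / total for c in control), reverse=True)
print(f"Top agent holds {shares[0]:.0%} of control; top 10 hold {sum(shares[:10]):.0%}")
```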
If an AI is not a world-dominating AI, then by its nature it cannot suppress the creation of other AIs, and another will surpass it. If it is an ASI, it has the power to dominate the world, whether or not it wields that power.
It'll be significantly more stratified, not less. Basically, transhumanism is like putting your mind at the whim of whatever AI is at the top of this hierarchy, whether or not that AI ever acts on its power.
PhilosophusFuturum t1_j4fg510 wrote
In theory, the ivory tower the elites sit on should grow far faster than anything the peasants have, because the elites hold the ever-expanding means of power. But the one asset of the elites that truly keeps accelerating past the peasants is their wealth, not their technology. In fact, technology is the great equalizer.
For example, your average middle-class person in the developed world today has a higher quality of life (QoL) than a king in the Middle Ages, and that’s entirely thanks to technology. Likewise, the QoL gap between a modern middle-class person and an oligarch is smaller than the gap between a medieval peasant and a medieval king, despite the lifestyle of a modern oligarch being so much more lavish than that of a medieval king.
This also applies to offensive technology. For example, Europe was able to take over all of Africa despite the invaders being small armies compared to the forces of Africa’s empires and tribes. That’s because they had guns. And when Africans got guns, they were able to push the Europeans out. The only African country that avoided colonization was Ethiopia, because it convinced the UK and Italy to give it guns. Guns rapidly closed the technology gap, even though the invaders’ guns were still far superior.
The same logic applies to ASIs. Sure, there may be an ASI so great that no other ASI could surpass it, but that doesn’t mean lesser ASIs couldn’t be created that could potentially kill it.
On that note, I am far more concerned about civilizational destabilization than I am about super-authoritarianism. With increasingly better tools, people could easily create dangerous ASIs and superviruses that huge governmental institutions might not be able to contend with.