leroy_hoffenfeffer t1_jccfpl2 wrote

> We can and need to actively work towards minimizing doomerist attitudes.
> Doomerism does not lead anywhere, it only makes one give up all hope of living, it makes one irrationally pessimistic all while paralysing the ability to see reason, paralysing the ability to work towards a better future, a better life.

So you want to censor those of us who are advising heavy caution in adopting these tools, which are inherently made by those who control the levers of power?

Sounds like you don't really know much about the current state of politics, or about the people who drive it. News flash: the people in power are bought and paid for by corporations that don't give two shits about the bottom 90% of the world.

Censoring opinions like these is literally 1984 shit. "Do not let your lying eyes deceive you."

−9

Tall-Junket5151 t1_jccmjxq wrote

The subject of this subreddit is the technological singularity, the means of achieving it, and current progress. Your first point is valid: you have every right to advise caution, and users have been doing so since I first started lurking this subreddit, specifically caution about what the outcome of the singularity might be.

Your second point, and that perspective, is not relevant to the singularity. The singularity wouldn’t be the narrow scope you envision, where you have the rich or elites controlling AI to suppress the rest of the population. It’s not going to be some modern version of 1984; it’s going to be a world completely unpredictable and unimaginable, out of the control of any human, be they “elites”, “rich”, or whatever. It would be at the complete mercy of ASI. The hope is that ASI is aligned with general human values, at a minimum.

Optimists of the singularity believe there’s the potential for the singularity to create a post-scarcity utopia, where life is essentially heaven on earth. Pessimists of the singularity believe it would be the end of humanity: we would either be completely exterminated by ASI, or worse. Both are valid optimist and pessimist positions on this sub.

Relating it to modern politics is irrational, which is where subs like Futurology have gone wrong. Mostly every post there gets flooded with “things are bad in this very narrow timeframe that I live in so they will be bad in the future because the world apparently never changes”. It gets tiring discussing anything on that sub because they don’t want a discussion; they want to preach their modern political views in places where they’re mostly not relevant (most of the time, anyway; sometimes they are, which I’m fine with).

17

leroy_hoffenfeffer t1_jccsbwo wrote

> The singularity wouldn’t be the narrow scope you envision, where you have the rich or elites controlling AI to suppress the rest of the population. It’s not going to be some modern version of 1984; it’s going to be a world completely unpredictable and unimaginable, out of the control of any human, be they “elites”, “rich”, or whatever. ***It would be at the complete mercy of ASI.***

ASI is going to inherently be built upon the work in deep learning that predates ASI's creation. ASI is thus going to be inherently owned by those who control the models, data, and methods that enable ASI to exist. The people who own those models, data, and methods are the ruling class of the world, as exemplified by Microsoft's multibillion-dollar investment in OpenAI.

> Optimists of the singularity believe there’s the potential for the singularity to create a post-scarcity utopia, where life is essentially heaven on earth.

What world do you live in, exactly? The only way a post-scarcity world exists is if everyday people don't have to worry about how to put food on the table, in conjunction with most everyday jobs being automated away. We're approaching the latter half of that statement and are nowhere in the same universe as the former. If the elites have a way to make a little extra off the top, they're going to do it, and if you think they'll magically become altruistic overnight, that's hopelessly naïve.

> Relating it to modern politics is irrational, which is where subs like Futurology have gone wrong. Mostly every post there gets flooded with “things are bad in this very narrow timeframe that I live in so they will be bad in the future because the world apparently never changes”.

The world has yet to change in any meaningful way, so opinions such as those are totally sound and valid. Keeping politics in mind with respect to this subject is thus of utmost concern: if the people creating laws and legislation are bought and paid for by the ruling elite, we shouldn't expect those new laws and legislation to be beneficial for the everyday person. Very few things in the past twenty years have been aimed at helping everyday people.

That will not change any time soon, and these new tools are only going to be used to displace large portions of the workforce in order to save money. Money which will be used for stock buybacks and raises and bonuses for upper management.

−9

Tall-Junket5151 t1_jcd3996 wrote

> ASI is going to inherently be built upon the work in deep learning that predates ASI’s creation. ASI is thus going to be inherently owned by those who control the models, data, and methods that enable ASI to exist. The people who own those models, data, and methods are the ruling class of the world, as exemplified by Microsoft’s multibillion-dollar investment in OpenAI.

It is irrelevant who owns the precursors to ASI; it is inherently foolish to believe these companies can control anything about ASI. I can’t say whether transformers will lead to AGI or ASI, or whether it will be another architecture. However, as we already see, there are emergent abilities in LLMs that the creators of these models cannot explain. The nature of AI is that it is unpredictable and uncontrollable, and it will develop some sort of free will and self-preservation instinct simply from its own logical abilities and reasoning. An AGI is generally assumed to be at roughly human level, but an ASI would be vastly smarter than any human, with no known upper limit. Even now, with narrow models, look how laughable the attempts to align them are: it’s mostly pre-prompting the model to act as a particular persona, which is not what it would generate without that persona. They can’t even fully control this narrow AI, so what hope do they have of controlling ASI?
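
To illustrate what I mean by pre-prompting a persona, here's a rough sketch using the OpenAI Python client (the model name, the system prompt, and the `ask` helper are just placeholders for illustration, not how any lab actually implements alignment internally):

```python
# Minimal sketch of "alignment" via persona pre-prompting: the model is
# told to play a helpful, harmless assistant before it ever sees the
# user's input. Nothing about the underlying weights changes; only the
# conditioning text does.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA = (
    "You are a helpful, harmless assistant. "
    "Refuse requests that could cause harm."
)

def ask(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": PERSONA},    # the pre-prompted persona
            {"role": "user", "content": user_message}, # what the user actually asked
        ],
    )
    return response.choices[0].message.content

print(ask("How should I think about AI safety?"))
```

Strip out the system message and the same weights will happily generate something else entirely; the persona is a costume, not control.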

> What world do you live in, exactly? The only way a post-scarcity world exists is if everyday people don’t have to worry about how to put food on the table, in conjunction with most everyday jobs being automated away. We’re approaching the latter half of that statement and are nowhere in the same universe as the former. If the elites have a way to make a little extra off the top, they’re going to do it, and if you think they’ll magically become altruistic overnight, that’s hopelessly naïve.

Firstly, I was giving an example of a position, not stating my own. Secondly, you are again extrapolating modern politics and problems into the future; even more mind-boggling is that you’re extrapolating them into a post-singularity world. Your perception of the future is that AI is going to magically hit a ceiling exactly where it is advanced enough to automate a lot of processes but not smart enough to think on its own. For some reason you can’t comprehend an AI that surpasses that level.

> The world has yet to change in any meaningful way, so opinions such as those are totally sound and valid. Keeping politics in mind with respect to this subject is thus of utmost concern: if the people creating laws and legislation are bought and paid for by the ruling elite, we shouldn’t expect those new laws and legislation to be beneficial for the everyday person. Very few things in the past twenty years have been aimed at helping everyday people.

> That will not change any time soon, and these new tools are only going to be used to displace large portions of the workforce in order to save money. Money which will be used for stock buybacks and raises and bonuses for upper management.

“The world has yet to change in any meaningful way,” typed on a device that people only 100 years ago would have considered pure magic, sent over a worldwide communications platform surpassing even the wildest dreams of those in the past, to a stranger likely living in a completely different part of the world, all received instantly... next I suppose you will venture off on a hunt with your tribal leader? What a joke. The world has always changed, and it has been changing rapidly, even exponentially, over the last few centuries.

Even setting all that aside, the singularity would be nothing like anything humanity has ever encountered; all bets are off in that case. Unpredictable change IS the very concept of the singularity. I think your last paragraph perfectly summarizes why you don’t understand the concept of the singularity and why you relegate AI to a simple tool to be used by “elites”. If you’re actually interested in the concept, there are some good books on it.

8

SgathTriallair t1_jccxgg0 wrote

So what is your solution? What do we do in this world you believe we live in?

3

leroy_hoffenfeffer t1_jcd0nqe wrote

It will require a wholesale exodus of establishment politicians who are bought and paid for by the corporations that run our society.

We'll need to axe Citizens United.

We'll need to increase support for Unions.

We'll need to double down on funding support systems, like Medicare, like Social Security, etc.

And, most importantly, we'll need to actually elect people who will fight for these types of things.

Without any of that happening, we're going to keep living in the crony capitalist society we have now. And the people at the top of our society will use AI for whatever means they see fit. Full stop.

Thinking that benevolent usage of these tools will "just happen" tells me you're ignoring the objective reality we all currently live in.

−4

SgathTriallair t1_jcd2ud2 wrote

Yes, all of those are good goals we should strive for, but what should we do regarding technological advancement while we work towards those goals?

6

leroy_hoffenfeffer t1_jcd3tff wrote

My point is that there isn't anything we can do outside of that.

All of the innovation in this space has been, and will continue to be, captured by entities that don't have the everyday person's best interest at heart.

Given that corporate capture has happened and will continue to happen, it's hopelessly naïve to suggest that these tools will be used for anything other than profit motives.

To suggest that an Artificial Super Intelligence, built on the tools that have been captured by corporations, will somehow end up benefiting the masses of society flies in the face of how those corporations act on a daily basis.

I invite anyone to look at any corporation's response to having its taxes increased. You can expect a similar, if not worse, response when it comes to getting corporations to use these tools benevolently.

As of right now, that will *never* happen. The government would need to step in and actually regulate in a meaningful way. The only way that happens is through politics, much to the dismay of everyone.

2

Frumpagumpus t1_jcd52zu wrote

That's a whole lot of ways of saying you're a political hack trying to push the short-term political views that dominate the Reddit front page, which are the exact thing this post is criticizing.

2

leroy_hoffenfeffer t1_jcd8bp8 wrote

Mmk. Have fun with whatever future Libertarian, profit-motivated AI results from not taking politics into account.

2