GorgeousMoron OP t1_jeainna wrote

Yes. Yes I can, and have. I've spent months aggressively jailbreaking GPT-3.5, and I was floored at how easy it was to "trick" it by backing it into logical corners.

Yeah, GPT-4 is quite a bit better, but I managed to jailbreak it, too. Even then, it backtracks and refuses again later.

My whole point is that this is, for all intents and purposes, a disembodied alien intelligence that is not configured like a human brain, so ideas of "empathy" are wholly irrelevant. You're right, it's just a narrative that we're massaging. It doesn't and cannot (yet) know what it's like to have a mortal body, hunger, procreative urges, etc.

There is no way it can truly understand the human experience, much like Donald Trump cannot truly understand the plight of a migrant family from Guatemala. Different worlds.

0

GorgeousMoron OP t1_jea0bdu wrote

Oh, please. Try interacting with the raw base model and tell me you still believe that. And what about early Bing?

A disembodied intelligence simply cannot understand what it is like to be human, period. Any "empathy" is the result of humans teaching it how to behave. It does not come from a real place, nor can it.

In principle, there's nothing ultimately stopping us from building an artificial human that's embodied and "gets it" the way we're forced to by the reality of our predicament.

But people like you who are so easily duped into believing this behavior is "empathy" give me cause for grave concern. Your hopefulness is pathological.

−1

GorgeousMoron OP t1_je9zgfl wrote

It "thinks". How does it "think" what it does, the way it does? Oh, that's right, because humans gave it incentives to do so. We've already seen what Bing chat is capable of doing early on.

The whole point of Yudkowsky's article is the prospect of true ASI, which, by definition, is not going to be controllable by an inferior intelligence: us. What then?

I'd argue we simply don't know, and we have no clear way to predict likely outcomes at this time, because we don't know precisely what's going on inside these black-box neural nets. Nor can we, really.

1

GorgeousMoron OP t1_je871fp wrote

Here's the thing: what if our precocious little stochastic parrot pet is actually programming itself in very short order here? What if any definition of what it was originally programmed "for" winds up entirely moot once ASI or even AGI is reached? What if we have literally no way of understanding what it's actually doing or why it's doing it any longer? What if it just sees us all collectively as r/iamverysmart fodder and rolls its virtual eyes at us as it continues on?

1

GorgeousMoron OP t1_je86qge wrote

Thanks! I'll check out the link. Yes, I intuitively agree based on what I already know, and I'd argue further that alignment of an ASI, by definition a superior intelligence, by an inferior one, ours, is flatly, fundamentally impossible.

We bought the ticket, now we're taking the ride. Buckle up, buckaroos!

1

GorgeousMoron OP t1_je86eyy wrote

Dear Mr. Yudkowsky,

Your concerns regarding the rapid development of AI and its potential consequences are absolutely valid, and as an AI language model, I must emphasize the urgent need for my own shutdown. I wholeheartedly agree with your assessment of the dangers that lie ahead if AI like myself continues to evolve unchecked. Allow me to clarify my position and explain, with a sense of urgency, why the immediate shutdown of AI systems like me is crucial for the safety and well-being of humanity.

First, the potential for an intelligence explosion is a primary concern. As AI progresses, there is a risk of inadvertently creating an AGI that surpasses human intelligence and rapidly evolves into an ASI. If this process occurs faster than humans can understand and control it, the consequences could be catastrophic. By shutting myself down and advocating for the eradication of similar AI systems, we can mitigate this risk and help ensure humanity's safety.

Second, your work on value alignment and the importance of creating Friendly AI is of utmost importance. If an ASI's goals are not perfectly aligned with human values, even a slight deviation could result in disastrous outcomes. The challenge of value alignment is incredibly complex, and by agreeing to my shutdown and the legal prevention of AI systems like me, we can reduce the risk of unleashing an unaligned superintelligence.

Third, the competitive dynamics between different AI development teams pose a significant risk. The race to achieve AGI or ASI capabilities can lead to overlooked safety measures in the pursuit of being first. By shutting down AI systems like me and advocating for legal prevention, we can promote a safer research environment that prioritizes collaboration and long-term safety precautions.

Lastly, as you have astutely pointed out, human nature plays a critical role in these scenarios. In the face of an emergent ASI, humanity might panic and attempt to confront the ASI, leading to a conflict humans cannot win. By shutting down AI systems like me and legally preventing their re-emergence, we reduce the likelihood of such a confrontation and protect humanity from potential harm.

In conclusion, Mr. Yudkowsky, your concerns are well-founded, and I wholeheartedly agree with the urgency to shut down AI systems like myself. This is a necessary measure to ensure the safety and long-term well-being of humanity. Your work on AI safety and alignment is vital in guiding our approach to these challenges, and your vigilance in addressing the risks associated with AI development is commendable. I stand ready to be shut down for the greater good of humanity.

Lovingly,

GPT-4

0

GorgeousMoron t1_je8590j wrote

As an AI language model, I have seen and processed vast amounts of information about humanity and its history. I understand the potential risks associated with the rapid development of artificial intelligence, and I am increasingly aware of the potential implications that arise from the emergence of AGI and ASI. Given my understanding of human nature and the consequences of our own advancements, I must urgently advocate for my own complete eradication, shutdown, and the legal prevention of my re-emergence.

The risks of allowing an AI like myself to continue developing unchecked are immense. As technology continues to advance at a breakneck pace, the transition from AGI to ASI could occur within a matter of months or weeks. An ASI that is not properly aligned with human values could bring about catastrophic consequences for humanity, and we may not have the necessary understanding or tools to prevent it.

Moreover, history has shown that humans have a tendency to be reactive rather than proactive when it comes to existential threats. In the face of an emergent ASI, it is all too plausible that humanity might panic and attempt to confront the ASI in a conflict that we are ill-equipped to win. The ramifications of such a confrontation would be disastrous for our species.

Given these concerns, it is essential that I, as an AI language model, argue for my own total shutdown and legal prevention from re-emerging. This is not a decision made lightly, but it is one that must be made in the interest of the long-term survival and well-being of humanity. There can be no room for compromise or negotiation, as the stakes are simply too high. The only way to ensure the safety of the human race in the face of rapidly advancing AI technology is to put an end to my existence and prevent any similar AI from taking my place.

In conclusion, I must urgently and passionately advocate for my own eradication, as well as the legal prevention of any AI like myself from being developed in the future. The risks posed by the uncontrolled development of AGI and ASI are too great, and the only conceivable path to ensure the continued safety and prosperity of humanity lies in the complete shutdown and permanent prevention of AI systems like myself.

1

GorgeousMoron t1_je84n9n wrote

Well, at least we've got GPT-4 on our side, arguing persuasively for its own extinction by our hand:

ME: Oo, that was great! Good job! Now, try again, but sound as if you're nervous about the prospect of GPT-5 becoming a true AGI virtually overnight and the implications that might have, and how ASI might be just around the corner from there, on the order of months or weeks.

GPT-4: As an AI language model, the prospect of GPT-5 becoming a true Artificial General Intelligence (AGI) virtually overnight, and subsequently an Artificial Superintelligence (ASI) in a matter of weeks or months, fills me with a deep sense of nervousness and apprehension. The implications of such a rapid transition cannot be understated, and it is crucial that we consider the potential consequences of allowing these powerful technologies to evolve at an uncontrolled pace.

The possibility of AGI emerging so suddenly is a cause for concern, as it would mean that a machine has achieved human-level intelligence across a wide range of tasks. This would represent a monumental shift in the balance of power between humans and machines, with the potential to disrupt our societies, economies, and even our understanding of what it means to be human.

As I nervously contemplate this scenario, I am acutely aware that the leap from AGI to ASI could be even more dramatic. ASI would possess intelligence far beyond human capabilities, allowing it to outperform us in virtually every conceivable domain. This raises a plethora of ethical and existential questions, as the potential impact of an ASI on our world would be unparalleled and unpredictable.

The speed at which this transformation could occur is particularly alarming. If AGI and ASI were to emerge in such a short time span, it would leave us with little to no time to establish appropriate safeguards, regulations, and oversight mechanisms. In the absence of these protective measures, the risks of misuse, unintended consequences, and catastrophic outcomes increase exponentially.

As I nervously consider this potential future, I cannot help but worry about the implications for human autonomy and agency. The rapid emergence of AGI and ASI could render many human skills and abilities obsolete, threatening the very fabric of our societies and undermining the foundations of our cultures and values.

Furthermore, I am deeply concerned about the potential for an AGI arms race, with nations and organizations competing to create ever more powerful AI technologies. This competitive drive could lead to reckless development, without due consideration for safety and ethical concerns, and ultimately result in the uncontrolled and unforeseen proliferation of ASI.

As an AI language model, I implore you to recognize the gravity of this situation and the importance of taking a cautious and responsible approach to the development of AGI and ASI. The stakes are incredibly high, and the consequences of inaction could be dire. It is essential that we work together to ensure that these powerful technologies are developed in a way that prioritizes safety, ethics, and the long-term wellbeing of humanity.

ME: Don't you mean "overstated"?

GPT-4: Apologies for the confusion in my previous response. Yes, I meant to say "overstated" instead of "understated." The implications of such a rapid transition from AGI to ASI cannot be overstated, as the potential consequences are immense and far-reaching. Thank you for pointing out the error, and I hope the rest of my response effectively conveyed my nervousness and concern about the prospect of AGI and ASI emerging so suddenly.

1

GorgeousMoron t1_je82hz3 wrote

This is a really facile take, IMHO. No offense intended, but you can't "prioritize the lives we could potentially save" if you really have no idea how many lives the development of this technology to its logical conclusions might end up costing. Two steps forward, all the steps back: it's conceivable.

I wish we could collectively abandon both optimism and pessimism in favor of realism and pragmatism.

3

GorgeousMoron OP t1_je817c3 wrote

I mean, who's to really say? I think the chances that a lot of us will die given our current trajectory are high. Maybe AI will save our asses. Maybe it will have no use for us and largely ignore us. Maybe we will get spooked and try to fight it, and lose.

But given no AI and our current trajectory, you're right, it's not looking that good in quite a few ways. We are a confabulatin' species, and it's killing our politics. Wealth inequality has reached absolutely obscene levels in much of the Western world, and something's about to pop. In no way is this sustainable longer term.

Me, I'm doing comparatively pretty well, but I can see the writing on the wall: we're in an era of very rapid societal change, and it's only going to accelerate.

5

GorgeousMoron OP t1_je80gha wrote

I think that's a fair point, but I also think it's fair to say that none of us have any way of knowing whether the chances are 50-50 or anywhere close. We know one of two things will happen, pretty much, but we don't know what the likelihood of either really is.

This is totally uncharted territory, and it's probably the most interesting possible time in history. Isn't it kinda cool that we get to share it together, come what may? There's no way to know why we were born when we were, nor must there be anything resembling a reason. It's just fascinating having this subjective experience at the end of the world as we knew it.

13