Queue_Bit
Queue_Bit t1_jecjpk9 wrote
Reply to comment by Frumpagumpus in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
This is the thing I really wish I could sit down and talk with him about.
I fundamentally think that empathy and ethics scale with intelligence. Every type of intelligence we've ever seen has followed this path. I will concede that artificial intelligence is likely to be alien to us in fundamental ways, but my intuition that intelligence is directly linked to a general empathy is backed up by real-world evidence.
The base assumption that an artificial intelligence would inherently have a desire to wipe us out or control us is as wild a claim as saying that AI systems don't need alignment at all and are certain to come out "good".
In his "fast human, slow aliens" example, why could I, as the human, not choose to help them? Maybe I'd explain that I can see they're doing immoral things, and show them how to build things so they don't need to do those immoral things. He focuses so much on my desire to "escape and control" that he never stops to consider that I might want to help. Because if I were put in that situation, and I had the power and ability to help shape their world in a way that was beneficial for everyone, I would. But I wouldn't do it by force, nor would I do it against their wishes.
Queue_Bit t1_je10z7l wrote
Reply to comment by Crackleflame35 in Singularity is a hypothesis by Gortanian2
Hahaha
Queue_Bit t1_jdzlxht wrote
Reply to comment by Dustangelms in Singularity is a hypothesis by Gortanian2
I mean that we've used about 1/10th of the high-quality training data.
Which means that even with zero improvement in algorithms or methodology, assuming improvement scales linearly with data, and assuming no new data is created, LLMs will get about 10x better. And who knows what that looks like.
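To spell out the arithmetic, here's a minimal sketch that takes the linear-scaling premise at face value (that premise is this comment's assumption, not an established scaling law):

    # Back-of-the-envelope arithmetic behind the "10x better" claim.
    # Assumptions (the comment's, not established scaling laws):
    #   - model capability scales linearly with high-quality training data
    #   - no algorithmic improvements, and no new data is created
    data_used_fraction = 1 / 10        # "we've used about 1/10th of the data"
    headroom = 1 / data_used_fraction  # remaining multiplier on training data
    print(f"Implied improvement factor: {headroom:.0f}x")  # -> 10x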
Queue_Bit t1_jdxle5m wrote
Reply to Singularity is a hypothesis by Gortanian2
Sure, there could be some theoretical wall that stops progress in its tracks. But currently, there is zero reason to believe that a wall like that exists in the near future. Even if AI only improves by a single order of magnitude, 10x, it will STILL absolutely change the world as we know it in drastic ways.
And here's the funny part: based on the research, we KNOW a 10x improvement is already guaranteed. So I get that you want to slow the hype and want people to think critically, but the truth is that many of us are. And importantly, a greater than 10x improvement is almost certainly coming.
Imagine an AI that is JUST as good as humans are at everything. Not better, just equal, but with the caveat that this AI can output data at a rate that is unachievable for a human. This much is certain: we will create a general AI that is as good as humans at everything. Once that happens, even if it never gets better, we will live in a world so different from today that it will be unrecognizable.
If you had asked me this time last year whether we were going to see a singularity-type event in my lifetime, I would have been unsure, maybe even leaning towards no. But now? If massive societal and economic change doesn't happen by 2030, I will be absolutely shocked. It looks inevitable at this point.
Queue_Bit t1_jdhnkph wrote
Reply to comment by FpRhGf in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
You aren't reading.
There will be no "unaugmented humans", which is to say no people NOT using the AI.
There will still be humans for now.
Queue_Bit t1_jdhnb6u wrote
Reply to comment by Rofel_Wodring in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
If society gets to a point where we don't need people to work anymore, and it still makes me do useless busywork, I am gonna lose my mind.
Queue_Bit t1_j9jyddi wrote
Reply to comment by WithoutReason1729 in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Yeah, for sure, but as technology improves it's just going to get easier and easier. And this technology is likely to get so good that, to a normal person, the difference between the best in the world and "good enough for everyday life" is likely huge.
Queue_Bit t1_j9jtl2t wrote
Reply to comment by Ylsid in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Yeah, in 2023.
Queue_Bit t1_j9jt52q wrote
Reply to comment by Ylsid in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
All it takes is one smart, slightly motivated person to make a free option that's "good enough".
Queue_Bit t1_j6y45lo wrote
Wow, you're telling me that in the age of mass inflation, the first thing people are getting rid of is the one service where 40 percent of airtime is ads?
Queue_Bit t1_j552ekh wrote
Reply to comment by barneysfarm in ChatGPT really surprised me today. by GlassAmazing4219
This is more "humans are special because we're special" bullshit.
ChatGPT may not be sentient, but it is absolutely intelligent.
Queue_Bit t1_jeehgof wrote
Reply to comment by Automatic_Paint9319 in Goddamn it's really happening by BreadManToast
Haha, yeah, I bet they were better for your straight white male older relative.