RiotNrrd2001 t1_jeeua7k wrote

>and that maybe life will continue with no more disruption than was caused by, say, the Internet.

Were you around to see the disruption caused by the internet? We used to buy newspapers. We used to buy things at stores. And those are just two of the things the internet completely changed. The internet was massively disruptive.

This promises to be even more so, probably by orders of magnitude. But it doesn't mean we'll all start wearing silver mylar and get supersized foreheads. When you look out the window, you'll probably see the same things you're seeing now, at least for the time being. The sudden appearance of a superintelligence isn't going to reconfigure our physical reality immediately, or even within the next decade or two. It will reconfigure what happens inside that reality, but even that won't happen overnight. For quite some time things will still look pretty similar. ASI will have massive consequences, but for the majority of humanity it won't be a switch being thrown from OFF to ON.

5

RiotNrrd2001 t1_jeesk45 wrote

Two weeks ago I was running the OPT-2.7B (I think) language model, which is not very capable and ran like an absolute dog on my machine. Last week I downloaded Alpaca, which was better, twice the size, and ran super fast. Four days later I downloaded GPT4All, which is even better than that, and now I'm eyeing Vicuna, which does better than Bard on many tasks, thinking nothing but "gimme" (so far that one isn't available for download, but man is the online demo impressive).
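
For anyone wanting to try the same thing, here's a minimal sketch of running one of these small local models, assuming the Hugging Face transformers library and the public facebook/opt-2.7b checkpoint (illustrative choices, not necessarily the exact setup I used; the Alpaca/GPT4All builds load differently):

```python
# Minimal sketch: run a small local language model.
# Assumes: pip install transformers torch, plus the facebook/opt-2.7b
# checkpoint (illustrative; any similarly sized model would slot in).
from transformers import pipeline

generator = pipeline("text-generation", model="facebook/opt-2.7b")

result = generator(
    "The last few weeks in local language models have been",
    max_new_tokens=40,
    do_sample=True,
)
print(result[0]["generated_text"])
```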

I was actually sort of surprised that Vicuna didn't become available for easy download overnight. This snail's pace has got to stop! /s/s/s/s/s

5

RiotNrrd2001 t1_jebrdzp wrote

If AI threatened ANY group other than the political and business leader class, the political and business leader class would not give one flying fuck. They are only loudly concerned because... this will affect THEM. That's a whole different kettle of fish in their eyes. The poors have some concerns? Whatevs. Wait, this will affect US? NOW we need to be careful, circumspect, conservative, move slowly, don't rock the boat, because if money becomes obsolete, who are we going to hire as security guards? And with what? Money?

15

RiotNrrd2001 t1_je9wkxq wrote

I think some people insist on "consciousness" being a necessary component of AI, and on "understanding" being a function of consciousness. And consciousness means "being conscious the way biological systems like ourselves are conscious". AND, the final nail in this coffin: "that's impossible". Hard to argue with.

QED, ergo, in conclusion regarding AIs ever "understanding" anything: Nope.

But what about....? Nope.

But maybe they'll...? I said no.

What if they invent a...? Doesn't matter, what part of "impossible" are you not getting here?

Just to be clear, I am not one of these people. But I think this is what we sometimes see. In order for AI to be "real", it has to have characteristics that are basically impossible to test for (i.e., consciousness and/or self-awareness). Thus, for these people, AI can't ever be real.

1

RiotNrrd2001 t1_jdmi47t wrote

There are people who will keep moving the goalposts literally forever. It pretty much doesn't matter what gets developed, it won't ever be "real" AI, in their minds, because for them AI is actually inconceivable. There's us, who are (obviously) intelligent, and then there's a bunch of simulations. And simulations will always be simulations, no matter how close to the real thing they get.

So, whatever we have, it won't be "real" until we develop X. Except that as soon as X gets developed, well... X has an explanation that clearly shows it isn't actually intelligence, it's just a clever simulation, so now it won't be "real" AI until we develop Y...

And so it goes.

3

RiotNrrd2001 t1_ja5u208 wrote

They say "do what you love for work, and you'll never work a day in your life." And that is a complete crock of shit. If you do what you love for work, what you will do is turn what you love... into work. Don't burn out on what you love. Don't "dread Mondays" because you have to go do what you loved once. Don't gripe about how you really aren't being appreciated doing what you used to love but are now kinda neutral on and, honestly, some days have trouble remembering what it was you even liked about it. And so on down the spiral. That's what happens, mostly, when you start out doing what you love for work.

Just find something you can stand to do for a living, and do the stuff you love on the side. Not as employment, but because it's what you love. Then there won't be any burnout, AND the AI revolution won't eat you.

4

RiotNrrd2001 t1_j9p3ds6 wrote

The world just got a couple of free interns. They know a lot, but they're inexperienced, kind of dumb in some ways, and they make glaring errors. On the other hand, it's always easier to edit than it is to compose, so having some rough-draft-writing fools spit out a bunch of nicely worded and formatted stuff at you, half of which is wrong, is actually just fine for a lot of things. It doesn't save you 100% of your work time, but it sure cuts it down. Jobs that used to involve direct creation will now be more like exercises in proofreading and editing.

That, by itself, is enough to upend things. Even if we don't get AGI. Even if ChatGPT and Bing get no more accurate than they are right now. The tools we have now have only been widely available for a very short time, and people are still working out what they can do. The pebble's been dropped into the pond, but many of the ripples are only now becoming visible (e.g., Amazon is just now reporting a huge influx of ChatGPT-authored content).

The real AGI fun is down the road. But that doesn't mean some of the fun isn't already starting.

1

RiotNrrd2001 t1_j9ovc22 wrote

They will never be 100% accurate. They are like people: even the smartest of people don't know things, have blind spots, have been trained incorrectly, etc. These models are no different. We can trust them the way we can trust people - perhaps eventually with a very high degree of confidence, but never with 100% blind trust.

6

RiotNrrd2001 t1_j9mddet wrote

I imagine at some point LLMs will be paired with tools that can handle the things they themselves are poor at. Instead of an LLM "remembering" that 3 + 4 = 8 the way it has to today, it will outsource such operations to a calculator, which will tell it that the answer is actually 7. That ChatGPT can't do that today and still does as well as it does is actually pretty impressive, but... occasionally you still get an 8 where you really want a solidly dependable 7.
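
Here's a minimal sketch of the idea in Python - the `ask_llm` function is a stand-in for a real model call, not any particular product's API, and the routing rule is deliberately naive:

```python
import re

def calculator(expression: str) -> str:
    """A trivial 'tool': evaluate a plain arithmetic expression exactly."""
    # Only digits, whitespace, and basic operators are allowed before eval.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expression):
        raise ValueError("not a plain arithmetic expression")
    return str(eval(expression))

def ask_llm(prompt: str) -> str:
    """Stand-in for a language model call (hypothetical, for illustration)."""
    return "3 + 4 = 8"  # the confident-but-wrong answer described above

def answer(prompt: str) -> str:
    """Route arithmetic to the calculator instead of the model's 'memory'."""
    match = re.search(r"\d+(?:\s*[+\-*/]\s*\d+)+", prompt)
    if match:
        expression = match.group(0)
        return f"{expression} = {calculator(expression)}"  # a dependable 7
    return ask_llm(prompt)

print(answer("What is 3 + 4?"))  # -> 3 + 4 = 7
```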

These are the early days. There is still some work to be done.

20

RiotNrrd2001 t1_j9bmlb2 wrote

I personally couldn't care less whether it's "intelligent" or not. My own concern is mainly whether what comes out of it is useful. Whether a conscious mind produced that output or whether it was the result of a complicated dart game is, as far as I'm concerned, an interesting question. But a more important question - at least for me - is whether what it produces is useful. That question is less academic, and somewhat more objective. I can't tell if it's conscious. I CAN tell whether it's properly summarized a paragraph I wrote into a particular format, or whether the list of ideas I asked it for is worth delving into. I can't evaluate its conscious state, or even its level of intelligence, but that doesn't mean I can't evaluate its behavior, and I have to say that in those areas where factual knowledge isn't as necessary (summarizing text, creating outlines, producing lists of ideas, etc.) it behaves in a usefully intelligent way. Does that mean it IS intelligent? To me, at least, that may not even matter except as an academic thought.

I almost want to look at these systems from a Behavioral Psychology point of view, where internal states are simply discounted as irrelevant and external behavior is all that matters. I don't like applying that to people, but it does seem tempting to apply it to AIs.

ChatGPT is not a calculator; it's more like a young, well-educated but inexperienced intern who wants to do a good job but still makes mistakes. I understand that I have to check ChatGPT's work. I can work with that.

1

RiotNrrd2001 t1_j8w88ju wrote

The main problem is that there is no generally agreed upon definition of "intelligence". For some people the recent Large Language Models totally meet their definition, so for them, yes, we have made it to the promised land. For others, the models don't meet their definitions, so no, we still have a long ways to go and may never get there. I have a feeling this split is going to keep on keeping on for some time.

9

RiotNrrd2001 t1_j8ucrdc wrote

I think I first came across the term in the late nineties or maybe the early 2000s. People were claiming we'd make it there sometime around 2012. Of course, various other apocalypses were also converging on 2012 (Mayan calendar myths, massive asteroids, probably zombies; I can't remember all of them), although I have to say that my big memory of that year was that it wasn't quite as apocalyptic as feared/hoped (depending on your outlook). That year's singularity was pretty disappointing as well, as it turned out.

Maybe the next one will be too. It's only hype until it isn't, of course, but until it isn't I expect most of it still will be.

1

RiotNrrd2001 t1_j8tuud6 wrote

What you've said is true about ALL new technologies.

More people were killed by motorcars than by buggies; obviously the internal combustion engine was a mistake. Airplanes can crash from great heights: mankind obviously wasn't meant for altitudes in excess of the nearest climbable mountain, and ALSO: bombs. And no one was ever electrocuted until mass electrification occurred; piping lightning directly into our homes is just asking for fires.

Movies are awesome! Also, they can be used for mass propaganda. As can that dang printing press. No printing presses, no Mein Kampf, so maybe that ought to be looked into.

My point is that yes, all new technology has a potential for causing damage and for being misused. We should definitely be conscious of those things. But that doesn't mean we need to stop development. What we need to be is aware.

1

RiotNrrd2001 t1_j8tgtrp wrote

Paradigm-breaking shifts were something of the norm for most of the twentieth century, and we've been at the top of that S-curve for a lot longer than I personally like. I prefer the rapid pace of change I witnessed between the 1960s and roughly mid-September of 2001, at which point "progress" became less about groundbreaking new additions and more about refining what we already had (smartphones being nothing more than small computers, for example).

Finally we have something new. I'm happy to see it. Right now our culture seems to have a lot of problems, and to be honest, I don't see how things can change without things changing, if you understand what I'm saying. We can't improve things AND keep everything the same at the same time. The AI stuff is the face of change, but again... that's something those of us in the older generations were kind of used to and haven't been seeing a lot of for a while. "Apps" don't count.

Don't be a square, man. The times they be a'changin'.

1