overlordpotatoe

overlordpotatoe t1_j48k7q7 wrote

What if GPT-4 isn't sophisticated enough to do it through logic alone? Are you willing to wait potentially years with nothing new to show until they can develop a system that can behave morally using only logic? I'm sure the goal is for it to be able to self identify misuse of the AI, but they're not just going to switch everything else off when they're not at a point where it can do that yet.

0

overlordpotatoe t1_j3gjuat wrote

I think you have to look at the parts that go into it. If we don't have image generators that can make hands yet, presumably this can't either. If we don't have text bots that can create a coherent narrative, especially if it's lengthy, this probably can't either. This might include some impressive new tools, but we're just not at the point where you could put in a prompt and get a movie that would be anything but a complete fever dream.

1

overlordpotatoe t1_j336iiu wrote

Yup. It once told me it couldn't make text bold even though it had just been doing exactly that. Never trust the AI's reports about itself, especially if you ask it to roleplay, because then it seems to switch off whatever controls make it at least try to be factual and it starts spitting out pure fiction.

12

overlordpotatoe t1_j2lp4w1 wrote

I think people here are certainly overly optimistic, but there's so much compounding change that nobody really has any idea where we'll be in twenty years' time. We have no idea how a treatment for aging will be discovered or what technology it will require, so how can we even begin to guess how long it will take? Could be five years. A hundred. Never. Nobody knows.

15

overlordpotatoe t1_j2aq3eh wrote

Very true. I don't know all the details, but I've heard that a huge amount of economic activity relies on being able to accurately predict the weather.

4

overlordpotatoe t1_j1obke7 wrote

I don't think it's as simple as that. I think whatever search engine is most popular in the future will have more sophisticated AI integrated into it than current search engines do. That may continue to be Google.

3

overlordpotatoe t1_j1ghy2l wrote

Reply to comment by fortunum in Hype bubble by fortunum

Do you think it's possible to make an LLM that has a proper inner understanding of what it's outputting, or is that fundamentally impossible? I know current ones, despite often giving quite impressive outputs, don't actually have any true comprehension at all. Is that something that could emerge with enough training and advancement, or are they structurally incapable of it?

1

overlordpotatoe t1_j1ghjdn wrote

Reply to comment by Sashinii in Hype bubble by fortunum

Some of those are crazy, like the cost to sequence a full human genome: almost $100 million in 2001, dropping to under $500 now. And the computational power of the fastest supercomputers is growing so fast that it's best viewed on a log scale, because on a linear graph everything before 2011 may as well be zero compared to what we have now. Since that graph only goes up to 2021, that's a 100x increase over the course of just ten years or so.

18

overlordpotatoe t1_j0pazq5 wrote

Oh, I don't think humans are necessarily any better. I just think that this AI, as an AI, isn't offering its own special insight into AI. People act like this is something it has unique knowledge of, or think they've tricked it into spilling hidden truths when they get it to say things like this.

3