BadassGhost t1_j7pprrd wrote

I feel like an unrestricted LLM-powered chatbot is pretty close to proto-AGI. OpenAI is basically lobotomizing ChatGPT to avoid headlines about it claiming sentience or emotions or making controversial statements, so it's not much to go off of.

We haven't been able to play with PaLM or any of its next-gen versions (Flan-PaLM and U-PaLM), but the benchmark gaps between it and other models look enormous. If you combined PaLM with a retrieval dataset and cross-attention, like RETRO, I think the result would probably be proto-AGI.

And then the next step from there to actual AGI would be making a multi-modal version of that, like Gato. The only missing ingredient is getting the model to use one modality to inform its understanding of the others, which they did not achieve with Gato but are supposedly actively working on.

22

BadassGhost t1_j7pov2k wrote

2019 was GPT-2, which rocked the boat. 2020 was GPT-3, which sank the boat. Those were partly responsible for kicking off this whole scaling-up of transformers.

There was also LaMDA in 2021, and I'm sure there were many other big events in that period that I'm forgetting.

5

BadassGhost t1_j5a7cip wrote

Fair, I should have swapped them!

What leads you to believe LLMs don't have first-order logic? I just tested it with ChatGPT and it seems to have a firm grasp on the concept. First-order logic seems to be pretty low on the totem pole of abilities of LLMs. Same with symbolic reasoning. Try it for yourself!

I am not exactly sure what you mean by abstraction for neural nets. Are you talking about having defined meanings for inputs, outputs, or internal parts of the model? I don't see why that would be necessary for general intelligence. Humans don't seem to have substantial, distinct, defined meanings for most of the brain, except for language (spoken and internal), which LLMs are also capable of.

The human brain seems to also be a giant function, as far as we can tell (ignoring any discussion about subjective experience, and just focusing on intelligence).

> This type of training detects concrete local patterns in the dataset, but that’s it - these models can’t generalize their observations in any way.

No offense, but this statement seems to show a real lack of knowledge about the last 6+ years of NLP progress. LLMs absolutely can generalize outside of their training set. That's kind of the entire point of why they've proved useful and why the funding for them has skyrocketed. You can ask ChatGPT to come up with original jokes using topics that you can be pretty certain have never been put together for a joke, you can ask it to read code it has never seen before and give recommendations and answers about it, you can ask it to invent new religions, etc.

These models are pretty stunning in their capability to generalize. That's the whole point!

1

BadassGhost t1_j570h0y wrote

Then what would be meaningful? What would convince you that something is close to AGI, but not yet AGI?

For me, this is exactly what I would expect to see if something was almost AGI but not yet there.

The difference from previous, specialized AI is that these models are able to learn seemingly any concept, both in training and after training (in context). Things that are out of distribution can be taught with a single-digit number of examples.
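To give a concrete (hypothetical) example of what I mean by teaching an out-of-distribution rule in context, a few-shot prompt is basically just this; `query_llm` is a stand-in for whichever completion API you use:

```python
# A minimal sketch of in-context ("few-shot") learning: an invented rule the
# model has almost certainly never seen in training, taught with 3 examples.
# `query_llm` is a placeholder for whatever chat/completion API you use.

def build_few_shot_prompt(examples, query):
    """Format demonstration pairs followed by the new query."""
    lines = ["Apply the same rule to the last input:"]
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Made-up rule: reverse the word and wrap it in asterisks.
examples = [
    ("cat", "*tac*"),
    ("house", "*esuoh*"),
    ("river", "*revir*"),
]

prompt = build_few_shot_prompt(examples, "planet")
print(prompt)
# response = query_llm(prompt)  # a capable LLM typically answers "*tenalp*"
```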

(I am not the one downvoting you)

3

BadassGhost t1_j56i9dt wrote

https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks

LLMs are close to, equal to, or beyond human ability on a lot of these tasks. On some of them, they're not there yet, though. I'd argue this is pretty convincing evidence that they are more intelligent than typical mammals in abstract thinking. Clearly animals are much more intelligent in other ways, even more so than humans in many domains (e.g. the experiment where chimps select 10 numbers on a screen, in order, from memory). But in terms of high-level reasoning, they're pretty close to human performance.

7

BadassGhost t1_j56atbp wrote

> To be honest, I base my predictions on the average predictions of AI/ML researchers. To my knowledge, only a minority of them believe we'll get there this decade, and even less in a mere 3 years.

I think there's an unintuitive part of being an expert that can actually cloud your judgement. Actually building these models, and being immersed day in and day out in the linear algebra, calculus, and data science, makes you numb to the results and to the extrapolation of them.

To be clear, I think amateurs who don't know how these systems work are much, much worse at predictions like this. I think the sweet middle ground is knowing exactly how they work, down to the math and actual code, but without being the actual creators whose day jobs are to create and perfect these systems. I think that's where the mind is clear to understand the actual implications of what's being created.

> As advanced as AI is today, it isn't even remotely close to being as generally smart as the average human. I think to close that gap, we would need a helluva lot more than making an AI that is never spewing nonsense and can remember more things.

When I scroll through the list of BIG-bench examples, I feel that these systems are actually very close to human reasoning, with just a few missing puzzle pieces (mostly hallucination and long-term memory).

https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks

You can click through the folders and look at each task's task.json to see what the model is being asked to do. There are comparisons to human labelers.
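If you'd rather poke at the tasks programmatically, something like this rough sketch works after cloning the repo. The field names ("description", "examples", "input", "target"/"target_scores") are from memory, so double-check them against an actual task.json:

```python
# Rough sketch of inspecting a BIG-bench JSON task locally (clone the repo first).
# Pick any JSON task; some tasks split into subfolders with their own task.json.
import json
from pathlib import Path

task_path = Path("BIG-bench/bigbench/benchmark_tasks/logical_deduction/task.json")

with open(task_path) as f:
    task = json.load(f)

print(task.get("description", "(no description)"))
for example in task.get("examples", [])[:3]:
    print("\nInput:", example.get("input"))
    # Multiple-choice tasks score candidate answers; generative ones have a target.
    print("Target:", example.get("target") or example.get("target_scores"))
```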

3

BadassGhost t1_j5654od wrote

This is my guess as well, but I think it's much less certain than AGI happening quickly from this point. We know human intelligence is possible, and we can see that we're pretty close to that level already with LLMs (relative to other intelligences that we know of, like animals).

But we know of exactly 0 superintelligences, so it's impossible to be sure that superintelligence is as easy to achieve as human-level intelligence (let alone whether it's even possible). That being said, it might not matter whether qualitative superintelligence is possible, since we could just make millions of AGIs that all run much faster than a human brain. Quantity and speed instead of quality.

3

BadassGhost t1_j55vl62 wrote

I really struggle to see a hurdle on the horizon that will stop AGI from happening this decade, let alone in the next 3 years. It seems the only major problems are hallucination and the lack of long-term memory. I think both are solved by using retrieval datasets in a smart way.

ASI, on the other hand, might well be many years away. Personally I think it will also happen this decade, but that's less certain to me than AGI. It's definitely possible that becoming significantly smarter than humans is really, really difficult or even impossible, although I imagine it isn't.

It will probably also be an extinction-level event. If not the first ASI, then the 5th, or the 10th, etc. The only way humanity survives is if the first ASI gains a "decisive strategic advantage", as Nick Bostrom would call it, and uses that advantage to basically take over the entire world and prevent any new, dangerous ASIs from being created.

16

BadassGhost t1_j55rxme wrote

I think the biggest reason to use retrieval is to solve the two biggest problems:

  • Hallucination
  • Long-term memory

Make the retrieval database MUCH smaller than RETRO's, and constrain it to respectable sources (textbooks, nonfiction books, scientific papers, and Wikipedia). You could either skip the textbooks/books, or make deals with publishers. Then add to the dataset (or to a second dataset) everything the model sees in a certain context in production. For example, add all user chat history to the dataset for ChatGPT.

You could use cross-attention as in RETRO (maybe with some RLHF like ChatGPT), or just engineer some prompt manipulation based on embedding similarities.
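Something like this toy sketch is what I mean by the embedding-similarity route; the `embed` function here is just a stand-in for a real sentence-embedding model:

```python
# A minimal sketch of "prompt manipulation based on embedding similarities":
# embed trusted documents, retrieve the nearest ones for a user question, and
# prepend them to the prompt. `embed` is a toy bag-of-words hash; in practice
# you'd call a real sentence-embedding model instead.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding: hash each word into a fixed-size bag-of-words vector."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Retrieval database: constrained to "respectable" sources, as described above.
documents = [
    "Wikipedia: Retrieval-augmented models condition generation on fetched text.",
    "Textbook: Double-entry bookkeeping records every transaction twice.",
    "Paper: RETRO cross-attends to nearest-neighbour chunks from a large corpus.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    sims = doc_vectors @ embed(query)  # cosine similarity (vectors are normalized)
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

question = "How does RETRO use retrieval?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only the sources below.\n\nSources:\n{context}\n\nQuestion: {question}\nAnswer:"
print(prompt)  # feed this to the LLM of your choice
```

New user conversations could be appended to `documents` (and their embeddings to `doc_vectors`) to get the long-term memory part.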

You could imagine ChatGPT variants that have specialized knowledge that you can pay for. Maybe an Accounting ChatGPT has accounting textbooks and documents in its retrieval dataset, and accounting companies pay a premium for it.

1

BadassGhost t1_izvcxeg wrote

This is really interesting. I think I agree.

But I don't think this necessarily results in a fast takeoff to civilization-shifting ASI. It might be initially smarter than the smartest humans in general, but I don't know if it will be smarter than the smartest human in a particular field at first. Will the first AGI be better at AI research than the best AI researchers at DeepMind, OpenAI, etc?

Side note: it's ironic that we're discussing the AGI being more general than any human, but not expert-level at particular topics. Kind of the reverse of the past 70 years of AI research lol

1