ReadSeparate
ReadSeparate t1_jed4ghg wrote
Reply to comment by brianberns in Are LLMs a step closer to AGI, or just one of many systems which will need to be used in combination to achieve AGI? by Green-Future_
Yann LeCun is consistently wrong about everything, so I assume this means that LLMs are a direct route to AGI.
ReadSeparate t1_jdi9wic wrote
Reply to comment by Maleficent_Refuse_11 in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
What if some of the latent patterns in the training data that it's recreating are the ones that underlie creativity, critique, and theory of mind? Why are people so afraid of the idea that both of these things can be true? It's just re-creating patterns from its training data, and an emergent property of doing that at scale is a form of real intelligence, because that's the most effective way to do it, and because intelligence is where those patterns came from in the first place.
ReadSeparate t1_jcsi6oz wrote
Reply to comment by y53rw in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
Agreed. The proper way to conceive of this, in my opinion, is to view it purely through the lens of value maximization. If we have a hypothetical set of values, we can come up with some rough ideas of what an ASI might do if it possessed such values. The only other factor is capabilities, which we can assume amount to the ability to maximize/minimize any set of constraints (values, resources, time, number of steps, computation, etc.) in the most efficient way allowable within the laws of physics. That pretty much takes everything except values out of the equation, since the ASI's capabilities, we assume, are "anything, as efficiently as possible."
It's impossible to speculate what such a mind would do, because we don't know what its values would be. If its values included the well-being of humans, it could do a bunch of different things with that. It could merge us all into its mind or it could leave Earth and leave us be - it completely depends on what its other values are. Does it value human autonomy? Does it value humanity, but less than some other thing? If so, it might completely wipe us out despite caring about us. For instance, if it values maximizing compute power over humans, but still values humans, it would turn all matter in the galaxy or universe (whatever it has the physical capabilities to access) into computronium, and that would include the matter that makes up our bodies, even if that matter is a completely insignificant fraction of all matter it has the ability to turn into computronium.
I don't think any of these questions are answerable. We just don't know what it's going to value. I actually think it would be somewhat feasible to predict ROUGHLY what it would do IF we had a full list of its values, but outside of that it's impossible.
ReadSeparate t1_j8fb4cr wrote
Reply to comment by [deleted] in This is Revolutionary?! Amazon's 738 Million(!!!) parameter's model outpreforms humans on sience, vision, language and much more tasks. by Ok_Criticism_1414
One can easily imagine a generalist LLM outputting an action token representing a prompt for a specialized LLM; that prompt gets routed to the specialist, and the specialist's response is then formatted and put into context by the generalist.
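Roughly, in pseudocode (the `generate()` call and the `<route:NAME>` action-token convention here are made up for illustration, not any real API):

```python
# Hypothetical sketch of generalist -> specialist routing; generate() is a
# stand-in for a real completion API and returns canned text for the demo.
SPECIALISTS = {"math": "math-llm", "code": "code-llm"}

def generate(model, prompt):
    if model == "math-llm":
        return "2x"
    if "[math specialist]" in prompt:
        return "The derivative of x^2 is 2x."
    return "<route:math>What is the derivative of x^2?"

def answer(user_prompt):
    context = user_prompt
    output = generate("generalist-llm", context)
    # If the generalist emits an action token, hand the sub-prompt to the
    # named specialist, then feed its formatted reply back into context.
    if output.startswith("<route:"):
        name, _, sub_prompt = output.removeprefix("<route:").partition(">")
        reply = generate(SPECIALISTS[name], sub_prompt.strip())
        context += f"\n[{name} specialist]: {reply}"
        output = generate("generalist-llm", context)
    return output

print(answer("What's the derivative of x^2?"))  # -> The derivative of x^2 is 2x.
```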
ReadSeparate OP t1_j8442mf wrote
Reply to comment by adt in Where are all the multi-modal models? by ReadSeparate
This is exactly the comment I was looking for when I made this thread, thanks so much
Submitted by ReadSeparate t3_10zcig2 in singularity
ReadSeparate t1_j2euf62 wrote
Reply to comment by RoninNionr in Happy New Year Everyone. It's time to accelerate even more 🤠 by Pro_RazE
And there’s no doubt that he is aware of how that looks and of his platform, which means it IS an announcement
ReadSeparate t1_j29kvft wrote
Reply to comment by Lodge1722 in OpenAI might have shot themselves in the foot with ChatGPT by Kaarssteun
Yeah I would like to see this as well, I couldn’t find it either
ReadSeparate t1_izhbplg wrote
Reply to ChatGPT solves quantum gravity? by walkthroughwonder
Who is this person tweeting? Are they a physicist? If not, who cares, this could easily be random gibberish from the model.
ReadSeparate t1_iyz1wbt wrote
Reply to comment by Nameless1995 in [D] OpenAI’s ChatGPT is unbelievable good in telling stories! by Far_Pineapple770
Awesome comment, thank you, I'm gunna check all of these out. For the external database thing, to clarify, I was wondering if part of the model training could be learning which information to store so that it can be remembered for later. Like for example, in a conversation with someone, their name can be stored in a database and retrieved later when they want to reference the person's name, even if that's not in the context window any longer.
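Something like this toy version of the name example (the `ConversationMemory` class is made up; a plain key-value store stands in for whatever a trained system would actually learn to use):

```python
# Toy external memory: a real system would presumably learn what to store
# and use embeddings for retrieval; a plain dict makes the idea concrete.
class ConversationMemory:
    def __init__(self):
        self.facts = {}

    def store(self, key, value):
        self.facts[key] = value

    def recall(self, key):
        return self.facts.get(key)

memory = ConversationMemory()
memory.store("user_name", "Alice")   # saved early in the conversation
# ... thousands of tokens later, long after it left the context window ...
name = memory.recall("user_name")
prompt = f"(Known fact: the user's name is {name})\nUser: do you remember me?"
```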
ReadSeparate t1_iywxpuh wrote
Reply to comment by ThePhantomPhoton in [D] OpenAI’s ChatGPT is unbelievable good in telling stories! by Far_Pineapple770
I wonder how feasible it is to use an external database to store/retrieve important information to achieve coherency.
If it’s not, then I guess we’ll have to wait for something to replace Transformers. Perhaps there’s a self-attention mechanism out there that runs in constant time per token instead of growing with the context length.
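For what it's worth, "linear attention" (Katharopoulos et al., 2020) is one existing family along these lines: overall cost linear in sequence length, and constant per new token when decoding with a running state. A minimal non-causal sketch:

```python
import numpy as np

def phi(x):
    # Positive feature map (ELU + 1), as used in the linear attention paper.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """O(n * d^2) instead of O(n^2 * d): associativity lets us build the
    (d, d) summary phi(K)^T V once and reuse it for every query."""
    Qf, Kf = phi(Q), phi(K)          # (n, d) feature-mapped queries/keys
    KV = Kf.T @ V                    # (d, d) summary of the whole sequence
    Z = Qf @ Kf.sum(axis=0)          # (n,) per-query normalizer
    return (Qf @ KV) / Z[:, None]

n, d = 1024, 64
Q, K, V = (np.random.randn(n, d) for _ in range(3))
out = linear_attention(Q, K, V)      # (1024, 64); never materializes an n x n matrix
```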
ReadSeparate t1_iyo883j wrote
Reply to comment by EntireContext in Have you updated your timelines following ChatGPT? by EntireContext
I do agree with this comment. It’s feasible that long-term memory isn’t required for AGI (though I think it probably is), or that hacks like reading/writing to a database will be able to simulate long-term memory.
I think it may take longer than 2025 to replace transformers though. They’ve been around since 2017 and we haven’t seen any real promising candidates yet.
I can definitely see a scenario where GPT-5 or 6 has prompts built into its training data which are designed to teach it to utilize database reads/writes.
Imagine it greets you by name after seeing your name only once, six months ago. It could have a database-read token with sub-input tokens that fetch your name from a database based on some sort of identifier.
It could probably get really good at doing this too if it’s actually in the training data.
Eventually, I could see the model using its coding knowledge to design the database/prompting system on its own.
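As a sketch of what that could look like at inference time (the `<db_read:KEY>` token convention is hypothetical):

```python
import re

DB = {"user:42:name": "Sam"}  # stand-in for a real database

# Hypothetical convention: the model emits <db_read:KEY> wherever it needs
# a stored fact; we substitute the value before the text reaches the user.
READ_TOKEN = re.compile(r"<db_read:([\w:]+)>")

def resolve_reads(model_output):
    return READ_TOKEN.sub(lambda m: DB.get(m.group(1), "(unknown)"), model_output)

print(resolve_reads("Hello <db_read:user:42:name>, good to see you again!"))
# -> Hello Sam, good to see you again!
```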
ReadSeparate t1_iynt062 wrote
Reply to comment by EntireContext in Have you updated your timelines following ChatGPT? by EntireContext
They can’t just increase it. Self-attention’s time complexity is O(n^2) in the context length, which means the total compute grows quadratically as the window gets longer.
This is an architectural constraint of transformers. We’ll either need a better algorithm than transformers, or a way to encode/decode important information to, say, a database and insert it back into the prompt when it’s required.
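To put rough numbers on it: going from a 2,048-token window to 4,096 doubles n but quadruples the pairwise attention comparisons per layer (2,048^2 ≈ 4.2M vs. 4,096^2 ≈ 16.8M), so a 10x longer window costs roughly 100x the attention compute.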
ReadSeparate t1_iy4k2gp wrote
Reply to comment by exioce in Isaac Arthur interviewed talking about the next 100 years by Quealdlor
Not sure if this is better or worse than if they made fun of his speech impediment
ReadSeparate t1_iwqtf27 wrote
Reply to comment by berdiekin in US and EU Pushing Ahead With Exascale, China Efforts Remain Shrouded by nick7566
> pre-pend
This guy programs
ReadSeparate t1_iw4nifp wrote
Reply to comment by visarga in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
I’m not saying this group of people are going to be permanently unemployed, I’m saying they’re not going to be making art for money. Many of them may facilitate the process somehow, like prompt engineering, etc., but that’s very different and FAR less time-consuming than actually creating art.
ReadSeparate t1_iw3ivdm wrote
Reply to comment by calendarised in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
Let me clarify, I’m not saying it will go away as a hobby or as a passion, just that the number of people doing it for money will be a TINY fraction of the number doing it for money today.
Think of the number of horses used for transportation today vs. the number used before the invention of cars. Horses for transportation are irrelevant now compared to back then.
ReadSeparate t1_iw0xb6p wrote
Reply to comment by Sashinii in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
You’re getting downvoted but you’re speaking straight facts tbh. Human art is gunna be irrelevant in less than 5 years, aside from people who want art specifically made by humans. These kinds of things people are bitching about don’t matter; in just a few short years these models will advance so much they won’t need any new training data anyway. They’ll be able to get what they need, and these artists will still be out of a job.
That said, my heart goes out to the artists losing their livelihoods at the altar of profit and technological progress, we ought to have a UBI/unemployment program for automation job loss.
ReadSeparate t1_iv6p95l wrote
Reply to comment by World_May_Wobble in How do you think an ASI might manifest? by SirDidymus
Are we talking about a world in which there are multiple ASIs existing at the same time? In that case you could be right, I have no idea how to model such a world though. I have no idea what their systems would look like. Would they compete? Would they cooperate? Would they merge? Would game theory still apply to them in the same way? I have no answers for any of those.
I was under the assumption that we were talking about a singular ASI with complete control over everything. I don’t know why the ASI, or whoever is controlling it, would allow any other ASIs to come into existence.
ReadSeparate t1_iv6blo0 wrote
Reply to comment by World_May_Wobble in How do you think an ASI might manifest? by SirDidymus
Why would it need symbols to do that though? It would just do it directly. The reason humans use money is that we don’t know the direct exchange rate between iPhones and chickens.
Additionally, there would not be market forces in such a system, so nothing would have a price, just an inherent value based on scarcity/utility. That wouldn’t change; they’d just be fundamental constants, more or less.
ReadSeparate t1_iv3zj0j wrote
Reply to comment by ihateshadylandlords in How do you think an ASI might manifest? by SirDidymus
I’m not assuming it’ll be sentient, I’m just saying an Oracle ASI is equally as dangerous as one with agency. It MIGHT be sentient. Or it might NOT be sentient but still dangerous, e.g. the paperclip maximizer scenario.
> Okay then the owners will probably use this Non-sentient tech to take care of themselves
Like just AGI, you mean? Yeah, I agree with that of course. But ASI, again, seems short-sighted. If Google makes human-level AGI that’s just as smart as, say, Einstein, yeah, of course they’ll use it to get richer. But if they create something that makes Einstein look like an ant, they’d be foolish to use it in such a way.
ReadSeparate t1_iv3xw7n wrote
Reply to comment by ihateshadylandlords in How do you think an ASI might manifest? by SirDidymus
Even if an ASI is an oracle, alignment is still just as much of an issue. It could tell them to do something that sounds completely harmless to even the smartest humans (and even non-ASI AGIs) but in reality lets it out of the box.
> Unless the ASI is a genie that can turn everything around in a split second, they’re most likely going to want to take care of themselves first and everyone else right after that.
What do you mean? That's exactly what ASI is. We're talking about something orders of magnitude more intelligent than Albert Einstein here. A machine like that will be capable of recursively improving its own intelligence at an insane rate and will eventually know how to achieve any goal compatible with the laws of physics in the most efficient way possible for any possible set of constraints. That is basically by definition a magical genie that can do anything in a split second.
Every point you're making makes sense IF you're talking about just human-level AGI, but it makes no sense for ASI.
ReadSeparate t1_iv3hlhu wrote
Reply to comment by ihateshadylandlords in How do you think an ASI might manifest? by SirDidymus
I don’t think the specific task you ask the ASI to do makes any difference with regard to the control problem. Whether they ask it to make money or to upload all of our minds to the hive mind and build a Dyson sphere around the Sun, I don’t see it making any difference if it’s misaligned. If it’s misaligned, it’s misaligned. You could ask it simply to say hello and it could still cause issues.
Why would they want to recoup their investment? Money doesn’t mean anything in this scenario. ASI is the absolute pinnacle of the universe and money is just a social construct invented by some upright walking apes. It’s like chimps worrying about bananas when they’ve stumbled upon modern food supply chains.
ReadSeparate t1_iv38nua wrote
Reply to comment by ihateshadylandlords in How do you think an ASI might manifest? by SirDidymus
I still think that's absurd. We're not talking about human level AGI here, we're talking about ASI. The moment ASI comes online is the moment money loses all of its value. If they do anything except use it to transition humanity into the next thing we're going to evolve into, I'll think they're short-sighted.
ReadSeparate t1_jefrna7 wrote
Reply to We have a pathway to AGI. I don't think we have one to ASI by karearearea
There are a few things here that I think are important. First of all, I completely agree with the point of this post, and I fully expect that to be the outcome of, say, GPT-6 or 7. Human-expert level at everything would be about the best that approach gets you.
However, I think it may not be super difficult to achieve superintelligence using LLMs as a base. There are two unknowns here, and I'm not exactly sure how they will mesh together: