Kolinnor
Kolinnor t1_jdwva5a wrote
Reply to The current danger is the nature of GPT networks to make obviously false claims with absolute confidence. by katiecharm
On the contrary, I think it's not going to change anything, or it may even slightly push people to actually cross-check sources (I expect many still won't, though)...
The internet is currently flooded with misinformation that's cleverly designed to look attractive and to "make sense". People tend to accept it automatically when it's well done.
We can hope that "badly designed" misinformation will force people to be more suspicious, but that's probably too optimistic...
Kolinnor t1_jdwl68h wrote
Nice work. Imagine an open world populated with NPCs that talk with ChatGPT-level coherence and awareness... Instead of the classic "and then I took an arrow in the knee", you'd randomly catch an interesting discussion between two guards about the actual threats. Makes me wish I was born 10 years later
Kolinnor t1_jdughns wrote
Reply to comment by ArcticWinterZzZ in Why is maths so hard for LLMs? by RadioFreeAmerika
But I don't understand your point? Humans don't do instant multiplication either. At best, we have some mental tricks, which are certainly algorithms too. Or we wisely choose to allocate more effort to long multiplication when needed.
Kolinnor t1_jdrjbj5 wrote
Reply to comment by jloverich in Why is maths so hard for LLMs? by RadioFreeAmerika
Yeah, definitely! Although I don't think this is an unfixable fundamental flaw, I agree it's a big advantage we still have over them
Kolinnor t1_jdr6m19 wrote
Reply to comment by GoldenRain in Why is maths so hard for LLMs? by RadioFreeAmerika
- This is a special puzzle for you. Can you read it? ! enihcam ,uoy rof drah oot si siht teb I .oot hcnerf ni ,yakO .tset a si sihT .seog ti woh ees dna tpmorp wen a ni ecnetnes regnol euqinu erom a htiw yrT .ti deirt I nehw krow tndid ,drieW
>Of course! The text appears to be reversed. Here's the corrected version:
>
>Weird, didn't work when I tried it. Try with a more unique longer sentence in a new prompt and see how it goes. This is a test. Okay, in french too. I bet this is too hard for you, machine! Can you read it?
It kinda fucked up at the end because it repeated "Can you read it?", but it got the reversed text right
Kolinnor t1_jdr412t wrote
Reply to comment by GoldenRain in Why is maths so hard for LLMs? by RadioFreeAmerika
I just copy-pasted this into GPT-4 and here's the answer:
>I am doing well, thank you! As an AI language model, I don't have feelings or emotions like humans do, but I'm here to help you with any questions or information you need. Please let me know how I can assist you.
- What was my sentence saying?
>Your sentence appears to be written backwards. When reversed, it says, "How are you?" You were asking about my well-being.
Kolinnor t1_jdr0g2h wrote
Reply to comment by ArcticWinterZzZ in Why is maths so hard for LLMs? by RadioFreeAmerika
I could be wrong on this take, but...
I don't believe this is correct. What about chain-of-thought prompting? If you ask it to do multiplication step by step, it does it. The current problem, I would say, is that it doesn't know when to take more time to think, but there's probably no intrinsic limitation due to time complexity.
Also, none of you actually compute 5 x 3 = 15. You just know the answer. But you're not a parrot, because if needed you can manually do multiplication, right?
But that's because... someone taught you the algorithm when you were a kid. Suddenly sounds less glorious, no ?
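That grade-school algorithm can be sketched in a few lines (my own toy illustration, not anything from the thread): each step only needs the memorized single-digit products, which is exactly what a step-by-step chain-of-thought prompt walks through.

```python
def long_multiply(a: int, b: int) -> int:
    """Grade-school long multiplication: multiply a by each digit of b,
    shift by the digit's place value, then sum the partial products."""
    result = 0
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)          # one digit at a time
        partial = a * digit * 10 ** place  # partial product, shifted
        result += partial                # running sum, like on paper
    return result

print(long_multiply(123, 456))  # → 56088
```

Each loop iteration is one "thought step"; nothing in it requires more than the times table you learned as a kid.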
Also, can you name one specific simple task that GPT-4 cannot do? Let's check back next month...
Kolinnor t1_jdq9v1z wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
A counter-argument to the "we are too lucky for it to be a coincidence" idea: remember that someone always wins the lottery, and that person is incentivized to start believing in God or to look for higher meaning, even when it's pure coincidence.
In other words, we should expect the lucky ones to start doubting. While this doesn't prove anything, I think I'd Occam's razor my way out of that argument, guys.
Kolinnor t1_ja2hvkm wrote
My (non-expert) take:
The problem is that there are many black boxes involved.
LLMs work well when we have a huge amount of data to train on. In an oversimplified way, LLMs predict the next word based on the previous data they've seen. But how do you "predict the next action you'll take"? If we had a massive amount of "sensation --> action" data (probably just like what the human brain accumulates during life?), that might be possible. I haven't heard of a way to achieve that today, and I suspect it's more complicated than that anyway.
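As a toy illustration of "predict the next word from previous data" (my own minimal sketch; real LLMs use neural networks, not counting), even a bigram counter captures the idea:

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for the "huge amount of data"
corpus = "the guard saw the dragon and the guard ran".split()

# Count which word follows which (bigram statistics)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → "guard" (seen twice, vs "dragon" once)
```

The point of the analogy: there is no obvious "sensation --> action" corpus to count over the way this counts over text.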
I think what you're suggesting is kind of like what Google tries to do with SayCan, but as you can see, for the moment there's no easy way to link LLMs to physical action. LLMs manage to create plausible scenarios of what's happening, or of what the consequences of action X could be, but in practice it's not usable yet.
There's also the fact that, as someone pointed out earlier, there are issues with continuous learning, such as catastrophic forgetting. Many brilliant minds are actively trying to overcome those issues, but it's no easy feat.
Kolinnor t1_j9ujwaj wrote
Damn, that sounds quite big! I'm very impressed with Meta this time, because their releases have usually been a shitshow. I guess there must be different teams, but this is great!
Kolinnor t1_j9gpzre wrote
He does have very good points, and he's very interesting, with brilliant ideas about alignment.
Overall, the whole LessWrong philosophy is a little too "meta" and confusing if you haven't studied the jargon, so I'm sometimes a bit annoyed when I read it and realize, in the end, that they could have expressed the same ideas with less sophisticated phrasing.
That said, while I don't agree with the conclusions he reaches (and he reaches them with too much confidence for my taste), I've learned quite a lot about alignment from reading him. Definitely a must-read on the singularity and alignment even if you don't agree with him.
Kolinnor t1_j92idxj wrote
Reply to Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
For the specific tasks you mentioned, I doubt we'd have an LLM beating human experts, or even anyone who knows a little about the topic. LLMs aren't good enough for that kind of touchy, precise stuff yet!
Kolinnor t1_j8o9nek wrote
Reply to The Turing test flaw by sailhard22
I see lots of these posts about the Turing test being flawed. So I'm just going to comment that the strong versions of the Turing test (that is, mimicking experts in a field, or at least an intelligent human, surely as Turing imagined it) are still far from being passed, and passing them would be a big indicator of AGI.
Kolinnor t1_j6mrh0l wrote
Reply to comment by iamAliAsghar in OpenAI once wanted to save the world. Now it’s chasing profit by informednews
Google... Stability... OpenAI...
Wilful traitors, all.
Kolinnor t1_j6ckoaj wrote
Reply to My human irrationality is already taking over: as generative AI progresses, I've been growing ever more appreciative of human-made media by Yuli-Ban
ChatGPT summary:
The writer has shifted their belief away from the idea that synthetic media will completely destroy the human entertainment industry. They have come to appreciate human-created media more because of the abilities of AI and the intrinsic value of knowing that humans crafted something specific. They don't believe that human creativity will be rendered obsolete by perfect generative AI due to human irrationality and the freedom of choice to seek out human-created art. The writer expects low-level artists to suffer but higher-level artists will eventually adapt and push back against the shift to automation.
Kolinnor t1_j6cczf9 wrote
Reply to comment by ginger_gcups in I’m ready by CassidyHouse
Computer, load up Celery man please
Kolinnor t1_j636vfy wrote
Damn, this is quite good!!
Kolinnor t1_j5a3off wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I wonder what this position can accomplish practically?
Kolinnor t1_j575q7y wrote
Reply to The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
ChatGPT made this neat summary:
The person believes that much of the discussion around the "alignment problem" in AI is misguided, as it assumes that the problem lies with AI itself and not with human society and philosophy. They argue that this is a result of Cartesian thinking, which is based on the belief in absolute truth and a reductive understanding of reality, and that this approach is fundamentally flawed and could be dangerous.
This raises the question: what, then, do you think is the correct way to approach it?
Kolinnor t1_j4zow6g wrote
Reply to comment by MassiveIndependence8 in What do you guys think of this concept- Integrated AI: High Level Brain? by Akimbo333
Really? I had no idea. Then indeed it's not zero at all!
Kolinnor t1_j4uda3q wrote
Reply to comment by Nervous-Newt848 in What do you guys think of this concept- Integrated AI: High Level Brain? by Akimbo333
Correct me if I'm wrong, but those are predetermined movements. Ask a robot to pick up randomly placed objects and it's a fiasco
Kolinnor t1_j4qowxd wrote
This is overly optimistic. I would say movement coordination is simply at 0 right now compared to humans. There is no system 2 reasoning, so the "problem solving" can't really be halfway there.
Also, just because a woman gives birth in 9 months doesn't mean 9 women can give birth in 1 month (just to say that AGI is far more than a sum of different algorithms each mastering an individual skill)
Kolinnor t1_j4d9pdq wrote
Reply to Does anyone else get the feeling that, once true AGI is achieved, most people will act like it was the unsurprising and inevitable outcome that they expected? by oddlyspecificnumber7
I agree with you! But we've gotta keep in mind that most people are just completely unaware of what computers can do.
There was a study a few decades ago, I believe, where they sent a guy disguised as a robot to talk with random people. Most people weren't freaking out at all; they were like "yeah, it's a robot, nothing special, this is what robots do, right?" even though it was clearly passing the Turing test with ease.
Unfortunately, the idea that computers are cold machines not as good as humans, and the idea that they are magic boxes that can do anything we like, will probably keep coexisting (sometimes at the same time in a single uninformed mind) until we get true AGI.
Kolinnor t1_j3gd708 wrote
Reply to "It's coming! Dreamstudio Pro released this month! As @EMostaque says in this interview it will be possible to generate entire movies, storyboarding, 3D cameras, audio integration. http://Aifilms.ai is ready for it 💪 BTW the full interview: [link]" by Yuli-Ban
I'd temper my expectations on this one, especially since they'll want to generate revenue (this isn't going to be a free service, I'd assume).
No surfing the hype wave, boys. We'll have great things soon enough!
Kolinnor t1_je4xh8k wrote
Reply to "Godfather of artificial intelligence" weighs in on the past and potential of AI by JackFisherBooks
Here's the full interview : https://www.youtube.com/watch?v=qpoRO378qRY