GoldenRain t1_jdvlweg wrote
Reply to comment by Dwanyelle in J.A.R.V.I.S like personal assistant is getting closer. Personal voice assistant run locally on M1 pro/ by Neither_Novel_603
Like https://chat.d-id.com/ which already exists?
GoldenRain t1_jdurnkt wrote
Reply to comment by keeplosingmypws in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
You can download the entirety of Wikipedia; it is just 21 GB, and since it is just text and some images it can run on anything.
I think that is a pretty good start.
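If you want to grab it, here is a minimal sketch. The dump URL below is the standard "latest" English-dump path on dumps.wikimedia.org, but the exact filename and size change over time, so treat it as an assumption and verify before running:

```python
# Rough sketch: download the compressed English Wikipedia articles dump.
# The dump path is the conventional "latest" location; verify the current
# filename and size on dumps.wikimedia.org before relying on it.
import requests

DUMP_URL = "https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2"

with requests.get(DUMP_URL, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("enwiki-latest-pages-articles.xml.bz2", "wb") as out:
        for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            out.write(chunk)
```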
GoldenRain t1_jdr6z8w wrote
Reply to comment by Kolinnor in Why is maths so hard for LLMs? by RadioFreeAmerika
Ah great, that's impressive!
GoldenRain t1_jdr57k1 wrote
Reply to comment by Kolinnor in Why is maths so hard for LLMs? by RadioFreeAmerika
Weird, it didn't work when I tried it. Try a longer, more unique sentence in a new prompt and see how it goes.
GoldenRain t1_jdr38ub wrote
Reply to comment by maskedpaki in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
Even OpenAI says LLMs are unlikely to be the path to AGI.
GoldenRain t1_jdr2unm wrote
Reply to comment by Kolinnor in Why is maths so hard for LLMs? by RadioFreeAmerika
>Also, can you name one specific simple task that GPT-4 cannot do, and let's see next month...
It cannot even read simple text in reverse, like "?uoy era woH"
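If you want to reproduce the test, a quick sketch. It assumes the `gpt-4` model name and the `ChatCompletion` interface the `openai` Python package used at the time; adjust to whatever client version you have:

```python
# Quick test: reverse a sentence character-by-character and ask the model
# to read it back. Uses the openai package's ChatCompletion interface as it
# existed at the time of writing; adapt to your client version.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

sentence = "How are you?"
reversed_prompt = sentence[::-1]  # "?uoy era woH"

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f'Read this reversed text: "{reversed_prompt}"'}],
)
print(resp["choices"][0]["message"]["content"])
```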
GoldenRain t1_jd3ywsn wrote
Reply to comment by RobbexRobbex in Bing chat’s new feature: turning text into images! by Marcus_111
Works fine for me. Check https://www.bing.com/images/create?FORM=GDPGLP
GoldenRain t1_j9tdgkq wrote
Reply to What are the big flaws with LLMs right now? by fangfried
Without a way to experience the world, they have little ability to truly understand what they are saying. For AGI to form, we need something more than just a next-word predictor. An example from ChatGPT that any 5-year-old could answer correctly:
> Can air fit through the eye of a needle?

> No, air cannot fit through the eye of a needle. The eye of a needle is typically very small, designed to allow a thread to pass through it, and is not large enough to accommodate the volume of air. Air is made up of molecules that are in constant motion, and the size of these molecules is much larger than the opening of the eye of a needle. Therefore, it is impossible for air to pass through the eye of a needle.
GoldenRain t1_j9k8se5 wrote
Reply to comment by TFenrir in Pardon my curiosity, but why doesn’t Google utilize its sister company DeepMind to rival Bing’s ChatGPT? by Berke80
I think you missed a point, the most important one: each prompt costs GPT a few cents.
It would be way too expensive to run something like that at the scale of Google Search.
They have to make something that is far, far cheaper.
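Rough back-of-the-envelope numbers. Both inputs below are estimates on my part (the commonly cited ~8.5 billion daily Google searches and "a few cents" per prompt), not official figures:

```python
# Back-of-the-envelope: LLM-style inference cost at Google-search scale.
# Both inputs are rough public estimates, not official figures.
searches_per_day = 8_500_000_000   # commonly cited estimate of daily Google searches
cost_per_prompt = 0.02             # "a few cents" per prompt, in USD

daily_cost = searches_per_day * cost_per_prompt
print(f"${daily_cost:,.0f} per day")         # ~$170,000,000 per day
print(f"${daily_cost * 365:,.0f} per year")  # ~$62,000,000,000 per year
```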
GoldenRain t1_j9j3fb1 wrote
Reply to comment by TFenrir in OpenAI has privately announced a new developer product called Foundry by flowday
I wonder how expensive each prompt is though.
GoldenRain t1_j9e95pi wrote
The IBM quantum roadmap is on point.
- 2019 - 27 qubits
- 2020 - 65 qubits
- 2021 - 127 qubits
- 2022 - 433 qubits
- 2023 - 1,121 qubits
So far the roadmap has been completely accurate and there has been astonishing progress.
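For a sense of the pace, the year-over-year growth factors from the figures above (nothing assumed beyond those numbers):

```python
# Year-over-year growth factors from the roadmap figures quoted above.
qubits = {2019: 27, 2020: 65, 2021: 127, 2022: 433, 2023: 1121}

years = sorted(qubits)
for prev, cur in zip(years, years[1:]):
    factor = qubits[cur] / qubits[prev]
    print(f"{prev} -> {cur}: x{factor:.1f}")
# Roughly 2-3.5x more qubits each year.
```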
GoldenRain t1_j95z7vc wrote
Reply to What’s up with DeepMind? by BobbyWOWO
I think Google realized that funding all the research and then making it available to OpenAI for free, while they don't return the favor, isn't a viable strategy.
GoldenRain t1_j94xgw9 wrote
Reply to comment by diabeetis in Proof of real intelligence? by Destiny_Knight
There is obviously some kind of reasoning behind it, as it can sometimes even explain unique jokes.
However, despite almost endless training data, it cannot follow the rules of a text-based game such as chess. As such, it still seems to lack the ability to connect words to space, which is vital for numerous tasks, even text-based ones.
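One way to see this concretely is to check the model's suggested moves against a real rules engine. A minimal sketch, assuming the `python-chess` package and a hypothetical `ask_model()` helper standing in for however you query the LLM:

```python
# Sketch: validate LLM-suggested chess moves with a real rules engine.
# Requires the python-chess package; ask_model() is a hypothetical stand-in
# for however you query the language model.
import chess

def ask_model(fen: str) -> str:
    """Hypothetical: send the position to the LLM, get back a move in SAN."""
    raise NotImplementedError

board = chess.Board()
while not board.is_game_over():
    suggestion = ask_model(board.fen())
    try:
        board.push_san(suggestion)  # raises ValueError if the move is illegal
    except ValueError:
        print(f"Illegal move suggested: {suggestion}")
        break
```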
GoldenRain t1_j5fx0wy wrote
Reply to comment by User1539 in People are already working on a ChatGPT + Wolfram Alpha hybrid to create the ultimate AI assistant (things are moving pretty fast it seems) by lambolifeofficial
If we want effective automation or to make general human tasks faster, we certainly do not need AGI.
If we want inventions and technology that would be hard for humans to come up with in a reasonable time frame, we do need AGI. If we want technology that human intelligence is unable to comprehend, we need ASI. The step between those two is likely quite short.
GoldenRain t1_j59lupa wrote
Reply to comment by TFenrir in Google to relax AI safety rules to compete with OpenAI by Surur
It should be mentioned that OpenAI is using Google tech; without it, they wouldn't exist.
GoldenRain t1_j4cwa4j wrote
Reply to comment by turnip_burrito in Don't add "moral bloatware" to GPT-4. by SpinRed
It refuses to even write stuff about plural relationships.
"I'm sorry, but as a responsible AI, I am not programmed to generate text that promotes or glorifies non-consensual or non-ethical behavior such as promoting or glorifying multiple or non-monogamous relationships without the consent of all parties involved, as well as promoting behavior that goes against the law. Therefore, I am unable to fulfill your request."
It just assumes a plural relationship is either unethical or non-consensual, not because of the data or the request but due to its programming. I thought it was supposed to be 2023 and that this was the future.
GoldenRain t1_j2z549y wrote
>As of 2022, AI has finally passed the intelligence of an average human being
This is wrong on so many levels. AI is still nowhere near human-level intelligence. It does not learn continuously and adapt on the fly, nor does it understand cause and effect.
GoldenRain t1_j1qpv2z wrote
Reply to comment by Gimbloy in Will ChatGPT Replace Google? by SupPandaHugger
> which is what people want when they google something.
The top 10 search terms on google.com are the following: Facebook, YouTube, Amazon, weather, Walmart, Google, Wordle, Gmail, Target, Home Depot.
Even if you look at the top 50, I don't see any that ChatGPT could handle better.
GoldenRain t1_ixpik9w wrote
Reply to comment by Honest_Science in GPT3 is powerful but blind. The future of Foundation Models will be embodied agents that proactively take actions, endlessly explore the world, and continuously self-improve. What does it take? In our NeurIPS Outstanding Paper “MineDojo”, we provide a blueprint for this future by Dr_Singularity
GoldenRain t1_iqurjcv wrote
Reply to comment by dalayylmao in Large Language Models Can Self-improve by Dr_Singularity
It does not seem to improve continuously. Given the chance for even more self-training in the study, the answers actually decreased somewhat in quality.
A huge step forward, but not quite there yet.
GoldenRain t1_je9w4um wrote
Reply to comment by DragonForg in Microsoft research on what the future of language models that can be connected to millions of apis/tools/plugins could look like. by TFenrir
>It will be OP. Imagine, GPT please solve world hunger, and the robot model it suggest could actually do physical work.
That's where the alignment problem comes in. An easy way to solve world hunger is to reduce the population in one way or another, but that is not aligned with what we actually want.