EntireContext
EntireContext t1_izy2khg wrote
Reply to comment by PyreOfDeath97 in Just today someone posted a Twitter thread about Nuclear Fusion... by natepriv22
Economics are a reflection of physical realities. The cheaper it is, the more abundant.
EntireContext t1_izwlkbz wrote
Very cool. But I guess it will take a long time until you have a commercial plant. Also, will it beat rooftop solar economically? We'll see...
EntireContext OP t1_iyskmjg wrote
Reply to comment by ChronoPsyche in Have you updated your timelines following ChatGPT? by EntireContext
I don't see a need for specific breakthroughs. I believe the rate of progress we've been seeing since 2012 will get us to AGI by 2025.
EntireContext OP t1_iyq8256 wrote
Reply to comment by Superschlenz in Have you updated your timelines following ChatGPT? by EntireContext
You can't lie if you're stupid. You can't fake knowing math, or knowing how to program, or knowing how to talk. Either you do or you don't.
EntireContext OP t1_iyohhuh wrote
Reply to comment by Mrkvitko in Have you updated your timelines following ChatGPT? by EntireContext
It has no way of verifying the answer though. You have to tell it the errors you get from your code so that it can output better code.
What you want is a model that can write perfect code with no way of testing it. I think that will be possible, just not right now.
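Roughly, the error-feedback loop I'm describing looks like this (a minimal sketch; `ask_model` is a hypothetical placeholder, not any real API):

    # Minimal sketch of the error-feedback loop: run the model's code, capture any
    # error, and feed that error back so the next attempt can fix it.
    # `ask_model` is a hypothetical stand-in for whatever chat model you call.
    import subprocess
    import sys

    def ask_model(prompt: str) -> str:
        raise NotImplementedError("replace with a real chat-model call")

    def iterate_until_it_runs(task: str, max_rounds: int = 5) -> str:
        prompt = task
        code = ""
        for _ in range(max_rounds):
            code = ask_model(prompt)
            result = subprocess.run([sys.executable, "-c", code],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return code  # the code ran without raising errors
            # tell the model what went wrong and ask it to fix the code
            prompt = f"{task}\n\nYour last code failed with:\n{result.stderr}\nPlease fix it."
        return code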
EntireContext OP t1_iyoc3ib wrote
Reply to comment by mjrossman in Have you updated your timelines following ChatGPT? by EntireContext
ChatGPT is state-of-the-art in terms of what's available as a general conversational model. It's obviously not state-of-the-art at everything though, because it can't solve IMO problems in maths, for example.
When you answer any question, what you do is give a sequence of words that are statistically adjacent enough to be convincing...
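To make that concrete, "statistically adjacent" just means picking each next word from a probability table conditioned on what came before; a toy sketch (the tiny table here is made up purely for illustration):

    # Toy illustration of a "statistically adjacent" word sequence: sample each
    # next word from a made-up probability table conditioned on the previous word.
    import random

    next_word_probs = {
        "the":     {"model": 0.5, "answer": 0.5},
        "model":   {"writes": 0.7, "answers": 0.3},
        "writes":  {"code": 1.0},
        "answers": {"questions": 1.0},
    }

    def generate(start: str, length: int = 4) -> str:
        words = [start]
        for _ in range(length):
            choices = next_word_probs.get(words[-1])
            if not choices:
                break
            words.append(random.choices(list(choices),
                                        weights=list(choices.values()))[0])
        return " ".join(words)

    print(generate("the"))  # one possible output: "the model writes code"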
EntireContext OP t1_iyo9fg4 wrote
Reply to comment by EpicMasterOfWar in Have you updated your timelines following ChatGPT? by EntireContext
The difference between what was possible in 2019 and what the models can do now.
Back when GPT-2 was out it could barely produce coherent sentences.
This ChatGPT model does make mistakes, but it always speaks in a coherent way.
EntireContext OP t1_iyo964o wrote
Reply to comment by mjrossman in Have you updated your timelines following ChatGPT? by EntireContext
Current methods can do maths though. A paper from November showed a net that solved ten International Mathematical Olympiad problems. It's not like transformers can't do maths. And ChatGPT wasn't trained to do maths.
I haven't found its limits in terms of web development, at least. It's a capable pair-programmer. Of course I guess it can't create innovative new algorithms that are state-of-the-art in complexity, but I didn't expect it to do that.
EntireContext OP t1_iyntah2 wrote
Reply to comment by ReadSeparate in Have you updated your timelines following ChatGPT? by EntireContext
Well, then they'll come up with a better algorithm than transformers (which have already been improved into Performers and whatnot).
At any rate, I still see AGI in 2025.
EntireContext OP t1_iynq6u8 wrote
Reply to comment by red75prime in Have you updated your timelines following ChatGPT? by EntireContext
I mean the context window will increase with incoming models. GPT-1 had a smaller context window than ChatGPT.
EntireContext OP t1_iynm425 wrote
Reply to comment by red75prime in Have you updated your timelines following ChatGPT? by EntireContext
No idea what the context window is, but at the end of the day they can just increase it...
It's already commercially useful right now. To be more useful it doesn't need a bigger context window (although the context window will continue to increase), it needs qualitatively better intelligence.
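For what it's worth, a "context window" just means the model only sees a fixed budget of recent tokens; a rough sketch of what that implies for a conversation (token counts crudely approximated by splitting on whitespace, just for illustration):

    # Rough sketch of a fixed context window: older messages get dropped once
    # the running conversation exceeds the token budget.
    def fit_to_context(messages: list[str], max_tokens: int) -> list[str]:
        kept: list[str] = []
        used = 0
        for msg in reversed(messages):      # keep the most recent messages first
            tokens = len(msg.split())       # crude token estimate
            if used + tokens > max_tokens:
                break
            kept.append(msg)
            used += tokens
        return list(reversed(kept))

    history = ["first prompt ...", "model reply ...", "latest prompt ..."]
    print(fit_to_context(history, max_tokens=2048))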
EntireContext OP t1_iynldwj wrote
Reply to comment by red75prime in Have you updated your timelines following ChatGPT? by EntireContext
It remembered previous prompts when I talked about them.
EntireContext OP t1_iynk9h4 wrote
Reply to comment by Imaginary_Ad307 in Have you updated your timelines following ChatGPT? by EntireContext
I saw that headline but didn't go deep into it. Is it real progress, not hype? How big are the efficiency gains? How long before they can implement it?
And aren't neural nets super complex already with all those billions of parameters?
Submitted by EntireContext t3_zasjrg in singularity
EntireContext t1_iyn71fo wrote
Reply to Is my career soon to be nonexistent? by apyrexvision
You won't be a manager of developers. We'll all be unemployABLE by 2025.
This isn't a bad thing, it's a good thing. We'll be unemployable because machines can do what we do better, which means we'll have aging reversal and all the other Future stuff.
EntireContext t1_ixa6l7w wrote
Reply to Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
It still has the drawbacks of the Reddit hive-mind (you get downvoted to hell if you dare say you don't care about global warming), but on the whole it's one of the best places to talk about the future, and a community of people who understand that AGI might be right around the corner!
EntireContext t1_ix3227j wrote
Reply to 2023 predictions by ryusan8989
In 2023 an AI model will be good enough at math to solve all International Mathematical Olympiad problems, which will be a big deal in the news!
EntireContext t1_ivxusae wrote
I'm bullish on 2025 for AGI. I also believe an AI will be able to solve all International Mathematical Olympiad problems by July 2023.
EntireContext t1_ivbo8c9 wrote
Reply to comment by Spaceboy779 in In the face on the Anthropocene by apple_achia
You're right! I corrected it. There is still no climate crisis though.
EntireContext t1_ivaamsw wrote
Once AGI is here, longevity is a solved problem. But it makes sense that you're pessimistic, given that your AGI timelines are way off in my opinion. AGI will be here between 2025 and 2030 if the progress we're seeing with Large Language Models continues at the same pace.
EntireContext t1_iva8777 wrote
Reply to In the face on the Anthropocene by apple_achia
It's not a hard problem for AGI to capture the "excess" CO2 in the atmosphere.
Also, it's unpopular to say because "The Science" says different, but there is actually no climate crisis.
EntireContext t1_izy464r wrote
Reply to comment by PyreOfDeath97 in Just today someone posted a Twitter thread about Nuclear Fusion... by natepriv22
There will always be costs. First there's the cost of constructing and maintaining the reactor, then the cost of transmitting the power over electric lines.