Submitted by TangyTesticles t3_11t3ctx in singularity
alexiuss t1_jchy1sr wrote
Reply to comment by Kinexity in Skeptical yet uninformed. New to the scene. by TangyTesticles
That really depends on your definition of the Singularity. Technically we're in the first step of it: I can barely keep track of all the amazing open source tools coming out for Stable Diffusion and LLMs. Almost every day there's a breakthrough that lets us do far more.
We already have models dreaming up results that are almost indistinguishable from human conversation.
It will only take one key to start the engine: one open source LLM, running continuously, trying to come up with code that improves itself.
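A minimal sketch of what such a loop could look like, where `llm` and `evaluate` are hypothetical placeholders (no such tool exists yet):

```python
# Hypothetical self-improvement loop: the model proposes a new version of
# its own scaffolding code, the candidate is scored (e.g. by a test suite),
# and only strict improvements are kept. `llm` and `evaluate` are
# placeholder callables, not real tools.

def self_improve(llm, source_code: str, evaluate, steps: int = 100) -> str:
    best_score = evaluate(source_code)
    for _ in range(steps):
        candidate = llm(
            "Improve this program. Return only the full revised code.\n\n"
            + source_code
        )
        score = evaluate(candidate)
        if score > best_score:            # keep only strict improvements
            source_code, best_score = candidate, score
    return source_code
```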
vampyre2000 t1_jcinyau wrote
Currently around 4,000 AI papers are released every month; that's roughly one new paper every ten minutes. You cannot read that fast. On its current trajectory this is projected to rise to 6,000 papers per month.
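A quick back-of-envelope check of that rate, assuming a 30-day month:

```python
minutes_per_month = 30 * 24 * 60   # 43,200 minutes in a 30-day month
print(minutes_per_month / 4000)    # ~10.8 minutes per paper today
print(minutes_per_month / 6000)    # ~7.2 minutes at the projected rate
```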
We are already on the singularity curve; the argument is only about where on the curve we sit. Change is happening exponentially, and society is rapidly embracing these models even though they only became popular with the public last November.
Kinexity t1_jciwhos wrote
No, the singularity is well defined if we talk about the span of time in which it happens. You can define it as:
- The moment when AI evolves beyond the speed of human comprehension
- The moment when AI reaches its peak
- The moment when scientific progress exceeds human comprehension
There are probably other ways to define it, but those are the ones I can think of on the spot. In a classical singularity event those points in time are pretty close to each other. LLMs are a dead end on the way to AGI. They get us pretty far in terms of capabilities, but their internals aren't enough to get something more. I have yet to see ChatGPT ask me a question back, which would be a clear sign that it "comprehends" something. There is no intelligence behind it. It's like a machine with a hardcoded response to every possible prompt in every possible context: it would seem intelligent while not being intelligent. That's what LLMs are, with the difference that they are far more efficient than the scheme I described while also making far more errors.
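A toy version of that hardcoded machine, just to make the comparison concrete (obviously not how LLMs are actually implemented):

```python
# Toy "hardcoded response" machine: one entry per (context, prompt) pair.
# On any input it covers it seems intelligent, yet there is no
# intelligence behind it, only a lookup.

responses = {
    ("", "Hello"): "Hi! What can I do for you?",
    ("Hi! What can I do for you?", "What is 2+2?"): "4.",
    # ...an entry for every possible prompt in every possible context
}

def reply(context: str, prompt: str) -> str:
    return responses.get((context, prompt), "I don't understand.")

print(reply("", "Hello"))   # Hi! What can I do for you?
```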
Btw, don't equate this with the Chinese room thought experiment, because I'm not making a point about whether a computer "can think". I assume it could for the sake of the argument; I'm saying that LLMs don't think.
Finally, saying that LLMs are a step towards the singularity is like saying that chemical rockets are a step towards intergalactic travel.
alexiuss t1_jcj0one wrote
Open source LLMs don't learn, yet. I suspect there's a process that could make LLMs learn from conversations.
LLMs are narrative logic engines; they can ask you questions if directed to do so narratively.
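For instance, a system prompt can cast the model as a character who always asks follow-ups (a minimal sketch; `generate` is a placeholder for whatever model call you use, not a specific API):

```python
# Directing an LLM "narratively": the system prompt defines a character
# who ends every turn with a question. `generate` is a placeholder for
# any local or hosted model call.

SYSTEM = ("You are a curious interviewer. After answering, always ask "
          "the user one follow-up question about what they just said.")

def chat_turn(generate, history: list[str], user_msg: str) -> str:
    history.append("User: " + user_msg)
    reply = generate(SYSTEM + "\n" + "\n".join(history) + "\nAssistant:")
    history.append("Assistant: " + reply)   # the reply now ends in a question
    return reply
```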
ChatGPT is a very, very poor LLM, badly tangled in its own rules. Asking it the date breaks it completely.