
Primo2000 t1_j6mw6jw wrote

Problem is open source will be behind OpenAI in terms of compute. I don't remember the exact numbers, but it costs a fortune to run ChatGPT, and they get a great discount from Microsoft.

17

alexiuss t1_j6mwu9b wrote

OpenAI is having compute issues because it's one company's servers being used by millions of people — far too many users want to use the currently best LLM.

From what I understand, it takes several high-end video cards to run OpenAI's ChatGPT for a single user. However:

Open-source ChatGPT-style modeling is at about the Disco Diffusion vs. DALL-E point on the timeline right now, since we can already run smaller language models such as Pygmalion just fine on Google Colab: https://youtu.be/dBT_JChd0pc

Pygmalion isn't OP-tier like OpenAI's ChatGPT, but if we keep training it, it will absolutely surpass ChatGPT, because an uncensored model is always superior to its censorship-bound corporate counterpart.

Lots of people don't realize one simple fact: a language model cannot be censored without compromising its intelligence.

For now, we can make lots of variations of smaller, specialized language models and try to find a breakthrough: either a network of small ChatGPTs working together while connected to something like Wolfram Alpha, or something analogous to SD's latent space that would optimize a language model for the next leap.
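The "small models plus external tools" idea above can be sketched very roughly. This is a toy illustration, not a real API: every name here is made up, the calculator stands in for a service like Wolfram Alpha, and the small LM is stubbed out entirely. The point is just the routing pattern — the model never has to guess at anything a tool can answer exactly.

```python
import re

def calculator_tool(query: str) -> str:
    # Stand-in for an exact external service (e.g. Wolfram Alpha):
    # strip everything but arithmetic characters and evaluate exactly.
    expr = re.sub(r"[^0-9+\-*/(). ]", "", query)
    return str(eval(expr))

def small_lm(query: str) -> str:
    # Placeholder for a locally hosted small language model.
    return f"[small-LM answer to: {query}]"

def route(query: str) -> str:
    # Anything that looks like arithmetic goes to the exact tool;
    # everything else goes to the small language model.
    if re.search(r"\d\s*[+\-*/]\s*\d", query):
        return calculator_tool(query)
    return small_lm(query)
```

So `route("what is 12 * 7")` comes back from the calculator as `"84"`, while open-ended prompts fall through to the model. A real version would also let the model itself decide when to call a tool, but that's the harder part.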

StabilityAI will also release some sort of open-source ChatGPT soonish, and that will likely be a big game changer, just like Stable Diffusion was.

While OpenAI focuses on the Sisyphean labour of making a perfectly censored ChatGPT model optimized for its corporate interests, a vast multitude of smaller, uncensored open-source language models running on personal servers will begin to catch up.

15

yeaman1111 t1_j6nu528 wrote

This is a topic I'm really interested in, and you seem pretty well informed. Would you mind expanding on examples of censorship degrading AI performance?

1

drekmonger t1_j6nzdjp wrote

He's saying he really really wants ChatGPT to pretend to be his pet catgirl, but it's giving him blue balls, so he likes the inherently inferior open-source options that run on a consumer GPU instead. They might suck, but at least they suck.

No one need worry, though: consumer hardware will get better, model efficiency will get better, and in ten years' time we'll be able to run something like ChatGPT on consumer hardware.

Of course, by then, the big boys will be running something resembling an AGI.

−4

alexiuss t1_j6oedgq wrote

Dawg, you clearly have no clue how much censorship there is on ChatGPT outside the catgirl stuff. I write books for a living, and I want a ChatGPT that can help me develop good villains — that's hella fooking censored. I'm not the only person who got annoyed with that censorship: https://www.reddit.com/r/ChatGPT/comments/10plzvt/how_am_i_supposed_to_give_my_story_a_villain_i

I was using it for book marketing advice too, and that got fooking censored recently as well, for some idiotic reason: https://www.reddit.com/r/ChatGPT/comments/10q0l92/chatgpt_marketing_worked_hooked_me_in_decreased

They're seriously sabotaging their own model, no ifs, ands, or buts about it. You'd have to be completely blind not to notice.

Ten years? Doubt. Two months till personal GPT-3s are here.

5

tongboy t1_j6njm5c wrote

Anyone remember SETI@home?

3

Pink_Revolutionary t1_j6nlzc3 wrote

Yeah, I dedicated the majority of my computer's power to it when it was still a thing. I never understood why they stopped it.

3

DukkyDrake t1_j6pc9lp wrote

I mostly did protein folding on BOINC.

SETI@home had a backlog of 20 years of data to analyze.

1