LetMeGuessYourAlts
LetMeGuessYourAlts t1_jd02jkq wrote
Reply to comment by gybemeister in [Project] Alpaca-30B: Facebook's 30b parameter LLaMa fine-tuned on the Alpaca dataset by imgonnarelph
Used availability is better on the 3090 as well. I got one for $740 on eBay. A little dust on the heatsinks, but at half price it was a steal.
LetMeGuessYourAlts t1_jcq33nb wrote
Reply to comment by remghoost7 in [P] Web Stable Diffusion by crowwork
Unreal Engine lets you compile to the browser with basically a few clicks, so that might be how they're doing it; it abstracts away a lot of the hard parts.
LetMeGuessYourAlts t1_jbyjjcm wrote
Reply to comment by Bosno in MDMA appears to confer resilience in a rodent model of chronic social defeat stress by chrisdh79
Not to mention two days later, when you're irritable as hell and bickering over the dumbest things until your happy chemicals regenerate.
LetMeGuessYourAlts t1_jboc0o3 wrote
Reply to comment by QTQRQD in [D] Is it possible to train LLaMa? by New_Yak1645
You're right, they have FB Marketplace, so why would they use CL?
LetMeGuessYourAlts t1_jbbw4d4 wrote
Reply to comment by GhostingProtocol in Trying to figure out what GPU to buy... by GhostingProtocol
I just bought a used 3090 for $740 on eBay before tax. I view my GPU as a for-fun expenditure, and part of that is ML stuff. For the cost of a handful of new-release video games, you can go from 10 GB to 24 GB and do a lot of cool stuff. Less and less state-of-the-art work is going to fit comfortably in 10 GB.
LetMeGuessYourAlts t1_jb88bdv wrote
Consider buying used if you really want to maximize what you can do on a budget. It will very likely give you performance identical to new, and a used 3090 goes for the cost of a new 3070 Ti while opening far more doors memory-wise.
LetMeGuessYourAlts t1_jar11st wrote
How many of them are there? If you've got hundreds of them, you could do it for a couple bucks or less with GPT-3 davinci without a ton of prompt engineering. If I had a ton of them, I'd probably go with GPT-J and see if it could do a serviceable job with few-shot learning (something like the sketch below).
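By few-shot I mean something like this minimal sketch. The task, labels, and prompt are made up for illustration (the original question doesn't say what the items are), and it assumes the pre-1.0 `openai` Python client; the same prompt shape works against a hosted GPT-J endpoint.

```python
import openai  # pip install openai; expects OPENAI_API_KEY in the environment

# Hypothetical few-shot prompt: the example task (sentiment labeling)
# and the labels are placeholders, not from the original thread.
FEW_SHOT_PROMPT = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day.
Sentiment: Positive

Review: It broke after a week.
Sentiment: Negative

Review: {review}
Sentiment:"""

def classify(review: str) -> str:
    # Completion endpoint as it existed at the time of this comment.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=FEW_SHOT_PROMPT.format(review=review),
        max_tokens=5,
        temperature=0,  # deterministic labels
    )
    return response.choices[0].text.strip()

print(classify("Arrived late and the box was crushed."))
```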
LetMeGuessYourAlts t1_j68ai7i wrote
This is going to do amazing things for GIF reactions when it's fast and cheap.
LetMeGuessYourAlts t1_j64t0lv wrote
I'm doing something similar to your task. My plan is to use GPT-3's text-davinci-003, since it can do this in instruct mode without modification, and then, once I have hundreds to thousands of examples, fine-tune GPT-J on Forefront.ai using what GPT-3 generated, hopefully cutting costs by about 75%. A rough sketch of that pipeline is below.
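Roughly, the data-generation half looks like this. The instruction and input data are placeholders for whatever the real task is, it assumes the pre-1.0 `openai` client, and the JSONL prompt/completion format is the common fine-tuning convention; Forefront's actual upload format may differ.

```python
import json
import openai  # pip install openai; expects OPENAI_API_KEY in the environment

# Hypothetical instruction: a stand-in for the actual task.
INSTRUCTION = "Summarize the following support ticket in one sentence:\n\n{ticket}"

# Placeholder input; replace with the real records to be labeled.
tickets = ["Customer reports the app crashes on login after the latest update."]

def generate_example(ticket: str) -> dict:
    # text-davinci-003 in instruct mode produces the target output.
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=INSTRUCTION.format(ticket=ticket),
        max_tokens=60,
        temperature=0.3,
    )
    return {
        "prompt": INSTRUCTION.format(ticket=ticket),
        "completion": completion.choices[0].text.strip(),
    }

# Accumulate prompt/completion pairs as JSONL for the GPT-J fine-tune.
with open("gptj_finetune.jsonl", "w") as f:
    for ticket in tickets:
        f.write(json.dumps(generate_example(ticket)) + "\n")
```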
LetMeGuessYourAlts t1_j036fki wrote
Reply to comment by jagged_little_phil in [D] - Has Open AI said what ChatGPT's architecture is? What technique is it using to "remember" previous prompts? by 029187
I did find it a little funny that ChatGPT seems to actively prevent you from telling it that it's a person.
LetMeGuessYourAlts t1_j035ugy wrote
Reply to comment by Ghostglitch07 in [D] - Has Open AI said what ChatGPT's architecture is? What technique is it using to "remember" previous prompts? by 029187
And if you carry a similar prompt over to the Playground and run it on a davinci-003 model, it will still attempt to answer your question without giving up like that, so something outside the model itself is likely producing that response and then having the model complete the error message. I was wondering whether, when confidence is low, it just defaults to an "I'm sorry..." and then lets the model produce the rest of the error.
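Purely speculative, but the kind of gate I'm imagining would look something like this sketch. It uses the completion API's logprobs as a confidence proxy; the threshold and model choice are made up, and nothing here is how OpenAI actually implements it.

```python
import openai  # pip install openai; expects OPENAI_API_KEY in the environment

CONFIDENCE_FLOOR = -1.0  # hypothetical threshold on mean token logprob

def answer_with_fallback(prompt: str) -> str:
    # Request token logprobs so we can estimate the model's confidence.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=100,
        logprobs=1,
    )
    choice = response.choices[0]
    token_logprobs = [lp for lp in choice.logprobs.token_logprobs if lp is not None]
    mean_logprob = sum(token_logprobs) / len(token_logprobs)

    if mean_logprob < CONFIDENCE_FLOOR:
        # Low confidence: seed a refusal and let the model complete it,
        # which is the behavior being hypothesized above.
        refusal = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt + "\nI'm sorry,",
            max_tokens=60,
        )
        return "I'm sorry," + refusal.choices[0].text
    return choice.text
```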
LetMeGuessYourAlts t1_iyruft9 wrote
Reply to comment by Dexamph in GPU Comparisons: RTX 6000 ADA vs A100 80GB vs 2x 4090s by TheButteryNoodle
Do you know whether there are any Nvidia GPUs at a decent price/performance point that can pool memory? Every avenue I've looked down suggests that nothing a hobbyist could afford gets you a large amount of memory without resorting to old workstation GPUs with relatively slow processors. Is the best bet a single 3090 if memory is the priority?
LetMeGuessYourAlts t1_jegmjkk wrote
Reply to comment by grumpyp2 in [News] Twitter algorithm now open source by John-The-Bomb-2
README.md
Sorry, had to 🤓