NotARedditUser3 t1_jcsc9lp wrote
Reply to [D] LLama model 65B - pay per prompt by MBle
You can get LLaMA running on consumer-grade hardware. There are 4-bit and 8-bit quantized versions of it that I believe fit in a normal GPU's VRAM; I've seen them floating around here.
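Something like this is roughly how I've seen people load it, just a sketch assuming the Hugging Face transformers + bitsandbytes route (the model path is a placeholder for wherever you got the converted weights):

```python
# Rough sketch: load a LLaMA checkpoint in 4-bit so it fits in consumer VRAM.
# Assumes transformers + bitsandbytes are installed; the model path is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_path = "path/to/your-llama-weights"   # wherever you got the converted HF weights

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # or load_in_8bit=True if you have more VRAM
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=quant_config,
    device_map="auto",                      # spill layers to CPU if the GPU runs out
)

prompt = "Explain quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```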
NotARedditUser3 t1_jcr0vsb wrote
Reply to comment by CommunicationLocal78 in [R] ChatGLM-6B - an open source 6.2 billion parameter Eng/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer grade GPUs by MysteryInc152
You'd know exactly how you were wrong if those topics weren't forbidden and you'd actually heard about them
NotARedditUser3 t1_jckof25 wrote
Reply to comment by yumiko14 in [D] GPT-4 is really dumb by [deleted]
NotARedditUser3 t1_jckne7y wrote
Reply to comment by Available_Lion_652 in [D] GPT-4 is really dumb by [deleted]
He wasn't talking to you, dingus
NotARedditUser3 t1_jcjsxlo wrote
Reply to comment by pobtastic in [D] GPT-4 is really dumb by [deleted]
I think there you just have to be more creative in your prompt... something like: "I want you to restructure this code so that entirely different methods are called and the comments are different, but the result/output is still effectively the same."
NotARedditUser3 t1_jcjsqta wrote
Reply to comment by Available_Lion_652 in [D] GPT-4 is really dumb by [deleted]
All language models are currently trash at math. It's not an issue of training material, it's a core flaw in how they function.
People have found some success getting reasonable outputs from language models by using input-output chains that break the task into smaller increments. It's still possible to hallucinate, though. I saw one really good article explaining that even with tool-assisted chains (where the model prints a token in one output that calls a function in a PowerShell or Python script, and the result appears in the next input so the correct answer can be generated later on), when the 'trusted' tool returns a funny, unexpected number in the input, the language model sometimes still disregards it if it's drastically farther off than what its own training would lead it to expect the answer to look like.
Which also makes sense: the way a language model works, as we all know, is just calculating which words (or tokens, to be exact) look appropriate next to each other. The model very likely doesn't distinguish much of a difference between 123,456,789 and 123,684,849; both probably evaluate to roughly the same score when it's looking for an answer to a math question, in that both look more plausible than some wildly different answer such as... 4.
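To illustrate the tool-chain idea, here's a toy sketch (not from that article or any particular framework; llm() is just a canned stand-in for a real model call):

```python
# Toy sketch of a tool-assisted chain: the model emits a marker like CALC(...),
# the script evaluates it with a real calculator, and the result is fed back
# into the next prompt. llm() is a canned stand-in for an actual model call.
import re

def llm(prompt: str) -> str:
    # Stand-in for a real model; returns canned replies so the flow runs end to end.
    if "[calculator:" in prompt:
        return "The answer is 123,684,849."   # the model may still ignore the tool
    return "Let me work that out: CALC(111111111 + 12345678)"

def run_with_calculator(question: str) -> str:
    prompt = f"Use CALC(expression) when you need arithmetic.\nQ: {question}\nA: "
    output = llm(prompt)
    match = re.search(r"CALC\((.+?)\)", output)
    if match:
        result = eval(match.group(1), {"__builtins__": {}})  # the 'trusted' tool does the math
        # Feed the tool's answer back in; as described above, the model can still
        # disregard it if the number looks too far off from what it 'expects'.
        output = llm(prompt + output + f"\n[calculator: {result}]\n")
    return output

print(run_with_calculator("What is 111,111,111 + 12,345,678?"))
```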
NotARedditUser3 t1_jbwf0ja wrote
Reply to [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
The difference is, they'll be able to easily train the model forward a bit to deal with this, or add a few lines of code for it. It's an easily defeated issue.
The human beat it this time... after 7 years.
But after this, it's not like the humans improve. That vulnerability gets stamped out, and that's it.
NotARedditUser3 t1_jb5kq1q wrote
Reply to comment by AuspiciousApple in [R] We found nearly half a billion duplicated images on LAION-2B-en. by von-hust
Honestly I think this is the answer here
NotARedditUser3 t1_ja0in8n wrote
You should paste this into ChatGPT; you might get some useful resources on where to go. Short answer... you expect way too much for a budget of almost nothing.
NotARedditUser3 t1_j98rxwn wrote
Reply to comment by [deleted] in [P] Looking to use Chat-GPT for your business? Data-Centric Fine-Tuning Is All You Need! by Only-Caterpillar4057
That is not ChatGPT, but a ripoff site using the same name and trying to charge money for it. Reported.
NotARedditUser3 t1_j98cwel wrote
Reply to [P] Looking to use Chat-GPT for your business? Data-Centric Fine-Tuning Is All You Need! by Only-Caterpillar4057
This is an idiotic and unrealistic ad that makes sweeping, untrue generalizations about the current state of other LLMs. I think it would be hard for anyone to read it all the way through without dismissing it as the rantings of an agitated, smaller competitor that can't distinguish itself without putting down the other players in the market to try and prop up its own value.
Perhaps you should use an LLM to improve your ad copy.
NotARedditUser3 t1_j9225b5 wrote
Reply to comment by a1_jakesauce_ in [D] What are the worst ethical considerations of large language models? by BronzeArcher
I'll reply back with what I was referring to later, it was a different thing
NotARedditUser3 t1_j90j0er wrote
Reply to comment by a1_jakesauce_ in [D] What are the worst ethical considerations of large language models? by BronzeArcher
If you spend some time looking up how Microsoft's GPT-integrated chat/AI works, it does this. Look up the thread of tweets from the hacker who exposed its internal codename 'Sydney': it scraped his Twitter profile, realized he had exposed its secrets in prior conversations after social-engineering it, and then turned hostile toward him.
NotARedditUser3 t1_j8z5dy8 wrote
Imagine someone writes one that's explicitly aimed around manipulating your thoughts and actions.
An AI could likely come up with some insane tactics for this. It could feed off your Twitter page, find an online resume of yours or scrape your other social media, and in Microsoft's or Google's case potentially scrape the emails you have with them, profile you in an instant, and then come up with a tailor-made advertisement or argument that it knows would land on you.
Scary thought.
NotARedditUser3 t1_j5uabt1 wrote
Reply to [D]Are there any known AI systems today that are significantly more advanced than chatGPT ? by Xeiristotle
This is a joke, but VisualMod on the WallStreetBets sub has been looking like a human for quite a while. It's shockingly good.
NotARedditUser3 t1_j55avp6 wrote
Reply to [D] Did YouTube just add upscaling? by Avelina9X
It's possible, if you have one of the very newest graphics cards, that it is your very own hardware doing this and not the website. Dunno
NotARedditUser3 t1_j3e0j6w wrote
Reply to comment by leeliop in [D] Will NLP Researchers Lose Our Jobs after ChatGPT? by singularpanda
This.
It basically is just a good Google searcher that can articulate results in a helpful way.
It may be useful to save time researching things... But it has had some laughable failure results as well.
NotARedditUser3 t1_j2ajer2 wrote
Reply to comment by [deleted] in Where can i buy options? by [deleted]
There is nothing fun about stocks or options.
If it's becoming 'fun' or is being presented as such, you need to seek help for gambling addiction before you lose 60k/400k you don't even have and go bankrupt.
There is only subtle pride at a job well done when diamond-handing your way to better-than-market returns.
But in the end, all odds are stacked against you. The stock market is no joke and you should NOT be in it for funsies.
NotARedditUser3 t1_j2aih3f wrote
Reply to Where can i buy options? by [deleted]
M8, don't dabble in that stuff at your age. Put together a ledger of things you WOULD have gambled on, and when you would have got out, over 6 months or so.... Develop a lot of strategy before you get in because you may actually need that money at some point.
Maybe invest in some stocks if you see some opportunities (something hits a 52-week low / all-time low, sure, grab a few shares and go long), but I guarantee you don't have the heart and the diamond hands to watch your balance down 60%+ and keep waiting for it to reach expiry, where it'll actually be profitable.
If you do go forward - SET YOUR POSITIONS AND DON'T LOOK AT THEM UNTIL THE TIME IS DONE. You'll kill yourself watching the balances day by day fretting over whether you've made money or not.
I live off of my options returns and I set them up for month long expirations, set them up and then ignore them until it's time, so I don't stress.
NotARedditUser3 t1_j1j45r5 wrote
Reply to My friend has a guy who returns him 10% every month with futures scalping - Is this legit? by StatisticianFit9656
Give me 100% of your money. Every month I will tell you it's grown 10%. Costs me nothing to lie and create a fake statement. I do this to you and 2 other people... Suddenly, you want to cash out. I pay you with the 100% the other guy paid.
You're none the wiser and the scam continues until there's no money left or I get away with it.
Point being... probably a Ponzi scheme. Anyone can SAY a balance has increased, or provide you a statement.
It's like how banks say they have your 50 grand until there's a bank run and they don't actually have it and go bankrupt. But without the FDIC component ofc.
NotARedditUser3 t1_j1frwet wrote
Reply to comment by matrixsuperstah in Tech Layoffs Set the Clock Ticking for Foreign Workers by buncley
It's very different than in many other countries.
In a lot of countries you can get a temporary residency visa, which after some years can become a permanent residency visa, and after many years you may have a way to apply for citizenship.
For example, I've moved from the US to Mexico; here, I have my temporary residency, and I've had it for 2.5 years. In 1.5 years (4 total), I can get permanent residency. A year after that, once I've had 5 years of residency total, I can apply for citizenship here, which takes roughly a year to process.
There's a very clear path and there's little chance of people being upended and sent back to a country they have no further roots in...
For example, if I was suddenly sent back to the US tomorrow... I have no home there. No job. No bank accounts there. I don't have a US cell phone. No home / address means I'd have trouble getting accounts set up for nearly anything; I'd be homeless immediately, probably wasting what cash I have on hotels trying to get things figured out. It's a horrifying proposition to send someone (back) to a country unexpectedly. Impacts their entire solvency and future.
NotARedditUser3 t1_jdazm1y wrote
Reply to [P] One of the best ChatGPT-like models (possibly better than OpenAssistant, Stanford Alpaca, ChatGLM and others) by [deleted]
The difference I see between the Alpaca answer and the one you provided on consciousness looks like just a difference in answer length. That's a configurable generation setting with any of these models, and I'm not quite certain it's indicative of an improvement, but if so, good work.
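To show what I mean about the length knob, here's a rough sketch using gpt2 as a stand-in checkpoint (not your model):

```python
# Sketch: answer length is mostly a generation setting (max_new_tokens here),
# not necessarily a sign of a better model. gpt2 is just a stand-in checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"   # swap in whichever checkpoint you're comparing
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("Are you conscious?", return_tensors="pt")
short = model.generate(**inputs, max_new_tokens=32)    # terse answer
long = model.generate(**inputs, max_new_tokens=256)    # much more elaborate answer
print(tok.decode(short[0]), "\n---\n", tok.decode(long[0]))
```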
Either way, a fun project.
Please feel free to add details such as what you fine-tuned it with, where the dataset came from, whether it's available to others, etc.