gettheflyoffmycock
gettheflyoffmycock t1_j1og78c wrote
Reply to Machine learning model reliably predicts risk of opioid use disorder for individual patients, that could aid in prevention by marketrent
As a machine learning engineer, this is the WORST type of problem to apply an ML model to. We are taught in school how easy it is to build a model that "detects a terrorist on a plane": just predict "not a terrorist" every time and you have a 99.9% accurate model, because the classes are that imbalanced.
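A quick sketch of that accuracy trap, with made-up numbers for illustration:

```python
# Toy illustration of the accuracy trap on heavily imbalanced data.
# Assume 1,000,000 passengers and 10 actual positives (numbers are made up).
n_total = 1_000_000
n_positive = 10

# A "model" that always predicts the majority class ("not a terrorist")
correct = n_total - n_positive      # every negative is classified correctly
accuracy = correct / n_total        # ~99.999%
recall = 0 / n_positive             # it never flags a single real case

print(f"accuracy: {accuracy:.4%}")  # looks great on paper
print(f"recall:   {recall:.0%}")    # useless in practice
```

Accuracy looks near-perfect while the model catches nothing, which is exactly the trap with rare outcomes like opioid use disorder.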
There are already countless records of ML models applied to problems like the one in this article that end up racist, wrong, and damaging people's lives. Shame on these ML engineers.
gettheflyoffmycock t1_j1chv31 wrote
Reply to comment by maxToTheJ in [D] When chatGPT stops being free: Run SOTA LLM in cloud by _underlines_
Yeah, it's funny how many people have been advertising their new "ChatGPT application" across all the machine learning subreddits, which is ironic because ChatGPT doesn't even have a public API yet.
Kinda funny: AI is ending up like drop shipping, the art of advertising cheap AliExpress products as if they're something better, upcharging people 500 or 1000%, and then just having the AliExpress order mailed to the buyer's house. People are doing the same thing with AI now. Call it whatever you want, put a lightweight model like OpenAI's Davinci on a free AWS instance, and brand it as "ChatGPT." The whole business model is "if Davinci charges you four cents per API call, just charge the user eight cents." What will they know?
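A rough sketch of what that wrapper "product" looks like, assuming the legacy openai<1.0 Completion API; the prices and markup here are illustrative, not anyone's actual numbers:

```python
import openai

openai.api_key = "sk-..."  # placeholder

DAVINCI_PRICE_PER_1K_TOKENS = 0.02  # roughly the list price at the time
MARKUP = 2.0                        # resell at double the raw token cost

def generate(prompt: str) -> dict:
    # Forward the prompt straight to Davinci: no added model, no fine-tuning
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
    )
    tokens = resp["usage"]["total_tokens"]
    cost = tokens / 1000 * DAVINCI_PRICE_PER_1K_TOKENS
    return {
        "text": resp["choices"][0]["text"],
        "our_cost": round(cost, 5),
        "user_price": round(cost * MARKUP, 5),  # the entire "product" is the markup
    }
```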
gettheflyoffmycock t1_j1bzvsv wrote
I've had to deploy a lot of deep learning models, and there will be no simple, slap-on deployment for something like this. It also won't be cheaper. I'm not sure whether it strictly requires a GPU, but on AWS there's a one-hour billing minimum unless you commit to a more expensive contract, so a single API request can end up costing you the full $3 minimum, or up to $20 depending on which instance you're using.
Then there's the cold-start time: if you shut the instance down when it's not in use, it takes at least 5 to 10 minutes for a model of this size to load and start serving again. The only way this is cost-effective is if it can run on CPU only, where it could fit on an extremely cheap or free AWS tier; but my guess is that models like this can't run fast enough on CPU alone to make it worth it.
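Back-of-envelope math for that claim; the hourly rates and request volume below are assumptions, not quotes:

```python
# Rough cost model for GPU-backed inference with an hourly billing minimum.
HOURLY_RATE = 3.00  # cheap single-GPU instance; bigger ones run $20+/hr

def warm_cost_per_request(requests_per_hour: int) -> float:
    """$/request if the instance stays up and cost is amortized over traffic."""
    return HOURLY_RATE / max(requests_per_hour, 1)

def cold_cost_per_request() -> float:
    """$/request if you spin up per request under a one-hour billing minimum."""
    return HOURLY_RATE  # one billed hour per request, plus the 5-10 min wait

print(f"warm, 100 req/hr:   ${warm_cost_per_request(100):.3f}/request")
print(f"cold start per req: ${cold_cost_per_request():.2f}/request")
```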
Can anyone chime in on whether state-of-the-art text generation models like this can run on CPU only?
gettheflyoffmycock t1_j9rqd5w wrote
Reply to comment by MinaKovacs in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Lol, downvotes. This subreddit has been completely overrun by non-engineers. I guarantee no one here has ever custom-trained a model and run inference with it outside of API calls. Crazy. Since ChatGPT, open-enrollment ML communities are so cringe.