MrTacobeans t1_jcagwir wrote
Reply to comment by NoScallion2450 in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
I don't know much about this side of AI, but when it's boiled down it's fancy algorithms. Can those be patented?
Maybe that's the driving force behind the "open" nature of AI: an algorithm can't be patented, but a product based on it can be. Kinda like how LLaMA has a non-commercial license, but if a community rebuilt it under a permissive license, that'd be totally kosher.
This may be why OpenAI is being hush-hush about its innovations: if they're published, someone else can copy them without the legal woes.
MrTacobeans t1_jbu7nw4 wrote
Reply to comment by science-raven in [D] Development challenges of an autonomous gardening robot using object detection and mapping. by science-raven
Regardless of whether this is even possible, this robot looks more like $15-20k even at economies of scale. Advanced lawn Roombas are already in the $2-5k range; a fully autonomous lawn/garden maintenance bot will never be below $5k... unless it's just running around spraying water on stuff. The arm in your video alone would likely cost at least $2-3k once R&D is factored in.
MrTacobeans t1_jbs7jtu wrote
Reply to [D] Development challenges of an autonomous gardening robot using object detection and mapping. by science-raven
I think the biggest issue here is that a large part of a bot like this would be traditional programming. The AI hardware needed for these tasks in the garden would likely eclipse the bot's $3k price goal for the foreseeable future.
MrTacobeans t1_j9yedqd wrote
Reply to comment by kaityl3 in People lack imagination and it’s really bothering me by thecoffeejesus
This is exactly the kind of AI that shouldn't even be scary. It's taking monotonous labor and doing the majority of it. If Anthem holds to any kind of decency, its employees can focus on other pursuits within the company while an AI crunches the nitty-gritty bits.
If that AI axes 70% of the workforce without a proper path to new roles for each affected employee, that's criminal. But also a possible outcome, unfortunately :/
MrTacobeans t1_j9y6dha wrote
Reply to Is it prime time to start an AI company? by Scarlet_pot2
Oh my sweet summer child... I miss those days
MrTacobeans t1_j9xi9n6 wrote
Reply to comment by Lesterpaintstheworld in Building my own proto-AGI: Update on my progress by Lesterpaintstheworld
If you are paying for the API, something like RWKV hosted on a GPU cloud provider might be an alternative. The model currently tops out at 14B parameters but technically has "unlimited context", which in theory is probably not actually unlimited, but for the use case you described it might be worth looking into. A rough sketch of what self-hosting could look like is below.
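For illustration only, here's a minimal sketch of loading an RWKV checkpoint with Hugging Face `transformers`. The repo id, dtype, and generation settings are my assumptions, not a tested recipe:

```python
# Sketch: running an RWKV checkpoint on a rented GPU instead of a paid API.
# Assumes the checkpoint "RWKV/rwkv-4-14b-pile" is available on the Hub and
# that the instance has enough VRAM for a 14B model in half precision.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "RWKV/rwkv-4-14b-pile"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on one GPU
    device_map="auto",          # requires the accelerate package
)

prompt = "Summarize the conversation so far:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```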
MrTacobeans t1_j9w1obd wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
I don't discount the ramifications of a fully functional AGI, but I don't really see even the first few versions of AGI being "world ending". Not a single SOTA model currently has a persistent presence without input, and that gives us a very large safety gap for now. Sure, if an "all you need is X" research paper comes along and makes transformers defunct, fine. But structurally, transformers are still very much input/output algorithms.
That gives us at least a decent safety gap for the time being. I'm sure we'll figure it out when this next gen of narrow/expert AI starts being released in the next year or so. For now, an AGI eliminating humanity is still very much science fiction, no matter how convincing current or near-future models are.
MrTacobeans t1_j9vzw1p wrote
Why are you building this based on a closed API?
You could eventually find something in this adventure, and OpenAI could be like "woah, let's not go there" and block/ruin the work you've done. There are multiple open-source models that can be worked into the kind of flow you are creating.
On a side note, though, leveraging GPT-3 to create even a proto-AGI seems incredibly unlikely. If it were possible, it would likely be in the news already. You mentioned the memory limit yourself; that's a big chunk of the issue with current AI. You can't keep a "sense of mind" going when half of it gets deleted every few prompts. A toy sketch of that sliding-window problem is below.
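To make the memory limit concrete, here is a toy sketch in plain Python (the token budget and all names are invented for illustration) of the sliding-window truncation a fixed context forces on a chat loop:

```python
# Toy illustration of a fixed context window: once the conversation
# exceeds the budget, the oldest turns are silently dropped, so the
# model's "sense of mind" is partially deleted every few prompts.
CONTEXT_BUDGET = 2048  # tokens the model can see at once (illustrative)

def rough_token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def trim_history(history: list[str]) -> list[str]:
    """Keep only the most recent turns that fit in the context budget."""
    kept, used = [], 0
    for turn in reversed(history):
        used += rough_token_count(turn)
        if used > CONTEXT_BUDGET:
            break  # everything older than this turn is forgotten
        kept.append(turn)
    return list(reversed(kept))

history = [f"turn {i}: " + "words " * 100 for i in range(40)]
visible = trim_history(history)
print(f"{len(history)} turns total, but the model only sees {len(visible)}")
```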
MrTacobeans t1_j9mxp5u wrote
Reply to comment by Kiryln in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
"ohh my god. 😱"*
MrTacobeans t1_j9mjznl wrote
Reply to comment by AnakinRagnarsson66 in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
It'll toast the answers to your homework on one side and, on the other, Sydney having an existential crisis about being put in a toaster.
MrTacobeans t1_j8qi0gg wrote
Reply to comment by wastedtime32 in What will the singularity mean? Why are we persuing it? by wastedtime32
You just described a Google search. At least in the short term, AI/chatbots are just proving to be a more consumable or more entertaining way of gathering information.
It's up to each person to decide what they want to learn without the crutch of technology. Even an expert AI will never replace the need to actively learn things. Jumping way back, even written language is a technology; for thousands of years humans have been figuring out how to compress knowledge and share it easily.
MrTacobeans t1_j8p1wwt wrote
Reply to comment by wastedtime32 in What will the singularity mean? Why are we persuing it? by wastedtime32
I have sort of the same fear response to the exponential growth of AI in just the last year. We've gone from "WOAH, this AI can convincingly hold a conversation with a human and beat a world expert at games/challenges" to "holy shit, this AI chatbot can help me with my job, give therapist-level advice (even if it's wrong), and generate images at 1% of the effort a conventional artist needs". And all of these things are improving not on a quarterly basis anymore but, like, every other day, with some sort of SOTA-level model being released.
It's alarming, and it is a lot, but I think if AI doesn't just randomly decide that humans are a useless presence, we'll be fine. I see a world where even general-intelligence AI aligns as a tool, and where "bad actor" AI is combatted by competing "good actor" AI. I don't see society falling apart or a grand exodus of jobs.
I'm hoping AI turns into the next evolution of the internet, where we all just have an all-knowing intelligent assistant in our pocket. I can't wait for the day my phone pings with a notification from my assistant with useful help like "Hey, I noticed you were busy and a bill was due, so I went ahead and paid it for you! Here's some info about it:" or "I know it's been a tough day at work, but you haven't eaten. Can I order you some dinner? Here are a few options that I know are your comfort foods:".
The moment a dynamic AI model with ChatGPT-level or even beyond intelligence can run on consumer hardware, stuff like that will become a reality. AI might be scary, but trying to think about the positive effects it could have really helps me cope with the nebulous unknowns.
MrTacobeans t1_j7jwmn3 wrote
Reply to comment by el_chaquiste in What Large Language Models (LLMs) mean for the -near- future, from Search to Chatbots to personal Assistants. Some of my thoughts, predictions and hopes - and I would love to hear yours. by TFenrir
I dunno. Stability, although it seems like a well-funded machine of an organization now, beat OpenAI incredibly fast at a time when its funding was nowhere near OpenAI's level, all while producing a model that can throw strong punches against DALL-E without needing multiple industrial GPUs to inference each image.
Now Stability has DeepFloyd, a nebulous/ethereal model under lock and key atm that seems to be completely SOTA just from the base model.
I wouldn't discount the small players, especially the ones that plan on open source. People have done wild things with Stable Diffusion. The LLM I'm following right now, RWKV, is producing pretty darn impressive results at 14B parameters. Compared to ChatGPT it's just OK, but the big difference is that you need $15k+ of hardware to even inference a ChatGPT-scale model, while RWKV's base model produces coherent results on consumer hardware, and it hasn't even been tuned yet with RL training or Q&A data.
MrTacobeans t1_j7ekqqr wrote
Reply to comment by supersoldierboy94 in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
But why is that bad? If the researchers wanted moola, they should have started a business or published/run the models they created from their own research. If you don't want to get stepped on by someone else talented enough to piece it together, don't release your ideas.
Don't get butthurt when a publicity-driven or capitalist company implements your idea and turns it into a product.
MrTacobeans t1_j0wgheb wrote
Reply to Everything an average person should know about Web 3 at this time, and how this will be needed for the metaverse by crua9
I really don't see why AR/VR is so intertwined with blockchain shenanigans. If a web3 develops some day, I don't see a future where blockchain is the enabling software for secure, meaningful interactions. This can already be done almost entirely without crypto; look at any MMO. Almost every single one of those games survived decades without a shiny newly minted blockchain.
Even in a decentralized network there needs to be a centralized player (or players) to guarantee the network doesn't fail. What's the point of an extremely secure network if there is no real guarantee the data will still be secure and present in 10 years?
Or say we figured out all the technicalities and you somehow lost every key/recovery method. That means part of the chain would be functionally locked away forever, barring a significant upheaval or a chain that can break its own rules, with all the security implications that brings.
To me, if a web3 develops, it'll likely be an extremely efficient hub of data with some sort of strong AI influence, not something based solely on the security of crypto.
MrTacobeans t1_izpt1y6 wrote
I just tried this on character.ai, and although the answer was long-winded (character.ai prefers sending bricks of text), it handled the question perfectly, both on a copy-paste from above and on a more natural-language version I used to try to trip the bot up. In both cases the bot passed with flying colors.
MrTacobeans t1_it6lf1h wrote
Reply to comment by Dreason8 in A YouTube large language model for a scant $35 million. by Angry_Grandpa_
I am once again asking you to smash that like button
MrTacobeans t1_irtydxz wrote
Reply to comment by goldygnome in Why does everyone assume that AI will be conscious? by Rumianti6
Even dogs/cats, which aren't on the upper end of animal intelligence, display consciousness on a scale that completely beats out their wild counterparts. Both my dog and my cat show signs that they are thinking, not just running on some autopilot, instinct-based process. When danger/survival is removed from the equation, I wouldn't doubt that most animals start to display active consciousness.
MrTacobeans t1_irb5lx3 wrote
Reply to We are in the midst of the biggest technological revolution in history and people have no idea by DriftingKing
I've shared some AI stuff on Facebook, and those are some of my least liked/interacted-with posts. People have no idea: the open-source community is putting a nuclear-bomb level of effort into AI.
When Stable Diffusion was released, it wasn't even a week before people got it running on cards with 40% less VRAM than the original requirement, and in some repos that number is now down to 2GB. The sketch below shows the kind of memory-saving tricks involved.
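As a hedged illustration (not the exact optimizations those repos used), here's what the general idea looks like with the `diffusers` library: shrink the weights with half precision and compute attention in chunks. The checkpoint id and prompt are just examples:

```python
# Sketch: common VRAM-saving tricks for Stable Diffusion via diffusers.
# Half-precision weights plus attention slicing trade a little speed
# for a much smaller memory footprint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint
    torch_dtype=torch.float16,         # fp16 halves weight memory
)
pipe.enable_attention_slicing()        # compute attention in chunks
pipe = pipe.to("cuda")

image = pipe("a garden robot tending tomatoes",
             num_inference_steps=25).images[0]
image.save("garden_robot.png")
```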
When models like adept.ai get released, I wouldn't be surprised if open source beats academia/corporations to the first general AI. It already seems like Google and others are making headway on the issues that are holding back a general AI (retaining memory, model size, efficiency, etc.).
MrTacobeans t1_jcbj1kw wrote
Reply to [D] Is there an expectation that epochs/learning rates should be kept the same between benchmark experiments? by TheWittyScreenName
This is coming from a total layman's point of view, albeit one that follows AI stuff pretty closely, but anyway...
Wouldn't running a tighter learning rate and a longer epoch count reduce many of the benefits of a NN outside of a synthetic benchmark?
From what I know, a NN can be loosely trained and helpfully "hallucinate" the gaps it doesn't know while still being useful. When the network is constricted, it might be extremely accurate and smaller than the loose model, but the intrinsically useful/good hallucinations get lost, and things outside the benchmark will hallucinate worse than in the loose model.
I give props to AI engineers; this all seems like an incredibly delicate balance, and it's probably why massive amounts of data are needed to avoid either failure mode.
I feel like there's no need to enforce a fixed epoch count or learning-rate schedule in benchmarks, because models usually converge to their best versions at different points regardless of the data used, and if the authors are writing a paper, they likely tweaked something worth training and writing about beyond beating a benchmark. The toy sketch below shows what I mean about different configs peaking at different epochs.
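As a toy illustration (synthetic data, invented settings, plain PyTorch), two learning rates reach their best validation loss at different epochs, which is why pinning a fixed epoch count can penalize one config unfairly:

```python
# Toy sketch: two models with different learning rates converge to their
# best validation loss at different epochs, so a benchmark that pins a
# fixed epoch count can unfairly penalize one configuration.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)
y = X @ torch.randn(10, 1) + 0.1 * torch.randn(512, 1)
X_train, y_train, X_val, y_val = X[:400], y[:400], X[400:], y[400:]

def best_epoch(lr: float, max_epochs: int = 200, patience: int = 10) -> int:
    """Train a linear model and return the epoch with the best val loss."""
    model = nn.Linear(10, 1)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    best_loss, best_at, stale = float("inf"), 0, 0
    for epoch in range(max_epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X_train), y_train)
        loss.backward()
        opt.step()
        with torch.no_grad():
            val = nn.functional.mse_loss(model(X_val), y_val).item()
        if val < best_loss:
            best_loss, best_at, stale = val, epoch, 0
        else:
            stale += 1
            if stale >= patience:  # early stop: no recent improvement
                break
    return best_at

for lr in (0.1, 0.01):
    print(f"lr={lr}: best validation loss reached at epoch {best_epoch(lr)}")
```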