Submitted by AylaDoesntLikeYou t3_11c5n1g in singularity
FaceDeer t1_ja23lku wrote
Reply to comment by Ok-Ability-OP in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
I'd be happy with it just running on my home computer's GPU, I could use my phone as a dumb terminal to talk with it.
This is amazing. I keep telling myself I shouldn't underestimate AI's breakneck development pace, and I keep being surprised anyway.
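For anyone wanting to try that phone-as-dumb-terminal idea, here's a minimal sketch in Python. It assumes the GPU box runs some local inference server exposing an OpenAI-style `/v1/chat/completions` endpoint (llama.cpp's server does, for example); the LAN address and port are placeholders, not anything from this thread.

```python
# Minimal "dumb terminal" client: the phone (or any device on the LAN)
# just sends text to the desktop GPU box and prints the reply.
# Assumes a local inference server with an OpenAI-style
# /v1/chat/completions endpoint; adjust HOST/PORT and the payload
# to whatever server you actually run.
import requests

HOST = "192.168.1.50"  # hypothetical LAN address of the GPU machine
PORT = 8080

def ask(prompt: str) -> str:
    resp = requests.post(
        f"http://{HOST}:{PORT}/v1/chat/completions",
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    while True:
        line = input("> ")
        if not line:
            break
        print(ask(line))
```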
Z1BattleBoy21 t1_ja2jtse wrote
I think LLMs running on a phone would be really interesting for assistants; AFAIK Siri is required to run on-device only.
duffmanhb t1_ja2nzfa wrote
Siri was exclusively cloud-based for the longest time. They've only brought basic functions over to local hardware.
Z1BattleBoy21 t1_ja2qcli wrote
I did some research and you're right. I made my claim based on some Reddit threads saying that Apple won't bother with LLMs as long as they can't be processed on local hardware, for privacy reasons. I retract the "required" part of my post, but I still believe they wouldn't go for it due to [1] [[2]](https://www.theverge.com/2021/6/7/22522993/apple-siri-on-device-speech-recognition-no-internet-wwdc)
NoidoDev t1_ja5sjcu wrote
>I'd be happy with it just running on my home computer's GPU
This, but as a separate server or rig for security reasons. As an external brain for your robowaifus and maybe other devices like housekeeping robots at home.
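A rough sketch of that separate-rig idea, again in Python: one machine on the home network serving several trusted devices, with an IP allowlist as a crude security layer. The model, addresses, and endpoint here are all placeholder assumptions, not a hardened design.

```python
# Sketch of the "external brain" setup: one GPU box on the home LAN
# serving several client devices (robots, terminals), restricted to
# an explicit allowlist of trusted addresses.
from flask import Flask, request, jsonify, abort
from transformers import pipeline

ALLOWED_CLIENTS = {"192.168.1.60", "192.168.1.61"}  # hypothetical robot IPs
generator = pipeline("text-generation", model="gpt2")  # stand-in model

app = Flask(__name__)

@app.before_request
def check_client():
    # Reject anything that isn't an explicitly trusted home device.
    if request.remote_addr not in ALLOWED_CLIENTS:
        abort(403)

@app.post("/generate")
def generate():
    prompt = request.get_json(force=True).get("prompt", "")
    out = generator(prompt, max_new_tokens=100)[0]["generated_text"]
    return jsonify({"text": out})

if __name__ == "__main__":
    # Bind to the LAN interface only, never a public address.
    app.run(host="192.168.1.50", port=8080)
```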