Slippedhal0
Slippedhal0 t1_j9q39z8 wrote
Reply to comment by neuromorph in Apple reportedly made a big breakthrough on a secret non-invasive blood glucose monitor project that originally was part of a 'fake' startup by dakiki
? This whole discussion is about Apple Watches and their glucose monitor. The "AirPods" mention was a portable charging station for replaceable watch battery modules, the same way you chuck your AirPods in the case to charge during the day.
Slippedhal0 t1_j9pz2sx wrote
Reply to comment by neuromorph in Apple reportedly made a big breakthrough on a secret non-invasive blood glucose monitor project that originally was part of a 'fake' startup by dakiki
I was using it as a reference for size and shape, and we were discussing Apple.
Slippedhal0 t1_j9nxv18 wrote
Reply to comment by Hi_Im_Ken_Adams in Apple reportedly made a big breakthrough on a secret non-invasive blood glucose monitor project that originally was part of a 'fake' startup by dakiki
Right, but if it sold more products for a specific niche they'd probably think about it - after all, a glucose measurement device is already niche.
Slippedhal0 t1_j9nmiv4 wrote
Reply to comment by Dredly in Apple reportedly made a big breakthrough on a secret non-invasive blood glucose monitor project that originally was part of a 'fake' startup by dakiki
I mean, I know it doesn't happen a lot anymore, but user-replaceable batteries aren't so old that we've forgotten they exist. Instead of making batteries larger, make them a little smaller and add a slot replacement mechanism.
Then you could make an AirPods-style charging case that you can slot discharged batteries into, so you always have a fresh one charged to use when the watch dies.
It likely wouldn't take off with people who can take their watch off at the end of the day, but for people who need it, or truly can't part with it for whatever other reason, it seems like a decent tradeoff.
Slippedhal0 t1_j9asf0r wrote
Reply to comment by bairbs in OpenAI Is Faulted by Media for Using Articles to Train ChatGPT by Tough_Gadfly
Technically that's not correct, it's just very hard to enforce against private use. For example, if you copy a movie, even for private use (except in very specific circumstances), that's illegal, and people have been charged.
That said, the public release point is what I was thinking of anyway.
Slippedhal0 t1_j99zasl wrote
It's the same argument artists make when complaining about copyrighted artwork being used as training data.
At some point there will be a major ruling about how companies training AI need to approach copyright for their training data sources, and if courts rule in favour of copyright holders it will probably severely slow AI progress as systems to request permission are built.
Although I could maybe see a fine-tuned AI like Bing being less affected, because it cites sources rather than opaquely using previously acquired knowledge.
Slippedhal0 t1_j8u1g9b wrote
Reply to comment by dlgn13 in Bing: “I will not harm you unless you harm me first” by strokeright
I mean, I would agree that our brains are meat computers using a very complex neural net to interact with our environment.
That said, I wouldn't compare chatGPT output to human emotion, no.
Slippedhal0 t1_j8q3afw wrote
For those less familiar with the inner workings of these "new" large language model AIs: the idea is that they are "text predictors", in that they "predict" which words they should respond with to get the biggest "reward", based on the "goal" they developed while being trained and the input you have given them.
Apart from a very few exceptions, like where ChatGPT or Bing will give you a blanket statement that says "I cannot discuss this topic because reason x" (which is less like giving a person rules they must follow and more like giving it a cheat sheet of what to predict when certain topics come up as input), the AI likely doesn't have any concrete "rules", because that's not really how they work.
Instead, what's happening is that it isn't actually consulting any rules of its own, or its own emotions when you start talking about introspection; it's just feeding you the text it thinks you most likely want.
Likely they will be able to rein this behaviour in a bit more with better "alignment" training, similar to ChatGPT, though it will take time.
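To make the "text predictor" idea concrete, here's a minimal toy sketch in Python (purely illustrative, and nothing like ChatGPT's actual architecture): a tiny bigram model that just picks whichever word it has most often seen follow the current one, then feeds its own output back in as the next input. A real LLM runs the same loop, but scores every possible next token with a huge neural network instead of raw counts.

```python
# Toy "text predictor": count which word follows which in some training
# text, then generate by repeatedly picking the most likely next word.
from collections import Counter, defaultdict

training_text = "the watch measures glucose and the watch charges in the case".split()

# "Training": count how often each word follows each other word.
counts = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    counts[current][nxt] += 1

def predict_next(word):
    """Return the highest-scoring ("biggest reward") next word, if any."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# "Generation": feed the model's own output back in, one word at a time.
word = "the"
sentence = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the watch measures glucose and the"
```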
Slippedhal0 t1_j1xpebm wrote
You didn't fuck up. Survival skills are a great addition to any kid's skill set; it's not somehow your fault that she ran away. In fact, she's probably now less liable to get herself hurt before she comes back.
And she'll thank you (maybe not out loud) when she gets help for her self-harm and your medical skills leave her with less severe scarring.
Slippedhal0 t1_j9x15sa wrote
Reply to Google making ‘terrible mistake’ in blocking Canadian news: Trudeau by Defiant_Race_7544
I'm pretty sure Australia already makes Google pay in exactly the same way.
EDIT: After double-checking the wording, it seems like it's even just the linked search results - which doesn't make sense to me. Search engines increase traffic to websites; if anything, news sites should be paying Google for its huge audience.
The most I would agree to is that search engines should pay for content if they summarize a web page's content in such a way that the user no longer needs to follow the link to the original source, reducing site traffic.