Slippedhal0 t1_j9x15sa wrote

I'm pretty sure Australia already makes google pay in the exact same way.

EDIT: After double-checking the wording, it seems like it's even just the linked search results, which doesn't make sense to me. Search engines increase traffic to websites; if anything, news sites should be paying Google for its huge audience.

The most I would agree to is that search engines should pay for content if they summarize a web page's content in such a way that the user no longer needs to follow the link to the original source, reducing site traffic.

30

Slippedhal0 t1_j9q39z8 wrote

? This whole discussion is about Apple Watches and their glucose monitor. The "airpods" mention was about a portable charging station for replaceable watch battery modules, the same way you chuck your AirPods in the case to charge during the day.

1

Slippedhal0 t1_j9nmiv4 wrote

I mean, I know it doesn't happen a lot anymore, but user-replaceable batteries aren't so old that we've forgotten they exist. Instead of making batteries larger, make them a little smaller and add a slot-replacement mechanism.

Then you could make an AirPods-style charging case that you can slot discharged batteries into, so you always have a fresh one charged and ready when the watch dies.

It likely wouldn't take off with people who can take their watch off at the end of the day, but for people who need it, or truly can't part with it for whatever other reason, it seems like a decent tradeoff.

4

Slippedhal0 t1_j99zasl wrote

It's the same argument artists make when complaining about copyrighted artwork being used as training data.

At some point there will be a major ruling about how companies training AI need to approach copyright for their training data sources, and if they rule in favour of copyright holders it will probably severely slow AI progress as systems to request permission are built.

Although I could maybe see a fine-tuned AI like Bing being less affected, because it cites sources rather than opaquely using previously acquired knowledge.

1

Slippedhal0 t1_j8q3afw wrote

For those that have less info about the inner workings of these "new" large language model AIs: the idea is that they are "text predictors", in that they "predict" what words they should respond with to get the biggest "reward", based on the "goal" they developed while being trained and the input you have given them.

Apart from very few exceptions, like where ChatGPT or Bing will give you a blanket statement that says "I cannot discuss this topic because of reason x" (which is less like giving a person rules they must follow, and more like giving the model a cheat sheet of what to predict when certain topics come up as input), the AI likely doesn't have any concrete "rules", because that's not really how these models work.

Instead, what's happening is that it's not actually considering any rules of its own, or its own emotions when you start talking about introspection; it's just feeding you the text it thinks you most likely want.
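To make the "text predictor" idea concrete, here's a toy sketch. Real LLMs use huge neural networks over subword tokens, not word-pair counts, and "reward" comes from training objectives rather than a lookup table; this just illustrates the core idea of picking the highest-scoring next word.

```python
# Toy "text predictor": learn which word most often follows each word
# in some training text, then always predict the top-scoring follower.
# This is a deliberately simplified stand-in for what an LLM does.
from collections import defaultdict

def train(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most likely next word -- the 'biggest reward' choice."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

model = train("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # "cat" follows "the" most often
```

No rules, no understanding: the model just emits whatever scored highest given what it saw during training, which is why it can sound confident while having no "opinion" at all.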

Likely they will be able to rein this behaviour in a bit more with better "alignment" training, similar to chatGPT, though it will take time.

12

Slippedhal0 t1_j1xpebm wrote

You didn't fuck up. Survival skills are a great addition to any kid's skill set, and it's not somehow your fault that she ran away. In fact, she's probably now less liable to get herself hurt before she comes back.

And she'll thank you (maybe not out loud) when she gets help for her self-harm and your medical skills leave her with less severe scarring.

8