Stibley_Kleeblunch

Stibley_Kleeblunch t1_iu0boo6 wrote

There's plenty I don't know about the topic, and I'll happily admit it. I've never had a Facebook account, and never plan to. But at some point, doesn't "showing people a variety of ads" end up being at odds with "selling highly-targeted ad space?" I wonder if the business model accounts for that by, for instance, not charging the client to display their ads to a group that is unlikely to respond well to them.

0

Stibley_Kleeblunch t1_itzsh2v wrote

Fortunately for us, the most capable systems are focused on relatively harmless things like advertising... For now. But there are major advances every day in fields like medtech and finance. We should be very careful about how we interact with these systems, and about how much trust we're willing to place in black-box systems.

0

Stibley_Kleeblunch t1_itzqwp7 wrote

That's where things get really interesting and all the fun questions start to pop up. Even if human influence on the system was minimal and occurred back in the system's infancy, just how impactful is that influence today? What were those inputs? If we don't understand what's going on under the hood, can we really trust that the system is still doing a good job, or is its current success merely perceived, riding on a reputation gained from past successes? And at what point does such a system transition from identifying patterns to creating them?

Is it possible for a neural network to lose its mind somewhere along the way? Google Flu worked fantastically, right up until it didn't, and nobody understood what went wrong.

Then the moral questions -- should our values impact how these things work? And, if so, to what degree? This article essentially implies that the system has re-discovered phrenology, which we decided long ago was a flawed theory, unpalatable in no small part due to its roots in racism. If AI comes up with the same theory, does that make it an acceptable theory? We're still very early in our exploration of our relationship with such systems, and there's potential danger in how we interact with them, with respect to both how we teach them and how we learn from them.

Really, though, my issue right now is with how some people are interpreting this information. "Oh great, they're advertising to pedos and racists" is certainly not the right takeaway here, yet that exact sentiment seems to be what some people are taking away, based on some of the comments that popped up in here last night. I don't believe that "the system has been training for a long time, so we should trust it" is an especially useful conclusion either.

−1

Stibley_Kleeblunch t1_ity1cna wrote

I'm seeing a bunch of ill-conceived takeaways in here, such as "old men are gross," all based on the assumption that Facebook is an authoritative source on demography.

This information says nothing about your middle-aged neighbor. It does, however, say a lot about what Facebook THINKS about older men. Nobody turns 55, then suddenly thinks, "hey, I like young women now!" Do some older men like younger women? Sure. Do most of them? Perhaps, I have no idea.

While we'd all like to think that Facebook's advertising is based on good data and that its conclusions are sound (despite the company having fallen completely out of public trust these last few years), there's always a risk that the algorithm actually creates a stereotype rather than just operating on what it's given. The more a certain type of ad is shown to a demographic, the more likely a positive feedback loop is to emerge. If all you see is young white women, you never even get the chance to contradict the algorithm's assumptions about you. It becomes a more binary decision at that point: "responds to ads" or "doesn't respond to ads." And if you're being targeted as a 55-year-old male, then what else is being left out? The man could be gay, or have a fetish for old women, or just really like trees.
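That feedback loop is easy to demonstrate. Here's a toy simulation (every name and number is invented; real ad systems are vastly more complex): two ad categories have identical true response rates, but a purely greedy targeter that always shows the ad with the best observed click-through rate lets one early lucky click lock in all future impressions.

```python
import random

random.seed(0)

# Toy model of a self-reinforcing ad-targeting loop (all values invented).
# Both categories have the SAME true click rate, so neither is actually
# a better fit for this user.
TRUE_CLICK_RATE = {"young_women_ads": 0.05, "trees_ads": 0.05}

clicks = {"young_women_ads": 1, "trees_ads": 0}       # one early lucky click
impressions = {"young_women_ads": 1, "trees_ads": 1}

def score(ad):
    """Naive greedy score: observed click-through rate so far."""
    return clicks[ad] / impressions[ad]

for _ in range(10_000):
    # Always show the ad with the best observed rate -- no exploration,
    # so the losing category never gets a chance to contradict the score.
    ad = max(TRUE_CLICK_RATE, key=score)
    impressions[ad] += 1
    if random.random() < TRUE_CLICK_RATE[ad]:
        clicks[ad] += 1

# The "lucky" category hoards essentially all 10,000 impressions.
print(impressions)
```

The fix in real systems is deliberate exploration (e.g. occasionally showing a random ad), but without it the algorithm's early guess about you becomes self-fulfilling: it only ever collects data that can confirm the guess.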

Point is, it's not reasonable to make assumptions about whole demographics of people based on what Facebook tells you they like. Besides, I thought we already learned that Facebook has a penchant for misinformation -- what happened to that?

not an old man yet, but not particularly fond of seeing entire age, race, or gender groups being maligned in here

15