Comments
koebelin t1_j57jnq2 wrote
There are private groups on Facebook with a more adult level of discussion than most subreddits. The big public and popular private Facebook groups are just garbage, but I love some of the more focused, non-sensationalist groups.
PooperJackson t1_j573nsc wrote
>It's impossible to have a nice discussion on anything in the realm of conspiracy.
As opposed to the rest of the internet where nice discussions are often the norm.
Groperofeuropa t1_j58lnwt wrote
That is a false equivalence. The assertion is that this is a way in which Facebook's platform design makes it worse, which is the topic of the post.
BigMax t1_j55w1cb wrote
Wait, so are they saying Facebook is more suited to stopping problems with Facebook compared to random users? Astonishing!
I wonder if this is true in other cases. I mean, I saw an issue with Amazon and figured I could fix it, but maybe Amazon itself might be better positioned to?
processedmeat t1_j57c68e wrote
Taken another way... users don't take accountability for what they share on Facebook.
Chpgmr t1_j57gml0 wrote
And what is the solution to that?
processedmeat t1_j57hd0x wrote
Users shouldn't hit the share button when they see misinformation.
Chpgmr t1_j57irfp wrote
Because we are all constantly up to date on what is and isn't misinformation, and there certainly aren't nefarious people out there.
insaneintheblain t1_j586xtd wrote
Social media is the solution.
insaneintheblain t1_j586v8h wrote
You realise that users create and share the actual content on Facebook, right?
Delamoor t1_j5878ii wrote
I'm pretty sure I can take down bot networks with willpower, facts and logic alone. I just have to make posts long and angry enough to outweigh theirs and then post them in response to every bot's post or comment. Seems doable in a weekend.
Jackmeoff611 t1_j555tp8 wrote
Just look at Reddit with its collective groupthink. If you don't aggressively agree with its viewpoints, you get banned. There is no such thing as a grey area anymore, and people have stopped learning how to interpret things for themselves.
alexRr92 t1_j557og7 wrote
I find this very true about grey areas; it's like people have forgotten grey areas exist, despite our reality being mostly made up of grey areas and complexity.
The real issue, I feel, is that too many are way too eager to speak on subjects they don't understand instead of openly admitting that they don't understand. There's nothing wrong with not understanding; you can't know everything about everything. People also don't seem to trust one another anymore, which contributes to the problem.
EddoWagt t1_j55me14 wrote
>The real issue, I feel, is that too many are way too eager to speak on subjects they don't understand instead of openly admitting that they don't understand.
This is probably the vocal minority, as people who know they don't know will probably remain silent
Chpgmr t1_j57h4fk wrote
Same with those street interviews. Look at how many people simply glance and walk by versus how many actually stop to talk. More often than not, you just get the weird, confrontational people who talk before they think.
IRYIRA t1_j57qsib wrote
2,400 years since Socrates and the smartest person in any room is the one who says, "I don't know." Of course we understand a lot more about our world today than we did then, but new data could always completely change the rules by which we understand how the world operates. None of that means we should stop trying to understand our world or that everything once believed to be true is wrong, rather we should recognize that what we know to be true today could be false tomorrow, or merely partially true.
Chiliconkarma t1_j56gckz wrote
Also, prejudice: people are very willing to ignore a lack of information, thinking more like "round block goes in round hole" than "out of X possible scenarios, you should worry about this one: ...".
If the poster mentions an acceptable clue for what the answer should be, then the answer can only be what the clue indicates.
budlystuff t1_j55l5ax wrote
I agree with this; a "dissenting" view on Reddit, even one with the most traction, gets buried in the comment section, and bots force-feed agendas.
Facebook is a sewer of information. I explain to my Mam that everything is fake, even when her neighbour is live-streaming at the local shops.
CrustyMilkCap t1_j555kwi wrote
Because Facebook users know how to stop the spread of misinformation on Facebook...
n8bitgaming t1_j56ndjx wrote
Same premise as carbon footprints and environmental impact. Corporate interests are masters at transferring responsibility to others.
Thatguyxlii t1_j56bm0i wrote
Except Facebook doesn't stop it, and it will often ignore misinformation even when it's reported. Meanwhile, it will "fact check" obvious satire and joke memes and restrict accounts based on that.
Few-Ability-7312 t1_j54zulg wrote
What classifies as misinformation?
_______someone t1_j55baoo wrote
According to Google, misinformation is defined by the OED as "false or inaccurate information, especially that which is deliberately intended to deceive".
denyjunctionfunction t1_j55v4un wrote
So misleading and cherry-picked information is fine so long as it is true (but leaving out context)?
thechinninator t1_j55vu83 wrote
An unfortunate loophole, but idk how to account for this if you want to keep moderators' personal biases out of the execution
denyjunctionfunction t1_j55y9jm wrote
A major loophole. I'm willing to bet most of the things that people call misinformation aren't straight-up lies, just cherry-picked information.
_______someone t1_j56com4 wrote
Misleading and cherry-picked information is inaccurate information.
denyjunctionfunction t1_j56p0tl wrote
No, they aren't. "Inaccurate" doesn't deal with context; it only says whether a statement is accurate or not, on a spectrum of how inaccurate/accurate it is. If stats show that X does Y when Z is present, it is not inaccurate to say X does Y. It's misleading, because the entire context isn't there, but it is factually correct.
_______someone t1_j57hqbr wrote
That's not how the concept of accuracy works.
Akiasakias t1_j55qskc wrote
So they determine not only the "truth" but also mind-read your intent.
Given the recent missteps of true stories being labeled misinformation, I don't think there is a good way to administer this policy; it is flawed.
resorcinarene t1_j55qbyf wrote
No, misinformation is false or inaccurate information. That's it. Information that is deliberately intended to deceive is disinformation.
Your post is misinformation
_______someone t1_j56bb3w wrote
By all means go ahead and tell the Oxford English Dictionary that you know better than them when defining a word.
While you're at it, look up the word "especially" in the dictionary and reread the definition quoted in my post.
Proponentofthedevil t1_j56mlh8 wrote
The definition isn't the issue. Read the question again and figure out what it is asking; your definition does nothing to answer it.
_______someone t1_j57fkw8 wrote
I believe this is the third time I've said it: it is not my definition.
Proponentofthedevil t1_j57iq2d wrote
That's not the issue. The definition is just that. Yes, we know that bad info is bad info. The definition doesn't tell us what is misinfo. That's the question. Are you being purposefully obtuse?
_______someone t1_j57k2ef wrote
Mate, you're debating semantics. I don't make up the words; people do. You got a problem with a word that doesn't meet your standard of certainty? You've got a big problem, cuz that's most words. Language is not maths. Get over it and get wise.
Proponentofthedevil t1_j57lbxz wrote
.... it's like you're not even reading the words I'm saying at all... never mind dude. Have a good day.
_______someone t1_j57o1o6 wrote
Welcome to my world.
Proponentofthedevil t1_j57qm15 wrote
Been there a long time. Welcome to mine.
salamander423 t1_j55evqq wrote
Lies, basically.
RonPMexico t1_j55nw13 wrote
All legal speech should be allowed, and algorithms should be content neutral. If a platform is going to censor any speech, they should be liable for all speech on the platform.
whittily t1_j55x1dr wrote
No algorithmically-curated content feed can be content neutral. Every design choice affects what you see and comes with unintended curatorial effects. It’s an oxymoron.
Deep90 t1_j57iczt wrote
It's also problematic because not all users are actually people.
The "every user is equal" model falls apart when I create 10 bot accounts against your single account. A truly neutral platform would give my posts 10x the reach.
RonPMexico t1_j55xnm7 wrote
It absolutely can be content neutral, and it normally is. The algorithm looks at engagement patterns and content is usually only considered after the optimization has occurred.
whittily t1_j561fn5 wrote
And then it surfaces content that is highly engaged, like sensationalized misinformation. Content-neutral decisions never have content-neutral effects.
The town square can only accommodate a limited amount of speech. Democratic societies have always had an active role in deciding what kind of speech is prioritized and what mechanisms should be used to do so in a way that’s fair and non-censorious. If you go to a public hearing, is it just a crowd of people shouting over each other? Do you only get to hear from whoever is shouting loudest? No, obviously that would be unproductive and stupid. The digital town square isn’t different.
Your statement also weirdly puts this design choice in a different category from literally every other choice a company makes when designing algorithms. Companies don't work from first principles to decide what inputs should feed an algorithm; they test changes and look to see if they produce the desired outputs. But for this one aspect, you expect them to design in a black box and not respond to the actual effects on the platform. It's just not engaging with the reality of how these systems get built and optimized.
RonPMexico t1_j562sk7 wrote
Prioritizing content is censorship.
I believe that it is fine if platforms want to censor content, but if they are going to take responsibility for some of the speech on their platform, they should have to be liable for all speech on their platform.
If we are going to allow nameless tech employees to determine who gets the megaphone in society with no public accountability, we should at least be able to use litigation to keep the size of their megaphone in check.
burnerman0 t1_j57dr0t wrote
> prioritizing content is censorship
What exactly do you think these algorithms are doing if not prioritizing content? By your logic every platform is practicing censorship simply by existing.
RonPMexico t1_j57e7t5 wrote
The platforms generally prioritize posts not by content but by interactions. The algorithm doesn't know what message any individual post conveys; it only knows that the post will lead to a desired outcome (clicks, shares, likes, what have you).
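A minimal sketch of that kind of interaction-only ranking, with made-up field names and weights (not any platform's actual code):

```python
# Engagement-only ranking: the ranker never reads post.text,
# it only scores interaction counts. All names and weights are invented.
from dataclasses import dataclass

@dataclass
class Post:
    text: str      # the message itself; never inspected below
    clicks: int
    shares: int
    likes: int

def engagement_score(post: Post) -> float:
    # Arbitrary illustrative weights; real systems tune these against
    # whatever engagement outcome they optimize for.
    return 1.0 * post.clicks + 3.0 * post.shares + 0.5 * post.likes

def rank_feed(posts: list[Post]) -> list[Post]:
    # Ordering depends only on interaction counts, never on what a post says.
    return sorted(posts, key=engagement_score, reverse=True)
```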
bildramer t1_j566unq wrote
There's "content-neutrality" as in not ranking posts by date, or not preferring .pngs to .jpgs, and then there's "content-neutrality" as in not looking into the content and determining whether a no-no badperson with the wrong opinions posted it. The first is usually acceptable to anyone, and not what we're talking about. And you can distinguish the two; there's a fairly thick line.
whittily t1_j56da21 wrote
I’m going to move past your ridiculous strawman and just say that that is not how algorithm design happens. You are just not engaging with reality. Every design choice is evaluated on its effects on user behavior. To insist that we refuse to evaluate whether algorithm design degrades the user experience by forcing lies into their feed is absurd.
RonPMexico t1_j56fdea wrote
The problem is that when a person makes a value judgment on content and uses it to promote (or not) that speech, it is censorship. If a platform is going to censor speech, they should be accountable for that speech.
whittily t1_j56y9xb wrote
- It's unavoidable. You are demanding something that is impossible. Every decision requires a value judgement, especially decisions that attempt to avoid a value judgement. In this case, we should value truth and accuracy.
- The platform shouldn't be responsible for these decisions. We should democratically determine how we prioritize speech in a crowded public sphere, just like every democratic society has done for hundreds of years. Pretending that every society has always allowed infinite, unfettered speech in public forums is ahistorical and also a little stupid. Society would be chaos, just like corporate-controlled digital public spaces are today.
- Finally, no, there is such a thing as truth and a lie. Sometimes it's complex, but that doesn't mean it is impossible to determine. Democracy only works when strong, trusted, public, unbiased institutions mediate information between experts/information producers and the public. The introduction of digital media infosystems without institutional mediation is breaking down our publics' access to true, relevant information and damaging our ability to solve problems politically.
RonPMexico t1_j574jrw wrote
You are unapologetically anti-free speech. What a stance.
whittily t1_j57728e wrote
No. Free speech =/= unfettered, unmoderated speech, and it never has in the history of the US. No society operates that way.
RonPMexico t1_j577huj wrote
Ok, is the sentence "a man cannot become a woman" true or false? How would it be decided in a way that does not infringe on free speech in your world?
SmellyMammoth t1_j57eija wrote
This is a bad take. You can’t force a company to host speech it doesn’t agree with. It’s also unrealistic to expect a company to monitor every single user post on their platform. This is why we have protections like Section 230 in place.
RonPMexico t1_j57fhes wrote
My point is section 230 shouldn't exist. Either the company is responsible for the posted content or it isn't.
A company removing content it does not agree with gives content it doesn't remove its implied endorsement. They can't have it both ways.
Euphoric-Driver-7568 t1_j5793oi wrote
I don’t want a technology firm to decide what I’m to see. If I want to listen to someone who only tells lies, I should be allowed to do that.
PhilosophyforOne t1_j57jxvb wrote
While I applaud the study and its results, the fact that some people try to argue otherwise is downright laughable.
Think about making the argument that instead of having a police force and laws that govern our behavior towards each other, we should simply leave it up to individual responsibility to reduce crime and harmful behavior.
(Note: I'm not saying our current system is perfect, but saying it's not up to the platform but the user to reduce misinformation is akin to saying that instead of the rule of law, it should be up to individuals to behave however they choose.)
U_Vill_Eat_Ze_Bugs t1_j56peol wrote
More censorship incoming!
PmMeWifeNudesUCuck t1_j57ib3s wrote
#TwitterFiles disagrees. The communications show they're the key source of government disinformation/misinformation, aka propaganda.
qwicksilver6 t1_j57n6rx wrote
Human body says lymph node to blame for rampant virus.
trollsmurf t1_j57tv12 wrote
But do they want to? Users are the bait.
boston101 t1_j584fn2 wrote
I've been thinking of something I'd like your opinions on.
I work in the industry as a data scientist/engineer and write and implement my ML models in production.
I think people would benefit from learning, at a very basic level, how their data is turned into decisions, aka money. I also think it would help to show them how data is structured for ML models to make those decisions.
What do you all think?
I'll hazard a guess and say the older generations won't take to this, but if it were taught in schools, even at a very basic level, I think people would rethink their behavior online.
(Making up this example) the basic level could be as simple as:
Data is gathered and organized into spreadsheets.
We are trying to predict the next value in one column. The values in this column are "yes" or "no" based on whether you clicked a button. The rest of the columns represent your behavior on the website, like cursor speed. With math magic we can see which columns most influence the "yes" or "no" value in the column we are trying to predict.
Finally, as new data comes in, we can use the math to see how your interactions on the website predict whether you'll click the button. Totally made up.
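Roughly that example as code, a minimal sketch with invented column names and toy data, using pandas and scikit-learn:

```python
# Toy version of the example above: the "clicked" column holds yes/no,
# the other columns hold behavior on the site, and the model tells us
# which columns most influence the prediction. All data is invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "cursor_speed":    [0.2, 0.9, 0.4, 0.8, 0.1, 0.7],      # behavior columns
    "seconds_on_page": [12, 240, 30, 180, 8, 200],
    "scroll_depth":    [0.10, 0.95, 0.30, 0.80, 0.05, 0.90],
    "clicked":         ["no", "yes", "no", "yes", "no", "yes"],  # target column
})

X = df.drop(columns="clicked")   # everything we know about your behavior
y = df["clicked"]                # the value we're trying to predict

model = RandomForestClassifier(random_state=0).fit(X, y)

# The "math magic": which columns most influence the yes/no outcome?
for name, importance in zip(X.columns, model.feature_importances_):
    print(f"{name}: {importance:.2f}")

# As new data comes in, predict whether a new visitor will click the button.
new_visitor = pd.DataFrame({
    "cursor_speed": [0.85], "seconds_on_page": [210], "scroll_depth": [0.88],
})
print(model.predict(new_visitor))   # e.g. ['yes']
```

The point isn't this particular model; it's that once your behavior is sitting in columns, predicting (and monetizing) your next action is routine.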
js1138-2 t1_j58af2p wrote
Am I allowed to say that I hate websites with slow performance that seems to be the result of processing something other than my menu choices? I leave those sites and never return. I return to sites that enable snappy searches for stuff.
boston101 t1_j58bkll wrote
Totally. Aren't we doing the same math for feature elimination every day, subconsciously?
Every decision involves some quick analysis, and every decision has features that one subconsciously weighs to reach it.
For example, why did you choose the apple you picked? Did it have the shine or firmness you wanted, without that ever rising to your attention? You made the decision without even knowing.
js1138-2 t1_j58dcbh wrote
Here's a clue. I do a lot of searches for hardware and parts. When I find a website with a good static home page, I bookmark it. When I find a site that wiggles too much on the home page, I leave immediately.
Hope you can capture that.
You have half a second to fully display the home page, or I'm gone.
After that I'm a bit more forgiving.
dumnezero t1_j59tsbq wrote
Platforms are a problem?
> Mark: Nooo, we're not a platform, we're PUBLISHERS!
Publishers are a problem?
> Mark: Nooo, wasn't us, it's the users. We're a PLATFORM!
https://www.eff.org/deeplinks/2020/12/publisher-or-platform-it-doesnt-matter
QTheLibertine t1_j57nw0v wrote
I did a study of all of human history and found allowing the government to define misinformation leads to death on a massive scale and should be avoided at all costs.
Delamoor t1_j589wzg wrote
I also did a study of human history and found that mob rule leads to death and suffering on a massive scale and should also be avoided at all costs.
Like government defining disinformation is also what protects you from 'Your tribe cursed my cousin, so we all came over here to murder half of you and enslave/rape the other half if we think they look good enough. This'll teach you to cast curses.'
Or even just 'our daughter is a witch. We need to kill her.'
[deleted] t1_j56n3h4 wrote
[removed]
Gozillasbday t1_j56refy wrote
At birth sure.
nlewis4 t1_j55m77u wrote
It is far too late to stop or reduce the spread of misinformation online. Bad actors that are knowingly spreading misinformation stand behind "my free speech!!!" and "I'm being silenced!" and the target audience eats it up.
thechinninator t1_j55x3gf wrote
More people than you think are reasonable but locked in an echo chamber due to where they happen to live. Yeah this will make some people dig in further but most of those are probably lost causes.
oOzephyrOo t1_j556d88 wrote
Sure, put the blame on platforms when lawmakers could toughen the laws and penalties on misinformation.
[deleted] t1_j558j9t wrote
[removed]
resorcinarene t1_j55rbu9 wrote
What was misinformation a year ago that's a fact today?
[deleted] t1_j55s15c wrote
[removed]
resorcinarene t1_j55v84u wrote
Speak plainly. What's the summary, in your words?
[deleted] t1_j55vkxo wrote
[removed]
resorcinarene t1_j55vnsg wrote
Do me a solid. I'm at work and don't have time for the whole thing.
_______someone t1_j55c58a wrote
That's called censorship. For the pros and cons of this practice, look it up. I ain't educating you on something you should have retained from your high-school history class.
[deleted] t1_j55lwbt wrote
[removed]
whittily t1_j575iny wrote
No. Speech is zero-sum. Democratic societies always have to create prioritization mechanisms and institutions so that public debate doesn't degrade into chaotic shouting matches.
mobrocket t1_j55k8pu wrote
It is. But censorship isn't always a bad thing.
Societies have to decide what should or shouldn't be censored.
I could easily argue why, in today's post-truth world where any idiot can find an audience, we may need more censorship.
[deleted] t1_j55ngl7 wrote
[removed]
mobrocket t1_j578oce wrote
So you think child porn should be allowed... same logic.
Or graphic murder on all TV stations.
Or the IRS disclosing everyone's tax returns.
All of that is censored because, as a society, we determined it's not acceptable for the public to have.
Maybe you should consider censoring your mind and its fifth-grade level of knowledge about censorship.
_______someone t1_j569vml wrote
The authority of a government to suppress the ideas that people express is always a bad thing.
Captain_Spicard t1_j55e4zl wrote
One thing I can think of is how crossposting is handled on Facebook.
If a group posts an article, and a separate group links to that same article, the reacts and comments go on the original post.
Post anything related to the Apollo moon landing, and all the anti-moon-landing groups will react to it. On a purely science-related group, you get "Laugh" reacts and an absolute cesspool of comments. It's impossible to have a nice discussion on anything in the realm of conspiracy.
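In data-model terms, the mechanism being described might look like this (a sketch with invented names, not Facebook's actual design): every share references one underlying post object, so reactions from any group pool on it.

```python
# Sketch of the crosspost behavior described above (invented names):
# shares don't get their own reaction lists; they all point back to
# one original post, so reactions from every group accumulate there.
from dataclasses import dataclass, field

@dataclass
class OriginalPost:
    url: str
    reactions: list[str] = field(default_factory=list)  # pooled across groups

@dataclass
class GroupShare:
    group: str
    source: OriginalPost  # no per-group reactions; everything lands on source

article = OriginalPost(url="example.com/apollo-article")
science_share = GroupShare("Space Science", article)
conspiracy_share = GroupShare("Moon Truth", article)

# A "Laugh" react from the conspiracy group shows up on the science
# group's post too, because both shares reference the same object.
conspiracy_share.source.reactions.append("Laugh")
print(article.reactions)  # ['Laugh']
```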