One month after Facebook revealed a seven-point plan to eradicate “fake news”, Mark Zuckerberg has made good on his promise to strip Facebook of fake news stories, starting with a tool that lets users flag anything they consider a hoax, as well as features that tweak Facebook’s News Feed algorithm and place new restrictions on advertising.

Facebook’s 1.8 billion users will now be able to click the upper right-hand corner of a post to flag content as fake news.

The first problem, however, emerges immediately: as NBC notes, “legitimate news outlets won’t be able to be flagged,” which raises the question of what counts as a “legitimate news outlet.” Does that include the likes of the NYT and WaPo, which during the run-up to the election declared on a daily basis that Trump had no chance of winning, which have since published defamatory stories about so-called “Russian propaganda news sites” only to admit later that their source data was incorrect, and which many consider to be the source of “fake news” themselves?

Also, just who makes the determination of what counts as a “legitimate news outlet”?


In any case, flagged stories – which really means any story that a reader disagrees with – will then be reviewed by Facebook researchers and sent on to third-party fact-checking organizations for further verification — or marked as fake.

Here too, one wonders how much checking will actually take place, considering that these “researchers” will be bombarded with tens of thousands of flagged articles daily. It will ultimately become a rote exercise: delete anything flagged as false by enough disgruntled readers and move on to the next article, all while leaving untouched the narrative spun by the liberal “legitimate news outlets” – the ones who would jump at the opportunity to have dinner with Podesta in hopes of becoming Hillary Clinton’s public relations arm.

“We believe in giving people a voice and that we cannot become arbiters of truth ourselves, so we’re approaching this problem carefully,” Adam Mosseri, Facebook’s vice president of News Feed, said in a blog post. So what Facebook will do is give a voice to all those who praise any article they agree with, and slam – and flag as “fake news” – anything they disagree with. At least no book burning will be involved.

The Facebook VP promised that “we’ve focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain, and on engaging both our community and third party organizations.”

Not only that, but Facebook’s algorithm that decides what gets the most prominence in the News Feed will also be tweaked – one would assume to give more prominence to the above-mentioned “legitimate news outlets”… such as WaPo and the NYT.

How will the algorithm determine if a story is potentially fake? If a story is being read but not shared, Mosseri said, that may be a sign it’s misleading. Which in turn means that clickbait articles are about to explode at the expense of thought-provoking, long-read pieces, which the current generation of Facebook readers has no time for anyway.

“We’re going to test incorporating this signal into ranking, specifically for articles that are outliers, where people who read the article are significantly less likely to share it,” he said.
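For what it’s worth, Facebook has not said how this “read but not shared” signal is actually computed. A crude, purely illustrative sketch of the idea – with all names, data, and thresholds invented for this example – might look something like this:

from statistics import median

def flag_low_share_outliers(articles, fraction_of_median=0.25):
    # Purely illustrative: flag articles whose share-to-read ratio falls far
    # below the typical ratio, i.e. the "read but not shared" signal Mosseri
    # describes. Facebook's real ranking signal is not public.
    # `articles` is a list of dicts with "id", "reads" and "shares" keys.
    ratios = {a["id"]: a["shares"] / a["reads"] for a in articles if a["reads"] > 0}
    typical = median(ratios.values())
    return [aid for aid, r in ratios.items() if r < typical * fraction_of_median]

# Example: story "c" is widely read but almost never shared.
stories = [
    {"id": "a", "reads": 1000, "shares": 120},
    {"id": "b", "reads": 800,  "shares": 95},
    {"id": "c", "reads": 5000, "shares": 3},
]
print(flag_low_share_outliers(stories))  # ['c']

However Facebook actually implements it, the essence is the same: stories that readers open but conspicuously decline to pass along get pushed down.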

It gets better: the next step in Facebook’s plan to rid the site of fake news involves sending flagged stories to third-party fact-checking organizations, including Snopes, PolitiFact, and FactCheck.org – outfits which, as the recent election showed, are just as biased as the so-called “fake news” sites, but which cover their partiality under the cloak of objectivity, which they conflate with being “factual.”

A group of Facebook researchers will initially have the responsibility of sifting through flagged stories and determining which ones to send to the fact-checking organizations. If it’s determined to be fake, the story will be flagged as disputed and include a link explaining why.

Then the punishment: flagged stories can still be shared, but readers will be warned in advance, and they’ll be more likely to appear lower in News Feed. These stories also won’t be able to be promoted or turned into advertisements. 
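To be clear about what the announced pipeline amounts to, here is a rough, purely hypothetical sketch of the flag–review–dispute workflow described above (the class and method names are ours, not Facebook’s):

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Status(Enum):
    PUBLISHED = auto()   # normal story
    FLAGGED = auto()     # reported as a hoax by readers
    IN_REVIEW = auto()   # handed to third-party fact checkers
    DISPUTED = auto()    # fact checkers identified it as fake

@dataclass
class Story:
    url: str
    status: Status = Status.PUBLISHED
    dispute_link: Optional[str] = None   # link explaining why it was disputed

    def flag(self):
        self.status = Status.FLAGGED

    def send_to_fact_checkers(self):
        self.status = Status.IN_REVIEW

    def mark_disputed(self, explanation_url):
        self.status = Status.DISPUTED
        self.dispute_link = explanation_url

    # The consequences the article describes:
    def can_be_promoted_as_ad(self):
        return self.status is not Status.DISPUTED

    def is_demoted_in_feed(self):
        return self.status is Status.DISPUTED

story = Story("http://example.com/some-story")
story.flag()
story.send_to_fact_checkers()
story.mark_disputed("http://example.com/why-disputed")
print(story.can_be_promoted_as_ad(), story.is_demoted_in_feed())  # False True

In other words: once enough readers flag a story and a fact checker agrees, it stays shareable in name only, stripped of reach and of any advertising value.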


* * *

Facebook’s bottom-line argument for engaging in soft censorship? Money.

While Facebook hopes these tools will be helpful, they’re also aiming to hit purveyors of fake news where it hurts — the pocketbook.

“Spammers make money by masquerading as well-known news organizations, and posting hoaxes that get people to visit their sites, which are often mostly ads,” Mosseri said.

“On the buying side we’ve eliminated the ability to spoof domains, which will reduce the prevalence of sites that pretend to be real publications. On the publisher side, we are analyzing publisher sites to detect where policy enforcement actions might be necessary,” he said.
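Facebook has not explained how its domain-spoofing check works. As a purely hypothetical illustration of the idea, a post that names a well-known publisher but links to a domain that publisher does not own could be treated as suspicious; the allow-list and function below are invented for this sketch and are not Facebook’s method:

from urllib.parse import urlparse

# Hypothetical allow-list mapping publisher names to domains they actually own.
KNOWN_PUBLISHER_DOMAINS = {
    "abc news": {"abcnews.go.com", "abcnews.com"},
    "the washington post": {"washingtonpost.com"},
}

def looks_spoofed(display_name, link_url):
    # Naive sketch of a spoofed-domain check: a post claiming to be from a
    # well-known publisher while linking to a domain that publisher does not
    # own is flagged as suspicious.
    real_domains = KNOWN_PUBLISHER_DOMAINS.get(display_name.lower())
    if not real_domains:
        return False  # unknown publisher: nothing to compare against
    host = urlparse(link_url).netloc.lower()
    if host.startswith("www."):
        host = host[len("www."):]
    return not any(host == d or host.endswith("." + d) for d in real_domains)

print(looks_spoofed("ABC News", "http://abcnews.com.co/some-story"))  # True (spoofed)
print(looks_spoofed("ABC News", "http://abcnews.go.com/real-story"))  # False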

As a reminder, the Fake News theme reached a boiling point days after the election, when Zuckerberg said it was “pretty crazy” to think fake news could have influenced the election and warned Facebook “must be extremely cautious about becoming arbiters of truth.” Less than two weeks later, with the issue still simmering, Zuckerberg shared a more detailed account of projects he said were already underway to thwart the spread of misinformation.

While the “fake news” narrative – which boiled over following the disastrous WaPo report on “Russian propaganda” outlets, a report that ultimately crushed the credibility of its author – has since been supplanted by the “Putin hacked the election” narrative, the quiet push to silence non-compliant voices continues.

Amusingly, the team at Facebook has made it clear they don’t want censorship on the site and that these new tools are just part of the evolving process of combating misinformation. And yet, crowdsourced censorship is precisely what Facebook has just rolled out.

Ultimately, what will end up happening is that one half of Facebook’s users will flag as fake what is written by one half of the media, and vice versa, while millions of users simply leave the now censorship-endorsing social network out of disgust.

Because, while we admire Zuckerberg’s initiative, there is one tried and true way to avoid all the “fake news” on Facebook:

 

* * *


The Facebook blog post on the topic is below:

News Feed FYI: Addressing Hoaxes and Fake News

By Adam Mosseri, VP, News Feed

A few weeks ago we previewed some of the things we’re working on to address the issue of fake news and hoaxes. We’re committed to doing our part and today we’d like to share some updates we’re testing and starting to roll out.

We believe in giving people a voice and that we cannot become arbiters of truth ourselves, so we’re approaching this problem carefully. We’ve focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain, and on engaging both our community and third party organizations.

The work falls into the following four areas. These are just some of the first steps we’re taking to improve the experience for people on Facebook. We’ll learn from these tests, and iterate and extend them over time.

Easier Reporting
We’re testing several ways to make it easier to report a hoax if you see one on Facebook, which you can do by clicking the upper right hand corner of a post. We’ve relied heavily on our community for help on this issue, and this can help us detect more fake news.

[Screenshot: reporting a story as fake]

Flagging Stories as Disputed
We believe providing more context can help people decide for themselves what to trust and what to share. We’ve started a program to work with third-party fact checking organizations that are signatories of Poynter’s International Fact Checking Code of Principles. We’ll use the reports from our community, along with other signals, to send stories to these organizations. If the fact checking organizations identify a story as fake, it will get flagged as disputed and there will be a link to the corresponding article explaining why. Stories that have been disputed may also appear lower in News Feed.

[Screenshot: a disputed story]

It will still be possible to share these stories, but you will see a warning that the story has been disputed as you share.

[Screenshot: sharing a disputed story]

Once a story is flagged, it can’t be made into an ad and promoted, either.

Informed Sharing
We’re always looking to improve News Feed by listening to what the community is telling us. We’ve found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way. We’re going to test incorporating this signal into ranking, specifically for articles that are outliers, where people who read the article are significantly less likely to share it.

Disrupting Financial Incentives for Spammers
We’ve found that a lot of fake news is financially motivated. Spammers make money by masquerading as well-known news organizations, and posting hoaxes that get people to visit their sites, which are often mostly ads. So we’re doing several things to reduce the financial incentives. On the buying side we’ve eliminated the ability to spoof domains, which will reduce the prevalence of sites that pretend to be real publications. On the publisher side, we are analyzing publisher sites to detect where policy enforcement actions might be necessary.

It’s important to us that the stories you see on Facebook are authentic and meaningful. We’re excited about this progress, but we know there’s more to be done. We’re going to keep working on this problem for as long as it takes to get it right.

 

This article appeared at ZeroHedge.com at: http://www.zerohedge.com/news/2016-12-15/facebook-rolls-out-tools-curb-fake-news