The issue of filtering out content that advocates or glorifies terrorism on widely used media sites like Alphabet’s YouTube has come under renewed scrutiny since authorities learned that 22-year-old Manchester bomber Salman Abedi was radicalized after watching videos of an American preacher posted on the site. So, unsurprisingly, barely two weeks after UK Prime Minister Theresa May accused tech companies of providing a “safe space” for extremist content, Google’s General Counsel Kent Walker has revealed four new measures the company is taking to censor its users.

The biggest change? Questionable content that doesn’t explicitly meet the grounds for removal under YouTube’s terms of use will now be buried, as the New York Times noted, while the site also plans to improve its ability to automatically filter out content that does meet those standards.

These videos will now appear behind a warning, will be barred from featuring ads or collecting advertising revenue, and will not be recommended, endorsed, or open to comments. Users will still be able to find the content once the policy goes into effect, but the change will eliminate one of the most prominent means of transmission – sharing over social media networks like Twitter and the messaging app Telegram.

“…we will be taking a tougher stance on videos that do not clearly violate our policies — for example, videos that contain inflammatory religious or supremacist content. In future these will appear behind an interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements. That means these videos will have less engagement and be harder to find. We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.”

As noted by the NYT, figuring out how to censor extremist content while taking precautions not to tread too heavily on free speech has been a longstanding problem for YouTube.

“Google has created a thriving video platform that appeals to people with a wide range of interests. But it has also become a magnet for extremist groups that can reach a wide audience for their racist or intolerant views. Google has long wrestled with how to curb that type of content while not inhibiting the freedom that makes YouTube popular.”

The company also said it will launch a new social-intervention program that relies on the “power of targeted online advertising” to reach impressionable would-be terrorist recruits and redirect them toward anti-terrorist content.

In addition to devoting more engineering resources to technology that automatically filters out questionable content, the company said it would also add more manpower to its “trusted flagger” program, though it neglected to explain what qualifies someone as a “trusted flagger” (from what we can tell, the program involves partnerships with select NGOs).

While we recognize the political pressure that the company is under to seem like it’s doing something about terrorism, we hope YouTube doesn’t repeat its mistakes from September 2016, when it sparked a backlash after deeming posts by YouTube personality Philip DeFranco to be “inappropriate for advertising,” offering only a vague explanation as to why.

Perhaps the company could commit to hiring people from a truly diverse range of backgrounds and political persuasions to try to prevent a repeat of this incident. Though given the state of today’s discourse – where leftists accuse anyone who disagrees with them of being a hateful racist – we worry that relying at all on human judgment could be a mistake.

Especially if these flaggers are academics: the political climate on US college campuses, as students at Evergreen State in Olympia, Wash., recently demonstrated, is grossly intolerant of viewpoints that don’t jibe with their ultraprogressive orthodoxies.


This article appeared at ZeroHedge.com at:  http://www.zerohedge.com/news/2017-06-19/google-promises-bury-questionable-content