Facebook Censors 7 Million COVID Posts in Second Quarter – but Removes Fewer Posts about Child Abuse and Suicide

Facebook released its “Community Standards Enforcement Report” and said the company had censored 7 million COVID-19 posts and placed a misinformation warning label on 98 million other COVID-related posts from April through June 2020.

At the same time, fewer posts related to the categories of suicide and self-injury, or child nudity and sexual exploitation, were taken down compared with the first quarter. The company relies more heavily on content reviewers for these two categories, and as workers were kept at home due to COVID lockdowns, fewer people were available to review this graphic content.

The company estimates that less than 0.05% of views were of content that violated its standards for either of these categories.

In an online press conference, Facebook VP of Integrity Guy Rosen said the company also has policies regarding “hate speech,” and it took action on 22.5 million items in the second quarter of this year, up from 9.6 million the previous quarter.

Facebook’s sixth report on enforcing its community standards was released Tuesday, August 11. The company began publishing the reports every six months in 2018 and moved to quarterly reports this year.

Rosen explained that the removed coronavirus-related posts include “posts that push fake preventative measures or exaggerated cures that the CDC and other health experts tell us are dangerous.” He added, “For other misinformation, we work with independent fact checkers to display warning labels.”

In March, Facebook announced it had been removing “COVID-19 related misinformation that could contribute to imminent physical harm” since January. The company says it uses guidance from the World Health Organization (WHO), the CDC and local health authorities. Facebook also started a “COVID-19 Information Center,” which “includes real-time updates from national health authorities and global organizations, such as the WHO.”

The social media giant removed ads for products claiming to guarantee a cure or prevent people from contracting the coronavirus, and it also took down posts that say physical distancing doesn’t slow the spread of the virus.

The tech company uses both automated filters and human reviewers to remove content. Facebook sent content moderators home in March due to the COVID-19 pandemic. The company said, “For both our full-time employees and contract workforce, there is some work that cannot be done from home due to safety, privacy and legal reasons,” so it relied more heavily on artificial intelligence to evaluate posts. Many of those reviewers have since been brought back online.

Fewer posts with child nudity and sexual exploitation or suicide and self-injury were taken down in the second quarter, Rosen said, because doing so depends more on human reviewers than on automated filters.

“Reviewing this content continues to be challenging. It can’t be done from home due to its very graphic nature. We want to ensure it’s reviewed in a more controlled environment and that’s why we started bringing a small number of reviewers, where it’s safe, back into the office.”

Facebook came under fire last month after removing a coronavirus press conference video from America’s Frontline Doctors. The doctors advocated hydroxychloroquine for treating COVID, and the video had been viewed over 17 million times on Facebook before it was taken down.

Despite its efforts to moderate content, a recent article from MIT Technology Review said that both human reviewers and technology still fail. Facebook employs “15,000 people who spend all day deciding what can and can’t be on Facebook.” The majority are not directly employed by Facebook; the work is outsourced to other vendors.

“Facebook has itself admitted to a 10% error rate, whether that’s incorrectly flagging posts to be taken down that should be kept up or vice versa,” the magazine reports. That is a huge number of mistakes, it says: “Given that reviewers have to wade through three million posts per day, that equates to 300,000 mistakes daily.”

“Facebook needs to bring content moderators in-house, make them full employees, and double their numbers,” the magazine says, citing a report on Facebook’s content moderation from New York University’s Stern Center for Business and Human Rights.

While removing some COVID posts may be controversial, MIT Technology Review argues that moderation is important: “Imagine if Facebook stopped moderating its site right now. Anyone could post anything they wanted. Experience seems to suggest that it would quite quickly become a hellish environment overrun with spam, bullying, crime, terrorist beheadings, neo-Nazi texts, and images of child sexual abuse. In that scenario, vast swaths of its user base would probably leave, followed by the lucrative advertisers.”

Related articles:

Amazon, YouTube, Facebook: Silencing Those Who Disagree with Transgender Activism and Ideology

Facebook Bans Content Related to Change from Homosexuality – Calls it ‘Hate Speech’

Facebook, Twitter and YouTube Ban Doctors’ Video Advocating Hydroxychloroquine for Treating COVID

Facebook Employees Stage ‘Virtual Walkout’ to Protest Company’s Unwillingness to Censor President Trump

Photo from klevo / Shutterstock.com