Facebook Policies

Since bad actors infiltrated the platform during the 2016 elections, Facebook has prioritized its content review process.


Last week, Facebook announced it had removed 32 Pages and accounts from its platform and Instagram for “coordinated inauthentic behavior” — a term used by Facebook to define efforts by a network of accounts aiming to spread malicious content. The removed assets included eight Facebook Pages, 17 Facebook accounts and seven Instagram accounts.

“This kind of behavior is not allowed on Facebook because we don’t want people or organizations creating networks of accounts to mislead others about who they are, or what they’re doing,” wrote Facebook in its July 31 announcement that the accounts had been taken down.

One week later, Facebook took down four more Pages, belonging to conspiracy theorist and Infowars founder Alex Jones, for repeatedly posting content that broke the company’s Community Standards guidelines. (Spotify, Apple, YouTube and others have also restricted or removed Jones’ content on their platforms.)

Facebook’s decisions to take down content, and the accounts attached to it, are a direct result of the fallout after the company failed to identify a surge in misinformation campaigns plaguing the platform during the 2016 US election cycle. Since admitting it did not do enough to police malicious content and bad actors, Facebook has pledged to prioritize its content review process.

How do these efforts affect marketers? While Facebook’s actions are aimed at people and organizations with malicious intent, marketers looking to build and foster brands on Facebook need to be aware of Facebook’s rules around content — especially since the content review policies and systems apply to Facebook ad policies as well. We’ve put together a rundown of Facebook’s content review process, the teams involved and how it’s working so far.

Removing content vs. limiting distribution

In April, Facebook released its first-ever Community Standards guidelines — a rule book outlining the company’s content policies, broken down into six categories: violence and criminal behavior, safety, objectionable content, integrity and authenticity, respecting intellectual property, and content-related requests. At the time, Facebook said it was using a combination of artificial intelligence and reports from people who flag posts for potential abuse. Posts reported for violating content policies are reviewed by an operations team made up of more than 7,500 content reviewers.



Regarding the review process, Facebook says its content review team members are assigned a queue of reported posts to evaluate one by one. Reviewers are not required to evaluate any set number of posts — there is no quota they must meet for the amount of content being reviewed.

In a July 24 Q&A on Election Integrity, Facebook’s News Feed product manager, Tessa Lyons, said the company removes any content that violates its Community Standards guidelines, but only reduces the distribution of problematic content that may be false yet does not violate those standards. According to Lyons, Facebook ranks stories rated false by fact-checkers lower in the News Feed so dramatically fewer people see them. (According to Facebook’s data, stories ranked lower in the News Feed see their future views cut by more than 80 percent.)

Lyons addressed criticism of Facebook’s policy of limiting the distribution of content identified as false rather than removing it, explaining that it is not Facebook’s policy to censor content that doesn’t violate its rules.

“Here’s how we think about this: if you are who you say you are and you’re not violating our Community Standards, we don’t believe we should stop you from posting on Facebook. This approach means that there will be information posted on Facebook that is false and that many people, myself included, find offensive,” said Lyons.
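To make the removal-versus-demotion distinction Lyons describes concrete, here is a minimal sketch in Python. It is a hypothetical model, not Facebook’s implementation: the function name, the scoring scheme and the `DEMOTION_FACTOR` value are all assumptions, with the factor chosen only to mirror the stated “more than 80 percent” reduction in views.

```python
from typing import Optional

# Hypothetical two-tier policy: violating content is removed outright;
# content rated false by fact-checkers (but not violating) is only demoted.
DEMOTION_FACTOR = 0.2  # assumption: mirrors the "more than 80 percent" claim

def feed_score(base_score: float, violates_standards: bool,
               rated_false: bool) -> Optional[float]:
    """Return a News Feed ranking score, or None if the post is removed."""
    if violates_standards:
        return None  # removed from the platform entirely
    if rated_false:
        return base_score * DEMOTION_FACTOR  # still visible, ranked far lower
    return base_score

print(feed_score(1.0, violates_standards=True, rated_false=False))  # None
print(feed_score(1.0, violates_standards=False, rated_false=True))  # 0.2
```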

More recently, Facebook offered a deeper dive into the reasons it would remove a Page.

“If a Page posts content that violates our Community Standards, the Page and the Page admin responsible for posting the content receive a strike. When a Page surpasses a certain threshold of strikes, the whole Page is unpublished.”

Facebook says the effects of a strike vary depending on the severity of the content violation, and that it doesn’t give specific numbers for how many strikes a Page may receive before being removed.

“We don’t want people to game the system, so we do not share the specific number of strikes that leads to a temporary block or permanent suspension.” Facebook says multiple content violations will result in an account being temporarily blocked or a Page being unpublished. If an appeal is not made to reinstate the Page — or if an appeal is made but denied — the Page is then removed.

Announced in April, the appeal process is a new addition to Facebook’s content review system.
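Facebook won’t disclose the real threshold, but the strike-and-appeal flow it describes can be modeled roughly as in the sketch below. Everything here is an assumption for illustration: the threshold value, the severity weights and the class and method names.

```python
from dataclasses import dataclass

STRIKE_THRESHOLD = 5  # assumption: Facebook does not share the real number

@dataclass
class Page:
    name: str
    strikes: int = 0
    unpublished: bool = False

    def record_violation(self, severity: int) -> None:
        """Add strikes for a violating post; worse violations weigh more."""
        self.strikes += severity
        if self.strikes >= STRIKE_THRESHOLD:
            self.unpublished = True  # whole Page taken down

    def appeal(self, granted: bool) -> None:
        """A granted appeal reinstates the Page; a denied one leaves it down."""
        if self.unpublished and granted:
            self.unpublished = False

page = Page("Example Page")
page.record_violation(severity=2)  # first violating post
page.record_violation(severity=3)  # second one crosses the threshold
assert page.unpublished
page.appeal(granted=False)         # denied (or no) appeal: Page stays removed
```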

Facebook’s content review teams and technology

In recent months, Facebook has said multiple times that it will hire 20,000 safety and security employees over the course of this year. As of July 24, the company confirmed it had hired 15,000 of the 20,000 employees it plans to recruit.


The content review teams include a combination of full-time employees, contractors and partner companies located around the world, along with 27 third-party fact-checking partnerships in 17 countries. In addition to human review, Facebook uses AI and machine learning technology to identify harmful content.

“We’re also investing heavily in new technology to help deal with problematic content on Facebook more effectively. For example, we now use technology to assist in sending reports to reviewers with the right expertise, to cut out duplicate reports, and to help detect and remove terrorist propaganda and child sexual abuse images before they’ve even been reported,” wrote Facebook’s VP of global policy management, Monika Bickert, on July 17.
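Two of the techniques Bickert mentions (collapsing duplicate reports and steering each report to reviewers with the right expertise) might look something like the following sketch. The queue names, the category labels and the fingerprinting scheme are assumptions, not Facebook’s actual pipeline.

```python
import hashlib
from collections import defaultdict, deque

# Hypothetical report-routing pipeline: deduplicate reports about the same
# post, then enqueue each unique post for the specialist team whose
# expertise matches the report category. All names here are invented.
queues = defaultdict(deque)  # category -> queue of posts awaiting review
seen = set()                 # fingerprints of posts already queued

def fingerprint(post_text: str) -> str:
    """Stable ID so thousands of reports about one post collapse to one item."""
    return hashlib.sha256(post_text.encode("utf-8")).hexdigest()

def route_report(post_text: str, category: str) -> None:
    """Drop duplicate reports; route new ones to the right reviewer queue."""
    fp = fingerprint(post_text)
    if fp in seen:
        return  # already awaiting review; no need to queue it again
    seen.add(fp)
    queues[category].append(post_text)

route_report("example violating post", "hate_speech")
route_report("example violating post", "hate_speech")  # duplicate, ignored
print({category: len(q) for category, q in queues.items()})  # {'hate_speech': 1}
```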

Facebook’s content review employees undergo pre-training, hands-on learning and ongoing coaching during their employment. The company says it also has four clinical psychologists on staff, spread across three regions, to design and evaluate resiliency programs for employees tasked with reviewing graphic and objectionable content.

What we know about recently removed content

Regarding the 32 Pages and accounts removed last week, Facebook said it could not identify the responsible group (or groups), but that more than 290,000 Facebook accounts had followed at least one of the Pages. In total, the removed Pages and accounts had published more than 9,500 organic posts on Facebook and one piece of content on Instagram, run approximately 150 ads (costing a total of $11,000) and created about 30 Events dating back to May 2017 — the largest of which had 4,700 people interested in attending and 1,400 users who said they would attend.

The Alex Jones Pages were taken down because they violated Facebook’s graphic violence and hate speech policies. Before the takedown, Facebook had removed videos posted to the Pages for violating its hate speech and bullying policies, and the Page admin, Alex Jones, was placed on a 30-day block for posting the violating content. Within a week, Facebook made the decision to remove all the Pages after receiving more reports of content violations.

Looking beyond these two specific actions, Facebook says it is currently stopping more than a million accounts per day at the point of creation using machine learning technology. The company’s first transparency report, released in May, showed Facebook had taken action against 1.4 billion pieces of violating content, including 837 million counts of spam and 583 million fake accounts. Excluding hate speech violations, Facebook says more than 90 percent of the content was removed without being reported in nearly all categories, including spam, nudity and sexual activity, graphic violence and terrorist propaganda.

In the Q&A on Election Integrity issues, Facebook said it took down tens of thousands of fake likes from Pages of Mexican candidates during Mexico’s recent presidential elections, along with fake Pages, groups and accounts that violated policies and impersonated politicians running for office. (Ahead of the November US midterm elections, Facebook has launched a verification process for any person or group wanting to run political ads, as well as a searchable archive of political ad content going back seven years that lists an ad’s creative, budget and the number of users who viewed it.)

But is it working?

While Facebook’s transparency report offered insight into just how many spam posts, fake accounts and other pieces of malicious content the company has identified since last October, there is still work left to do.

Last month, advertisers discovered that Facebook ads containing words like “Bush” and “Clinton” were removed after being flagged as political ads from advertisers that had not been verified. A barbecue restaurant ad listing the business’s location on “President Clinton Avenue” and a Walmart ad for “Bush” baked beans were both removed — most likely the result of Facebook’s automated systems incorrectly identifying them as political ads.
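A context-blind keyword filter would behave exactly this way. The toy sketch below (the keyword list is an assumption, and Facebook’s real systems are surely more sophisticated) shows how an address like “President Clinton Avenue” trips such a filter:

```python
# Toy keyword filter with no sense of context: any ad mentioning a name on
# the watchlist is flagged as political, whether it is a candidate, a street
# address or a brand of baked beans. The keyword list is invented.
POLITICAL_KEYWORDS = {"bush", "clinton", "obama", "trump"}

def looks_political(ad_text: str) -> bool:
    """Flag an ad if any token matches the political keyword list."""
    tokens = {word.strip('.,!?"').lower() for word in ad_text.split()}
    return not POLITICAL_KEYWORDS.isdisjoint(tokens)

print(looks_political("Visit us at 100 President Clinton Avenue!"))    # True
print(looks_political('Stock up on "Bush" baked beans this weekend'))  # True
print(looks_political("Fresh brisket, smoked daily"))                  # False
```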

More concerning, a report from Channel 4’s news program “Dispatches” showed a Dublin-based content review company contracted by Facebook failed to act on numerous counts of content that violated the platform’s Community Standards. The report also accused Facebook of practicing a “shielded review” process, allowing Pages that repeatedly posted violating content to remain up because of their high follower counts.

Facebook responded to the charge by confirming it does perform “Cross Check” reviews (its term for shielded reviews), but said the practice is part of a process to give certain Pages or Profiles a “second layer” of review to make sure policies were applied correctly.

“To be clear, Cross Checking something on Facebook does not protect the profile, Page or content from being removed. It is simply done to make sure our decision is correct,” wrote Bickert in response to the Channel 4 report.

Ever since admitting Facebook was slow to identify Russian interference on the platform during the 2016 elections, CEO Mark Zuckerberg has said time and time again that security is not a problem that can ever be fully solved. Facebook’s News Feed product manager spoke to the complicated intersection of security and censorship on the platform during the company’s Q&A on Election Integrity: “We believe we are working to strike a balance between expression and the safety of our community. And we think it’s a hard balance to strike, and it’s an area that we’re continuing to work on and get feedback on — and to increase our transparency around.”

From its Q1 transparency report to its latest actions removing malicious content, Facebook continues to prove it is trying to rid its platform of bad actors. The real test of whether the company has made any progress since 2016 could very well be this year’s midterm elections in November. As Facebook puts more focus on content and its review process, marketers and advertisers need to understand how these systems may impact their visibility on the platform.

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.