Facebook Announces Removal of More Than 250 Accounts, 500 Pages for Breaking Spam Rules

Facebook: We're going after bad behavior, not content

October 11, 2018

Facebook announced Thursday the removal of more than 250 accounts and 500 Pages for breaking its rules on "spam and coordinated inauthentic behavior."

It stressed that the actions were politically agnostic and targeted behavior, not content. Facebook has faced charges of bias from conservatives, including allegations that it suppressed right-leaning news sources in its trending news section.

"Today, we're removing 559 Pages and 251 accounts that have consistently broken our rules against spam and coordinated inauthentic behavior," Facebook said in a blog post. "Given the activity we’ve seen — and its timing ahead of the US midterm elections — we wanted to give some details about the types of behavior that led to this action. Many were using fake accounts or multiple accounts with the same names and posted massive amounts of content across a network of Groups and Pages to drive traffic to their websites. Many used the same techniques to make their content appear more popular on Facebook than it really was. Others were ad farms using Facebook to mislead people into thinking that they were forums for legitimate political debate."

Facebook said it wanted people to be able to trust the connections they make on the platform. Among the spam it said it is cracking down on are "clickbait" posts – with headlines like 'You Won't Believe This' – that drive people to websites that are ad farms. Bad actors then post the same content in dozens or even hundreds of Facebook Groups to drive traffic to their sites and generate revenue.

"This artificially inflates engagement for their inauthentic Pages and the posts they share, misleading people about their popularity and improving their ranking in News Feed. This activity goes against what people expect on Facebook, and it violates our policies against spam," Facebook wrote.

Such clickbait can include inflammatory political content, as well as stories about natural disasters and celebrity gossip that may well turn out to be "fake news."

"This is why it's so important we look at these actors' behavior – such as whether they're using fake accounts or repeatedly posting spam – rather than their content when deciding which of these accounts, Pages or Groups to remove," Facebook wrote.

The social media giant has also come under sharp scrutiny in Washington and around the world for serving as a forum for Russian trolls spreading fake news and sowing discord during the 2016 election cycle. CEO Mark Zuckerberg said this year that Facebook is taking steps to combat future election interference and would be better prepared to take down fake accounts and coordinate with security firms to prevent malicious behavior.

Full Facebook blog post:

People need to be able to trust the connections they make on Facebook. It's why we have a policy banning coordinated inauthentic behavior — networks of accounts or Pages working to mislead others about who they are, and what they are doing. This year, we've enforced this policy against many Pages, Groups and accounts created to stir up political debate, including in the US, the Middle East, Russia and the UK. But the bulk of the inauthentic activity we see on Facebook is spam that's typically motivated by money, not politics. And the people behind it are adapting their behavior as our enforcement improves.

One common type of spam has been posts that hawk fraudulent products like fake sunglasses or weight loss "remedies." But a lot of the spam we see today is different. The people behind it create networks of Pages using fake accounts or multiple accounts with the same names. They post clickbait posts on these Pages to drive people to websites that are entirely separate from Facebook and seem legitimate, but are actually ad farms. The people behind the activity also post the same clickbait posts in dozens of Facebook Groups, often hundreds of times in a short period, to drum up traffic for their websites. And they often use their fake accounts to generate fake likes and shares. This artificially inflates engagement for their inauthentic Pages and the posts they share, misleading people about their popularity and improving their ranking in News Feed. This activity goes against what people expect on Facebook, and it violates our policies against spam.

Topics like natural disasters or celebrity gossip have been popular ways to generate clickbait. But today, these networks increasingly use sensational political content – regardless of its political slant – to build an audience and drive traffic to their websites, earning money for every visitor to the site. And like the politically motivated activity we've seen, the "news" stories or opinions these accounts and Pages share are often indistinguishable from legitimate political debate. This is why it's so important we look at these actors' behavior – such as whether they're using fake accounts or repeatedly posting spam – rather than their content when deciding which of these accounts, Pages or Groups to remove.

Today, we're removing 559 Pages and 251 accounts that have consistently broken our rules against spam and coordinated inauthentic behavior. Given the activity we’ve seen — and its timing ahead of the US midterm elections — we wanted to give some details about the types of behavior that led to this action. Many were using fake accounts or multiple accounts with the same names and posted massive amounts of content across a network of Groups and Pages to drive traffic to their websites. Many used the same techniques to make their content appear more popular on Facebook than it really was. Others were ad farms using Facebook to mislead people into thinking that they were forums for legitimate political debate.

Of course, there are legitimate reasons that accounts and Pages coordinate with each other — it's the bedrock of fundraising campaigns and grassroots organizations. But the difference is that these groups are upfront about who they are, and what they're up to. As we get better at uncovering this kind of abuse, the people behind it — whether economically or politically motivated — will change their tactics to evade detection. It's why we continue to invest heavily, including in better technology, to prevent this kind of misuse. Because people will only share on Facebook if they feel safe and trust the connections they make here.
