A recent investigative report released online by the Tech Transparency Project (TTP) concluded that dozens of white supremacist groups are still using Facebook as a safe haven to spread extremist content.
Similarly, some far-left-leaning groups linked to advocates of the Democratic National Committee have turned to astroturfing, possibly in defiance of the strict policies and enforcement efforts of the social networking sites implicated in their automated activities.
But the TTP study of far-right groups found that online extremists have used the social platform to stoke fear and recruit amid the recent coronavirus pandemic.
“The findings, more than two years after Facebook hosted an event page for the deadly ‘Unite the Right’ rally in Charlottesville, Virginia, cast doubt on the company’s claims that it’s effectively monitoring and dealing with hate groups,” a TTP news release reads.
“What’s more, Facebook’s algorithms create an echo chamber that reinforces the views of white supremacists and helps them connect with each other.”
For the study, the research team used designations from the Anti-Defamation League and the Southern Poverty Law Center to identify 221 white supremacist groups. Just over half of those groups were active on the social networking site.
Among those more than 200 far-right groups, the TTP team reportedly uncovered 153 Facebook Pages and four Facebook Groups. Some of the pages had remained active for nearly ten years, and a few of the groups or organizations returned to the platform even after being banned.
“TTP found that 51% (113) of the organizations examined had a presence on Facebook in the form of Pages or Groups. Of the 113 hate groups with a presence, 34 had two or more associated Pages on Facebook, resulting in a total of 153 individual Pages and four individual Groups,” according to the TTP news release.
Many of the extremist pages featured symbolism and characteristics associated with neo-Nazism, the neo-Confederate movement, and white nationalism.
The TTP team says its findings highlight a growing controversy over Facebook’s content moderation system, which it attributes largely to flawed artificial intelligence and to the human moderators charged with flagging such content.
“Relying on users to identify objectionable material doesn’t work well when the platform is designed to connect users with shared ideologies, experts have noted, since white supremacists are unlikely to object to racist content they see on Facebook,” the TTP team argued.
“A lot of Facebook’s moderation revolves around users flagging content. When you have this kind of vetting process, you don’t run the risk of getting thrown off Facebook.”
TTP is a nonprofit watchdog organization launched in 2016 that uses its team of researchers and data scientists to probe malfeasance. It describes itself as a nonpartisan research initiative aimed at transparency and accountability in corporate and government entities.