Australian lawyers call on Facebook to crack down on anti-Muslim comments

A group of lawyers found dozens of Facebook comments containing hateful rhetoric about Muslims remained on the social media platform despite being reported.

A team of Australian lawyers is calling on Facebook to urgently fix its moderation policy after finding dozens of violent anti-Muslim comments had not been removed.

The findings of the investigation by lawyers from the Australian Muslim Advocacy Network and Sydney-based firm Birchgrove Legal were detailed in a letter sent to Facebook on Wednesday, days before the first anniversary of the Christchurch terror attacks.

Rita Jabri-Markwell, a lawyer with the advocacy network, told SBS News that the group reported 71 comments containing broad anti-Muslim statements posted on the pages of known alt-right groups, but only 14 were removed.

Of the remaining comments, 45 were deemed to comply with Facebook’s community standards and another 14 received no response, she said.

One comment that described Muslims as “parasites” and called for them to be “culled” received no response from Facebook when reported. Other comments using similar language had been removed.

Another comment that invoked the need for a second “final solution”, a term referring to the mass murder of Jewish people during the Holocaust, was found not to contravene Facebook’s community standards.

“We feel that there’s a gap and we want to work with them [Facebook], because this stuff can’t be in the mainstream, it can’t be normal,” Ms Jabri-Markwell said.

“We are very aware of the hate incidents that are happening in public places, particularly affecting women in hijabs, and this all starts online.”

In the aftermath of the Christchurch terror attacks, which saw the murder of 51 Muslim worshippers live-streamed on social media, Facebook signed on to the Christchurch Call - a multinational pledge to address the proliferation of extremist content online.

As a result, the company announced it would ramp up its rules against hateful content to enable the removal of posts supporting white nationalism and separatism.

Since the 15 March attack, Facebook has also banned a number of high-profile far-right commentators including US white nationalist Paul Nehlen, Milo Yiannopoulos and conspiracy theorist Alex Jones.

In a statement to SBS News, a Facebook company spokesperson said they had tripled their safety and security team to more than 35,000 people, invested in technology to identify hate speech before it is reported and banned over 200 white supremacist organisations.

"We also prohibit anyone from posting hateful content that targets people using violent or dehumanising speech, statements of inferiority, or calls for exclusion or segregation," the spokesperson said.

But Ms Jabri-Markwell said the group has seen no observable reduction in the number of hate pages and groups on the platform.

“These online communities are just becoming more and more emboldened in their hate all the time,” she said.

The group wants to “understand what they’re currently doing” and “why the processes aren’t picking up this stuff” so it can work with Facebook to improve its moderation, Ms Jabri-Markwell added.

A number of the comments included in the investigation did not specifically use the word “Muslims”, instead responding to external links about Islam with comments like “cull them” or “kill this filth”, which may mean they are not picked up by Facebook’s algorithm.

The group’s push for stronger comment moderation practices followed Melbourne University research that studied more than 41,000 social media posts to determine what narratives were being used by far-right groups in Victoria.

Completed in November 2018 - approximately five months before the Christchurch attacks - the research found that supporters of the anti-Islam movement within the far-right were the most prolific posters online.

The social media giant's guidelines define hate speech as a “direct attack” on people based on “protected characteristics” that include religious affiliation, ethnicity, national origin and race.

“We define attack as violent or dehumanising speech, statements of inferiority, or calls for exclusion or segregation,” the policy states.

A Facebook spokesperson said the company is continuing to work on its strategy to adapt to the far-right's methods.

"We know that some people will find new ways to communicate and spread harm both online and offline and we’re absolutely committed to doing everything we can to advance our work and share our progress.”


Published 13 March 2020 11:47am
Updated 13 March 2020 11:50am
By Maani Truu