BANGKOK/BEIRUT — Facebook’s decision to allow hate speech against Russians due to the war in Ukraine breaks its own rules on incitement, and shows a “double standard” that could hurt users caught in other conflicts, digital rights experts and activists said.
Facebook owner Meta Platforms will temporarily allow Facebook and Instagram users in some countries to call for violence against Russians and Russian soldiers in the context of the Ukraine invasion, Reuters reported last week.
It will also allow praise for a right-wing battalion “strictly in the context of defending Ukraine,” in a decision that experts say demonstrates the platform’s bias.
The move represents a “glaring” double standard when set against Meta’s failure to curb hate speech in other war zones, said Marwa Fatafta at digital rights group Access Now.
“The disparity in measures in comparison to Palestine, Syria, or any other non-Western conflict reinforces that inequality and discrimination of tech platforms is a feature, not a bug,” said Ms. Fatafta, policy manager for the Middle East and North Africa.
“Tech platforms have a responsibility to protect their users’ safety, uphold free speech, and respect human rights. But this begs the question: whose safety and whose speech? Why were such measures not extended to other users?” she added.
Last year, Instagram and Twitter removed hundreds of posts by Palestinians protesting evictions from East Jerusalem; both companies later blamed technical errors.
Digital rights groups slammed the censorship, urging greater transparency on how moderation policies are set and ultimately enforced.
ONE POLICY FOR ALL?
Facebook has come under fire for failing to curb incitement in conflicts from Ethiopia to Myanmar, where United Nations investigators say it played a key role in spreading hate speech that fuelled violence against Rohingya Muslims.
“Under no circumstance is promoting violence and hate speech on social media platforms acceptable, as it could hurt innocent people,” said Nay San Lwin, co-founder of advocacy group Free Rohingya Coalition, who has faced abuse on Facebook.
“Meta must have a strict policy on hate speech regardless of the country and situation — I don’t think deciding whether to allow promoting hate or calls for violence on a case-by-case basis is acceptable,” he told the Thomson Reuters Foundation.
Scrutiny of how Meta tackles abuse on its platforms intensified after whistleblower Frances Haugen leaked documents showing the problems Facebook encounters in policing content in countries that pose the greatest risk to users.
In December, Rohingya refugees filed a $150 billion class-action complaint in California, arguing that Facebook’s failure to police content and its platform’s design contributed to violence against the minority group in 2017.
Meta recently said it would “assess the feasibility” of commissioning an independent human rights assessment into its work in Ethiopia, after its oversight board recommended a review.
In a report on Wednesday, Human Rights Watch said tech firms must show that their actions in Ukraine are “procedurally fair,” and avoid any “arbitrary, biased, or selective decisions” by basing them on clear, established, and transparent processes.
In the case of Ukraine, Meta said that native Russian and Ukrainian speakers were monitoring the platform round the clock, and that the temporary change in policy was to allow for forms of political expression that would “normally violate” its rules.
“This is a temporary decision taken in extraordinary and unprecedented circumstances,” Nick Clegg, president of global affairs at Meta, said in a tweet, adding that the company was focused on “protecting people’s rights to speech” in Ukraine.
Russia has blocked Facebook, Instagram, and Twitter.
And Meta’s new tack underlines how hard it is to write rules that work universally, said Michael Caster, Asia digital program manager at Article 19, a human rights organization.
“While the policies of a global corporation should be expected to change slightly from country to country, based on ongoing human rights impact assessments, there also needs to be a degree of transparency, consistency and accountability,” he said.
“Ultimately, Meta’s decisions should be shaped by its expectations under the UN Guiding Principles on Business and Human Rights, and not what is most economical or logistically sound for the company,” he said in emailed comments.
For Wahhab Hassoo, a Yazidi activist who has campaigned to hold social media firms accountable for failing to act against Islamic State (ISIS) members using their platforms to trade Yazidi women and girls, Facebook’s moves are deeply troubling.
Mr. Hassoo’s family had to pay $80,000 to buy the release of his niece from the jihadists, who abducted her in 2014 then offered her “for sale” in a WhatsApp group.
“I am shocked,” said Mr. Hassoo, 26, of Meta’s decision to allow hate speech against Russians.
“When they can make certain decisions unilaterally, they can basically promote propaganda, hate speech, sexual violence, human trafficking, slavery and other forms of human abuse related content — or prevent it,” he said.
“The last part is still missing.”
Mr. Hassoo and fellow Yazidi activists compiled a report that urged the United States and other nations to probe the role social media platforms including Facebook and YouTube played in crimes against their minority Yazidi community.
Meta’s actions on Ukraine confirm what their research showed, said Mr. Hassoo, who resettled in the Netherlands in 2012.
“They can promote or ban what fits in their interests and what they find important,” Mr. Hassoo said. “It is not fair that a company can decide on what’s good and what’s not.” — Rina Chandran and Maya Gebeily/Thomson Reuters Foundation