SENATORS heard testimony on Wednesday on how online platforms should handle disinformation and harmful messages, diving into possible freedom-of-speech issues and the removal of the immunity protections that US law currently grants such platforms should they host damaging content.
At a hearing of the Senate Committee on Constitutional Amendments and Revision of Codes, Maria A. Ressa, Rappler chief executive officer and Nobel Peace Prize laureate, warned of potential censorship if content itself is curtailed, calling instead for greater policing of algorithms, which have the power to amplify even messages that are clearly disinformation.
“…it’s not about the content, it’s about the operating system run by algorithms,” she said. “Don’t intervene in the content because then you can actually be accused of censorship. But if you go to the algorithms of amplification… everyone can say what they think, but what your neighbor said never reached broadcast scale until today, because there have been no guard rails on the distribution of lies.”
Minority Leader Franklin M. Drilon cited the position taken by Retired Associate Justice Antonio T. Carpio, who believes that online platforms should be considered publishers and that they should disclose the real identities of people who post the content.
The platform-publisher distinction Mr. Carpio made tracks the debate in the US over Section 230 of the United States Communications Decency Act, which currently renders website platforms immune should third-party content posted on the platform prove to be damaging. Publishers, on the other hand, are held responsible for all content appearing on their publications or sites.
“We are grappling with solutions here,” Mr. Drilon said. “As in every freedom, there should be responsibility.”
The quick solution, Ms. Ressa said, is to hold platforms accountable for what they allow to spread. “When you do that, I bet you that you would automatically see a shrinking of information operations.”
She was referring to disinformation campaigns that seek to game social-media algorithms in order to reach targeted audiences more effectively, and which have allegedly influenced the outcome of elections worldwide.
Roy Abrams, law enforcement manager for Asia Pacific at Meta Platforms, Inc., formerly known as Facebook, said the company’s concern is not to be seen as “the arbiters of truth, (because) that’s not our role in society… that’s why we employ businesses like Rappler to be our third-party fact-checker.”
Rappler and Vera Files were appointed Facebook’s Philippine fact checkers in 2018. Their role includes evaluating flagged posts for the quality of the information contained, and issuing a ruling on whether claims are true, false, or misleading.
“When it comes to hate speech, we don’t allow it, and quite frankly the algorithms that we’re talking about, there’s nothing inherently evil about it,” Mr. Abrams added, noting that algorithms helped numerous small businesses in the Philippines survive the pandemic and helped rid the country of terror-related content. “But it is true that we have to continue to improve on them.”
Jean-Jacques Sahel, Asia-Pacific information policy lead at Google, said: “We have to maintain this careful balance between the internet being a place for free expression… but also make sure that we preserve user safety, that we make it a safe place.” — Alyssa Nicole O. Tan