YouTube updates policies vs hate speech, misinformation
YOUTUBE has updated its policies on hate speech and misinformation amid worldwide scrutiny of social media giants over their responsibility to curb abusive online behavior and to get accurate information across, especially about the coronavirus disease 2019 (COVID-19).
The updates include a refined definition of what the platform regards as hate speech, a misinformation algorithm, and a news shelf and health panel for COVID-19 news.
“Our commitment to openness doesn’t mean that everything goes, and it doesn’t mean that we don’t take responsibility for the content that is on YouTube,” Jennifer Flannery O’Connor, product management director of trust and safety at YouTube, said in a digital briefing on Sept. 3 via Google Meet.
“The policy development process and enforcement of those policies is sort of a process that is never done, it’s a dynamic living process, because the world evolves, the world changes and so the types of content that appear on YouTube are obviously going to change. This year is probably one of the most dramatic examples of that,” she explained, adding that while the company has always had policies around hate and harassment, it realized those policies “needed some updating.”
Ms. O’Connor cited YouTube’s hate policy as an example: it initially “prohibited any content that looks to incite violence or foment or incite hatred,” but the team has since introduced “more definition” around what fomenting hate is, specifically including the phrase “allegations of superiority of a particular group or inferiority of another group” to justify violence, discrimination, or segregation.
Also included is “language that might serve to dehumanize another person on the basis of their protected group characteristics like their race or nationality.”
YouTube currently has more than 10,000 people working on content screening. They work alongside machine learning to flag and remove content that violates YouTube’s safety guidelines.
FACEBOOK’S WOES
The announcement of YouTube’s policy updates comes at a time when Facebook, with its 1.69 billion users, is under fire for allegedly failing to remove or successfully police hate speech, allowing it to proliferate on its site. Such was the backlash that in July, major sponsors including Disney and Coca-Cola joined an advertising boycott over the social media giant’s failure to contain hate speech.
Facebook founder Mark Zuckerberg in May defended the company’s stance as supportive of free speech and said that Facebook is not an arbiter of truth. Several Facebook employees staged a walkout in June in protest after the company declined to moderate a post by US President Donald Trump about the George Floyd protests in which he said, “When the looting starts, the shooting starts.”
In India, Facebook faces another controversy over its hate speech policies after it admitted in August that it needed to do better at policing hate speech from the country’s right-wing leaders.
Other social media platforms such as Twitter and Reddit have also announced updates to their hate speech policies.
MISINFORMATION
Aside from beefing up its hate speech policies, YouTube also announced updates on how it screens misinformation.
“Alongside raising high quality content, we’ve also worked very hard to identify and reduce the recommendation of content that comes close to violating our content policies for content that contains harmful misinformation,” Woojin Kim, vice-president for product management at YouTube, said in the same conference.
Mr. Kim added that since launching its misinformation algorithm, YouTube has introduced “over 30 different changes to the program,” which have reduced the recommendations and watch time of this kind of content by over 70% in the US.
YouTube also places banners under videos from certain news channels to inform users about their backers or funders, as in the case of Singaporean news channel CNA: a banner under the channel’s videos indicates that Mediacorp (of which CNA is a member) is “funded in whole or in part by the Singaporean government.”
The company also created a COVID-19 news shelf, which surfaces the latest information about the disease from trusted sources, as well as a health panel that appears when users search for COVID-19 information.
YouTube has removed more than 200,000 videos that violated its misinformation policies, including videos claiming the virus is a hoax or recommending unsubstantiated cures for the disease.
The misinformation policies, Ms. O’Connor said, are “written agnostic of the speaker,” which means even if a government official is the one spreading misinformation, YouTube will remove the video.
“Now, we do have specific exceptions… because we believe often content presented in that context for example, in news, [it] provides the appropriate context so it may be in the public interest for people to know,” she explained.
“We did remove a substantial amount more misinformation in the last six months than we have historically, largely due to COVID-19 and I would say we both beefed up the reviewer team but also improved the technology,” Ms. O’Connor added.
And because information about the pandemic is rapidly changing and “different countries and different local health authorities have different perspectives and different recommendations,” YouTube has partnered with local health authorities in “80 to 85 countries” to localize “as much as possible to get the best advice for each of the countries,” instead of relying solely on the guidance of the World Health Organization (WHO), Ms. O’Connor said. — Zsarlene B. Chua