Asian innovators fight online hate and lies as tech giants fail

Fed up with the constant flow of fake news on her family’s WhatsApp group chats in India – ranging from a water crisis in South Africa to rumors surrounding the death of a Bollywood actor – Tarunima Prabhakar has built a simple tool to fight misinformation.

Prabhakar, co-founder of India-based tech company Tattle, archived content from fact-checking sites and news outlets, and used machine learning to automate the fact-checking process.
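The article does not describe Tattle's internals, but the core idea – matching an incoming message against an archive of already-fact-checked claims – can be sketched with a simple bag-of-words similarity search. The function names, the archive structure and the threshold below are illustrative assumptions, not Tattle's actual code.

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[t] * wb[t] for t in wa)
    norm = (math.sqrt(sum(v * v for v in wa.values()))
            * math.sqrt(sum(v * v for v in wb.values())))
    return dot / norm if norm else 0.0

def match_fact_checks(message: str, archive: list, threshold: float = 0.3) -> list:
    """Return archived fact-checks whose claim text resembles the message,
    most similar first. Each archive entry is a dict with a 'claim' key."""
    scored = [(cosine_similarity(message, item["claim"]), item) for item in archive]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for score, item in scored if score >= threshold]
```

A production system would use multilingual embeddings rather than word overlap, but the retrieval step – score every archived claim, keep the close matches – is the same.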

The online tool is available for students, researchers, journalists and academics, she said.

“Platforms like Facebook and Twitter are under scrutiny for misinformation, but not WhatsApp,” she said of the messaging app owned by Meta, Facebook’s parent company, which has more than 2 billion monthly active users, including about half a billion in India alone.

“The tools and methods used to verify misinformation on Facebook and Twitter do not apply to WhatsApp, nor are they suitable for Indian languages,” she told the Thomson Reuters Foundation.

WhatsApp rolled out measures in 2018 to limit the forwarding of messages, after rumors spread over the messaging service led to several murders in India. It also removed the quick-forward button next to media messages.

Tattle is part of a growing number of initiatives across Asia tackling online misinformation, hate speech and abuse in local languages, using technologies such as artificial intelligence alongside crowdsourcing, hands-on training and engagement with civil society groups to meet community needs.

While tech companies such as Facebook, Twitter and YouTube face increasing scrutiny over hate speech and disinformation, they have not invested enough in developing countries and lack moderators with the necessary language skills and knowledge of local events, according to experts.

“Social media companies don’t listen to local communities. Nor do they take context into account – cultural, social, historical, economic, political – when moderating user content,” said Pierre Francois Docquir, head of media freedom at Article 19, a human rights advocacy group.

“It can have a dramatic impact, online and offline, increasing polarization and the risk of violence,” he added.

Local initiatives are essential

While the impact of online hate speech has been documented in several Asian countries in recent years, analysts say tech companies have not scaled up their resources to improve content moderation, especially in local languages.

United Nations rights investigators said in 2018 that Facebook use had played a key role in spreading hate speech that fueled violence against Rohingya Muslims in Myanmar in 2017, after a military crackdown on the minority community.

Facebook said at the time that it was fighting misinformation and investing in Burmese language speakers and technology.

In Indonesia, “significant hate speech” online targets religious and racial minority groups, as well as LGBTQ+ people, with paid bots and trolls spreading misinformation aimed at deepening divisions, according to an Article 19 report published in June.

“Social media companies…need to work with local initiatives to address the enormous challenges of governing problematic content online,” said Sherly Haristya, a researcher who helped author the Indonesia content moderation report with Article 19.

One such grassroots initiative is that of the Indonesian non-profit Mafindo, which, with support from Google, runs workshops to train citizens – from students to stay-at-home moms – in fact-checking and spotting misinformation.

Mafindo, or Masyarakat Anti Fitnah Indonesia – the Indonesian Anti-Slander Society – offers training in reverse image search, video metadata and geotagging to help verify information.
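Geotagging checks of the kind Mafindo teaches typically start from a photo's EXIF GPS data, which stores coordinates as degrees, minutes and seconds plus a hemisphere reference. A minimal sketch of the conversion to the decimal degrees used by mapping tools – the function name here is illustrative, not from Mafindo's curriculum:

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert EXIF-style degrees/minutes/seconds plus an N/S/E/W
    hemisphere reference into signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # South and West hemispheres are negative in decimal notation.
    return -value if ref in ("S", "W") else value
```

In practice a fact-checker would extract the raw values with a tool such as exiftool, convert them, and drop the result into a map to see whether the claimed location matches.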

The non-profit organization has a professional fact-checking team which, aided by volunteer citizens, debunked at least 8,550 hoaxes.

Mafindo has also built a Bahasa Indonesia fact-checking chatbot called Kalimasada, introduced just before the 2019 elections. It is accessible via WhatsApp and has around 37,000 users – a small slice of the country’s more than 80 million WhatsApp users.

“Older people are particularly vulnerable to hoaxes, misinformation and fake news on platforms because they have limited technological skills and mobility,” said Santi Indra Astuti, president of Mafindo.

“We teach them to use social media, to protect personal data and to think critically about hot topics: during Covid it was misinformation about vaccines, and in 2019 it was about elections and political candidates,” she said.

The Challenges of Abuse Detection

Governments across Asia are tightening rules on social media platforms, banning certain types of posts and demanding the prompt removal of posts deemed objectionable.

Yet hate speech and abuse, especially in local languages, often go unchecked, said Tattle’s Prabhakar, who also created a tool called Uli – which means scissor in Tamil – to detect gender-based online abuse in English, Tamil and Hindi.

Tattle’s team collected a list of offensive words and phrases commonly used online, which the tool then obscures on users’ timelines. People can also add words of their own.

“Abuse detection is very difficult,” Prabhakar said. Uli’s machine learning feature uses pattern recognition to detect and hide problematic posts from a user’s feed, she explained.
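The word-list half of this approach – before any machine learning is involved – amounts to a user-side filter that masks known slurs in a feed. A minimal sketch, with a placeholder blocklist and function names that are illustrative, not Uli's actual code:

```python
import re

# Hypothetical miniature blocklist for illustration; Uli's real list is
# crowdsourced and covers English, Tamil and Hindi terms.
BLOCKLIST = ["slur1", "slur2"]

def redact(text: str, blocklist=None) -> str:
    """Replace blocklisted words (whole-word, case-insensitive)
    with asterisks of the same length."""
    words = blocklist if blocklist is not None else BLOCKLIST
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, words)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(lambda m: "*" * len(m.group(0)), text)
```

Because the filter runs on the reader's side, each user can extend the list – which is what makes the approach bottom-up, as Prabhakar notes below.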

“Moderation happens at the user level, so it’s a bottom-up approach as opposed to the platforms’ top-down approach,” she said, adding that they would also like Uli to be able to detect abusive memes, images and videos.

In Singapore, Empathly, a software tool developed by two university students, takes a more proactive approach, working like a spell checker when it detects abusive words.

Aimed at businesses, it can detect abusive terms in English, Hokkien, Cantonese, Malay and Singlish – or Singaporean English.

“We have seen the harm that hate speech can cause. But Big Tech tends to focus on English and its users in English-speaking markets,” said Timothy Liau, founder and CEO of Empathly.

“So there is room for local interventions – and as locals we understand the culture and the context a bit better.”
