Population & Society

In the Run-up to the 2024 Election, Indonesia Fights Against False Information

Thursday, March 2, 2023

The Indonesian government has implemented legal sanctions and regulations on misinformation and disinformation. Nurma Fitrianingrum of the Tifa Foundation assesses the state’s efforts to address the problem, the concerns of tech platforms, and the challenge posed by the 2024 general election.

Stop Hoax Festival in Yogyakarta, June 2019: Indonesia has witnessed the widespread distribution of disinformation and misinformation on platforms, particularly during recent elections and the pandemic (Credit: Hariyanto Surbakti / Shutterstock.com)

Indonesia has witnessed the widespread distribution of disinformation and misinformation on platforms. From fake news in the 2017 Jakarta gubernatorial election and the 2019 presidential election to the infodemic of false or unreliable information about Covid-19, the country has experienced the consequences of the dissemination of falsehoods. 

Fake news is not a new phenomenon in Indonesia. It has been a means of propaganda since the country’s independence in 1945 and even before then. But the emergence and expansive use of social media have intensified both the speed and volume of false information distribution. At the beginning of this year, there were 212.9 million internet users in Indonesia, or around 77 percent of the country’s population. For most of these citizens, the internet means social media or social-networking sites accessed on their smartphones; many have never used a laptop or desktop computer to get online.

Vague definition of prohibited content

The law in Indonesia does not distinguish between misinformation and disinformation – both fall under the term “false information”. Under Indonesia’s criminal code, spreading false information, including on the internet, is a criminal offence. Online dissemination is further governed by the Electronic Information and Transactions (EIT) Law. Under these regulations, relevant authorities can request that false information on the internet be removed.

In 2020, the Ministry of Communication and Informatics (MOCI, referred to in Indonesia as Kominfo) enacted a regulation (MR5) which also covers moderation of “prohibited content” on tech platforms. Such content is defined as material that (a) violates existing laws and regulations, (b) provides information on or access to prohibited content, or (c) creates a public disturbance and disturbs public order. The regulation provides the legal underpinnings and the technical procedure for the removal of misinformation and disinformation. The wording, however, is ambiguous and open to many interpretations, raising the concerns of civil society organizations. They have criticized the vagueness of the regulation, fearing that the government could misuse it to limit freedom of speech in Indonesia, which has been experiencing a shrinking civic space and democratic backsliding in recent years.

Ironclad yet flawed regulation

The content moderation mechanism in the MOCI regulation is based on Germany’s Network Enforcement Act (known as NetzDG for its German title and sometimes referred to as the “Facebook Act”). The Indonesian government, however, wrote even stricter procedures into its regulation and imposed harsher consequences on platforms that fail to comply. It requires platforms to safeguard their spaces from prohibited content, including misinformation, using their own mechanisms such as community guidelines and flagging. More important is MOCI’s authority to ask a platform directly to remove content. Such a request may originate from other Indonesian government institutions, law enforcement agencies, the public, or from MOCI itself.

Social media spread and speed: Nearly 80 percent of the Indonesian population are internet users, most getting online on their smartphones (Credit: kalilipatvideoart / Shutterstock.com)

MOCI has its own surveillance unit and tools that patrol, monitor and identify prohibited content. The decision as to whether any content qualifies as prohibited is made by MOCI alone, without any transparent and accountable due process. Upon receiving a request from MOCI to take down online material, a platform has 24 hours to comply, or just four hours if the content pertains to terrorism, child pornography or a disturbance of public order. If the platform fails to do so, it faces administrative fines and possibly the blocking of access to its service.

The regulation is full of problems. It lacks a clear definition of what prohibited content or misinformation is, raising the risk that the vagueness might be exploited by vested interests. This puts platforms in a difficult position when making decisions. In addition, the procedure for content removal is very top-down, with MOCI leaving almost no room for due process, accountability or appeal.

Moreover, the short time given for removing content deemed to be prohibited could be extremely challenging, especially in cases that fall within gray areas or involve mitigating contexts. The rush to act on an order could result in the removal of perfectly lawful content. The government has not considered a platform’s size, capacity and resources in determining how quickly prohibited content has to be removed. The regulation also provides only for the taking down of content, ignoring other sanction options that a platform might have available and might already use, such as flagging, issuing warnings, demoting and demonetizing.

Bumpy road to implementation

Given the concerns noted above, platforms have complained about the implementation of the prohibited content regulation, which has not yet fully come into effect. MOCI has finalized the standard operating procedure for removal but has not made that document public. MOCI and the platforms have also not settled on the formula for calculating the fines that would be imposed for non-compliance.

In their approach to moderating content, platforms use other means besides removal. They also consider the content that remains online and content that the public might post in the future. They aim to ensure good-quality content using incentive mechanisms such as recognition and monetary rewards. They also employ disincentive tools to discourage people from posting misinformation – warnings, additional information that clarifies or qualifies the misinformation, links to alternative information, or lists of debunked claims.

Prior to the 2020 regulation, the government of Indonesia had already requested the removal of a significant amount of content that it deemed to be negative or unlawful. Google reports that since 2011 the government has sent it 872 removal requests covering 278,221 items. Indonesia is among the top ten countries in the world for its number of removal requests and in the top three for the number of items its government has asked to be removed. Google, however, has not always complied.

As for Meta (Facebook), its transparency report specifically indexes its restriction of access to content pertaining to misinformation as a response to MOCI’s requests. The amount of moderation applied to misinformation increased significantly in 2021 due to the Covid-19 infodemic. Twitter, meanwhile, received 291 legal demands for removal in 2020 and 269 requests in 2021, while its compliance rate rose from below 30 percent in 2020 to 59 percent in 2021.

Democracy at risk: Opposition candidate supporters put up a banner in Jakarta during the 2019 presidential election campaign (Credit: Harismoyo / Shutterstock.com)

From the transparency reports of global platforms, it is safe to conclude that none of them blindly and fully acquiesce to the Indonesian government’s requests. While compliance rates differ from platform to platform, no company showed a 100 percent rate. In taking action in response to government requests, platforms take internal policy, community guidelines and local laws into consideration. The question going forward is: What will their attitude be once the MOCI regulation takes full effect?

Preparing for the biggest misinformation battlefield – the 2024 general election

With the regulation expected to be fully in effect soon, stakeholders are preparing for an extensive influx of misinformation in the coming months as Indonesia’s next general election approaches. On February 14, 2024, Indonesia will hold one of the biggest and most complicated single-day ballots in the world, as citizens cast their votes for president and for members of the national, provincial and local legislatures. Social media will be a key battleground, both for candidates seeking to win votes and for efforts to bring candidates down using mis/disinformation.

Having learnt the impact of misinformation and disinformation from the 2016 and 2020 elections in the US, the Brexit referendum in the UK in 2016, and the presidential election in the Philippines in 2022, the government, tech platforms, press and media organizations, and civil society organizations in Indonesia are working individually and collaboratively to fight misinformation and disinformation. MOCI has been coordinating with tech platforms to monitor and take down offending content. 

At the same time, civil society organizations have been combating disinformation by various means such as educating voters, debunking falsehoods, and creating and promoting peace-and-order narratives to minimize the impact of misinformation on the deepening polarization in the country. With the current political atmosphere in Indonesia, the general election will be the biggest challenge for platforms and the government in the fight against misinformation and disinformation. It will be the supreme test of the effectiveness of their approaches. 

This article is based on a presentation by the author at a Digital Asia Hub Platform Futures Roundtable on “Social Media’s Mis/Disinformation Problem”, which was held online on February 9, 2023.   

Opinions expressed in articles published by AsiaGlobal Online reflect only those of the authors and do not necessarily represent the views of AsiaGlobal Online or the Asia Global Institute.

Author

Nurma Fitrianingrum

Tifa Foundation

Nurma Fitrianingrum is a good governance project officer at the Tifa Foundation in Indonesia, established in 2000 with a vision to realize an open society in the post-dictatorship era. Previously, she worked as a researcher and policy analyst in the Department of Public Policy and Management at Gadjah Mada University in Yogyakarta. There, she started the podcast series “Policy Talk” as a new approach to bring discussions of the latest public policy issues to wider audiences outside of academia. Nurma also worked as a researcher at the Institute for Research and Empowerment (IRE) in Yogyakarta, where she focused on village development and women’s empowerment in various regions across Indonesia. She holds a master’s degree in public policy with a specialization in media and communications from Central European University, where she worked as a student researcher at the Center for Media, Data and Society.

