The regulation is riddled with problems. It lacks a clear definition of what constitutes prohibited content or misinformation, creating a vagueness that vested interests could exploit and leaving platforms in a difficult position when making moderation decisions. In addition, the removal procedure is highly top-down: MOCI allows almost no room for due process, accountability or appeal.
Moreover, the short deadline for removing content deemed prohibited could prove extremely challenging, especially in gray-area cases or those with mitigating contexts. Rushing to comply with an order could result in the removal of perfectly lawful content. In setting these deadlines, the government has not considered a platform's size, capacity or resources. The regulation also provides only for taking content down, ignoring other tools that platforms may already use, such as flagging, issuing warnings, demoting and demonetizing.
Bumpy road to implementation
Given these concerns, platforms have pushed back on the implementation of the prohibited content regulation, which has not yet fully come into effect. MOCI has finalized the standard operating procedure for removal but has not made the document public, and MOCI and the platforms have not yet settled on the formula for calculating fines for non-compliance.
In moderating content, platforms use means beyond removal, considering both the content left online and content the public might post in the future. They encourage good-quality content through incentive mechanisms such as recognition and monetary rewards, and they employ disincentive tools to discourage misinformation: warnings, contextual information that clarifies or qualifies a post, alternative information, or lists of debunked claims.
Even prior to the 2020 regulation, the Indonesian government had requested the removal of a significant amount of content it deemed negative or unlawful. Google reports that since 2011 the government has sent it 872 removal requests covering 278,221 items, placing Indonesia among the top ten countries worldwide by number of removal requests and in the top three by number of items requested for removal. Google, however, did not always comply.
As for Meta (Facebook), its transparency report specifically indexes restrictions of access to misinformation content as responses to MOCI's requests, and such moderation increased significantly in 2021 amid the COVID-19 infodemic. Twitter, meanwhile, received 291 legal removal demands in 2020 and 269 in 2021, but its compliance rate rose from below 30 percent in 2020 to 59 percent in 2021.