In the Asia-Pacific region, momentum has been building around legislation to tackle digital violence and other forms of online harm. In January 2022, Australia’s Online Safety Act came into force, introducing mechanisms to remove harmful content, protect adults from online abuse, and augment existing protections for children against cyberbullying. Meanwhile, the Singapore government is set to introduce legally binding codes of practice requiring technology platforms to establish accessible systems for users to report harmful content, to take necessary remedial action, and to regularly publish reports on the efficacy of their measures. In April 2022, the Indonesian government passed the long-awaited Sexual Violence Bill, which recognizes nine forms of sexual violence as punishable acts, including physical and nonphysical sexual harassment, forced marriage, and, notably, cyber sexual harassment. Beyond these examples, other Asian nations are considering similar legislation.
Online harm and digital violence can assume many forms, have lasting negative effects on victims, and are rapidly becoming a societal scourge. Perpetrators can intimidate and threaten their victims on every available online platform, through different communication modes and types of content. Trolls can send victims sexually explicit images on direct messaging platforms such as WhatsApp and Telegram. Participants in discussion forums such as Reddit can disseminate misogynistic and sexist memes disguised as dark humor. Gamers can perpetrate acts of sexual aggression against vulnerable players, making unwanted sexual advances, telling rape jokes, or mounting virtual assaults in games such as World of Warcraft and Rape Day. Even the emerging metaverse has been acknowledged to have a “groping problem.” The possibilities are endless, but so is the trauma: victims can suffer mental illness, reputational damage, and fear for their personal safety, and may become reluctant to go online at all, constraining their freedom to enjoy online interaction.
With so many life-changing innovations born every day, one might think that online harm and digital violence could be technologically resolved, or at least managed: bots to detect adverse content, verification systems to ban bad actors, automated prompts to caution against aggression, and so on. But how close are we, truly, to achieving a desirable level of online safety through technological solutions?
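To give a sense of how rudimentary the baseline for such tools can be, the sketch below shows the simplest form of a content-detection bot: a keyword filter. This is an illustrative assumption, not how any particular platform works; the pattern list, the `flag_message` function, and the threshold of "any match" are all placeholders, and real moderation systems rely on machine-learned classifiers precisely because keyword lists are trivial to evade.

```python
import re

# Illustrative placeholder patterns only; a production blocklist would be
# far larger, multilingual, and continuously updated.
FLAGGED_PATTERNS = [
    r"\bkill yourself\b",
    r"\byou deserve to be hurt\b",
]

def flag_message(text: str) -> bool:
    """Return True if the message matches any flagged pattern.

    A toy rule-based detector: lowercase the input, then search for
    each pattern. Real systems score context and intent, which simple
    keyword matching cannot capture.
    """
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in FLAGGED_PATTERNS)

print(flag_message("Have a nice day"))  # False
print(flag_message("Kill yourself"))    # True: case-insensitive match
```

The gap between this sketch and effective moderation, handling sarcasm, coded language, images, and livestreams, is exactly where the question of technological feasibility becomes hard.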