Technology

Why Asians Need Their Own Governance Structure for Artificial Intelligence

Thursday, December 3, 2020

Artificial intelligence is a boon to societies, governments and companies across the world. As Asia shifts from labor-intensive to knowledge-based economies, AI can be harnessed to accelerate development. But it can also be used for nefarious data collection and manipulation, warns philosopher Soraj Hongladarom of Chulalongkorn University, and it is vital that countries in the region collaborate to develop a set of regulations governing its use.

Artificial intelligence is now: Whenever a person searches for anything or conducts a transaction online, an AI machine is collecting and analyzing data (Credit: metamorworks / Shutterstock.com)

Artificial intelligence (AI) has pervaded almost every aspect of our lives. Those who might think that AI is still far away from their lives need only look at their mobile phones to see it at work. Whenever they search for anything, they are using an AI system somewhere on the internet, and there are countless ways in which programs or algorithms can manipulate data – even computer scientists can hardly imagine all the possibilities. AI used to occupy a lot of space in the news media until, of course, the Covid-19 pandemic took over. Nonetheless, the technology and its power are still there, and its reach and role in our lives will only grow.

The rapid evolution of AI has prompted scholars to speculate on whether – or when – the technology will become “conscious”, capable of everything that a human being can do, such as thinking, feeling, talking and understanding. The point at which machines will presumably reach and then surpass our collective intelligence and continue on their own without our intervention is known as the “technological singularity”, a term attributed to the polymath John von Neumann and expanded upon by the computer scientist Ray Kurzweil, who has estimated that this point will probably occur in the year 2045 – not all that far in the future. Of course, other scholars have different opinions, with some arguing it will never happen.

Even though the concept of technological singularity is still controversial, what is not disputed is that some forms of AI are already working and are having a major impact in many spheres of life all over the world. As with every kind of game-changing technology, this has prompted many to ponder about how to regulate its use. The automobile, for example, has been one of the most heavily regulated technologies ever. Consider traffic laws, emission standards, and many other rules and standards applying to vehicles. So, it is not surprising that regulations will have to be put in place for AI to become safe and beneficial to all of us.

The problem is that, as AI is a relatively new form of technology, there is no consensus on what form regulations should take. One of the main differences between AI and the automobile is that AI is a symbolic or semantic technology: what it does is manipulate and process symbols that have meanings. This has far-reaching implications. It means that AI can become intimately embedded in our lives because it can engage with our thoughts. The car, by contrast, cannot do this – unless, of course, it is equipped with the latest AI technology. A vehicle could, for example, be programmed to monitor the driver’s attention so that he or she does not become drowsy, which would certainly require a high level of AI. This shows how AI is semantic in the sense that it can interact with our thoughts and feelings.
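To make the driver-monitoring example concrete, here is a minimal sketch in Python of one way such a system might flag drowsiness. Everything in it is an assumption for illustration: it presumes an upstream computer-vision model that emits an eye-openness score between 0 and 1 for each video frame, and the threshold and window values are invented rather than drawn from any real product.

```python
from collections import deque

def drowsiness_monitor(eye_openness_stream, threshold=0.3, window=30):
    """Yield the frame index whenever the driver's eyes have stayed
    mostly closed over a sliding window of recent video frames.

    eye_openness_stream: iterable of floats in [0, 1], one per frame,
    assumed to come from some upstream computer-vision model.
    """
    recent = deque(maxlen=window)
    for frame_index, openness in enumerate(eye_openness_stream):
        recent.append(openness)
        # Only judge once a full window of observations has accumulated.
        if len(recent) == window and sum(recent) / window < threshold:
            yield frame_index

# Toy usage: eyes open for 30 simulated frames, then nearly closed.
frames = [1.0] * 30 + [0.1] * 30
for alert_frame in drowsiness_monitor(frames):
    print(f"Drowsiness alert at frame {alert_frame}")
    break
```

The specific numbers do not matter; the point is that the machine continuously interprets the driver’s state, which is what makes the technology semantic rather than merely mechanical.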

The fact that AI can deal directly with thought makes it very important that there be an effective way to regulate it; otherwise, if untrammeled, the technology could lead to thought or behavior control, threatening our dignity and our humanity. As social psychologist Shoshana Zuboff of Harvard Business School has shown, such control of thought and behavior becomes possible when AI manipulates the data traces we leave whenever we engage with one another online. Giant corporations such as Google collect a vast amount of data from users every day. By using AI, they can predict with great accuracy the likes and dislikes of an individual or a group and what they might do next. This is the most potent form of control, and hence the capacity to use AI to control behavior must be among the most closely monitored machine functions.
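As a toy illustration of how prediction emerges from digital traces, the sketch below builds a “model” that is nothing more than per-user click frequencies. The log, names and items are invented, and real recommendation systems are vastly more sophisticated, but the underlying principle – inferring likely future behavior from accumulated traces – is the same.

```python
from collections import Counter, defaultdict

# Invented click log: each entry is (user, item clicked).
click_log = [
    ("alice", "news"), ("alice", "news"), ("alice", "sports"),
    ("bob", "games"), ("bob", "games"), ("bob", "news"),
]

# Build a per-user profile of click frequencies.
profile = defaultdict(Counter)
for user, item in click_log:
    profile[user][item] += 1

def predict_next(user):
    """Predict the user's next click as their most frequent past click,
    along with its estimated probability."""
    counts = profile[user]
    item, n = counts.most_common(1)[0]
    return item, n / sum(counts.values())

print(predict_next("alice"))  # ('news', 0.666...)
```

Even this trivial counting already supports prediction; with billions of traces and far richer models, the accuracy Zuboff warns about follows.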

The use of AI to predict and control behavior also makes it an ideal tool for authoritarian regimes. Here is where Asia’s contribution to AI regulation could be vital, given the prevalence of authoritarian governments across the region. It is tempting to see a link between China’s recent push toward AI development and its rigid one-party system: On the one hand, AI research and development in the country is buzzing; on the other, the Communist Party government in Beijing has been utilizing the technology for surveillance and control of its population.

China’s central government is using AI technology for the surveillance and control of its population (Credit: Zapp2Photo / Shutterstock.com)

Without a clear set of mechanisms and guidelines by which the use of AI can be scrutinized and each possible use identified, it would be impossible to determine the legality or acceptability of any specific deployment of the technology. China’s central government is widely perceived by the global community to be misusing the technology, but the authorities in Beijing are aware that they need to be seen to be taking action: On December 2, the Cyberspace Administration of China published draft guidelines for the collection of personal data by mobile apps.

Without smart and balanced guidelines for its use, it is very difficult to see how AI could progress to the point of technological singularity. A political authority may thwart AI’s development, thus preventing the singularity, or it could somehow come to terms with the technology and fully control or manage its use. But neither scenario is likely.

China is not the only country in the region where AI guidelines are needed. In Thailand, AI has been used in the response to the Covid-19 pandemic. First, it was employed to select those eligible for government subsidies. When the government declared a nationwide lockdown in March, many were caught unprepared as their businesses were shut down. The poor had long been living day to day, and when their workplaces were closed, they had no savings on which to rely. The government tried to help by distributing 5,000 baht (US$167) a month for three months to each eligible person, and AI was utilized to help identify those most in need of the payments. Many mistakes were made, however: Some who were ineligible received the handout, while many who were eligible did not.
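The screening logic actually used in the Thai scheme has not been published, so the following is purely a schematic illustration, with invented records and field names, of how an automated eligibility filter can err in both directions when its proxy signal – here, a recorded formal income – diverges from people’s real circumstances.

```python
# Hypothetical filter: approve anyone whose recorded formal income
# falls below a cutoff. All records and the cutoff are invented.
CUTOFF_BAHT = 10_000  # illustrative monthly-income threshold

applicants = [
    {"name": "A", "recorded_income": 4_000,  "actually_needs_aid": True},
    {"name": "B", "recorded_income": 25_000, "actually_needs_aid": False},
    # C is an informal worker whose real income collapsed in the
    # lockdown, but the database still shows an old, higher figure.
    {"name": "C", "recorded_income": 15_000, "actually_needs_aid": True},
    # D has unrecorded side income, so the record understates means.
    {"name": "D", "recorded_income": 3_000,  "actually_needs_aid": False},
]

for a in applicants:
    approved = a["recorded_income"] < CUTOFF_BAHT
    verdict = "correct"
    if approved and not a["actually_needs_aid"]:
        verdict = "false positive: ineligible person paid"
    elif not approved and a["actually_needs_aid"]:
        verdict = "false negative: eligible person missed"
    print(a["name"], "approved" if approved else "rejected", "-", verdict)
```

Stale or incomplete records, as much as the algorithm itself, produce both kinds of error – which is precisely why such systems need scrutiny.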

Second, AI has been used in Thai Chana, the contact-tracing app that the government developed. When it was first introduced, anyone entering a particular location such as a shopping mall was required to scan a QR code. The aim was to collect data on the movements of each individual so that when a coronavirus case was diagnosed, health authorities could trace with whom the patient had been in contact. Since there have been very few new cases of Covid-19 in Thailand for many months, however, fear of the disease has decreased dramatically and people have become less vigilant. Add to this the perception that the contact-tracing app is invasive, and many people no longer bother to scan the codes, severely diminishing the usefulness of the technology.
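Thai Chana’s internal design is likewise not public, so the sketch below is only a plausible reconstruction, with invented record fields and data, of the basic matching step that any QR check-in system enables: finding users whose visits overlapped with a diagnosed patient’s at the same venue within a time window.

```python
from datetime import datetime, timedelta

# Invented check-in records of the kind a QR-code app collects:
# (user, venue, check-in time).
checkins = [
    ("patient", "mall_A",   datetime(2020, 11, 1, 14, 0)),
    ("user_1",  "mall_A",   datetime(2020, 11, 1, 14, 30)),
    ("user_2",  "mall_A",   datetime(2020, 11, 1, 18, 0)),
    ("user_3",  "market_B", datetime(2020, 11, 1, 14, 10)),
]

def trace_contacts(records, patient, window=timedelta(hours=2)):
    """Return users who checked in at the same venue as the patient
    within `window` of one of the patient's check-ins."""
    patient_visits = [(v, t) for u, v, t in records if u == patient]
    contacts = set()
    for user, venue, time in records:
        if user == patient:
            continue
        for p_venue, p_time in patient_visits:
            if venue == p_venue and abs(time - p_time) <= window:
                contacts.add(user)
    return contacts

print(trace_contacts(checkins, "patient"))  # {'user_1'}
```

The same property that makes this matching useful to health authorities – a complete record of who was where and when – is what makes the app feel invasive once people stop trusting how the data will be used.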

Regulation should not be seen as an impediment to the further development of AI and other advanced technologies. Indeed, effective scrutiny would build trust among the public as well as provide safety and protection for consumers and legal certainty for developers. The question is not whether we should have regulation, but which specific issues should be regulated, how and why. 

For effective regulation to become a reality, Asian societies need to collaborate with each other as well as with their peers around the world on how regulation and an ethical framework for AI should be formulated and which specific issues need to be addressed. The goal should be to protect people, both their dignity and their physical well-being, while pursuing technological and thus economic development.

Take the issue of surveillance. According to Zuboff, companies should not manipulate the digital traces they collect from users in ways that are inimical to users’ rights. In the same way, political authorities should not exploit the digital data they hold in ways that promote their power at the expense of the people. The Thai government’s deployment of Thai Chana, according to some reports, resulted in both public authorities and private businesses demanding access to the data. Again, this shows that effective regulation is needed.

Whose AI is it anyway?: There is no rule saying that the superintelligence of the future needs to be based on Western values and norms (Credit: aslysun / Shutterstock.com)

Asian societies need to think hard about what kind of governance structure and ethical guidelines they should adopt for the future. The growing interconnectedness of the world (a trend that Covid-19 may have slowed but that will only pick up pace once the pandemic has subsided) means that Asian-developed guidelines should not diverge greatly from internationally agreed norms. But Asian countries should not blindly follow other regions either.

If Kurzweil and his followers are right about the singularity, it will not be long before we have machines that do not need us at all. If and when that time comes, Asian societies will have made a strong contribution to preparing for life in an AI world if they program their robots and machine-learning software in ways that reflect features of Asian culture. There is no rule saying that the superintelligence of the future needs to be based on Western values and norms. What Asian economies – many at the forefront of adopting AI systems – need to do is teach AI to be ethical from the beginning. The necessary condition for that is an effective set of ethical guidelines that communities, societies, peoples and their governments can deliberate on, reach consensus about, and then adopt and implement.

Opinions expressed in articles published by AsiaGlobal Online reflect only those of the authors and do not necessarily represent the views of AsiaGlobal Online or the Asia Global Institute

Author

Soraj Hongladarom

Chulalongkorn University

Soraj Hongladarom is a professor of philosophy at Chulalongkorn University in Bangkok, Thailand. He is the author of The Ethics of AI and Robotics: A Buddhist Viewpoint, published in August 2020 by Rowman and Littlefield.

