India Can Afford to Wait and Watch Before Regulating Artificial Intelligence

Wednesday, August 2, 2023

Delhi should not rush to pass a comprehensive law on artificial intelligence that would likely become outdated quickly, argues Anurag Mehra of the Indian Institute of Technology Bombay.

India’s position on regulating artificial intelligence (AI) has swung between extremes – from no regulation to regulation based on a “risk-based, no-harm” approach. In April this year, the Indian government said it would not regulate AI to help create an enabling, pro-innovation environment which could possibly catapult India to global leadership in AI-related tech.

Just two months later, however, the Ministry of Electronics and Information Technology indicated India would regulate AI through the Digital India Act. Explaining the U-turn from the earlier position of no-regulation, Minister Rajeev Chandrasekhar said: “Our approach towards AI regulation or indeed any regulation is that we will regulate it through the prism of user harm.”

In a labor-intensive economy such as India, job losses because of AI replacing people are a relatively stark issue. “While AI is disruptive, there is minimal threat to jobs as of now,” Chandrasekhar averred. “The current state of development of AI is task-oriented; it cannot reason or use logic. Most jobs need reasoning and logic which currently no AI is capable of performing. AI might be able to achieve this in the next few years, but not right now.”

Such an assessment seems only partially correct because there are many routine, somewhat low-skill “tasks” that AI can perform. Given the preponderance of low-skill jobs in India, their replacement by AI could have a significant and adverse impact on employment.

Drafts of the upcoming Digital Personal Data Protection Bill 2023 leaked in the media suggest that personal data of Indian citizens may be shielded from being used for training AI. This approach, it seems, was inspired by questions regulators in the United States have posed to OpenAI, the company behind the generative AI app ChatGPT that has captivated the world, about how it scraped personal data without user consent. If this becomes law – though it is hard to see how it would be implemented because of the way training data is collected and used – the “deemed consent” that allows such scraping of data in the public interest will cease to exist.

Briefing on India's Digital Personal Data Protection Bill (Credit: ThePrint on YouTube)

To be sure, the Indian government’s position has clearly evolved over time. In mid-2018, the government think tank, Niti Aayog, published a strategy document on AI. Its focus was on increasing India’s AI capabilities, reskilling workers given the prospect of AI replacing several types of jobs, and evolving policies for accelerating the adoption of AI in the country. The document underlined India’s limited capabilities in AI research. It therefore recommended incentives for core and applied research in AI through centers of research excellence in AI and more application-focused, industry-led international centers for transformational artificial intelligence.

The paper also proposed the reskilling of workers because of anticipated job losses to AI, the creation of jobs that could constitute a new service industry, and the recognition and standardization of informal training institutions. It advocated accelerating the adoption of AI by creating multistakeholder marketplaces. This would enable smaller firms to discover and deploy AI for their enterprises through the marketplace, thus overcoming an information asymmetry tilted in favor of large companies that can capture, clean, standardize data and train AI models on their own. Finally, it emphasized the need for compiling large annotated dynamic datasets across domains – possibly with state assistance – which could then be readily used by industry to train specific AI.

In early 2021, the Niti Aayog published a paper outlining how AI should be used “responsibly”. This set out the context for AI regulation. It divided the risks of “narrow AI” (task-focused rather than a general artificial intelligence) into two categories: direct “system” impact and the more indirect “social” impact arising out of the general deployment of AI such as malicious use and targeted advertisements, including political ones. More recently, the government set up seven working groups under the India AI program, which were to submit their reports by mid-June 2023. But these are not yet available to the public.

These groups have many mandates: creating a data-governance framework, setting up an India data management office, identifying regulatory issues for AI, evaluating methods for capacity building, skilling and promoting AI startups, guiding moonshot (innovative) projects in AI, and setting up data labs. More centers of excellence in AI-related areas are envisaged.

Debate in the Lok Sabha, the lower house of India's parliament: The government can take time to assess how artificial intelligence regulatory mechanisms unfold elsewhere before adopting a definitive AI law (Credit: PTI)

Policymakers are excited about designing the India datasets program – its form and whether public and private datasets could be included. The aim is to share these datasets exclusively with Indian researchers and startups. Given India’s large and diverse population, Indian datasets are expected to be unique in the richness of the training data they could provide for AI models.

The Ministry of Electronics and Information Technology has also set up four committees on AI which submitted their reports in the latter half of 2019. These studies focused on platforms and data on AI, leveraging AI for identifying national missions in key sectors, mapping technological capabilities, key policy enablers required across sectors, skilling and reskilling, and cyber security, safety, legal and ethical issues.

India’s position on regulating AI is clearly evolving. It might, therefore, be worthwhile for the government to assess how AI regulatory mechanisms unfold elsewhere before adopting a definitive AI regulatory law. The EU AI Act, for example, is still in the making. It gives teeth to the idea of risk-based regulation: the riskier the AI technology, the more strictly it would be regulated. AI regulatory developments in the US also remain unclear. One major hurdle to AI regulation is that the technology evolves so fast that unanticipated issues keep arising. For instance, earlier drafts of the EU AI Act paid little attention to generative AI until ChatGPT burst on the scene.

It may be prudent for India to see how the regulatory ethos evolves in Europe and the US before rushing in with a “comprehensive” law that might become outdated quickly. The “risk-based, no-harm” approach is the right one to follow. Fundamental AI development, however, is happening elsewhere. Instead of worrying about stifling innovation, it might be prudent to prioritize cataloguing the specific negative AI fallouts that India might face. These could then be addressed either through existing agencies or by developing specific regulations aimed at ameliorating the harm in question.

This article is published under Creative Commons with 360info.

Opinions expressed in articles published by AsiaGlobal Online reflect only those of the authors and do not necessarily represent the views of AsiaGlobal Online or the Asia Global Institute


Anurag Mehra

Indian Institute of Technology Bombay (IITB)

Anurag Mehra is a professor of chemical engineering and policy at the Indian Institute of Technology Bombay (IITB). His policy focus is the interface between technology, culture and politics.
