Technology

Without Regulation Such as the EU AI Act, the Chatbot Revolution Risks Becoming a Race of the Reckless

Wednesday, June 14, 2023

Tech titans are rushing to stake their claim in the lucrative artificial intelligence chatbot market, but they are trading off all-important transparency in the process, writes Toby Walsh, chief scientist of the UNSW AI Institute at the University of New South Wales in Sydney, Australia. Expect more government regulation of AI such as the draft AI Act, which the European Parliament passed on June 14, a major step towards shaping global standards.


European parliamentarians in committee overwhelmingly approved the draft EU Artificial Intelligence Act: The legislation is the first major effort anywhere in the world to regulate AI and could contribute to the setting of global standards (Credit: Mathieu Cugnot / Shutterstock.com)

With an overwhelming majority, the European Parliament in Strasbourg, France, on June 14 passed draft legislation to regulate artificial intelligence (AI), a major step towards setting global standards. Among other things, the EU’s AI Act would prohibit systems or applications that entail an “unacceptable level of risk”, such as predictive policing tools or social scoring systems like those used in China to profile and categorize individuals according to their behavior and socioeconomic status. The law would also put limits on “high-risk AI” such as programs that could influence voters or cause harm to people’s health.

In particular, the legislation would set guardrails on generative AI, requiring content created by chatbots such as ChatGPT to be labeled as such. AI models would have to publish summaries of copyrighted data used for training, a potential complication for systems that generate human-sounding text from material gathered online, much of it from copyrighted sources. With this far-reaching law, Europe has moved further ahead on AI regulation than any other region or country in the world. A final version of the law could be passed by the end of this year. There would be a grace period to allow companies to adapt.

The chatbot revolution

AI chatbots are like buses: You wait half an hour in the rain with none in sight, then three come along all at once. In March 2023, OpenAI released its newest chatbot, GPT-4. It is a name that sounds more like a rally car than an AI assistant, but it heralds a new era in computing.

Credit: Ascannio / Shutterstock.com

Google responded with Bard, its more grandly named search chatbot. Chinese search giant Baidu launched its cheeky-sounding Ernie Bot. Salesforce put out its more serious-sounding Einstein GPT chatbot. And Snapchat, not to be outdone, announced its My AI chatbot.

It is now fashionable for every tech platform and enterprise software company to have an AI chatbot providing an intelligent interface to their software. It may soon look and sound like the 2013 Hollywood movie Her. We will interact with our smart devices through AI chatbots. We will talk to them. They will understand complex and high-level commands. They will remember the context of our conversation. And they will intelligently do what we instruct them to do.

Hallucinating intelligence

We are still working out what these chatbots can do. Some of it is magical. Writing a complaint letter to the council for an undeserved parking ticket. Or composing a poem for your colleague's 25th work anniversary. But some of it is more troublesome. Chatbots such as ChatGPT, or GPT-4, will, for example, make stuff up, confidently telling you truths, untruths and everything in between. The technical term for this is “hallucination”.

The goal is not to eliminate hallucination. How else will a chatbot write that poem if it cannot hallucinate? The aim is to prevent the chatbot from hallucinating things that are untrue, especially when they are offensive, illegal or dangerous.

AI-generated image showing artificial intelligence (Credit: Julius H. from Pixabay)

Eventually the problem of chatbots hallucinating untruths is likely to be addressed, along with other issues such as biases, a lack of references and concerns around copyright when using others’ intellectual property for training the chatbots. Disturbingly, however, tech companies are throwing caution to the wind by rushing to put these AI tools in the hands of the public with limited safeguards or oversight.

For the last few years, tech companies have developed ethical frameworks for the responsible deployment of AI, hired teams of scientists to oversee the application of these frameworks, and pushed back against calls to regulate their activities. But commercial pressure appears to be changing all that.

At the same time that Microsoft, which has a commercial partnership with OpenAI, announced it was integrating ChatGPT into all of its software tools, it let go of one of its AI and Ethics teams. Transparency is at the heart of Microsoft’s responsible AI principles, yet Microsoft had secretly been using GPT-4 within the new Bing search for a few months.

Google, which had previously not released its chatbot LaMDA to the public due to concerns about possible inaccuracies, appears to have been goaded into action by Microsoft’s announcement that Bing search would use ChatGPT. Google’s Bard chatbot is the result of adding LaMDA to its popular search tool. Building Bard proved expensive for Google: A simple mistake in Bard’s first demo wiped US$100 billion off the share price of Google's parent company, Alphabet.

OpenAI, the company behind ChatGPT, put out a technical report explaining GPT-4. OpenAI’s core mission is the responsible development of artificial general intelligence – AI that is as smart as or smarter than a human. But the technical report was more of a white paper, containing no technical details about GPT-4 or its training data. OpenAI was unashamed in its secrecy, citing the commercial landscape first and safety second. AI researchers cannot understand the risks and capabilities of GPT-4 if they do not know what data it was trained on. The only open part of OpenAI now is the name.

Primer on the EU AI Act (Credit: Euronews on YouTube)

Regulation required

A chasm is opening fast between what technology companies disclose and what their products can do, and only government action can close it. If these organizations are going to be less transparent and act more recklessly, then it falls upon governments to act. Expect regulation.

We can look to other industries for how that regulation might look. In high-risk areas such as aviation or pharmacology, government bodies have significant powers to oversee new technologies. We can also look to Europe, whose forthcoming AI Act takes a significantly risk-based approach. A European Parliament committee passed the draft in May, and the full legislature approved it on June 14. A final version of the Act will likely come up for a vote by the end of 2023. Whatever shape this and other regulations take, they are needed if the world is to secure the benefits of AI while avoiding the risks.

This article is published under Creative Commons with 360info.

Opinions expressed in articles published by AsiaGlobal Online reflect only those of the authors and do not necessarily represent the views of AsiaGlobal Online or the Asia Global Institute.

Author

Toby Walsh

UNSW AI Institute, University of New South Wales

Toby Walsh is chief scientist of the UNSW AI Institute at the University of New South Wales in Sydney. He is Scientia Professor of Artificial Intelligence at the School of Computer Science and Engineering at UNSW. He is a fellow of the Australian Academy of Science. His most recent book is Machines Behaving Badly: The Morality of AI, which was published in 2022 by The History Press. Prof Walsh is supported by the Australian Research Council (ARC) through an ARC Laureate Fellowship to explore "trustworthy AI".

