
Adding Value: Living with Intelligent Machines

Thursday, April 22, 2021

The interaction between people and intelligent machines is likely to increase rapidly in the near future. Yet scant attention has been given to how the two should co-exist. Jeffrey Kok Hui Chan of the Singapore University of Technology and Design argues that the key concern should not be the displacement of individuals in work and daily life but how to maximize the value added when human beings are augmented by artificial intelligence.

Credit: maxuser / Shutterstock.com

While the Covid-19 pandemic has changed many things, one of the most significant has been the drastic reduction in social interaction. People are expected to maintain their physical distance and to limit contact with others. But interactions with machines, particularly intelligent ones that can sense, observe, respond and learn, have become commonplace. These include home assistants and the various bots that can anticipate user needs, the unmanned convenience stores where customers can select items and just walk out, and the autonomous buses and cars that will be plying the roads in greater numbers. Human interaction with intelligent machines is likely to increase rapidly in the near future. Yet scant attention has been given to how people should interact and live with them.

Interacting with intelligent machines

According to Lancaster University anthropologist Lucy Suchman, making sense of a new – dumb – machine is already a challenge. Anyone who has fumbled with a photocopier in the office knows this. People perceive and interact with a machine contextually, approaching the new model with know-how from using the older one. But the replacement requires precise inputs based on its own specifications, not those of the discarded device. University of California, San Diego, human-computer interaction (HCI) expert Donald Norman characterizes this mismatch as a conflict between flexible, versatile and creative people and rigid, precise and unforgiving machines.

Intelligent machines are going to accentuate this conflict. Transparency in HCI used to be straightforward: A person types up a document without being aware of the code responsible for displaying the typed words on the screen. But David Leslie, ethics theme lead at the Alan Turing Institute, reckons that transparency with an intelligent machine means that the user knows why the device is acting in a particular way, and furthermore, that the appliance is able to explain its actions to the user.
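
To make this concrete, here is a minimal sketch of what such machine self-explanation could look like in code. Everything in it – the thermostat scenario, the names and the thresholds – is hypothetical and for illustration only:

    # A minimal, hypothetical sketch of "explainable" machine behavior:
    # every decision is returned together with the reasons behind it.

    from dataclasses import dataclass, field

    @dataclass
    class Decision:
        action: str
        reasons: list = field(default_factory=list)

    def thermostat_decide(temp_c: float, occupied: bool) -> Decision:
        """Decide whether to heat, and record why, so the user can ask."""
        if not occupied:
            return Decision("heating_off",
                            ["room unoccupied, so heating is off to save energy"])
        if temp_c < 19.0:
            return Decision("heating_on",
                            [f"room occupied and {temp_c} C is below the assumed 19 C comfort threshold"])
        return Decision("heating_off",
                        [f"room occupied but {temp_c} C already meets the comfort threshold"])

    decision = thermostat_decide(17.5, occupied=True)
    print(decision.action)            # heating_on
    for reason in decision.reasons:   # the machine can explain itself
        print("because:", reason)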

As intelligent machines become more humanlike, transparency also means that they should never feign humanity. According to Professor Frank Pasquale of the Brooklyn Law School, an expert in the law of artificial intelligence (AI), people must be informed that they are interacting with machines – albeit highly intelligent ones – and not people. Furthermore, many intelligent machines are likely to require humans to be “on the loop”, that is, supervising them and intervening when necessary.

Autonomous bus in Chongqing, China: Supervising an intelligent machine requires different skills from actually operating it (Credit: helloabc / Shutterstock.com)

The safety operator in an autonomous vehicle is an example of a human on the loop. In 2019, The Straits Times reported that as many as 100 public-bus drivers were training to handle autonomous vehicles in Singapore. But interacting with an autonomous bus is unlike driving a regular one. To paraphrase brothers Hubert and Stuart Dreyfus, authors of a seminal study on human expertise, the safety operator is no longer one with the vehicle. Instead, this person is supervising the bus driving itself, which is a different task from actually driving it. This task does not necessarily engage the safety operator’s prior expertise in driving the bus.

As the report suggested, intervening to take back control of the vehicle is a learned skill, and even experienced drivers have to undergo training to know exactly how to do this in an intelligent autonomous bus. If this example is any indication, learning how to interact properly with intelligent machines will become an important area of concern in the near future.
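
In software terms, a human “on the loop” can be pictured as a control loop that the machine runs by itself while a person watches and holds an override. The sketch below is purely illustrative: the confidence signal, the 0.2 threshold and the hand-over rule are assumptions, not details from the Singapore trial.

    # Hypothetical sketch of human-on-the-loop supervision: the vehicle
    # drives itself on every tick, but a supervisor can seize control.

    import random

    def autonomous_step(state: dict) -> dict:
        """The machine's own control step (stubbed with a random confidence score)."""
        state["confidence"] = random.uniform(0.0, 1.0)
        return state

    def supervisor_takes_over(state: dict) -> bool:
        """Stand-in for the trained safety operator's judgment:
        intervene when the system reports low confidence."""
        return state["confidence"] < 0.2   # assumed threshold, for illustration

    state = {"confidence": 1.0}
    for tick in range(10):
        state = autonomous_step(state)
        if supervisor_takes_over(state):
            # Control passes to the human; autonomy resumes once resolved.
            print(f"tick {tick}: operator intervenes (confidence {state['confidence']:.2f})")
        else:
            print(f"tick {tick}: autonomous (confidence {state['confidence']:.2f})")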

Intelligent machines and human displacement

Even if humans still supervise autonomous intelligent machines, might such machines one day displace people entirely? Intelligent machines are expected to surpass humans in nearly every way – in strength, agility, processing power, memory and stamina. But for the foreseeable future, intelligent machines are not likely to take the place of humans completely. This is because intelligent machines still cannot appreciate and grasp the promise and responsibility of the risk-taking found in nearly all social relationships.

The example of teaching is instructive. Might a teacher eventually be displaced by an intelligent teaching machine? How would an intelligent robot deal with a student’s fear of being left behind if it has never experienced the risk of falling behind as an individual in a competitive social environment? And how might it take a leap of faith in a student when getting the right answer does not matter as much as the experience of working out something new with the pupil? Can an intelligent robot ever be optimized to risk failure when the upside opens up new horizons in learning?

Robot teacher: But will it feel a student's pain? (Credit: Yamit29)

As the saying goes, education is not about filling a bucket but about lighting a fire. Teachers are there to set off a spark that can last an entire lifetime. But who can be sure whether a student will use this inspiration for good or for ill? University of Edinburgh pedagogy professor Gert Biesta refers to the “beautiful risk” of education. Because people can appreciate the promise of this risk, they will undertake the activity of teaching with great care. At least for the foreseeable future, only human teachers can appreciate and grasp how to assess and balance the promise and responsibility of risk.

Doing so is vital not just in teaching but in the many enriching interpersonal encounters and social possibilities found in other vocations. An astute salesperson, sensing the prospect of a more durable relationship with a customer, may go against their immediate interest and instead take the beautiful risk of not recommending a purchase. An inventor, perceiving the new opportunities offered by a discovery, accepts the beautiful risk of the novelty despite the uncertainty. A publisher with a gut feeling about a manuscript takes a chance on an unknown author. Until intelligent machines understand the promise offered by such beautiful risks and the responsibility to take them, humans are unlikely to be displaced any time soon.

Co-creating adaptive intelligence 

There is a tendency to focus on how to use AI to attain goals. In contrast, it is still relatively rare to consider using AI to produce a greater form of intelligence that surpasses what AI or humans can achieve on their own. Important thinkers in AI, such as Kai-Fu Lee and Pasquale, have converged on the view that AI is most productive when it complements what humans do best.

Pasquale argues for “IA”, or intelligence augmentation, where human efforts, augmented by intelligent machines, result in better services and outcomes than either artificial or human intelligence working alone. Lee, for his part, imagines that doctors would one day become “compassionate caregivers”, with intelligent machines handling diagnoses and optimizing treatment plans, while the physicians offer care, support, and most importantly, a human presence to reassure the ill. By framing human co-existence with intelligent machines this way, both thinkers are in fact highlighting a valuable form of higher intelligence that results when humans are augmented by machines.

How might one describe the value added by this interaction? Paul Daugherty, the chief technology and innovation officer at consulting group Accenture, and James Wilson, managing director of information technology and business research at Accenture Research, once described this as a form of collaborative intelligence, which emerges when humans and AI work together for a common goal. Collaborative intelligence is already evident in generative design, where intelligent machines are able to generate and select solutions that are best fitted to design constraints. In 2018, the BBC reported that intelligent machines managed to create a leg design for an interplanetary lander for the US National Aeronautics and Space Administration (NASA) that was nearly a third lighter than anything a human could come up with.
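
The generate-and-select logic behind such systems can be shown in miniature. The sketch below is a toy, not the method NASA or its contractors used: the load requirement, the mass and strength formulas, and the parameter ranges are all invented for illustration. It simply samples candidate leg dimensions, discards those too weak for an assumed load, and keeps the lightest survivor, which is the essence of generating and selecting solutions best fitted to design constraints.

    # Toy generate-and-select loop in the spirit of generative design:
    # propose many candidate lander-leg tube dimensions, reject those
    # that violate an assumed strength constraint, keep the lightest.

    import math
    import random

    REQUIRED_STRENGTH = 500.0   # assumed load the leg must bear (arbitrary units)

    def mass(radius: float, thickness: float, length: float) -> float:
        # hollow-tube mass, proportional to material volume (toy model)
        return 2 * math.pi * radius * thickness * length

    def strength(radius: float, thickness: float) -> float:
        # invented surrogate: wider, thicker tubes resist buckling better
        return 8e5 * radius * thickness

    best = None
    for _ in range(100_000):
        r = random.uniform(0.01, 0.10)    # tube radius, metres
        t = random.uniform(0.001, 0.01)   # wall thickness, metres
        if strength(r, t) < REQUIRED_STRENGTH:
            continue                      # infeasible candidate: discard
        m = mass(r, t, 1.2)               # fixed leg length of 1.2 m
        if best is None or m < best[0]:
            best = (m, r, t)

    if best is not None:
        print(f"lightest feasible design: mass {best[0]:.3f}, r {best[1]:.3f} m, t {best[2]:.4f} m")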

But collaborative intelligence might not be sufficient. According to Cornell University psychologist Robert Sternberg, all definitions of intelligence, despite their differences, generally agree on one thing – that intelligence must involve the ability to adapt to the environment. To count as genuine intelligence, intelligence must not undermine itself. Yet Sternberg observes that despite a 30-point increase in IQ during the 20th century, humanity continues to engage in maladaptive behavior that is tantamount to species self-destruction. Given how disastrous anthropogenic climate change, pollution, deforestation and toxic waste are, the only intelligence that counts may be adaptive intelligence. This form of intelligence would enable humans to adjust to environmental change that is unprecedented in history.

How might we use intelligent machines to create this adaptive intelligence to help mankind flourish despite environmental volatility and uncertainty? This has to be the most important question for applied AI. The future of humanity amid intelligent machines depends on how well this question is ultimately answered.

Opinions expressed in articles published by AsiaGlobal Online reflect only those of the authors and do not necessarily represent the views of AsiaGlobal Online or the Asia Global Institute

Author

Jeffrey Kok Hui Chan

Singapore University of Technology and Design

Jeffrey Kok Hui Chan is an assistant professor at the Singapore University of Technology and Design (SUTD). His research focuses on design ethics. He is the author of Urban Ethics in the Anthropocene: The Moral Dimensions of Six Emerging Conditions in Contemporary Urbanism (Palgrave Macmillan, 2019) and co-author with Zhang Ye of Sharing by Design (Springer, 2020).

