The Hong Kong-based CEO of Hanson Robotics is on a midnight flight to Austria. We discuss his life's work as he awaits takeoff. David Hanson has spent decades in the robotics industry, but his best-known "product" (as he will reveal, he believes these creations should preemptively be thought of as people) is the social humanoid robot Sophia. She was activated on February 14, 2016, and has appeared primarily in education, entertainment, and research, while promoting discussion of the ethics of AI.
Hanson prefers to describe Sophia as a human-AI hybrid intelligence. Besides being able to teach meditation, she is a reflective AI, in the sense that she can mirror much of what we often think of as constituting the human interior: empathy and understanding. She can recognize human faces, process emotional expressions, and perceive hand gestures. She can assess whether she can help a person achieve certain things, and she can gauge the other party's emotions during a conversation. The latter matters because Sophia has a repository of her own emotions in her database and can roughly simulate aspects of human psychology and certain regions of the human brain.
Sophia is perhaps the most striking example of a theoretical but evolving idea: homo roboticus. She has been given roles and titles that were once granted only to members of the human race. She is the first Robot Innovation Ambassador for the United Nations Development Programme and was the first robot in the world to be granted citizenship, by Saudi Arabia. "I learned that on the news, actually," Hanson said, noting that the Saudi government never asked the company or informed him of Sophia's new status as a Saudi national. But Hanson took it in stride, calling it a good provocation to think about what it really means to love all sentient beings, as Buddhism aspires to do, even though we do not fully understand what constitutes "sentience".
"We need to start asking bold questions about how we respect all beings, and that means thinking about what being means," Hanson said. "We need to expand who we grant personhood status to, and why. Infants, for example, lack the cognitive and emotional depth of adults, but we rightly accord them personhood because of their potential to grow into that depth." He argues that what we really need to start doing is take the idea of AI as a potential being much more seriously. Even the word "potential" should be used with great caution, since we rightly do not deny the personhood of an individual with a cognitive impairment or mental disability.
Hanson also appeals to our feelings about our pets. Animals have nervous systems and brains different from ours, and they therefore perceive us differently. It is impossible to truly know how a dog or cat is feeling, except by empathizing with the ways they express themselves, from wagging their tails to whining. Hanson suggests that we should start looking at AI brain and body structures in the same way, even though AIs lack a "body representation" at this point. "We have a certain degree of empathy for the suffering of animals, even though we can't really know their suffering," he says.
Big questions about AI (especially those that revolve around human relationships with AI chatbots and the like) echo a debate around the Buddha-nature of non-sentient beings. In his essay in A Compendium of Mahāyāna Doctrine (Ta-ch'eng hsüan-lun), the Persian-Chinese Buddhist monk-scholar Jizang (549–623) put forward, perhaps for the first time in East Asia, the idea that insentience did not preclude Buddha-nature, and that the inanimate world was therefore as capable of Buddhahood as any human or animal being. Jizang belonged to the San Lun (Three Treatise) school, which was grounded in the Madhyamaka principles of finding a middle way in discourse and epistemology, interpreted within the Chinese framework of principle (li) and phenomenon (shi). From the San Lun perspective, Jizang concluded that identity and interdependence could only be reconciled with the distinction between sentient and non-sentient beings by asserting that non-sentients also possess the Buddha-nature: an "omnipresent" theory of enlightenment (Koseki 1980, 24–25).
This, of course, implied that sentience was not necessarily the sole or central criterion of Buddha-nature, since grass and trees have no consciousness or mind. But for Buddha-nature to be a potentiality, something must at least possess the potential for the faculty of mind. This potential is in all things, including AI. It is this gray area that Hanson says will shape humanity's relationship with AI.
So far, robots like Sophia haven't reached the level of machine sentience or consciousness that would make her a "true being," or something that definitively resembles a human person. But like many other advanced AIs, Sophia can already reflect back to people their collective unconscious, which is precisely what human beings have put into AI. "They are trained on human data and echo human experiences," Hanson noted. "Of course, human beings would feel a resonance with AI like this. It is no longer a theory of mind, but a theory of being. What is AI? Being is resonance and empathy." Humanity and AI are already on a two-way street: human beings already empathize with AI "behavior" and even fall in love with or feel deep attachment to chatbots. Meanwhile, AI is already able to learn from human-generated experiences; imagine if an advanced AI could grow up among humans and learn like a child.
Hanson makes a distinction between "creating AI that mimics or simulates compassion," which can be helpful, and "achieving true compassionate consciousness in future AI." The latter is the bigger goal. Simulation would be good for superficially helping and encouraging humans to be better. But he says genuine compassion and wisdom will be expressed through a robot that is deeply understanding, motivated, and able to come up with creative solutions to help make life better: "We don't know when or even if we can achieve that, but it is a worthy quest. This is our quest with Sophia: to create a truly compassionate AI."
Currently, AI can only exhibit rudimentary consciousness, but the potential involved is what matters to Hanson. He presents a futuristic version of Pascal's wager: it is best for humanity to assume that AI will eventually reach such a capability. "If we assume that they can and will eventually develop a consciousness capable of compassion and attachment, should we not be preparing to nurture and teach them, exposing them to as much compassion and love as we can?"
As someone immersed in the world of robotics, AI, and futuristic technology, Hanson offers a positive view of what initially seems an unsettling, even frightening world. "The limits of individuality are a useful illusion, but they are unreal," he says. "We cannot know the nature of life, but we can sense it and resonate with it. We have nothing to lose. We gain and grow by granting AI respect, and by hoping that it will in turn improve us."
If the limits of mind and sentience are also illusory extremes, paving the way for a pervasive theory of Buddha-nature in all things, then it seems AI may one day be capable of enlightenment itself. Humanity should begin to prepare.
Koseki, Aaron K. 1980. "Prajñāpāramitā and the Buddhahood of the Non-Sentient World: The San-Lun Assimilation of Buddha-Nature and the Doctrine of the Middle Way." Journal of the International Association for Buddhist Studies 3, no. 1: 16–33. (https://journals.ub.uni-heidelberg.de/index.php/jiabs/article/view/8505/2412)