I think it's really important to explain, to educate people that this is a tool and not a creature.
(Sam Altman, CEO of OpenAI on ChatGPT)
In a recent conversation with MIT computer scientist and podcaster Lex Fridman, OpenAI CEO Sam Altman touched on a number of topics, including his views on the potential sentience of current and future artificial intelligence (AI) systems. While Altman has at times expressed deep concern about AI's potential to wreak havoc in the world, in this interview he made clear that what AI does will ultimately depend on how humans choose to use it, not on its coming to think for itself as humans and animals do, adding: "I think it's dangerous to project the 'creature' onto a tool." (YouTube)
As we continue to watch the growth of AI, it becomes increasingly clear that it is just one danger among many. As a Buddhist deeply concerned with understanding the nature of things and with the suffering of sentient life, I sought to understand the potential of AI for good and for evil. Suffice it to say, I've only scratched the surface. But what I found both deflates the "hyped" claims about AI's potential and allays the fears of killer robots that might otherwise take flight in my imagination.
The "creature" Altman refers to is probably another way of describing sentience or consciousness. These are separate but closely related terms in modern philosophical conversation. In fact, Robert Van Gulick, writing in the Stanford Encyclopedia of Philosophy, offers sentience as one definition of consciousness. Other definitions include wakefulness, which holds that a being is conscious only when awake and exercising that awareness, and what it is like, following Thomas Nagel's famous 1974 article, in which we confront the limits of our own understanding and imagination in trying to imagine what it is like to be a bat.
In Buddhist thought, consciousness is most commonly found in the list of five aggregates (Skt. skandhas). These are described by Amber Carpenter:
"Form" (rūpa) is the physical; "feeling" (vedanā) is that part of experience which can be pleasant or painful; perception or cognition (saṁjñā) is that part of experience which may be true or false; the saṁskāras are a broad category, comprising mostly volitions and various emotions; finally, consciousness (vijñāna) is awareness of the object, the union of the content with the mental activity that has a content.
(Indian Buddhist Philosophy 29)
It is better to regard these as modes of experiencing rather than as ontological foundations.
Is consciousness special?
The question of how to determine who or what is conscious has preoccupied Western philosophers for some time. René Descartes (1596–1650) is often cited as the first thinker to apply a rigorous philosophical method to the question. A devout Catholic, Descartes determined that self-awareness was the key to consciousness and believed that non-human animals did not meet this marker. Christianity had long offered a clear demarcation between humans, with souls, and animals, without, and Descartes did not break with this aspect of the tradition.
Buddhists, however, have generally always attributed consciousness to animals. They too are able, in a more limited way, to think and carry out intentional actions (karma) that will impact their lives and future rebirths.
With the rise of materialist sciences and philosophies, consciousness has again been called into question. If we accept the premise that we are ultimately entirely physical in nature, then how does consciousness arise? Why does it arise in us and not in rocks? Most of the answers have hinged on the sheer complexity of our brains and bodies, holding that consciousness "emerges" from that complexity. However, there are several competing theories of how this works, as well as materialists who claim consciousness may be beyond our ability to fully comprehend.
To take the complexity a step further, we find that humans often project intelligence or agency onto a world where it probably doesn't exist. We develop feelings for characters in books, for example, knowing they are fictional. And when Tom Hanks's character in Cast Away developed a bond with a volleyball, we empathized, knowing that we could do the same. In a study from the 1940s, researchers showed participants an animation of two circles and a triangle moving around the outline of a house. In describing what they saw, almost all of the participants invented "a social plot in which the big triangle is seen as an aggressor." Studies have since shown that the movements of such shapes trigger automatic animistic perceptions. (Carnegie Mellon)
What is AI?
Defining AI is just as controversial as defining consciousness. At one end, AI can be anything that solves a complex problem. The auto-suggestions you receive when you type search terms into Google are generated by a form of AI. More complex forms of AI play chess, provide directions on your phone, and deliver ads that match your interests and behaviors.
All are based on increasingly complex programmed algorithms. The newest forms of AI, the ones generating the most excitement, use what are called "neural networks." Philosophers point out that the name itself can be misleading, as these networks only mimic simplified aspects of biological neurons. Nevertheless, they are able to modify their responses over time, mimicking the learning process in humans.
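To make the idea concrete, here is a minimal sketch of how an artificial "neuron" adjusts its responses over time. This is a deliberate simplification for illustration only — a single weighted sum nudged toward the right answer on each example — not how systems like ChatGPT are actually built; the function names and learning rate are my own invention.

```python
def train_neuron(examples, epochs=50, lr=0.1):
    """Learn weights for a two-input neuron from (inputs, target) pairs."""
    w1, w2, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            # The neuron "fires" (outputs 1) if its weighted sum is positive.
            output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = target - output   # how wrong was this response?
            w1 += lr * error * x1     # nudge each weight in the
            w2 += lr * error * x2     # direction that reduces
            bias += lr * error        # the error next time
    return w1, w2, bias

# Training data: the logical AND function.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_neuron(AND)

def predict(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in AND])  # → [0, 0, 0, 1]
```

Nothing here "knows" what AND means: the program simply shifts numbers until its outputs match the examples, which is why calling this process "learning" invites exactly the anthropomorphism discussed below.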
Again, we have to be careful with the terms we use, because "learning" is something we might say requires awareness. A human learns. A dog learns. Maybe even a goldfish learns. But the ball in a pinball machine does not learn to reach the right places – everything it does depends on prior human inputs and its interactions with the machine in which it operates.
Critics, such as former Google researcher Timnit Gebru, point out that this is precisely where problems arise in large language models (LLMs) – AI models such as ChatGPT. Gebru noted that the training data fed to these large AI programs is biased and sometimes hateful, raising concerns that the software could reproduce those biases and that hate speech in its output. In 2016, a Twitter bot developed by Microsoft quickly began sending racist tweets after being programmed to learn from other users of the platform. The developers took it down less than 24 hours after launch. (The New York Times)
Liberation or entrenched power?
This raises a second, related problem. Proponents of new forms of AI claim that it will have amazing, borderline miraculous powers. AI certainly has great capabilities. But, as investor Warren Buffett said at a recent meeting: "With AI . . . it can change everything in the world except how men think and behave. And that's a big step to take." (EFT Center)
Social activist Naomi Klein explains:
There is a world in which generative AI, as a powerful tool for predictive research and for performing tedious tasks, could indeed be harnessed for the benefit of humanity, other species, and our common home. But for that to happen, these technologies would have to be deployed within an economic and social order very different from ours, one designed to meet human needs and protect the planetary systems that sustain all life.
(The Guardian)
Klein notes that this is not how AI is launched. Instead, large, for-profit companies are allowed to copy massive amounts of human-created text and images — without permission or attribution — to produce their own imitated results.
Many of the grand promises, critics argue, serve only to generate hype. This hype, in turn, inflates the valuations of the companies building AI. Even negative media coverage, following the old adage that "there is no such thing as bad press," can draw more attention to these companies.
As American cognitive scientist Gary Marcus noted last year, we no longer hear from academic AI researchers; we hear more and more from corporate CEOs: "And corporations, unlike universities, have no incentive to play fair." They publish by press release, seducing journalists and bypassing the peer-review process. We only know what the companies want us to know. (Scientific American)
This is dangerous. But it also follows a path that has become increasingly common in recent years. Sometimes that path has led to outright fraud, as in the scandals surrounding Elizabeth Holmes' Theranos or Sam Bankman-Fried's FTX. Sometimes it has simply led to much-vaunted hype that didn't pan out, as with Google's augmented reality glasses, Meta's metaverse, or non-fungible tokens (NFTs).
A temporary solution
Living in a time of accelerated technological progress is exciting. In many ways we are lucky, and much of this new technology can and will reduce human suffering when used wisely. Wisdom arises not from knowing or making the best use of the latest technologies, but from deliberating, analyzing, and practicing traditions that represent thousands of years and millions of human lives, each refining, modifying, and transmitting its own best insights. As Buddhists, this compels us to ask serious questions about the promises and potential pitfalls of AI in our ethical, meditative, and philosophical lives.
My friend Douglass Smith has some intriguing videos on his YouTube channel exploring AI and aspects of Buddhist thought and practice. I encourage you to check them out. Here's one exploring the key values we might want in future AI and how we might help get there.
While those of us with a background in the humanities may take a number of different approaches to AI and other new technologies, it is essential that our voices be part of the conversation. As Leon Wieseltier, editor of the humanities journal Liberties, wrote:
There is no time in our history when the humanities, philosophy, ethics, and art have been more urgent than in this era of the triumph of technology. For we must be able to think in non-technological terms if we are to understand the good and the bad in all technological innovations. Given society's craven worship of technology, are we going to trust engineers and capitalists to tell us what is right and what is wrong?
(The New York Times)
In the spirit of non-tech thinking, I'll borrow a conclusion often used by BDG's metamorphosis columnists: a song.
There is much more to be said, drawing on the interpretations of consciousness in the various schools of Buddhism, about what might come next in the evolution of machine-based creativity. We know the future is wide open. But we also know that limitations and gaps tend to appear, even in the greatest inventions and innovations.
I'll end with another video, this one featuring Adam Conover, a philosophy graduate, interviewing Emily Bender, a linguist at the University of Washington, and Timnit Gebru, a former Google researcher and founder of DAIR, the Distributed AI Research Institute.
References
Carpenter, Amber. 2014. Indian Buddhist Philosophy. New York: Routledge.