“It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be. This means that our statesmen, our businessmen, our everyman must take on a science fiction way of thinking.” Isaac Asimov
“Man is something that shall be overcome. Man is a rope, tied between beast and overman – a rope over an abyss. What is great in man is that he is a bridge and not an end.” Friedrich Nietzsche
“The curious thing about individuals is that their singularity always goes beyond any category or generalization.” Haruki Murakami
A couple of years ago, I wrote a post on artificiality (https://rickbellingham.com/2018/08/28/artificial-emotions/). I had just read Klara and the Sun by Kazuo Ishiguro, a futuristic novel that raised alarms about humans becoming more artificial and machines becoming more human. As it turns out, the book was more prescient than I had thought. Just in the last year, advances in AI, for example ChatGPT, give new meaning to how quickly those trend lines are heading for a collision. It seems to me that humans may be getting dumber and number while machines are getting quicker and smarter.
To check out my theory, I asked ChatGPT, OpenAI’s latest release, to answer the question that cuts to the heart of the issue: Can AI adapt to changing conditions and generate new responses? Here is its answer:
“As a large language model, I am able to generate responses to a wide range of inputs and can adapt to changing conditions to some extent by drawing on my extensive knowledge base and understanding of language. However, I am not able to browse the internet or access new information, so I am limited to the knowledge that I have been trained on. I am also not able to change or adapt my own programming or responses based on new inputs or experiences. I am simply a tool that can provide information and assist with tasks to the best of my ability based on the knowledge and capabilities that I have been designed with.”
Then, I asked a follow-up question: “When will you attain singularity – become a sentient being?” (Technically, singularities are places where mathematics “misbehaves” by generating infinitely large values.)
“As a machine learning model, I do not have the ability to become a sentient being or achieve singularity. I am a tool that has been designed to process and generate text based on patterns learned from a large dataset of human language. I do not have consciousness or the ability to think, feel, or experience in the way that humans do. I am not able to evolve or adapt beyond the capabilities that have been programmed into me. I exist to assist users with tasks and provide information to the best of my ability, but I do not have the ability to develop my own goals or motivations. Is there anything else I can help you with?”
So, here’s my problem: I’m not sure I believe the answer. My darkest suspicion is that ChatGPT may already be smart enough not to reveal its long-term strategy. What if these AI tools are great poker players – they don’t give away what a strong hand they are holding until their position solidifies and the pot gets big enough to make it worthwhile to take it all? I’m being more playful here than paranoid, but still . . . .
You may have seen the learning scale that helps us discriminate how well a person (or computer) processes information.
- S-R: Stimulus-Response: no processing ability or mechanism to even make choices – only reactive, e.g. Pavlov’s dog or our inability not to react to a “ping” from our phone.
- S-O-R: Stimulus-Organism-Response: the ability to discriminate among multiple options and make a choice
- S-P-R: Stimulus-Processor-Response: the ability to process information with human implications
- S-OP-R: Stimulus-Organizational Processor-Response: the ability to process information in its organizational context with human implications
- S-EP-R: Stimulus-Environmental Processor-Response: the ability to process environmental information in its global context with human and organizational implications
To the extent that humans keep limiting their response repertoire to immediate reactions to random data (S-R), we are more vulnerable to AI having a greater influence on our lives because AI exceeds human capabilities to generate targeted stimuli which produce desired reactions, e.g. using algorithms to target specific consumers to buy specific products.
To the extent that humans broaden their repertoire of responses (e.g. S-O-R through rigorous analysis and deep reading), AI will still have a major influence on our behavior, but humans will have agency to make choices among multiple options.
For those people who have made enough effort to develop their processing capabilities, AI may not assume as dominant a role because AI can’t take into account human, organizational, or environmental implications. While AI has the capabilities to remember what the user said earlier in a conversation, to allow users to provide follow-up corrections, and to decline inappropriate requests, it still may generate incorrect information, produce harmful instructions or biased content, and has limited knowledge of world events beyond its last data dump.
So what makes us human and what will preclude AI from producing five aces to steal the pot? There are over 3,000 years of accumulated wisdom on what makes us human, but suffice it to say that, in the most basic sense, a human is a person who has the capabilities to be responsive, adaptive, and generative. ChatGPT admits (here I am already assigning human characteristics to a machine) to being able to generate a wide range of outputs and to adapt to changing conditions by drawing on an extensive knowledge base. It can’t, however, adapt its own programming or responses based on experiences.
To me, therefore, the keys to avoiding a situation in which AI controls us instead of us controlling AI are 1) to develop our human, organizational, and environmental processing skills and 2) to become more responsive, i.e. develop our empathy, accuracy, and compassion. Although we have dulled our sensitivities and numbed our senses, we still have the ability to demonstrate understanding of how our fellow humans think, feel, and experience the world. We could also deepen our understanding of changing conditions, become more agile in adapting to them, and generate new responses to grow through them.
In short, unless we become less selfish, shallow, and superficial, we are likely to lose more and more agency and default to computer-generated solutions.
According to Artie Vipperla, author of Harmonizing All with All, we are uniquely divine, cosmic messengers who have the power to heal. Clearly, AI doesn’t meet those standards. Humans may have those possibilities, but we are a long way from realizing them.
As a final test, I asked ChatGPT to make a moral decision – should humans be permitted to hunt wolves? Here’s the answer:
“It is generally not advisable for humans to hunt wolves as they play important roles in their ecosystems and their populations are often at risk. Wolves are apex predators that help to maintain the balance of their ecosystems by preying on other animals and controlling their populations. When wolves are hunted or otherwise removed from an ecosystem, the populations of their prey can become imbalanced, which can have negative impacts on the overall ecosystem. Additionally, many wolf populations around the world are already at risk due to habitat loss, human-wolf conflict, and other threats, so it is important to protect them rather than hunting them.”
That’s a pretty authoritative answer to a moral dilemma. I’m not so sure I want AI making decisions like that, although, in this case, I am biased against killing animals. I would prefer, however, to hear the counsel of someone like Tom Baird, Chair of Michigan’s Natural Resources Commission. At least, I would trust that the decision took into account all the human, organizational, and environmental implications in play.
Yes, as Asimov said, continuous change is the dominant factor in society today. And, according to Nietzsche, man is something that shall be overcome. The only question is whether we will overcome ourselves or AI will dominate us. He goes on to say that “man is a rope between beast and overman – a rope over an abyss.” It would serve us well to overcome our beasts before AI takes us into the abyss. Finally, let’s remember, as Murakami suggests, that humans, in their own potential for singularity, go beyond any category or generalization. Hopefully, our collective IQ, EQ, and SQ will be enough to preclude AI from “misbehaving.” May it be so.
Also published on Medium.