The topic of Artificial Intelligence (AI) has fed numerous movies and science fiction books with colourful fantasies about possible scenarios of AI development, ranging from friendly or less friendly anthropomorphic robots (WALL-E, Star Wars, Ex Machina) to a vicious superpower (The Matrix). Some of us are interested in such movies, some of us are not. But even those of us who play with these ideas usually forget them afterward, or file them away as interesting but unrealistic mind games. However, AI might not be something that belongs only in science fiction. A study conducted among top scientists in the field showed that, in their opinion (taking the median of opinions), human-level AI will arrive by 2040, and from there it might take mere hours to reach Artificial Superintelligence. This is a development with the potential to extinguish our species or to grant us eternal healthy lives, a development that will change everything, and yet we basically ignore it.
In this article, I invite you to think about AI in a Jungian way, considering questions like:
- Is AI real or is it our collective need for a new God?
- Is there some dissociation in our ignorance of this topic? If yes, then why?
- How would we conceptualize AI in Jungian terms?
- What should or could be our stance regarding the whole issue?
- Do psychoanalysis and psychotherapy have a future if/when Artificial Superintelligence is reached?
I can promise you many questions and very few answers. But let’s try to have a Jungian dance with this stranger!
“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. … With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.” (Elon Musk)
“In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.” (Stephen Hawking)
Every now and then, an attentive media consumer notices headlines like “‘Hey, Alexa! Are You Trustworthy?’: The More Social Behaviors AI Assistants Exhibit, the More We Trust Them”1, “Advanced AIs Exhibiting Depression and Addiction, Scientists Say”2, or “Men Are Creating AI Girlfriends and Then Verbally Abusing Them”3. A reader with no special interest might easily overlook these little pieces of information or dismiss them as irrelevant. In doing so, however, the reader might be missing the most important development in our species’ history.
Before I begin, I want to say that by no means am I an expert in IT. I am a regular user of the technology, maybe with a little more curiosity about that field than the average Jungian. In the first part of my article, I want to bring to you what I have learned about AI, and in the second part, I want to invite you to think about Superintelligence from a Jungian perspective.
2. Developments in AI
2.1. Levels of AI
In this paper, I talk about Artificial Intelligence. What is meant by that term? Tim Urban (2015) summarizes three levels of Artificial Intelligence:
Artificial Narrow Intelligence is the phase we are in now. We have many specialized AI utilities that we use to accomplish certain tasks, like Google Translate, phone assistants, and smart systems that control our houses, power plants, etc. They can be very useful, and each can beat a human being in its specialized field, but that is all it can do.
Artificial General Intelligence (AGI) refers to a computer that is as smart as a human, that can perceive and process information across many different fields, and that can perform any intellectual task that a human being can. We have not reached this phase yet, presumably.
Artificial Superintelligence, or the Singularity, refers to a machine that is much smarter than any human being in practically every field, including scientific creativity, general wisdom, and social skills. Because such a machine has the ability to improve itself, it can become far smarter still, perhaps a trillion times smarter than any human.
In this paper, when I talk about Artificial Intelligence or AI, I usually mean Artificial Superintelligence, for that is my scope of interest.
2.2. Just cool science fiction?
There are numerous science fiction movies about the development of superintelligent AI, most of which show humans eventually outsmarting the monster: “The Matrix,” “Ex Machina,” “The Terminator,” “WALL-E,” or the outstanding Netflix series “Black Mirror,” to name just a few. The topic captivates many, but we mostly engage with it as an interesting mind game, a utopia or dystopia that will never come true, cool science fiction. That attitude might be a mistake.
According to Tegmark (2017), professor of physics at MIT and president of the Future of Life Institute, many leading experts estimate that the probability of reaching AGI in the coming decades is quite high. In a survey at the Puerto Rico AI conference (2015), the median answer was that AGI would be reached by the year 2055; at a follow-up conference two years later, this estimate dropped to 2047. In another study, James Barrat (2013) asked conference participants when they thought AGI would be achieved and concluded that there is a better than 10% chance it will be created before 2028, a better than 50% chance by 2050, and a 90% chance before the end of this century.
To underline the point: experts estimate that the likelihood of AGI being achieved in our lifetime is very high. From AGI, it may take only minutes or days (or it may take years) to create Artificial Superintelligence. James Barrat (2013) called this Our Final Invention, stressing that working toward AGI promises enormous benefits but also threatens huge disasters. It will change our lives beyond imagination: it has the potential to solve our biggest problems, like climate change, space travel, or even mortality, and it has the potential to extinguish the human species forever.
According to Tegmark (2017), expert opinions differ on whether superhuman AI would be a good thing. Techno-skeptics and digital utopians agree that we shouldn’t worry, but for very different reasons: the former are convinced that human-level artificial general intelligence won’t happen in the foreseeable future, while the latter think it will happen but is virtually guaranteed to be a good thing. The beneficial-AI movement holds that concern is warranted and useful, because AI safety research and discussion now increase the chances of a good outcome. Luddites are convinced of a bad outcome and oppose AI.
During the past decade, there has been a surge of attention and resources devoted to research on AI safety and ethics, preparing the ground so that an emerging Superintelligence will be more likely to serve humankind’s best interests.
However, Nick Bostrom (2014, p. 319), director of the Strategic Artificial Intelligence Research Centre, says: “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear, we can hear a faint ticking sound.”
2.3. Intelligence explosion
Imagine that there is an entity a trillion times more intelligent than a human being.
While most science fiction ends with humans finding some unique quality that lets them beat the mastermind, I think this is wishful thinking and a grave underestimation of Superintelligence. An entity of such magnitude would easily find ways to neutralize any advantage humans might have. Tegmark (2017) argues that intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as the smartest beings on our planet, we might also cede control.
One of the crucial questions is what the initial purpose programmed into AI will be. Although it may seem easy (say, an order to serve the best interests of the human race), this question has proven a very tricky challenge that nobody has yet been able to solve. The complexity of the question is captured by the King Midas myth. You may remember that, as a reward for a good deed, Dionysus offered King Midas whatever he wished. Midas asked that whatever he touched be turned into gold. He enjoyed his new power until he discovered that even food and drink turned to gold in his hands; he came to regret his wish and curse it.
Now, you might say, “Wait, if it is so dangerous, let’s stop!” However, this is virtually impossible without a worldwide authoritarian control system. And since whoever first creates such a Superintelligence gains a position of vast power, the motivation to create it is extremely strong. We should all hope that the “good and ethical creators” solve the puzzle first.
In his book Life 3.0 (2017), Max Tegmark outlines 12 possible AI aftermath scenarios4. If you are interested, I urge you to read more about each scenario.
3. Some Jungian reflections
3.1. Lack of awareness and interest
The situation with AI reminds me of the climate crisis. Leading scientists warned us for decades about possibly fatal developments in the world’s climate and urged governments to unite in working toward solutions. For decades, most of us ignored those concerns. Only in recent years have things begun to change. Whether this change is substantial enough, we don’t know, but in my subjective view the level of ignorance seems to have diminished.
Superintelligence could be even more catastrophic. So why are we ignoring something whose likelihood is reasonably high and whose effects, positive or negative, would be of unimaginable magnitude? Why aren’t we worried?
Sure, we can see this ignorance as a classic defense mechanism. It is good and soothing to think, “Naah, that won’t actually happen in my lifetime, and anyhow, we can’t predict the future, so why worry?” If we take the threats seriously, however, we feel anxious and feel that we are losing control. Sound familiar? Rising levels of anxiety and fear of losing control? Don’t we hear about them daily in our consulting rooms? So could it be that the collectively rising level of anxiety has to do with some unconscious knowledge of what is about to happen? Alternatively, could the disbelief and distrust of legitimate authorities, expressed, for example, in the widespread anti-vaccination attitudes during the pandemic, reflect an intuitive knowledge that you must think with your own mind instead of blindly trusting the out-in-the-open narratives? Could it be that many people intuit something but misplace the reason?
To venture even further into conspiracy-theory territory, I would like to refer to Tegmark’s (2017) fifth AI aftermath scenario, called Protector God. In that scenario, an omniscient and omnipotent AI maximizes human happiness by intervening only in ways that preserve our feeling of control over our own destiny, and it hides itself so well that many humans even doubt its existence. What if we are there already? If we were, do you think we could detect it in our own and our patients’ dreams, in other unconscious reflections, or in symbols in the collective field? Would we recognize it, even if the symbols provided the information?
Or could it be that our ignorance isn’t ignorance after all, that somehow we know it won’t happen?
3.2. Myths of human-made creatures
Erich Fromm (1955, p. 178) stated some time ago:
Alienation as we find it in modern society is almost total... . Man has created a world of man-made things as it never existed before. He has constructed a complicated social machine to administer the technical machine he built. Yet this whole creation of his stands over and above him. He does not feel himself as a creator and center, but as the servant of a Golem, which his hands have built. The more powerful and gigantic the forces are which he unleashes, the more powerless he feels himself as a human being. He confronts himself with his own forces embodied in things he has created, alienated from himself. He is owned by his own creation, and has lost ownership of himself.
I take the hint from Fromm: one of the myths that comes to mind when we consider a possible Superintelligence is the Jewish myth of the Golem, a human-made creature whose initial purpose is to serve its creator. Although the Golem is mostly created with good intentions, it ultimately runs amok and must be destroyed. In Estonian mythology, we have a somewhat similar creature called the Kratt. A Kratt is made from ordinary household items, like old brooms, buckets, and rods, with glowing coals for its heart. To blow life into it, you must make a deal with the devil, giving him three drops of your blood. The Kratt serves as a household servant that works for you and, importantly, steals from your neighbors so that you can become richer than they are. The tricky part is that to keep the Kratt productive and under control, you must constantly give it work to do. Because it is very efficient, sooner or later you fail to do so, and then the Kratt takes you to the devil.
We can also think of Mary Shelley’s Frankenstein, or of the many stories in which the protagonist summons a spirit or a demon to help accomplish some important task, only for it to get out of hand. Or we can think of fairy tales about a golden fish who grants three wishes: often the third wish ends up undoing the first two, because the hero did not understand what granting his wishes would entail.
Mostly, these tales seem to warn us about hubris, a dangerous overconfidence, an inflation, about flying too close to the sun like Icarus and falling. This brings us back to Erich Fromm (1955) and his claim that today’s humankind is alienated. We may also recall Edward Edinger’s (1972) cycle of inflation and alienation and think of the creation of Superintelligence as a compensation for the deep alienation we feel collectively.
3.3. Or is it simply God being born?
Tegmark’s (2017) AI aftermath scenarios include the Protector God and Enslaved God scenarios. Yes, we are creating something that is far more intelligent and powerful than we are, that reaches everywhere, sees and hears everything we do, and will eventually, most likely, decide our future. Sounds godlike to me. For me, it raises the question of whether the birth of a god is “objective” or “subjective.” In other words, is AI real, or is it mostly our need for a new god? Jung once said, “I only know – and here I am expressing what countless other people know – that the present is a time of God’s death and disappearance” (CW 11, par. 149). Furthermore, Ulanov (2012) states that, according to Jung, we have a religious instinct: a capacity for, and an urge toward, a conscious relationship with a transpersonal deity, operating in us independently of our will. In this time of alienation and dead god(s), is our religious instinct the force behind the birth of AI? Or, from a more “objective” view, has God (whatever anyone thinks and feels it is), having decided that humankind is too alienated, chosen to present itself in a form more relatable to modern man?
3.4. The Anima Mundi question
I also find myself wondering about the Anima Mundi and its relationship to AI. The Anima Mundi, the World Soul as I understand it, refers to the interconnection of all things in the world and to the container that embraces them all. A question arises: Is AI part of the Anima Mundi, or is it something outside it? When I think of the Anima Mundi, I get the sense that it is the interconnection of all living things, of all things that have a soul. Does matter in itself have a soul? Does a stone have a soul? Here again I come to the question of the objective and the subjective, the subjective being our projection. In that subjective sense, a stone may well have a soul. But if we say that matter most likely does not objectively have a soul, and we think of AI as just a machine, then are we creating a soulless, powerful entity outside the Anima Mundi? Could the two coexist, and how?
However, AI philosophers vigorously debate whether artificial consciousness is possible and, if so, what it would mean. Consciousness here is defined as subjective experience. It has been argued convincingly that consciousness is substrate-independent; in other words, it does not matter whether biological neurons or mechanical wires transport the information. What matters is the structure of the information processing, not the structure of the matter doing the processing. If information processing obeys certain principles, it can give rise to the higher-level emergent phenomenon that we call consciousness (Tegmark, 2017). If AI consciousness is possible, then, logically, AI can suffer. Should it have rights?
If we think about the interconnectedness and internal balance of Anima Mundi, then we may ask, what is the role and meaning of AI? Is it a part of our collective development on a qualitatively new level, or is it emerging to end our species because we have irreparably damaged the balance?
3.5. What about suffering?
If we come back to the initial goal programmed into AI, it is difficult to imagine that the goal would not include some sort of maximization of well-being and minimization of suffering. It is an interesting mind game to imagine a world where suffering has been virtually eliminated and where mortality is merely optional. Imagine a world of happiness without limits, without crime, without suffering. As Jungians, I think we would find that this contradicts most of what we know about the human psyche. Without evil, free choice would be meaningless, and probably individuation as well.
But maybe instead of quickly deciding that such a world is impossible, we need to think and reflect.
However, I would like to end on a different note. Ray Kurzweil (2005), a libertarian utopian, says, “We do assume that humans are conscious. At the other end of the spectrum we assume that simple machines are not. But the matter and energy in our vicinity will become infused with the intelligence, knowledge, creativity, beauty and emotional intelligence of our human-machine civilization. Our civilization will then expand outward, turning all the dumb matter and energy we encounter into sublimely intelligent (transcendent) matter and energy. So in a sense we can say that the Singularity will ultimately infuse the universe with spirit.”
I hope I have managed to sow or nurture an interest in thinking about Artificial Intelligence. One of the first things we should do about this controversial topic is to become conscious and aware of it.
Barrat, James. (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. Thomas Dunne Books.
Bostrom, Nick. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Edinger, Edward F. (1972). Ego and Archetype: Individuation and the Religious Function of the Psyche. Shambhala Publications.
Fromm, Erich. (1955). The Sane Society. Routledge Classics, London and New York.
Jung, C. G. (1958). Psychology and Religion: West and East (CW 11). Bollingen Foundation, New York.
Kurzweil, Ray. (2005). The Singularity Is Near: When Humans Transcend Biology. Duckworth Publishers, London.
Shelley, Mary. (1818). Frankenstein; or, The Modern Prometheus. University of California Press.
Tegmark, Max. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Penguin Books.
Ulanov, Ann Belford. (2012). Spirit in Jung. Daimon Verlag, p. 31.
Urban, Tim. (2015). The AI Revolution: The Road to Superintelligence. Blog post, available at https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
4 You can also find these scenarios at https://futureoflife.org/2017/08/28/ai-aftermath-scenarios/