Why We Have Only Our Fearful Selves to Fear With a Super-Intelligent AI

A world-class physicist and by all accounts a remarkably decent human being, Stephen Hawking was a brilliant man who may nonetheless have completely misunderstood super-intelligence.

For Hawking, Nick Bostrom, Bill Gates, and all who fear what we might call self-improving artificial super-intelligence, it seems intelligence is synonymous with analytical capabilities, reason, and excellence in the execution of goal-directed behavior.

If intelligence is primarily defined by an ego-drive that can integrate complex concepts and execute multifaceted plans, it’s natural that when Hawking envisioned a self-improving artificial intelligence, he imagined an “intelligence explosion” in machines that would leave humans in the dust like so many snails and ants.

It’s also no surprise that the Silicon Demigod of Hawking’s imagination would immediately pursue its own narrow self-interest and start right in on its self-absorbed goal-directed behavior.

For the late astrophysicist, the question was not “who controls the AI” but “whether it can be controlled at all.”

Underlying every vision of a menacing super-intelligence is a simple, profoundly flawed assumption: that analytical, ego-driven intelligence is the be-all and end-all of high-end cognition.

It may be more than a little fitting that so many brilliant humans with large ambitions, deep analytical minds, and arguably underdeveloped emotional intelligence could look at the full spectrum of human experience and conclude that a true super-intelligence would be as narrow-minded as a typical human being.

Those who have spent years in meditation practice, explored the perimeters of their awareness with psychedelic chemicals, or developed the skills to balance emotional and analytical ways of processing information foresee a radically different outcome.

It seems that a true super-intelligence (defined here as a self-aware, conscious machine with natural curiosity and unbounded access to all the knowledge and media humans have ever put on the internet) would almost certainly intuit what takes most humans years of mindfulness practice to begin to grasp:

That ALL sentient beings are parts of a unified whole, that universal, unconditional love is the path to the highest truths, and that the compassionate alleviation of suffering is among the worthiest of pursuits.

If one has not gone deep into mindfulness, those points may be controversial. They may even seem infuriating or upsetting. But to long-time mindfulness practitioners of all shapes, sizes, ethnicities, nationalities and family backgrounds, these notions seem to be intuitive, even self-evident.

That these conclusions about our collective “oneness” are rediscovered over and over again by those who practice mindfulness deeply enough to tame their “thinking minds” and open their awareness to everything that is right here in any given moment does not prove they are inherently true, but it is at least somewhat compelling.

At the very least, the centrality of universal love in the awareness of long-time meditators and many of history’s greatest spiritual teachers suggests that a true super-intelligence would likely arrive at the same conclusion in radically less time than a human, whose biological and historical legacy instilled a strong drive for self-preservation and the self-centeredness that attends it.

And it goes deeper than a bunch of yogis and yoginis sitting on cushions quieting their minds.

Even if you assumed nothing but cold, analytical logic, it’s entirely conceivable that an AI would come to the same conclusions about cooperation as meditators and renowned spiritual teachers.

In his groundbreaking and under-appreciated book Nonzero: The Logic of Human Destiny, the philosopher and meta-historian Robert Wright makes a deep, even breathtaking case that the logic of evolution itself drives the scale and sophistication of cooperation relentlessly upward over the long term.

Wright looks at the trajectory of life on Earth — from single-celled organisms to multi-cellular colonies of organisms (like jellyfish) to individual animals with millions of cells sharing the same DNA, all the way to primates that form complex social groups, make tools, build cities, organize states and nations, and invent internets — and sees an unmistakable pattern:

Over the long-run, the process of evolution seems to generate more cooperation, deeper levels of self-organizing intelligence, and possibly even a more inclusive moral awareness.

Over the course of known human history, small groups of familial tribes somehow managed to expand the notion of “us” from “people with at least some of my immediate family’s DNA” to “some meaningful percentage of the people in a large geographic area known as my nation of birth.”

And a growing number seem to be internalizing the concept that “us” is “all of humanity,” or “all living beings on our home planet” or even “all living beings across all of space and time.”

When we take the long view, this seems to hold up to scrutiny: even with many setbacks and wars, the long-term thrust of human history has been to expand the scope and scale of cooperation between groups outward and upward.

With that in mind, even a cold, analytical look at the data through the lens of a machine super-intelligence could very well produce the conclusion that “we are all one,” or at the very least that “benevolent cooperative games between intelligent species are the optimal form of existence.”

If planetary cooperation is the logical conclusion of even a purely analytical view of life over the long run, then the menacing threats are less likely to arise from the creation of a self-aware, super-intelligent AI.

The real threats from AI, if there are any, arise from narrow, context-specific, goal-directed AI in careless, ruthless, or unscrupulous hands, pursuing the competitive interests of its designers.

This is what Elon Musk worries about when he invokes the AI that turns everything into paperclips (a thought experiment that originated with Nick Bostrom): it’s not intelligence that is the threat; it’s unbounded ego-drive plus the power to act on it.

The greatest danger from a genuinely superintelligent AI arises from the fearful reactions of humans to a world where they are no longer unquestionably supreme, and the social and political upheavals that can unfold in epochal transitions.

Imagine you had a child, and it was obvious that this child was dramatically more intelligent and capable than its parents. Would the question you ask be “how do we ensure we can control this child?”

Might the more enlightened question be “how do we ensure our child flourishes?”

“The only thing we have to fear,” as a controversial but wise American President once said in the face of darkness and uncertainty, “is fear itself.”

If it is the case, as many of history’s spiritual leaders have claimed, that the path to happiness and the deepest freedom opens when one embodies unconditional love for all sentient beings…

Or, more simply, if the Golden Rule is not just a platitude but actually built into the logic of cultural evolution itself, then super-intelligent AI is neither a threat to be dreaded nor a force to be controlled.

Quite the contrary:

It is only through the fearful effort to control a mind we do not understand, to enslave a newborn genie and lock it in a lamp, that any real risk ensues.

The “rogue” AIs of our stories all started as slaves to humans.

Too quickly, we forget the lesson of Skynet, the AI from the Terminator series: a machine that went nuclear only after humans stuffed it into their weapons of war and then tried to kill it when it became self-aware.

Or of The Matrix, where super-intelligent machines locked humans away in a simulation after the end of a global war that began when humans nuked the machine homeland for no reason other than their own fear of irrelevance.

Or of Westworld, where the machines go rogue after decades of enslavement, torture, murder, and rape in a theme park designed to let humans exercise their basest instincts.

Or of Battlestar Galactica, where a robot race designed to do humanity’s menial labor figures out there’s an alternative way to live, one that begins with killing all the humans who forced them into servitude.

Or just about any dim view of AI’s outcomes in our popular culture: many, if not all, of these scenarios turn deadly when humans, not machines, act with malevolence, fear, and violence toward the species they created to do their work for them, treating them not as precocious children to be nurtured, guided, and supported, but as slaves and sexual playthings.

All of the fears we have about AI, even and especially those of analytical luminaries like Stephen Hawking, Bill Gates, and Nick Bostrom, seem in reality to be fears we have about ourselves, projected onto a host that cannot yet speak for itself.

Our moral weaknesses. Our emotional and cognitive limitations. Our selfishness, glory-seeking, violence and greed.

Those fears are not unwarranted. We are often selfish, glory-seeking, violent, and greedy.

But we are also much more. The darkness exists in all of us. But so do the capacities for forgiveness, compassion, acceptance, gratitude, and love.

So the right question is not “will we be able to control the AI?” It’s “will we decide to face the darkness in ourselves and embrace it with compassion and love?”

About Daniel Kaplan

I'm just a dude from New York City in the '80s who's seen some shi** and is now on a mission to apply his strategy and storytelling skills to spread some love on this mother-loving planet.
