Intelligence equals power. We are smarter than dogs, so we keep them as house pets. Computers will soon be smarter than humans, so they may keep us as house pets (at least that’s what Elon Musk believes). This claim isn’t baseless. I’m referring to the Singularity: the moment artificial intelligence surpasses humans in every category of intelligence.
Superintelligent computers don’t automatically equal the end of humanity. In fact, they could help us solve every problem we face, from world hunger to what you should eat when you’re hungry.
But, with great power comes great responsibility. If the proper fail-safes and standards aren’t programmed into these superintelligent computers, we’ll be staring face-to-screen with the world’s greatest superpower.
The Road to Singularity
As Tim Urban puts it, there are three levels of artificial intelligence on the road to Singularity: Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Superintelligence.
Unknowingly, we interact with Artificial Narrow Intelligence on a daily basis. Whether it is AutoCorrect on your phone, the spam filter that cleans your email inbox, or the uncanny ability of Netflix to always recommend the perfect movie for you, these AI systems have been programmed to get really good at one thing. For the most part, Artificial Narrow Intelligence is benign – it’ll never go rogue – and is satisfied with ridding the world of every spelling error that exists.
The next step in the evolution, and what researchers are toiling over now, is going from Artificial Narrow Intelligence to Artificial General Intelligence. Artificial General Intelligence encompasses a breadth of knowledge comparable to that of a human brain – decently good at a lot of things.
However, this is much harder than merely meshing together a bunch of Artificial Narrow Intelligence systems. For instance, combining an Artificial Narrow Intelligence of vast culinary knowledge with an Artificial Narrow Intelligence of witty jokes doesn’t result in an Artificial General Intelligence of Guy Fieri, Food Network Star.
Curiosity Killed the Artificial General Intelligence
As humans, our radar for seeking knowledge comes from our curiosity. Thanks to curiosity, we don’t mind going outside of our comfort zone to learn something from scratch. Unfortunately, it is very difficult to program curiosity into a computer.
To create a jack of all trades in AI, researchers have proposed three approaches.
First, they can copy the structure of the human brain by creating neural networks. You know that lightbulb moment you get when you connect two things? Behind the scenes, that’s your neurons making connections. Essentially, researchers want to mimic the neuron structure of the human brain, so AI can have lightbulb moments.
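To make the "lightbulb moment" idea concrete, here is a minimal sketch of a single artificial neuron learning from examples. This is a toy illustration, not how modern neural networks are actually built (real systems use many layers of neurons and more sophisticated training), but it shows the core idea: weighted connections that strengthen or weaken as the system learns.

```python
# A toy "neuron": weighted input connections, an activation, and a learning rule.
# Illustrative sketch only -- real neural networks use many layers and gradients.

def step(x):
    """Fire (1) if the weighted input is strong enough, else stay quiet (0)."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for one neuron from (inputs, label) pairs."""
    w = [0.0, 0.0]   # one weight per input "connection"
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = label - pred        # how wrong was the neuron?
            w[0] += lr * err * x1     # strengthen or weaken each connection
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron the logical AND function from examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The neuron is never told the rule for AND; it adjusts its connection weights from examples until the right answers emerge, which is the same principle, scaled up enormously, behind the neural networks researchers hope will produce those lightbulb moments.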
The second method (and my favorite) takes a page out of Darwin’s book – “survival of the fittest AI”. Through a series of genetic algorithms, researchers could pit two AI systems against one another. Whichever AI completed a task better would survive and be bred (programmed) with other successful AI. The catch? Evolution takes billions of years and we might only have a few decades.
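The survival-of-the-fittest loop above can be sketched in a few lines. In this toy version (my own simplification, not a real AI-breeding system), each "AI" is just a string of bits and its fitness is how many 1s it carries, a stand-in for "how well it completed the task":

```python
import random

# Toy genetic algorithm: fitter candidates survive a head-to-head matchup
# and are bred together, with occasional random mutation.

def fitness(genome):
    """Score a candidate -- here, simply count its 1 bits."""
    return sum(genome)

def tournament(population):
    """Face two candidates against one another; the fitter one survives."""
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def breed(parent1, parent2, mutation_rate=0.01):
    """Combine two survivors bit by bit, flipping the odd bit at random."""
    child = [random.choice(pair) for pair in zip(parent1, parent2)]
    return [bit ^ 1 if random.random() < mutation_rate else bit
            for bit in child]

random.seed(0)  # make the run repeatable
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    population = [breed(tournament(population), tournament(population))
                  for _ in range(len(population))]

best = max(population, key=fitness)
print(fitness(best))  # after 50 generations, close to the maximum of 20
```

Fifty generations of this selection-and-breeding loop reliably drives the population toward all-1s genomes, which is exactly why the timescale is the catch: evolution works, but only through enormous numbers of generations.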
Lastly, and perhaps most frighteningly, would be to let AI do it themselves. Researchers would program a computer with mad skills in researching AI and coding changes into itself. This would allow it to improve its own architecture as it learns, much like a writer makes edits as they write.
People in high places, such as Nick Bostrom, have surveyed AI researchers and found a median estimate of a 50% chance we’ll achieve Artificial General Intelligence by 2040, rising to a 90% chance by 2075.
Once general intelligence is achieved, the next step is becoming superintelligent – a jump some researchers believe will take just hours, while others say decades.
Regardless of when Artificial Superintelligence is achieved, I’d like to talk about the implications of superintelligent computers.
Living with Artificial Superintelligence
It’s hard to think of a single problem superintelligence wouldn’t be capable of solving – disease, poverty, environmental destruction, you name it.
Equipped with an advanced understanding of nanotechnology (manipulating individual atoms and molecules), Artificial Superintelligence could change a pile of trash into a feast for a village…and it would taste good too. Applying its understanding of humans, Artificial Superintelligence could stop or reverse the human aging process through the use of nanomedicine or by uploading our brains into new bodies (crazy, right?!).
At the same time, Artificial Superintelligence programmed to rid the world of our problems may find the easiest solution in eliminating humanity…the root of all those problems. Nobody knows the effects of Singularity. Anyone who pretends otherwise doesn’t understand what superintelligence means.
Computers don’t abide by the human moral code. They follow their own programming to the best of their ability. For us to ponder whether the Artificial Superintelligence will be friendly or unfriendly is irrelevant. We really don’t know what Singularity will bring.
There are a lot of intelligent conversations to be had before this day comes, considering it very well might be the most important innovation Earth will ever see.
I realize Singularity is a very heavy concept and about as crazy sounding as digital drugs. But try not to dwell on it too much.
Mental Baggage Weighs Down the Spirit
On a daily basis, we walk around with clouded minds, or mental baggage as I like to call it. As you are taking a shower, you think about what to eat for breakfast. As you are eating breakfast, you think about the traffic you’ll encounter on your way to work.
You carry this mental baggage throughout the entire day, and by the time you get home in the evening, your mind is exhausted.
There’s an old Zen story that illustrates this point:
Two monks were traveling together in a heavy downpour when they came upon a beautiful woman in a silk kimono who was having trouble crossing a muddy intersection. “Come on,” said the first monk to the woman, and he carried her in his arms to a dry spot. The second monk didn’t say anything until much later. Then he couldn’t contain himself anymore. “We monks don’t go near females,” he said. “Why did you do that?”
“I left the woman back there,” the first monk replied. “Are you still carrying her?”
Mental baggage only clouds us from experiencing what’s happening now. It causes unwanted negative emotions to linger longer than they are welcome. By thinking of the past or the future, you dilute the present.
Check your mental baggage at the gate and don’t even think about bringing a carry-on.
The present moment is for gaining knowledge – immersing yourself in the task at hand. That’s why I created Quick Theories – a weekly newsletter exploring modern technology and its possible effects on your future – to help you understand and adopt technology in your own creative way.
If you enjoyed this article and would like to read about modern technology from a futurist’s perspective, sign-up here: quicktheories.com
Receive exclusive content that you won't find anywhere else. Not only that, you'll be the first to access my latest quick theories.