Artificial General Intelligence Approaches Singularity

Intelligence equals power. We are smarter than dogs, so we keep them as house pets. Computers will soon be smarter than humans, so they may keep us as house pets (at least that’s what Elon Musk believes). This claim isn’t baseless. I’m referring to the Singularity: the moment artificial intelligence surpasses human intelligence, the final stop on a road that runs through three stages – Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Superintelligence.

Superintelligent computers don’t automatically equal the end of humanity. In fact, they could help us solve every problem we face, from world hunger to what you should eat when you’re hungry.

But, with great power comes great responsibility. If the proper fail-safes and standards aren’t programmed into these superintelligent computers, we’ll be staring face-to-screen with the world’s greatest superpower.

The Road to Singularity

As Tim Urban puts it, there are three levels of artificial intelligence on the road to Singularity: Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Superintelligence.

We unknowingly interact with Artificial Narrow Intelligence on a daily basis. Whether it’s AutoCorrect on your phone, the spam filter that cleans your email inbox, or the uncanny ability of Netflix to always recommend the perfect movie for you, these AI systems have been programmed to get really good at one thing. For the most part, Artificial Narrow Intelligence is benign – it’ll never go rogue – and is satisfied with ridding the world of every spelling error that exists.

The next step in the evolution, and what researchers are toiling over now, is going from Artificial Narrow Intelligence to Artificial General Intelligence. Artificial General Intelligence encompasses a breadth of knowledge comparable to that of a human brain – decently good at a lot of things.

However, this is much harder than merely meshing together a bunch of Artificial Narrow Intelligence systems. For instance, combining an Artificial Narrow Intelligence of vast culinary knowledge with an Artificial Narrow Intelligence of witty jokes doesn’t result in an Artificial General Intelligence of Guy Fieri, Food Network Star.

Curiosity Killed the Artificial General Intelligence

As humans, our radar for seeking knowledge comes from our curiosity. Thanks to curiosity, we don’t mind going outside of our comfort zone to learn something from scratch. Unfortunately, it is very difficult to program curiosity into a computer.

To create a jack of all trades in AI, researchers propose three approaches.

First, they can copy the structure of the human brain by creating neural networks. You know that lightbulb moment you get when you connect two things? Behind the scenes, that’s your neurons making connections. Essentially, researchers want to mimic the neuron structure of the human brain, so AI can have lightbulb moments.
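To make the lightbulb analogy concrete, here’s a minimal sketch of a single artificial neuron in plain Python. It’s an illustration of the idea – weighted connections that strengthen or weaken until the neuron “fires” correctly – not how production neural networks are built. The OR-gate task and all names here are my own illustrative choices:

```python
# A toy "neuron": weighted inputs plus a bias, loosely mimicking how a
# biological neuron fires once enough connected signals arrive.

def fire(weights, bias, inputs):
    """Return 1 if the weighted sum crosses the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Nudge connection strengths until the neuron's answers match the data."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - fire(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Teach the neuron logical OR: if either input is on, fire.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
for inputs, target in data:
    print(inputs, "->", fire(w, b, inputs))
```

Real neural networks stack millions of these units in layers, but the core “lightbulb moment” is the same: connections that led to right answers get reinforced.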

The second method (and my favorite) takes a page out of Darwin’s book – “survival of the fittest AI”. Through a series of genetic algorithms, researchers could pit two AI systems against one another. Whichever AI completed a task better would survive and be bred (programmed) with other successful AI. The catch? Evolution takes billions of years, and we might only have a few decades.
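The contest-survive-breed loop above can be sketched in a few lines. This is the textbook genetic-algorithm skeleton, not anyone’s real AI research: the “candidates” here are just bit strings, and the target, population size, and mutation rate are arbitrary assumptions for illustration.

```python
import random

# Candidate "programs" are bit strings; fitness counts how many bits
# match a desired behavior. Each generation, pairs face off and the
# fitter one survives to breed. Purely illustrative.

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(candidate):
    """Score: how many positions match the target behavior."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def breed(a, b, mutation_rate=0.05):
    """Mix two survivors' 'genes', occasionally flipping a bit."""
    child = [random.choice(pair) for pair in zip(a, b)]
    return [bit ^ 1 if random.random() < mutation_rate else bit
            for bit in child]

def evolve(pop_size=20, generations=100):
    random.seed(42)
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    best = max(population, key=fitness)
    for _ in range(generations):
        # Tournament: face two candidates against one another; the
        # better one survives into the breeding pool.
        survivors = [max(random.sample(population, 2), key=fitness)
                     for _ in range(pop_size)]
        population = [breed(*random.sample(survivors, 2))
                      for _ in range(pop_size)]
        best = max(population + [best], key=fitness)
    return best

best = evolve()
print("best fitness:", fitness(best), "of", len(TARGET))
```

Even this toy version hints at the catch in the article: it takes many generations of blind trial and error to evolve even a 10-bit answer.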

Lastly, and perhaps most frighteningly, would be to let the AI do it itself. Researchers would program a computer with mad skills in researching AI and coding changes into itself. This would allow it to improve its own architecture as it learns, much like a writer makes edits as they write.

People in high places, such as Nick Bostrom, surveyed other researchers and found a 50% chance we’ll achieve Artificial General Intelligence by 2040, rising to a 90% chance by 2075.

Once AI achieves general intelligence, its next mission is to become superintelligent – a leap some researchers believe will take just hours, while others say decades.

Regardless of when Artificial Superintelligence is achieved, I’d like to talk about the implications of superintelligent computers.

Living with Artificial Superintelligence

It’s hard to think of a single problem superintelligence wouldn’t be capable of solving – disease, poverty, environmental destruction, you name it.

Equipped with an advanced understanding of nanotechnology (manipulating individual atoms and molecules), Artificial Superintelligence could change a pile of trash into a feast for a village…and it would taste good too. Applying its understanding of humans, Artificial Superintelligence could stop or reverse the human aging process through the use of nanomedicine or by uploading our brains into new bodies (crazy, right?!).

At the same time, Artificial Superintelligence programmed to rid the world of our problems may find the easiest solution in eliminating humanity…the root of all those problems. Nobody knows the effects of Singularity. Anyone who pretends otherwise doesn’t understand what superintelligence means.

Computers don’t abide by the human moral code. They follow their own programming to the best of their ability. For us to ponder whether the Artificial Superintelligence will be friendly or unfriendly is irrelevant. We really don’t know what Singularity will bring.

There are a lot of intelligent conversations to be had before this day comes, considering it very well might be the most important innovation Earth will ever see.

I realize Singularity is a very heavy concept and about as crazy sounding as digital drugs. But try not to dwell on it too much.

Mental Baggage Weighs Down the Spirit

On a daily basis, we walk around with clouded minds, or mental baggage as I like to call it. As you are taking a shower, you think about what to eat for breakfast. As you are eating breakfast, you think about the traffic you’ll encounter on your way to work.

You carry this mental baggage throughout the entire day. By the time you get home in the evening, your mind is exhausted. That’s because we carry mental baggage with us wherever we go.

There’s an old Zen story that illustrates this point:

Two monks were traveling together in a heavy downpour when they came upon a beautiful woman in a silk kimono who was having trouble crossing a muddy intersection. “Come on,” said the first monk to the woman, and he carried her in his arms to a dry spot. The second monk didn’t say anything until much later. Then he couldn’t contain himself anymore. “We monks don’t go near females,” he said. “Why did you do that?”

“I left the woman back there,” the first monk replied. “Are you still carrying her?”

Mental baggage only clouds us from experiencing what’s happening now. It causes unwanted negative emotions to linger longer than they are welcome. By thinking of the past or the future, you dilute the present.

Check your mental baggage at the gate and don’t even think about bringing a carry-on.

The present moment is for gaining knowledge – immersing yourself in the task at hand. That’s why I created Quick Theories – a weekly newsletter exploring modern technology and its possible effects on your future – to help you understand and adopt technology in your own creative way.

If you enjoyed this article and would like to read about modern technology from a futurist’s perspective, sign-up here: quicktheories.com

  1. If this sort of thing actually comes to fruition, the scary part will be when the superintelligence fabricates an answer to a huge problem that violates our moral code. Who wins?

  2. If it wasn’t for human greed, most problems, famine, and wars would not exist. If AI is programmed with this fact, it may start to eradicate people. Perhaps we just put some base code in there that says humans are critical for the survival of machines so we would not all be ‘resolved’.

    1. Don’t want it to learn that fact :D. There is the dilemma lol. Mr. AGI how do we fix global warming? “Computing most efficient means…… EXTERMINATE!! ALL HOOOMANSS!!!!” Ok Mr. AGI yah don’t do that …..

  3. So sad that you don’t recognise that the more evolved humans already have the solution to world problems – and also knew how to prevent them in the first place. If the AI is going to eradicate the dumb and controlling ones who are fixated on their own desires at the cost of all humanity, then that would be a good programming exercise. Most humans don’t want the solutions – even when they are told them – this article attempts to dumb down the situation and make it seem that poverty and environmental issues are things that “just happen” to us ?? Maybe people need a screen to explain what humans do to create their own mess because they certainly don’t want to listen when a human tells them. And maybe you will listen to an AI to explain to you that human life is just not that simple and that we will have created something not even understanding our own existence first.

  4. I saw the article as a rational comment on something that will continue to evolve, regardless of the state the world is in. And I like the ‘sign-off’: be aware but don’t let it become mental baggage. Thought provoking reading, as ever- thankq!

  5. I think the word ‘singularity’ is misleading. Like the internet, super intelligence is likely to emerge on multiple fronts, in multiple environments, and may not be interconnected. It may not be ‘just one thing’. The first order of business will be to distinguish between fact and fiction. It will take a good deal of super intelligence to do just that. What is written, what is said, and what is known in the physical world will have to be compared for validation. Super intelligence based on wrong information would be super stupid.

    1. I kind of agree with you. Singularity, as he defines it, is the moment when AI surpasses all forms of intelligence. It may not be exactly one moment. The evolution process may take years.

    2. I’ll support the “misleading” agenda.

      If super intelligence can turn a pile of trash into a tasty feast for an entire village, next thing you know, the “human moral code” which “computers don’t follow” will lead to Black Friday trash sales. Forensic Files will start telling stories about the murder of an elderly woman because she collected enough trash to feed everyone in Algeria.

      If super intelligence can do anything, then it should create time travel so our great-great-grandkids can bring us the trash-to-feast computer code, thereby immediately solving the world’s hunger problem.

      Since we have not heard from the future, I propose super intelligence was not intelligent enough to solve issues with time travel or world hunger.

      Artificial Intelligence remains a great hobby.

  6. Terminator comes to life as a possibility. Or anyone remember Wargames with Matthew Broderick? Could become reality.

  7. I disagree with the statement that narrow intelligence cannot be twisted. I look at ideocentric individuals who only receive articles which support their way of thinking. We build our own filters on reality and the possibility of designing an automatic filter for us leads to a very interesting society.
    Therefore all three types of computer intelligence have their positive and negative futures.

  8. Just make sure that all the artificial intelligences are programmed with the three laws of robotics as propounded by Isaac Asimov: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Then just substitute AI for robot.

