Survive the Top Human Threat: Artificial Super Intelligence

Posted on 8 Jul 2017

What most threatens humankind? The Centre for the Study of Existential Risk, you may be surprised to learn, puts Artificial Intelligence at the top of the list. (Sample video lecture)

10 THREATS TO MANKIND

  1. Artificial intelligence
  2. Bio-hacking
  3. Killer robots
  4. Nuclear war
  5. Climate change
  6. Asteroid impact
  7. Loss of reality
  8. Food shortage
  9. Particle accelerator
  10. Tyrannical ruler

Link

You thought Siri was helping you? Bwhaahahah. Seriously though, artificial intelligences are considered by some of our most intelligent humans to be the single cause most likely to end humanity. How so?

The technological singularity (also, simply, the singularity) is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.

According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a “runaway reaction” of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence. (Link)

… beyond this point, the curve is driven by new dynamics and the future becomes radically unpredictable… (Link)
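To make the “runaway reaction” idea a bit more concrete, here is a minimal toy sketch in Python. It is entirely my own illustration, with an invented growth law and made-up constants (nothing in the quoted sources specifies a model): if a system's capability feeds back into how fast it can improve itself, the curve doesn't just grow exponentially, it can hit any finite ceiling in finite time.

    # Toy comparison of ordinary vs. self-reinforcing ("runaway") growth.
    # Everything here is invented for illustration; the quoted sources give no model.

    def simulate(growth, i0=1.0, dt=0.01, t_max=25.0, cap=1e9):
        """Euler-integrate dI/dt = growth(I), stopping if the level exceeds `cap`."""
        i, t = i0, 0.0
        while t < t_max and i < cap:
            i += growth(i) * dt
            t += dt
        return t, i

    K = 0.05  # made-up "self-improvement rate"

    t_exp, i_exp = simulate(lambda i: K * i)      # dI/dt = K*I   (plain exponential)
    t_run, i_run = simulate(lambda i: K * i * i)  # dI/dt = K*I^2 (each gain speeds up the next)

    print(f"exponential: level ~ {i_exp:.3g} at t ~ {t_exp:.1f}")
    print(f"runaway:     level ~ {i_run:.3g} at t ~ {t_run:.1f}")

With these made-up numbers the exponential case reaches only a few times its starting level by t = 25, while the self-reinforcing case blows past the cap shortly after t = 1/(K·I0) = 20, the point where the continuous version of that equation diverges. The toy says nothing about real AI, but it shows why “each generation appearing more and more rapidly” is treated as qualitatively different from ordinary fast growth.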

How do we survive as our own creations surpass us as the most intelligent life forms on the planet?

First, consider this mind-bender: Humanity was lost long ago. We are all experiencing simulations run by our robot children. There are hints that seem to confirm this, glitches in the Matrix in the form of astronomically improbable events such as the same man’s house being hit by meteors years apart, or a bullet fired from one gun entering the barrel of another gun and jamming it. Simulation theory is one reason I’ve spent years tracking the strangest news in the world. It’s my white whale, the big one, the strangest truth: the possibility that we are not at all what we think, and that, perhaps, with an understanding of the “program,” the rules could be bent to do things that seem impossible, to change reality. If this is a dream, let it be a great one.

Putting simulation theory aside, here are ways one group decided we might avoid a negative artificial intelligence singularity. The list below is my paraphrase, minus an item that was more mitigation than prevention (I’ve kept the original numbering, so one number is skipped).

Remember, an AGI is an Artificial General Intelligence, a machine that could successfully perform any intellectual task a human being could.

Here’s how one group of humans thought we might survive:

1. Human-enforced fascism – A sufficiently powerful dictatorship stops ongoing technological development. Not a great option, but we don’t go extinct.

2. “Friendly” AGI fascism – A “Guardian” system with intelligence at, say, 3x the human level and a stable goal system enforces a stable social order to prevent a negative AI explosion.

3. Virtual world AGI sandbox – Create an AI system that lives in a virtual world that it thinks is the real world. If it doesn’t do anything too nasty, let it out (or leave it in there and let it discover things for us).

4. Build an oracular question-answering AGI system, not an autonomous AGI agent – If you build an AGI whose only motive is to answer human questions, it’s not likely to take over the world or do anything else really nasty.

5. Create upgraded human uploads or brain-enhanced humans first – Maybe we’ll create smarter (human) beings that can figure out more about the universe than us, including how to create smart and beneficial AGI systems. Whether this is a safer scenario than well-crafted superhuman-but-very-nonhuman AGI systems is not at all clear.

7. Coherent Extrapolated Volition – This idea is from Eliezer Yudkowsky: Create a very smart AGI whose goal is to figure out “what the human race would want if it were as good as it wants to be” (very roughly speaking: see here for details). Aside from practical difficulties, it’s not clear that this is well-defined or well-definable.

8. Individual Extrapolated Volition – Have a smart AGI figure out “what Ben Goertzel (the author of this post) would want if he were as good as he wants to be” and then adopt this as its goal system. (Or, substitute any other reasonably rational and benevolent person for Ben Goertzel if you really must….) This seems easier to define than Coherent Extrapolated Volition, and might lead to a reasonably good outcome so long as the individual chosen is not a psychopath or religious zealot or similar.

9. Make a machine that puts everyone in their personal dream world – If a machine were created to put everyone in their own simulated reality, we could all live out our days blissfully and semi-solipsistically until the aliens come to Earth and pull the plug.

10. Engineer a very powerful nonhuman AGI that has a beneficial goal system – … certainly the most straightforward option. Opinions differ on how difficult this will be.

11. Let humanity die the good death – Nietzsche opined that part of living a good life is dying a good death. You can apply this to species as well as individuals. What would a good death for humanity look like? Perhaps a gradual transcension: let humans’ intelligence increase by 20% per year, for example, so that after a few decades they become so intelligent they merge into the overmind along with the AGI systems … transcend slowly enough and you feel yourself ascend to godhood and become one with the intelligent cosmos. There are worse ways to bite it.
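A quick sanity check on the “20% per year” figure in option 11 (my own back-of-envelope arithmetic; the original list gives only the rate): compounding a 1.2x gain annually really does reach a dramatic scale within the “few decades” the item mentions.

    # Back-of-envelope compounding for option 11's "20% per year" example.
    # My own arithmetic; the original post supplies only the growth rate.
    for years in (10, 20, 30, 40):
        factor = 1.2 ** years
        print(f"after {years} years: ~{factor:,.0f}x baseline intelligence")

Thirty years of 20% annual gains works out to roughly 237x baseline, and forty years to roughly 1,470x.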

Read the original

In the case where we are already too late, the above list would be the creators of the singularity telling us how to avoid it… an intriguing thought.

If you think this could be a simulation, join me in watching for events that happen despite astronomical odds. I’ve had a few happen to me personally: mind-blowingly strange events no one would believe. Perhaps you have as well. Clues. Here’s a random, unverified example:

During the 1970s I was walking to a friend’s house and was not on my usual route at all. I had walked about half a mile, with about the same distance to go, when ahead of me I could hear a telephone ringing. The payphone on the next corner had no one near it, so I answered the call. The person on the other end asked to speak to Pauline, and his voice was familiar. It was my friend calling his girlfriend. He was not happy with me; he thought that I was at his girlfriend’s! I explained it was a payphone, and he quickly asked what the number was. It was just one digit different. I continued to his house, where he then asked me to take him to the payphone (I don’t think he believed me). What are the odds of that? (Link)

What clues are hidden in the details of these glitches? My gut feeling is that they can help us wake up within the dream and fix it.

You in?

TrueStrange.com