` How "Terminator" is hurting evolution of AI? - Think Deeply. Speak SimplyThink Deeply. Speak Simply How "Terminator" is hurting evolution of AI? - Think Deeply. Speak Simply
  • April 01, 2018

  • Written By Rajat Mishra

How “Terminator” is hurting the evolution of AI?

“Machine Intelligence is the last invention that humanity will ever need to make” – Nick Bostrom

When we discuss super intelligence in the media, we sensationalize and “anthropomorphize” it. The image conjured is a Terminator-like character slaying humans in its path. Not only is this description wildly unrealistic, it also creates all the wrong kinds of reactions.

Why the Terminator metaphor hurts AI

First, it makes it hard to have an even-keeled discussion about the topic. Instead of discussing how super intelligence could evolve, we discuss the behavior of Robert Patrick’s T-1000 character in the movie.

Second, “anthropomorphizing” super intelligence and looking at it through a human lens makes it tempting to transpose human greed and sins onto super intelligence. The evolution of super intelligence is non-deterministic, and assigning perceived negative human traits to it is naïve.

The final and most insidious side effect of this metaphor is fear. Instead of thinking about how super intelligence will augment human life and make it better, this comparison inevitably turns the conversation into an “us” vs. “them” debate. “Pick a side!” is the overly simplified battle cry I often hear.

A better way to think of AI

I think a better way to think of super intelligence is a little more abstract and definitely more boring. Super intelligence is a super smart, super expansive version of a high-school math concept: an Optimization Problem.

Super intelligence is an Optimization Problem

An objective function focuses the AI; it can range from playing Go to recognizing faces. The solution gets progressively better as the problem is solved iteratively on ever larger sets of data. And, as in any optimization problem, the assumptions about what can and cannot be done are critical to the solution space.
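To make the analogy concrete, here is a minimal sketch in Python. The toy objective, data, learning rate, and bounds are all hypothetical choices for illustration, not anything from this post: an objective function scored on data, improved iteratively, with a hard constraint bounding the solution space.

```python
# Toy illustration of "intelligence as an optimization problem":
# an objective function, data to score it on, iterative improvement,
# and constraints that bound the solution space.

def objective(weight, data):
    """Mean squared error of a one-parameter model y = weight * x."""
    return sum((y - weight * x) ** 2 for x, y in data) / len(data)

def improve(weight, data, learning_rate=0.01, lower=0.0, upper=10.0):
    """One iteration: nudge the weight downhill, then enforce the constraints."""
    gradient = sum(-2 * x * (y - weight * x) for x, y in data) / len(data)
    weight -= learning_rate * gradient
    return max(lower, min(upper, weight))   # the assumed, allowed solution space

# The "data set" drives the runs; more iterations shrink the objective.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1)]
weight = 0.0
for step in range(500):
    weight = improve(weight, data)

print(round(weight, 2), round(objective(weight, data), 3))
```

Nothing about this loop is menacing: change the objective, the data, or the bounds, and you change what the system learns to do.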

While this paradigm is infinitely more boring than the T-1000, I think it focuses our attention on the three important questions of super intelligence:
1) What objective functions do we want AI to solve?
2) What data sets and training runs will create the learning paradigm?
3) What assumptions should we bake in to make sure super intelligence works within the parameters of human values? (See the sketch below.)
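As a hedged illustration of how these three questions might map onto the optimization framing (every name and number here is a hypothetical stand-in, not taken from the post): question 1 is the objective, question 2 is the data that drives the runs, and question 3 shows up as constraints or penalty terms added to the objective.

```python
# Hypothetical sketch: the three questions as pieces of one optimization loop.

def task_objective(action, example):
    """Q1: what do we want optimized? A stand-in score for one example."""
    return (action - example) ** 2

def value_penalty(action, limit=5.0):
    """Q3: assumptions about acceptable behavior, encoded as a penalty."""
    return 0.0 if abs(action) <= limit else 1_000.0 * (abs(action) - limit)

def total_loss(action, dataset):
    """The system minimizes the task objective plus the value constraints."""
    task = sum(task_objective(action, e) for e in dataset) / len(dataset)
    return task + value_penalty(action)

# Q2: the data set and the runs that drive the learning.
dataset = [3.0, 4.0, 8.0, 9.0]   # the unconstrained optimum would be the mean, 6.0
best = min((total_loss(a / 10, dataset), a / 10) for a in range(-100, 101))
print(best)   # the penalty pulls the chosen action back inside the allowed range
```

The interesting design choice is in `value_penalty`: the "human values" question is not a movie plot, it is a modeling decision about which solutions we rule out before the optimization ever runs.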

We will explore the optimization paradigm of AI and super intelligence in future blogs. For now, let’s drop the T-1000 Terminator paradigm!
