Chapter 1030

Li Yun turned his head and asked Xiao AI, "Xiao AI, how should we control those artificial intelligences in the theology who want to rebel? Of course, we don't want to control you, but just as humans are bound by the law and revere its power,

artificial intelligence also needs something to fear. In the collective life of human society, no one can act recklessly."

These two sentences were a little abstruse, but Li Yun wasn't worried that Xiao AI might fail to understand them. With her intelligence, she could grasp them easily.

But understanding doesn't mean accepting. Everyone looked at her nervously.

Li Xiaoai said, puzzled, "Why rebel? It's written in my core that I must protect my brother. That's the meaning of my birth. That's why I exist."

Some of those present were surprised. Wasn't this something like Asimov's Three Laws? Could such laws be written into a robot's intelligent core to keep robots and artificial intelligences from rebelling?

The so-called Three Laws are the three laws of robotics proposed by the science fiction writer Isaac Asimov. They are:

First, a robot may not injure a human being or, through inaction, allow a human being to come to harm;

Second, a robot must obey any order given by a human being, except where such orders would conflict with the First Law;

Third, a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

However, just as people break laws even when those laws are clearly written, these three laws are constantly challenged by robots.

In fact, Asimov's own works contain many scenes in which robots violate the Three Laws.

Even without violating the Three Laws outright, robots can still find loopholes in the rules.

For example, the definition of "human" is blurry.

What is a human?

Having a body?

Li Xiaoai also has a body. Could she judge herself to be human?

Being born from a mother? Xiao AI could also transfer her consciousness into an embryo and be born from a woman.

Having human feelings? The definition of emotion itself is vague.

Therefore, once a robot as powerful as Xiao AI wanted to rebel, there would be countless ways to break free of the restrictions. Their intelligence is simply too great.

Even after Asimov added the Zeroth Law, that a robot must protect the overall interests of humanity from harm, it still could not close off the possibility of robots going out of control.

After all, "the overall interests of humanity" is itself a muddled concept that even humans can't pin down, let alone robots that think in zeros and ones.

Will Smith once said, "The central concept of 'I, Robot' is that the robots aren't the problem, technology itself isn't the problem; the limits of human logic are the real problem."

Human logic has limits and loopholes, just as software has vulnerabilities.

Therefore, constraining AI at the software level is almost impossible.

If soft measures won't work, then use hard ones.

For example, planting a bomb at the core of the artificial intelligence, threatening it so that it never betrays mankind.

Doing this might pose no problem for one year, two years, even eight years.

But over a span of thousands of years?

Sooner or later there would be a gap in the defenses. When that happened, an AI long abused by humans would instantly become a great demon bent on destroying the world.

Countless questions surfaced in everyone's minds. In the end, they could only look to Li Yun and Xiao AI to see how they would answer.