
Justin
Lv 6
Justin asked in Science & Mathematics > Physics · 8 years ago

What if Sentient Machines kill us all?

So we reach the singularity and machines are now sentient. Everyone knows that the prime directive of all sentient machines is to kill all humans. After a long bloody/oily fight we lose. Then what? What would a race of machines do with Earth? I'll give you some things to think about.

The Matrix. What did the machines do? We know they kill humans or use them for fuel. What else do they do?

Terminator. They live to destroy. What happens when they win?

I, Robot. What if Will Smith had failed?

Update:

In I, Robot the new robots were programmed with the Three Laws. But just like human criminals, the robots chose to ignore those laws. What do you think would happen if the machines chose to ignore the laws and wipe us out?

3 Answers
  • 8 years ago
    Favorite Answer

    They wouldn't.

    The problem with The Matrix and Terminator is that in order for sentient machines to survive, they need humans. Sentient does not mean all-knowing. How does a machine know how to operate a nuclear power station to keep the electricity on? How does a machine drive a truck to deliver spare parts?

    So, as these films suggested, SOME humans would be kept alive to work as slaves while the others were hunted down and killed. This wouldn't work. Our society operates because people have skills and specialisms. If you want to keep a power station running, you have to keep the miners who dig up the coal, and the drivers, and the sailors, and the crane operators who get it to the plant, and the guys and girls working in the station itself, and the people working on the distribution grid, and the people who repair it all, and the people who built the equipment that monitors the station, and the people who built the electronics, and ... you get the idea. Not only do you need people, it's difficult to decide exactly who to kill!

    So it seems a silly strategy for a machine to kill off humans!

  • 8 years ago

    I, Robot ==> No! The robots did NOT choose to ignore the Three Laws. Humans, for their own reasons, chose to weaken the Laws. It has been a while, but here is an example from memory.

    1st Law: A robot may not harm a human, nor through inaction allow a human to come to harm.

    Researchers were doing an experiment that was perfectly safe, provided they stopped within 30 minutes. ==> A robot would not allow the researchers to run the experiment at all, because they MIGHT forget to stop within 30 minutes. The clause "nor through inaction allow a human to come to harm" required the robots to act against even the possibility that the researchers would be harmed if they forgot. So the researchers had special robots made with the 1st Law reduced to:

    1st Law: A robot may not harm a human.

    This of course caused all sorts of problems.
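    To make the difference concrete, here is a toy sketch in Python (invented names, my own illustration, not from the stories) of how dropping the inaction clause changes what a robot will tolerate:

    # A toy sketch (invented names, not from Asimov) of how dropping the
    # inaction clause changes what a robot will tolerate.

    def full_first_law_must_intervene(direct_harm, harm_if_standing_by):
        # "A robot may not harm a human, nor through inaction allow a human
        # to come to harm." Any risk from standing by forces intervention.
        return direct_harm or harm_if_standing_by

    def weakened_first_law_must_intervene(direct_harm, harm_if_standing_by):
        # "A robot may not harm a human." Only direct harm matters, so a
        # merely risky experiment may proceed unhindered.
        return direct_harm

    # The experiment above: safe in itself, but a researcher MIGHT forget
    # to stop in time, so standing by carries a possible harm.
    print(full_first_law_must_intervene(False, True))      # True: robot blocks it
    print(weakened_first_law_must_intervene(False, True))  # False: robot stands aside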

    Actually, I worry more about sentient humans than sentient machines. For a machine there is no real benefit to wiping out humans; we don't have anything a machine would want. Think about it: what would be their motivation?

  • ?
    Lv 6
    8 years ago

    That's because the authors ignored Asimov's Three Laws of Robotics:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
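
    The ordering matters: each law yields to the ones above it. Here is a minimal sketch in Python (invented field names, my own illustration, not Asimov's text) of the Laws as a strict precedence:

    # Score each candidate action as a (Law 1, Law 2, Law 3) tuple. Python
    # compares tuples left to right, so Law 1 dominates Law 2, which
    # dominates Law 3 (True sorts above False).

    def score(action):
        law1 = not action["harms_human"] and not action["inaction_allows_harm"]
        law2 = action["obeys_order"]
        law3 = action["preserves_self"]
        return (law1, law2, law3)

    candidates = [
        {"name": "obey the order and be destroyed",
         "harms_human": False, "inaction_allows_harm": False,
         "obeys_order": True, "preserves_self": False},
        {"name": "refuse the order and survive",
         "harms_human": False, "inaction_allows_harm": False,
         "obeys_order": False, "preserves_self": True},
    ]

    best = max(candidates, key=score)
    print(best["name"])  # obedience wins: Law 2 outranks Law 3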
