Hackers Can Turn Sex Robots Into Killing Machines, Security Expert Warns

John Vibes, TMU
Waking Times

According to Nicholas Patterson, a cybersecurity lecturer at Deakin University in Melbourne, Australia, humanoid sex robots that have recently hit the market could potentially be hacked and turned into killing machines.

Patterson gave this warning in a string of interviews with various UK publications:

“Hackers can hack into a robot or a robotic device and have full control of the connections, arms, legs, and other attached tools like in some cases knives or welding devices. Often these robots can be upwards of 200 pounds and very strong. Once a robot is hacked, the hacker has full control and can issue instructions to the robot. The last thing you want is for a hacker to have control over one of these robots. Once hacked they could absolutely be used to perform physical actions for an advantageous scenario or to cause damage.”

Similar warnings surfaced last year in response to the growing popularity of Bluetooth-enabled sex toys. It was revealed that hackers could control the devices from remote locations, and even use them to spy on unsuspecting pleasure seekers.

Realistically, any device connected to the internet can be programmed to do harm, or at the very least spy on you. In fact, most smart devices are specifically designed to spy on users for data mining purposes.

The primary reason sex robots evoke a special fear of hacking is that they are made in the likeness of humans. These devices are among the very first humanoid robots that everyday consumers can interact with, which is naturally causing a great deal of anxiety for some. It has been predicted that humanoid robots will become a part of our everyday lives in the near future, but in reality they are far less dangerous than their formless counterparts.

We have been trained to believe that the threat of artificial intelligence (AI) will come in the form of a Terminator-like robot that looks indistinguishable from an actual human, while invisible AI algorithms have been silently taking over our lives for the past decade, right under our noses. The real AI threat is disembodied and comes in the form of algorithms that are sending the wrong people to jail, controlling the information you see online, and even writing the news.

The idea of a rogue robot that can walk and talk is indeed scary, but having every service and product controlled by invisible algorithms is far worse. While this technology could be used to make positive change in the world, it is unfortunately true, as many experts have pointed out, that the ethics of these devices are only as good as the humans who program them.

An article published last year in Nature explores the ethical framework behind technologies like self-driving cars. The article notes that the ethics of self-driving cars are based on the trolley problem, an ethical lifeboat scenario that would prove extremely unlikely in the real world. According to this framework, the lives of older people are valued less than those of younger generations, and the life of an athlete is likewise valued above that of a “large” woman or a homeless person.

