11/28/2017 / By Robert Jonathan
A prominent artificial intelligence expert is warning that facial recognition could make so-called killer robots even more lethal. University of California, Berkeley computer science professor Stuart Russell appears at the end of a short film called “Slaughterbots,” offering his take on the new wave of AI technology.
The film, produced by the Campaign to Stop Killer Robots, dramatizes the risks that mini-drones operating with minimal human supervision pose to ordinary citizens. It was screened at a recent international conference in Geneva, Switzerland, that considered whether to formally ban AI-powered autonomous weapons. The unsettling fictional footage opens with a Silicon Valley-style tech product launch and ends with a swarm of palm-sized drones, equipped with explosives and facial recognition, hunting down and killing college students with shots to the head as they try to flee a classroom in Edinburgh, Scotland.
Dr. Russell delivers the film’s epilogue, the Mirror of London noted, acknowledging that while AI technology can provide enormous benefits to the world, there’s a catch.
[A]llowing machines to choose to kill humans will be devastating to our security and freedom — thousands of my fellow researchers agree. We have an opportunity to prevent the future you just saw, but the window to act is closing fast.
Tesla and SpaceX CEO Elon Musk has repeatedly warned that rapidly advancing artificial intelligence could give rise to self-replicating machines that might threaten humanity. Health Ranger Mike Adams, the founding editor of Natural News, has similarly cautioned that once AI technology develops into highly evolved, self-aware systems, the human race will have a big Terminator problem on its hands. Further advances in miniature-drone technology, as depicted in “Slaughterbots” (rather than a supercharged cyborg with big arms and legs), could make the situation even more ominous. (Related: Read more about artificial intelligence at Robotics.news.)
Musk was one of about 100 robotics experts who signed an open letter in August recommending that the U.N. prohibit the use of AI weaponry.
The Human Rights Watch website implies that the Geneva meeting was a missed opportunity to address the killer robot threat. Instead, diplomats apparently decided to kick the can down the road, as politicians tend to do on many issues, scheduling more talks for next year.
An official with Human Rights Watch said:
A critical mass of states want to start negotiating new international law to prevent the development of killer robots, but this forum looks unlikely to deliver any time soon. Bold action is needed before technology races ahead and it’s too late to preemptively ban weapons systems that would make life and death decisions on the battlefield.
Although AI technology has gained currency in military applications, the scenario could become even more dire should autonomous drones (i.e., those operating independently of human oversight) fall into the wrong hands and be deployed in terrorist attacks or in aggression by rogue regimes.
As of this writing, “Slaughterbots” has been viewed nearly 600,000 times on YouTube. Watch it below and draw your own conclusions.