Killer robots are almost a reality and need to be banned, warns leading AI scientist
The technology to create killer robots is already here and needs to be banned, a leading artificial intelligence scientist has warned. Stuart Russell, a professor of computer science at the University of California, Berkeley, said “allowing machines to choose to kill humans” would be “devastating” for world peace and security.
The professor, who has worked in the field of artificial intelligence (AI) for more than 35 years, also warned that the window to ban lethal robots was “closing fast”.
His warning comes as campaigners are making the case at the United Nations (UN) this week for a global prohibition on lethal autonomous weapons systems.
Yesterday the pressure group, the Campaign to Stop Killer Robots, showed a short film it produced to a meeting of countries participating in the Convention on Conventional Weapons, which painted a dystopian scenario based on existing technologies.
The video, entitled ‘Slaughterbots’, starts with an enthusiastic CEO on stage unveiling a new product to an excited crowd. Instead of a new smartphone or other consumer tech innovation, he reveals a miniaturised drone that uses facial recognition to identify its target before administering a small yet lethal explosive blast to the skull.
The nameless CEO boasts: “A $25 million order now buys this, enough to kill half a city – the bad half. Nuclear is obsolete. Take out your entire enemy virtually risk-free. Just characterise him, release the swarm and rest easy.”
However, the film shows the weapons quickly falling into the hands of terrorists, who use them to slaughter politicians and a classroom of students.
Professor Russell said: “This short film is more than just speculation; it shows the results of integrating and miniaturising technologies that we already have.
“[AI’s] potential to benefit humanity is enormous, even in defence. But allowing machines to choose to kill humans will be devastating to our security and freedom – thousands of my fellow researchers agree.
“We have an opportunity to prevent the future you just saw, but the window to act is closing fast.”
More than 70 countries participating in the Convention on Conventional Weapons have been meeting in Geneva this week to discuss a potential worldwide ban on lethal robots.
The convention has already prohibited weapons such as blinding lasers before they were widely acquired or used.
Weapons that retain a degree of human control, such as armed drones, are already used by the militaries of advanced countries such as the UK, US, Israel and China.
The Campaign to Stop Killer Robots argues that modern low-cost sensors and recent advances in artificial intelligence have made it possible to design weapons systems that could select, attack and kill targets without human control.
Jody Williams, a 1997 Nobel Peace Laureate and co-founder of the campaign, said: “To avoid a future where machines select and attack targets without further human intervention, countries must draw the line against unchecked autonomy in weapon systems.
“With adequate political will, governments can negotiate an international treaty and ban killer robots – fully autonomous weapons – within two years’ time.”
The pressure group’s concerns echo those voiced by technology billionaire Elon Musk earlier this year.
In July the entrepreneur behind companies such as Tesla and SpaceX described AI as the “biggest risk we face as a civilisation” and warned that it needed to be regulated before “people see robots go down the street killing people”.