ON NOVEMBER 12th a video called “Slaughterbots” was uploaded to YouTube. It is the brainchild of Stuart Russell, a professor of artificial intelligence at the University of California, Berkeley, and was paid for by the Future of Life Institute (FLI), a group of concerned scientists and technologists that includes Elon Musk, Stephen Hawking and Martin Rees, Britain’s Astronomer Royal. It is set in the near future, in which small drones fitted with face-recognition systems and shaped explosive charges can be programmed to seek out and kill known individuals or classes of individuals (those wearing a particular uniform, for example). In one scene, the drones are shown collaborating with each other to gain entrance to a building. One acts as a petard, blasting through a wall to grant access to the others.
“Slaughterbots” is fiction. The question Dr Russell poses is, “How long will it remain so?” For military laboratories around the planet are busy developing small, autonomous robots for use in warfare, both conventional and unconventional. In America, in particular, a programme called MAST (Micro Autonomous Systems and Technology), which has been run by the US Army Research Laboratory in Maryland, is wrapping up this month after ten successful years. MAST co-ordinated and paid for research by a consortium of established laboratories, notably at the University of Maryland, Texas A&M University and Berkeley (the work at Berkeley is unrelated to Dr Russell’s). Its successor, the Distributed and Collaborative Intelligent Systems and Technology (DCIST) programme, which began earlier this year, is now getting into its stride.