In what may be the scariest news this week, the South China Morning Post (SCMP) reported that China has been testing unmanned drone submarines since as far back as the 1990s. The news emerged when the Chinese military declassified certain elements of the program.
The SCMP further indicated that the submarines could detect a mock craft, use artificial intelligence to identify its origin, and hit it with a torpedo, all without any human input. The Chinese military is reported to have even conducted a successful field test back in 2010.
Researchers also told the SCMP that the drones could be trained to hunt in packs. If that seems very scary, it's because it is. If the Chinese military had this technology back in 2010, and was developing it in the 1990s, what could it possibly have now?
“The needs of future underwater warfare bring new development opportunities for unmanned platforms,” stated a research paper on the program, published last week in the Journal of Harbin Engineering University.
The SCMP also reported that although the drones were likely never used in a real combat scenario, the way they detect their targets makes them very susceptible to error, which is not something you want in a weaponized submarine.
The SCMP further speculated that China released this information now as a show of strength. Increasing tensions among China, the U.S., Japan, and other countries over the Taiwan Strait may be pushing the country to show off its military might.
Whatever the reason for declassifying the documents, many experts argue that developing autonomous weapons is never a good idea. In an open letter published by The Future of Life Institute, a number of AI and robotics experts argued that autonomous (or semi-autonomous) weapons systems cannot be deployed responsibly or ethically. They write, in part:
“Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.
If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. . . Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations, and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”
Will China continue its weaponized submarine program, or does this declassification indicate the end of the program?