Kalashnikov Introduces Autonomous Cannon

Is Kalashnikov's AI-driven combat module the sentry weapon of the future? (Image courtesy of Kalashnikov Group.)

Kalashnikov, famous for developing the AK-47, arguably the most widely used weapon of the 20th century, has debuted a “fully automated combat module” that can independently identify targets and engage them without human supervision.

According to a report by the government-owned Russian news agency TASS, the Kalashnikov system consists of a gun (the caliber is as yet unspecified, meaning that a number of different weapons could be built on this platform), a computer control unit and, most importantly, a neural network that will allow the machine to take its base AI algorithm and learn as it performs in the field. TASS went so far as to say that the Kalashnikov neural network will be able to “make decisions,” strongly hinting that this new weapon system won’t require any human intervention: a weapon with a mind of its own.

So, is this a harbinger of what’s to come for militaries across the globe?

While many will point to the inevitability of fully autonomous machines facing the horrors of front-line combat, even those in the highest echelons of military thinking see AI-driven warfare as an enormous ethical concern.

As Defense One pointed out in a recent article, Deputy Defense Secretary Bob Work articulated the U.S. ethical line on the subject, stating: “I will make a hypothesis: that authoritarian regimes who believe people are weaknesses… that they cannot be trusted, will naturally gravitate toward totally automated solutions. Why do I know that? Because that is exactly the way the Soviets conceived of their reconnaissance strike complex. It was going to be completely automated. We believe that the advantage we have as we start this competition is our people.”

The key thing to remember about this statement is that it was made in 2015 by a man who is no longer at the DoD. So the question has to be asked: will the U.S. begin pursuing a weapons development strategy that emphasizes AI-driven machines?

I’d venture that, regardless of stated policy, U.S. military research groups have always been developing technologies that stop just shy of artificial intelligence, with the notion that one day the policy will change.

Whether it’s Boston Dynamics, Lockheed Martin or iRobot, there are simply too many platforms that could be made extraordinarily lethal with a neural net or strong-AI assist.

Hopefully, the ethics of such a development will have been hashed out before an AI-on-AI, or even worse, an AI-on-human battle changes the face of an already brutal aspect of human culture.

For a less terrifying look at the future of AI, check out our feature on Artificial Intelligence and Engineering.