Is artificial intelligence becoming the future of warfare?


Artificial intelligence has been used in the Russian-Ukrainian war, and the results of its use could offer a glimpse of future conflicts.

Since the beginning of the Russian invasion of Ukraine, there has been a lot of talk about the use of advanced weapons. Ukrainian forces have managed largely to halt, and even push back, the powerful army of the Russian Federation using modern, easily portable weapons such as the Javelin anti-tank missile system and the similarly portable Stinger anti-aircraft system.

What is less talked about is the use of artificial intelligence in this war, even though its application could show what the future of warfare will look like.

The Pentagon, for example, is using AI systems to extract as much information as possible from the battlefield and, based on it, to draw up new plans and make adjustments for the future.

Thus, the US military is quietly using artificial intelligence and machine learning tools to analyze vast amounts of data, generate useful intelligence, and learn about Russian tactics and strategy, said Maynard Holiday, a senior US Department of Defense official.

The technology for collecting data and detecting specific objects in drone imagery has advanced significantly in recent years, and satellite imagery is now being used for similar purposes.
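
As a rough illustration of this kind of automated object search, the sketch below runs an off-the-shelf detector from torchvision over a single aerial photograph. The model, file name, and confidence threshold are illustrative assumptions, not a description of any system mentioned in the article.

```python
# Illustrative sketch only: generic object detection on one aerial image using
# a COCO-pretrained detector from torchvision. The input file is a hypothetical
# placeholder; real military pipelines are not described in the source article.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # pretrained weights (torchvision >= 0.13)
model.eval()

# Load the image as a float tensor in [0, 1], as the detector expects.
image = convert_image_dtype(read_image("aerial_frame.jpg"), torch.float)  # hypothetical file

with torch.no_grad():
    predictions = model([image])[0]

# Keep only confident detections and report their bounding boxes.
for box, label, score in zip(predictions["boxes"], predictions["labels"], predictions["scores"]):
    if score > 0.8:
        print(f"class {int(label)} at {box.tolist()} (score {score:.2f})")
```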

In addition to processing large amounts of data, artificial intelligence is increasingly used in autonomous armed systems, popularly called “killer robots”: systems programmed to independently search for and attack specific types of targets, such as enemy tanks, warehouses, bunkers, or even soldiers and high-ranking commanders. The goal of these efforts is to reduce the role of human decision-making; that is, artificial intelligence should decide on attacks by itself.

Advances in technology

Advances in artificial intelligence have made it easier to incorporate autonomy into weapons systems and raised the prospect that more capable systems will eventually decide for themselves whom to kill. A United Nations report published last year stated that a deadly drone with the ability to make independent decisions and attack targets may have been used in the Libyan civil war.

The technology is still prone to errors, and because of numerous concerns about how these “killer robots” act and make decisions, many international organizations have called for a ban on the use of such technologies for military purposes.

Control of such AI systems is especially crucial when one side’s military operations are going badly and the “losing” side is tempted to use new technologies to gain an advantage on the ground. There are also claims that Russia has begun using the KUB-BLA automated “suicide drone” in Ukraine, which is reportedly able to identify targets using artificial intelligence.

Whether fully automated “killer robots” will one day become an integral part of military forces deployed on the battlefield remains to be seen, but current trends and technological development certainly point in that direction.

Dr. Dinko Osmanković, an associate professor at the Faculty of Electrical Engineering in Sarajevo, explains that the use of artificial intelligence is (almost) always governed by the application itself.

“If it’s about military systems, then artificial intelligence should achieve military goals or at least help achieve them.”

Since he does not work on the development of military technologies, Osmanković says he can only offer estimates of how AI might be applied there.

Application of robots

“Let’s say we want to take some territory from the opponent. Sending classic infantry (humans) to do that would lead to unwanted losses, but if we send a unit of robots to do the task, there are none. Similar systems are already in use, but such machines are mostly controlled remotely by an operator (it is essentially a form of teleoperation), and most often they are unmanned aerial vehicles. In addition, reconnaissance drones can operate completely autonomously: as input they are given GPS coordinates to monitor, and the aircraft then send back data that is analyzed by experts, possibly with the help of artificial intelligence-based systems.”
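
A minimal sketch of the waypoint-driven reconnaissance idea Osmanković describes: a simulated vehicle is given a list of GPS coordinates, visits them in order, and returns an observation log for later analysis. All names and the “sensing” step are hypothetical placeholders, not part of any real system.

```python
# Toy sketch of autonomous waypoint-based reconnaissance: the vehicle receives
# GPS coordinates as input, visits them in order, and returns a log of what it
# observed. Everything here is a simplified placeholder.
from dataclasses import dataclass


@dataclass
class Waypoint:
    lat: float
    lon: float


def fly_route(waypoints: list[Waypoint]) -> list[dict]:
    """Visit each waypoint and record a (placeholder) observation."""
    log = []
    for wp in waypoints:
        # A real system would command the autopilot and capture imagery here;
        # this sketch only records the visited coordinates.
        log.append({"lat": wp.lat, "lon": wp.lon, "image": None})
    return log


if __name__ == "__main__":
    route = [Waypoint(50.450, 30.523), Waypoint(50.460, 30.530)]  # hypothetical coordinates
    for entry in fly_route(route):
        print(entry)
```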

Raising these systems to a higher level of complete autonomy requires huge amounts of data from which AI systems can learn, and the teleoperation systems mentioned above provide a good basis for this: large amounts of sensor data, but also the operators’ decisions.
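
One common way to learn from such logs is behavioral cloning: the recorded sensor readings are treated as inputs and the operator’s recorded decisions as labels for a supervised model. The toy sketch below uses entirely synthetic data to illustrate that pattern; it is not a description of any system mentioned in the article.

```python
# Minimal behavioral-cloning sketch: fit a simple policy that imitates logged
# operator decisions from logged sensor features. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
sensor_features = rng.normal(size=(1000, 8))                   # hypothetical sensor readings per timestep
operator_decisions = (sensor_features[:, 0] > 0).astype(int)   # hypothetical logged binary decisions

# Fit a policy that predicts the operator's decision from the sensor features.
policy = LogisticRegression().fit(sensor_features, operator_decisions)
print("agreement with the operator on the logged data:",
      policy.score(sensor_features, operator_decisions))
```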

What particularly worries opponents of letting AI systems make decisions and carry out attacks are the problems already visible in the civilian sector.

Artificial intelligence is widely used in surveillance systems in many countries, primarily to identify persons of interest, but these technologies have often proven inaccurate, identifying the wrong people, says Milić. This has been especially noticeable when identifying people with darker skin, which is why condemnations from human rights organizations are growing louder.

If an AI can confuse two people whose only similarity is their skin color, how can we trust it to decide independently which target to eliminate?

“Can AI in military applications choose its own targets? It is possible; the only question, as usual in military missions, is what you are prepared for as a military strategist: how many resources you invest, what the possible losses are, and what the possible military and/or political consequences of such decisions are”, says Osmanković.

Collateral damage

On the other hand, advocates of this advanced technology claim that AI could reduce the number of civilian deaths because, unlike with humans, emotions, fatigue, and stress play no part in its lethal decisions.

Every war brings collateral damage: destroyed civilian property, civilian casualties, but also soldiers who could have been saved, says the expert from the Sarajevo-based ETF.

“AI can help achieve military goals, but that achievement almost always comes with great risks, and failing to achieve those goals potentially brings even bigger ones. Of course, here we enter the domain of the ethics of such decisions and of what we as a society are prepared to accept in order to achieve (some) goals”.

He also addressed the ethics of developing artificial intelligence for military purposes.

“It depends on what we as a society want. Is it acceptable to protect security and democracy by killing people? Are we ready, when eliminating military targets, to agree to possible collateral victims? Are we ready to use AI to reduce that number of collateral victims, but not to zero?”

Osmanković also notes that AI ethics is an area of intensive research in which considerable progress has been made, citing MIT’s Moral Machine as one project that has advanced the field.

“The flaws, in ethical terms, that have been identified in civilian applications of AI can easily be generalized to military applications”.