War of algorithms

When it comes to the use of artificial intelligence in the military, the science-fiction nightmare immediately comes to mind: a rebellious, deadly AI that rises up against humanity to destroy it. Unfortunately, in the development of warfare algorithms, the fear among the military and political leaders that "the enemy will get ahead of us" is just as strong.

Algorithmic warfare, according to many, could fundamentally change the face of the battlefield as we know it, mainly because combat would unfold faster, far outpacing people's ability to make decisions. American general Jack Shanahan (1), head of the US Joint Artificial Intelligence Center, emphasizes, however, that before introducing artificial intelligence into arsenals, we must ensure that these systems remain under human control and do not start wars on their own.

“If the enemy has machines and algorithms, we will lose this conflict”

The driving force of algorithmic warfare is the use of advances in computer technology in three main areas. First, decades of exponential growth in computing power, which has greatly improved the performance of machine learning. Second, the rapid growth of "big data" resources, that is, huge, usually automatically managed and continuously created data sets suitable for machine learning. Third, the rapid development of cloud computing technologies, through which computers can easily access data resources and process them to solve problems.

A war algorithm, as defined by experts, must first be expressed in computer code. Second, it must run on a platform capable of both collecting information and making choices, taking decisions that, at least in theory, do not require human intervention. Third, and this seems obvious but is not necessarily so, since only in action does it become clear whether a technique intended for something else can be useful in war and vice versa, it must be able to operate in conditions of armed conflict.

An analysis of the above criteria and their interaction shows that algorithmic warfare is not a separate technology such as, for example, energy weapons or hypersonic missiles. Its effects are wide-ranging and are gradually becoming ubiquitous in hostilities. For the first time, military vehicles are becoming intelligent, potentially making the defense forces that deploy them more efficient and effective. At the same time, such intelligent machines have clear limitations that need to be well understood.

"" Shanahan said last fall in an interview with former Google CEO Eric Schmidt and Google vice president of international affairs Kent Walker. "".

The draft report of the US National Security Commission on Artificial Intelligence refers to China more than 50 times, highlighting China's official goal of becoming the world leader in AI by 2030.

These words were spoken in Washington at a special conference held after the aforementioned Shanahan's center presented its preliminary report to Congress, prepared in collaboration with renowned experts in the field of artificial intelligence, including Microsoft Research director Eric Horvitz, AWS CEO Andy Jassy and Google Cloud AI chief Andrew Moore. The final report is to be published in October 2020.

Google employees protest

A few years ago, the Pentagon got involved in algorithmic warfare through a number of AI-related projects under Project Maven, based on collaboration with technology companies, including Google and startups such as Clarifai. The work mainly concerned artificial intelligence to facilitate the identification of objects in drone footage.

When Google's participation in the project became known in the spring of 2018, thousands of employees of the Mountain View giant signed an open letter protesting the company's involvement in hostilities. After months of labor unrest, Google adopted its own set of rules for AI, which includes a ban on involvement in weapons projects.

Google also committed to completing its Project Maven contract by the end of 2019. But Google's exit did not end Project Maven: the contract was taken over by Peter Thiel's Palantir. The Air Force and the US Marine Corps plan to use large unmanned aerial vehicles, such as the Global Hawk, as part of Maven, each of which is supposed to visually monitor up to 100 square kilometers.

The events around Project Maven made it clear that the US military urgently needs its own cloud, as Shanahan said during the conference. This was evident when video footage and system updates had to be trucked to military installations scattered across the field. Microsoft, Amazon, Oracle and IBM are competing to build the unified cloud computing system that will help solve problems of this type, as part of the army's unified IT infrastructure project, JEDI. Google is not, because of its ethical code.

It is clear from Shanahan's statements that the great AI revolution in the military is only just beginning, and the role of his center in the US armed forces is growing. This is clearly seen in the estimated JAIC budget: in 2019 it totaled just under $90 million; in 2020 it should already be $414 million, or about 10 percent of the Pentagon's $4 billion AI budget.

The machine recognizes a surrendering soldier

US forces are already equipped with systems such as the Phalanx (2), a type of autonomous weapon used on US Navy ships to attack incoming missiles. When a missile is detected, the system turns on automatically and destroys everything in its path. According to Ford, it can engage four or five missiles in half a second without a human having to review each target.

Another example is the semi-autonomous Harpy (3), a commercial unmanned system used to destroy enemy radars. For example, in 2003, when the US launched its strike on an Iraq that fielded airborne-radar interception systems, Israeli-made drones helped find and destroy them so that American aircraft could safely enter Iraqi airspace.

3. Launch of an IAI Harpy system drone

Another well-known example of autonomous weapons is the Korean Samsung SGR-1 system, deployed in the demilitarized zone between North and South Korea and designed to identify and fire at intruders at a distance of up to four kilometers. According to its description, the system "can distinguish between a person who surrenders and a person who does not surrender" based on the position of their hands or recognition of the position of a weapon in their hands.

4. Demonstration of the Samsung SGR-1 system detecting a surrendering soldier
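The hand-position rule described above can be caricatured in a few lines of code, purely to illustrate how crude such a criterion is. Everything here (the function name, the keypoints, the coordinates) is invented for illustration and has nothing to do with the real SGR-1 software:

```python
# Toy caricature of the described criterion; all names and coordinates
# are invented for illustration, not taken from the real system.

def looks_surrendering(left_wrist_y, right_wrist_y, head_y):
    """In image coordinates, a smaller y means higher in the frame.
    Treat the person as surrendering if both wrists are above the head."""
    return left_wrist_y < head_y and right_wrist_y < head_y

print(looks_surrendering(50, 55, 80))    # True: both hands raised
print(looks_surrendering(120, 118, 80))  # False: hands down
```

A rule this simple obviously fails in countless real situations (a raised rifle, an occluded arm), which is exactly why handing it lethal authority is controversial.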

Americans are afraid of being left behind

Currently, at least 30 countries around the world use automatic weapons with varying levels of AI development and use. China, Russia and the United States see artificial intelligence as an indispensable element in building their future position in the world. "Whoever wins the AI race will rule the world," Russian President Vladimir Putin told students in August 2017. China's President Xi Jinping has not made such high-profile statements in the media, but he is the main driver of the directive calling for China to become the dominant force in AI by 2030.

There is growing concern in the US about a new "Sputnik moment", a demonstration that the United States is ill-equipped to meet the new challenges posed by artificial intelligence. And this can be dangerous for peace, if only because a country threatened by domination may want to eliminate the enemy's strategic advantage in another way, that is, by war.

Although the original purpose of Project Maven was to help find ISIS fighters, its significance for the further development of military artificial intelligence systems is enormous. Electronic warfare based on recorders, monitors and sensors (including mobile and airborne ones) generates a huge number of heterogeneous data streams, which can only be used effectively with the help of AI algorithms.

The hybrid battlefield has become a military version of the IoT, rich in information important for assessing tactical and strategic threats and opportunities. Being able to manage this data in real time brings great benefits, while failure to learn from it can be disastrous. The ability to quickly process the flow of information from various platforms operating in multiple domains provides two major military advantages: speed and reach. Artificial intelligence makes it possible to analyze the dynamic conditions of the battlefield in real time and to strike quickly and optimally, while minimizing the risk to one's own forces.

This new battlefield is also ubiquitous. AI is at the heart of the so-called drone swarms, which have received a lot of attention in recent years. With the help of ubiquitous sensors, AI not only allows drones to navigate hostile terrain, but may eventually allow complex formations of various types of unmanned aerial vehicles operating across many domains, armed in ways that enable sophisticated combat tactics, adapting immediately to enemy maneuvers to exploit the battlefield and report changing conditions.

Advances in AI-assisted targeting and navigation are also enhancing the prospects for a wide range of tactical and strategic defense systems, especially missile defense, by improving the methods of detecting, tracking and identifying targets.

AI is constantly increasing the power of the simulation and gaming tools used to research nuclear and conventional weapons. Mass modeling and simulation will be essential to develop a comprehensive multi-domain targeting system for combat control and complex missions. AI also enriches multi-party war-gaming (5): it allows players to add and modify game variables to explore how dynamic conditions (weaponry, allied involvement, additional troops, etc.) can affect performance and decision-making.

For the military, object identification is a natural starting point for AI: comprehensive, rapid analysis of the growing volume of images and information collected from satellites and drones is needed to find objects of military significance, such as missiles, troop movements and other intelligence-relevant data. Today, the battlefield spans every landscape (sea, land, air, space and cyberspace) on a global scale.
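At its core, automated object identification means comparing an observed image against learned patterns. The following toy sketch (a deliberate simplification, with invented 8x8 "image" templates, not any real military system) shows the simplest form of the idea, nearest-template classification:

```python
import numpy as np

# Toy illustration of object identification: classify small grayscale
# patches by comparing them to class templates. Templates and labels
# here are invented; real systems use learned neural-network features.

rng = np.random.default_rng(0)

# Hypothetical templates on an 8x8 pixel grid: a bright vertical bar
# ("vehicle") versus a bright horizontal bar ("building").
vehicle = np.zeros((8, 8)); vehicle[:, 3:5] = 1.0
building = np.zeros((8, 8)); building[3:5, :] = 1.0
templates = {"vehicle": vehicle, "building": building}

def identify(patch):
    """Return the label of the template closest to the patch (L2 distance)."""
    return min(templates, key=lambda k: np.linalg.norm(patch - templates[k]))

# A noisy observation of a "vehicle" is still identified correctly.
noisy = vehicle + rng.normal(0.0, 0.2, size=(8, 8))
print(identify(noisy))  # vehicle
```

Modern systems replace the fixed templates with features learned from millions of examples, but the underlying question, "which known class does this observation most resemble?", is the same.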

Cyberspace, as an inherently digital domain, is naturally suited to AI applications. On the offensive side, AI can help find and target individual network nodes or individual accounts in order to collect information, disrupt or misinform. Cyber attacks on internal infrastructure and command networks can be disastrous. On the defensive side, AI can help detect such intrusions and find destructive anomalies in civilian and military operating systems.
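Anomaly detection of the kind described above often starts from a statistical baseline of normal behavior. A minimal sketch, with invented traffic numbers and a simple z-score rule standing in for far more sophisticated models:

```python
import statistics

# Toy sketch of anomaly detection: flag per-minute request counts that
# deviate far from an assumed baseline of normal traffic. The numbers
# and the threshold are illustrative assumptions, not real telemetry.

baseline = [100, 98, 103, 97, 101, 99, 102, 100, 98, 102]
mean = statistics.mean(baseline)     # 100.0
stdev = statistics.stdev(baseline)   # sample standard deviation

def is_anomalous(count, threshold=4.0):
    """Flag counts more than `threshold` standard deviations from the mean."""
    return abs(count - mean) / stdev > threshold

print(is_anomalous(101))  # False: within normal variation
print(is_anomalous(450))  # True: a spike worth investigating
```

Real intrusion-detection systems learn multidimensional baselines and adapt them over time, but the principle of "learn normal, flag deviations" is the same.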

Expected and dangerous speed

However, quick decision-making and prompt execution may not serve effective crisis management well. The advantages of artificial intelligence and autonomous systems on the battlefield may leave no time for diplomacy, which, as we know from history, has often succeeded as a means of preventing or managing a crisis. In practice, slowing down, pausing and taking time to negotiate can be the key to victory, or at least to averting catastrophe, especially when nuclear weapons are at stake.

Decisions about war and peace cannot be left to predictive analytics. There are fundamental differences between using data for scientific, economic and logistical purposes and using it to predict human behavior.

Some may perceive AI as a force that erodes mutual strategic sensitivity and thus increases the risk of war. Accidentally or intentionally corrupted data can lead AI systems to perform unintended actions, such as misidentifying and attacking the wrong targets. The speed demanded of war algorithms may mean premature or even unnecessary escalation that impedes rational crisis management. On the other hand, algorithms will not wait or explain themselves either, because speed is expected of them too.

We also recently presented in MT a disturbing aspect of how artificial intelligence algorithms function: even experts do not know exactly how AI arrives at the results we see at the output.

In the case of war algorithms, we cannot afford such ignorance about their nature and how they "think". We do not want to wake up in the middle of the night to nuclear flares because "our" or "their" artificial intelligence decided it was time to finally settle the game.
