The specter of killer machines still looms. What does President Putin believe in?

Proponents of military robots (1) argue that automated weapons offer more options for protecting human life. Machines can get closer to the enemy than soldiers and assess threats more accurately, while human emotions sometimes paralyze the ability to make the right decisions.

Many advocates of killer robots are convinced that they will make wars less bloody because fewer soldiers will die. They note that robots, while incapable of pity, are also immune to negative human emotions such as panic, anger and revenge, which often lead to war crimes.

Proponents also argue that military technology has brought a huge reduction in civilian casualties over the past half century, and that robotizing the army would provide a mechanism for enforcing the laws of war more strictly. They claim that machines will behave ethically once they are equipped with software that forces them to obey the laws of war.

Of course, a huge number of people, including very famous ones, have not shared this opinion for years. In April 2013, an international campaign was launched under the slogan "Stop Killer Robots" (2). Within its framework, non-governmental organizations demand a complete ban on the use of autonomous weapons. Experts from many countries first sat down to discuss this topic at the UN Conference on Disarmament in Geneva in May 2014. A report published a few months later by Human Rights Watch and scientists from Harvard University said that fully autonomous weapons would be too dangerous: they would choose their own targets and kill people, while it remains unclear who should be held accountable.

2. Demonstration as part of the "Stop Killer Robots" campaign

What a swarm of small drones can do

Disputes around killer robots (lethal autonomous weapon systems) have been going on for years and show no sign of fading. Recent months have brought new attempts to stop military robots and a wave of reports about new projects of this type, some of which are even being tested in real combat conditions.

In November 2017, a viral video known as "Slaughterbots" showed deadly swarms of mini-drones in terrifying action. Viewers saw that mass killing no longer requires heavy war machines, tanks or missiles fired by Predators. Stuart Russell, professor of artificial intelligence at Berkeley and the film's creative lead, warned that the technologies shown already exist and merely need to be integrated and miniaturized.

Last spring, more than fifty professors from the world's leading universities signed an appeal to the Korea Advanced Institute of Science and Technology (KAIST) and its partner Hanwha Systems. They announced that they would not cooperate with the university or host KAIST guests. The reason was the work on "autonomous weapons" reportedly carried out by both institutions. KAIST denied the media reports.

Shortly thereafter, in the US, more than 3,000 Google employees protested against the company's work for the military. They were concerned that Google was partnering with a government project codenamed Maven, which aims to use AI to recognize objects and faces in military drone footage. The company's management said that the goal of Maven was to save lives and spare people tedious work, not aggression. The protesters were not convinced.

The next stage of the battle was a declaration by artificial intelligence experts, including some working on the Google project, and Elon Musk, in which they pledge not to develop autonomous killer robots. They also call on governments to step up efforts to regulate and limit such weapons.

The statement says, in part, that "the decision to take a human life should never be taken by a machine." Although the armies of the world are equipped with many automatic devices, sometimes with a high degree of autonomy, many experts fear that in the future this technology may become completely autonomous, allowing killing without any involvement of a human operator and commander.

Experts also warn that autonomous killing machines could be even more dangerous than "nuclear, chemical and biological weapons" because they can easily spiral out of control. In total, by July last year, a letter under the auspices of the Future of Life Institute (FLI) had been signed by 170 organizations and 2,464 individuals. In the early months of 2019, a group of FLI-affiliated medical scientists called for a new letter banning the development of artificial intelligence (AI) controlled weapons.

Last year's August meeting of the UN in Geneva on the possible legal regulation of military "killer robots" ended in a success for... the machines. A group of countries, including the United States, Russia and Israel, blocked further work on introducing an international ban on these weapons under the Convention on Certain Conventional Weapons (CCW). It is no coincidence that these countries are known for their work on advanced autonomous and robotic weapon systems.

Russia focuses on combat robots

President Vladimir Putin is often quoted as saying about military AI systems and combat robots:

"Whoever becomes the leader in this sphere will become the ruler of the world."

Russia talks openly about the development of autonomous weapons. The chief of the General Staff of its armed forces, General Valery Gerasimov, recently told the military news agency Interfax-AVN that the use of robots will be one of the main features of future wars. He added that Russia is trying to fully automate the battlefield. Similar comments have been made by Deputy Prime Minister Dmitry Rogozin and Defense Minister Sergei Shoigu. Viktor Bondarev, chairman of the Federation Council Committee on Defense and Security, said that Russia is striving to develop swarm technologies that would allow drone networks to function as a single entity.
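The idea of drone networks acting "as a single entity" can be illustrated with a classic flocking rule. The sketch below is purely illustrative (the update rule and all numbers are invented, not any real military system): each drone steers a little toward the average position of the group, so coordinated behavior emerges without any central controller.

```python
# Minimal "boids"-style cohesion rule: each agent moves a fraction of
# the way toward the swarm's centroid on every step. No agent is in
# charge, yet the group converges and behaves as one unit.

def step(positions, cohesion=0.1):
    """Return new positions after one cohesion update."""
    n = len(positions)
    cx = sum(x for x, _ in positions) / n  # centroid x
    cy = sum(y for _, y in positions) / n  # centroid y
    return [(x + cohesion * (cx - x), y + cohesion * (cy - y))
            for x, y in positions]

swarm = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
for _ in range(50):
    swarm = step(swarm)
# After many steps all drones cluster around the common centroid.
```

Real swarm research adds separation and alignment rules on top of cohesion, but the principle is the same: global behavior emerges from simple local rules.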

This is not surprising if we remember that the first teletanks were developed in the Soviet Union in the 1930s and were used at the start of World War II. Today Russia is also creating robotic tanks that are becoming more and more autonomous.

Putin's state recently sent its Uran-9 unmanned combat vehicle (3) to Syria. There the device lost contact with ground control points, had problems with its suspension system, and its weapons did not function reliably or hit moving targets. It doesn't sound very serious, but many consider the Syrian deployment a good combat test that will allow the Russians to improve the machine.

Roscosmos has approved a preliminary plan to send robots to the International Space Station by August this year, starting with Fedor (4), which will travel aboard an unmanned Soyuz not as cargo but as a crew member. As in the movie RoboCop, Fedor wields a weapon and has demonstrated deadly marksmanship during shooting exercises.

The question is, why would a robot in space be armed? There are suspicions that the matter goes beyond ground applications. Meanwhile, on Earth, the Russian weapons manufacturer Kalashnikov showed a visualization of the robot Igorek which, although it caused a lot of laughter, signals that the company is seriously working on autonomous combat vehicles. In July 2018, Kalashnikov announced that it was building a weapon that will make its own "shoot or don't shoot" decisions.

To this information should be added reports that the Russian gunsmith Degtyarev has developed a small autonomous tank, Nerehta, which can move silently toward its target on its own and then explode with powerful force to destroy other objects or entire buildings. The T-14 Armata tank, the pride of the Russian armed forces, was also designed for possible remote control and unmanned driving. Sputnik claims that Russian military engineers are working to make the T-14 a fully autonomous armored vehicle.

Objection Directive

The US military itself has imposed a fairly clear limit on the level of autonomy of its weapons. In 2012, the US Department of Defense issued Directive 3000.09, which states that humans must have the right to object to the actions of armed robots (although there may be some exceptions). This directive remains in effect. The current Pentagon policy is that the decisive factor in the use of weapons should always be a person, and that such judgment must conform to the laws of war.
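The "human in the loop" principle described above boils down to a simple logical rule: machine confidence alone never authorizes force. The following is a purely hypothetical sketch of that gate, not any real system; all names and the threshold are invented.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    target_id: str
    threat_score: float      # 0.0-1.0, from the automated sensor/AI pipeline
    operator_approved: bool  # explicit human decision

def may_engage(e: Engagement, threshold: float = 0.9) -> bool:
    """Force may be used only if BOTH the automated assessment
    and an explicit human authorization are present."""
    return e.threat_score >= threshold and e.operator_approved

# High machine confidence alone is never sufficient:
assert not may_engage(Engagement("T-01", 0.99, operator_approved=False))
# Human approval cannot override a low automated assessment either:
assert not may_engage(Engagement("T-02", 0.40, operator_approved=True))
# Only both conditions together authorize engagement:
assert may_engage(Engagement("T-03", 0.95, operator_approved=True))
```

The debate over "full autonomy" is, in these terms, a debate over whether the `operator_approved` condition may ever be dropped from that conjunction.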

Although Americans have been using Predators, Reapers and many other flying supermachines for decades, these were not and are not autonomous models. They are controlled by operators remotely, sometimes from a distance of several thousand kilometers. A heated discussion about the autonomy of machines of this type began with the premiere of the prototype X-47B drone (5), which not only flew independently but could also take off from an aircraft carrier, land on it and refuel in the air. It was also meant to be able to shoot or drop bombs without human intervention. However, the project is still under testing and review.

5. Tests of the unmanned X-47B on an American aircraft carrier

In 2003, the Department of Defense began experimenting with SWORDS, a small tank-like robot equipped with a machine gun. In 2007 it was sent to Iraq; however, the program ended after the robot began behaving unpredictably, moving its rifle chaotically. As a result, the US military abandoned research on armed ground robots for many years.

At the same time, the US Army has increased its spending on robotics from $20 million in 2014 to $156 million in 2018. In 2019, this budget jumped to $327 million, a more than sixteen-fold increase in just a few years. Experts say that as early as 2025, the US military may have more robot soldiers on the battlefield than humans.

Recently, a lot of controversy has been caused by the US Army's announced ATLAS (Advanced Targeting and Lethality Automated System) project, an automated targeting system for combat vehicles. In the media, this was regarded as a violation of the aforementioned Directive 3000.09. However, the US military denies this and assures that excluding a human from the decision-making cycle is out of the question.

AI recognizes sharks and civilians

However, defenders of autonomous weapons have new arguments. Prof. Ronald Arkin, a roboticist at the Georgia Institute of Technology, states in his publications that in modern warfare intelligent weapons are essential to avoid civilian casualties, since machine learning techniques can effectively help distinguish between combatants and civilians, and between important and unimportant targets.

An example of such AI skills is the patrolling of Australian beaches by Little Ripper drones equipped with the SharkSpotter system, developed by the University of Technology Sydney. The system automatically scans the water for sharks and alerts the operator when it sees something unsafe (6). It can identify people, dolphins, boats, surfboards and other objects in the water and distinguish them from sharks, detecting and identifying about sixteen different species with high accuracy.

6. Recognized sharks in the SharkSpotter system

These advanced machine learning methods achieve more than 90% accuracy in automatic aerial reconnaissance. For comparison, a human operator in a similar situation accurately recognizes 20-30% of objects in aerial photographs. In addition, identification is still verified by a human before an alarm is raised.
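The workflow described above, automatic detection followed by human verification before any alarm, can be sketched as follows. This is purely illustrative: the class labels, confidence values and threshold are invented, not the actual SharkSpotter API.

```python
# Hypothetical triage step of a SharkSpotter-style pipeline: a detector
# labels objects in an aerial frame with a confidence score, and only
# high-confidence shark detections are queued for human review before
# an alarm can be raised. Everything else is logged and ignored.

def triage(detections, threshold=0.8):
    """Split (label, confidence) pairs into (needs_human_review, ignored)."""
    review, ignore = [], []
    for label, confidence in detections:
        if label == "shark" and confidence >= threshold:
            review.append((label, confidence))
        else:
            ignore.append((label, confidence))
    return review, ignore

frame = [("dolphin", 0.93), ("shark", 0.88), ("surfboard", 0.75), ("shark", 0.55)]
review, ignore = triage(frame)
# Only the high-confidence shark detection reaches the operator; the
# low-confidence one is dropped rather than triggering a false alarm.
```

Note that the human operator remains the last stage: the code only decides what is worth showing to a person, never whether to act.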

On the battlefield, an operator looking at a screen image can hardly determine whether the people on the ground are fighters with AK-47s in their hands or, for example, farmers with poles. Arkin notes that people tend to "see what they want to see," especially in stressful situations. This effect contributed to the accidental downing of an Iranian airliner by the USS Vincennes in 1988. In his opinion, AI-controlled weapons would of course be better than the current "smart bombs", which have no real awareness. Last August, a Saudi laser-guided missile hit a bus full of schoolchildren in Yemen, killing forty children.

“If a school bus is properly labelled, identifying it in an autonomous system can be relatively easy,” argues Arkin in Popular Mechanics.

However, these arguments do not seem to convince the campaigners against autonomous killers. Besides the direct threat of killer robots, another important circumstance must be taken into account: even a "good" and "attentive" system can be hacked and taken over by malicious actors. Then all arguments in defense of military machines lose their force.
