The weaponization of robots is a concern

Mondo Technology Updated on 2024-01-30

From armed robot dogs to unmanned submarines to drones hunting for targets, the growing use of artificial intelligence in modern warfare has raised a question that urgently needs answering.

At 3:30 p.m. on November 27, 2020, a motorcade turned onto Imam Khomeini Boulevard near Tehran. In one of the cars was an important figure: Mohsen Fakhrizadeh, head of Iran's secret nuclear weapons program. He and his wife were driving to their country home, escorted by a security vehicle, when they were attacked just short of their destination.

After several gunshots, the black Nissan Fakhrizadeh was driving was hit and could go no further. Shots rang out again, and a bullet struck the scientist in the shoulder, forcing him out of the vehicle to call for help. Fully exposed, he then took the fatal hit, while his wife, sitting in the passenger seat, was unharmed.

Then something strange happened: a pickup truck parked by the roadside suddenly exploded. Examining the wreckage afterwards, Iranian security forces found fragments of a machine gun, a remotely controlled weapon fitted with multiple cameras. Could Fakhrizadeh have been shot by a robot?

A follow-up report in the New York Times revealed that the machine gun was not a fully autonomous weapon. The shooter was reportedly about 1,000 kilometers away, judging when to pull the trigger from the camera feed relayed back by the pickup truck, while artificial intelligence software compensated for the signal delay.
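The Times report gives no implementation details, but the delay compensation it describes is, at its core, a lead calculation: aim where the target will be once the latency has elapsed, not where it appears on screen. A minimal illustrative sketch in Python, with every number assumed rather than taken from the report:

```python
def lead_aim_point(target_pos, target_vel, latency_s, flight_time_s):
    """Aim where the target will be, not where it appears to be.

    The video feed shows the target as it was `latency_s` seconds ago,
    and the round needs `flight_time_s` more seconds to arrive, so the
    aim point is projected forward by the sum of the two. Assumes the
    target moves at constant velocity -- a toy model, nothing more.
    """
    lead_time = latency_s + flight_time_s
    return tuple(p + v * lead_time for p, v in zip(target_pos, target_vel))

# Hypothetical numbers: a car moving at 5 m/s along x, with 1.6 s of
# transmission latency and 0.1 s of bullet flight time.
print(lead_aim_point((0.0, 0.0), (5.0, 0.0), latency_s=1.6, flight_time_s=0.1))
# -> (8.5, 0.0): aim 8.5 meters ahead of where the car appears.
```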

It is the stuff of nightmares, and video clips from the Russia-Ukraine conflict circulating online have deepened people's fears. On the battlefield in Ukraine, drones are everywhere: Turkish-made Bayraktar ("flag-bearer") drones, naval drones used to attack ships, quadcopters that drop grenades. If these clips are genuine, the reality is grimmer still.

In one video posted online, a drone carries a robot dog to its destination; the moment it touches the ground, the dog springs into action, a machine gun on its back. In another clip, a Russian modifies a commercial robot dog to fire live rounds.

In response to these alarming videos, Boston Dynamics and five other robotics companies issued a joint statement in October 2022: "We believe that adding weapons to remotely or autonomously operated robots, and remotely controlling them into previously inaccessible places where people work and live, creates new risks of harm and serious ethical problems. Weaponizing these newly capable robots will also erode public trust in a technology that instinctively benefits society. A large number of individuals are attempting to turn commercial robots into weapons; we strongly oppose this and will do our best to prevent it. We hope the relevant authorities can help us promote the safe use of mobile robots."

Hyundai Rotem, a subsidiary of Hyundai Motor Group, appears to have no such qualms. In April 2023, Hyundai Rotem announced a partnership with the South Korean company Rainbow Robotics to develop a multi-legged defense robot. The promotional video shows that the product is, in effect, a robot dog with a gun.

Defense analyst and military historian Tim Ripley said Boston Dynamics' statement carries little practical weight. "Even if a robot carries no weapons, it will still be used in war," he said. If a reconnaissance drone finds a target and an operator then decides to fire a shell that kills people, that drone is not much different from one that fires the shell itself; both are links in the kill chain. He also noted that reconnaissance drones have played a key role in the Russia-Ukraine conflict, both tracking enemy movements and spotting targets for bombardment.

Computer-controlled military hardware typically consists of two parts: the hardware itself and the control software. Apart from drones, robots remain rare on the battlefield, but smart software is increasingly common. Mike Martin, a senior war studies researcher at King's College London, said: "The UK's systems need a range of built-in autonomous software. With it, soldiers can make decisions faster. For example, when an Apache helicopter detects thermal radiation signatures on the ground, its onboard software can quickly identify potential targets and even advise the pilot on which to prioritize. Once the pilot has that information, he or she makes the decision."
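The article stops at the description, but the pattern it sketches — software scores candidate targets, a human makes the final call — is easy to illustrate. A toy Python sketch, in which every name, field and scoring rule is invented for illustration rather than drawn from any real system:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """A hypothetical thermal contact reported by onboard sensors."""
    label: str
    heat: float          # relative signature strength, 0..1 (assumed scale)
    distance_km: float

def rank_contacts(contacts):
    """Suggest a priority order for the pilot: hotter and closer first.

    The scoring rule is invented. Crucially, the function only orders
    the list -- acting on it remains the human operator's decision.
    """
    return sorted(contacts, key=lambda c: c.heat / max(c.distance_km, 0.1),
                  reverse=True)

sensed = [Contact("vehicle A", 0.9, 4.0),
          Contact("vehicle B", 0.7, 1.5),
          Contact("unknown heat source", 0.4, 0.8)]
for i, c in enumerate(rank_contacts(sensed), start=1):
    print(f"{i}. {c.label}  (heat {c.heat}, {c.distance_km} km)")
```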

The military clearly wants more systems of this kind, especially ones that can be fitted to robots. The robot dog (known in the industry as a quadruped robot) developed by the American company Ghost Robotics is billed as a reconnaissance tool that can help patrols scout potentially dangerous areas. It can also become a killing machine.

At the annual meeting of the Association of the United States Army in October 2021, Ghost Robotics exhibited a killer robot dog with a gun on its back. The gun was made by the American company Sword Defense Systems under the designation Special Purpose Unmanned Rifle. Sword's official website declares that the Special Purpose Unmanned Rifle is the future of unmanned weapon systems, and that future has arrived.

The Royal Navy has tested an unmanned underwater vehicle called Manta. The nine-meter submersible carries sonar, cameras, and communications and jamming equipment for underwater operations. And in November 2021, U.S. and British forces held an exercise in the Mojave Desert using drones, robotic transport vehicles and artificial intelligence to help make the British military more lethal on the battlefield.

For now, however, even the most sophisticated autonomous systems still require human involvement in decision-making, and that involvement takes two forms. One is "human-in-the-loop": the computer presents a potential target to a human operator, who decides how to act. The other is "human-on-the-loop": the computer produces a kill sequence for targets and, although a human can take control at any time, most decisions are made by the machine. In the most extreme case, the system operates fully autonomously, selecting and eliminating targets without any human intervention.
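The distinction is easiest to see as control flow. A schematic Python sketch — the names and the shape of the operator callbacks are invented for illustration, not taken from any real system:

```python
from enum import Enum

class Autonomy(Enum):
    HUMAN_IN_THE_LOOP = 1   # a human must approve every engagement
    HUMAN_ON_THE_LOOP = 2   # the machine proceeds unless a human vetoes
    FULLY_AUTONOMOUS = 3    # no human involvement at all

def may_engage(target, mode, approves, vetoes):
    """Illustrative decision flow for the three levels of autonomy.

    `approves` and `vetoes` stand in for operator input: callables
    that take a target and return True or False.
    """
    if mode is Autonomy.HUMAN_IN_THE_LOOP:
        return approves(target)        # nothing happens without a yes
    if mode is Autonomy.HUMAN_ON_THE_LOOP:
        return not vetoes(target)      # silence counts as consent
    return True                        # the case everyone hopes to avoid

# Example: on-the-loop, operator does not veto -> the machine engages.
print(may_engage("contact-7", Autonomy.HUMAN_ON_THE_LOOP,
                 approves=lambda t: False, vetoes=lambda t: False))
```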

"I hope we never reach that last stage," Martin said. "If we hand the decision to an autonomous system, then we lose control. What if the system concludes that eliminating its human commander is the best option for the battle at hand? There is no guarantee that won't happen." The film The Terminator depicts just such a nightmare: AI robots wage war and vow to wipe out humanity.

Feras Batarseh, a professor at Virginia Tech, believes human society is still far from fully autonomous systems, but that artificial intelligence has already advanced to a dangerous level. "AI technology isn't smart enough for us to trust it fully," he said. "But neither is it so obviously inept that people instinctively refuse to let it out of their control."

In other words, a soldier who trusts an AI system may be putting himself in greater danger, because current AI cannot cope with situations it has never learned. Researchers call such unexpected cases outliers, and war plainly produces them in abundance: on the battlefield, the unexpected can happen anytime, anywhere. Outliers are all but synonymous with war, and today's artificial intelligence still handles them poorly.
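One common engineering response to this weakness, sketched below with invented labels and an assumed threshold, is to gate a model's output on its own confidence and hand anything unfamiliar back to a human:

```python
def classify_or_defer(class_probs, threshold=0.9):
    """Return the model's label only when the model is confident.

    `class_probs` maps candidate labels to probabilities. Anything
    whose top probability falls below `threshold` (an assumed value)
    is treated as a possible outlier and deferred to a human operator.
    Low confidence is only a crude out-of-distribution signal; real
    systems layer stronger detectors on top of it.
    """
    label, p = max(class_probs.items(), key=lambda kv: kv[1])
    return label if p >= threshold else "DEFER_TO_HUMAN"

print(classify_or_defer({"truck": 0.55, "tractor": 0.45}))  # DEFER_TO_HUMAN
print(classify_or_defer({"truck": 0.97, "tractor": 0.03}))  # truck
```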

Even if that problem is solved, intractable ethical dilemmas remain. When an AI makes a kill decision, how do you tell whether it was the right one? This echoes the trolley problem that has long dogged the development of autonomous driving.

Sorin Matei, a professor at Purdue University, believes the solution may lie in making AI warriors aware of their own vulnerability. That way, they would value their own "lives" and, by extension, the lives of others. Matei even argues that this could make warfare more humane: to make AI trustworthy, you must give it something it fears losing.

But even the most ethical war robot has a flaw: it cannot defend itself against hacking. For every weapon system developed on one side, someone on the other will try to crack it, which makes robot units an obvious prime target for enemy intrusion. If an enemy wiped the ethics from a robot's chips and turned it against its own side, the consequences would be devastating.

Martin believes that if we want to rein in these fearsome weapons, military history offers a reference point. "History has produced many weapons that terrified humanity, such as nuclear, chemical and biological weapons. The arms-control treaties now in force did not come about because any country voluntarily gave up development, but because in the arms race everyone came to see how terrible these weapons were, and so became willing to sit down and talk."

Today, drones, robots and other unmanned systems are appearing on the battlefield more and more often. Until a treaty controlling autonomous weapons arrives, there will be fearful days to live through.
