Artificial intelligence has a dark side — militaries around the world are using it in killing machines

For all its benefits, artificial intelligence (AI) can also be used in negative, destructive ways. It can be embedded into killing machines — drones, intelligent weapons, even humanoid robots — that are unleashed to wreak havoc on armed forces and civilians.

For years, armies have been using artificial intelligence to improve existing vehicles and to sharpen pilots’ reaction time and decision making on the battlefield. For example, AI is used in the Mitsubishi X-2 Shinshin, the Japanese stealth-fighter prototype built in 2016. The onboard artificial intelligence uses an array of sensors all over the aircraft to provide vital information about the status of every component and to determine the severity of any damage.

The Russian military aircraft Sukhoi Su-57 also relies on AI, which constantly analyzes multiple parameters, such as air quality and pressure, to provide information on how to stabilize the aircraft if it goes into a spin, as well as to override any pilot maneuvers the system predicts could cause a crash.

Until recently, however, AI would only amplify human input. A soldier is still the bottleneck in these systems — even something as simple as discharging a weapon takes more time when a person, rather than an automated, AI-powered system, is in charge. Those milliseconds of delay often mean the difference between victory and defeat, and world governments are keenly aware of it. They want an edge over one another, and the lack of international standards or legal instruments governing the use of AI in combat systems certainly encourages such ambitions. The result is a steady increase in “incidents,” both documented and undocumented, in which lethal autonomous weapon systems, or LAWS, have been put to use.

Day of infamy

History may remember March 2021 as the moment the first documented use of one such weapon came to light. A report commissioned by the United Nations claims that a military drone deployed in Libya’s civil war in 2020 was unmanned and autonomous, engaging retreating militants without requiring any input from a human operator.

May 2021 marks the first known use of a drone swarm in combat. In that incident, the Israel Defense Forces unleashed a drone swarm to strike multiple targets linked to the terrorist organization Hamas. When using the phrase “drone swarm,” it is important to note that this doesn’t just mean multiple drones: in a true drone swarm, each drone can make autonomous decisions based on information shared by the other units in its vicinity.
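To make that distinction concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of decentralized decision-making a swarm relies on: each simulated unit picks a waypoint on its own, using only position data shared by its neighbors, rather than waiting on a central controller. Every name and number in it is hypothetical and has nothing to do with any real drone or weapons system.

    # Toy sketch of decentralized task allocation -- illustrative only, not any real system.
    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def choose_waypoint(self_pos, peer_positions, waypoints):
        # Skip any waypoint that a peer (known only via shared position data)
        # is already closer to; among the rest, take the nearest one.
        best = None
        for wp in waypoints:
            if any(dist(p, wp) < dist(self_pos, wp) for p in peer_positions):
                continue  # a neighbor is better placed, so leave that one to it
            if best is None or dist(self_pos, wp) < dist(self_pos, best):
                best = wp
        return best

    # Three units, three waypoints: each decides independently from the same
    # shared picture, and the choices do not collide.
    units = [(0, 0), (10, 0), (5, 8)]
    waypoints = [(1, 1), (9, 1), (5, 9)]
    for i, pos in enumerate(units):
        peers = [p for j, p in enumerate(units) if j != i]
        print(f"unit {i} -> {choose_waypoint(pos, peers, waypoints)}")

The point of the toy is not the math but the structure: no single node issues orders, so removing any one unit leaves the rest able to carry on, which is part of what makes a true swarm harder to counter than a group of individually piloted drones.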

The May 2021 incident was just the beginning. A number of countries have already added drone swarms to their arsenals and are integrating them into military operations. Needless to say, the U.S. is a world leader in swarm technology. It started with the 2016 Perdix swarm demo, in which a trio of F/A-18 Super Hornet fighter jets released more than 100 drones into the air. It continued with the Defense Advanced Research Projects Agency’s (DARPA) X-61A Gremlin drones, which were launched from a “mothership” cargo aircraft, demonstrating the ability to launch and retrieve large numbers of small, cheap and reusable drones.

Recent developments include kamikaze drone swarms, and several swarm-related projects that are part of DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program.

Robots on patrol

But it’s not just the drones. There are intelligent robots, too. Boston Dynamics’ cute robot “Spot” has been used by police in Honolulu to screen homeless people who might be infected with COVID-19.

In Singapore, robots already patrol the streets, relying on facial recognition software to police “undesirable” behavior (not respecting social distancing, smoking or improperly parking bicycles).

While some may find this normal and unproblematic, it’s easy to envision scenarios in which robots enforce not only socially acceptable behavior, but whatever rules they’ve been programmed with. Unleashing them on civilians creates a dystopian society in which AI-based enforcers ensure obedience without any human involvement.

Also, remember that any of the technologies mentioned in this column, especially drone swarms, can, and most likely will, be used as crowd-control mechanisms, employing lethal and non-lethal countermeasures against anyone not complying with rules set down by increasingly authoritarian governments.

Unlike human police officers and military personnel, AI has no feelings. No ethics. No sense of the value of human life. It processes the data; it reacts according to the sensory input and its algorithm.

AI is not perfect. We’ve seen time and again that AI makes mistakes. In the future, these mistakes will likely cost lives as AI makes bad calls, targets “friendlies,” fires on civilians and worse.

While engaging with the enemy, AI is likely to commit acts that would be deemed atrocities and breaches of international treaties, since it lacks the capacity to understand context, exercise judgment and adhere to the complex laws and customs of war.

Finally, there’s scalability-as-a-threat. Many of these machines (especially drones) are incredibly cheap to produce and easy to proliferate at large scale. That alone makes them disproportionately more powerful than many weapons currently available in conventional military arsenals. It also turns them into an unpredictable element on the battlefield, one that can escalate international conflicts in a way that could push countries on the receiving end to consider last-resort tactics, including the reintroduction of weapons of mass destruction.

Standing up to LAWS

There has been considerable pushback against LAWS — the site StopKillerRobots.org is run by a coalition of non-governmental organizations seeking to pre-emptively ban lethal autonomous weapons. The Future of Life Institute (FLI) is another outreach organization that, among other hot topics of our age, tackles LAWS, the ethical use of AI and other issues I’ve discussed in this article.

Finally, even UN Secretary-General António Guterres says that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”

Make no mistake, though: This pushback has done little to stop what’s called the “arms race in autonomy.” Although the UN has voiced its disdain for LAWS multiple times, and the U.S., Chinese, Russian and German militaries (among others) have stated that creating LAWS isn’t their objective, the truth is that the technology is already here and likely to proliferate. These kinds of weapons provide an edge that the armed forces of the world simply cannot ignore.

The same goes for using LAWS in a civilian setting. At a time when governments vie for more complete control, any device or weapon that facilitates that kind of control is more than welcome — and LAWS definitely fit the bill. As demonstrated in Honolulu and Singapore, we’re already seeing seemingly innocuous applications of AI that set the stage for more elaborate setups.

Robots policing our streets, fighting our wars. That’s one thing I’m not particularly looking forward to.

What about you? What’s your stance on LAWS? What do you think about the future of warfare? How do autonomous weapon systems fit in?

Let me know in the comment section below.

Source: https://www.marketwatch.com/story/artificial-intelligence-has-a-dark-side-militaries-around-the-world-are-using-it-in-killing-machines-11640107204