Who would have thought we’d be seeing AI (Artificial Intelligence) robots on chat shows in 2018! We also saw Sophia, a robot created by Hanson Robotics, being granted Saudi Arabian citizenship. Sophia is a realistic humanoid robot capable of displaying humanlike expressions and interacting with people. It is designed for research, education, and entertainment, and helps promote public discussion about AI ethics and the future of robotics.
AI is transforming our society and affecting the ways in which we do business, interact socially, and conduct war. China has publicly committed to becoming the global leader in AI by 2030 and is spending billions of dollars to gain an advantage. Russia is likewise investing heavily in AI applications and testing those systems in live combat scenarios. Russia’s new T-14 Armata battle tank, part of its Universal Combat Platform, is said to have autonomous capabilities. China is testing autonomous tanks, aircraft, reconnaissance robots, and supply convoys.
Globally, the public sector, private industry, academia, and civil society are engaging in ongoing debates over the promise, peril, ethics, and appropriate uses of AI. Amid the global AI arms race, prominent researchers are protesting the use of AI in the development of weapons. In August 2017, a group of them wrote an open letter to the United Nations urging it to ban the use of autonomous weapons. “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale that is greater than ever, and at timescales faster than humans can comprehend,” wrote the letter’s 116 signatories from 26 countries, including Elon Musk and Mustafa Suleyman.
In the current conflict between Azerbaijan and Armenia, the extensive use of AI-powered drones by Azerbaijan has psychologically scarred the Armenian forces. “Advanced drone capabilities have enabled Azerbaijan to adopt a lower-risk strategy of attrition, relying on precision strikes to destroy high-value Armenian military assets (such as air defense missile systems and armored vehicles),” said Matthew Bryza, a Senior Fellow at the Atlantic Council, former U.S. Ambassador to Azerbaijan, and a former U.S. mediator of the Nagorno-Karabakh conflict. In recent years, Azerbaijan has acquired Israeli-built Harop loitering munitions, also known as ‘suicide’ or ‘kamikaze’ drones, which are powered by AI and designed primarily to destroy enemy radars. Azerbaijan also acquired Turkish Bayraktar TB2 drones this year, which carry the precision-guided MAM-L (Smart Micro Munition).
Turkey’s drone strikes in Syria’s Idlib earlier this year were “so innovative and effective” that the Royal United Services Institute in London went so far as to state that they called into question the utility of the main battle tank. In the future, main battle tanks will require advanced electronic warfare and short-range air defense systems to defend themselves against such attacks.
India has also embarked on infusing AI into its defense services. “The world is moving towards an artificial intelligence-driven ecosystem,” Dr. Ajay Kumar, secretary at the defense ministry, said in a statement in 2018. “India is also taking necessary steps to prepare our defense forces for the war of the future,” he added.
The push for AI-enhanced defense platforms is a top priority for Prime Minister Narendra Modi, who said at the Defence Expo 2018, held in Chennai, that AI and robots would be “the most important determinants” of the readiness of future militaries. “India, with its leadership in [the] information technology domain, [will] strive to use this technology to its advantage,” he said.
Speaking at the RAISE 2020 conference held in the first week of October this year, Prime Minister Narendra Modi also stressed the responsible use of artificial intelligence and the need to protect the world against the weaponization of AI by non-state actors.
A chief concern is the possibility that an AI weapon system might not perform as intended, with potentially catastrophic results. Machine learning incorporated into a weapon system might learn to carry out unintended attacks on targets the military had not approved, escalating a conflict.
In this context, the U.S. Defense Innovation Board unveiled its principles for the ethical use of AI by the Defense Department on Oct. 31, 2019, which call for military AI systems to be responsible, equitable, reliable, traceable, and governable. The Board pointed to the circuit breakers established by the Securities and Exchange Commission to halt trading on exchanges as a model. It suggested analogues in the military context, including “limitations on the types or amounts of force particular systems are authorized to use, the decoupling of various AI cyber systems from one another, or layered authorizations for various operations.”
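To make the circuit-breaker analogy concrete, here is a minimal sketch of how such a guard might look in software. It is purely illustrative: every class, name, and threshold below is hypothetical, and nothing here reflects an actual Defense Department system or the Board’s own designs.

```python
# Purely illustrative sketch of the "circuit breaker" analogy.
# All names and thresholds are hypothetical; no real weapon-system
# API or Defense Department implementation is implied.

from dataclasses import dataclass


@dataclass
class ProposedAction:
    target_id: str     # hypothetical identifier for the proposed target
    force_level: int   # hypothetical 1-10 scale of force to be applied
    approvals: set     # human roles that have signed off on the action


# Hypothetical policy limits, in the spirit of SEC trading halts
MAX_AUTHORIZED_FORCE = 5
REQUIRED_APPROVALS = {"operator", "commander"}  # layered authorization


def circuit_breaker(action: ProposedAction) -> bool:
    """Allow an action only if it stays within the authorized force
    limit and has cleared every layer of human authorization."""
    if action.force_level > MAX_AUTHORIZED_FORCE:
        return False  # trip: requested force exceeds what is authorized
    if not REQUIRED_APPROVALS.issubset(action.approvals):
        return False  # trip: a required human sign-off is missing
    return True


# An over-force request is halted even though it has both approvals
risky = ProposedAction("T-042", force_level=8,
                       approvals={"operator", "commander"})
assert circuit_breaker(risky) is False
```

The design point echoes the Board’s suggestion of “decoupling”: the guard lives outside the system it constrains, so a misbehaving learning component cannot lift its own limits.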
AI ethics and principles should enrich discussions about how to advance the still-nascent field of AI in safe and responsible ways.
The views and opinions expressed in this article are those of the author.
Source: thegeopolitics.com