Elsa B. Kania
Apr 12, 2018 · 22 tweets
ICYMI, China has posted a working paper for the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) #CCWUN unog.ch/80256EDD006B89…
Here are a few initial reflections. First, it's encouraging that China is actively participating in this process, and I hope China (and Russia) will remain engaged on the legal and ethical issues underlying their development of military applications of AI.
At the same time, it's important to note that the Chinese military and defense industry are actively engaged in research and development of - and experimentation with - a range of AI-enabled capabilities, including swarm intelligence, as I've documented. cnas.org/publications/r…
This is hardly surprising - just about every major military is pursuing different applications of AI - but great power competition in this domain does pose a range of risks to military and strategic stability. lawfareblog.com/great-power-co…
At first glance, China's position paper itself seems to seek to preserve a degree of ambiguity and optionality, while expressing outright skepticism about any "uniform standard" on these issues. That doesn't surprise me at all.
The paper notes that LAWS "are closely related to existing weapons and new weapon systems that are being developed" and "still lack a clear and agreed definition," but "should be understood as fully autonomous lethal weapon systems."
It's worth noting that the PLA's official dictionary included a definition for artificial intelligence weapon (人工智能武器) as early as *2011*, though presumably PLA thinking has continued to evolve as the technology has advanced.
"a weapon that utilizes AI to automatically (自动) pursue, distinguish, & destroy enemy targets; often composed of information collection & management systems, knowledge base systems, assistance to decision systems, mission implementation systems, etc.,” e.g., military robotics
There may be a major divide between China's diplomatic engagement on these issues and the PLA's approach. The PLA doesn't have a legal culture comparable to the U.S. military's, e.g., due to its lack of experience with applying the laws of armed conflict or rules of engagement.
Traditionally, the PLA has also approached issues of international law in terms of legal warfare (法律战), seeking to exploit rather than be constrained by legal frameworks.
See, for instance, Dean Cheng's report on legal warfare, which argues that China approaches lawfare "as an offensive weapon capable of hamstringing opponents and seizing the political initiative": heritage.org/asia/report/wi…
I've also written on the PLA's approach to the "three warfares" based on authoritative publications that focus on concepts such as seizing “legal principle superiority” (法理优势) or delegitimizing an adversary with "restriction through law" (法律制约). jamestown.org/program/the-pl…
Back to the paper, which has a very specific definition of LAWS, including the characteristics of "impossibility for termination" and "indiscriminate effect, meaning that the device will execute the task of killing and maiming regardless of conditions, scenarios and targets."
That allows for a lot of leeway, it seems. Would an intelligent/autonomous weapons system that can be terminated and is not indiscriminate be seen as not at all problematic from this perspective?
The paper highlights Human-Machine Interaction as "conducive to the prevention of indiscriminate killing and maiming...caused by breakaway from human control." The PLA will likely care a lot about security and controllability due to core aspects of its command culture.
The paper does articulate concern for the capability of LAWS in "effectively distinguishing between soldiers and civilians," calling on "all countries to exercise precaution, and to refrain, in particular, from any indiscriminate use against civilians."
Again, that statement may be consistent with the arguments of those seeking to "ban killer robots," but it doesn't articulate any commitment to caution in developing capabilities that can exercise that sort of distinction.
At the same time, China's position paper emphasizes the importance of AI to development and argues, "there should not be any pre-set premises or prejudged outcome which may impede the development of AI technology." That sounds reasonable, given the nascency of these technologies.
It proceeds to highlight that national reviews on 'new weapons' have shown "positive significance on preventing the misuse of relevant technologies and on reducing harm to civilians." That AI may allow for greater distinction and proportionality is also a very valid point.
For comparison, see China's December 2016 position paper for that UN GGE, which is much less detailed for the most part, with one important exception: unog.ch/80256EDD006B89…
The December 2016 paper declared: "China supports the development of a legally binding protocol on issues related to the use of LAWS, similar to the Protocol on Blinding Laser Weapons, to fill the legal gap" on LAWS.
This April 2018 position paper doesn't call for such a "legally binding protocol" but merely calls for "full consideration of the applicability of general legal norms to LAWS." So there has been a notable shift.