U.S. Department of Defense Adopts Ethical Principles for Artificial Intelligence

In a statement released yesterday, the U.S. Department of Defense officially adopted a series of ethical principles governing military applications of Artificial Intelligence. The adoption of these principles follows recommendations provided by the Defense Innovation Board, which were the product of 15 months of consultation with leading AI experts in commercial industry, government, academia, and the American public.

The recent adoption of these AI ethical principles closely aligns with the Department of Defense’s AI strategy objective directing the U.S. military to consider AI ethics and the lawful use of AI systems. These ethical principles address new challenges faced in the digital era and build upon the U.S. military’s existing ethical framework, which is grounded in documents such as the Constitution and in existing international norms and treaties.

The Department of Defense recognized the need to “accelerate the adoption of AI and lead in its national security applications to maintain [the United States’] strategic position, prevail on future battlefields, and safeguard the rules-based international order.” Even so, the Department considered the adoption of AI ethical principles necessary. According to the Department of Defense, “AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behavior.”

These ethical principles will apply to both combat and non-combat functions and will assist the U.S. military in upholding legal, ethical, and policy commitments in the field of AI.

The Department of Defense’s AI ethical principles encompass five main areas:

  1. Responsible – Department of Defense personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities. 
  2. Equitable – The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable – The Department’s capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable – The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  5. Governable – The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
