IEEE 8000 Working Group - Trustworthiness of Artificial Intelligence

Standards Committee: Board of Governors / Corporate Advisory Group (BOG/CAG)

Title: P8000.1 – Standard for a Method to Assess the Trustworthiness of Artificial Intelligence (AI) Systems

Scope: This standard defines a method to assess the trustworthiness of Artificial Intelligence (AI) systems. The method allows grades to be delivered on key principles that characterize confidence in AI systems. The standard defines indicators for each principle, addresses the dependencies between the principles, and covers specific ethical properties of interest such as transparency, accountability, privacy, and fairness.

Purpose: The method is intended to underpin contextual impact assessment and customization. It may also be used for evaluation, conformity assessment, and ethics certification of AI systems.

Abstract: This standard defines a method to assess the trustworthiness of AI systems.

Using the method, seven distinct scores can be assigned, one for each of the following principles, together reflecting the trustworthiness and ethical soundness of an AI system:

  • Accountability
  • Human Agency & Oversight
  • Technical Robustness & Safety
  • Privacy & Data Governance
  • Transparency
  • Diversity, Non-Discrimination & Fairness
  • Societal & Environmental Well-Being
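As a purely illustrative sketch, the seven-principle scoring could be represented as a simple record of per-principle grades. The standard defines neither a data model nor a grading scale; the class name, field names, the 0–4 integer scale, and the `lowest_principle` heuristic below are all assumptions made for this example.

```python
from dataclasses import dataclass, fields

@dataclass
class TrustworthinessProfile:
    """Hypothetical per-principle grade record; a 0-4 integer scale is
    assumed here for illustration only (not defined by the standard)."""
    accountability: int
    human_agency_oversight: int
    technical_robustness_safety: int
    privacy_data_governance: int
    transparency: int
    diversity_fairness: int
    societal_environmental_wellbeing: int

    def __post_init__(self):
        # Validate every grade against the assumed 0-4 scale.
        for f in fields(self):
            grade = getattr(self, f.name)
            if not 0 <= grade <= 4:
                raise ValueError(f"{f.name}: grade {grade} outside assumed 0-4 scale")

    def lowest_principle(self) -> str:
        """Name of the weakest principle. Because the principles are
        interdependent, a low grade on one may cap overall confidence;
        this is an illustrative heuristic, not part of the standard."""
        return min(fields(self), key=lambda f: getattr(self, f.name)).name

profile = TrustworthinessProfile(4, 3, 3, 2, 4, 3, 3)
print(profile.lowest_principle())  # privacy_data_governance
```

A separate profile could be produced at each stage of the supply chain (development, deployment, operation), allowing grades to be compared across stages.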

The method applies to both high-risk and non-high-risk AI systems, and is designed to score AI systems at different stages of the supply chain, from development through deployment and operation.

The method is intended to serve as the foundation of an AI system trust rating service and certification program.