In April 2021, the European Commission (EC) proposed a regulation for AI systems, outlining a risk-based approach to governing the development and use of AI systems. The proposed regulation categorizes AI systems, based on their associated risks to the health, safety, and fundamental rights of people, into one of four risk categories: “unacceptable risk”, “high risk”, “transparency risk” or “minimal/no risk”. In addition, the draft regulation lists the domains of AI use cases that would be considered high-risk and thus subject to the requirements set out in the regulation.

While the proposed regulation is widely considered a key step in the right direction, several open questions remain about how to apply it and about what could be considered high-risk under it. This has also been a particular focus of the industry feedback received by the European Commission. Clarifying these questions is particularly important now, as the regulation approaches final approval and more organizations look to the Act as a guiding standard while operationalizing their responsible AI strategies.

One of the best mechanisms to support the rollout of the proposed regulation, and of other AI regulations, is an easily accessible repository of concrete, real-world use cases that can inform and improve both policy and technological outcomes. The IEEE’s AI Impact Use Cases Industry Connections Initiative seeks to address this need by creating a searchable database of examples of high-risk AI systems as defined by the proposed European regulation.

To enable this, our cross-domain specialists have developed a framework for use case submission, and we are conducting an open call to begin building the repository. We welcome submissions from large and small corporations, academia, industry, and government agencies interested in this work. Submissions should contain an example of an AI system as defined in the AI Act: “AI system means a system that is designed to operate with elements of autonomy and that, based on machine and/or human provided data and inputs, infers how to achieve a given set of human-defined objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts”.

Furthermore, we are specifically seeking examples of AI systems that may be classified under one of the high-risk categories of the AI Act:

  • AI-based products covered by the European Union harmonization legislation (AI Act Annex II) that are required to undergo a third-party conformity assessment, for example, AI-based medical devices;
  • AI systems intended to be used for biometric identification;
  • AI systems intended to be used for management and operation of critical infrastructure;
  • AI systems intended to be used for education and vocational training;
  • AI systems intended to be used for employment, workers management and access to self-employment;
  • AI systems intended to be used to define access to and enjoyment of essential private services and public services and benefits;
  • AI systems intended to be used for law enforcement;
  • AI systems intended to be used for migration, asylum and border control management;
  • AI systems intended to be used for administration of justice and democratic processes;
  • General-purpose AI systems that may be used as, or as components of, AI systems in any of the above potentially high-risk application areas.

These categories are intended to capture systems that are likely to significantly impact an individual’s health, safety, or fundamental rights. The categories, and examples of high-risk systems, are detailed in Annexes II-III of the AI Act. However, industry feedback on the Act indicates that there is uncertainty and disagreement about which use cases are captured in this risk tier and whether the definition overreaches. Through this initiative, we seek to shed more light on this question and capture grey-area cases by highlighting real-world examples. We will present our findings on commonalities in these cases as part of the project outputs. As we receive submissions, we will:

  • Vet and assess the associated risk tiers using the EC’s definition and the expertise of IC18-004 ECPAIS members as well as this initiative’s expert volunteers;
  • Curate use cases with standardized descriptions of the systems under consideration;
  • Invite submitters to participate in future research around different approaches to identifying the potential positive and negative impact and mitigating the risks and potential harms.

Building on the repository of AI system use cases, we intend to conduct an open content review period to elicit feedback from external entities and experts. Because this information is contextualized and standardized in form, the output of this work will be a searchable database that developers, deployers, and policymakers can use. Those using the database will be able to compare cases in categories of interest to them, including specific domains, risk categories, and types of AI technology. During 2023, the initiative will host a series of roundtables to share key learnings and facilitate in-depth discussions with industry players across the high-risk application domains; those who submit use cases will have priority access to these events.
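To make the idea of standardized, comparable use-case records concrete, here is a minimal sketch in Python of what such a record and a category-based comparison query might look like. The field names, the `RiskTier` and `HighRiskArea` enumerations, the `UseCase` class, and the `filter_cases` helper are all illustrative assumptions on our part; the initiative has not published a schema or an API, and the actual database may be structured quite differently.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical enumeration mirroring the four risk tiers in the proposed regulation.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    TRANSPARENCY = "transparency risk"
    MINIMAL = "minimal/no risk"

# Hypothetical enumeration of the Annex III high-risk application areas listed above.
class HighRiskArea(Enum):
    BIOMETRIC_IDENTIFICATION = "biometric identification"
    CRITICAL_INFRASTRUCTURE = "critical infrastructure"
    EDUCATION = "education and vocational training"
    EMPLOYMENT = "employment, workers management, self-employment"
    ESSENTIAL_SERVICES = "essential private and public services and benefits"
    LAW_ENFORCEMENT = "law enforcement"
    MIGRATION_BORDER = "migration, asylum and border control"
    JUSTICE_DEMOCRACY = "administration of justice and democratic processes"

@dataclass
class UseCase:
    """Illustrative shape of a standardized use-case record."""
    title: str
    domain: HighRiskArea
    risk_tier: RiskTier
    ai_techniques: list[str] = field(default_factory=list)  # e.g. "machine learning"
    description: str = ""

def filter_cases(cases: list[UseCase], *,
                 domain: HighRiskArea | None = None,
                 risk_tier: RiskTier | None = None) -> list[UseCase]:
    """Return the cases matching the requested domain and/or risk tier."""
    return [c for c in cases
            if (domain is None or c.domain == domain)
            and (risk_tier is None or c.risk_tier == risk_tier)]

# Example: compare high-risk cases in the employment domain.
cases = [
    UseCase("CV-screening tool", HighRiskArea.EMPLOYMENT, RiskTier.HIGH,
            ["machine learning"], "Ranks job applicants from resume text."),
    UseCase("Exam-proctoring system", HighRiskArea.EDUCATION, RiskTier.HIGH,
            ["computer vision"], "Flags suspected cheating during online exams."),
]
for case in filter_cases(cases, domain=HighRiskArea.EMPLOYMENT, risk_tier=RiskTier.HIGH):
    print(case.title, "->", case.domain.value)
```

Whatever form the final schema takes, the design goal it illustrates is the same: once every submission is described with the same fields, cases become directly comparable across domains, risk tiers, and AI techniques.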

We encourage all AI providers and users potentially impacted by the regulation to participate by submitting real-world examples of high-risk AI systems to the open call via this form.

Your input will be critical to maximizing the coverage of the database, understanding how the proposed regulation can best be applied, and anticipating the challenges we may face as we move forward.

Thank you for your time and participation!