P7021 Addressing Ethical Concerns in AI-Based Climate Projections

Title: Recommended Practice for Implementing Ethical Artificial Intelligence (AI) in Long-Term Climate Projections Towards Sustainability

Scope: This recommended practice addresses embedding ethical Artificial Intelligence (AI) into climate modelling processes. This includes:

1) Fairness & Bias Mitigation: ensuring AI systems do not reinforce inequities or amplify biases in climate data and projections.
2) Transparency & Explainability: promoting interpretable AI models so that scientists, policymakers, and communities can trust and act upon climate predictions.
3) Data Governance & Privacy: establishing protocols for responsible data handling, security, and consent.
4) Accountability & Standards: aligning AI-driven climate applications with international norms, regulations, and scientific standards.
5) Sustainable Practices: ensuring AI usage supports long-term environmental and social sustainability.

The recommended practice aligns with UNESCO’s AI Ethics Framework, while focusing on Disaster Risk Reduction (DRR) and long-term climate predictions. While the document covers ethical and procedural best practices, it does not address technical AI performance standards (e.g., algorithmic efficiency, model accuracy) or specific operational protocols for disaster response (e.g., emergency coordination, infrastructure deployment).

FAQs:

  • What is this standard?

IEEE P7021 is an IEEE recommended practice that provides practical guidance for implementing ethical AI in long-term climate projections and disaster risk reduction. Climate modelling increasingly uses machine learning, hybrid physics–AI methods, climate foundation models, and Digital Twins. These tools introduce governance challenges, including deep uncertainty, long time horizons, uneven or contested data, and high-stakes impacts on communities, ecosystems, and critical infrastructure. P7021 sets out actionable methods for fairness and bias mitigation, transparency and explainability, accountability, and meaningful human oversight. The goal is to ensure AI-augmented projections can be trusted, tested, and used responsibly in decisions. It is intended for agencies, researchers, system architects, policymakers, disaster and humanitarian organisations, First Nations communities and Indigenous knowledge holders, civil society, and assurance bodies. It supports adoption at scale while protecting scientific credibility and public trust. It also reinforces Indigenous data sovereignty and appropriate use of Traditional Ecological Knowledge. By strengthening credible and inclusive climate intelligence, P7021 supports resilience and adaptation priorities and contributes to the UN Sustainable Development Goals through responsible, equitable, and accountable climate AI deployment.

  • Why is it important?

Climate and disaster decisions increasingly rely on modelling outputs that shape policy, infrastructure investment, emergency planning, and community safety, and AI is a double-edged sword: it can create value under proper governance but cause harm without it. It is therefore important to support the responsible adoption of AI in climate projections in line with AI ethics principles. A standard like IEEE P7021 matters because it translates ethical AI into practical steps that protect trust and legitimacy. It helps agencies and researchers produce projections that are transparent and explainable, so they can be tested, challenged, and improved. It helps policymakers and regulators make defensible decisions with clear accountability and human oversight, rather than treating AI outputs as unquestionable. It helps Digital Twin and systems teams build governance into design, not as an afterthought. It supports data sovereignty and the respectful use of knowledge, thereby strengthening participation and fairness. It also helps disaster and humanitarian organisations reduce bias and improve equity in risk and warning systems. Recent research (Randeniya, J.N., Haigh, R., & Amaratunga, D., 2025, “Responsible intelligence: ethical AI governance for climate prediction in the Australian context,” AI and Ethics) reveals a concerning gap in AI adoption across climate institutions. Government agencies demonstrate AI adoption rates of only 2.1 out of 5, compared to academic institutions at 4.0 out of 5. Yet government agencies bear primary responsibility for disaster management and climate policy decisions. This gap creates potential for systematic bias and reduced accountability in the systems that most directly affect public safety and infrastructure investment. Overall, it supports resilience and sustainability and contributes to the United Nations Sustainable Development Goals (UN SDGs).

  • What is a real-world example or case study of how this might help?

One practical way this could help is through a multi-hazard climate intelligence and decision support service operated by meteorological agencies, disaster management authorities, infrastructure operators, local governments, and research partners. It is designed to manage compounding extremes such as extreme rainfall that drives river and flash flooding, coastal storm surge that coincides with high tides, cyclones and typhoons that bring damaging wind and rain, heatwaves that raise health risk and energy demand, and drought and fire weather that disrupt landscapes, livelihoods, and supply chains. In this program, machine learning and physics-informed models combine satellite observations, ground sensors, forecasts, historical impact data, and local knowledge to produce probabilistic, location-specific outlooks and practical planning scenarios weeks to seasons ahead. These outputs feed shared operational tools used for evacuation planning, hospital surge preparation, prioritising critical assets, staging relief logistics, and guiding recovery investment.
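As an illustrative sketch only (the data, function name, and 150 mm trigger are hypothetical, not drawn from the standard), a probabilistic, location-specific outlook of the kind described above can be derived from an ensemble of model runs by computing exceedance probabilities:

```python
import numpy as np

def exceedance_probability(ensemble, threshold):
    """Fraction of ensemble members exceeding `threshold` at each location.

    ensemble : array of shape (n_members, n_locations), e.g. seasonal
               rainfall totals in mm from multiple model runs.
    """
    return (ensemble > threshold).mean(axis=0)

# Hypothetical 4-member ensemble over 3 locations (seasonal rainfall, mm)
ens = np.array([
    [120.0, 310.0,  95.0],
    [140.0, 290.0, 110.0],
    [100.0, 350.0,  90.0],
    [160.0, 400.0,  85.0],
])

# Probability that rainfall exceeds a hypothetical 150 mm planning trigger
p_exceed = exceedance_probability(ens, 150.0)
print(p_exceed)  # → [0.25 1.   0.  ]
```

Presenting outputs as probabilities per location, rather than single deterministic values, is one concrete way the transparent handling of uncertainty discussed below can be built into operational tools.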

IEEE P7021 would help by making responsible AI a consistent practice, ensuring the service remains trustworthy when decisions are urgent. It would require transparent handling of uncertainty, stronger testing under changing conditions, equity checks to ensure risk is not missed in lower-capacity areas, and clear accountability so model updates and warning decisions are reviewed and authorised by accountable agents. The result is a more sustainable and equitable climate service that supports safer action across many hazards and helps communities and decision-makers understand both what the system predicts and its limits.
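A minimal sketch of the kind of equity check P7021 would call for (all data, names, and the 10-point review threshold are invented for illustration): compare the miss rate of a warning system, i.e. hazard events that occurred but received no warning, between higher- and lower-capacity regions, and flag a material gap for review.

```python
def miss_rate(warned, occurred):
    """Fraction of observed hazard events that received no warning."""
    events = [w for w, o in zip(warned, occurred) if o]
    if not events:
        return 0.0
    return sum(1 for w in events if not w) / len(events)

# Per past event window: (warning issued?, hazard occurred?), by region group.
# Entirely hypothetical records.
high_capacity = {"warned": [1, 1, 0, 1, 1], "occurred": [1, 1, 1, 1, 0]}
low_capacity  = {"warned": [1, 0, 0, 1, 0], "occurred": [1, 1, 1, 0, 1]}

gap = (miss_rate(low_capacity["warned"], low_capacity["occurred"])
       - miss_rate(high_capacity["warned"], high_capacity["occurred"]))

# Flag for human review if lower-capacity areas are missed notably more often
needs_review = gap > 0.10
print(gap, needs_review)  # → 0.5 True
```

In practice such a check would run over real verification records and feed the accountability process described above; the point of the sketch is that an equity audit can be a routine, reviewable computation rather than an ad hoc judgement.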

  • What type of people might be interested or well-suited for this standards group?

P7021 will be most relevant to people who work at the intersection of climate risk, long-term projections, and the responsible use of AI. It will suit those who can contribute practical expertise in governance, leadership, regulation, and ethical principles. This includes leaders and specialists from meteorological and climate agencies, national climate services, climate research institutions, and Earth observation teams who develop and operate modelling and projection programs. It also includes AI engineers, data scientists, system architects, and product owners who design and deploy climate AI systems, including climate Digital Twins and decision-support platforms. Disaster risk and early warning practitioners in emergency management agencies and humanitarian organisations are also well-suited because they rely on climate risk outputs for preparedness, anticipatory action, and recovery. First Nations and Indigenous representatives and knowledge holders are essential to support Indigenous data sovereignty, culturally safe governance, and appropriate engagement with Traditional Ecological Knowledge. Moreover, assurance, audit, and ethics professionals, along with legal and regulatory experts, are important participants because they help set expectations for accountability, transparency, oversight, and compliance. Where AI-enabled projections directly shape public plans and investment decisions, a focused group of senior policy and governance decision owners can help ensure the standard supports defensible decision-making and responsible leadership in practice.

  • How does P7021 relate to existing frameworks and standards?

P7021 aligns with UNESCO’s AI Ethics Framework and is designed to translate globally recognised ethical principles into climate-ready, implementable practice for long-term projections and DRR. It is informed by established standards and initiatives such as the IEEE P7000 series, ISO approaches to resilience and continuity, and the United Nations Office for Disaster Risk Reduction (UNDRR) Sendai Framework, while addressing a clear gap: there is currently no dedicated standard focused on ethical AI implementation for long-term climate projections and DRR. By strengthening trustworthy, inclusive climate intelligence, P7021 also supports the UN Sustainable Development Goals (UN SDGs), particularly those related to climate action, resilient infrastructure, reduced inequalities, and effective institutions. In this way, P7021 supports global, national, and peak-body agendas by enabling adoption at scale across government, enterprise, not-for-profit, and profit-for-purpose sectors, reinforcing resilience, equity, transparency, and accountable governance for AI-enabled climate decision infrastructure.

  • Call to Action

The P7021 working group is actively seeking diverse perspectives to inform this crucial standard. Whether you’re a climate scientist, AI developer, policymaker, Indigenous knowledge holder, or community advocate, your expertise is valuable. We particularly welcome participation from practitioners in national meteorological agencies and climate services, Indigenous representatives and Traditional Knowledge holders, AI ethics specialists with environmental application experience, DRR professionals using AI in operational contexts, researchers working on explainable AI and algorithmic fairness, and community representatives from climate-vulnerable regions.