By Claire Dennis, AI Governance Researcher, GovAI, and Johannes Kirnberger, Policy Consultant for AI & Sustainability, OECD
Human progress is remarkably fast and, consequently, fraught with risks. In just over a century, we have burned through coal and oil deposits which took tens of millions of years to form, causing a rapidly escalating climate crisis. In the last sixty years, we have engineered a trillion-fold increase in computing power, driving exponential growth in artificial intelligence, a technology with immense potential benefits and harms.
While both climate change and AI present deep uncertainties – even existential risks – our self-created environmental predicament demands that we act even faster and use every tool at our disposal. Despite its own hazards, AI is rapidly emerging as one of our most valuable tools for mitigating climate change.
For example, Svante Arrhenius, the late-nineteenth-century scientist who first predicted global warming, performed tens of thousands of computations by hand to quantify the greenhouse effect. Today, climate models comprising over a million lines of code run on supercomputers the size of tennis courts, with AI increasingly used to help write and analyse that code.
Beyond modelling, AI can automate extremely complex tasks – for example, optimising industrial equipment to minimise energy use. It can also predict agricultural yields as extreme weather threatens food security, and power early warning systems for floods and other natural disasters.
Many real-world applications of AI are essentially large-scale experiments with unknown effects, much like the release of carbon emissions into the atmosphere. As we employ AI to address critical global issues like the climate crisis, businesses, investors, and regulators must establish effective guardrails that ensure responsible AI deployment without stifling its potential impact. This in turn requires the establishment of common standards, metrics, and indicators on the environmental impact of AI and its enabling potential.
The IEEE Planet Positive 2030 Committee on Metrics and Indicators notes that “holistic designs that provide quantitative/qualitative measures of ecosystem impact and map pathways with evidence-based data will facilitate our reaching goals of shared benefit to people and the Planet”. Here are a few starting points for achieving this mission.
Policy can steer AI towards reducing carbon emissions
Right now, there is little regulation specific to AI, and most cutting-edge AI is developed by the private sector. While global private investment in AI has grown significantly over the last decade, reaching $91.9 billion in 2022, most of this funding goes to a handful of industries – healthcare, data processing, fintech, and retail – and to applications such as automated vehicles, fitness and wellness technology, and semiconductors. Little of it is dedicated primarily to climate research and solutions.
Governments are starting to weigh in and are increasingly aware of the intersection between AI and climate change. For example, the amendments to the EU AI Act adopted by the European Parliament in June 2023 call for any high-risk AI system to identify, estimate and evaluate “the reasonably foreseeable risks that the high-risk AI system can pose to (…) the environment when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.” The amendments also call for the creation of key performance indicators (KPIs) to track the energy consumption of AI systems, to promote the use of more efficient AI technologies, and to measure the impact of AI systems on the Sustainable Development Goals (SDGs).
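To make such a KPI concrete, the operational emissions of a single AI training run can be estimated from hardware power draw, runtime, data-centre overhead (PUE), and grid carbon intensity. The sketch below illustrates the arithmetic; the function name and all figures are illustrative assumptions, not values prescribed by the EU AI Act or any standard.

```python
# Illustrative sketch of an energy/carbon KPI for a single AI training run.
# All figures are hypothetical placeholders, not regulatory or measured values.

def training_footprint_kgco2e(gpu_count: int,
                              gpu_power_kw: float,
                              hours: float,
                              pue: float,
                              grid_kgco2e_per_kwh: float) -> float:
    """Estimate operational emissions of one training run.

    energy (kWh)       = GPUs x power per GPU (kW) x hours x PUE
    emissions (kgCO2e) = energy x grid carbon intensity
    """
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kgco2e_per_kwh

# Example: 512 GPUs drawing 0.4 kW each for two weeks, PUE of 1.2,
# on a grid emitting 0.35 kgCO2e per kWh -> roughly 29 tonnes CO2e.
print(round(training_footprint_kgco2e(512, 0.4, 24 * 14, 1.2, 0.35)))
```

Even a rough indicator like this, reported consistently, would let regulators and buyers compare the footprint of different systems and track efficiency gains over time.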
International organisations, civil society, and standards bodies will also be instrumental in focusing AI efforts on climate change. The OECD has taken a lead in bringing AI and climate issues to the attention of policy makers, as demonstrated by its report on measuring the environmental impacts of AI compute and applications. Its work on AI brings together a high-profile, multi-stakeholder network of AI experts from academia, industry, civil society, non-governmental organisations (NGOs), and intergovernmental organisations (IGOs) who make concrete policy recommendations to OECD member states and observers.
As regulators increasingly look to “soft law” instruments – like certification programs and standards – to help achieve their regulatory objectives, the inclusion of climate considerations in such instruments can give organisations and regulators common ways to measure and understand climate impacts and mitigations. Civil society organisations like the Responsible AI Institute, which develop certification programs and assessments for AI implementation, can integrate climate and environmental analysis into their programs. They can inform organisations of opportunities to apply AI to climate work and research and connect community members who have skills and interests at this intersection.
Standards bodies like the International Organization for Standardization (ISO) have already stated that AI systems should “not, under defined conditions, lead to a state in which human life, health, property, or the environment is endangered.” The IEEE Standards Association has worked extensively on standards such as the IEEE 7000 series, which prioritises people and the planet as metrics for responsible autonomous systems. In 2015, IEEE began an initiative identifying environmental sustainability and well-being as key aspects of ethical technology design, published as Ethically Aligned Design. In 2019, as part of a multi-stakeholder process, the OECD released its OECD AI Principles, the first intergovernmental standard on AI. Principle 1.1 on inclusive growth, sustainable development and well-being stresses the importance of “responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet.” While such statements remain broad, they provide a framework for future specifications and measurement standards.
Focus AI research globally on better understanding and combating climate change
As governments assume a central role in shaping AI and climate change outcomes, they should not only prioritise regulating AI systems but also substantially increase AI research funding. To begin with, governments can channel AI research toward targeted climate-related areas by facilitating access to advanced computational resources and extensive government datasets in secure cloud environments.
This is happening in the United States, where the US National Science Foundation (NSF) has invested over $140 million to establish several AI research institutes, including the Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography. This institute focuses specifically on AI research to track changes in weather patterns, oceans, sea level rise, and disaster risk. Other institutes that received this funding focus on climate-smart forestry and next-generation food systems.
Beyond funding and resources, national policies should address major shortages of AI expertise and gaps in confidence in AI. In a recent survey of over 1,000 global leaders working on AI or climate-change initiatives, 78% cited a lack of AI expertise as an obstacle to applying AI to combat climate change, and 67% said their organisation lacked confidence in AI data and analysis.
Government initiatives are already moving in this direction. In 2020, the US Government formed a National AI Research Resource (NAIRR) Task Force to chart a path for expanding access to the resources needed for AI research. The Task Force’s final report, released in January 2023, lists climate change as one of the core global challenges to be addressed with AI through partnership among academia, government, industry, and civil society.
In March 2023, the UK government confirmed an investment of £900 million to develop a dedicated AI Research Resource for public benefit, which will be used to better understand climate change, among other objectives.
The US and EU recently announced that they will collaborate on using AI to address major global challenges in five key areas: extreme weather and climate forecasting, emergency response management, health and medicine improvements, electric grid optimisation, and agriculture optimisation. Four of these five areas are directly connected to climate change, a clear signal of how the public sector envisions using this technology in the coming decades.
AI for climate action, however, should not simply be another initiative on the science agenda. It must come from all sectors and be applied to both domestic and international efforts, with adequate resources and scale.
Industry can mitigate the environmental impact of “AI compute” across the supply chain of large AI systems
Technology organisations that provide large AI systems and the requisite computing power are critical to the fight against climate change. Leading companies across many industries are heavy users of these systems and resources, which they anticipate will give them a competitive edge in the future.
Prominent technology firms have signalled their commitment to combating the negative climate impacts of technology. Companies like Google, Meta, AWS, and Microsoft are targeting carbon neutrality for their organisations and net zero emissions for their data centres. Google has even committed to running on 24/7 carbon-free energy by 2030, meaning that every kilowatt-hour of electricity it consumes is matched with carbon-free electricity sources on the same grid, in the same hour.
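To illustrate why hourly matching is a stricter commitment than annual matching, the toy calculation below compares the two over a single day; the load and supply figures are invented for illustration and do not reflect any company’s actual data.

```python
# Toy comparison of annual-style vs 24/7 (hourly) carbon-free energy matching.
# The 24 hourly values below are invented for illustration.

load_kwh = [100] * 24                     # flat data-centre load, every hour
cfe_kwh = [0] * 6 + [200] * 12 + [0] * 6  # carbon-free supply, daytime only

# Annual-style matching: total carbon-free purchases vs total consumption.
annual_match = min(sum(cfe_kwh), sum(load_kwh)) / sum(load_kwh)

# 24/7 matching: each hour's load counts only against that hour's supply.
hourly_match = sum(min(l, s) for l, s in zip(load_kwh, cfe_kwh)) / sum(load_kwh)

print(f"annual-style matching: {annual_match:.0%}")  # 100%
print(f"24/7 hourly matching:  {hourly_match:.0%}")  # 50%
```

In this invented example the annual books balance perfectly, yet half of the actual consumption still runs on whatever the grid supplies at night, which is exactly the gap the 24/7 commitment is meant to close.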
Recent publications such as the joint Google and BCG report “Accelerating Climate Action with AI” and Microsoft’s playbook “Accelerating Sustainability with AI” show that the topic is high on the agenda of technology companies. They could go a step further by extending this logic to their clients. As a condition of providing large AI systems and AI compute resources to industry, tech giants and others could require their customers to demonstrate carbon neutrality, net zero, or other appropriate policies and commitments. Since some industries are more carbon-intensive than others, this would likely require an industry-specific view of which policies and commitments are appropriate for access to cutting-edge AI systems and compute resources.
While striving to focus AI efforts on the climate crisis, we should also be wary of oversimplifying climate change as an engineering problem that AI can magically fix, a silver bullet that relieves us of the urgent need to decarbonise across sectors. Addressing climate change with the appropriate use of AI demands global coordination, not only geographically but also across political, social, and economic spheres. Governments, industry, civil society, and international organisations all have key roles to play in this urgent effort. Planet Positive 2030 brings together many of these stakeholders and can be a crucial step in this direction.
Claire Dennis is an international AI governance researcher with the Center for the Governance of AI (GovAI) in Oxford. She recently co-authored the report, “Towards a UN Role in Governing Foundation AI Models,” which examines the UN’s strengths and challenges in regulating frontier AI systems. Previously, Claire was an AI governance research fellow at Cambridge University, policy fellow at the Responsible AI Institute, strategic planning consultant at the UN Executive Office of the Secretary-General, and U.S. diplomat. She holds a bachelor’s degree in international affairs from the George Washington University and a Master in Public Affairs from Princeton University.
Johannes Leon Kirnberger is a policy consultant for AI and sustainability at the OECD. He previously led the program on climate action and biodiversity preservation at the Global Partnership on AI (GPAI) and the International Centre of Expertise in Montreal on AI (CEIMIA). Johannes is a member of the UNEP Expert Group on Digital Tech for Circular Economy, where he co-developed a digital transformation roadmap for catalysing digital technologies to accelerate a circular economy. He holds a Bachelor’s degree in Management from ESCP Business School, a Master’s degree in International Public Management from Sciences Po, and a Master’s degree in International Affairs, Energy and Environment from Columbia University. As a guest lecturer at the Technical University of Munich (TUM), he teaches climate change and AI policy. He is a member of the IEEE Planet Positive 2030 Metrics and Indicators Committee.