When we think of artificial intelligence (AI) and climate justice, we can imagine two stars in an orbital waltz, each with its own gravity: sometimes in harmony, other times in tension. In moments of alignment, their fields reinforce each other, lending new vitality and perspective.
Yet not all orbits are stable, and the gravitational field of AI is growing at an accelerating pace. The ultimate danger we face is the potential for AI to swallow everything around it, much like a black hole.
AI has the potential to illuminate patterns in climate data, sharpen models and improve our ability to forecast an uncertain future. Yet there is justified concern over the carbon footprint of training large AI models, the secrecy surrounding it, and the risk of exclusion—particularly for communities already on the fringes of both climate and tech discourse. The risk is that AI becomes the superstar rather than the co-star, disrupting the equilibrium of the whole system. The question, therefore, is not whether these two spheres can coexist, but whether they can do so on mutually reinforcing terms.

In this time of rapid information consumption, one of the greatest challenges may simply be to open the door to conversation: not to idealize or dismiss AI, but to collaborate on the conventions, best practices and boundaries that will govern its development within the climate realm in general and climate justice in particular. Indifference would relegate the climate community to the role of silent stakeholder on the sidelines, missing the chance to fully engage in what may be one of the most transformative developments of our era.
Understanding where climate justice and AI meet should not be a purely academic exercise, either. It requires dialogue that flows across technical, ethical, community, policy, political, environmental and social spaces, rooted in the day-to-day needs of the people most vulnerable to both climate hazards and technological exclusion.
As an initial effort to understand the relationships and tensions between AI and climate justice, we can use a popular strategist’s toolkit: the SWOT (strengths, weaknesses, opportunities, threats) analysis. Examining AI and climate justice through this lens lets us start exploring what the relationship might look like.
AI’s ability to quickly process massive, disparate datasets, from weather patterns to social media sentiment, is transforming climate work across fields. Emergency responders can use AI to optimize response times after hurricanes or wildfires. In Southeast Asia and East Africa, AI-powered early warning systems are already saving lives. Satellite imagery that once took weeks to process can now be assessed in hours; as a consequence, deforestation in the Amazon, coastal erosion in Bangladesh and methane flares in Texas are all being tracked with unprecedented precision.
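To make the mechanics concrete: at its simplest, this kind of monitoring compares vegetation indices between two satellite passes over the same area. The Python sketch below is a minimal, illustrative version using invented toy arrays, not any real imagery pipeline; the function name and thresholds are assumptions for the example.

```python
import numpy as np

def detect_vegetation_loss(ndvi_before, ndvi_after, drop_threshold=0.2):
    """Flag pixels whose NDVI (vegetation index) dropped sharply.

    ndvi_before, ndvi_after: 2-D arrays of NDVI values in [-1, 1],
    assumed co-registered (same area, same resolution).
    Returns a boolean mask of suspected vegetation loss.
    """
    delta = ndvi_after - ndvi_before
    # A large NDVI drop on previously vegetated pixels suggests clearing.
    return (ndvi_before > 0.4) & (delta < -drop_threshold)

# Toy 3x3 scenes; real pipelines would read satellite rasters with a
# library such as rasterio and work at millions of pixels per scene.
before = np.array([[0.7, 0.8, 0.1],
                   [0.6, 0.7, 0.2],
                   [0.5, 0.6, 0.1]])
after = np.array([[0.2, 0.8, 0.1],
                  [0.1, 0.7, 0.2],
                  [0.5, 0.1, 0.1]])
loss_mask = detect_vegetation_loss(before, after)
print(f"Suspected cleared pixels: {loss_mask.sum()} of {loss_mask.size}")
```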
The benefits of AI, however, are not evenly distributed, and they are counterbalanced by evident weaknesses. One of the most serious is data imbalance: some of the communities most vulnerable to climate change—such as Indigenous and rural farming communities—are poorly represented in the data AI learns from. These omissions can produce considerable bias. A flood model that ignores the topography of informal settlements, for example, could underestimate the hazard; likewise, a wildfire model with no understanding of traditional land management may overlook essential firebreaks.
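A small synthetic experiment makes the point. In the illustrative Python sketch below (all data, group labels and numbers are invented), a risk classifier trained almost entirely on one well-mapped group performs far worse on an underrepresented group whose terrain relates to risk differently.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_data(n, slope):
    """Synthetic 'flood risk': one terrain feature, group-specific relation."""
    x = rng.normal(size=(n, 1))
    y = (slope * x[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y

# 950 samples from well-mapped areas, only 50 from an underrepresented one
# whose terrain relates to flooding differently (opposite sign here).
x_a, y_a = make_data(950, slope=1.0)   # majority group
x_b, y_b = make_data(50, slope=-1.0)   # underrepresented group

model = LogisticRegression().fit(np.vstack([x_a, x_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on fresh data from each group: the model, dominated by the
# majority pattern, misses most true positives in the minority group.
for name, slope in [("majority", 1.0), ("underrepresented", -1.0)]:
    x_test, y_test = make_data(500, slope)
    print(name, "recall:", round(recall_score(y_test, model.predict(x_test)), 2))
```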
Training large AI models is also costly, requiring computing power, electricity, broadband connectivity and specialized skills. These requirements put AI out of reach for most nations in the Global South, as well as for many local communities in the Global North. AI is, in this case, a luxury.
Additionally, AI’s unexplainability—its “black box” status—raises issues of accountability. If AI is used to decide where adaptation finance should be invested, or which communities should be evacuated, stakeholders need to be able to question those decisions. But if the models cannot explain how and why they reach certain conclusions, how can they be trusted, or challenged?
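Post-hoc attribution methods offer one partial answer. The sketch below uses scikit-learn’s permutation importance on invented data to show how an analyst might at least probe which inputs a funding model leans on; the feature names are hypothetical, and this is a minimal illustration rather than a full audit.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)

# Hypothetical inputs a funding model might use (names are illustrative).
features = ["flood_exposure", "income", "past_damage", "population"]
X = rng.normal(size=(400, 4))
# Hidden ground truth: the score depends mostly on exposure and past damage.
y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(scale=0.3, size=400)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's accuracy degrades; larger drops mean heavier reliance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in sorted(zip(features, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:15s} {imp:.3f}")
```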
And then there’s the environmental cost. In an ironic twist, training AI models takes vast amounts of energy—energy that is often generated from fossil fuels. Research has estimated that training one advanced language model can emit as much carbon as five cars do over their lifetimes. If AI is to be part of the climate solution, it first needs to address its own contribution to the problem.
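The underlying arithmetic is straightforward, even if rigorous accounting is not. The back-of-the-envelope Python sketch below uses purely illustrative placeholder numbers, not measurements of any actual model.

```python
# Back-of-the-envelope training-emissions estimate. Every number below is
# an illustrative placeholder, not a measurement of any particular model.
gpu_count = 512            # accelerators used for training
gpu_power_kw = 0.4         # average draw per accelerator, in kW
training_hours = 24 * 30   # one month of continuous training
pue = 1.2                  # data-center overhead (power usage effectiveness)
grid_kgco2_per_kwh = 0.4   # carbon intensity of the local grid

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
emissions_tonnes = energy_kwh * grid_kgco2_per_kwh / 1000

print(f"Energy: {energy_kwh:,.0f} kWh")
print(f"Emissions: {emissions_tonnes:,.0f} tonnes CO2")
# The formula also shows the levers: fewer GPU-hours, better PUE, or a
# cleaner grid each cut the footprint proportionally.
```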
Yet there are also numerous opportunities to use AI for climate justice, which might help mitigate some of these weaknesses.
One of the most promising opportunities is community-driven AI. Local communities worldwide are taking the lead in developing their own tools, adapting AI to Indigenous knowledge, cultural values and local priorities. In Pacific Island states, for example, community-driven drone programs closely track coastal erosion and guide adaptation. In Canada, First Nations data sovereignty initiatives are ensuring environmental models serve their communities rather than surveil them.
Another opportunity is policy integration. AI can inform the entire spectrum of policy, from zoning laws to carbon taxes, if it is embedded in transparent and accountable processes. Some cities are already using AI to plan cooling systems for heat-prone neighborhoods, targeting investments where they are likely to be most effective. Others are incorporating emissions forecasts into building codes or farm subsidies.
Additionally, researchers and activists from various countries are coming together to co-create tools and datasets that counter digital colonialism by facilitating a bidirectional knowledge exchange. Predictive applications can anticipate disease outbreaks following floods, track hotspots of food insecurity and even project climate-induced migration flows. Paired with social services and policy responses, such indicators could shift climate action from reactive to preventive.
Increased investment in justice-led innovation marks another turning point. Venture and philanthropic funders are backing projects that put equity at the fore—from solar-powered sensors in refugee camps to climate finance algorithms that factor in social vulnerability. With the right guardrails in place, these investments could catalyze a new wave of inclusive technology.
But we cannot forget the many risks.
Digital colonialism is pervasive. Too often, AI tools created in the Global North are deployed wholesale in the Global South with little regard for local context, subverting local expertise and imposing untested standards. A model in which data flows north, choices are made remotely and communities have little say is clearly flawed; it needs to be replaced with one that is co-produced and co-created.
Surveillance and displacement are also causes for great concern. Technologies such as drone surveillance and geospatial tracking can be turned against the very people they are supposed to safeguard—enabling evictions, policing traditional land use and criminalizing dissent. Accountability diminishes when monitoring overreaches and mechanisms for guaranteeing fairness in algorithmic decision-making are absent. The result is a risk of techno-solutionism: reducing complicated societal problems to algorithms and apps, sidestepping the need for human action and framing climate change as purely a data issue, stripped of its other dimensions.
No algorithm, no matter how sophisticated, can substitute for human action and interaction. Simply digitizing knowledge for extraction, without prioritizing values, does not lead to real progress. In this regard, the next decade will be crucial. With climate hazards intensifying and AI systems growing more powerful, the choices we make now will echo ever more loudly into the future.
We can choose how AI and climate justice interact. To make that choice wisely, we must listen not only to the facts; we must also recognize that climate change is more than a data problem—it is a humanitarian and social crisis. To leverage AI properly, we need to regulate activities we know are harmful to society and to the most vulnerable, and we need to start co-creating tools that include the voices of those too often marginalized. Artificial intelligence extends our rational intelligence beyond limits we could not have imagined even a few years ago. Let us not forget that we also need emotional intelligence.
Marco Tedesco is Lamont Research Professor of Marine and Polar Geophysics at Lamont-Doherty Earth Observatory (LDEO), which is part of the Columbia Climate School. At the recent MR2025 conference, Tedesco gave a presentation called “Perspectives on AI and Climate Justice.”
Views and opinions expressed here are those of the authors, and do not necessarily reflect the official position of the Columbia Climate School, Earth Institute or Columbia University.
Article source: https://news.climate.columbia.edu/2025/07/02/ai-and-climate-justice-balancing-risk-with-opportunity/