AI Can Help or Harm the Planet. It's Up to Us.
Communities around the world face three connected crises. Climate disruption is accelerating, bringing historic heatwaves, floods and wildfires. Natural ecosystems, from coral reefs to primary forests, are receding at alarming rates. And progress on human development is uneven and even reversing in some areas.
In times like this, it's easy to cast new technology as either a hero or a villain. Artificial intelligence (AI), especially the latest wave of generative AI, is often framed in this dichotomy. For some, it's the shining star of progress and abundance. For others, it's a harbinger of chaos, environmental harm and cascading job loss. Many remain somewhere in the middle.
WRI is no stranger to this tension. As a research institute with decades of experience delivering innovative data and applications; a team of hundreds of technologists and researchers; and trusted partnerships with leading technology companies, we have lived through many waves of digital optimism and skepticism. From the early days of machine learning, to the mobile revolution, to the largely unrealized potential of blockchain technologies, we have seen hype cycles both bust and deliver technology with transformative impact.
As AI ushers in the next wave of innovation, our belief remains unchanged: New technology can improve the world; in fact, it's one of the few things that reliably does. At the same time, new technology opens new risks and potential for real harm. What makes the difference is how people and institutions choose to deploy it.
AI Offers Both Promise and Potential Perils for the Planet
While the state of generative AI is evolving rapidly, the core risks and opportunities have been consistent since the public launch of ChatGPT in 2022.
For one, generative AI can produce outputs that look and sound right but are factually wrong (a phenomenon known as "hallucinations") or mind-numbingly generic (so-called "AI slop"). The sheer volume of either wrong or mushy content generated by AI risks overwhelming thoughtful, vetted work. This could make it harder for people to vet facts, increasing the chances that important decisions are based on bad information and slowing action.
Training and deploying AI is also expensive, both environmentally and financially. Vast amounts of energy and water are needed to fuel AI systems at scale. According to the International Energy Agency, a typical AI-focused data center consumes as much electricity as 100,000 households; the larger centers under construction today will consume 20 times that amount. In the U.S. alone, peak electricity demand is expected to increase by 128 gigawatts by 2029, in large part due to the data centers needed for AI. Meanwhile, electricity prices are rising and water shortages are worsening in many places around the world, adding new pressure on households already struggling to make ends meet.
Over-automation can also weaken human systems and judgment. Ill-conceived use of AI can erode an individual's or organization's ability to reason, weigh ethical decisions and build institutional memory. We may not see the cost until shocks and stresses reveal that the proverbial emperor has no clothes. For example, in the medical field, research shows that doctors who rely on AI assistance during procedures like colonoscopies can experience deskilling, which could actually worsen healthcare over time.
These risks are real, persistent and widely covered in the media and elsewhere. But at the same time, AI has the potential to help solve some of the world's greatest challenges facing people, nature and the climate:
Faster science and translation of science into solutions.
AI has the potential to transform scientific discovery. It is already helping us solve open challenges around weather forecasting, grid optimization and materials science, all of which are critical for scaling renewable energy. For example, researchers at MIT found that AI-based models can help utilities integrate renewables by improving the accuracy of solar and wind forecasts. This in turn improves grid management, helping deliver the right amount of electricity at the right place and time to keep the lights on and costs down.
Within WRI, our climate researchers have used AI to increase the efficiency of data collection and cleaning, speeding insights about countries' national climate plans (known as Nationally Determined Contributions, or "NDCs"). This means that trends and gaps in emissions commitments can be identified earlier than manual processes allow, clarifying the world's progress toward climate goals. It also helps the 400 ministries and more than 100,000 users who rely on Climate Watch to track progress, spot trends and act sooner than before.
Democratizing the power of information and analytics.
From frontline forest defenders to climate investors, we are all awash with information. Well-designed AI can help organizations find the signal in the noise, focus and act.
For example, we know from years of experience that communities with access to better data are more effective at preventing deforestation. In Peru, we found that groups using near-real-time tree cover loss alerts saw deforestation decrease by 52% in their territories, compared to similar communities that did not change their monitoring practices. AI will help us take the next massive step forward, allowing users to access critical nature data regardless of their technical background.
This month Land & Carbon Lab (an initiative convened by the Bezos Earth Fund and WRI), with the support of our Data Lab, will launch WRI's first environmental monitoring platform featuring an AI-powered interface. This will make it easy for users to discover, analyze and visualize data on nature, turning complex geospatial datasets into actionable insights for partners around the world. The system will allow users to generate maps and analysis in response to plain-language questions — like, "how much grassland was lost in the Maasai Mara last year" — rather than coding or using complicated visual interfaces to surface the information they need.
Augmenting human expertise.
Research shows that AI can be transformational for individual efficiency and that combining human insight with AI tools can produce better outcomes than either working alone. For example, AI-based coding tools have been shown to help individual software developers complete tasks over 55% faster. Moreover, emerging research suggests that AI may help us find more creative solutions to real-world challenges like urban planning.
This combination of efficiency and creativity has distinct benefits for the many small teams and organizations working to improve people's lives, protect nature and halt climate change. AI gives us a pathway to scale our work to meet the needs of the communities we serve within cost and time constraints.
Moving Beyond Promise and Peril: Foundations of Rigorous, Responsible AI Innovation
For organizations like ours and so many others, AI's opportunities and risks are no longer in question. The question is how we choose to move forward. For those who care about building a better future, this is a call to take the wheel: We must move beyond commentary toward using these technologies for public good, and share our lessons broadly.
How can we steer AI toward positive outcomes? It starts with setting guidelines that ground us in our values and light the way forward regardless of where the winds of technical change blow. At WRI, we use and advocate for a set of principles and practices to guide responsible AI use:
Be curious and data driven. It is better to ask good questions, design experiments, learn, and act on results than to trust the hype or cower in fear. Curiosity gives us the energy to drive forward; data gives us the focus to steer toward value and away from harm.
Be user centered. Build new technology that solves real problems for people who are trying to make the world a better place. Don't be satisfied with novelty or captured by utopian or dystopian visions. We must place users and their needs above our own excitement, fear or financial incentives.
Be accountable. AI often invites us to outsource responsibility. Instead, we must double down. New technology requires us to increase our accountability and environmental stewardship while using its power to augment human capacity, ingenuity and care, rather than automating it away.
Evaluate and learn at every stage of product development. Building successful AI products for novel use cases requires measurement at every step. This starts with custom evaluations to assess accuracy, reliability and cost before launch; it progresses to live monitoring of published tools to identify both success and misuse; it ends with formal impact evaluations to understand value and risk in the real world. AI without evaluation and learning is unlikely to produce positive outcomes.
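The pre-launch step above can be sketched in a few lines of code. This is a minimal, hypothetical example of a custom evaluation harness, not WRI's actual tooling: `ask_model` is a placeholder standing in for whatever model or API a product would call, and the test cases are illustrative.

```python
def ask_model(question: str) -> str:
    # Placeholder: a real harness would call the deployed model or API here.
    return {"What gas drives most warming?": "carbon dioxide"}.get(question, "unknown")

def evaluate(cases: list[tuple[str, str]]) -> float:
    """Return the fraction of test cases the model answers correctly."""
    correct = sum(
        1 for question, expected in cases
        if expected.lower() in ask_model(question).lower()
    )
    return correct / len(cases)

cases = [
    ("What gas drives most warming?", "carbon dioxide"),
    ("Which forest biome stores the most carbon?", "boreal"),
]
score = evaluate(cases)
print(f"accuracy: {score:.0%}")
```

In practice, a team would run a harness like this against a curated test set before every release, and gate the launch on a minimum accuracy threshold.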
Measure and manage environmental and financial costs. Organizations should closely monitor environmental and financial costs when deploying AI. Best practices include instrumenting systems to track computation load and costs in real-time, favoring efficient models, using caching to reduce repeat queries, and designing user interfaces that encourage thoughtful, measured use.
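Two of the practices above, caching to reduce repeat queries and tracking usage in real time, can be combined in a simple wrapper. This is an illustrative sketch, not a production pattern from the source: `run_model` is a hypothetical stand-in for a real model call, and character counts serve as a rough proxy for token cost.

```python
import functools
import hashlib

# Running tally of work actually sent to the model.
usage = {"calls": 0, "chars_processed": 0}

def run_model(prompt: str) -> str:
    # Placeholder response; a real system would call its model here.
    return f"response:{hashlib.md5(prompt.encode()).hexdigest()[:8]}"

@functools.lru_cache(maxsize=1024)
def cached_query(prompt: str) -> str:
    # Body runs only on cache misses, so the tally reflects real cost.
    usage["calls"] += 1
    usage["chars_processed"] += len(prompt)  # rough proxy for token cost
    return run_model(prompt)

cached_query("How much tree cover was lost last year?")
cached_query("How much tree cover was lost last year?")  # served from cache
print(usage["calls"])  # the repeat query cost nothing
```

The same tally could be exported to a monitoring dashboard, making compute load, and by extension energy and financial cost, visible as the system runs.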
Clarify methods and product maturity for external users. AI is an experimental technology that advances fastest through real-world testing. Doing this testing responsibly requires clear communication and informed consent. When launching a new product that leverages AI, it's important to make that clear and to explain the state of the product: Is it experimental (early in testing, requiring care); beta (stable enough to pilot, but likely to change); or a general release that's tested and ready for daily use?
Responsible AI for People and Planet
AI will not build a world where people, nature and climate thrive together. People will. And those people, from community leaders to wise companies to dedicated civil servants, will use technology, and indeed AI, to do so. No single organization can do this alone. That's why we work alongside partners like the Patrick J. McGovern Foundation, Google.org and the Bezos Earth Fund, as well as technical friends like Development Seed, Fenris and Pew Research Center, who share our commitment to rigorous and responsible innovation.
The real question is not whether AI is good or bad, but instead how we choose to shape the future. Like other transformative technologies before it, AI is still a tool, and its impact depends on the hands that guide it. Let's take the wheel with our values in mind and courage in our hearts and harness AI to serve people and planet.
Cover photo: Tongpool Piasupun/Shutterstock