Some Researchers Adopt Sustainable AI As the Mission
How well AI is helping the UN achieve its 17 SDGs is a test of how worthwhile the technology is; some individuals and organizations are committed to seeing the goals through
By John P. Desmond, Editor, AI in Business

The UN's 17 Sustainable Development Goals (SDGs) are seen as one of the best representations of the goals and challenges facing humanity.
Adopted in 2015, the SDGs provided a roadmap of ambitious goals to achieve by 2030, including the eradication of poverty, the reduction of inequality and greater protection from climate change. More than halfway to the deadline, only 17 percent of SDG targets are on track, with progress impeded by the Covid-19 pandemic, a worsening climate crisis and escalating conflicts in many regions of the world, according to a recent summary from the Council on Foreign Relations.
AI is seen as a double-edged sword in relation to the SDGs. On one hand, AI can help make the world a better place and do good things for people, contributing especially to advances in medicine; the Covid-19 vaccine, for example, was developed rapidly with the aid of AI. On the other hand, the amount of electrical energy required to power AI, both now and as projected into the future, has the potential to cause serious environmental harm and to worsen inequality.
Companies taking venture capital to fund their efforts today are setting up on one side of that equation: they need to press ahead with commercial AI, because their investors are looking for a return. In the nonprofit world, and to some extent in academia, many researchers are exploring how to deliver on the UN's SDGs with the help of AI, without specifically aiming to make personal fortunes. If fortune becomes a byproduct, more power to the innovators.
Computer scientist Serge Stinckwich is head of research at the United Nations University Institute in Macau, a center of learning established by the UN in 1992 to conduct research and training on the use of technology to address global issues. He is interested in how AI can help countries hit their SDG targets by the 2030 deadline, according to a recent account in Nature.
One area of his concentration is synthetic data, which is needed to meet the demand of large language models (LLMs) for machine-readable, diverse data sets for training AI algorithms. Gartner estimates that more than 60 percent of the data used to train LLMs will be synthetic by the end of the year, according to the account in Nature.
“Synthetic data are generated from data sets that already exist. So, biases in the initial data sets could be propagated throughout the synthetic data, and in turn, AI models that have been trained on them,” stated Stinckwich. “Our work at UNU Macau focuses on understanding the impact of synthetic data used in machine learning, including the risks for sustainable development through research.”
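Stinckwich's point about bias propagation can be seen in a toy example. The sketch below is illustrative only (it is our own, not UNU Macau's pipeline; the skewed seed set and the naive resampling generator are invented for demonstration): a generator fitted to a demographically skewed seed data set reproduces the same skew in its synthetic output.

```python
# Illustrative sketch only: a stand-in "generator" that resamples from an
# existing (hypothetical) seed data set, showing how a 9:1 demographic skew
# in the seed propagates into the synthetic data derived from it.
import random
from collections import Counter

random.seed(0)

# Hypothetical seed data set: 90 percent of records come from group "A".
seed = [{"group": "A"} for _ in range(900)] + [{"group": "B"} for _ in range(100)]

def generate_synthetic(seed_records, n):
    """Naive generator: sample with replacement from the seed distribution,
    standing in for a generative model trained on that distribution."""
    return [dict(random.choice(seed_records)) for _ in range(n)]

synthetic = generate_synthetic(seed, 10_000)
print(Counter(r["group"] for r in seed))       # Counter({'A': 900, 'B': 100})
print(Counter(r["group"] for r in synthetic))  # roughly the same 9:1 skew
```

A model trained on the synthetic set would then inherit the same imbalance, which is the chain of propagation Stinckwich describes.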
The UNU Macau team published guidelines last year on the responsible use of synthetic data, including suggestions for using diverse data when creating the synthetic data sets, such as by incorporating a wide range of demographics, environments and conditions. “We also recommend disclosing or watermarking all synthetic data and their sources, disclosing quality metrics for synthetic data and prioritizing the use of non-synthetic data when possible,” Stinckwich stated.
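Read concretely, the disclosure recommendations could look like the following sketch, which tags each synthetic record with provenance metadata and publishes a simple quality metric alongside each batch. The SyntheticRecord and SyntheticBatch classes and the drift metric are hypothetical illustrations of ours, not part of the UNU Macau guidelines.

```python
# Minimal sketch of record-level disclosure for synthetic data. The class
# names, fields and metric are assumptions for illustration only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SyntheticRecord:
    payload: dict
    synthetic: bool = True            # explicit disclosure flag
    source_dataset: str = "unknown"   # provenance of the seed data

@dataclass
class SyntheticBatch:
    records: list
    generator: str                    # which model/version produced the batch
    created: str = field(default_factory=lambda: date.today().isoformat())

    def quality_report(self, group_key, group_value, reference_share):
        """Disclose one illustrative quality metric: how far a group's share
        in the synthetic batch drifts from its share in the real data."""
        share = sum(1 for r in self.records
                    if r.payload.get(group_key) == group_value) / max(len(self.records), 1)
        return {"group_share": round(share, 3),
                "drift_vs_reference": round(share - reference_share, 3)}

batch = SyntheticBatch(
    records=[SyntheticRecord({"group": "A"}, source_dataset="census_seed_v1"),
             SyntheticRecord({"group": "B"}, source_dataset="census_seed_v1")],
    generator="naive-resampler-0.1",
)
print(batch.quality_report("group", "A", reference_share=0.5))
# -> {'group_share': 0.5, 'drift_vs_reference': 0.0}
```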
He encourages institutions and organizations to set global quality standards, security measures and ethical guidelines for the generation and use of synthetic data. He hopes UN member states and agencies will adopt the guidelines.
Most LLMs built to date have been developed by private companies, not by academic and research institutions. Public-private partnerships are now emerging to attempt to strike a better balance. For example, Data Science Africa, a nonprofit group in Nairobi, is partnering with the Swiss Federal Institute of Technology to help young Africans use data science to develop solutions for local problems.
IEEE Publishes Metrics for Responsible AI
The IEEE has weighed in on data governance (see "In Search of Permission and Wealth-Sharing Around Personal Data," AI in Business, July 14, 2023) and is now joining the discussion on responsible AI with the recent publication of the paper "Prioritizing People and Planet, as the Metrics for Responsible AI." The report was produced by the committee for Ethically Aligned Design for Business, chaired by Adam Cutler, who works on AI design at IBM.
The IEEE report recognized the UN SDGs as "the most well-known well-being indicators." Without such a list, an AI developer might not think to prioritize the eradication of hunger worldwide. "The good news is that the tide is turning regarding well-being (ESG/SDG) reporting," the authors stated. Shareholders are asking their companies to provide better reporting on environmental, social and governance (ESG) matters, focused on long-term sustainability, "as our planet needs restoration (not just avoidance of further harm)." And money talks; companies are beginning to tie executive pay to ESG performance.
For an organization to adopt the values needed for responsible AI, the IEEE authors' number one recommendation is to identify the inhibitors. The major one is likely to be skepticism. "This skepticism reinforces the need for well-being to be a strategic priority supported by leaders for these initiatives to take root and succeed," the authors stated.
Practical actions to address the inhibitors include consequence-scanning workshops, assessments of harm, and the use of multiple metrics. One tip: "It is essential to understand that metrics may conflict with one another—or seem to—based on what you are trying to measure and what you wish to prioritize," the authors stated. (Who said this stuff was easy?)
Also, ask questions. “Asking questions or having more general discussions is a good place to start.” Some samples: “Does our company have any existing AI principles? … Do we talk about the UN SDGs or other ESG metrics in regard to our AI products (either ones we use internally or that we produce for customers)? … Who focuses on wellness, well-being, or ESG metrics in our organization?”
AI for Good Has Been Plugging Away Since 2017
Perhaps the organization most closely associated with the use of AI for the benefit of all humanity is AI for Good, established in 2017 by the International Telecommunication Union (ITU), the UN agency for digital technologies. The organization is co-convened with the Government of Switzerland and is in partnership with over 40 UN agencies, with a self-declared mission of using AI to help achieve the UN SDGs.
Today, AI for Good has a community of over 37,000 contributors from 180 countries. It is engaged in many initiatives related to, for example, health, natural disaster management, digital agriculture, environmental efficiency, robotics challenges for young people, and hackathons. Its leaders include António Guterres, secretary-general of the United Nations; Fei-Fei Li, professor of computer science and co-director of Stanford's Human-Centered AI Institute; Lila Ibrahim, chief operating officer of Google DeepMind; Geoffrey Hinton, Nobel Laureate; and Timnit Gebru, founder and executive director of the Distributed AI Research Institute.
Its 2025 AI for Good Global Summit is scheduled for July 8-11 in Geneva.
Also worthy of attention is the AI4SDGs Think Tank, an open online service describing itself as "a global repository and an analytic engine of AI projects and proposals that impacts UN SDG, both positively and negatively." Its goal is to promote the positive use of AI for sustainable development and to investigate AI's negative impacts on it.
While its "champion network" is based in China, the network includes representation from all over the world. Its "expert network" includes Serge Stinckwich; Marie-Therese Png of Harvard University; and Mutale Nkonde, a former journalist in Zambia who founded AI for the People in 2020 and is currently pursuing a PhD in the Department of Digital Humanities at Cambridge University.
A recent paper from CLAI Research, entitled "Rethinking the Redlines Against AI Existential Risks," argues: "It is imperative to establish clear redlines as preventive means against existential risk that are induced by AI." The paper includes a theoretical framework for analyzing the direct impact of AI existential risks and proposes a set of exemplary AI redlines.
See the source articles and information at the Council on Foreign Relations, in Nature, in the IEEE paper "Prioritizing People and Planet, as the Metrics for Responsible AI," on the site of AI for Good, and on the site of the AI4SDGs Think Tank.