While artificial intelligence (AI) promises immense benefits, it also poses tremendous risks. Some of them, such as accelerating misinformation, sophisticated cyber attacks, and soaring energy consumption, have already arrived. Others, including superintelligent machines that make decisions independently of human oversight, are likely still a few years away. Although awareness of these risks is growing, many others have yet to be defined. And for all the incalculable opportunities afforded by AI, especially in developing countries, it is risky business.
Concerns are mounting about the ways in which the rapid adoption of AI will negatively impact societies, including in the Global South. Last year, my Institute, together with New America, convened a Global Task Force made up of AI specialists from across the Americas, Africa, and Asia to review ways to improve AI safety and alignment. In 2024, the task force issued a primer on practically mitigating risks and improving resilience while also closing governance and regulatory gaps between the Global North and South.
AI risks
One of the most significant risks the group identified is mass automation and job displacement. AI is expected to impact vast numbers of workers across sectors ranging from agriculture, manufacturing, and retail to law, medicine, and finance. While new forms of employment will undoubtedly emerge, the jobs of up to 800 million people are at risk of automation by 2030, including 300 million in wealthy countries. The International Labor Organization estimates that over 56 percent of all jobs in low- and middle-income countries are at “high risk” of automation. Without safeguards in place, this could sharpen economic inequality and exclude low-skilled workers.
Another risk involves deepening digital divides and sharpening inequality. The gap between those who can access advanced technologies and those who cannot is expected to widen over the coming years, leading to lower productivity, reduced economic growth, and greater social and economic inequality. This is particularly so in low- and middle-income settings already facing shortfalls in digital talent and related services.
Biases and discrimination are another risk associated with AI. Advanced technologies and models designed in the US, China, and the EU can perpetuate and amplify biases already present in their training data. This can lead to discriminatory outcomes in everything from credit scoring to policing. It can also result in unfair exclusion from opportunities in the job market, credit and loans, and health services due to biased algorithms.
AI also enables the intensification of surveillance and privacy violations. The integration of AI into everything from smart cities to law enforcement can infringe on privacy, civil liberties, and human rights. This is especially so in countries with weaker democratic institutions. Indeed, authoritarian regimes are already deploying AI-enabled systems to track political opponents, suppress dissent, and target marginalized communities on ethnic, religious, or ideological grounds.
What is more, reliance on foreign technologies and expertise also constitutes a risk in the Global South. Over-dependence on US, Chinese, and European innovations can reduce the incentives to build domestic tech sectors in lower-income settings. It can also degrade the bargaining power of local governments, contributing to higher costs for technology while reducing control over standards. Dependence on foreign suppliers can also result in data being more easily accessed, controlled, manipulated, and exploited by foreign actors, raising concerns about privacy, property theft, and the integrity of critical infrastructure.
Emerging solutions
Given all these risks, what are some of the solutions being considered in the Global South? For one, there is a growing chorus for more involvement of developing country governments and experts in the formulation of global standards. This call is echoed in a 2024 UN General Assembly resolution on AI inclusion and the recently agreed Global Digital Compact, which is intended to overcome digital, data, and innovation divides. A consensus is emerging to ensure that AI is more equitable, inclusive, and sensitive to the specific challenges of the Global South.
Practically, more investment in education and vocational training is essential to prepare for coming automation and job losses. This requires developing training centers and online courses, retraining grants, job placement services, and progressive unemployment benefits, as well as universal basic income (UBI) schemes. Promising examples are emerging: India’s AI for All initiative, Rwanda’s digital ambassadors, and Brazil’s Conecta program are all helping people and companies transition to the digital economy. And countries as varied as Kenya, Namibia, and India are piloting UBI, though more action is needed.
Public and private actors will need to dramatically scale investment in digital infrastructure to redress digital divides across the Global South. This includes expanding internet and broadband access to the 2.6 billion people who are still not connected. Policies that promote digital hubs, equitable access to digital services, and low-cost technology programs are essential. One example of how to amplify such activities is the Smart Africa Alliance’s AI for Development (AI4D) program, which is building ethical AI frameworks for governance, agriculture, and healthcare.
Bias and discrimination can be minimized through improved guidance and standards for AI development and deployment. Countries, companies, and digital activists need to craft and enforce regulatory frameworks that mandate algorithmic transparency and regular audits. There are also major opportunities to mandate that the data used to train AI systems be more diverse and representative. The Global Task Force identified close to 700 such strategies, though over two-thirds were formulated in wealthy countries, suggesting that more work needs to be done to close the gap.
Curbing surveillance and privacy violations requires robust data protection and privacy laws to safeguard personal information. The European Union and countries like Brazil, India, Kenya, South Africa, and Tanzania are developing agile regulatory frameworks tailored to their specific realities. There also needs to be clear regulation governing the use of AI for surveillance to minimize invasive practices, as well as public awareness campaigns together with advocacy from civil society for stronger protections.
And reducing over-dependence on foreign technology providers requires investment in local AI policy and research, alongside grants and incentives for local accelerators, start-ups, and labs. International partnerships and collaboration also play a key role, such as programs led by the ITU, UNESCO, and UNDP to upskill lawmakers and civil servants, as do dedicated centers for workforce training and education provided by groups such as Google, Intel, and Microsoft.
Measures to reduce AI risks must address the gaping AI governance divide between the Global North and South. This gap is expressed not just in terms of data scientists and data centers, but also in terms of regulation. A key priority involves stepping up the participation of Global South decision-makers and experts in AI policy development, including in G20 and OECD contexts. Notwithstanding legitimate concerns about regulatory fragmentation, AI governance frameworks also need to be aligned with local contexts. The recently agreed AI Strategy and Digital Transformation Strategy established by the African Union offer promising signposts.
The good news is that a recent UN Resolution on Inclusive AI, the High-Level Panel on AI, and the Global Digital Compact are charting a positive path forward. The compact calls explicitly for more inclusive AI policy development, the standing-up of an independent scientific panel on AI, and a global dialogue to anchor AI in human rights. Perhaps most important, it also recommends the launch of a global fund to support digital infrastructure and skills development. Such a fund will need to make big bets (similar to those set out in private-led initiatives such as the recent fund launched by IBM and BlackRock) if it is to help close the AI governance and capabilities gap.