In the past two years, the global spotlight on generative AI (GenAI) has intensified. From chatbots that mimic human conversation to tools that automate the creation of complex content, GenAI technologies are reshaping how we engage with information, make decisions, and deliver services. But amid this rapid global uptake, a critical question lingers: what value do these technologies hold for low- and middle-income countries (LMICs), where development challenges are most acute—and where the infrastructure, data, and safeguards for GenAI remain limited?

A new report from the Global Center on AI Governance, The Role of Generative AI in Addressing Development Challenges in LMICs, produced for the Gates Foundation, takes up this question. Drawing on insights from 50 projects funded under the Foundation’s Catalyzing Equitable AI Use Grand Challenges program, the report offers a grounded and nuanced exploration of the opportunities—and limitations—of GenAI for development.

At its core, the report addresses three critical questions:

  • Can GenAI meaningfully address socio-economic challenges in LMICs?
  • Is GenAI safe and culturally appropriate for use in these contexts?
  • What are the barriers to equitable, inclusive, and responsible GenAI use?

GenAI for Development: Real Potential, Conditional Promise

Under the right conditions, GenAI tools can be powerful enablers in LMIC contexts. From improving diagnostic tools in overstretched health systems to offering multilingual financial education chatbots, the use cases documented in the report show the creativity and ambition of innovators across Africa, Asia, and South America.

In sectors like healthcare and agriculture, GenAI was used to generate contextually relevant information, improve frontline decision-making, and expand access for marginalized populations. For example, some projects used speech-to-text GenAI tools to reach communities with low literacy, while others tested chatbots that offered confidential advice on sensitive issues like sexual and reproductive health.

Yet this potential is conditional. As the report makes clear, realizing the benefits of GenAI in LMICs hinges on addressing core structural challenges: limited access to compute, poor data infrastructure, and the absence of policy frameworks that can ensure ethical and equitable AI deployment.

Infrastructure and Inclusion: Uneven Ground

The report highlights the stark digital divides shaping GenAI development globally. While the promise of AI-powered tools lies in their scalability and adaptability, most LMICs lack access to affordable compute power, reliable internet, and large, high-quality local datasets—conditions essential for GenAI models to function safely and effectively.

Even more pressing are the language barriers and cultural mismatches embedded in many foundational models. Many projects struggled to adapt GenAI tools trained on English-language data from the Global North, facing significant issues with bias, hallucination, and accuracy. This underlines the urgent need to build and invest in models that work with low-resourced languages and reflect local norms, knowledge, and realities.

Gender and Ethics: From Afterthought to Design Principle

A standout feature of the report is its focus on ethical, inclusive AI design. With support from a distributed Network of Experts, grantees were encouraged to integrate gender equity and ethical considerations throughout their project lifecycles. Tools like a Privacy-by-Design Checklist and an Intersectionality Matrix were developed to support innovators in embedding these principles from the outset.

Still, the report notes uneven uptake. While some projects demonstrated sophisticated, community-informed approaches to gender and ethics, others struggled to move beyond compliance-oriented checklists. This signals a broader challenge in the AI for development space: how to translate principles into practice in resource-constrained, fast-moving contexts.

Recommendations for a Path Forward

The report doesn’t simply catalogue challenges—it offers a pragmatic roadmap for action. Key recommendations include:

  • Donors should fund the development of locally relevant datasets, provide access to compute, and support long-term capacity building in AI ethics and governance.
  • Innovators need tools and guidance to integrate gender, ethical, and cultural considerations throughout product design, not just at the end.
  • Governments and policymakers must prioritize the creation of regulatory frameworks that enable innovation while protecting rights and promoting equity.

The Role of Generative AI in Addressing Development Challenges in LMICs is an important and timely contribution to the global conversation on AI for development. It tempers the hype surrounding GenAI with grounded evidence from the field—evidence that underscores both the incredible potential and the deep complexity of deploying these tools in LMICs.

Ultimately, the report calls for a shift in mindset. It’s not enough to transfer GenAI tools developed in Silicon Valley into low-resource contexts. What’s needed is a model of innovation that is locally led, ethically grounded, and globally supported—one that values the lived realities of users in LMICs and sees them not as passive recipients of technology but as co-creators of its future.

Download the report here

Acknowledgements

Deep gratitude to Kelle Howson and the team at the Gates Foundation for their insights and support.

We also thank the principal investigators and their teams from the 2023 Grand Challenges cohort, whose innovative work and engagement shaped the findings of this report.
