
The Future of Global Health: Embracing a minimum regulatory floor for AI Governance in Africa
The 2026 Consortium of Universities for Global Health conference took as its theme The Future of Global Health. As a Commissioner and lead of the governance workstream within the Lancet Commission on AI and HIV (the Commission), I had the opportunity, alongside other Commissioners, to present lessons learned so far from the Commission's work on incorporating AI into HIV programming. In our panel, Solange Baptiste, who leads the Commission's work on Communities and AI, made the case for centering equity in AI interventions within HIV programs, and Jirair Ratevosian discussed the idea of AI as a global commons and the necessity of collective stewardship.
These ideas matter because AI is already making its way into HIV programs, and it's not hard to see why. Chatbots are being designed to deliver prevention information in local languages, predictive models are flagging patients at risk of falling out of care, and analytical tools are being developed for public health analysis. This adoption shows potential but lacks robust monitoring, regulatory oversight, and equity safeguards. My remarks at the conference focused on the governance questions these tools raise: the use of terms and conditions to obtain consent that individuals may not meaningfully understand, cross-border transfers of data, secondary use of data for public health analysis, and automated decision-making in the harm-flagging systems embedded in some of these tools.
AI interventions in HIV programs are entering one of the most heavily regulated, politically contested, and community-driven arenas in global health. The HIV response was built on a set of principles that took decades and enormous sacrifice to establish: nothing about us without us. In practice, this means accountability, transparency, and the right to redress. Any technology that enters this space must also meet those same standards, or it risks dismantling the very architecture it claims to improve.
As background research for the Commission, the African Observatory on Responsible AI conducted a landscape study of the 15 countries with the highest HIV rates, all of which are in Africa. The study assessed each country's AI governance and regulatory infrastructure to map the current landscape and identify gaps, with actionable recommendations.
Working with researchers Sharon Essilfie and Rose Njuguna on the country case studies, we found that while African countries have growing digital health ambitions, expressed through ICT or digital health strategies, health-sector-specific AI frameworks remain nascent or absent, limiting the safe, equitable, and scalable use of AI in HIV programmes. Data protection laws serve as de facto frameworks, but AI-specific provisions on secondary data use, algorithmic accountability, and cross-border data transfers for AI model training are largely missing.
Further, there are capacity constraints in the national authorities and ethics committees assessing AI tools as medical devices. Oversight structures are also fragmented, with various health agencies claiming responsibility for different areas of the AI ecosystem. Ministries of Health, data protection authorities, ICT regulators, medicines agencies, and newly established AI councils all have overlapping but incomplete mandates. Enforcement ends up shared, which can work, but only if coordination is deliberate and mandates are clear. For example, in South Africa, the Health Products Regulatory Authority will likely have a say in approving AI tools as medical devices; the Competition Commission plays a role in consumer protection and in addressing restrictive business practices by innovators; and the Information Regulator will be responsible for policing data processing within such devices.
While there have been notable actions by international actors such as the WHO on AI-based software as a medical device, this guidance cannot directly substitute for national regulatory capacity. International frameworks help extend transnational governance to AI tools whose value chains span jurisdictions, but given that many of these frameworks are non-binding, enforceable national mechanisms are also needed.
Our findings show that urgent priorities include developing health-sector-specific AI policies and regulatory pathways for AI tools as medical devices. There is also a need to strengthen data governance to address secondary use, cross-border flows of health data, interoperability, and algorithmic bias.
A minimum regulatory floor is necessary to bridge these governance gaps and move countries beyond fragmented pilots toward responsible AI scaling in HIV care. AI's value in the HIV response is entirely conditional: it depends on whether the governance, regulation, and community oversight structures surrounding it are strong enough to keep the trust of the communities it aims to protect.
The Commission's report will be launched by 1 December 2026, World AIDS Day. This day is dedicated to raising awareness about HIV and is an important reminder to always centre the human in health care interventions and in the technology we deploy to aid this mission. That means establishing human rights-based governance frameworks that promote accountability, transparency, and redress. We hope that all interested stakeholders will engage with the report's findings and use their sphere of influence to implement its recommendations and shape the future of AI use in HIV care in resource-constrained settings.
