
How Can African Countries Tackle the Emerging Threat of AI-Generated or Manipulated Mis- and Disinformation in the Context of Elections?

While AI presents great opportunities to achieve sustainable development, peace and democracy, information integrity in elections has become an increasing concern for governments since the emergence of generative AI. According to the International Panel on the Information Environment, AI was used in more than 80% of the elections that took place in 2024. On the African continent, there are already examples of AI-generated or manipulated content being used to spread false narratives on social media platforms during elections. For example, a leaked audio clip depicting leaders of Nigeria’s main opposition party, the People’s Democratic Party, planning to compromise the country’s 2023 election turned out to be a deepfake, i.e. an image, video or audio clip that has been manipulated or generated using AI, and cast a shadow on public trust in democratic processes.
Many governments now stand at a crossroads as they develop appropriate policy and regulatory frameworks for battling the spread of AI-generated or manipulated mis- and disinformation, which can erode public trust and alter electoral outcomes. This raises a question: in a world of emerging AI policy, how ready are African countries to handle the threat of AI use in elections?
After spending four years living in Ethiopia and South Africa, including during Ethiopia’s 2015 general election, and now working on digital policy questions within the United Nations diplomatic community in New York, I am curious to better understand the policy landscape of AI use in elections from an African perspective. As AI increasingly takes center stage in information environments, what policy options are on the table to ensure a healthy online information environment, particularly during socially pivotal moments such as elections, and what can African countries learn, if anything, from existing policy initiatives?
With elections coming up in Malawi (2025), Tanzania (2025) and Kenya (2027), among other African countries, it will be key to enhance resilience against the harmful impacts of AI-generated or manipulated mis- and disinformation. Governments need to balance regulation with the duty to respect international human rights law, including protecting citizens’ rights to freedom of expression and privacy, as well as to recognize the private sector’s responsibility to respect human rights in line with the UN Guiding Principles on Business and Human Rights (UNGPs). While some private sector companies have taken measures to disrupt influence operations aimed at voters, such as OpenAI in Rwanda, one can question whether a voluntary, self-regulatory approach is sufficient, particularly in contexts where foreign companies operate without other regulatory frameworks.
African countries can learn from emerging practices of AI regulation in other regions and countries, as well as from UN initiatives, to gain an overview of different policy options. The EU’s AI Act, China’s AI Law, the UK’s AI Regulation, South Korea’s AI Basic Act and Brazil’s proposed AI Act are all examples of principle-based regulatory frameworks built on foundational principles such as transparency, accountability and human oversight, and some also include a principle of protecting fundamental human rights. The EU’s AI Act, China’s AI Law and Brazil’s proposed AI Act further take a risk-based approach to AI governance, which imposes a graduated scheme of requirements and obligations depending on the level of risk an AI system poses. A number of US states, such as Colorado and California, have also taken steps to introduce risk-based AI regulation and, in the case of California, specifically to combat the spread of AI-generated or manipulated mis- and disinformation, particularly deepfakes, in political contexts. This includes, as in the EU’s AI Act, China’s AI Law and the first UN General Assembly resolution on AI, watermarking or labeling AI-generated or manipulated content. As a more specific regulatory measure, South Korea has, for example, prohibited AI-generated deepfake content in political campaigning within 90 days of an election.
A number of African countries have recently published AI strategies, including Kenya in 2025, and the African Union adopted a continental AI strategy in 2024; both identify AI-enabled election manipulation as a threat to the integrity of democratic processes. Many countries have also implemented data protection and privacy laws. Yet most still lack AI regulatory frameworks for socially pivotal moments such as elections. Policy options for governments, based on existing policy initiatives and the author’s own ideas, include:
- Development of enforceable principle-, risk- and outcomes-based regulatory frameworks that are flexible enough to keep pace with rapid AI advancements and emerging risks, and that are tailored to local contexts. National regulations will ensure that international and local companies act on the same terms, in accordance with national due legal process. Policy options for specific stakeholders include:
  - For political actors and communication teams:
    - Mandate a commitment to the responsible use of AI in political campaigns, including a strict ban on AI-generated or manipulated mis- and disinformation, to safeguard the integrity of electoral processes and promote public accountability. Enforcement should be delegated to, for example, a country’s electoral commission.
    - Mandate disclosure of the use of AI systems in political campaigns, including watermarking or clearly labeling AI-generated or manipulated content, together with information on which parts of the content have been generated or manipulated using AI, and to what extent. This is reflected in, for example, the “Principles and Guidelines for the Use of Digital and Social Media in Elections in Africa” by the Association of African Election Authorities, which states that “the use of AI should be transparent”.
    - Enforce penalties for spreading mis- and disinformation in digital campaigning that are commensurate with the potential risks inflicted, together with a restorative process or action to redress any harm caused.
    - Mandate political teams, including communication staff, to undertake training in digital ethics and transparency for responsible campaigning, for example inspired by the “AI for Electoral Actors” training provided by the International Institute for Democracy and Electoral Assistance (International IDEA).
  - For social media platforms and generative AI product companies:
    - Implement mandatory Know-Your-Customer (KYC) due diligence that recognizes how customers and advertisers can misuse media platforms for disinformation campaigns. Just as KYC is used in the financial industry for anti-money laundering and countering the financing of terrorism (AML/CFT), the approach can be applied in the online information environment to counter disinformation campaigns. Meta, the owner of Facebook, for example, requires identity confirmation from customers who want to run ads about social issues, elections or politics.
    - Implement mandatory human rights due diligence and impact assessments to ensure accountability, as part of companies’ responsibility to uphold the UNGPs and as called for in the Global Digital Compact (GDC), UN resolution A/RES/78/265 and the EU’s AI Act. This is also referenced in, for example, the “Principles and Guidelines for the Use of Digital and Social Media in Elections in Africa” by the Association of African Election Authorities.
    - Implement mandatory fact-checking teams, proportionate in size to the number of impressions generated on the respective platform and proficient in key national languages, as such teams play a crucial role in identifying and addressing mis- and disinformation. This, too, is referenced in the “Principles and Guidelines for the Use of Digital and Social Media in Elections in Africa” by the Association of African Election Authorities. If AI systems are used for automated fact-checking, they must be trained on locally representative data to prevent bias and discrimination, and must meet privacy and personal data protection requirements. Synthetic data, i.e. information created by computer simulations or algorithms that reproduce the structural and statistical properties of real-world data, can enhance the representativeness of datasets in local contexts where data gaps exist.
  - For governments:
    - Promote public education policies that increase media literacy and ensure all citizens have the skills and knowledge to interact safely and critically with online content, which can enhance resilience against the harmful impacts of mis- and disinformation. This includes both short-term local and community-based education initiatives and long-term national curriculum development.
These initiatives can help increase the readiness of African countries with upcoming elections to deal with the threat of AI-generated or manipulated mis- and disinformation, while ensuring that regulations reflect local contexts.
Author: Andreas Fransson
Articles in the “Ideas from the Palaver Tree” collection were co-edited by Selamawit Engida Abdella and Dr. Fola Adeleke