AI IN AFRICA: REGULATIONS ARE CATCHING UP AND THE COURTS ARE ALREADY THERE

By Danielle van Deventer

Artificial intelligence is no longer a future concept in Africa. It is here, it is being used, and it is now reshaping regulatory agendas and courtroom practice across the continent.

Over the last two years, we have seen a decisive shift in how African governments approach AI. What was once discussed in policy workshops and position papers has moved firmly into national strategies, draft legislation, and judicial scrutiny.

A major catalyst for this shift was the African Union’s Continental AI Strategy, adopted in July 2024. This strategy provides a unified roadmap for AI development across Africa for the period 2025 to 2030. It sets out a shared framework for innovation, data governance, risk mitigation, skills development, and investment, and it now operates as an umbrella under which national AI approaches are being developed or aligned.

In practical terms, this means that African states are no longer regulating AI in isolation. Instead, they are increasingly coordinating their approaches, ensuring a degree of consistency across member states while still responding to local priorities.

National AI Strategies: Momentum across the continent

At a national level, the pace of development has been striking.

Kenya has taken a leading role, launching its National AI Strategy in March 2025, positioning AI as a key driver of economic growth and public sector reform. Zambia followed closely behind, rolling out its own strategy in late 2024, developed with support from the United Nations and the Tony Blair Institute.

In Central Africa, Cameroon unveiled an ambitious National AI Strategy in 2025, with a strong emphasis on local language models and sovereign AI development.

South Africa, meanwhile, has taken its first formal steps toward comprehensive AI legislation: having published its Draft AI Policy Framework in October 2024, it is now on the verge of circulating a draft AI policy for public comment. While this is not yet an AI Act, it clearly signals the direction of travel and would place South Africa among the first African jurisdictions with a fully developed AI regulatory regime.

AI governance in South Africa has also been reinforced through the new King V Code, which explicitly positions artificial intelligence within corporate governance structures. Principle 10 requires boards to exercise oversight over data, information, technology and AI, embedding AI governance at board level and reinforcing the broader regulatory trajectory.

Elsewhere on the continent, Ghana has finalised its National AI Strategy, with enabling legislation now anticipated, while countries such as Mozambique and Mauritius are progressing through structured consultation processes to shape their own frameworks.

Taken together, these developments reflect a continent actively building the regulatory scaffolding needed to adopt AI responsibly, competitively, and with accountability.

Policy Meets Practice: AI in the Courts

This policy landscape forms important context for what is already happening in African courts.

Across the continent, judges are being confronted with artificial intelligence not as a theoretical construct, but through real disputes and real misconduct that test professional ethics and the integrity of judicial processes.

In South Africa, the most prominent example remains the Pietermaritzburg High Court decision in Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others. There, Judge Bezuidenhout refused leave to appeal after discovering that seven of the nine authorities cited in the applicant’s submissions were entirely non-existent, apparently generated by an AI tool.

The court described the conduct as “irresponsible and downright unprofessional” and referred the matter to the Legal Practice Council. It remains the clearest example to date of AI hallucinations contaminating court filings in Africa.

Crucially, however, Mavundla was not an isolated incident.

In Northbound Processing (Pty) Ltd v South African Diamond and Precious Metals Regulator and Others, decided on 30 June 2025, Smit AJ delivered another early judicial warning about the misuse of AI in legal practice. Several authorities cited in the applicant’s heads of argument were again found to be fictitious. Upon investigation, the court determined that the fabricated citations had been generated by an AI tool known as “Legal Genius”.

Counsel conceded that time pressure and inadequate verification, rather than bad faith, had led to the inclusion of the incorrect authorities. The court nonetheless made its position unambiguous: such conduct is unacceptable.

The judicial message has been consistent and increasingly firm: lawyers may use AI tools, but they remain personally responsible for every authority they cite.

Persuasive writing is no substitute for authentic sources, and AI outputs must always be independently verified.

These judgments are shaping the ethical baseline for AI use in legal practice in South Africa.

A Continental Pattern

South Africa is not alone in confronting these issues.

In Kenya, the judiciary reached a similar inflection point in 2025 after submissions were filed containing entirely fabricated authorities. A Supreme Court judge publicly warned both judges and practitioners against using generative AI tools in court proceedings until formal guidelines are in place, citing the reputational and procedural risks when AI-generated errors make their way onto the record.

In Ghana, legal commentary has echoed the same concerns, warning that the use of unverified AI content in court could amount to professional misconduct, deception of the court, and potentially even negligence.

What is emerging is a recognisably global pattern. Courts, often ahead of legislators, are setting practical rules for AI through the cases that come before them.

The Takeaway

Across the continent, the message from both policymakers and judges is consistent. AI is here to stay, and it can be a powerful tool when used properly. But it does not replace professional judgment, ethical obligations, or fundamental duties to clients and the courts.

As Africa’s AI regulations continue to take shape, these early court decisions offer a clear preview of how responsibility will be assessed in practice.

The technology may be artificial, but accountability remains entirely human.
