Dickson Marfo Fosu, a researcher at the Responsible AI Lab (RAIL), contributed to a critical dialogue on AI governance in the Global South, participating in a webinar hosted by the Institute for AI Policy & Governance (AIPG) titled “Governing AI: Making the Case for Africa’s AI Tools & Risk Registry.”
The session convened researchers, policymakers, and practitioners to explore frameworks for effective, context-sensitive AI governance in Africa and beyond. Dickson provided key insights on a foundational issue: AI infrastructure sovereignty.
A central theme of Dickson’s intervention was the need to move beyond rhetoric. He argued that AI infrastructure sovereignty must be measurable, not merely declared. This entails developing tangible metrics and tools to assess control over the entire AI stack: data storage, compute infrastructure, supply chains, and technical autonomy. This approach reframes sovereignty as an ongoing, evidence-based practice crucial for genuine self-determination in the digital age.
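To make “measurable” concrete, one could imagine scoring control layer by layer across that stack. The Python sketch below is purely illustrative: the layer names come from the discussion above, but the `StackLayer` structure, the binary control flag, and the scoring rule are hypothetical, not a framework proposed at the webinar.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical representation of the stack layers named above:
# data storage, compute infrastructure, supply chains, technical autonomy.
@dataclass
class StackLayer:
    name: str
    domestically_controlled: bool          # is this layer under local control?
    foreign_dependency: Optional[str] = None  # e.g. an external cloud provider

def sovereignty_score(layers: list) -> float:
    """Fraction of stack layers under domestic control (0.0 to 1.0)."""
    if not layers:
        return 0.0
    controlled = sum(1 for layer in layers if layer.domestically_controlled)
    return controlled / len(layers)

stack = [
    StackLayer("data storage", True),
    StackLayer("compute infrastructure", False, "foreign cloud provider"),
    StackLayer("supply chain", False, "imported accelerators"),
    StackLayer("technical autonomy", True),
]
print(f"Sovereignty score: {sovereignty_score(stack):.2f}")  # prints 0.50
```

Even a crude score like this turns a declaration of sovereignty into something that can be tracked over time, which is the shift from rhetoric to evidence that Dickson called for.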

The panel identified a core obstacle to effective governance: a pervasive “AI visibility gap” within public institutions. Governments often lack answers to basic questions:
- Which AI systems are deployed across ministries and agencies?
- Where do they operate, and for what purpose?
- How were they acquired—through vendor contracts, donor projects, or internal development?
- What specific risks do they pose in our local context?
Participants highlighted how AI systems enter the public sector through fragmented channels, such as isolated donor-funded pilots, vendor-led installations, and agency-level procurement, creating a landscape of opacity that undermines oversight and accountability.
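The panel’s questions map naturally onto the fields of a registry record. The sketch below is a hypothetical illustration of that mapping; the field names and the example entry are invented for illustration, not drawn from any existing registry.

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring the panel's four questions: which system,
# where and why it operates, how it was acquired, and what local risks it poses.
@dataclass
class RegistryEntry:
    system_name: str
    deploying_agency: str
    purpose: str
    acquisition_channel: str  # e.g. "vendor contract", "donor project", "internal"
    local_risks: list = field(default_factory=list)

# Invented example entry, for illustration only.
entry = RegistryEntry(
    system_name="benefit-eligibility screener",
    deploying_agency="Ministry of Social Welfare",
    purpose="triage of social-benefit applications",
    acquisition_channel="donor project",
    local_risks=["training data not representative of the local population"],
)
```

A real registry would need far more, such as provenance, versioning, and links to risk assessments, but the point is that the panel’s four questions already define a minimal schema.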
The discussion drew lessons from early AI registry models in cities like Amsterdam and Helsinki. While these registries demonstrated the value of transparency, they also revealed pitfalls, such as becoming static repositories disconnected from active risk management.
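One way to guard against that static-repository pitfall is to bind every registry entry to a recurring risk review, so the registry itself flags systems whose assessments have lapsed. The sketch below assumes a six-month review cycle; the interval, function, and entry names are all hypothetical.

```python
from datetime import date, timedelta

# Assumed six-month review cycle; the interval is hypothetical.
REVIEW_INTERVAL = timedelta(days=180)

def overdue_entries(last_reviews: dict, today: date) -> list:
    """Return systems whose most recent risk review is older than the cycle."""
    return [name for name, last_review in last_reviews.items()
            if today - last_review > REVIEW_INTERVAL]

# Invented entries and review dates, for illustration only.
last_reviews = {
    "benefit-eligibility screener": date(2023, 1, 15),
    "traffic-camera analytics": date(2024, 5, 2),
}
print(overdue_entries(last_reviews, date(2024, 9, 1)))
# ['benefit-eligibility screener']
```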
Dickson emphasised that governance failures often stem not only from algorithmic bias but also from hidden dependencies on foreign cloud services, computing infrastructure, and internet connectivity. This expands the risk landscape to include digital sovereignty and resilience.
A key consensus emerged: if AI systems cannot be seen, mapped, and understood, they cannot be effectively governed. The panel concluded that context-aware AI tools and dynamic risk registries are a vital first step toward translating governance principles into actionable, enforceable oversight.