The right to an explanation nobody can give
AI transparency is an increasingly important topic, and the conversation often starts with a reasonable question: do people know that AI is being used in decisions affecting their lives? That question matters, but I believe a more pressing one sits behind it: what are the consequences when the organisation using AI cannot explain how the system arrived at its output?
In previous articles, I have written about the ethical structures shaping AI adoption in public services and the need for transparency and accountability. While these principles remain vital, in practice, transparency often amounts to a simple notification that AI is involved, failing to address the core problem: a lack of understanding of how AI systems reach their outcomes.
Over the past five years, Modular Data has evaluated data and AI platforms in the UK public sector, and we have found the same gap again and again. Systems described as transparent and well-governed lack the technical visibility necessary for true explainability. Operational staff cannot trace how outputs are derived, and governance logic remains inaccessible. Professionals making consequential decisions, such as assessing risk, allocating resources, and shaping someone’s future, cannot understand or interrogate the information behind the outputs on their screens.
What results is institutional opacity, and this is a human rights problem. When someone is assessed, scored, or categorised by a process the organisation cannot explain, their right to a reasoned and challengeable decision is undermined. This opacity is not usually intentional; the system was simply not built with explainability as a key design requirement. The effect on the person at the end of the chain is the same, regardless of intent.
Institutional opacity corrodes the relationship between an organisation and the people it serves. When someone cannot grasp how an institution reaches its conclusions, they lose the ability to engage with it meaningfully. They cannot question, prepare, or advocate for themselves. They are forced to participate in a process without knowing its rules, often when the stakes are highest. Those already in vulnerable positions, such as people navigating welfare, healthcare, housing, or justice systems, bear the greatest burden. The institution may have published values of fairness and transparency, but the actual procedures and outcomes may tell a different story. This gap between declared intent and lived experience is where trust collapses.
Much of this opacity stems from the supply chain. Public sector AI does not come from one team or organisation. It spans vendors, subcontractors, platform providers, and integration partners. Each layer adds abstraction. By the time an output reaches a frontline worker or citizen, the decision path has passed through so many hands that no party can trace the whole journey. The person most affected has the least visibility. The UK's new Office for Responsible Business Conduct and evolving European obligations are starting to address this. However, regulation assumes a self-knowledge that many deploying organisations do not have.
Through years of independent evaluations and remediation work on government systems, Modular Data has seen that explainability, traceability, and accountability are still treated as afterthoughts rather than foundations. Our goal is to help organisations adopt AI and advanced capabilities on solid foundations by providing guidance rooted in practical experience. This is why we now also build platforms with governance embedded from the start, so that every piece of information supporting a decision can be traced to its source and system operators can give genuine explanations of their outputs. UNESCO's Recommendation on the Ethics of Artificial Intelligence makes this clear: public sector AI must be interpretable, and decision-making must remain accountable to humans. This demands genuine architectural commitment.
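To make source-level traceability concrete, here is a minimal sketch in Python of the kind of record such a platform might keep: each piece of evidence carries its origin and processing history, so an operator can generate a plain-language explanation on demand. The names and fields here (DecisionRecord, Evidence, and the housing example) are illustrative assumptions, not a description of any particular product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Evidence:
    """One piece of information used in a decision, tied to its origin."""
    value: str             # the fact or score that was used
    source_system: str     # where it came from (e.g. a case-management system)
    retrieved_at: datetime  # when it was obtained
    transformation: str    # how it was processed before reaching the decision

@dataclass
class DecisionRecord:
    """An output plus the full chain of evidence behind it."""
    outcome: str
    model_version: str
    evidence: list[Evidence] = field(default_factory=list)

    def explain(self) -> str:
        """Produce a plain-language trace an operator could share with a citizen."""
        lines = [f"Outcome: {self.outcome} (model {self.model_version})"]
        for e in self.evidence:
            lines.append(
                f"- {e.value} | source: {e.source_system} | "
                f"retrieved: {e.retrieved_at:%Y-%m-%d} | processing: {e.transformation}"
            )
        return "\n".join(lines)

# Hypothetical example: a housing-priority assessment traced back to its inputs
record = DecisionRecord(
    outcome="Priority band B",
    model_version="2024.3",
    evidence=[
        Evidence("Household size: 4", "Housing register",
                 datetime.now(timezone.utc), "used as-is"),
        Evidence("Risk score: 0.62", "Vulnerability model v7",
                 datetime.now(timezone.utc), "compared against 0.5 threshold"),
    ],
)
print(record.explain())
```

The point of the sketch is not the code itself but the design choice it represents: provenance is captured at the moment a decision is made, rather than reconstructed afterwards from logs scattered across a supply chain.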
General confidence in AI remains low, and for good reason. People sense that the systems shaping their lives are opaque, even to those operating them. Rebuilding that trust will take more than disclosure notices and privacy policies – it will require a fundamental change in how public sector technology is conceived. Leaders, technologists, and decision makers must visibly and concretely commit to transparency beyond surface-level disclosure.
Explainability must become the default in every AI-driven decision – only then can we truly protect the rights and interests of the people we serve.