The right to know: transparency as an AI and human rights issue
Guest blog by Tanya Goodin, CEO at EthicAI, as part of our Human Rights Campaign Week.
A day that celebrates both AI and human rights should begin with a simple principle: people deserve to know when they are dealing with a machine, and when a machine is shaping decisions about their lives.
Transparency isn’t a technical detail or a marketing choice – it’s the foundation that makes other rights possible: fairness, privacy, equality, and the ability to challenge decisions. When AI systems operate invisibly, accountability dissolves. What looks like efficiency quickly becomes unchallengeable power.
This matters because AI is already deeply embedded in everyday systems. Many people use AI-enabled tools daily without realising it. A nationwide US study by Gallup and Telescope found that although almost all adults used at least one product with AI features every week, only 36% were aware those products actually involved AI. When technology disappears into the background like this, meaningful consent becomes impossible.
Across surveys, people are clear: they want to be told when AI is involved. In the UK, the Institute of Practitioners in Advertising found that 75% of people want to be notified when they are not dealing with a real person, and 74% expect brands to be transparent when using AI-generated content. This isn’t about resisting technology; it’s about being informed and educated participants in digital systems.
Concerns become sharper when content creation is involved. A YouGov survey of more than 2,000 UK adults found that 73% were worried about AI-generated online content, particularly because of misinformation. Half of respondents believed labelling AI-generated or digitally altered content could help reduce misinformation. Yet the same research contained a warning: 48% said they wouldn’t trust AI-content labels even if they existed – compared with just 19% who said they would trust them. Transparency that exists only on paper, without visible enforcement or credibility, risks becoming ‘trust theatre’.
Human rights, where AI is concerned, are not only about being informed; they are also about agency. Public attitude research from the Alan Turing Institute shows what people need to feel safe with AI systems: 65% said clear procedures for appealing AI-made decisions would make them more comfortable, while 61% said that understanding how AI systems made decisions about them would increase their comfort. People don’t expect perfection from AI in their lives, but they do expect recourse.
This expectation aligns closely with existing UK legal protections. Under UK GDPR, Article 22 provides specific rights where decisions are made solely by automated means and have legal or similarly significant effects, including the right to request human intervention. The Information Commissioner’s Office (ICO) also makes clear that transparency obligations are higher where automated decision-making is involved, requiring explanations of logic, significance, and likely consequences.
So, the problem isn’t that these rights don’t already exist. It’s that without transparency, people may not know they apply. If AI use is hidden, individuals can’t object, appeal, correct errors, or even recognise when harm has occurred. At that point, opacity itself becomes a human rights issue.
Transparency itself changes behaviour. In the YouGov study cited above, when respondents were shown AI-labelled social media content, 27% said they would block or unfollow the account. Disclosure therefore also creates accountability – legally, socially, and economically. But disengagement isn’t an option for high-stakes decisions involving employment, housing, healthcare, or benefits. In these contexts, transparency must be paired with accessible routes to challenge outcomes.
Transparency must be a minimum human rights standard for AI. If an AI and Human Rights Day is to be more than symbolic, transparency must be treated as a baseline obligation in algorithmic systems: say clearly when AI is being used, explain what it does and its limits, and provide a straightforward way to complain, appeal, and reach a human decision-maker.
The real challenge is not simply declaring that AI systems are transparent, but being able to demonstrate it – which is where AI assurance becomes critical. Independent assessment, documentation, testing, and ongoing monitoring help ensure that transparency is genuinely built into AI models and systems, rather than added as an afterthought.
For users and stakeholders, assurance provides confidence that AI use has been scrutinised, that explanations are meaningful, and that accountability mechanisms actually work. For private and public organisations, it offers a way to show that commitments to transparency are operational, not just symbolic. In a future where AI is increasingly embedded but may be invisible to users, assurance is what turns transparency from a promise into protection.