19 Feb 2026
by Professor Ashley Braganza, Dr Elena Abrusci, Dr S Asieh Tabaghdehi

Transparency, Trust, and the Red Lines AI Must Not Cross

Human rights considerations in the technology sector have expanded rapidly in recent years. Where attention once focused on supply chains and data protection, the widespread deployment of AI has shifted today's conversation towards the value chains of automated decision-making and the impact these may have on the enjoyment of a wide range of human rights. As AI systems become embedded, often invisibly, into economic and social infrastructures, transparency is increasingly framed as a prerequisite for trust, legitimacy, and the protection of rights1,2. Yet despite its prominence in policy and regulatory discourse, transparency remains poorly defined and unevenly applied in practice.

Transparency should not be understood as a narrow compliance exercise, but as a responsibility distributed across the AI value chain. Only a distributed approach of this kind, as set out in the UN Guiding Principles on Business and Human Rights, can make transparency effective in protecting human rights3. This raises three interrelated questions: what transparency means at different stages of the AI lifecycle; how it should be communicated to affected actors across the supply chain; and how transparency expectations should vary according to the risk associated with the use and deployment of AI.

First, transparency takes different forms across the AI value chain. For developers, it involves clarity around data provenance, design assumptions, model limitations, and sources of bias or uncertainty4. For organisations deploying AI, transparency is primarily a matter of governance: how systems are implemented, monitored, audited, and reviewed, and where accountability lies when harm occurs. At the societal level, transparency requires that individuals are aware when AI is being used, understand its role in shaping decisions or outcomes, and know what rights, safeguards, or remedies apply. Problems arise when transparency is reduced to technical disclosure rather than treated as a shared obligation across actors. Human rights risks do not arise solely at the point of deployment5; they accumulate across interconnected supply chains, platforms, and institutional arrangements, amplifying their impact on the most vulnerable in society. 

Second, transparency must be communicated in ways that are meaningful. Merely informing individuals that an AI system is in use is rarely sufficient, particularly for those who lack technical expertise, bargaining power, or the resources to contest automated outcomes. Transparency that cannot be understood, questioned, or acted upon offers limited protection6. These concerns are most acute where AI is embedded in essential or high-impact contexts such as recruitment, education, welfare administration, law enforcement, or content moderation. In such cases, individuals may be nominally informed of AI involvement yet have little practical ability to opt out, seek explanation, or request meaningful human review. Transparency risks becoming procedural and performative rather than protective.

This leads to a third question: how should transparency obligations vary according to risk? Not all AI systems pose the same level of human rights concern. Systems that affect access to employment, education, credit, or public services warrant significantly higher standards of transparency, oversight, and accountability than low-risk applications. The European AI Act reflects this logic through its risk-based regulatory framework, recognising that higher-risk systems require stronger safeguards, documentation, and human oversight7.

Certain red lines, however, should not be crossed at any level of risk. Where AI technologies threaten human rights and appropriate mitigation measures cannot be put in place, the technologies should be abandoned altogether, as the Council of Europe Framework Convention on Artificial Intelligence and Human Rights reiterates8. Another instance is when AI systems present themselves as human. As The Rt Revd Dr Steven Croft, Lord Bishop of Oxford, said in his interview on the AI Adoption Podcast9, when AI interactions blur the distinction between human and machine, they risk misleading individuals and eroding trust, with potential consequences for dignity, wellbeing, and autonomy. Individuals have a fundamental right to know when they are interacting with an AI system.

Ultimately, transparency alone is insufficient. Without meaningful choice, avenues for redress, and clearly assigned accountability, transparency can obscure rather than mitigate harm. The challenge ahead is not simply to make AI transparent, but to ensure that transparency is proportionate to risk, intelligible to those affected, and embedded throughout the AI value chain. Only then can transparency serve as a foundation for trust, human rights protection, and responsible innovation as AI becomes ever more deeply woven into social and economic life. 

References

1 Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People: An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707.

2 Yeung, K. (2018). Algorithmic regulation: A critical interrogation. Regulation & Governance, 12(4), 505-523.

3 United Nations (2011). Guiding Principles on Business and Human Rights. https://www.ohchr.org/en/publications/reference-publications/guiding-principles-business-and-human-rights

4 Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86-92.

5 Artificial intelligence and human rights: A business ethical assessment. Business and Human Rights Journal. https://www.cambridge.org/core/journals/business-and-human-rights-journal/article/abs/artificial-intelligence-and-human-rights-a-business-ethical-assessment/33D07AB42FC76A4BA49B03F600186E1B

6 Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989.

7 European Union (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act).

8 Council of Europe (2024). Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. https://rm.coe.int/1680afae3c

9 AI Adoption Podcast. Interview with The Rt Revd Dr Steven Croft, Lord Bishop of Oxford. https://open.spotify.com/episode/17TBAjpLwVbDRVuM2x4L2k?si=rB7HjN5DRa6orIbE_DiTtg


Authors

Professor Ashley Braganza
Brunel University

Dr Elena Abrusci
Brunel University

Dr S Asieh Tabaghdehi
Brunel University