19 Feb 2026
by Andrew Burgess

Public trust in human rights and AI

Trust is a theme that runs like a golden thread through the Human Rights Act – it is fundamental to its operation and success. Similarly, without public trust in Artificial Intelligence (AI), as well as in its developers and implementers, the technology will never realise its full potential with its users – citizens and employees.

However, public trust in AI is at an all-time low. For the vast majority of people – who now have some of the most powerful models available in the palm of their hands – the terms ‘AI’ and ‘chatbot’ are conflated, creating a narrow view of a much broader and richer field. This lack of understanding has created an industry that resembles an ouroboros – a serpent eating its own tail.

But building public trust in AI can be a complex and risky process, and efforts to do so can erode trust further before it can be rebuilt. Dario Amodei, CEO of Anthropic, argues that we are in a ‘technological adolescence’ and must publicise civilisational risks to force governments to legislate. While this transparency is rightly intended to build safety, its immediate effect can be to scare the public, fuelling a ‘doomerism’ that makes the technology feel like an existential threat rather than a tool.

AI is also frequently used as a scapegoat for broader societal anxieties. We hear constant warnings of an ‘AI jobpocalypse’, yet analysis of the labour market suggests the cooling of employment in knowledge-based roles since 2022 is more closely linked to interest rate hikes and tax changes than to algorithmic replacement. When companies link layoffs to AI – whether to seem ‘forward-thinking’ to investors or to divert attention from deeper issues – they inadvertently fuel public fear.

To reverse this erosion of trust in AI, the UK needs a strategy based on education, positive action, and a commitment to human rights.

  1. Education and ‘Digital Resistance’: We cannot rebuild trust without a population that understands the tools they are using. The school curriculum has to embrace ideas such as ‘digital resistance’, which advocates for a cautious and ethical approach to AI literacy. Rather than just teaching technical skills, ‘digital resistance’ focuses on building resilience and ‘digital judgment’ in students to empower them against specific risks like algorithmic manipulation, digital addiction, and synthetic deception. 

  2. A Rights-Based Approach: Trust must be anchored in the European Convention on Human Rights (ECHR), moving beyond voluntary ‘ethical frameworks’ that offer no legal recourse for the individual. When, for example, opaque algorithms are deployed to determine welfare eligibility or assist in policing, they often operate as ‘black boxes’ that threaten the right to a private life (Article 8) and the right to a fair trial (Article 6). Without algorithmic transparency, a citizen cannot effectively challenge a decision that affects their liberty or livelihood, rendering these fundamental protections toothless. The constant ‘chipping away’ at these rights by political entities and public organisations (often under the guise of ‘efficiency’) must be challenged at every opportunity to ensure technology serves the citizen rather than the state. A robust governance framework should ensure that AI deployment is legally compliant, contestable, and tethered to the rule of law.

  3. AI-for-Good: Finally, trust is built through tangible benefits that improve the human condition, rather than through abstract promises or over-zealous marketing. Relentlessly championing initiatives like the NHS early-warning system for patient safety demonstrates how AI can proactively identify health risks, shift healthcare from reactive to preventative, and save lives. Similarly, citizen-led initiatives like Prompt Action highlight how AI can be a valuable and caring force when it is designed to empower the community rather than just extract data. By focusing on these high-impact, human-centric applications, the sector can pivot away from the ‘ouroboros’ of self-serving growth and prove that AI can be a tool for social good. Rebuilding public confidence requires this shift toward positive action, proving the technology’s worth through its ability to solve systemic problems without compromising ethical standards.

The path to rebuilding trust isn't through more marketing, but through relevant education for all groups, positive action, and a governance framework that puts human rights before corporate hype. 

Author

Andrew Burgess

Founder, GreenhouseAI

Andrew is an AI strategist, ethicist, author and speaker with over 30 years’ experience. He advises organisations on their AI strategy, its application in business and its ethics. He is the author of ‘The Executive Guide to Artificial Intelligence’.