19 Feb 2026
by Gwendoline Grollier

When Personalisation Becomes Manipulation: AI, Autonomy, and the Rights You Cannot See

In May 2025, a study published in Nature Human Behaviour found that when an AI model was given access to six sociodemographic attributes about a debate participant (gender, age, ethnicity, education, employment status, and political affiliation), its odds of achieving higher post-debate agreement in structured debates rose by 81.2% relative to a comparable baseline. Without personalisation, AI was already more convincing than human opponents in many settings. With personalisation, the gap widened markedly. The researchers warned that malicious actors could plausibly achieve even stronger effects using the far richer behavioural data available through social media and digital traces.

This finding crystallises a question that sits at the heart of the AI and human rights debate: where does personalisation end and manipulation begin? 

We are already surrounded by systems that tailor experiences to us. Streaming platforms recommend what we watch. Retailers adjust the prices we see. Job platforms filter opportunities. Health apps nudge our behaviour. In isolation, each feels like convenience. Taken together, they form an environment in which our choices are quietly shaped by systems we did not choose, cannot see, and do not understand. 

The human rights dimension is not abstract. Autonomy, non-discrimination, privacy, and equal access to information are all engaged when AI systems profile individuals and adapt what they see, what they are offered, and how they are addressed. When a system knows enough about a person to tailor its approach for maximum effect, and the person knows nothing about the system, the relationship is no longer one of service. It is one of asymmetric power. 

Regulators are starting to respond. The EU's AI Act explicitly prohibits AI systems that deploy subliminal techniques or manipulative or deceptive techniques that materially distort behaviour and cause (or are likely to cause) significant harm, as well as systems that exploit vulnerabilities in ways that lead to such harm. The EU's forthcoming Digital Fairness Act initiative, with a legislative proposal widely expected by the end of 2026, is intended to address unfair personalisation practices that exploit consumer vulnerabilities, including personalised pricing, addictive design, and dark patterns. In the UK, the Digital Markets, Competition and Consumers Act 2024 gave the CMA strengthened direct consumer enforcement powers, with key provisions coming into force in April 2025. UK regulators, including the CMA, have also highlighted the risk that AI-driven systems may amplify unfair or manipulative online practices. 

Yet enforcement is struggling to keep pace. AI-powered personalisation now often operates through optimisation systems that continuously test, adapt, and refine in real time. These are not static designs that can be audited once. They are living systems that evolve with every interaction, learning what works on each individual. The line between a helpful recommendation and a manipulative nudge shifts with each iteration, and the person on the receiving end will rarely know the difference.

This matters beyond consumer protection. When the same techniques are applied to political communication, welfare interactions, or content moderation, the stakes extend to democratic participation itself. The World Economic Forum's Global Risks Report 2025 ranks misinformation and disinformation among the leading short-term global risks. Personalised persuasion at scale is not a theoretical concern. It is an operational one. 

For organisations developing and deploying AI, this is both a governance challenge and an assurance opportunity. Testing whether a system personalises or manipulates requires examining the incentive structures, optimisation targets, and feedback loops that shape how it behaves over time. Independent assurance is not a constraint on innovation. It is a condition of responsible deployment. 

The central insight is simple: AI systems now understand individuals better than those individuals understand the systems. When that asymmetry is used to serve people, it is personalisation. When it is used to steer them, it is manipulation. The difference is not always visible to the person affected, which is precisely why it is a human rights concern. 

Autonomy, including the autonomy to make informed decisions, is a human right. Systems designed to circumvent that autonomy require scrutiny, transparency, and meaningful accountability. 

Gwendoline Grollier is the cofounder of T3, specialising in AI testing, assurance, and responsible innovation | For TechUK Human Rights Campaign Week 2026 
