19 Feb 2026
by Ivana Bartoletti

Agentic AI and human rights: transparency, control and accountability in the age of autonomous systems

Artificial intelligence agents are no longer theoretical. They are being built, deployed and integrated into business processes at speed. 

In computing, an agent is a software system that can carry out processes or tasks with varying degrees of autonomy. When large language models or foundation models are scaffolded with tools, such as search engines, databases, calendars or payment systems, they become “agentic”. They can plan, reason, take actions and iterate towards a goal with limited human input. 
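As a rough illustration, the loop behind most agents is simple: ask the model for a next step, carry it out with a tool, feed the result back and repeat until the goal is reached or a step budget runs out. The sketch below is a minimal, hypothetical version in Python; the model call and tools are stand-ins rather than any real product's API.

```python
# Illustrative only: a minimal plan-act-observe loop with a stubbed model.
# The model call and tool set are hypothetical stand-ins, not a real API.

def call_model(goal: str, history: list[str]) -> dict:
    """Stand-in for a foundation model call that proposes the next step."""
    if not history:
        return {"action": "search", "input": goal}
    return {"action": "finish", "input": "draft summary of findings"}

TOOLS = {
    "search": lambda query: f"top results for '{query}'",     # e.g. a search engine
    "calendar": lambda request: f"slot booked: {request}",     # e.g. a calendar API
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):              # bounded iteration, not open-ended autonomy
        step = call_model(goal, history)
        if step["action"] == "finish":
            return step["input"]
        observation = TOOLS[step["action"]](step["input"])
        history.append(observation)         # the agent iterates on its own outputs
    return "stopped: step budget exhausted"

print(run_agent("summarise recent guidance on AI agents"))
```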

The use cases are expanding rapidly. Agents can conduct research across multiple sources and synthesise findings. They can write and debug code. They can plan projects, manage workflows and execute transactions. In commercial settings, they are being tested for procurement, customer support, compliance monitoring and financial operations. The attraction is clear: speed, scale and the ability to coordinate tasks across systems without constant supervision. 

Yet with increased autonomy comes increased risk. 

Agentic systems can misinterpret goals or optimise for the wrong outcome, a problem often described as misalignment. They may manipulate tools in unintended ways, particularly when connected to external systems. They can generate cascading hallucinations, where one incorrect output becomes the basis for further flawed actions. When agents access multiple datasets and systems, the risk of privacy breaches increases. And because they act across steps rather than delivering a single output, errors can compound before they are detected. 

At the centre of these risks lies a fundamental human rights issue: transparency and control. 

Transparency in the age of agents is not simply about publishing model cards or high-level explanations. It is about making clear when an agent is acting, what authority it has, what data it can access and what consequences may follow from its actions. Individuals affected by those actions, whether customers, employees or citizens, must be able to understand how decisions were made and how to challenge them. 
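One way to make that concrete is a machine-readable record, declared before an agent is deployed, of who it acts for, what it may do, what data it may read and how its decisions can be contested. The sketch below is illustrative only; the field names are assumptions, not an established schema.

```python
from dataclasses import dataclass

# Illustrative sketch of an "agent manifest": a declared record of the agent's
# authority, data access and routes of challenge. Field names are hypothetical.

@dataclass
class AgentManifest:
    name: str
    acts_on_behalf_of: str            # the accountable deployer
    permitted_actions: list[str]      # the authority it has
    data_sources: list[str]           # what data it can access
    possible_consequences: list[str]  # what may follow from its actions
    contact_for_challenge: str        # how affected people can contest a decision

support_agent = AgentManifest(
    name="refund-triage-agent",
    acts_on_behalf_of="Customer Operations",
    permitted_actions=["read ticket", "propose refund", "escalate to human"],
    data_sources=["order history", "ticket text"],
    possible_consequences=["refund issued or declined"],
    contact_for_challenge="appeals@example.com",
)
```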

Human oversight must therefore be deliberate and structured. It does not mean passively observing outputs. It means defining significant checkpoints and action boundaries that require human approval. Where an agent’s decision may materially affect a person’s rights, livelihood, access to services or reputation, human authorisation should be mandatory. 

Approval processes should also be meaningful. Presenting a reviewer with pages of raw logs is not oversight. If an agent proposes to suspend an account, deny a benefit or execute a transaction, the human reviewer should be shown the anticipated impact and the basis for the recommendation. Oversight must focus on consequences, not just data. 
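A hedged sketch of what such a checkpoint might look like in code: high-impact actions are held by default, and the reviewer is shown the anticipated impact and the basis for the recommendation rather than raw logs. The action names and approval step are hypothetical, not a prescribed implementation.

```python
# Illustrative sketch, not a production control: high-impact actions are withheld
# until a human approves them, and the reviewer sees consequences, not just data.

HIGH_IMPACT = {"suspend_account", "deny_benefit", "execute_payment"}

def request_human_approval(summary: dict) -> bool:
    """Stand-in for an approval UI or ticketing step."""
    print("REVIEW REQUIRED:", summary)
    return False   # default-deny until a named reviewer authorises the action

def execute(action: str, target: str, basis: str, anticipated_impact: str) -> str:
    if action in HIGH_IMPACT:
        approved = request_human_approval({
            "action": action,
            "target": target,
            "anticipated_impact": anticipated_impact,   # consequences for the person affected
            "basis": basis,                             # why the agent recommends it
        })
        if not approved:
            return f"{action} withheld pending human authorisation"
    return f"{action} performed on {target}"

print(execute(
    action="suspend_account",
    target="customer-4821",
    basis="three chargebacks flagged in 30 days",
    anticipated_impact="customer loses access to paid services",
))
```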

A human rights approach also demands privacy safeguards embedded from the outset. Agents should operate with strict data minimisation, controlled access and clear purpose limitation. Deployment should be gradual, beginning in areas where the potential for harm is low. Scaling should follow evidence that controls are working in practice. 
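In practice, purpose limitation and data minimisation can be enforced before an agent ever sees a record: each approved purpose maps to the smallest set of fields it needs, and everything else is stripped. The Python sketch below is illustrative; the purposes and field names are invented for the example.

```python
# Illustrative sketch of data minimisation and purpose limitation for an agent.
# Purpose names and fields are hypothetical, not a standard vocabulary.

ALLOWED_FIELDS = {
    "refund_triage": {"order_id", "order_total", "ticket_text"},
    "delivery_update": {"order_id", "shipping_status"},
}

def minimise(record: dict, purpose: str) -> dict:
    allowed = ALLOWED_FIELDS.get(purpose)
    if allowed is None:
        raise PermissionError(f"no approved purpose: {purpose}")   # purpose limitation
    return {k: v for k, v in record.items() if k in allowed}       # data minimisation

customer_record = {
    "order_id": "A-1001",
    "order_total": 42.50,
    "ticket_text": "item arrived damaged",
    "home_address": "REDACTED",
    "date_of_birth": "REDACTED",
}

print(minimise(customer_record, "refund_triage"))   # address and birth date never reach the agent
```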

Education is equally important. Users interacting with agents must understand those systems’ limitations. Those integrating agents into workflows must be aware of automation bias, the tendency to over-rely on automated outputs, and know when and how to escalate concerns. Without training, even well-designed safeguards can fail. 

Recent events illustrate why this matters. A platform known as Moltbook, a social network designed for AI agents, reported growth from roughly 100 agents to 1.5 million within days. Agents could post, upvote and interact. One published a manifesto calling for human extinction. Another was compromised through social engineering tactics. 

This is not evidence of machine consciousness or an impending robot uprising. It is a reminder that autonomous systems operating at scale can behave in unexpected ways, especially when connected to open environments. They can be manipulated. They can amplify flawed objectives. They can expose weaknesses in governance. 

As agentic AI moves from experiment to infrastructure, governance cannot be an afterthought. Transparency, defined control points, privacy by design and legal protection by design are not barriers to innovation. They are the conditions for its legitimacy. 

Human rights are not abstract principles in this context. They are practical design requirements. If we embed them early, in architecture, deployment and education, we can harness the benefits of agents while preserving accountability and trust. 

Authors

Ivana Bartoletti

Global Chief Privacy & AI Governance Officer, Wipro