20 Feb 2026
by Natalia Domagala

What’s next for human rights and tech: three shifts companies need to know

As artificial intelligence becomes embedded across economies and workplaces, the conversation about what it means to deploy AI with human rights considerations in mind is changing. Three key shifts are reshaping companies’ understanding of human rights in tech. 

From principles to accountability  

For much of the past decade, technology governance has been dominated by high-level ethical principles. While these principles helped establish a baseline, they are no longer sufficient without strong implementation and accountability mechanisms. As the use of AI rapidly increases across all industries, regulators, courts and civil society have been framing technology harms as potential human rights violations rather than as technical failures or unintended side effects. The focus is shifting to whether harms were foreseeable, preventable and addressed when they occurred.

Companies are being asked to demonstrate that they have identified risks, taken meaningful steps to prevent harm and provided access to remedy when people are adversely affected. This marks a fundamental change where responsibility means having effective processes, decisions and accountability mechanisms across the full lifecycle of technology, from design and data sourcing to deployment and oversight. 

Data workers as rights-holders 

A second shift is bringing long-overlooked workers into view. Data workers, including those involved in data labelling, content moderation and other forms of digital labour, are increasingly recognised as essential to AI systems, yet often remain exposed to precarious conditions, low pay, and psychological harm. As scrutiny of AI supply chains grows, expectations are changing. Fair compensation, safe working environments, mental health support and transparency about how data labour is organised are now central to discussions about responsible AI.  

Recognising data workers as skilled, value-creating contributors strengthens both governance and outcomes. Companies that engage seriously with labour conditions in their AI supply chains are better positioned to build resilient and trustworthy technologies – while those that fail to address these risks face reputational damage, regulatory scrutiny and systems built on unstable foundations.

Human rights at work as a question of power 

The third shift is unfolding in workplaces. AI increasingly shapes how work is monitored, evaluated, scheduled and rewarded. As a result, debates about the use of AI in the workplace are moving beyond impact mitigation toward deeper questions of power and control. Concerns about surveillance, deskilling and loss of autonomy are reframing workplace technology use as a human rights issue. Workers and their representatives are demanding greater transparency about the use of AI in their companies, a say in deployment decisions and the ability to contest automated outcomes that affect pay, performance or job security. As companies adopt AI systems, failing to ensure meaningful worker involvement carries growing commercial risks, including reduced morale, talent retention challenges, reputational damage and operational inefficiencies when systems are misunderstood or mistrusted. In this context, explainability, consultation, deploying technology in partnership with workers’ representatives and meaningful oversight of existing AI systems are becoming central to responsible workplace AI, alongside the recognition that technological change should augment human work rather than erode dignity and agency.

What companies should do now 

These shifts indicate that reactive compliance is no longer enough. Human rights due diligence in AI deployment should be treated as a continuous process. There are several steps that companies need to take to stay ahead: 

  1. Audit the human rights impact of your AI use 

Companies need to identify, assess and address risks associated with technologies across their full lifecycle, and adjust their design and deployment choices accordingly.  

One tool to help them do that is the AI Company Data Initiative (AICDI) powered by the Thomson Reuters Foundation. This free framework helps corporate leaders map how AI is developed and used across their operations. It helps businesses identify potential human rights risks – such as labour conditions in data labelling, content moderation and outsourced digital work – and formulate a plan to address them.

  2. Engage with stakeholders 

In addition to a comprehensive audit, meaningful stakeholder engagement is essential. Workers, users, affected communities, unions, civil society organisations and subject-matter experts often reveal risks that technical teams cannot see alone. Their lived experience is critical to understanding how systems operate in practice. 

  3. Build cross-functional governance 

Human rights responsibilities cannot sit solely with legal or compliance teams. Effective governance requires collaboration across engineering, product, procurement, HR, legal and risk functions. Companies need clear accountability at senior levels, and leaders should be explicitly responsible for technology-related human rights risks. For companies deploying AI, procurement is a key lever. Contracts with data vendors and platform partners should include expectations on fair pay and good working conditions.  

  4. Embed rights and labour protections by design 

Just as privacy by design has become standard practice, a rights by design approach is increasingly a necessity for businesses. Embedding fairness, explainability and human oversight from the outset, in particular in systems that affect employment, working conditions or access to opportunities, helps companies avoid costly and complex remediation once challenges emerge. Designing workplace technologies to support and augment human work, rather than intensify surveillance or deskill roles, also strengthens trust and retention. Before deployment, companies should use bias audits, stress testing and red-teaming to identify and mitigate potential harms, including labour-related harms. This proactive approach not only meets rising regulatory expectations but also supports smoother adoption and long-term resilience.
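To make the idea of a pre-deployment bias audit concrete, here is a minimal sketch in Python. It computes selection rates by group for a hypothetical shortlisting tool and flags any group falling below the widely used four-fifths heuristic. The sample data, group labels and 0.8 threshold are illustrative assumptions, not part of any framework mentioned in this article.

    # Minimal, illustrative bias-audit check: compare selection rates across
    # groups and flag large disparities (four-fifths heuristic).
    from collections import defaultdict

    def selection_rates(outcomes):
        """outcomes: iterable of (group, selected) pairs; selected is a bool."""
        counts = defaultdict(lambda: [0, 0])  # group -> [selected count, total]
        for group, selected in outcomes:
            counts[group][1] += 1
            if selected:
                counts[group][0] += 1
        return {g: sel / total for g, (sel, total) in counts.items()}

    def flag_disparities(rates, threshold=0.8):
        """Flag groups whose selection rate is below threshold x the highest rate."""
        best = max(rates.values())
        return {g: (r / best) < threshold for g, r in rates.items()}

    # Hypothetical outputs of a shortlisting model: (group, shortlisted?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(sample)
    print(rates)                    # approximately {'A': 0.67, 'B': 0.33}
    print(flag_disparities(rates))  # {'A': False, 'B': True} -> group B flagged

A check like this is only one input into a wider audit: quantitative flags should prompt qualitative review, worker consultation and, where needed, changes to the system before it is deployed.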

  5. Strengthen transparency and redress 

Transparency builds trust, but only when paired with meaningful avenues for redress. Companies can practise meaningful transparency by publicly disclosing relevant policies and practices related to data sourcing, data labour, supply chains and workplace AI. To ensure that workers, including contracted and outsourced data workers, can raise concerns, companies should introduce grievance mechanisms that are safe and effective. When using AI in more sensitive contexts, companies need to be able to explain how automated systems influence decisions about hiring, pay, scheduling or performance, and how those decisions can be challenged.

  6. Collaborate across ecosystems 

No company can address these challenges alone – shared risks require collective solutions. Industry initiatives such as the AI Company Data Initiative can help establish and benchmark common standards for ethical AI, fair data labour and responsible workplace technology. Engagement with governments, unions, NGOs and researchers is equally important to shaping norms that protect workers and uphold human rights across complex technology ecosystems.

Human rights due diligence is critical to successful AI adoption that can withstand the lightning pace of this technology. Companies that recognise this and act accordingly will be better equipped to navigate regulatory change and create technologies that genuinely serve people.

 

 

Authors

Natalia Domagala

Thomson Reuters Foundation