Power With Principles: AI Governance that Strengthens Human Rights in Global Supply Chains
As regulatory expectations rise and supply chains become more complex, businesses are under increasing pressure to identify, prevent and address human rights risks with greater speed and clarity. These challenges often sit deep within multi-tiered networks, where traditional audits alone struggle to uncover issues around forced labour, unsafe working conditions or exploitative recruitment.
Artificial intelligence is reshaping this landscape. Used responsibly, AI enhances human insight, closes long-standing visibility gaps and supports more responsive, evidence-based decision-making. This enables victims of human rights abuses to be placed at the centre of an organisation’s response. But technology is only part of the answer. Strong governance, legal oversight and an ethical foundation are essential to ensure AI reduces harm rather than amplifies it.
Where AI helps today
Used carefully, AI can improve both the speed and quality of human rights due diligence:
- Mapping beyond tier one, using data linking and analysis to uncover hidden sub-suppliers, brokers and labour agents. With many organisations having limited, if any, visibility beyond their direct suppliers, AI can identify supply webs. These webs currently show only the connections between companies, not necessarily the exact chain a product or service follows; they are therefore a way to narrow the search and identify potential issues.
- Real-time risk sensing that scans multilingual sources for indicators such as withheld wages, excessive overtime or unsafe conditions.
- Worker voice analytics to spot patterns in grievance channels and surface issues traditional audits often miss.
- Geospatial analysis to identify environmental proxies for risk, from deforestation to night-time activity linked to excessive shifts.
- Predictive insight to prioritise high-risk sites for audits and remediation, enabling teams to focus resources where they matter most.
Crucially, these tools support, not replace, human judgement. They help sustainability, legal and procurement teams intervene earlier and more effectively.
A rapidly evolving legal environment
The International Court of Justice’s landmark advisory opinion in 2025 reframed climate action as a legal obligation, not a policy choice. Although directed at states, the opinion reinforces expectations on corporates whose activities contribute to environmental and human rights impacts. It strengthens the direction of travel: organisations must demonstrate meaningful due diligence across their operations and supply chains, including where climate and human rights intersect.
Courts in the UK are also increasingly willing to hear claims linked to abuses in the lower tiers of supply chains. Recent cases involving manufacturers and suppliers abroad show a clear trend: even when harm occurs several layers removed from the parent company, English courts may still consider claims where oversight, governance or economic benefit sit in the UK. For businesses with global supply chains, this elevates the importance of early detection, transparent action and well governed technology.
AI’s role in climate change
AI is also changing how organisations understand and respond to climate-related impacts across their supply chains. With the line between climate change and human rights becoming increasingly blurred, AI can support organisations in addressing both issues holistically.
However, AI is not impact-free. Model training requires energy and water, data centres have environmental footprints, and poorly designed tools can create social harms, including bias or inequitable access. As the need for climate-aligned action grows, so does the need for good AI governance: governance that recognises environmental impact, mitigates emissions and energy use, and ensures AI systems are socially and ethically grounded.
Build responsibly: developing, training and embedding AI that limits bias
To ensure AI strengthens human rights outcomes and climate responsibility, organisations must:
- Use representative, high-quality data, stress-test models for bias and avoid features that replicate discrimination.
- Embed human oversight, ensuring that high-risk decisions always involve human review and the ability to pause or escalate.
- Minimise environmental impact, choosing energy-efficient models, evaluating data centre emissions and building sustainability into procurement.
- Strengthen vendor governance, with contractual obligations for responsible AI, bias testing and remediation.
What this means for leaders
The direction is clear: corporates are expected to understand and address human rights risks across their value chains, including lower tiers that were once seen as too remote to monitor. AI can dramatically enhance visibility and responsiveness, but only when built and used with strong governance that protects people and the planet. Many organisations are not yet there: research conducted by Shoosmiths, in partnership with FT Longitude, revealed a significant gap in AI governance. Only a third of the 200 organisations surveyed have AI usage policies, and even fewer address the environmental and social impact of AI.
How Shoosmiths can help
Shoosmiths supports organisations to embed responsible AI frameworks, integrate human rights due diligence into procurement and governance, and manage litigation and regulatory risk. With combined expertise in technology, ESG, data protection and disputes, we help clients adopt AI confidently, ethically and sustainably.
If you’d like to explore how to strengthen AI-enabled due diligence or governance in your organisation, we’d be delighted to support you.