19 Feb 2026
by Désirée Abrahams

AI development requires an ‘all-of-society’ approach

2026 kicked off with global coverage of approximately 3 million non-consensual intimate images generated by an AI tool over a two-week period, including 23,000 that appeared to depict children. Regulators from Australia, France, India, the UK, and the European Commission all launched formal investigations within days.

Governments are having to play catch-up – and quickly. AI solutions are being deployed at speed across increasingly broad aspects of our work, play and relationships. Even those in the AI industry now acknowledge that risks once perceived as theoretical are materialising into real-world harms.

The International AI Safety Report (February 2026) categorises general-purpose AI risks into three broad areas:

Malicious use: Content generated for misinformation or disinformation campaigns, blackmail, fraud or scams, and non-consensual intimate imagery.  

Malfunctions: Inaccurate or biased outputs, misleading advice, and the absence of effective remedies when systems fail.  

Systemic risks: Threats to human intelligence and autonomy, alongside large-scale economic and social disruption.

Consider the potential risks to labour markets arising from widespread AI deployment and corresponding workforce reductions – particularly for future generations, such as Generation Beta. This raises further questions about possible shortfalls in tax revenue, coupled with greater demand for welfare benefits, stemming from earlier government and company decisions to adopt AI at scale.

It’s not a stretch to imagine how such pressures could heighten societal tension and contribute to intergenerational discord.

While the benefits of AI are already evident – with efficiency gains across technology, finance, education, healthcare, and other sectors – the associated human rights risks are not theoretical. They are real-life concerns.
 
Developing and deploying AI solutions requires an all-of-society approach. As AI’s tentacles reach ever deeper into daily life, reliance on a single actor – whether government or industry – leaves us vulnerable.

While governments and companies developing AI solutions bear clear responsibility to put in place appropriate guardrails, it’s vital that parents, civil society, academics and other relevant stakeholders also scrutinise government and company AI advancements and participate in stakeholder consultations.  

What can companies do?  

The incidents and risks outlined above share a common feature: they were foreseeable. Each points to gaps in data governance, testing, transparency, accountability, or remedy. This is why companies cannot treat AI deployment as purely technical innovation, but must approach it as a human rights-relevant business decision.

Companies should treat AI solutions just like any other new product or service within their business and conduct an initial risk assessment. In the AI context, this includes an assessment that focuses explicitly on human rights – identifying and mitigating potential adverse human rights impacts.

Key questions to consider:  

  • Was the data collected with informed consent?  
  • What steps are in place to avoid algorithmic bias, data misuse, or misinterpretation?  
  • Are data analysts trained in responsible AI, ethics and human rights? 
  • Was the AI solution trained and tested on diverse populations prior to deployment?  
  • Are users clearly informed when AI is used within products or services?  
  • Is there an accessible redress mechanism for those affected by AI-driven decisions?  

Adopting a human rights due diligence approach to AI design, commissioning, and deployment is not optional – it is part of a company’s responsibility to respect human rights.

Authors

Désirée Abrahams

Consulting Director, ERM