Ensuring Responsible Digital Identity: An AI Governance Perspective
*Please note that these thought leadership pieces represent the views of the contributing companies and do not necessarily reflect techUK’s own position.
At Holistic AI, we specialise in AI governance, working to ensure that AI systems are developed and deployed responsibly across various sectors and with the appropriate safeguards in place to minimise risk and prevent harm.
Overall, our goal is to empower the adoption of AI at scale by fostering trust in the technology. Our interdisciplinary expertise in this field uniquely positions us to address the critical intersection of AI and digital identity. In this article, we argue that robust AI governance is crucial for creating ethical and inclusive digital identity systems that benefit all members of society.
The Intersection of AI and Digital Identity
In our increasingly digital world, digital identity has become a cornerstone of modern society, facilitating everything from online banking to accessing government services, and artificial intelligence (AI) is playing an ever-expanding role in the development and implementation of these systems. While AI offers immense potential to enhance the efficiency and security of digital identity systems, it also introduces new ethical and safety challenges that must be carefully addressed. Indeed, there are concerns about bias and discrimination in AI algorithms, privacy issues related to data collection and processing, a lack of transparency in AI decision-making, and the potential for systems to be used maliciously. As AI becomes more prevalent in digital identity systems, it's crucial to address these risks through effective governance frameworks.
Key Ethical Considerations in AI-Powered Digital Identity
To ensure the responsible development and deployment of AI in digital identity systems, several key ethical considerations must be addressed. Fairness and non-discrimination are paramount: AI systems must be designed and trained so that they do not result in unjustifiable differences in treatment or outcomes for different groups. To achieve this, the data used to train algorithms should be as representative as possible so that models perform well for all the subgroups they may be applied to. The features in a model should also be carefully examined to ensure they are not proxies for protected attributes, and models should be evaluated to confirm they are accurate across subgroups. Finally, outcomes should be continuously monitored for unjustifiable differences across subgroups, taking deployment context into consideration.
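To make the monitoring step more concrete, the sketch below shows one simple way a binary identity-verification model's outputs could be checked across demographic subgroups. It is an illustrative example only, not Holistic AI's methodology: the per-group accuracy and selection-rate calculations, the four-fifths threshold used as a reference point, and all data are assumptions for demonstration.

```python
# Illustrative sketch only: a minimal subgroup fairness check for a binary
# verification model. Not Holistic AI's methodology; the metrics, threshold,
# and data below are assumptions for demonstration purposes.
from collections import defaultdict

def subgroup_report(y_true, y_pred, groups):
    """Per-group accuracy and selection (approval) rate."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "selected": 0})
    for truth, pred, group in zip(y_true, y_pred, groups):
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(pred == truth)
        s["selected"] += int(pred == 1)
    return {
        g: {"accuracy": s["correct"] / s["n"], "selection_rate": s["selected"] / s["n"]}
        for g, s in stats.items()
    }

def disparate_impact(report):
    """Ratio of lowest to highest selection rate; values below 0.8 flag a
    potential adverse-impact concern under the common four-fifths rule of thumb."""
    rates = [r["selection_rate"] for r in report.values()]
    return min(rates) / max(rates) if max(rates) > 0 else 1.0

# Made-up outcomes for two demographic subgroups, A and B.
y_true = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

report = subgroup_report(y_true, y_pred, groups)
print(report)
print("Disparate impact ratio:", disparate_impact(report))
```

In a production setting such checks would run continuously on live outcomes rather than on a one-off test set, with thresholds and metrics chosen to suit the deployment context.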
The decision-making processes of AI systems should be transparent and explainable to both users and regulators, particularly in digital identity systems, where AI decisions can have significant impacts on individuals' lives. Information should be provided on how individuals can opt out of engaging with AI and how users can challenge the decisions it makes. There should also be appropriate human oversight, with the ability to intervene, override, or even stop the system to ensure responsible decision-making. Disclosures should be conspicuous and clear, and notification should be given in advance as far as possible.
Privacy and data protection are equally critical. This includes implementing robust security measures, minimising data collection to only the data that is necessary, and ensuring user consent for data usage. Appropriate data stewardship and data governance practices should be in place, alongside well-established policies and procedures to follow in the event of a data breach.
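As a purely illustrative example of how data minimisation and consent checks might be enforced in code (the field names, consent flag, and pipeline are hypothetical, not a description of any particular system), consider the following sketch:

```python
# Illustrative sketch only: enforcing data minimisation and a consent check
# before an identity record reaches a verification model. All field names
# and the consent flag are hypothetical assumptions for demonstration.
REQUIRED_FIELDS = {"document_number", "date_of_birth", "face_embedding"}

def minimise(record: dict) -> dict:
    """Keep only the fields the verification step actually needs."""
    return {key: value for key, value in record.items() if key in REQUIRED_FIELDS}

def prepare_for_verification(record: dict) -> dict:
    """Reject records without explicit consent, then strip unnecessary fields."""
    if not record.get("consent_to_biometric_processing", False):
        raise PermissionError("User has not consented to biometric processing")
    return minimise(record)
```

The design intent is simply that only the data needed for the verification decision ever reaches the model, and only after consent has been explicitly recorded.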
Ensuring Inclusivity in Digital Identity Systems
Creating truly inclusive digital identity systems requires addressing the unique challenges faced by marginalised and underrepresented groups. These challenges may include limited access to technology, language barriers, cultural differences in identity documentation, and disabilities that may affect traditional authentication methods. These groups may also be underrepresented in the data used to train models.
To promote inclusivity, engaging diverse stakeholders in the design process and using representative and diverse datasets for AI training are essential. Implementing multiple authentication options to accommodate different needs, providing multilingual support, and designing culturally sensitive user interfaces are also crucial. Diverse development teams bring a range of perspectives and experiences to the table.
Holistic AI's Approach to Ethical Digital Identity
At Holistic AI, we've developed a comprehensive governance platform to address the ethical challenges of AI across sectors, including in digital identity systems. Our holistic approach to independent evaluations of AI systems is grounded in research in AI auditing and assurance, and our platform takes model specifications and deployment context into consideration to ensure that the most up-to-date best practices are followed. We assess systems for risks related to bias and fairness, privacy, robustness, transparency and explainability, and efficacy, providing mitigation strategies and ongoing monitoring for systems to ensure their safety and maximise their value. We have audited well over 20,000 algorithms across a variety of sectors and applications, including identity verification.
For example, we have worked with a financial institution to manage the risks of AI-powered identity verification. Leveraging our interdisciplinary expertise and governance framework, we helped the institution ensure that its facial recognition system had the appropriate safeguards in place to prevent bias and that appropriate privacy-preserving techniques for data handling were used.
Recommendations for Enterprises
Adopting a comprehensive AI governance framework not only helps enterprises gain a competitive advantage by building trust and maximising their AI ROI, but can also help to shield them from financial, reputational, and legal risks. Regular audits of AI systems throughout their lifecycle can help to detect and mitigate biases, privacy risks, and other ethical issues. Fostering an ethical AI culture within the organisation through training, clear policies, and leadership commitment is equally important to gain internal buy-in and ensure that best practices are followed.
Enterprises should also invest in research and development of more ethical and inclusive AI technologies. Staying informed about evolving regulations and best practices in AI ethics and governance is crucial for maintaining high standards in this rapidly evolving field. To stay on top of developments in the AI governance ecosystem, sign up for a free account on the Holistic AI Tracker and check out our state of AI regulations report.
Conclusion
As digital identity systems become increasingly integral to our daily lives, ensuring their ethical development and deployment is paramount. AI governance plays a crucial role in addressing the complex challenges at the intersection of AI and digital identity, promoting fairness, transparency, privacy, and inclusivity while reducing risks.
At Holistic AI, we are committed to continuing to advance AI governance in digital identity technologies and beyond. There is no one entity responsible for ethical AI; multiple stakeholders must come together to create digital identity systems that are not only cutting edge and effective but also ethically sound and truly inclusive for all members of society.
Authored by
Holistic AI