In the search for ways to build greater trust in AI across the world, the concept of AI assurance is rapidly gaining traction. It covers a wide range of activities, but what they have in common is the aim to investigate, and then communicate, whether AI systems are reliable and trustworthy. Used effectively, AI assurance should increase the confidence of anyone considering deploying AI, as well as that of those whose lives may be impacted by its deployment.
But choosing the right approach to AI assurance is not a simple task, and as the emphasis on assurance grows among policymakers and regulators, so does the number of organisations faced with such choices.
In this webinar, we heard from techUK members working in different fields who have made progress on AI assurance. They shared what their approach has been so far, what they're looking to achieve, and some of the difficult decisions they have had to make in the process.
- Chris Anley - Chief Scientist, NCC Group
- Jacob Beswick - Director for AI Governance Solutions, Dataiku
- Ben Montgomery - Senior Governance Consultant, Dataiku
- Xin Chen - Executive Director, European Lead on AI, Data Governance Policy, Standards & Industry Digitization, Huawei Technologies
- Dr Hsiao-Ying Lin - Principal researcher, Trustworthy AI Lab, Huawei Technologies
- Lee Glazier - Head of Service Integrity, Rolls-Royce
- Emilie Sundorph - Programme Manager, Digital Ethics & AI, techUK
The webinar featured presentations on several subtopics within the current state of AI assurance. In the first presentation, Chris Anley of NCC Group discussed what drives security assurance and some of the methods applied. Jacob Beswick and Ben Montgomery then discussed the importance of AI governance and helped visualise how businesses could scale from vision to practice.
Understanding how AI assurance progresses requires understanding the policies and legislation that exist around the globe. Xin Chen from Huawei Technologies focused on the role of the EU AI Act, and his colleague Dr Hsiao-Ying Lin followed by discussing how Huawei's Trustworthy AI Lab has progressed in AI assurance. In the final presentation, Lee Glazier explained how Rolls-Royce maps ethical challenges and how its Aletheia Framework for assuring AI ethics is used by organisations beyond Rolls-Royce.
For full details, you can watch the event at the link below!