14 May 2025
by Leanne Allen

The age of AI is here, but levels of trust are low

Guest blog from Leanne Allen, Partner at KPMG UK, as part of our #SeizingTheAIOpportunity campaign week 2025.

The use of AI in the UK is growing exponentially, but levels of trust are low.  

That’s the tension at the centre of a new global study from KPMG and the University of Melbourne on trust, attitudes and use of AI. The research, conducted between November 2024 and January 2025, captures the views on AI of more than 48,000 people across 47 countries, 1,029 of whom were in the UK.

The research revealed that the rapid pace of AI adoption is outstripping governance, regulation and education. This is creating a trust gap that must be closed if the UK is to become a world leader in AI.

Less than half of the UK is willing to trust AI 

Just 42% of those surveyed in the UK are willing to trust AI, with respondents citing AI risks and AI-generated misinformation as key concerns. A large majority (80%) also believe that AI regulation is required to combat AI-generated misinformation.

It’s clear that the UK is grappling with an AI trust issue as the technology increasingly integrates into our daily lives.  

While some existing laws do apply to AI, its rapid development makes it difficult for new regulation to keep up. Both government and organisations therefore have a crucial role in educating and upskilling the public, not only about existing legislative safeguards but also on how to use assurance mechanisms and standards to mitigate risks as AI continues to develop.

The UK is falling behind on AI literacy  

Part of the reason for the lack of trust could be a lack of AI understanding. As it stands, the UK is falling behind many other countries when it comes to AI training and literacy: only around a quarter (27%) of people in the UK say they have had AI education or training. Despite that, almost half (48%) believe they can use AI tools effectively, although a smaller proportion (36%) feel they have the skills and knowledge to use them appropriately.

It’s therefore important to think about how to familiarise people with AI and educate them to use it effectively and responsibly, so that we don’t fall behind other countries. This education should extend beyond the workplace into schools and homes, as people integrate AI into their everyday lives.

The age of AI working is here 

Despite the apparent low levels of trust in AI, 71% of the UK public expect AI to deliver a range of benefits, and 59% have personally experienced or observed benefits from AI use.

For some, AI has become an essential work tool: 65% of UK employees now intentionally use AI at work, and two-fifths (39%) feel they can’t complete their work without its help. Over half (53% or more) of those using AI at work say they’ve already observed or experienced increased efficiency, quality of work and innovation, while 45% report increased revenue-generating activity. And adoption of AI is only going to increase.

To deliver the benefits while minimising the risks, organisations need a long-term strategy that breaks ingrained habits and adopts new ways of working collaboratively with AI. 

Employees are complacent in their use of AI 

Whilst workers are using AI, many are using it in complacent ways that present complex risks for organisations. This is demonstrated by the 54% of UK workers who say they’ve made mistakes in their work due to AI, and the 39% who have uploaded company information into a public AI tool.

Employees are eager to use AI to enhance their productivity at work, but in the absence of suitable tools from their employers, some are turning to publicly available options. This isn’t malicious misuse; employees simply want to increase their own efficiency. But it carries significant risks: once your intellectual property is entered into a public AI prompt, it cannot be retracted.

Additionally, monitoring AI usage and effectiveness is challenging, particularly given the risks associated with shadow IT and employees presenting AI-generated content as their own. This is why it’s important to embed strong governance, education, and controls.

Closing the trust gap 

It’s understandable that trusting AI can be challenging. How do you trust something that is constantly evolving and developing so quickly?

While low AI literacy is one reason for the trust gap, we need to look beyond education. To build confidence in AI, the technology needs to be built in a way that is ‘trusted by design’.

That means having a trusted AI framework that underpins everything an organisation does. Building an approach and framework for designing, building and using AI in a responsible and ethical manner, anchored in corporate values and complementary to your people, is now a strategic priority.

If people know the technologies have been built responsibly, with controls and assurances in place, they will be more confident and feel safer using them.

The emergence of AI brings huge opportunities for business and society, but its full potential will only be realised if people have trust in it. 

To learn more, read the full report referenced throughout this piece, ‘Trust, attitudes and use of artificial intelligence: A global study 2025’: https://kpmg.com/uk/en/insights/ai/uk-attitudes-to-ai.html



techUK - Seizing the AI Opportunity

For the UK to fully seize the AI opportunity, citizens and businesses must have trust and confidence in AI. techUK and our members champion the development of reliable and safe AI systems that align with the UK’s ethical principles.

AI assurance is central to this mission. Our members engage directly with policy makers, regulators, and industry leaders to influence policy and standards on AI safety and ethics, contributing to a responsible innovation environment. Through these efforts, we help build public trust in AI adoption whilst ensuring our members stay ahead of regulatory developments.

Get involved: techUK runs a busy calendar of activities including events, reports, and insights to demonstrate some of the most significant AI opportunities for the UK. Our AI Hub is where you will find details of all upcoming activity. We also send a monthly AI newsletter which you can subscribe to here.


Contact the team

Tess Buckley

Programme Manager - Digital Ethics and AI Safety, techUK

Visit our AI Hub - the home of all our AI content.

Authors

Leanne Allen

Partner, KPMG UK

Leanne Allen is a Partner in KPMG’s Financial Services Tech Consulting Practice and leads KPMG’s Data capability. She is a data architect with broad experience across data management, data and systems architecture, data visualisation, reporting and analytics, and data migration.

She is a thought leader in tech and data, including driving the data lens for KPMG’s 30 Voices campaign and co-authoring a paper with UK Finance on the ethical use of customer data in a digital economy. Leanne is passionate about driving a diverse culture in technology and, in particular, supporting working mothers in tech, having founded the Superwoman network at KPMG, with over 100 women in the network.
