Across all sectors, organisations are looking to leverage artificial intelligence (AI) to improve efficiency and gain deeper insights. This trend has sparked numerous discussions about how to achieve AI readiness. The first step in laying the groundwork for robust AI use is data cleaning.
Data cleaning is the process of detecting and rectifying errors and inconsistencies within data sets to enhance their quality. It is critical that data used within AI has been evaluated to ensure that it’s fit for purpose.
Accurate data underpins the reliability of AI models, allowing them to make precise predictions and informed decisions. Moreover, clean data contributes to the efficiency of AI algorithms, enabling them to learn more swiftly and perform optimally, thus conserving time and computational resources. The reliability of AI outcomes also depends on the consistency of the underlying data, which is critical for building trust in AI systems.
Data should be evaluated to ensure that it is:
Secure: Data must be protected from unauthorised access – whether to prevent sensitive data from being presented to the users of AI, or to guard against external breaches and cyberattacks. Implementing robust security measures is essential to maintain the integrity and confidentiality of data.
High quality: High-quality data is fundamental to producing accurate and consistent AI results. Missing values, errors and outliers should be resolved before data is used.
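One part of this quality step can be sketched in code. The snippet below is a minimal, illustrative example (not FDM's method) of resolving missing values and flagging outliers in a numeric field using only Python's standard library; the cutoff of 3.5 on the median absolute deviation is a common but arbitrary choice.

```python
from statistics import median

def clean_numeric(values, mad_cutoff=3.5):
    """Drop missing values, then remove outliers using the
    median absolute deviation (robust to extreme readings)."""
    present = [v for v in values if v is not None]
    if len(present) < 3:
        return present
    med = median(present)
    mad = median(abs(v - med) for v in present)
    if mad == 0:  # all values identical; nothing to flag
        return present
    return [v for v in present if abs(v - med) / mad <= mad_cutoff]

# A sensor feed with one gap and one obvious glitch:
readings = [21.5, 22.0, None, 21.8, 500.0, 22.1]
print(clean_numeric(readings))  # the None and the 500.0 are removed
```

A median-based rule is used here rather than a mean/standard-deviation z-score, because a single extreme value inflates the standard deviation enough to mask itself in small samples.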
Ethical: Ethical and privacy considerations should be addressed for any data utilised. Data used to train AI should be representative and free from bias.
An important point to remember with AI is that it assumes it is permitted to look at any data available in order to ‘learn’. Unless we have classified and protected the data we don’t want it to use (personal letters, end-of-year self-assessment forms, etc.), the AI will use it. Thinking about what data you don’t want to use can therefore be just as important as thinking about the data you do wish to use.
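In practice this means filtering on classification labels before anything reaches a training or retrieval pipeline. The sketch below is a hypothetical example – the field names and classification values are assumptions, not a real schema – showing the idea of excluding protected material by default.

```python
# Hypothetical classification labels an organisation might apply.
EXCLUDED_CLASSIFICATIONS = {"personal", "confidential"}

documents = [
    {"name": "product_faq.txt", "classification": "public"},
    {"name": "self_assessment_2023.txt", "classification": "personal"},
    {"name": "press_release.txt", "classification": "public"},
]

# Only explicitly permitted documents go forward to the AI pipeline.
training_set = [
    d for d in documents
    if d["classification"] not in EXCLUDED_CLASSIFICATIONS
]
print([d["name"] for d in training_set])
# ['product_faq.txt', 'press_release.txt']
```

The design choice worth noting is that exclusion is driven by metadata attached during classification, so the decision about what the AI may see is made once, upstream, rather than ad hoc at training time.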
Risks of poor data quality used to train AI
Bias
Large language models (LLMs) can require massive amounts of data to train, and the most cost-effective way to obtain it is by scraping the internet. However, when we accept large amounts of web text as ‘representative’, we risk amplifying dominant viewpoints and inevitably perpetuating biases.
Data is primarily created by humans, so it carries inherent biases. It is therefore important to evaluate data for both accuracy and discriminatory biases that may affect the AI's perspective. Conduct parity tests during training to identify and address any biases.
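A simple form of parity test compares positive-outcome rates across groups. The sketch below is illustrative only – the field names, groups, and the 0.1 threshold are assumptions – and real fairness testing involves more than one metric.

```python
from collections import defaultdict

def positive_rates(records, group_key, label_key):
    """Positive-outcome rate per group, e.g. approval rate."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[label_key])
    return {g: positives[g] / totals[g] for g in totals}

# Illustrative decisions: group A approved 8/10, group B only 4/10.
decisions = (
    [{"group": "A", "approved": True}] * 8
    + [{"group": "A", "approved": False}] * 2
    + [{"group": "B", "approved": True}] * 4
    + [{"group": "B", "approved": False}] * 6
)

rates = positive_rates(decisions, "group", "approved")
gap = max(rates.values()) - min(rates.values())
print(rates)  # {'A': 0.8, 'B': 0.4}
print(f"parity gap = {gap:.2f}")  # 0.40 – would fail a 0.1 threshold
```

A gap this large would prompt investigation of the training data before the disparity is baked into the model.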
Inaccurate representation
Representation errors are often a result of subjective training data. Additionally, accurate data labelling is crucial to avoid measurement errors. Without quality control, human-labelled data can introduce bias.
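One common quality-control measure for human labelling is to have two annotators label the same items and measure their agreement; low agreement signals subjective or ambiguous labels. This is a minimal sketch of that idea (the item and label names are invented for illustration):

```python
def percent_agreement(labels_a, labels_b):
    """Share of items on which two annotators gave the same label."""
    assert len(labels_a) == len(labels_b)
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

def disagreements(items, labels_a, labels_b):
    """Items to send back for adjudication before training."""
    return [it for it, a, b in zip(items, labels_a, labels_b) if a != b]

items = ["img1", "img2", "img3", "img4"]
annotator_1 = ["cat", "dog", "cat", "bird"]
annotator_2 = ["cat", "dog", "dog", "bird"]

print(percent_agreement(annotator_1, annotator_2))  # 0.75
print(disagreements(items, annotator_1, annotator_2))  # ['img3']
```

Raw percent agreement is the simplest possible metric; chance-corrected measures such as Cohen's kappa are usually preferred in practice.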
Outdated data
AI programmes may struggle with data quality aspects like timeliness and consistency. If trained only on historical data, they cannot account for changes since that data was collected, resulting in outputs that lack up-to-date and complete information.
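A basic timeliness check is to flag records whose last update predates a freshness cutoff, so stale material is reviewed or excluded before training. The record shape and cutoff below are assumptions for illustration:

```python
from datetime import date

def stale_records(records, cutoff):
    """Return records whose last update predates the cutoff date."""
    return [r for r in records if r["updated"] < cutoff]

records = [
    {"id": 1, "updated": date(2019, 6, 1)},
    {"id": 2, "updated": date(2024, 3, 15)},
]

flagged = stale_records(records, cutoff=date(2021, 1, 1))
print([r["id"] for r in flagged])  # [1]
```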
Duplication errors
Using unchecked and duplicate data from multiple sources can cause errors. Moreover, unstructured data without metadata can create confusion, complicating analysis for the AI programme.
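A common deduplication approach is to normalise each record and compare fingerprints, so near-identical copies from different sources are kept only once. This sketch catches exact duplicates after whitespace and case normalisation; fuzzy near-duplicate detection is a harder problem and is not shown.

```python
import hashlib

def normalise(text):
    """Lower-case and collapse whitespace so trivially
    different copies produce the same fingerprint."""
    return " ".join(text.lower().split())

def deduplicate(texts):
    """Keep the first occurrence of each normalised text."""
    seen, unique = set(), []
    for t in texts:
        key = hashlib.sha256(normalise(t).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(t)
    return unique

corpus = ["Hello  World", "hello world", "Goodbye"]
print(deduplicate(corpus))  # ['Hello  World', 'Goodbye']
```

Hashing the normalised text rather than storing it keeps the seen-set small when the corpus is large.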
Our FDM Consultants are actively helping clients to become AI-ready, as well as supporting the identification of use cases and promoting adoption of AI throughout the user groups.
Use cases for AI
The first step in any organisation’s AI journey is to identify the right use cases that align with their business objectives.
A key challenge is that organisations implementing AI often have to stop because they lack a data strategy. Even where a data strategy is established, there may be no data governance measures in place to ensure that data is not mishandled or overshared, and that it is of good quality.
FDM’s AI offering
All our consultants are introduced to AI as a core capability. Our AI Engineers can support with everything from prompt engineering to building custom AI solutions for your business. We are working with partners to encourage ethical use of AI and to govern it through GRC (Governance, Risk and Compliance) tools.
The AI revolution is well and truly underway, but its success depends on how well-prepared organisations are to adopt it – with a combination of strategy and talent to implement it.
FDM is conducting a global survey of business leaders to determine how AI is being adopted and deployed. Please fill in this short 5-minute questionnaire. Answers will be treated anonymously, and your data and contact details will only be used for the purpose of this research. The results of the study will be published in a whitepaper and will be shared with those who responded via the email they used to respond.
techUK - Seizing the AI Opportunity
The UK is a global leader in AI innovation, development and adoption.
AI has the potential to boost UK GDP by £550 billion by 2035, making adoption an urgent economic priority. techUK and our members are committed to working with the Government to turn the AI Opportunities Action Plan into reality. Together we can ensure the UK seizes the opportunities presented by AI technology and continues to be a world leader in AI development.
Get involved: techUK runs a busy calendar of activities including events, reports, and insights to demonstrate some of the most significant AI opportunities for the UK. Our AI Hub is where you will find details of all upcoming activity. We also send a monthly AI newsletter which you can subscribe to here.
On February 11, 2026, techUK convened a timely discussion exploring the intersection of AI insurance, assurance, and risk mitigation. The panel brought together speakers including Philip Dawson (Armilla AI) and Professor Lukasz Szpruch (University of Edinburgh/Alan Turing Institute).
UK Research and Innovation has published its first AI Strategy, setting out a long-term plan backed by £1.6 billion to strengthen the UK’s AI research, skills and infrastructure. The framework outlines six priority areas aimed at advancing technology development, supporting responsible AI and translating scientific excellence into economic and societal benefit.
Sign-up to our monthly newsletter to get the latest updates and opportunities from our AI and Data Analytics Programme straight to your inbox.
Contact the team
Kir Nuthi
Head of AI and Data, techUK
Kir Nuthi is the Head of AI and Data at techUK.
She holds over seven years of Government Affairs and Tech Policy experience in the US and UK. Kir previously headed up the regulatory portfolio at a UK advocacy group for tech startups and held various public affairs roles in US tech policy. All involved policy research and campaigns on competition, artificial intelligence, access to data, and pro-innovation regulation.
Kir has an MSc in International Public Policy from University College London and a BA in both Political Science (International Relations) and Economics from the University of California San Diego.
Outside of techUK, you are likely to find her studying at art galleries, attempting an elusive headstand at yoga, mending and binding books, or chasing her dog Maya around South London's many parks.
Usman joined techUK in January 2024 as Programme Manager for Artificial Intelligence.
He leads techUK’s AI Adoption programme, supporting members of all sizes and sectors in adopting AI at scale. His work involves identifying barriers to adoption, exploring solutions, and helping to unlock AI’s transformative potential, particularly its benefits for people, the economy, society, and the planet. He is also committed to advancing the UK’s AI sector and ensuring the UK remains a global leader in AI by working closely with techUK members, the UK Government, regulators, and devolved and local authorities.
Since joining techUK, Usman has delivered a regular drumbeat of activity to engage members and advance techUK's AI programme. This has included two campaign weeks, the creation of the AI Adoption Hub (now the AI Hub), the AI Leader's Event Series, the Putting AI into Action webinar series and the Industrial AI sprint campaign.
Before joining techUK, Usman worked as a policy, regulatory and government/public affairs professional in the advertising sector. He has also worked in sales, marketing, and FinTech.
Usman holds an MSc from the London School of Economics and Political Science (LSE), a GDL and LLB from BPP Law School, and a BA from Queen Mary University of London.
When he isn’t working, Usman enjoys spending time with his family and friends. He also has a keen interest in running, reading and travelling.
Sue leads techUK's Technology and Innovation work. This includes work programmes on AI, Cloud, Data, Quantum, Semiconductors, Digital ID and Digital ethics as well as emerging and transformative technologies and innovation policy. In 2025, Sue was honoured with an Order of the British Empire (OBE) for services to the Technology Industry in the New Year Honours List. She has also been recognised as one of the most influential people in UK tech by Computer Weekly's UKtech50 Longlist and was inducted into the Computer Weekly Most Influential Women in UK Tech Hall of Fame.
A key influencer in driving forward the tech agenda in the UK, in December 2025 Sue was appointed to the UK Government’s Women in Tech Taskforce by the Technology Secretary of State. She also sits on the UK Government’s Smart Data Council, Satellite Applications Catapult Advisory Group, Bank of England’s AI Consortium and BSI’s Digital Strategic Advisory Group. Previously, Sue was a member of the Independent Future of Compute Review and co-chaired the National Data Strategy Forum. As well as being recognised in the UK's Big Data 100 and the Global Top 100 Data Visionaries in 2020, Sue has been shortlisted for the Milton Keynes Women Leaders Awards and has been a judge for the Loebner Prize in AI, the UK Tech 50 and annual UK Cloud Awards. She is a regular industry speaker on issues including AI ethics, data protection and cyber security.
Prior to joining techUK in January 2015, Sue was responsible for Symantec's Government Relations in the UK and Ireland. Before that, Sue was senior policy advisor at the Confederation of British Industry (CBI). Sue has a BA in History and American Studies from Leeds University and a Master’s degree in International Relations and Diplomacy from the University of Birmingham. Sue is a keen sportswoman and in 2016 achieved a lifelong ambition to swim the English Channel.
Visit our AI Hub - the home of all our AI content:
Enquire about membership:
Become a techUK member
Our members develop strong networks, build meaningful partnerships and grow their businesses as we all work together to create a thriving environment where industry, government and stakeholders come together to realise the positive outcomes tech can deliver.
Rod is the founder and Chief Executive Officer of FDM Group and has more than 40 years’ experience in the technology sector. He has been instrumental in the development of the Group into an international, award winning employer with a prestigious client base operating in multiple markets. Rod is a strong advocate of improving diversity in the technology industry, as demonstrated by the Group’s Women in Tech, Returners Programme, Ex-Forces and veteran career transition initiatives. In 2019, he was featured in the Management Today Agents of Change Power List for the second consecutive year. He was also featured in the Yahoo HERoes Top Advocate Executives of 2019 for his work promoting gender equality in the workplace.
This morning, the Department for Science, Innovation and Technology’s (DSIT) Secretary of State, the Rt Hon Peter Kyle, announced the publication of two new Responsible Technology Adoption Unit (RTA) products at the Financial Times Future of AI Summit.
AI is transforming business, but governance is lagging. Shoosmiths’ Technology & AI Partners, Alex Kirkhope and Sarah Reynolds, explore why embedding compliance into strategy is critical for resilience and growth.
Artificial intelligence system impact assessments (AI-SIAs) are documented processes for identifying individuals and groups impacted by an AI system. Impact assessments are already common in the contexts of business, the environment, finance, human rights, IT, privacy, personally identifiable information and security. As part of a growing list of governance and policy assets available to those deploying and providing AI systems, AI-SIAs can provide technically rigorous analytical frameworks and approaches.
Chatham House has released a collection of essays that examines innovative approaches to AI regulation and governance. It presents and evaluates proposals and mechanisms for ensuring responsible AI, from EU-style regulations to open-source governance, from treaties to CERN-like research facilities and publicly owned corporations. You can read the essays here. Drawing on perspectives from around the world, the collection underscores the need to protect openness, ensure inclusivity and fairness in AI, and establish clear ethical frameworks and lines of cooperation between states and technology companies.
As AI continues to transform industries across the globe, the need for professionals who can operationalise its ethical implementation has never been more critical. Whether you're looking to join the field or are already working as a responsible AI practitioner, these resources from techUK will help you navigate this evolving profession.
Join us on 16 March for techUK’s annual Policy Conference, bringing together senior leaders from government, regulators, industry and academia to discuss the key issues shaping UK tech.