Robust AI Governance as a Path to Organisational Resilience
AI is being woven into critical organisational systems so quickly that it is already, or will soon be, inseparable from them. At GSK, AI integration now spans all levels of the organization, from day-to-day tools like Microsoft Office to advanced research applications in early-stage drug discovery. AI governance is therefore not only about managing the range of well-documented risks certain AI systems present, but also about supporting organizational resilience more broadly. In a heavily regulated industry such as biopharma, where patient safety is at the core of risk management, AI governance is not complementary to company strategy but core to it.
What is AI Governance?
AI governance refers to a range of mechanisms, including laws, regulations, policies, institutions, and norms, that can be used to outline processes for making decisions about AI. As much of the AI regulatory landscape has yet to take shape, current AI governance tends to be based on voluntary frameworks, such as the National Institute of Standards and Technology AI Risk Management Framework, or sector-specific guidelines, such as Good Machine Learning Practice for Medical Device Development. These serve as important industry standards that help organizations like GSK manage risk in the absence of regulatory clarity. When an organization adopts these frameworks, it must also operationalize them through organizational structures such as governance boards, review bodies, and documentation practices.
Responsible AI Governance at GSK
At GSK, responsible use of AI considers the ethical, societal and governance impacts across all the company’s business units and functions. The cross-functional AI Governance Council (AIGC) oversees the ethical adoption of AI/ML and advises on broader AI/ML strategy across the company. It also serves as the ultimate authority for the different business units, which evaluate AI projects against ethical and technical standards. The system is upheld by three main pillars: AI Policy, Governance, and Culture.
AI Policy
GSK ensures AI tools are designed ethically and with forethought, and that they are delivered, embedded, and used for positive impact.
GSK respects and protects the security of company data, privacy interests and rights of individuals, including using data in an ethical manner.
GSK builds and buys safe and reliable AI tools, uses only GSK-approved technology, and ensures appropriate human oversight is defined.
GSK uses data that is as representative as possible, embeds fairness considerations in model development, and deploys models fairly.
GSK is accountable for decisions on how AI is developed or procured, used, and monitored. To ensure transparency, all AI must be included in the AI Register and Accountability Report.
Governance
GSK’s AI governance regime is designed to ensure that all AI tools used within the company – whether externally procured or internally developed – uphold these principles. To achieve this, every AI tool is subject to an Accountability Report, which operationalizes the AI Principles. The report is completed by the Business Owner of the project, who must respond to questions about its potential benefits and harms, fairness considerations, and unsupported uses. They must also answer specific questions about the AI system’s model card and the dataset on which the tool was trained. Completed reports are reviewed by a division-specific cross-functional expert panel with expertise in AI/ML and the relevant domains in which it is being applied, such as pharmacovigilance and clinical operations. The panel then recommends approval or, in rare cases, suspension or termination. The AIGC oversees this process for all GSK departments.
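To make the shape of this workflow easier to picture, the sketch below models a hypothetical Accountability Report record and panel decision as plain Python data classes. The field names, the ReviewOutcome values, and the requires_resubmission helper are illustrative assumptions for this post, not GSK’s actual schema or tooling.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class ReviewOutcome(Enum):
    """Recommendations the expert panel can make (illustrative)."""
    APPROVED = "approved"
    SUSPENDED = "suspended"
    TERMINATED = "terminated"


@dataclass
class AccountabilityReport:
    """A minimal, hypothetical record of what a Business Owner documents."""
    tool_name: str
    business_owner: str
    potential_benefits: List[str]
    potential_harms: List[str]
    fairness_considerations: List[str]
    unsupported_uses: List[str]
    model_card_reference: str       # link or ID for the system's model card
    training_data_summary: str      # description of the dataset the tool was trained on


@dataclass
class PanelReview:
    """A hypothetical decision recorded by the division-specific expert panel."""
    report: AccountabilityReport
    reviewing_division: str         # e.g. pharmacovigilance, clinical operations
    outcome: ReviewOutcome
    conditions: List[str] = field(default_factory=list)  # any follow-up actions

    def requires_resubmission(self) -> bool:
        # A suspended project would need an updated report before re-review.
        return self.outcome is ReviewOutcome.SUSPENDED
```

In practice, records like these would sit alongside the company-wide AI Register referenced in the principles above, giving the AIGC a single view of every tool’s status.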
Culture
While documentation practices like the Accountability Report provide an important record and paper trail for AI governance, the quality of those documents and the care employees take in maintaining risk management are driven by culture. GSK uses training, regular communication and awareness-raising practices to foster a responsible AI culture. For example, in a given month, various departments across GSK, including the cross-functional AI Governance Council, might circulate a policy guide on the Accountability Report process, publish research pieces on issues related to AI in the healthcare space, and collaborate with software developers on the specific risks presented by their projects. The iterative nature of GSK’s AI governance regime – in which project teams must update Accountability Reports when major changes occur – also keeps those teams vigilant and constantly involved in AI governance.
What We've Learned
GSK’s AI governance regime, now entering its second year, has allowed us to learn early on which risks may be project- or user-specific and which have the potential to impact the whole organization. Take, for example, the company’s R&D-specific generative AI assistant, which can analyze and summarize multiple documents and answer questions using general scientific knowledge and data from user-provided documents.
A user-level risk might arise when an R&D scientist queries the LLM assistant and the tool produces results that are biased towards more popular scientific theories, potentially overlooking accurate but underrepresented theories in the training data. This could skew the scientist’s research direction. An organization-level risk might be the reproduction of sensitive GSK data from uploaded documents in the LLM assistant’s output, which could then be shared externally if unnoticed by the user, affecting business functions and data privacy. Through our AI governance system, we documented these risks, ensured mitigation strategies were in place, and established appropriate user training to minimize them.
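To illustrate the second, organization-level mitigation in the most minimal terms, the snippet below sketches a pre-sharing check that flags obviously sensitive markers in an assistant’s output. The marker patterns and function names are hypothetical; a production control would rely on the organization’s data-classification and data-loss-prevention tooling rather than keyword matching.

```python
import re
from typing import List

# Hypothetical markers of sensitive material; a real deployment would use the
# organization's data-classification labels rather than a hard-coded list.
SENSITIVE_MARKERS = [
    r"\bconfidential\b",
    r"\binternal use only\b",
    r"\bGSK-\d{6}\b",  # e.g. an internal document or compound identifier pattern
]


def flag_sensitive_content(assistant_output: str) -> List[str]:
    """Return the marker patterns found in a generated answer, if any."""
    return [
        pattern
        for pattern in SENSITIVE_MARKERS
        if re.search(pattern, assistant_output, flags=re.IGNORECASE)
    ]


def safe_to_share(assistant_output: str) -> bool:
    """True only when no sensitive marker was detected in the output."""
    return not flag_sensitive_content(assistant_output)
```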
Users are now not only more vigilant when using LLM assistants but also show greater confidence in their use of AI tools in general, which expands AI adoption across the company. Ultimately, this leads to greater long-term organizational resilience: users will not only embrace new tools but do so responsibly, knowing that proportionate guardrails are in place to guide them.
Industrial AI Sprint Campaign
techUK brings together industry leaders, innovators and wider stakeholders to showcase the UK’s vibrant Industrial AI ecosystem. Through engaging discussions and shared insights, we explore how Industrial AI is driving innovation across advanced manufacturing, energy, defence, and life sciences.
We dive into key themes like predictive maintenance at scale, human-AI collaboration, and building strong data foundations for confident AI adoption. Alongside these, we’ll address challenges such as skills gaps, data access, cybersecurity, and the importance of keeping people at the heart of AI integration.
Together, we’ll uncover how Industrial AI can boost productivity, fuel economic growth, and pave the way for a resilient, sustainable industrial future.
Ella works as an AI Policy Analyst in GSK’s AI/ML division, focusing on the responsible use of AI in healthcare and life sciences. She is involved in developing ethical governance frameworks for secure data sharing and helps guide GSK’s internal AI governance. This work includes examining the uptake of AI within GSK and the wider pharmaceutical industry to understand broader patterns. Her background includes work on tech policy issues such as AI in the civic sector with Nesta, internet fragmentation with the Internet Society, and election-driven misinformation and hate speech with Internews.