31 Jul 2025
by Ella Shoup

Robust AI Governance as a Path to Organisational Resilience

The speed with which AI is being woven into critical organizational systems means it is now, or soon will be, inseparable from them. At GSK, AI integration spans all levels of the organization, from day-to-day tools like Microsoft Office to advanced research applications in early-stage drug discovery. AI governance is therefore not only about managing the range of well-documented risks certain AI systems present, but also about supporting organizational resilience more broadly. In a heavily regulated industry such as biopharma, where patient safety is at the core of risk management, AI governance is not complementary to overall company strategy; it is core to it.

What is AI Governance? 

AI governance refers to a range of mechanisms, including laws, regulations, policies, institutions, and norms, that can be used to shape how decisions about AI are made. Because much of the AI regulatory landscape has yet to take shape, current AI governance tends to rest on voluntary frameworks, such as the National Institute of Standards and Technology AI Risk Management Framework, or sector-specific guidelines, such as Good Machine Learning Practice for Medical Device Development. These serve as important industry standards that help organizations like GSK manage risk in the absence of regulatory clarity. When an organization adopts these frameworks, it must also operationalize them through organizational structures such as governance boards, review bodies, and documentation practices.

Responsible AI Governance at GSK 

At GSK, responsible use of AI means considering the ethical, societal and governance impacts of AI across all the company’s business units and functions. The cross-functional AI Governance Council (AIGC) oversees the ethical adoption of AI/ML and advises on broader AI/ML strategy across the company. It also serves as the ultimate authority over the business units, which evaluate AI projects against ethical and technical standards. The system rests on three main pillars: AI Policy, Governance, and Culture.

AI Policy 

GSK’s Responsible AI Policy is defined by five AI principles:

Ethical Innovation & Positive Impact: GSK ensures AI tools are designed ethically and with forethought, and that they are delivered, embedded, and used for positive impact.

Data Ethics, Privacy & Security: GSK respects and protects the security of company data and the privacy interests and rights of individuals, including by using data in an ethical manner.

Robustness & Reliability: GSK builds and buys safe and reliable AI tools, uses only GSK-approved technology, and ensures appropriate human oversight is defined.

Fairness & Representation: GSK uses data that is as representative as possible, embeds fairness considerations in model development, and deploys models fairly.

Transparency & Accountability: GSK is accountable for decisions on how AI is developed or procured, used, and monitored. To ensure transparency, all AI must be included in the AI Register and Accountability Report.

Governance  

GSK’s AI governance regime is designed to ensure that all AI tools used within the company – whether externally procured or internally developed – uphold these principles. To achieve this, every AI tool is subject to an Accountability Report, which operationalizes the AI Principles. The report is completed by the project’s Business Owner, who must respond to questions about the tool’s potential benefits and harms, fairness considerations, and unsupported uses, as well as specific questions about the AI system’s model card and the dataset on which the tool was trained. Once completed, the report is reviewed by a division-specific cross-functional panel with expertise in AI/ML and the relevant domains in which it is being applied, such as pharmacovigilance and clinical operations. The panel then recommends approval or, in rare cases, suspension or termination. The AIGC oversees this process for all GSK departments.
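To make the shape of this process concrete, the sketch below models a simplified Accountability Report as a data structure in Python. It is a minimal sketch: every field name and the `record_major_change` helper are hypothetical illustrations of the questions described above, not GSK’s actual schema or tooling.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Decision(Enum):
    """Outcomes a review panel can recommend."""
    APPROVE = "approve"
    SUSPEND = "suspend"        # rare
    TERMINATE = "terminate"    # rare


@dataclass
class AccountabilityReport:
    """Hypothetical record completed by a project's Business Owner."""
    tool_name: str
    business_owner: str
    benefits: list[str]                  # potential benefits of the tool
    potential_harms: list[str]
    fairness_considerations: list[str]
    unsupported_uses: list[str]          # uses the tool is not approved for
    model_card_ref: str                  # pointer to the system's model card
    training_data_summary: str           # provenance of the training dataset
    last_reviewed: date | None = None    # set when the expert panel reviews
    revisions: list[str] = field(default_factory=list)


def record_major_change(report: AccountabilityReport, change: str) -> None:
    """A major change re-opens the report for panel review."""
    report.revisions.append(change)
    report.last_reviewed = None  # stale until the panel reviews again
```

The `last_reviewed` reset illustrates the iterative element described under Culture below: a major change leaves the report stale until the expert panel has reviewed it again.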

Culture 

While documentation practices like the Accountability Report provide an important record and paper trail for AI governance, the quality of those documents, and the care employees take over risk management, is driven by culture. GSK uses training, regular communication and awareness-raising activities to foster a responsible AI culture. In a given month, for example, teams across GSK, including the cross-functional AIGC, might circulate a policy guide on the Accountability Report process, publish research pieces on AI issues in the healthcare space, and work with software developers on the specific risks presented by their projects. The iterative nature of GSK’s AI governance regime – projects must update their Accountability Reports when major changes occur – also keeps project teams vigilant and continuously involved in AI governance.

What We've Learned 

GSK’s AI governance regime, now entering its second year, has allowed us to learn early on which risks are project- or user-specific and which have the potential to impact the whole organization. Take, for example, the company’s R&D-specific generative AI assistant, which can analyze and summarize multiple documents and answer questions using general scientific knowledge and data from user-provided documents.

A user-level risk might arise when an R&D scientist queries the LLM assistant and the tool produces results biased towards more popular scientific theories, overlooking accurate but underrepresented theories in the training data; this could skew the scientist’s research direction. An organization-level risk might be the reproduction of sensitive GSK data from uploaded documents in the LLM assistant’s outputs, which could then be shared externally if the user does not notice, impacting business functions and data privacy. Through our AI governance system, we documented these risks, put mitigation strategies in place, and established appropriate user training to minimize them.
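One way to picture this distinction is as a simple risk register that tags each documented risk with the level at which it operates. The following is an illustrative sketch only; the `RiskScope` labels and the specific mitigation entries are assumptions rather than GSK’s actual register format.

```python
from dataclasses import dataclass
from enum import Enum


class RiskScope(Enum):
    """Level at which a documented risk operates."""
    USER = "user-level"                  # affects an individual's work
    ORGANIZATION = "organization-wide"   # affects business functions broadly


@dataclass
class RiskEntry:
    """Hypothetical risk-register entry for an AI tool."""
    description: str
    scope: RiskScope
    mitigations: list[str]


# The two example risks from the text, expressed as register entries.
llm_assistant_risks = [
    RiskEntry(
        description="Outputs skew towards popular scientific theories, "
                    "overlooking accurate but underrepresented ones",
        scope=RiskScope.USER,
        mitigations=["user training on critically reviewing outputs"],
    ),
    RiskEntry(
        description="Sensitive GSK data from uploaded documents reproduced "
                    "in outputs and shared externally unnoticed",
        scope=RiskScope.ORGANIZATION,
        mitigations=["output-handling controls", "user training"],
    ),
]
```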

Users are now not only more vigilant when using LLM assistants but also more confident in their use of AI tools in general, which expands AI adoption across the company. Ultimately, this builds long-term organizational resilience: users will not only embrace new tools but do so responsibly, knowing that proportionate guardrails are in place to guide them.

 


Authors

Ella Shoup

AI Policy Analyst, GSK

Ella works as an AI Policy Analyst in GSK’s AI/ML division, focusing on responsible AI use in healthcare and the life sciences. She is involved in developing ethical governance frameworks for secure data sharing and helps guide GSK’s internal AI governance, including examining the uptake of AI within GSK and the wider pharmaceutical industry to understand broader patterns. Her background includes work on tech policy issues such as AI in the civic sector with Nesta, internet fragmentation with the Internet Society, and election-driven misinformation and hate speech with Internews.