Robust AI Governance as a Path to Organisational Resilience
AI is being woven into critical organizational systems so quickly that it is already, or soon will be, inseparable from them. At GSK, AI integration now spans all levels of the organization, from day-to-day tools like Microsoft Office to advanced research applications in early-stage drug discovery. AI governance is therefore not only about managing the well-documented risks certain AI systems present but also about supporting organizational resilience more broadly. In a heavily regulated industry such as biopharma, where patient safety is at the core of risk management, AI governance is not complementary to overall company strategy; it is core to it.
What is AI Governance?
AI governance refers to a range of mechanisms, including laws, regulations, policies, institutions, and norms, that can be used to outline processes for making decisions about AI. Because much of the AI regulatory landscape is yet to take shape, current AI governance tends to be based on voluntary frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, or sector-specific guidelines, such as Good Machine Learning Practice for Medical Device Development. These serve as important industry standards that help organizations like GSK manage risk in the absence of regulatory clarity. When an organization adopts such frameworks, it must also operationalize them through organizational structures such as governance boards, review bodies, and documentation practices.
Responsible AI Governance at GSK
At GSK, responsible use of AI means considering the ethical, societal, and governance impacts of AI across all of the company’s business units and functions. The cross-functional AI Governance Council (AIGC) oversees the ethical adoption of AI/ML and advises on broader AI/ML strategy across the company. It also serves as the ultimate authority for the business units, which evaluate AI projects against ethical and technical standards. The system rests on three main pillars: AI Policy, Governance, and Culture.
AI Policy
GSK’s Responsible AI Policy is defined by five AI principles:
Ethical Innovation & Positive Impact: GSK ensures AI tools are designed ethically and with forethought, and that they are delivered, embedded, and used for positive impact.
Data Ethics, Privacy & Security: GSK respects and protects the security of company data and the privacy interests and rights of individuals, including by using data in an ethical manner.
Robustness & Reliability: GSK builds and buys safe and reliable AI tools, uses only GSK-approved technology, and ensures appropriate human oversight is defined.
Fairness & Representation: GSK uses data that is as representative as possible, embeds fairness considerations in model development, and deploys models fairly.
Transparency & Accountability: GSK is accountable for decisions on how AI is developed or procured, used, and monitored. To ensure transparency, all AI must be included in the AI Register and Accountability Report.
Governance
GSK’s AI governance regime is designed to ensure that all AI tools used within the company, whether externally procured or internally developed, uphold these principles. To achieve this, every AI tool must complete an Accountability Report, which operationalizes the AI Principles. The report is completed by the project’s Business Owner, who answers questions about the tool’s potential benefits and harms, fairness considerations, and unsupported uses, as well as specific questions about the AI system’s model card and the dataset on which the tool was trained. Completed reports are reviewed by a division-specific cross-functional panel with expertise in AI/ML and in the domains where it is being applied, such as pharmacovigilance and clinical operations. The panel then recommends approval or, in rare cases, suspension or termination. The AIGC oversees this process for all GSK departments.
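To make the mechanics concrete, the sketch below shows one way an Accountability Report record and the panel’s recommendation step could be expressed in code. This is a minimal illustration: the field names, the Recommendation values, and the recommend() logic are hypothetical and do not represent GSK’s actual schema or review criteria.

```python
from dataclasses import dataclass, field
from enum import Enum


class Recommendation(Enum):
    """Possible panel outcomes described in the process above."""
    APPROVE = "approve"
    SUSPEND = "suspend"
    TERMINATE = "terminate"


@dataclass
class AccountabilityReport:
    """Hypothetical record mirroring the questions a Business Owner answers."""
    tool_name: str
    business_owner: str
    benefits: list[str]
    potential_harms: list[str]
    fairness_considerations: list[str]
    unsupported_uses: list[str]
    model_card_ref: str        # pointer to the AI system's model card
    training_dataset_ref: str  # pointer to training-dataset documentation
    mitigations: dict[str, str] = field(default_factory=dict)  # harm -> mitigation


def recommend(report: AccountabilityReport) -> Recommendation:
    """Illustrative review rule: every documented harm needs a mitigation."""
    unmitigated = [h for h in report.potential_harms if h not in report.mitigations]
    if unmitigated:
        # In rare cases the panel recommends suspension until gaps are closed.
        return Recommendation.SUSPEND
    return Recommendation.APPROVE
```

In practice the panel’s judgment is qualitative and domain-specific; the value of a structured record like this is that initial review, approval, and later re-review after major changes all operate on the same documented fields.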
Culture
While documentation practices like the Accountability Report provide an important record and paper trail for AI governance, the quality of those documents, and the care employees take in managing risk, is driven by culture. GSK uses training, regular communication, and awareness-raising practices to foster a responsible AI culture. In a given month, for example, various departments across GSK, including the AIGC, might circulate a policy guide on the Accountability Report process, publish research pieces on AI issues in the healthcare space, and collaborate with software developers on the specific risks presented by their projects. The iterative nature of GSK’s AI governance regime, in which projects must update their Accountability Reports when major changes occur, also keeps project teams vigilant and continuously engaged in AI governance.
What We've Learned
GSK’s AI governance regime, now entering its second year, has allowed us to learn early on which risks are project- or user-specific and which have the potential to affect the whole organization. Take, for example, the company’s R&D-specific generative AI assistant, which can analyze and summarize multiple documents and answer questions using general scientific knowledge and data from user-provided documents.
A user-level risk might arise when an R&D scientist queries the LLM assistant and the tool produces results biased towards theories that are popular in its training data, overlooking accurate but underrepresented ones; this could skew the scientist’s research direction. An organization-level risk might be the assistant reproducing sensitive GSK data from uploaded documents, which could then be shared externally if the user fails to notice it, affecting business functions and data privacy. Through our AI governance system, we documented these risks, put mitigation strategies in place, and established appropriate user training to minimize them.
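As one illustration of the organization-level mitigations mentioned above, the sketch below shows a simple output filter that flags sensitive markers in an assistant response before it can be shared. The pattern list and function names are hypothetical; a production guardrail would rely on far more robust controls, such as trained classifiers and document-level access management.

```python
import re

# Hypothetical markers of sensitive content. A real deployment would pair
# pattern checks with classifiers and data-loss-prevention tooling.
SENSITIVE_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\bGSK[- ]internal\b", re.IGNORECASE),
    re.compile(r"\bcompound[- ]?id:\s*\w+", re.IGNORECASE),
]


def flag_sensitive(text: str) -> list[str]:
    """Return the patterns matched in an assistant response, if any."""
    return [p.pattern for p in SENSITIVE_PATTERNS if p.search(text)]


def review_before_sharing(response: str) -> str:
    """Hold a response for human review when sensitive markers are detected."""
    hits = flag_sensitive(response)
    if hits:
        return f"Held for review: matched {', '.join(hits)}"
    return response


if __name__ == "__main__":
    print(review_before_sharing("Summary of public literature on kinase inhibitors."))
    print(review_before_sharing("Per the GSK-internal memo, compound-ID: X123 ..."))
```

A filter like this complements, rather than replaces, the user training described above: it catches obvious leaks automatically, while trained users remain the last line of defense.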
Users are now not only more vigilant when using LLM assistants but also more confident in their use of AI tools in general, which expands AI adoption across the company. Ultimately, this builds long-term organizational resilience: users embrace new tools and do so responsibly, knowing that proportionate guardrails are in place to guide them.
Authors
Ella Shoup
AI Policy Analyst, GSK
Ella works as an AI Policy Analyst in GSK’s AI/ML division, focusing on responsible AI use in healthcare and life sciences. She is involved in developing ethical governance frameworks for secure data sharing and helps guide GSK’s internal AI governance, including examining the uptake of AI within GSK and the wider pharmaceutical industry to understand broader patterns. Her background includes work on a range of tech policy issues, including AI in the civic sector with Nesta, Internet fragmentation with the Internet Society, and election-driven misinformation and hate speech with Internews.