Why a companywide effort is key to responsible and trustworthy AI adoption
Understanding AI adoption across the business is critical to driving innovation and building trust, as Katie Fowler, Director of Responsible Business at the Thomson Reuters Foundation, explains.
Artificial intelligence is transforming business at a breathtaking pace, with PwC estimating it could contribute over $15 trillion to the global economy by 2030. Yet public trust in AI remains fragile – a recent global survey found 58% of people view AI systems as untrustworthy – and regulators are sharpening their focus. The EU’s AI Act, for example, threatens fines of up to 7% of global annual turnover for the most serious violations.
As a result, companies are recognising that comprehensive AI governance frameworks are not just a compliance box to tick, but a strategic imperative. Effective AI governance means going beyond technical model validation and the development of specific AI tools to foster transparency around AI adoption across the whole business. Crucially, it demands collective action across departments to ensure the power of AI is harnessed responsibly, enabling innovation while mitigating risks to people, planet and the business itself.
Beyond technical checks: a cross-departmental approach
Governing AI effectively must be a team effort. According to McKinsey, only about 18% of companies have any system (such as an AI oversight council or board) to ensure their AI is used ethically. This underscores a major gap: AI governance cannot be left to the IT or data science team alone.
From legal compliance and data privacy to HR and customer trust, the potential risks and impacts of AI cut across the entire organisation. This requires input and oversight from many functions:
- Procurement & supply chain: Ensuring any AI systems acquired from vendors meet your company’s ethical and security standards.
- Legal & compliance: Monitoring regulatory requirements and ensuring AI use (e.g. in hiring or customer analytics) adheres to laws and ethical standards.
- Data management: Implementing safeguards for privacy and fairness, and mitigating bias in the data and algorithms that drive AI decisions.
- Human oversight: Defining roles for the human review of AI outcomes and clear escalation paths when automated decisions significantly affect individuals.
- Workforce impact: Anticipating the potential effect of AI on jobs and investing in employee upskilling or reskilling to adapt to new AI-enabled workflows.
- Environmental considerations: Assessing and managing the carbon footprint and energy usage of training and deploying AI models.
- Diversity & inclusion: Involving diverse stakeholders in AI development and auditing systems to prevent discrimination or bias and promote inclusive outcomes.
- Risk monitoring & response: Continuously auditing AI system performance, reporting incidents or near-misses, and refining processes to address new risks.
Leading a culture of transparency
Taking a company-wide approach like this embeds AI ethics and risk management into everyday business operations. It also promotes transparency in the broadest sense: not only making algorithms more explainable, but also openly communicating where and how AI is being used and governed within the organisation.
The weight of evidence continues to point towards corporate transparency breeding more responsible and legally resilient businesses. By demystifying AI systems for their employees, customers and regulators, organisations can start to bridge the trust gap and pre-empt problems before they escalate.
Governance frameworks in action
How can companies put these principles into practice? One emerging tool is the AI Company Data Initiative (AICDI) powered by the Thomson Reuters Foundation. This voluntary, free framework helps corporate leaders map how AI is developed and used across their operations. Through a comprehensive questionnaire, it guides firms through all the areas outlined above – from AI procurement to data bias – allowing them to benchmark their readiness, identify gaps, and improve accountability.
Companies can undertake a thorough self-audit, which serves as a roadmap to best practice, while highlighting potential risks that need mitigation. The AI Company Data Initiative is grounded in UNESCO’s Recommendation on the Ethics of AI, the first global standard on AI ethics, and is designed for use by companies of all sizes.
Turning ethical AI principles into practice
Early adopters of governance frameworks like this stand to gain on several fronts. Companies can encourage innovation – confidently deploying AI in more areas – while actively managing the associated risks.
Internally, taking a holistic approach to AI adoption creates a baseline to measure progress and helps leadership pinpoint where policies or data are lacking. Mapping AI use now also enables companies to get ahead of emerging legislation and ensure future compliance.
As corporate AI use becomes more widespread, prioritising transparency in this way can build trust with a community of stakeholders. Externally, putting governance frameworks into practice demonstrates to investors, customers, and regulators that the company is innovating on responsible, ethical AI – not just as a PR pledge but through concrete action.
Driving responsible AI for long-term value
Comprehensive AI governance is fast becoming a hallmark of forward-looking companies. By enlisting every department in a shared governance effort, businesses can navigate the dual challenge of innovation and responsibility.
The result is not only compliance with new rules or the avoidance of scandals, but a foundation of transparency, accountability and trust that will underpin AI-powered growth in the years ahead. In a world increasingly shaped by intelligent systems, such an approach is essential to build public confidence, safeguard stakeholders, and unlock AI’s full potential for good.

techUK - Seizing the AI Opportunity
For the UK to fully seize the AI opportunity, citizens and businesses must have trust and confidence in AI. techUK and our members champion the development of reliable and safe AI systems that align with the UK’s ethical principles.
AI assurance is central to this mission. Our members engage directly with policy makers, regulators, and industry leaders to influence policy and standards on AI safety and ethics, contributing to a responsible innovation environment. Through these efforts, we help build public trust in AI adoption whilst ensuring our members stay ahead of regulatory developments.
Get involved: techUK runs a busy calendar of activities including events, reports, and insights to demonstrate some of the most significant AI opportunities for the UK. Our AI Hub is where you will find details of all upcoming activity. We also send a monthly AI newsletter which you can subscribe to here.
