Summit Round-Up: Digital Ethics Summit 2025

The ninth annual Digital Ethics Summit, hosted by techUK on 3 December 2025, marked a moment for the community to come together, take stock of lessons learned in 2025, and look ahead to 2026. Since 2017, the summit has brought together leaders and experts from across the digital ethics landscape to reflect on and assess the progress made. 

Key developments highlighted at the summit included the maturation of AI assurance, the shift in global discourse from AI safety to security, and the UK government's launch of initiatives like the Trusted Third-Party AI Assurance Roadmap and the Fairness Innovation Challenge. The discussions emphasised the importance of building justified trust through robust scientific testing, maintaining human agency as AI adoption accelerates, and embedding inclusion at every stage of AI design and deployment. Throughout the event, speakers stressed the essential need for cross-sector collaboration, organisation-wide AI literacy, and moving beyond compliance checkbox exercises toward genuine quality frameworks that centre real-world outcomes and human flourishing. 

Reflections on 2025 

The summit discussed progress in AI adoption, with job creation alongside labour market disruption across sectors, and AI tools becoming embedded in daily operations across industries. However, this rapid advancement brought pressing challenges to the forefront: AI-enabled cybercrime and deepfake fraud continue to threaten digital trust, while managing the risks of rapidly evolving technologies (particularly agentic and autonomous AI) presents ongoing difficulties around human oversight and liability frameworks.  

Looking Ahead to 2026 

The year ahead demands greater regulatory clarity as digital regulation continues to evolve, creating uncertainty for organisations seeking responsible AI adoption pathways. While the UK possesses strong foundational tools, including regulatory sandboxes and robust sector laws on privacy and safety, practical guidance and frameworks remain essential as legislation matures.  

As we look to 2026, the summit emphasised the importance of shifting from predominantly risk-focused discourse to bold, optimistic thinking about AI's societal impact. Rather than fixating on profitability metrics or job displacement fears, leaders are urged to broaden their vision to encompass human flourishing, community needs, and long-term capability building. The UK's success depends less on international comparisons and more on strategically leveraging national strengths, particularly around data strategy and the world-leading AI Assurance ecosystem and AI Security Institute. Key priorities suggested and shared at the summit included: 

  • Maintaining people-centred approaches throughout technological progress 

  • Building organisation-wide AI literacy so every employee understands both risks and responsible use 

  • Taking calculated risks while challenging overly conservative mindsets 

  • Avoiding tunnel vision on AI alone, as emerging technologies like digital twins, robotics, quantum computing, and digital identity require equal attention and governance frameworks 

  • Navigating the governance challenges of agentic systems and addressing legal frameworks for applications like companion AI 

  • Closing critical gaps in board-level expertise, as businesses often lack the understanding needed to properly assess and deploy AI systems 

  • Developing more sector-specific tools for education and child safety, which remain insufficient 

  • Leveraging the UK's technical talent base, institutional strengths, and leading AI assurance ecosystem to test and iterate responsible AI implementation 

  • Examining the role of procurement teams, investor due diligence, and AI insurance as mechanisms for incentivising responsible AI and assurance in the market 

Above all, trust remains paramount, requiring transparent systems, collaborative regulatory frameworks that span sectors and borders, clear sources of expert guidance, and a continued commitment to solving the discoverability problem so that responsible AI practices can be shared and scaled across organisations of all sizes. 

Summit Outputs and Acknowledgements  

On the day of the summit, techUK launched our sector-specific AI assurance paper exploring current implementations across UK sectors. DSIT also announced the Fairness Innovation Challenge results, showcasing innovative solutions to bias and discrimination across higher education, finance, healthcare, and recruitment. 

The event was made possible through collaboration with silver sponsor Clifford Chance, alongside speaking sponsors Kainos, Advai, Synoptix, and KPMG. Our distinguished academic and institutional partners included the Ada Lovelace Institute, The British Academy, The Royal Academy of Engineering, Open Data Institute, The Alan Turing Institute, and The Royal Society. 

Thank you to everyone who was able to join us. Please note all sessions have been recorded and will be available on the techUK YouTube channel shortly. 

Reflections and Progress: The 2025 Digital Ethics Summit Agenda

9:30 – Welcome and introductory remarks

The ninth annual Digital Ethics Summit marked a moment for reflection amid technological and geopolitical shifts. Since 2017, this gathering has evolved alongside AI's transformation, through ChatGPT's "iPhone moment" in 2022, to today's AI-dominated global markets and geopolitics. 

Despite 2025's shifting political headwinds, steady progress continued. The UK's AI Security Institute (formerly Safety Institute) maintained critical evaluation work on frontier models. The Paris AI Action Summit produced international safety reports, while DSIT's Trusted Third-Party AI Assurance Roadmap advanced professionalisation of the assurance ecosystem.  

Semantic shifts emerged: from ethics to assurance, safety to security, principles to compliance. Yet this rebalancing period makes ethical foresight more crucial than ever. The UK's world-leading digital ethics community, mapped in techUK's responsible AI practitioner report, demonstrated resilience by maintaining focus on operationalising principles, addressing agentic AI challenges, and launching initiatives like the Fairness Innovation Challenge, proving that work continues despite global uncertainty and competitive pressures. 

9:45 – Ministerial Keynote Address

Minister Kanishka Narayan MP opened the Digital Ethics Summit 2025 by emphasising that AI is no longer a future technology but a fundamental part of daily decisions affecting people's lives. The critical question has shifted from whether we use AI to how we use it, making responsible and safe deployment essential. The minister outlined the government's commitment to building a strong, trusted AI assurance ecosystem, highlighting the AI Opportunities Action Plan published in January and the Trusted Third-Party AI Assurance Roadmap released in September. These initiatives aim to drive quality and growth in the UK's AI assurance market, addressing a major barrier to adoption: ethical concerns among business leaders. Assurance, the minister argued, provides the evidence-based solution to demonstrate that AI can be trusted.  

Marking International Day of Persons with Disabilities, the minister reinforced that digital ethics is fundamentally about people, with inclusion needing to be embedded at every stage of AI design, development, and deployment. High-quality assurance ensures systems remain accessible, usable, and fair for everyone, preventing technological progress from leaving anyone behind. Ultimately, the minister declared that the success of AI in the UK will not be measured by technological breakthroughs alone, but by the trust and confidence people have in the systems shaping their lives, with the goal of setting a global gold standard for AI that is safe, fair, and serves the public good. 

9:50 – Ministerial Fireside with techUK Deputy CEO

In conversation with Anthony Walker from techUK, Minister Narayan outlined priorities for advancing AI assurance and adoption in the UK. The minister emphasised that, unlike mature technologies, AI assurance transcends mere compliance: it is fundamentally about building trust that enables people and organisations to feel genuine agency in AI deployment. With UK productivity gains hinging on the pace of AI adoption, and adoption depending on widespread confidence, the minister expressed particular interest in identifying examples where effective assurance has successfully scaled AI deployment in ways that prove both fulfilling for individuals and productive for organisations.  

A second priority involves maintaining human understanding as AI spreads throughout organisations, ensuring clear boundaries between human control and AI decision-making to preserve long-term human agency. Looking ahead to the coming year, the minister highlighted the powerful alignment between commercial incentives, ethical considerations, and national interests in the UK's approach, creating conditions where productivity, adoption, and assurance reinforce one another.  

The minister emphasised how the UK's AI Security Institute provides unparalleled government capability and deep understanding of frontier AI, positioning the country as a significant player in global governance conversations despite geopolitical shifts, with the upcoming India summit offering an important opportunity to demonstrate leadership. The minister also acknowledged the technological shift toward open weight models, which requires evolved governance approaches for more diffused deployment scenarios.  

10:00 – The realities of implementing digital ethics in 2025: What have we learned this year?

The 2025 landscape revealed significant progress in AI adoption, with over 86,000 new jobs created in the sector, though concerns about AI-enabled cybercrime and deepfake fraud remain pressing. Regulating rapidly evolving technologies, particularly agentic and autonomous AI, presents ongoing challenges around human oversight and liability frameworks. Panellists pointed to gaps emerging in several areas: companion AI lacks adequate legal coverage despite serious psychological risks; businesses often lack board-level expertise to properly understand and deploy AI; and sector-specific tools for education and child safety remain insufficient. Speakers emphasised the need to focus on measurable real-world outcomes rather than speculative futures, cautioning against rushed legislation before the landscape is fully understood. Key priorities include strengthening data transparency, attracting technical talent through supportive immigration policies, and carefully considering deployment contexts, particularly avoiding predictive AI in sensitive sectors like welfare and policing. The UK's technical talent base positions it well for testing and iterating responsible AI implementation. 

11:20 – Has 2025 been the breakthrough year for AI assurance and could the UK take the lead?

The panel explored whether 2025 marked a breakthrough year for AI assurance and what the UK's potential leadership role could be. Panellists spoke about 2025 as a transitional year in which conversations began to move beyond hype towards addressing practical implementation challenges, focusing on building structured mechanisms for justified trust through robust scientific testing and benchmarks. They emphasised that AI assurance must be iterative and context-specific, providing crucial insights into system behaviour, including bias detection, while also acknowledging the impossibility of testing for everything. The discussion highlighted the importance of cross-sector collaboration, particularly learning from financial services' established data standards and regulatory frameworks, and stressed the need for two-way dialogue between industry and standard-setters to operationalise regulatory principles effectively. Looking toward 2026, priorities include live AI testing, sharing use cases while respecting business confidentiality, developing practical metrics, and maintaining the UK's competitive position by working dynamically while leveraging existing strengths in standards and assurance practices. 

12:05 – AI in the Public Sector: Lessons learned from Health, Defence, Justice and Education (Morning Breakout – Governance & Accountability)

Public sector AI deployment offers significant efficiency gains while fundamentally reshaping citizen interactions with essential services, from NHS diagnostics to benefit assessments. These applications carry unique responsibilities demanding tailored assurance and ethical frameworks, particularly in sensitive areas like defence where autonomous decision-making raises critical accountability questions. Getting data foundations right emerged as fundamental in this discussion, with proper governance underpinning all trustworthy systems. 

The panel noted that the health sector demonstrates effective practice through robust multi-stakeholder mechanisms that challenge and guide AI regulation. Justice applications require careful navigation of fairness, bias, and due process concerns when identifying appropriate use cases. 

Key barriers in public sector AI include data silos, legacy systems, and procurement processes that slow responsible implementation. Cross-sector lessons offer valuable insights, though specifics must adapt to individual contexts given AI's dynamic nature. Panellists stressed the importance of having diverse teams from across different parts of an organisation to help combat bias, and to introduce new perspectives and offer scrutiny. 

Both government departments and technology suppliers have key roles to play in ensuring responsible AI deployment. Industry support strengthens public sector assurance through shared best practices, technical expertise, and standards collaboration. Successful deployment hinges on evidence-based approaches demonstrating system trustworthiness while maintaining public confidence in government services increasingly shaped by AI technologies. However, panellists noted the need to hold each party to account, guarding against practices like greenwashing and over-inflated ESG credentials. 

12:05 – Interactive workshop with the Royal Society: Happy International Day of Persons with Disabilities! Exploring Inclusive Digital Technologies (Morning Breakout – Governance & Accountability)

This interactive breakout session facilitated by the Royal Society explored how data-driven digital technologies could reduce everyday barriers faced by disabled people – when disabled people were meaningfully involved in the process. The Royal Society's Disability technology report (June 2025) demonstrated that inclusively designed, sustainable digital assistive technologies (whether for work, play, rest, or care) could create a more accessible society for all. In the AI era, built on vast datasets, the report emphasised innovative research methods like "small data" approaches that derive insights from limited data, enabling the development of personalised digital assistive technologies. 

Attendees explored key areas including travel, gaming, music, health and social care, and work, examining how current practices supported, or failed to support, inclusive data collection, sustainable technology, co-design principles, digital inclusion, and smartphones as assistive devices. On this International Day of Persons with Disabilities, the workshop offered opportunities to discuss and share practical approaches to inclusive digital innovation, and participants took away insights applicable to a wide range of technology development. 

12:05 – What does the spectrum of open and closed models mean for developers trying to democratise access? (Morning Breakout – Governance & Accountability) 

The open vs closed source debate is evolving, raising the question of whether the narrative should shift toward a spectrum-based model. If so, what would this mean in practice? The panel discussed how globally, discussion around open-source AI is moving higher on policy agendas. China is outperforming many countries, while the US, Canada, and other G7 nations are lagging behind, and the UK is further behind still. 

There are trade-offs between open and closed source approaches, particularly in how they contribute to return on investment, economic growth, and startup development. More actors need the capability to experiment, yet barriers to implementation remain significant. 

Democratisation of AI requires the ability to build models, not merely use them. For the UK to contribute to a balanced system incorporating both open and closed source models, it will need open access to datasets, such as through the National Data Library, greater funding for open-source development, increased capacity across universities and institutions, and strong but enabling regulation. Government should lead by example by expanding its own range of AI use cases.  

12:50 – Meet the Responsible AI Practitioners: a view from the frontlines of AI adoption

Responsible AI teams vary significantly across organisations, requiring multidisciplinary approaches that deliver meaningful impact beyond "ethics theatre." Practitioners emphasise that governance should enable innovation rather than slow it down, moving at the speed of product development while providing teams with necessary tools and training. A critical insight emerged that most AI failures are human-based rather than technical, requiring executive-level engagement and robust frameworks that extend beyond mere regulatory compliance. The "three lines of defence" model helps embed governance into existing business structures, though focusing solely on legal compliance can create tunnel vision and miss broader systemic risks. 

Essential skills include curiosity, comfort with ambiguity, and widespread AI literacy across organisations. Practitioners stressed the importance of early conversations during system design, establishing clear values and measurable metrics to guide deployment. The community faces both opportunities and urgent challenges: building trustworthy systems that thoughtfully handle power, automation, and uncertainty while navigating emerging regulations. With AI's growing capability to address major issues alongside potential harms, collective professional responsibility is essential for ensuring ethical implementation. 

2:20 – Afternoon Fireside – Human Centered Innovation in 2025: What future are we building?

Dr. Rumman Chowdhury joined the Digital Ethics Summit virtually to explore the critical question of human-centered innovation and the future we're building with AI. Chowdhury explained that generative AI fundamentally differs from previous technologies because it acts as a knowledge synthesis machine, taking content and synthesising it to provide outputs, which raises concerning questions about veracity and hallucinations. With the emergence of agentic AI, systems now theoretically take action on behalf of users, raising profound questions about how we explain our preferences and where humans fit in this evolving landscape. Unlike previous industrial revolutions that automated manual labour, AI represents the first time we've automated knowledge work itself, leading to widespread concern about human agency and purpose.  

Reflecting on her 2018 TED talk on moral outsourcing, Chowdhury noted that the anthropomorphising language we use about AI, saying "AI diagnoses disease" or "AI replaces teachers", removes human accountability by imparting intention and will onto AI systems. This linguistic pattern absolves developers of responsibility for outputs, and today we're seeing consequences manifesting in AI psychosis and parasocial relationships as people believe AI is alive and has free will, when it's actually simple mimicry based on design decisions. Chowdhury emphasised that AI doesn't do anything humans haven't designed it to do, yet this cognitive dissonance allows engineers to claim they're "just doing their job" when harmful outcomes emerge.  

On red teaming and evaluation, she explained how generative AI solved the accessibility problem by enabling everyday people to test AI systems based on lived experience rather than programming expertise. Her work focuses on broadening test and evaluation because whoever defines evaluation essentially defines what success means, and current benchmarks prioritise technical capabilities over societal impact.  

Looking toward 2026, Chowdhury highlighted positive progress in evaluation diversity, with more foundations developing benchmarks, but emphasised the critical need to solve the discoverability problem, helping companies adopt responsible AI practices when they lack clear regulation or unified guidance, particularly for third-party vendor evaluation, internal AI uses, and empowering non-technical employees to use AI responsibly and effectively. 

2:35 – From Safety to Security – Just Semantics or A New Focus?

The panel explored whether the shift from "safety" to "security" in AI governance represents a fundamental change or merely semantic reframing. The consensus was that the shift is predominantly semantic: panellists from industry and research agreed the work hasn't fundamentally changed, since capabilities and context drive evolution, not terminology. 

The discussion revealed that recent international AI summits actually focused on catastrophic risks, including cyber attacks and weapons development, so security issues have always been part of the discussion; in many languages, safety and security are even indistinguishable concepts, making the English-language debate somewhat artificial. The shift instead reflects changing geopolitical realities as countries grapple with their role in an AI-dominated world, though assumptions about which nations possess truly frontier capabilities were challenged during discussion.  

Representatives from the AI Security Institute emphasised that evolving capabilities, unprecedented adoption rates, and use case proliferation drive their work more than terminology, with resilience becoming the critical new focus, moving from purely preventive approaches to preparing for, absorbing, and responding to AI risks when they occur. Industry perspectives highlighted that external environmental changes, including over a billion people globally now using AI tools and nation-state actors deploying AI in influence operations, make clarity around evolving risks essential, while core responsible AI principles remain constant even as implementation approaches adapt to new frameworks and legislation.  

Legal and policy experts observed that safety language triggers associations with existential risk among policy communities, while security resonates more concretely through concepts like cyber attacks and infrastructure protection, representing a maturation where previously abstract challenges are now discussed in operational, instrumental terms that policymakers can better grasp and act upon.  

The panel ultimately agreed that security language helps ground conversations for decision-makers but risks abstracting human impact; a safety-focused approach better centres how errors affect real people. This is particularly crucial given that rapid adoption means vastly more users encounter these systems than they were tested for, creating complexity in both technology and deployment contexts that challenges traditional accountability frameworks and demands new approaches to governance. 

3:20 – Addressing bias in AI: Lessons from the Fairness Innovation Challenge (Afternoon Breakouts - Participatory Design & Public Benefit)

How can we ensure our AI systems are fair, transparent, and compliant with evolving regulation? How do we move from AI ethics principles to real-world impact? This panel presented the outcomes of the Fairness Innovation Challenge (FIC), whose results were announced at this year's summit. The FIC is a cross-sector initiative delivered in partnership by DSIT, Innovate UK, the ICO, and the EHRC to drive innovative solutions to bias and discrimination in AI systems. 

The session featured the four winning projects across higher education, finance, healthcare, and recruitment, as they shared how they developed and tested innovative tools to make AI fairer, more transparent, and aligned with emerging regulation. From auditing CV screening algorithms and improving fairness in educational tools, to reducing bias in clinical AI systems and designing LLM fairness toolkits for finance, each project highlighted concrete approaches to ethical AI. Through discussion of lessons learned, socio-technical design choices, and regulatory challenges, this session offered a look at what it really took to assure fairness in AI. 

3:20 – Interactive Workshop with the Ada Lovelace Institute and the Royal Academy of Engineering: A Participatory Design Workshop – AI with participation, for public benefit (Afternoon Breakouts - Participatory Design & Public Benefit)

This hands-on workshop made public interest central to AI design from the ground up. Participants worked collaboratively to explore how public interest and participation could support designing and reimagining AI systems to work for people and communities. Through a structured exercise, attendees explored how thinking about public benefits and participatory approaches could reshape, and were already reshaping, AI development. 

The session invited participants to explore what terms like 'public benefit' and 'participation' meant for them and their organisations, and to discuss the importance of capturing the nuances of what 'public benefit' could mean for different publics. Example projects where visions for public benefit collided in practice were shared to inspire participants to translate these visions into their own work, along with tips on how to work productively through the tensions that might arise in the process. 

Whether technologists seeking more inclusive design methods, policy makers interested in democratic AI governance, or community advocates looking to influence AI development, attendees found the workshop a generative space for collaborative exploration of what 'public benefit AI' meant in practice, who needed to be heard, and how to make it happen. 

3:20 – Human-Computer Interaction: Robotics and other Embodied Intelligence (Afternoon Breakouts - Participatory Design & Public Benefit)

Growing public openness toward autonomous systems presents opportunities for adoption, though businesses require stronger education on long-term risks associated with embodied AI technologies. The complexity of robotics and physical AI systems amplifies existing ethical and practical challenges beyond traditional software applications. 

Regulatory approaches must evolve through cross-sector collaboration, establishing mechanisms for sharing knowledge and lessons across industries. Gradual implementation proves essential for building and maintaining public trust as these technologies enter physical spaces and daily life. Critical accountability questions remain unresolved: clear distinctions between data creators and users must be established, with practical frameworks defining responsibility for privacy, transparency, and oversight in embodied systems. As AI converges with IoT, digital twins, and extended reality, each technological intersection demands careful, specialised consideration. 

Robust standards emerge as foundational, not merely for compliance, but for establishing trustworthiness and long-term resilience. Companies must shift from checkbox regulation toward genuine quality frameworks. The central challenge: embodied AI makes prediction and control significantly harder. Responsibility spans regulators, government, and businesses collectively, all working to reassure end users and enable responsible adoption of physically integrated intelligent systems. 

4:25 – Looking to the year ahead: Do we have the institutions, infrastructure, and resources we need?

The year ahead demands greater regulatory clarity, as digital regulation evolves more slowly than technology itself, creating uncertainty for organisations seeking responsible AI adoption. While the UK possesses strong foundational tools, including regulatory sandboxes and robust sector laws on privacy and safety, practical guidance and detailed frameworks remain essential as legislation matures. 

A critical theme emerged around shifting from risk-focused discourse to bold, optimistic thinking about AI's societal impact. Rather than fixating solely on profitability or job displacement fears, leaders must consider human flourishing, community needs, and long-term capabilities. The UK's success depends less on international comparisons and more on leveraging national strengths, particularly around data strategy and the world-leading AI Security Institute. 

Key priorities for 2026 include: maintaining people-centred approaches throughout technological progress; building organisation-wide AI literacy so every employee understands risks and responsible use; taking calculated risks while challenging conservative mindsets; and avoiding tunnel vision on AI alone, as emerging technologies like digital twins, robotics, quantum computing, and digital identity require equal attention. Trust remains paramount, requiring transparent systems, collaborative regulatory frameworks, and clear sources of expert guidance. 

 

Sue Daley OBE

Director, Technology and Innovation

Tess Buckley

Senior Programme Manager in Digital Ethics and AI Safety, techUK
