16 Feb 2026

Understanding Trusted Research: insights from workshop 1

This first workshop in the Trusted Research series explored how trusted research is understood and implemented across institutions, examining operational scope, governance and areas of ambiguity. The session highlighted emerging alignment as well as variation in practice, alongside the wider cultural, regulatory, and geopolitical factors shaping trusted research implementation.

On 29 January, techUK, with the support of Prolinx and with participation from techUK members and trusted research leaders from across the sector, hosted the first workshop in the Trusted Research series. The session focused on exploring how trusted research is currently understood and implemented across institutions, with particular attention to language, ownership, operational scope, and areas of ambiguity. Discussions highlighted both areas of emerging alignment and significant variation in how trusted research is interpreted, particularly around what it acts upon, what awareness and practices it calls for, and the wider governance and cultural pressures shaping implementation.

As a way of synthesising these discussions, we have translated key themes into the interactive visual graphic referenced below. The graphic is intended as a working interpretation of the conversation rather than a definitive model. It moves from the central operational domains that participants consistently associated with trusted research, through a surrounding layer capturing the awareness and practices trusted research demands in practice, and finally to an outer layer reflecting wider tensions in values, governance, and context that influence how trusted research is understood and applied.

To support navigation of the graphic, we have broken down each layer below, providing more detailed explanations of the individual elements that emerged from discussion:

[Graphic: Trusted Research Workshop 1]

Core operational domains

This central layer reflects areas where discussion showed the strongest alignment around the practical scope of trusted research. Rather than defining trusted research in abstract terms, participants consistently described it through the domains where it operates directly, including people, partnerships, research data, outputs, and compliance obligations linked to collaboration. These domains represent the points at which trusted research interventions become tangible in institutional practice.

  • People involved in research: Focus on individuals participating in research activity, including researchers, collaborators, visiting academics, and associated personnel, recognising that risk often emerges through relationships and access.
  • Compliance obligations (export control, visas, etc.): Regulatory requirements linked to international collaboration, movement of people or knowledge, and legal obligations that shape research partnerships and participation.
  • Data involved in research: Research data as an asset requiring appropriate stewardship, including considerations of access, storage, transfer, and potential misuse.
  • Protection of research outputs or IP: Safeguarding research outcomes, intellectual property, or sensitive knowledge from unintended disclosure, exploitation, or long-term information exfiltration.
  • Research collaborations and partners: Institutional and international partnerships through which research takes place, including due consideration of organisational context, ownership structures, and evolving risk.

Trusted Research practice requirements

The surrounding layer captures what trusted research requires in practice, combining forms of awareness and operational responses highlighted throughout the discussion. Participants described both interpretive lenses that shape how risk is understood, such as geopolitical framing or sensitivity classification, and practical approaches used to respond, including training, the use of subject matter champions, and the desire for shared vetting services. This layer reflects how trusted research is enacted day to day.

Risk awareness lenses

  • Sovereignty framing: Understanding research partnerships and technology choices through national security, strategic autonomy, or geopolitical considerations.
  • Disinformation awareness: Recognising risks related to narrative manipulation, reputational influence, or misuse of research outputs in information environments.
  • Foreign influence considerations: Assessing potential external influence on research agendas, partnerships, or outcomes, including funding sources or institutional affiliations.
  • Sensitivity classification approaches: Determining how institutions identify and categorise sensitive research areas, including the methodological challenges of defining what requires additional scrutiny.

Enabling practices or operational responses

  • Shared vetting approaches: Collaborative or centralised processes for assessing partners or risks, intended to reduce duplication and improve consistency across institutions.
  • Training and onboarding: Building awareness and capability among researchers and staff to understand trusted research principles and apply them in practice.
  • Subject matter champions: Use of respected disciplinary peers or internal experts to support cultural adoption and translate governance expectations into research contexts.
  • Threat analysis approaches: Structured assessment methods used to identify and understand potential risks across collaborations, technologies, or research activities.

Tensions in values, governance, and context

The outer layer represents the broader environment within which trusted research operates, characterised by competing priorities and ongoing tensions. Discussions emphasised that trusted research is shaped as much by institutional culture, governance expectations, and wider geopolitical dynamics as by formal policy frameworks. These tensions highlight why implementation varies across contexts and why trusted research continues to evolve as a practice rather than existing as a fixed model.

  • Research integrity alignment versus security prioritisation: Balancing established research integrity principles with emerging security-focused governance and risk mitigation approaches.
  • Social impact expectations versus collaboration restrictions: Pressure to deliver global societal benefit while introducing safeguards that may limit or reshape international partnerships.
  • Overlap risk between trusted research and existing integrity governance: Potential for duplication or confusion where trusted research frameworks intersect with existing ethics and integrity structures.
  • Academic freedom versus structured governance: Formalised oversight and risk management processes may feel in tension with academic autonomy and exploratory research practices.
  • Disciplinary identity versus institutional control: Researchers often align with disciplinary norms or theoretical communities, which can differ from institutional governance expectations.
  • Institutional culture versus compliance frameworks: Local practices and organisational culture influence how trusted research policies are interpreted, adopted, or resisted.
  • REF incentives versus risk-cautious behaviour: Performance metrics encouraging openness and collaboration may conflict with more cautious approaches to partnership risk.
  • Funding pressures versus risk thresholds: Resource constraints can create incentives to pursue collaborations that carry higher levels of uncertainty or risk.
  • Capacity gaps and uneven resourcing versus equitable implementation: Variation in staffing, expertise, and institutional maturity creates uneven adoption and capability across the sector.
  • Global policy variability versus consistent governance: Geopolitical instability and differing national regulatory environments complicate efforts to apply consistent trusted research practices.

Austin Earl

Programme Manager, Education and EdTech, techUK

Austin leads techUK’s Education and EdTech programme, shaping strategies that support the digital transformation of schools, colleges, and universities. His work focuses on strengthening the UK’s education technology ecosystem, enhancing core technology foundations, and advancing the adoption of emerging technologies to improve educational outcomes.

Austin also chairs the EdTech Advisory Panel for AI in Education, contributing to national discussions on the future of EdTech, AI, and the UK's education system.
