UK SPF Cluster 1 event roundup: AI in and for spectrum management and wireless networks
At a recent UK SPF Cluster 1 workshop, speakers and participants compared notes on a fast-moving reality. As networks become more software-defined and traffic more unpredictable, artificial intelligence is shifting from a nice-to-have to a core control mechanism. Discussion ranged from AI-assisted Wi‑Fi operations to dynamic, real-time spectrum allocation, as well as the sustainability and governance questions that follow as automation becomes the default.
Rather than focusing on any single vendor, the most useful takeaway was the shared direction of travel: networks are becoming closed-loop systems that sense, predict, and act. That shift matters not only for performance, but for security, resilience, energy use, and ultimately the rules that govern access to spectrum.
AI in enterprise Wi‑Fi and networking
One practitioner described AI moving into day-to-day network operations as organisations adopt newer Wi‑Fi generations and manage a growing mix of devices, applications, and security requirements.
Capabilities that came up repeatedly included adaptive learning to tune radio settings as conditions change, predictive analytics to anticipate congestion or faults, and automated workflows that shorten incident response.
Participants also highlighted security-relevant functions, such as profiling users and devices to spot suspicious patterns, and root-cause analysis that correlates events across access points, switches, and authentication services to explain why and where performance drops occur.
A practical example focused on campus environments, where hundreds of access points must be deployed and tuned quickly. AI-assisted planning can generate Wi‑Fi heat maps from building data and measurements, then continuously adjust power, channels, and roaming once live—reducing site revisits and improving low-latency stability.
Spectrum stops being static
The workshop then widened from network operations to spectrum itself. Instead of treating spectrum as a fixed configuration that changes slowly, speakers described a future where it is managed more like a pool of compute resources—optimised in near real time based on demand and measured conditions.
In practical terms, an enterprise venue could balance capacity between Wi‑Fi and private 5G, shifting resources as usage peaks move around a site. AI closes the loop: sense interference and load, decide a better configuration, and push changes through programmable radios and controllers. Over time, the network begins to look self-optimising and, in some cases, self-healing.
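The sense-decide-act loop described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the channel list, the utilisation readings, and the hysteresis threshold are all invented for the example, and the functions stand in for real measurement and controller APIs.

```python
import random

# Toy sense-decide-act loop: measure utilisation per channel, switch only
# when a clearly better channel exists, then "push" the new configuration.
CHANNELS = [36, 40, 44, 48]  # example 5 GHz Wi-Fi channels

def sense(channel: int) -> float:
    """Stand-in for a real spectrum measurement: utilisation in 0..1."""
    random.seed(channel)  # deterministic here purely for the example
    return random.random()

def decide(current: int, readings: dict, hysteresis: float = 0.1) -> int:
    """Switch only if the best channel beats the current one by a margin,
    to avoid flapping between near-equal channels."""
    best = min(readings, key=readings.get)
    if readings[current] - readings[best] > hysteresis:
        return best
    return current

def act(channel: int) -> None:
    """Stand-in for pushing configuration to a programmable radio."""
    print(f"configuring radio on channel {channel}")

readings = {ch: sense(ch) for ch in CHANNELS}
new_channel = decide(current=36, readings=readings)
act(new_channel)
```

The hysteresis margin is the detail worth noticing: a closed loop that reacts to every small measurement change will oscillate, so stability constraints belong in the decision step, not just the sensing.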
Future-facing ideas helped make this concrete. Digital twins were framed as living simulations of a radio environment, allowing policy constraints or configuration changes to be tested before deployment. Others discussed a “unified wireless fabric”, where multiple access technologies are orchestrated as one system and performance targets are expressed as intent.
Why testbeds matter: shared platforms for AI-and-spectrum research
Another theme was collaboration infrastructure: multi-partner programmes connecting universities, demonstrators, and industry around shared spectrum and networking challenges. The emphasis was on repeatable experimentation so AI techniques can be tested against realistic RF conditions, not only in simulation.
One platform described at the event links multiple sites for large-scale trials, including spectrum monitoring that can collect measurements in real time, archive them, and later “replay” scenarios. That matters for machine learning because consistent spectrum data is hard to gather. Replayable datasets also enable benchmarking, so different groups can compare approaches under the same conditions.
Research topics ranged from hybrid fibre/wireless/optical links to reinforcement learning for spectrum sharing and AI-driven resource orchestration, but all shared a goal: helping networks adapt when assumptions break. Demonstrations combined simulators with real-world trials, showing how policy constraints, propagation effects, and incumbent activity can inform decision systems that propose safe actions, not just optimal ones.
Sustainability, assurance, and the policy questions that won’t go away
Sustainability was treated as a first-order design constraint. Expanding coverage and capacity sits in tension with net-zero ambitions. The discussion suggested AI can help, but only if it changes operating modes—not just squeezes marginal gains from always-on infrastructure.
Technical approaches included multi-stage sleep modes for base stations, dynamic cell zooming as demand changes, and reinforcement learning agents that learn when to power down components while keeping experience within agreed limits. A recurring point was that combined methods tend to outperform single levers.
For example, coordinating sleep modes with cluster-level optimisation can avoid simply pushing traffic (and energy cost) to neighbouring cells. More radical “cell-free” designs were also presented as a route to improve coverage with lower per-site transmit power.
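The cluster-level trade-off can be made concrete with a toy energy model: sleeping a cell only helps if the power it saves exceeds the extra power neighbours spend absorbing its traffic. Every figure below is an illustrative assumption, not a measured value from any operator.

```python
# Toy cluster-level check for base-station sleep decisions.
# Sleeping a cell saves its idle power, but its traffic must be served
# elsewhere, costing the neighbours extra energy per Mbps absorbed.

def sleep_saves_energy(cell_idle_power_w: float,
                       offloaded_traffic_mbps: float,
                       neighbour_energy_per_mbps_w: float) -> bool:
    """True if sleeping the cell reduces total cluster power draw."""
    extra_neighbour_power = offloaded_traffic_mbps * neighbour_energy_per_mbps_w
    return cell_idle_power_w > extra_neighbour_power

# A lightly loaded cell: 80 W idle draw, 20 Mbps to offload,
# neighbours spend ~1.5 W per extra Mbps. Sleeping saves 80 W, costs 30 W.
print(sleep_saves_energy(80.0, 20.0, 1.5))   # True

# A busier cell: offloading 70 Mbps (105 W) costs more than the 80 W saved.
print(sleep_saves_energy(80.0, 70.0, 1.5))   # False
```

Real systems add quality-of-experience constraints and wake-up latency to this calculation, which is why the workshop's point stands: single levers pulled in isolation can make the cluster worse, not better.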
Reconfigurable intelligent surfaces were discussed as a tool for shaping propagation to reduce dead zones and reliance on brute-force densification. One presentation also pushed for whole-system carbon accounting, noting that supply-chain emissions can dwarf operational emissions, so efficiency gains should be evaluated end-to-end.
The most animated Q&A centred on assurance: if AI makes spectrum decisions at millisecond timescales, how do we prove they remain safe, fair, and compliant? Participants argued for feedback loops from incumbent users (so secondary systems can confirm they are not causing harmful interference), plus continuous logging and observability so actions can be reconstructed later. But the hard part remains: distributed models evolve over time, and explainability can be weakest when systems are most complex.
On the policy side, one idea was a shift from fixed, worst-case interference limits to probability-based thresholds backed by better real-time data. That would imply new approaches to equipment approval, auditability, and ongoing compliance monitoring. Economics also surfaced: assurance mechanisms cost money, and durable sharing frameworks need credible models for how costs (and benefits) are distributed across stakeholders.
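The contrast between fixed worst-case limits and probability-based thresholds can be illustrated with synthetic data. The measurement values, the -80 dBm limit, and the 1% exceedance target below are all invented for the example; real regimes would define these statistically and per band.

```python
# Illustrative contrast between two interference-limit rules, applied to
# synthetic received-interference samples (in dBm).

def worst_case_ok(samples_dbm: list, limit_dbm: float) -> bool:
    """Fixed rule: every observed sample must stay at or under the limit."""
    return max(samples_dbm) <= limit_dbm

def probabilistic_ok(samples_dbm: list, limit_dbm: float,
                     max_exceed_prob: float = 0.01) -> bool:
    """Probability-based rule: the limit may be exceeded, but only rarely."""
    exceed = sum(1 for s in samples_dbm if s > limit_dbm)
    return exceed / len(samples_dbm) <= max_exceed_prob

# 1,000 synthetic measurements: mostly -95 dBm, five brief spikes to -70 dBm.
samples = [-95.0] * 995 + [-70.0] * 5
limit = -80.0

print(worst_case_ok(samples, limit))      # False: any spike fails the fixed rule
print(probabilistic_ok(samples, limit))   # True: 5/1000 = 0.5%, under the 1% cap
```

The policy implication follows directly: a probabilistic rule admits sharing arrangements the worst-case rule forbids, but only if regulators trust the measurement, logging, and audit chain behind the probability estimate.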
Closing thoughts
Across the sessions, the message was that AI is becoming a shared layer tying together radio performance, spectrum access, security posture, and energy use. That makes it powerful—and means technical design choices quickly become governance choices. The next step is to treat AI-enabled spectrum sharing as both engineering and policy: build testable systems, define measurable assurances, and agree what “fair” behaviour looks like before automation scales.
Tales Gaspar
Tales has a background in law and economics, with previous experience in the regulation of new technologies and infrastructure.
Sophie Greaves
Sophie Greaves is Associate Director for Digital Infrastructure at techUK, overseeing the Telecoms Programme, the Data Centres Programme, and the UK Spectrum Policy Forum.