Effective AI systems for Public Services: Managing the Hype
The Treasury’s November budget and subsequent spring spending review leave our public services under pressure; the broad consensus among financial commentators is that any increase in public service funding is far outweighed by unprecedented demand, driven by a decade of austerity, inflation, the pandemic, and the rising cost of living.
An obvious and common response for any organisation, public or private, when faced with stretched resources is to either: (a) do more with what you already have; or (b) do what you do more efficiently (i.e., cost-effectively). Where an organisation’s ‘what you already have’ is a set of digital assets (e.g., data stores, computing infrastructure, or sensors), or ‘what you do’ involves using those digital assets, options (a) and (b) often lead to an Artificial Intelligence (AI)-based solution in which a business process is automated. Such systems are an attractive proposition for an under-funded public sector body – a service may be provided many times faster, and resources may be redistributed to address other pressing needs.
The disconnect between developer and user
Usually, development of AI-based systems is outsourced to a private company in the tech sector. This, in my experience, can cause a disconnect in expectation between the end user and the developers: the former might (quite understandably) expect the system to work perfectly, while the latter is all too aware of its limitations.
So, why shouldn’t the public sector end user expect flawless performance from their new automatic tool? After all, AI is everywhere – it’s in our cars and in our homes (“Alexa…”). The hyperbole around AI is unavoidable, and public service staff quite rightly expect the state of the art.
In contrast, consider the developer, experienced in building AI solutions. They will likely tell you of scars from previous projects: predictive algorithms are built to generalise, not to solve one particular problem; data sets are noisy, imbalanced, and unrepresentative; computing resources are expensive; training labels are unavailable; the list goes on.
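The imbalance point is worth making concrete. A minimal sketch (the fraud-detection scenario and the 1%/99% split below are hypothetical illustrations, not from this post): on a data set where only 1 in 100 cases is positive, a useless model that always predicts the majority class still scores 99% accuracy – exactly the kind of headline figure that fuels inflated expectations.

```python
# Hypothetical example: 1% of service requests are fraudulent.
labels = ["fraud"] * 1 + ["genuine"] * 99

# A trivial "model" that always predicts the majority class.
predictions = ["genuine"] * len(labels)

# Accuracy looks excellent...
accuracy = sum(p == t for p, t in zip(predictions, labels)) / len(labels)

# ...but recall on the minority class is zero: no fraud is ever caught.
recall = sum(
    p == t == "fraud" for p, t in zip(predictions, labels)
) / labels.count("fraud")

print(f"accuracy = {accuracy:.2f}")      # 0.99
print(f"fraud recall = {recall:.2f}")    # 0.00
```

This is why a developer quoting “99% accuracy” and an end user expecting a system that actually catches the rare, important cases can both be right and still be talking past each other.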
Of course, the relationship between developer and user is complex, and often the developer is trying to win new business. Why should they present the potential limitations and difficulties of the envisioned system if a competitor might not?
Bridging the gap
So, what can be done to reduce this disconnect and allow public services to leverage AI-based systems effectively? I believe that we data scientists need to embrace what predictive algorithms can’t do, as well as what they can. There is skill in understanding why and when a system doesn’t work, rather than adopting a ‘black-box’ attitude, and even more so in communicating this to the end user. Addressing the shortcomings of AI shouldn’t be viewed as failure, but as the responsible thing to do: it ultimately makes better use of public funds and results in a system the user can trust. In addition, public services might improve their in-house AI capability (through experience or recruitment) in order to scope realistic requirements from the outset and guard against overpromising.
What AI offers our public services is clear, as demonstrated by existing successful systems (e.g., in health care and transportation) – it’s an exciting time to be working in the field. As public service staff gain experience in developing and using AI-based tools, the disconnect between developer and user will only shrink. We data scientists, though, can speed that process up by showing our scars at invitation to tender, rather than at final delivery.
Tom Strain, Data Scientist at Techmodal