Event round-up: AI Leaders Lunchtime Lecture Series - Algorithmic impact assessments in healthcare
Our host was Emilie Sundorph, techUK’s Programme Manager for Digital Ethics and Artificial Intelligence, and our speakers were:
- Jenny Brennan, Senior Researcher, Ada Lovelace Institute
- Lana Groves, Researcher, Ada Lovelace Institute
Jenny began by stressing the importance of processes or mechanisms that ensure oversight and accountability in AI, stating that these bridge the gap between software developers and the users of their products. They do so in four key ways:
- By building public trust in and acceptance of the use of AI systems
- By considering the impacts and risks of systems prior to them going live
- By meaningfully engaging citizens in the development and evaluation of AI systems
- And by fostering a greater sense of accountability
Jenny then introduced the concept of algorithmic impact assessments (AIAs), explaining that there is currently no standardised methodology for creating them and that the only instance in which they have been used in practice is in Canada. Therefore, building on the Ada Lovelace Institute’s work on methods for assessing and inspecting algorithmic systems, Jenny and Lana partnered with the NHS AI Lab to conduct a first-of-its-kind study into algorithmic impact assessments in healthcare.
Lana then talked attendees through the research process, which consisted of a literature review of AIAs in theory and practice, twenty stakeholder interviews, and then iterative process development. To achieve an empirical grounding, these stages were applied to the National Medical Imaging Platform’s data access process.
This had the advantage of providing a ready-to-use accountability mechanism in the form of the platform’s existing Data Access Committee, composed of representatives from the fields of social sciences, biomedical sciences, computer science/AI, and law/data ethics, plus, crucially, two patients.
The AIA process to come out of this research consisted of seven stages: a reflexive exercise, application filtering, a 2–3-hour participatory workshop, synthesis, a data access decision, publication, and iteration. It is hoped that by implementing this, researchers and developers applying for access to NHS data will consider the possible social impacts of the AI systems they are creating.
This process will be piloted by the NHS in England as part of its commitment to tackling the underlying biases that exacerbate health inequalities.
Following the presentation, discussion topics included the ownership of decisions made or enabled by AI systems, the optimal point of AIA intervention in a developer’s ops lifecycle, the possibility of ensuring transparency throughout the AIA process, as well as the question of which actor(s) will ultimately define algorithmic risk in the future.
You can find a recording of the event here:
You can also access the full AIA report, template, and user guide on the Ada Lovelace Institute website.