This morning the Health Secretary, Matt Hancock, announced that the NHS was working with Amazon to allow people to access verified NHS information through its AI-enabled voice assistant, Alexa.
The collaboration will automatically connect people in the UK searching for health advice through Alexa to NHS.UK, giving them medically certified NHS information on different medical conditions.
The announcement was accompanied by a video of Adi Latif, which brought to life how powerful voice recognition can be. Adi is registered blind and says that for people like him, being able to access NHS information through voice-activated devices like Alexa will be “really great”.
Voice technology has made rapid progress in recent years, and it has huge potential to speed up access to verified health information, increase accessibility for vulnerable people and ease pressure on health and social care services. Allowing more people to access better health information more easily should be a good news story.
But today’s announcement, despite the obvious benefits, has been met with a fair amount of scepticism and mistrust. So, what are the concerns and how can we build trust and confidence in the use of technology to aid the delivery of health and social care?
The concerns have centred on two main points: the potential commercial terms between Amazon and the NHS; and the use of health-related data.
On the commercial agreement, the NHS has made clear that it isn’t paying Amazon as part of the collaboration. It is simply enabling freely available NHS content to be more easily accessed in a different form – and we’d expect other voice-based assistants to follow suit.
The second and more complex concern centres on data. By using Alexa, users are sharing their health concerns with a large technology platform. So, what happens to that data? Amazon has been clear that it is not building a health profile on customers or making product recommendations, nor will it share information with third parties. Amazon has also been clear that users can delete their voice data.
For collaborations like this to succeed, the public need trust and confidence in the way their data is managed, and assurance that it will not be used in ways they would not expect or desire. When innovations like this are announced, both the NHS and suppliers should be clear about how the public’s legitimate questions about health data will be addressed, and should reiterate the safeguards and oversight in place to ensure the public interest comes first. Both the National Data Guardian for Health and Social Care and initiatives such as the Wellcome Trust’s work on Understanding Patient Data have done important work in this area, which the public should know more about.
The effective use of data can have huge personal and societal benefits – from identifying and preventing significant disease outbreaks, to finding new treatments for diseases and equipping health and social care services to deal with peaks and troughs in demand. However, public understanding and confidence about how data can be used safely in the NHS and elsewhere remains low. This undermines trust, ultimately making it more difficult for bodies like hospitals, universities, charities and others to use health data for good.
For all of us working in healthtech, the key takeaway from today’s debate is that when launching new innovations, we need to be clear about how legitimate public questions have been taken into account. It isn’t enough simply to talk about the direct benefits for people. We have to be clear about how proper safeguards have been designed into these industry-NHS collaborations so that the public can use them with confidence.
techUK's Annual Digital Ethics Summit will explore these issues in detail, bringing together stakeholders from the NHS, patient representatives and the technology sector.