Ethical AI Development is everyone’s responsibility

Guest Blog: Tom Fowler, Principal Consultant and Data Science Capability Lead, Techmodal #AIWeek2021

As a data scientist working in the field, it troubles me to see a disconnect between technology and its application. Somehow, the practical work of developing AI tools has become separated from the way in which those tools will be used, and that cannot be the case. Just two weeks ago the EU published draft legislation setting out a legal framework for the use of AI. The proposal aims to restore public trust in the way AI systems are used, placing restrictions on areas deemed ‘high risk’ and outright banning systems deemed to pose ‘unacceptable risk’, such as those threatening safety or livelihoods. Whilst I hope this prompts a renewed public engagement with AI in the way that GDPR forced us to think about our digital footprints, it still leaves considerable wiggle room for those intent on exploiting the grey areas.

I believe that, as data scientists, we need to move beyond a race to the bottom. We shouldn’t be governed strictly by whether something is legal or not; there is a moral and ethical obligation to consider carefully how a technology could be used, and critically, not just now but in the future. The legal and ethical framework in which something is developed will change over time, and as such we need to take a more considered approach to make sure that what we build will continue to be used as intended.

There are very few clear lines and rules in this area. Various laws govern the bounds within which we must work, but these were often written decades ago and could not possibly have foreseen the types of use cases I come across daily. Stepping into this gap has been a proliferation of data ethics frameworks: a quick search online will turn up examples from the UK government and from tech giants such as Google and Microsoft. I don’t think it matters greatly which you choose to use; the key thing is that you are engaged and aware that this is everyone’s responsibility when using AI.

This judgement on what is and isn’t high risk doesn’t sit purely with elected politicians. Those on the ground need to engage with this agenda, and no one is better placed to understand an emergent technology than those developing it. I would urge all data scientists and those involved in AI projects to carefully consider every aspect of their work, from the types of technology being used and the data selected for training, to the language of any outputs. Customers will often not have the depth of knowledge around AI that we do, and it is our responsibility to highlight the risks as well as sell the potential benefits. Everyone benefits from this, and the products we build will ultimately be better for it.
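To make one of those considerations concrete, the sketch below shows the kind of quick check a data scientist might run on training data before building anything. It is a minimal illustration in Python using pandas; the dataset, the column names and the 20% threshold are entirely hypothetical, not a prescribed method.

```python
import pandas as pd

# Hypothetical training set: the column names and values here are
# illustrative only, standing in for whatever data you actually select.
df = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+", "18-30", "31-50"],
    "outcome":  [1, 0, 1, 1, 0, 0, 1, 1],
})

# 1. Representation: does any group dominate the data the model will learn from?
group_share = df["age_band"].value_counts(normalize=True)
print("Share of records per group:")
print(group_share)

# 2. Label balance: a large gap in positive-outcome rates between groups
#    can signal that a model trained on this data will reproduce that skew.
positive_rate = df.groupby("age_band")["outcome"].mean()
overall_rate = df["outcome"].mean()
print("\nPositive-outcome rate per group (overall: {:.2f}):".format(overall_rate))
print(positive_rate)

# Flag groups whose rate deviates from the overall rate by more than an
# arbitrary threshold; the right threshold depends entirely on context.
flagged = positive_rate[(positive_rate - overall_rate).abs() > 0.2]
if not flagged.empty:
    print("\nGroups worth a closer look before training:")
    print(flagged)
```

Checks like this don’t settle the ethical questions on their own, but they make potential risks visible early, which is exactly the kind of engagement the frameworks above are asking for.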


Author:

Tom Fowler, Principal Consultant and Data Science Capability Lead, Techmodal

Visit Techmodal here


You can read all insights from techUK's AI Week here


Katherine Holden

Associate Director, Data Analytics, AI and Digital ID, techUK

Katherine joined techUK in May 2018 and currently leads the Data Analytics, AI and Digital ID programme. 

Prior to techUK, Katherine worked as a Policy Advisor at the Government Digital Service (GDS) supporting the digital transformation of UK Government.

Whilst working at the Association of Medical Research Charities (AMRC) Katherine led AMRC’s policy work on patient data, consent and opt-out.    

Katherine has a BSc degree in Biology from the University of Nottingham.

Email:
[email protected]
Phone:
020 7331 2019
