AI holds fantastic opportunities for organisations large and small, and businesses are right to embrace them. Whether it is improving back-office operations, maximising marketing efforts or deploying predictive technologies to allocate resources more efficiently, algorithms have a lot to offer, and many organisations are already deploying AI systems.
Talking with industry as well as policy makers, I notice that we all seem to share the same belief: that innovation and ethics can go hand in hand. In fact, many believe that businesses that can use data, and do so ethically, have a clear competitive advantage. This is for two main reasons:
- First, because consumers' trust in the handling of their personal data has been eroded by recent scandals in the media, including Facebook, several data breaches and stories of microtargeting for online manipulation. Businesses need to embed transparency so that customers can trust them.
- Second, because organisations need to demonstrate due diligence in their deployment of AI systems. Algorithms require large amounts of data, and that data needs to be collected fairly and handled lawfully and safely. Furthermore, limited datasets and poorly designed algorithmic procedures can produce unfair, biased and discriminatory outputs, infringing upon human rights and equality law. Businesses will want to make the most of their data without putting their reputation at risk.
But how do we turn ethics into practice?
Over the past few months there has been much talk about ethics: the largest organisations have produced manifestos (Google) and systems to police algorithms on their clients' behalf (IBM), and both the UK Government and the EU have set up bodies and committees working on ethics. Businesses now need to turn ethics into practice, and that is not always easy, especially after so much commitment has recently gone into the GDPR. But data is power, and with power comes responsibility, so some steps can be taken right now to embrace innovation and turn ethics into a competitive advantage.
- What does ethics mean for your business? It is important to have a clear understanding of the principles underpinning your innovation strategies. The fact that something is technologically possible does not necessarily mean it is the right thing to do. Business leaders need an ethical framework guiding their choices and must translate those choices into organisational and technological structures.
- Who is in charge of making the decisions? Appointing a Chief Privacy Officer may work for larger organisations. Otherwise, it is important to ensure innovation projects are assessed through an ethical lens.
- Deploy algorithmic impact assessments (AIAs) to ensure you apply due diligence to your systems. AIAs should consider privacy and data protection; the datasets used to train the algorithms must be validated to avoid embedding bias and/or discrimination in machine-based decision making; the security and safety of the data must be optimised to avoid harming data subjects in the process; human intervention must be retained as a check; and finally, algorithms must be explainable and legible.
- Communicate with customers and citizens: when you deploy AI, and especially if you use algorithms that have a significant effect on individuals, you need to communicate this to your stakeholders. For example, by enabling them to understand the training procedures and parameters used, changes can be accurately mapped to different outcomes; this also gives individuals the possibility of challenging decisions and having them re-taken by humans where there is a clear case of discrimination and/or bias.
- Engage with others in your industry: ethics by design can be a daunting process, but businesses are not alone in this. Sharing best practice and ideas is always important, and organisations like TechUK and the CBI are good safe spaces to discuss ideas.
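The AIA criteria listed above can be thought of as a simple checklist that a project must fully satisfy before deployment. The sketch below is purely illustrative, assuming hypothetical field names; it is not a formal assessment tool or any published AIA standard.

```python
from dataclasses import dataclass, fields


@dataclass
class AlgorithmicImpactAssessment:
    """Illustrative checklist of the AIA criteria described above.

    Field names are hypothetical, chosen for this sketch only.
    """
    privacy_and_data_protection: bool = False   # data collected fairly, handled lawfully
    training_data_validated: bool = False       # datasets checked for embedded bias/discrimination
    security_and_safety: bool = False           # data secured so data subjects are not harmed
    human_intervention_retained: bool = False   # a human check on machine-based decisions
    explainable_and_legible: bool = False       # outcomes can be explained to stakeholders

    def outstanding(self) -> list[str]:
        """Names of criteria not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def passes(self) -> bool:
        """True only when every criterion is met."""
        return not self.outstanding()
```

A partially completed assessment then makes the remaining due-diligence work explicit, e.g. `AlgorithmicImpactAssessment(privacy_and_data_protection=True).outstanding()` returns the four criteria still to be addressed.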
As the scientist Joanna Bryson says, AI systems alone cannot be trusted: it is the humans designing and deploying them who bear the responsibility. Algorithms should therefore be used according to several principles. Accountability: ensuring it is clear who does what, and when, which is very important in relation to liability and something organisations need to think carefully about. Legibility and transparency: personal data must be used lawfully, and algorithms need to be explainable. Responsibility: humans are responsible for their algorithms and define the degree of autonomy of a machine. The key point is that, with machines making value-based decisions, the less human intervention there is, the more businesses need to ensure that values are strongly and uniformly represented across all data subjects implicated.
These are complex processes, but they are essential in the world we live in; organisations must consider them through a principled lens.
Dealing with this now will prove crucial to building solid foundations for your innovation strategy and avoiding problems later.
We support organisations with their ethics by design work and would be delighted to help you.
See Gartner’s trends for 2018-19: https://techcrunch.com/2018/10/16/gartner-picks-digital-ethics-and-privacy-as-a-strategic-trend-for-2019/
Joanna Bryson, https://cpr.unu.edu/ai-global-governance-no-one-should-trust-ai.html
Gemserv are an expert provider of professional services enabling the data revolution. We work with organisations to achieve and maintain compliance across information and data security standards. We specialise in Data Protection, ISO 27001, NIS and ISO 22301, as well as other risk management services, to truly assess your security landscape.
For more information about the Digital Ethics Summit see: https://www.techuk.org/digital-ethics-summit/about