Artificial Intelligence is reshaping how we think about the future of our economy and the world of work. Businesses are increasingly looking at tech as the solution to existing problems and future growth. Too often, however, this discussion is happening without the active involvement of either the workforce that will be affected by change or wider society.
According to Microsoft, nearly a quarter of companies now state they have an AI strategy in place. The debate about what this means for digital ethics, and how we make this about tech for good, has accelerated over the last year, and yet again it takes centre stage at the TechUK Digital Ethics Summit. TechUK has been at the heart of this discussion, bringing together industry, policy-makers and other stakeholders to look at how we get this right.
Risks of a digital ethics divide
AI has the potential to create prosperity but, in the immediate term, it will leave a lot of people behind. Clearly there are huge benefits for society from the advance of new technology, but unless we also talk about the challenges it presents, we will not deal with the problems it is creating. Research commissioned from YouGov by Prospect earlier in the year found that 58% of UK workers felt they would be locked out of any discussion about how technology would affect their jobs.
The backlash against facial recognition software, for example, demonstrates how the failure to include people in discussions and decisions about new technologies threatens both their legitimacy and the positive opportunities they could offer.
The growth of the new tech economy also needs to be understood in the context of wider trends. Uncertainty haunts our labour market. Wages have barely recovered to pre-crash levels. Skilled, secure careers feel ever harder to come by, while precarious and part-time work has boomed, much of it in the platform economy. In many countries the post-war settlement of shared prosperity and expanding opportunities is under challenge. The Institute for the Future of Work picks up on both this inequality and the geographical impact of the new economy in its latest discussion paper.
Much of the discussion is now around how we use digital ethics to ensure technology is applied fairly and well. But we also need something greater. PwC has done some interesting thinking on this in its 2020 AI predictions. And through our international partnership with Uni Global Union, we have already set out some ideas on worker views on ethical AI and data rights.
These are my reflections on how we get digital ethics right:
1. Technology is going to leave some people behind – not everyone is going to be a winner. Unless we acknowledge that and recognise the anxieties and issues it causes, we will not succeed.
2. The future of work is going to rely on higher skills and adaptability. In large part, our economic model is not good at helping people reskill. We need a national strategy to address the digital flexibility of the existing workforce and new skills for the next generation. That will need the active involvement of today’s workers and a radical rethink of how schools, colleges, business and government are configured and work together.
3. If the 20th century was about defining the working relationship between people (owners, managers and workers), today we also have to think about defining the relationship between people and data/machines.
4. At the heart of this we need to talk about power and inequality. Unchecked AI and automation are going to reinforce existing inequalities and biases. Algorithms and machines are not inherently biased, but without scrutiny the human choices behind them may be. That risks creating patterns that entrench the privilege of those who already hold power.
5. AI ethics only matter if they address these issues of power and inequality. Increasingly, citizens and workers want to see how they fit into the world around them. That means business, government and regulators need to create an inclusive debate that people feel part of.