Exploiting the AI boom: a guide for business

Guest Blog: Dr Detlef Nauck, Head of AI and data science research at BT, writes: It’s time to stop thinking of Artificial Intelligence (AI) as something conceptual. #AIWeek2021

It’s time to stop thinking of Artificial Intelligence (AI) as something conceptual that’ll happen ‘one day’. AI is already here, thriving in businesses around the globe — and it’s here to stay.

We’re in the middle of an AI boom, driven by the commoditisation of data, networks and computing capabilities. These factors have now reached critical mass and have emerged from the realms of R&D to offer concrete possibilities for businesses. The pace of adoption continues to be high, and more and more organisations are looking to AI technology to drive their success. There is hype clouding the picture, though: the vendor space is full of promises that aren’t backed up by systems integration. But there’s a powerhouse of AI waiting for those who implement it wisely.

Look around, and AI is thriving across all areas of industry, although its complexity varies. Its more familiar faces are the automated models behind marketing and workforce management systems, automated order and billing processes, automated production lines and text-processing chatbots. Then there are less well-known forms of automation that involve human-in-the-loop models, such as systems that scan legal documents to identify key information for experts to assess, or image recognition systems that support medical professionals by highlighting potential cancer in cell samples. This human-and-machine model is a successful one, helping people do their jobs better by speeding up processes and reducing human error.

Right now, specialists are working to factor the effects of the coronavirus pandemic into AI. Human behaviour has changed – take, for example, the shift to online shopping – so the data that AI models are trained on must be updated and the models re-trained. AI is being asked to model different pandemic scenarios, too. This means moving away from machine learning, which depends on historic data, towards AI techniques that work with the data available now. Experts are turning to model-based, knowledge-based and agent-based techniques as a result. The pandemic is also removing barriers to AI by prompting the creation of large, open and free data repositories. Data giants such as Google, Facebook and Apple have made banks of anonymous, aggregated location information available that can be used for new types of modelling to help communities plan and manage resources better.
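
To make the distinction concrete, here’s a minimal sketch of a model-based technique: a classic SIR epidemic model, which needs only the current state of an outbreak and assumed rates, rather than years of historical training data. All parameter values here are illustrative assumptions, not figures from any real pandemic model.

```python
# A minimal SIR (susceptible/infected/recovered) model sketch. The
# transmission rate (beta) and recovery rate (gamma) are illustrative
# assumptions, not fitted values.
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """Advance the population fractions by one day."""
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

# Start from today's observed state (fractions of the population).
s, i, r = 0.99, 0.01, 0.0
for day in range(1, 121):
    s, i, r = sir_step(s, i, r)
    if day % 30 == 0:
        print(f"day {day:3d}: susceptible={s:.2f} infected={i:.2f} recovered={r:.2f}")
```

Unlike a machine learning model, the assumptions here are explicit parameters, so scenarios can be explored simply by varying them.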

It’s clear that AI has significant benefits, but those benefits are coming at far too high a price. AI is currently a pressurised technological race with few ethical frameworks and little regulation, so we’re learning AI best practice the hard way – in the wake of problems. You only have to look at the Cambridge Analytica scandal, the growth in deepfakes and fake news, and the Tay chatbot that learned to be racist in less than 24 hours, to see the potential for disaster.

Successfully unlocking the potential of AI requires action in three key areas:

  1. establishing a corporate culture around data infrastructure and quality
  2. growing AI skills for the future
  3. implementing robust AI governance.


AI needs the support of a strong corporate culture

The success of AI depends on the quality and availability of data, and making high-quality data available needs to be embedded into an organisation’s corporate culture. Historically, data’s been treated as a natural output from operations and its quality hasn’t been questioned or looked after. This causes problems when businesses apply a machine learning model, because the model will draw on this unchecked data and replicate any flaws in the information. Although this is a prime example of ‘garbage in, garbage out’, it’s been standard practice to check or police data only where regulations apply or where accuracy is essential, such as in billing.

Organisations need to consider the quality and availability of data from the moment it enters the business. You need to be able to collect, store, curate, quality-test, aggregate and refine data before pulling it into a software environment where machine learning and other AI techniques can be applied successfully. Plus, to develop a robust AI model that doesn’t make mistakes, you need to test it with data that hasn’t been used during the creation process, and then continue to test it once it’s operational. This is a challenge for businesses that are used to collecting data only to run operational systems, rather than to run analytics or AI.
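
As a rough illustration of that held-out testing discipline, here’s a minimal sketch using scikit-learn and synthetic data. The dataset, model choice and alert threshold are illustrative assumptions, not any specific production pipeline.

```python
# Hold back data that plays no part in model creation, then keep testing
# once the model is operational.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data; in practice this would be curated, quality-tested data.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

# Reserve a quarter of the data purely for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Evaluate on data the model has never seen before going live.
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")

# Once operational, score fresh labelled batches and alert when performance
# drifts below an agreed threshold (0.9 here is an assumption).
def monitor(model, X_batch, y_batch, threshold=0.9):
    score = accuracy_score(y_batch, model.predict(X_batch))
    if score < threshold:
        print(f"ALERT: accuracy {score:.3f} below {threshold} - review or retrain")
    return score
```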

Growing a rich skills base for the future

The future of AI will be built by humans, but there’s currently a shortage of the skills we’ll need, and it’s threatening to hold back progress. Perhaps surprisingly, the shortage doesn’t affect the building of AI models: there’s plenty of software for that. The gap is in data and infrastructure skills, with a lack of experts to prepare data and turn it into the right features for models to use. Data scientists and engineers who understand both the domain and the technologies being used are at a premium.

At BT, we believe in nurturing the skills we’re going to need if society is to unlock the full potential of AI and machine learning. To tackle the need for skills in this complex space, we draw extensively on our long-standing research partnerships with global universities. We work closely with world-leading academics and students to explore the possibilities, the applications and the implications of these technologies in areas including cybersecurity, network planning and operations, the Internet of Things and customer experience.

For example, alongside other partner companies, we’re working with MIT’s Computer Science and AI Lab and their Systems That Learn consortium to explore two areas: the fundamental issues of accountability, robustness and security of AI systems, and practical questions, such as how machine learning will transform the design, creation and operations of networks and software.

Similarly, we’re working with MIT’s Center for Information Systems Research to understand the role AI will play in organisations’ digital transformations. Plus, with partners across the Institute, we’re looking at how AI will transform work and skills — for the individual, the enterprise and the nation.

To grow skills in this field, we’re taking part in industry groups to define new Masters-level apprenticeships in AI, data science and operational research. And we’re also supporting universities who are planning to offer postgraduate conversion courses in AI and data science for non-specialists. Through student projects, sponsored PhDs, internships and joint research, we’re spotting talented individuals and recruiting them into our business.

Establishing protective guidance around AI

Now’s the time to bring ethics and regulation into how we work with AI. We need to see through the hype of AI to spot the pitfalls — and take steps to protect against them. Organisations must pursue AI carefully, making ethical frameworks and governance an integral part of AI’s development so they’re not exposed if regulation is introduced later.

Fundamentally, businesses must recognise that AI will get decisions wrong. Dealing with this means building error mitigation into systems. Take the issue of bias, for example. AI models learning from past data will reflect any biases in that original data, and those who handle the data can involuntarily introduce bias. Research has already shown, for instance, that widely used facial recognition systems have been trained with data that is race- and gender-biased.
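
One simple, commonly used check is to compare a model’s error rates across demographic groups. Here’s a minimal sketch; the predictions and group labels are toy values, purely for illustration.

```python
# Compare misclassification rates across groups to surface potential bias.
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """Return the misclassification rate for each group label."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[g] = float(np.mean(y_true[mask] != y_pred[mask]))
    return rates

# Toy example: a model that is wrong far more often for group "b".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(group_error_rates(y_true, y_pred, groups))
# {'a': 0.25, 'b': 0.75} - a disparity like this is a red flag to investigate
```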

This turns the spotlight onto the need to be able to explain AI’s decisions where they affect individuals or carry significant value or risk. If we leave decision-making to unchecked automation, injustices can occur. How on guard should we be, for example, when machine learning models make decisions about the parole of prisoners, or about teachers’ performance that could lead to dismissal? Incorrect decisions by AI leave your business wide open to liability claims and litigation, so to manage those risks you must continuously monitor your systems.

On a business level, regulation and accountability begin with knowing where you’re using AI in your organisation. From there, you should aim to build transparency into every stage of the supply chain, so you’re sure of quality standards and you can track all decisions in an auditable way. Then, focus on being able to explain how any problems arose and work out what lessons you can learn.
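
As a rough sketch of what an auditable decision trail might look like, the snippet below logs every automated decision with its inputs, model version and outcome so it can be traced and explained later. The field names and example values are illustrative assumptions.

```python
# Append-only log of automated decisions for later audit and explanation.
import json
import time
import uuid

def log_decision(model_version, inputs, decision, confidence, path="decisions.log"):
    record = {
        "id": str(uuid.uuid4()),          # unique reference for appeals and audits
        "timestamp": time.time(),
        "model_version": model_version,   # ties the decision to a testable artefact
        "inputs": inputs,                 # what the model actually saw
        "decision": decision,
        "confidence": confidence,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

# Hypothetical example of recording one decision.
ref = log_decision("credit-risk-2.3", {"income": 42_000, "tenure": 5}, "approve", 0.87)
print(f"decision logged under reference {ref}")
```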

Ethical and regulatory guidance is developing all the time, but it’s very much a work in progress. The European Commission has set up a High-Level Expert Group on AI working on ethics for trustworthy AI, and the UK has set up the Centre for Data Ethics and Innovation to provide guidance on how to get the most from data-enabled technologies. The UK Information Commissioner’s Office is working on an AI audit framework, and the European Commission has set a target of providing regulation within the first 100 days of its session.

Leading the way in AI

Contributing to the global development of AI is part of our business as usual — in fact, we’ve been applying machine learning to our network for decades. We have a strong research base with an intense focus on iterative learning through AI development and application, backed by our own data science and AI training course. 

In the UK, we work with the Ministry of Defence to develop AI training programmes and integrate AI into defence. Plus, we collaborate with the National Cyber Security Centre to shape its use of machine learning and to develop Masters programmes and security modules. We’re proud to say our research labs are regarded as a national resource by government.

Discover how your business can exploit technology and innovation both now and in the future by downloading our ‘Winning the innovation race’ brochure.

If you want to explore AI or machine learning with us further, please get in touch with your account manager. We’re here to help.

 

Author:

Dr Detlef Nauck, Head of AI and data science research, BT

 

You can read all insights from techUK's AI Week here
