Advisor vs Autocrat: AI Friend or Foe? Part 2

Guest Blog: Sam Baldwin from Panintelligence highlights how AI and ML could or should be used when it comes to assisting or making decisions #AIWeek2021

Part Two of the ‘AI: Friend or Foe?’ series is based on our recent webinar with Panintelligence co-founders Ken Miller and Zandra Moore, along with Denis Dokter (Relationship Officer at Nexus Leeds) and Nick Lomax (Associate Professor in Data Analytics for Population Research). Read Part 1: The Ethics of Algorithms here.

Should AI be an advisor or a decision-maker?

Much of the fear of futuristic Terminator-like AI comes from the sci-fi fantasy of AI making decisions and taking actions completely without human intervention. While some applications of AI are almost autonomous (self-driving cars being one), in many more use cases AI and ML act not as autocrats, but as very helpful assistants to humans.

Take cancer screening for example, where algorithms can quickly and accurately assess an image and then flag areas of concern for review by human eyes. In this scenario, AI is acting as a powerful tool and valuable assistant in the process of cancer identification, but it does not make any decisions itself.

If we removed the human entirely from the process, things could get more difficult. But it depends on the level of risk in each scenario. If the algorithm that chooses your Spotify playlist selects a track you don’t like, it’s a minor annoyance at most. But in use cases where the stakes are higher, having a human as part of the process is probably a good thing.


Giving Choice Vs Making The Decision

AI already helps to give us choice, but are we comfortable with allowing it to make the final decision?

Take the concept of a ‘Smart Fridge’: an internet-connected device that monitors the levels of food and drink stored inside and knows when you’ve run out of milk, for example. The Smart Fridge can either simply alert you to this fact, or it can go one step further and make the decision to order you more milk.

However, this second step may not always be wanted. Perhaps you are about to leave for a week’s holiday and therefore do not want to order more milk. This example illustrates a scenario where the algorithm doesn’t have all the data required to make the right decision. So in this case, you need the human in the loop.

The line between augmenting the human decision-making process and the AI making the decision itself is an interesting and important one.

At the moment, we’re much more comfortable with AI giving advice rather than making the final decision. However, this will likely change over time, as AI becomes ever more sophisticated, and ever better at making decisions that humans agree with.


Where do we draw the line? The Ethics of AI

As with many advancements, the line will be continuously drawn and redrawn. Ultimately, it comes back to the human. Humans need to see how and why decisions are being made. Is it useful? Is the application good? Is it moral?

We need to be flexible with legislation and regulation as time goes on, and we must ensure accountability (“it wasn’t me, it was the AI!”). Who is accountable for decisions that are made because of algorithms?

An oft-cited theoretical problem with autonomous cars is the scenario where the car is forced to choose between hitting an elderly person or hitting a child. The scenario is somewhat of a moot point: on the rare occasions when this occurs with human drivers, is there really any conscious thought, or merely a reflex reaction? But with an autonomous car, does a human have to programme the software with a priority list of who should die first in such an incident?


The Pitfalls of AI

Though many companies are seeking to build robots and AI that replicate humans, such mimicry can rub people up the wrong way. People need human contact, and as soon as they detect that the ‘person’ on the other end is a machine, they can be turned off.

But this will surely diminish as AI gets better and better at ‘being human’. In 2014, a chatbot called Eugene Goostman was claimed to have passed the longstanding ‘Turing Test’ (though not everyone agreed). In time, distinguishing between humans and AI will become increasingly difficult.

Could AI eventually outperform humans in empathy?

We tend to think of futuristic AI and robotics in military situations, but is it possible that AI could one day outperform humans in terms of empathy? What if lonely or ill people could find support, even companionship from a robot?

Sci-fi series like Channel 4’s Humans explore this scenario, where ‘Synths’, highly realistic ‘synthetic humans’, play the role of emotional or sexual partners to real humans. Such a future seems plausible if or when the technology reaches that level of sophistication, though it would also require a dramatic shift in our attitude towards AI.


Right now, we use medication to normalise people, but in the future, why not other methods? Could AI be used to recreate a person’s dead family, allowing them to talk to a lost loved one? This scenario was explored in the Black Mirror episode ‘Be Right Back’ (the opening episode of its second series) and is a fascinating vision of how AI could one day be used.

Trust is Very Important

We may think of algorithms as futuristic, but they have in fact been in use for a very long time, for example with loan approvals or insurance quotes. We have long used historic data to predict future behaviour.

Trust is very, very important, and many big companies are struggling with it right now. There have been instances where algorithms make the wrong decision, and without a human looking over that decision, problems occur. So companies should be more open about how they make decisions.

Companies must declare that each prediction carries a degree of uncertainty. Humans should be able to challenge any decisions made about them, whether by a human or an algorithm, and request to see what data a company holds about them.

By providing the evidence and the explanation behind the decision and allowing people to challenge it, we should be able to enhance society and use AI, ML and algorithms to improve humankind, rather than destroy it.

If you want to hear more on these topics, watch the full webinar below:

 

Author:

Sam Baldwin from Panintelligence

 

You can read all insights from techUK's AI Week here

Katherine Holden

Associate Director, Data Analytics, AI and Digital ID, techUK

Katherine joined techUK in May 2018 and currently leads the Data Analytics, AI and Digital ID programme. 

Prior to techUK, Katherine worked as a Policy Advisor at the Government Digital Service (GDS) supporting the digital transformation of UK Government.

Whilst working at the Association of Medical Research Charities (AMRC), Katherine led AMRC’s policy work on patient data, consent and opt-out.

Katherine has a BSc degree in Biology from the University of Nottingham.

Email:
[email protected]
Phone:
020 7331 2019
