12 Nov 2020

AI hype? It’s slow and steady that wins the race

Guest blog: Stéphane Bédère, CCO at Sidetrade, questions the notion of "trust" in AI

Artificial intelligence (AI) is clearly no longer optional for companies. However, the over-optimism of the early days is giving way to a more cautious approach. Do our algorithms owe us an explanation?

Working practically every day with firms trying to tame AI, I clearly see two schools of thought: one inventing a new way of working; and the other obstinately trying to fit square pegs into round holes.

Where conventional software offers the instant gratification of plug-and-play, effective AI demands painstaking, continuous progress. Call it the ‘slow and steady’ approach.

 

Understanding just enough

AI is often negatively perceived as a “black box” whose internal workings are hidden or not readily understood, and therefore allegedly suspicious. And yet, in our everyday life, we unquestioningly accept all sorts of information beyond our understanding. From the family physician diagnosing nasopharyngitis, to the boat mechanic wanting to change the sacrificial anodes, we put our faith in expertise we cannot begin to fathom. How the expert reached their conclusions does not matter, as long as their recommendations get results.

The same goes for business. As soon as an AI system can make recommendations that solve concrete business problems, the technology is doing its job: creating value for the company. In a way, AI helps us question and rethink our actions, which, after all, “is only human”. Over-focusing on the supposed inexplicability of AI is a red herring.

 

In praise of iteration

Imagine you’re trying to teach someone tennis. You have her adjust, readjust, and try again and again, until she starts to find the winning technique. This is an analogy for machine learning. The algorithm is how we get a grip on data. The AI system needs to repeat the same moves over and over again until it produces winning results.

In the language of computing, this repetitive technique is called iteration; and it is the very core of machine learning. By tirelessly comparing the results of its calculations, over and over again, the system learns to refine its models until outcomes are achieved that would have been beyond the capacity of mere mortals.
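This loop of computing, comparing, and correcting can be sketched in a few lines of code. The example below is a minimal illustration, not any particular vendor's system: it fits a slope `w` to some made-up data points by gradient descent, nudging the model a little on each pass, just as the tennis player adjusts her swing. The function name, learning rate, and data are all invented for the sketch.

```python
# A minimal sketch of iterative learning: fit y ≈ w * x by gradient
# descent. All names and numbers here are illustrative.

def fit_slope(xs, ys, lr=0.01, steps=500):
    """Repeatedly compare predictions to outcomes and refine w."""
    w = 0.0  # initial guess: the model knows nothing yet
    for _ in range(steps):  # iteration: the core of machine learning
        # average gradient of the squared error for the current model
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # a small correction, like adjusting a swing
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # noisy observations of roughly y = 2x
w = fit_slope(xs, ys)      # converges to a slope close to 2
```

Each pass through the loop changes the model only slightly; it is the accumulation of hundreds of small corrections, not any single brilliant step, that yields the result. That is the ‘slow and steady’ approach in miniature.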

However, even when trained on seemingly adequate data, the painstakingly developed model may fail because of contextual changes unknown to the AI system. The problem posed, the relative weight of the parameters, and the expected outcomes are all subject to change. These are things we must accept.

Finally, there is obviously no one truth, and the user is under absolutely no obligation to do what the artificial intelligence says. AI proposes, man disposes, always.
 

Getting over prejudices

A professional with years of experience rightly trusts their intuition. Faced with a problem, they immediately select significant parameters and quickly make a judgement. AI analyzes a huge amount of data and, in mere seconds, establishes correlations undetectable by the human mind. It offers a new, often disruptive perspective. As AI reveals a much wider context than previously perceived, longstanding assumptions may be invalidated.

User education must focus on two points: demystification and analysis of results. AI is no more or less than an ultra-specialized tool. It will never replace staff; quite the opposite: its purpose is to enhance people’s capability.

Some companies make the mistake of abandoning an AI project on the grounds that it did not achieve the expected results, when in fact they most likely formulated the problem incorrectly, or used unsuitable data.

AI is an effective technology because of its intuitiveness. To paraphrase the philosopher Henri Bergson, intuition can save us from the traps laid by our own intellect. As long as we trust it.