“Will machines replace humans?” The year was 1940, and Franklin D. Roosevelt, in a debate with the president of MIT, was worrying about the impact of machines on the unemployment rate. Sixteen years later, at the Dartmouth conference, Artificial Intelligence (AI) officially entered the pantheon of scientific disciplines. This time, surely, robots were going to replace humans. It was only a question of years, perhaps of months… Sixty years later, our professional colleagues are still made of flesh and blood, and our ambitions have been revised downwards. The latest example: the driverless car, which, four years ago, we were predicting for 2020. A word of advice: hold on to your driver’s licence for another ten years, at least.

Yes, it is likely that one day an artificial intelligence will know how to do everything as well as a human, as Geoffrey Hinton, winner of the 2018 Turing Award, predicts. But it is not a question of months, nor even of years… A half-century? A few hundred years? More? It would be imprudent even to try to guess.

Researchers are confronted with several major obstacles that keep AI at a “weak” stage. For the moment, algorithms can solve the specific problems for which we have trained them (games, for example), perform a rough interpretation of sensory data (voice and image recognition), and even generate voices, texts or images, as Samsung and others have recently demonstrated. “Deep learning”, which is based on networks of artificial neurons, has driven a leap forward in recent years. But the fact that an AI can beat the world champion at Go, or any human at chess, does not make it “strong”. To convince yourself, try asking AlphaGo to memorise your shopping list: you won’t return from the supermarket with much, if anything, in your bag. It has even been estimated that today’s artificial intelligence has the intellectual quotient of a four-year-old child (a result to take with a grain of salt, as the AI tested had been specifically programmed for the competences the test evaluates). Reassuring nonetheless for President Roosevelt.

Four major areas remain beyond the reach of this “four-year-old child”, and in them we humans hold significant advantages:

  • Autonomous programming and adaptation. Imagine a sanitation robot cleaning a park. If its battery runs low, it cannot generate its own plan to recharge itself: its programmers will have had to build in a routine so that it knows how to locate the charging zone and go there. And if that zone is out of service one day, it won’t know how to adapt on its own, unless such a case was provided for upstream by the creators of the algorithm, whereas a human worker would have no problem improvising a contingency plan if the park sandwich shop were closed. In other words, in an uncertain environment, our artificial intelligences aren’t really all that intelligent. True, combining today’s “deep learning” and “reinforcement learning” techniques lets our robot learn about its environment and even the changes taking place within it, but only in closed environments with fixed, known rules, such as a Go board or a chessboard (see the Q-learning sketch after this list), not a highway network, where the unexpected may happen at any time.
  • The ability to learn from fewer examples. Imagine this same robot, in this same park. To identify an approaching dog as a potential danger, the robot will have had to digest millions of photos with and without dogs before becoming operational. Today, however intelligent they are, our algorithms need a huge quantity of examples to recognise what a dog, a tree or a table is. A four-year-old child doesn’t need thousands or millions of examples of dogs to recognise one. A research approach called “transfer learning” would enable our robot to learn to recognise the environment in which it finds itself, however varied it might be, from a reduced number of examples (a minimal sketch of the idea follows this list).
  • Explanation-based learning. Today’s AI algorithms learn exclusively from examples and cannot benefit from a conceptualisation of what they have learned. We can tell a child that a panther is a big cat, and that a boat has no legs, or else it would walk. The child will then recognise panthers and won’t expect to see a photo of a boat wearing shorts. A machine cannot do this: it cannot identify a panther unless it has already seen numerous examples, and it will never be bothered by a photo of a catamaran strolling about.
  • Result explainability. Most humans are capable, when asked, of explaining at least partially why they made one decision rather than another. The most advanced AIs are very poor teachers when it comes to explaining how they solved a problem, which is worrying now that they assist more and more bankers, insurers and doctors. Modern deep learning algorithms are composed of millions of interconnected artificial neurons, and once trained they become “black boxes”: even their designers cannot easily interpret the results they produce. This is both practical (they can rapidly solve very complex problems) and extremely problematic: how do you justify to a client a loan refusal decided by an algorithm? How do you understand why a driverless car opted for a highly dangerous manoeuvre, risking material or even human damage? And how can one trust an artificial intelligence whose diagnosis contradicts that of an expert physician, even when the AI is right? (A common partial workaround, probing a model from the outside, is sketched after this list.)
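
To make the “closed environment, fixed rules” point concrete, here is a minimal sketch of tabular Q-learning, a classic reinforcement-learning algorithm, in Python. The scenario (a robot seeking its charging station on a one-dimensional strip of park) and every name in the code are invented for illustration; nothing here comes from an actual robot.

```python
import random

# A toy "park": 5 cells in a row. The robot starts at cell 0 and must
# reach the charging station at cell 4. States, actions and rewards are
# all fixed and known in advance, which is exactly why this works.

N_CELLS = 5          # states 0..4; state 4 is the charging station
ACTIONS = [-1, +1]   # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action_index] -> expected future reward for taking that action
Q = [[0.0, 0.0] for _ in range(N_CELLS)]

def step(state, action):
    """Apply an action; reward +1 only on reaching the station."""
    nxt = min(max(state + action, 0), N_CELLS - 1)
    reward = 1.0 if nxt == N_CELLS - 1 else 0.0
    return nxt, reward

for episode in range(500):
    state = 0
    while state != N_CELLS - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        nxt, reward = step(state, ACTIONS[a])
        # standard Q-learning update rule
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, the greedy policy heads straight for the charger.
print([("left", "right")[Q[s][1] > Q[s][0]] for s in range(N_CELLS - 1)])
```

The technique works precisely because the rules never change. Move the charging station or close a cell mid-training and the learned table is instantly stale: that is the point above about open environments.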
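Likewise, here is what the “transfer learning” idea can look like in practice, as a hedged sketch rather than any production system. It assumes PyTorch and torchvision are installed, and `few_shot_loader` is a hypothetical data loader over a few dozen labelled park photos.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse a ResNet-18 pretrained on ImageNet, freeze its feature extractor,
# and retrain only a small head to separate "dog" vs "no dog".

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False            # keep the pretrained features

model.fc = nn.Linear(model.fc.in_features, 2)   # new 2-class head, trainable

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(few_shot_loader, epochs=5):
    # 'few_shot_loader' is a hypothetical DataLoader over a few dozen
    # labelled images, not the millions a model trained from scratch needs.
    model.train()
    for _ in range(epochs):
        for images, labels in few_shot_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
```

Because the frozen layers already encode generic visual features learned from millions of images, the small new head can learn “dog / no dog” from dozens of examples instead of millions.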
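Finally, on explainability: researchers do have partial, after-the-fact probes. The sketch below, on purely synthetic data, uses permutation importance from scikit-learn to see which inputs a model leans on by shuffling each one in turn. It is an external measurement, not a window into the neurons themselves.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for, say, loan applications.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a crude, post-hoc explanation of which inputs the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```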

Because of these impediments, today’s artificial intelligences are “weak” AI over which humans keep the advantage. All the more so as researchers are confronted with the problem of theoretically formalising the algorithms they use. Some theorems exist, but our expertise still rests mainly on empirical knowledge rather than on ironclad mathematical theory. We proceed by trial and error: we advance, we adjust, and finally we reach our goals. For if machines sometimes find it hard to understand humans, humans also find it hard to understand the workings of their machines.
