Hervé Mignot, Partner & Chief Scientist Officer at Equancy, recently spoke to the review of the ELS group (Editions Lefebvre Sarrut). Read the full interview below.
With the ever-increasing amount of data on the Web, algorithms seem to be taking over. Why? How? How far will it go? Pia de Buchet interviews Hervé Mignot, Partner & Chief Scientist Officer at Equancy.
Artificial intelligence is an old concept: how do you explain the current resurgence of interest in this subject?
Alan Turing introduced the concept of artificial intelligence (AI) in 1950. In an article describing the ‘imitation game’, he imagined a machine capable of solving problems with reasoning indistinguishable from that of an intelligent being. The concept has not always been in the public eye; its popularity has ebbed and flowed. The driving force behind the current renewed interest in AI is the increasing use and optimization of neural network technology. These networks are composed of interconnected processing units, which mimic the functioning of biological neural networks: when a signal enters the network, the ‘neurons’ transmit the information, producing a relevant response at the output. There have been enormous advances in the way neural networks are built, structured and trained. As a result, what is generally referred to as ‘deep learning’ has made significant inroads, attracting substantial investment from big technology companies such as Facebook, which has opened a laboratory in Paris dedicated to this topic.
What is “deep learning” all about and what value does it bring?
It is a matter of teaching neural networks to recognize images, sounds and texts; this is what Facebook’s face-recognition application and Apple’s Siri do. In order to learn, the network is presented with a tagged database (of sounds or images in the case of these applications). For example, the network ingests a large amount of tagged data: “this is a good move in the game of Go” versus “this is not a good move in the game of Go”, or “this is an image of a cat” versus “this is not an image of a cat”.
Learning is an iterative process; the algorithm constantly adjusts as new data is presented. No human intervention is required, hence the expression “self-learning algorithms”. What is new today is the processing power of computers and the volume of available data, which enable very large neural networks to learn complex phenomena and so transform knowledge into recommendations or instructions.
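This iterative adjustment can be sketched in a few lines of Python. The example below is a deliberately tiny, hypothetical toy, not any production system: a single artificial ‘neuron’ repeatedly nudges its weights so that its predictions match the tags (1 = “cat”, 0 = “not cat”) in a small hand-made dataset.

```python
import math

# Toy tagged dataset: (features, label), where 1 = "cat", 0 = "not cat".
# The features and labels here are invented for illustration.
data = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.95), 1),
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.15, 0.05), 0),
]

# A single artificial neuron: weighted sum passed through a sigmoid.
w = [0.0, 0.0]
b = 0.0
lr = 1.0  # learning rate: how strongly each error nudges the weights

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Iterative learning: each pass over the data adjusts the weights
# to shrink the gap between prediction and tag -- no human intervenes.
for epoch in range(200):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print(predict((0.85, 0.9)) > 0.5)  # high-valued features -> "cat"
print(predict((0.1, 0.1)) < 0.5)   # low-valued features -> "not cat"
```

Real deep learning stacks many layers of such units and trains them on millions of examples, but the principle of repeated, automatic adjustment is the same.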
Could you give a concrete example of an algorithm developed for one of Equancy’s clients?
We developed a product recommendation engine for Sarenza, the online shoe shop. A self-learning algorithm enables this engine to showcase products likely to interest a visitor. The recommendations are based on the customer’s records (interactions with products he or she has viewed, saved, purchased, recommended on social networks, etc.) and on the records of visitors with similar profiles. Cross-referencing the two data sets generates highly targeted recommendations, creating the impression that the site’s product offer is in perfect harmony with the customer’s current needs. If a user is a frequent purchaser of boots, the engine might propose trainers similar in style to her most recently purchased boots! If she buys children’s shoes at Sarenza, the engine can adapt its recommendations as the children grow and their shoe sizes change! It is easy to understand why there is such a race for the data and records left on the web!
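The idea of cross-referencing a customer’s history with those of similar visitors can be illustrated with a minimal collaborative-filtering sketch. The interaction histories and product names below are invented for illustration; this is not Sarenza’s actual engine.

```python
from collections import Counter

# Hypothetical interaction histories (user -> products viewed/bought).
histories = {
    "alice": {"boots_a", "boots_b", "trainers_x"},
    "bob":   {"boots_a", "trainers_x", "trainers_y"},
    "carol": {"boots_b", "sandals_z"},
}

def recommend(user, k=3):
    """Suggest products favoured by visitors with similar histories."""
    mine = histories[user]
    scores = Counter()
    for other, theirs in histories.items():
        if other == user:
            continue
        overlap = len(mine & theirs)  # shared interactions = similarity
        for product in theirs - mine:  # products the user hasn't seen yet
            scores[product] += overlap
    return [p for p, _ in scores.most_common(k)]

print(recommend("alice"))  # -> ['trainers_y', 'sandals_z']
```

Here “alice” shares two products with “bob”, so bob’s remaining item ranks first; a real engine would add richer signals (recency, purchase vs. view, shoe sizes over time), but the cross-referencing principle is the same.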
Why is the recent victory of AlphaGo over the world Go champion considered a revolution?
Due to the nature and complexity of the game, Go is considered difficult for a computer to learn: the number of possible move combinations is immense (on the order of 361 factorial) and game positions are not easy to evaluate. Scientists had estimated that it would take another ten years before a machine would surpass humans in mastery of Go. The victory of AlphaGo, Google’s program, over the world champion Lee Se-dol in March 2016 therefore represents an important step in the creation of artificial intelligence. The match commentators praised the “intelligence” of AlphaGo’s moves; its game play was indistinguishable from that of a champion human player. Perhaps in the future, we may even be tempted to say: “too intelligent to be human”!
What can we look forward to and what is there to fear?
AlphaGo combined two “deep learning” algorithms in a very intelligent manner. On the one hand, an algorithm by which it learned to assess the relevance of a move: just like a human player, AlphaGo proved itself capable, at each move, of cutting down the number of possibilities to focus only on the interesting ones. On the other hand, an algorithm by which it learned to assess the strengths and weaknesses of a game position and the probability of winning from it. The performance of AlphaGo was universally hailed: it demonstrates the power of algorithms designed by human intelligence. Many intelligent systems, in domains of significant economic and societal impact, can be expected to benefit from this. Today, key voices are warning that data exploitation requires a legal framework, while others paint a more alarming picture, fearing that one day computers will become more intelligent than humans, escape their control and seek to destroy them… We are far from this scenario today!
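The division of labour between the two algorithms, one pruning candidate moves and one evaluating the resulting positions, can be caricatured in a few lines. The `policy` and `value` functions below are hypothetical stand-ins on a toy number game, not AlphaGo’s neural networks.

```python
# Toy game: a "position" is a number, a "move" adds to it,
# and positions close to 10 are assumed to be winning.

def policy(position, moves):
    """Hypothetical policy: score each legal move (higher = more promising)."""
    return {m: -abs((position + m) - 10) for m in moves}

def value(position):
    """Hypothetical value: estimated probability of winning from a position."""
    return max(0.0, 1.0 - abs(position - 10) / 10)

def choose_move(position, moves, top_k=3):
    scores = policy(position, moves)
    # Step 1 (policy): prune to the top_k most promising moves,
    # as a human player discards uninteresting options at a glance.
    candidates = sorted(moves, key=lambda m: scores[m], reverse=True)[:top_k]
    # Step 2 (value): among the survivors, pick the move whose
    # resulting position has the best estimated chance of winning.
    return max(candidates, key=lambda m: value(position + m))

print(choose_move(4, [1, 2, 3, 7, 9]))  # -> 7 (reaches position 11, near 10)
```

AlphaGo learned both functions from data (and combined them with tree search), whereas here they are hand-written; the sketch only shows how pruning and evaluation fit together.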