At the end of the 2019 edition of the Machine Learning Conference of Prague, we spent some time with one of the event’s speakers, Tomaso Poggio, who is the Eugene McDermott Professor in MIT’s Department of Brain and Cognitive Sciences and the director of the NSF Center for Brains, Minds and Machines at MIT. As one of the founders of computational neuroscience, he is a devoted supporter of interdisciplinarity as a fundamental instrument for bridging brains and machines, increasing our chances of understanding and developing a ‘real artificial intelligence’.
Nowadays, machine learning has become almost ubiquitous; its applications have proven successful in many different contexts. Professor Poggio compared the results achieved in 1995 for pedestrian detection with the current solutions developed by Mobileye for driverless cars: 10 false alarms per second then, versus one error every 40,000 km covered by the car today. This means that over the last twenty years we have been able to roughly double the accuracy of machine learning algorithms every year, improving it about one million times overall.
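As a back-of-the-envelope sanity check of that claim (the doubling-per-year figure and the twenty-year span are the article's; the exact numbers are approximate), compound doubling can be verified in a couple of lines:

```python
# Rough check: doubling accuracy every year for ~20 years
years = 20
improvement = 2 ** years  # compound doubling over the period
print(improvement)  # 1048576 — roughly one million, as stated
```

So a factor of two per year over two decades does indeed compound to about a million-fold improvement.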
“A similar trend cannot continue indefinitely,” adds Tomaso Poggio. “Machine learning is a mature discipline that now needs to expand its application potential to many different contexts: from images to voice, text, big data, and image generation. I think that rather than improving the accuracy of existing algorithms, it is now time to do basic research to develop new ones.”
Professor Poggio, Artificial Intelligence has already been through several hype cycles, followed by disappointment and criticism. Are we going to face another AI winter soon?
I definitely think we are not entering another AI winter like those we had in the past. However, some of the expectations around AI will have to be managed: people expecting a truly intelligent machine in less than ten years are going to be disappointed. We will need several disruptive innovations before we arrive at the Artificial Intelligence we are dreaming of. For now, our focus should be on the ocean of possible applications to which we can already apply currently available learning algorithms.
This is a golden age for applications, and indeed we cannot count the number of companies that are already improving the performance of existing solutions, raising the level of automation and delivering new services in many different contexts, from healthcare to manufacturing. Rather than fine words, we need more pragmatism to generate a series of incremental technologies that will have a deep impact on people’s lives.
Deep learning and reinforcement learning were inspired by neuroscience results obtained in the fifties, yet we do not know how these machine learning algorithms work. Do you think that not knowing the theory behind them is going to limit our ability to exploit these technologies?
Similar patterns have occurred before in human history: think of electricity. The discovery of the voltaic pile by Alessandro Volta in 1800, the result of a professional dispute with Luigi Galvani, opened the door to applications of electricity such as the electric motor and electric circuits. However, the theory behind these discoveries was conceived only later, around the 1860s, by James Clerk Maxwell.
In any case, we are now very close to defining a theory for deep learning and reinforcement learning. In my presentation, I introduced the three main theoretical puzzles of deep learning, two of which have already been addressed in some of my recent work justifying deep convolutional networks. For the third question, there are only some proposed theoretical approaches, but I think that within the next two years we will arrive at a more complete understanding of the overall framework for deep learning. Will this help us derive better algorithms? I am not sure, because a theory does not always lead directly to improved solutions and applications.