No, artificial intelligence won’t steal jobs – it will make you more creative and productive

Posted on: March 3, 2018

Marcos Lima, Head of the Marketing, Innovation and Distribution programme, EMLV (Ecole de Management Léonard de Vinci), Pôle Léonard de Vinci – UGEI

“Whatever your job is, the chances are that one of these machines can do it faster or better than you can.”

No, this is not a 2018 headline about self-driving cars or one of IBM’s new supercomputers. It was published by the Daily Mirror in 1955, when a computer took up as much space as a large kitchen and had less power than a pocket calculator. Computers were called “electronic brains” back then, and they evoked both hope and fear. More than 20 years later, little had changed: in a 1978 BBC documentary about silicon chips, one commentator argued that “they are the reason why Japan is abandoning its shipbuilding and why our children will grow up without jobs to go to”.

Artificial intelligence hype is not new

Type “artificial intelligence” (AI) into Google Books’ Ngram Viewer – a tool that shows how often a term appeared in print in any given year between 1800 and 2008 – and you can clearly see that our modern-day hype, optimism and deep concern about AI are by no means a novelty.
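The Ngram Viewer is a point-and-click web tool, but the same comparison can be scripted. Here is a minimal Python sketch that queries the unofficial JSON endpoint behind the viewer; that endpoint, its parameters and the corpus identifier are undocumented assumptions on my part and may change without notice.

```python
# A minimal sketch of querying Google Books Ngram data in Python rather than
# through the web viewer. It relies on the unofficial JSON endpoint behind the
# Ngram Viewer (books.google.com/ngrams/json); the endpoint, its parameters
# and the corpus identifier are assumptions and may change without notice.
import json
import urllib.parse
import urllib.request

def ngram_frequency(phrase, year_start=1800, year_end=2008):
    """Return (year, frequency) pairs for `phrase` in the Google Books corpus."""
    params = urllib.parse.urlencode({
        "content": phrase,
        "year_start": year_start,
        "year_end": year_end,
        "corpus": "en-2019",  # English corpus; this identifier is an assumption
        "smoothing": 0,       # raw yearly frequencies, no moving average
    })
    url = "https://books.google.com/ngrams/json?" + params
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    series = data[0]["timeseries"]  # one result object per queried phrase
    return list(zip(range(year_start, year_end + 1), series))

# Print the frequency of "artificial intelligence" every 20 years.
for year, freq in ngram_frequency("artificial intelligence")[::20]:
    print(year, f"{freq:.12f}")
```

Plotting the resulting series makes the boom-and-bust pattern described below immediately visible.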

The history of AI is a long series of booms and busts. The first “AI spring” took place between 1956 and 1974, with pioneers such as the young Marvin Minsky. This was followed by the “first AI winter” (1974-1980), when disillusion with the gap between machine learning and human cognitive capacities first led to disinvestment and disinterest in the topic. A second boom (1980-1987) was followed by another “winter” (1987-2001). Since the 2000s we’ve been surfing the third “AI spring”.

There are plenty of reasons to believe this latest wave of interest in AI will prove more durable. According to Gartner Research, technologies typically go from a “peak of inflated expectations” through a “trough of disillusionment” until they finally reach a “plateau of productivity”. AI-intensive technologies such as virtual assistants, the Internet of Things, smart robots and augmented data discovery are about to reach the peak. Deep learning, machine learning and cognitive expert advisors are expected to reach the plateau of mainstream applications in two to five years.

Narrow intelligence

We finally seem to have enough computing power to credibly develop what is called “narrow AI”, of which all the aforementioned technologies are examples. These are not to be confused with “artificial general intelligence” (AGI), which the scientist and futurologist Ray Kurzweil called “strong AI”. Some of the most advanced AI systems to date, such as IBM’s Watson supercomputer or Google’s AlphaGo, are examples of narrow AI. They can be trained to perform complex tasks such as identifying cancerous skin patterns or playing the ancient Chinese strategy game of Go. They are very far, however, from being capable of everyday general-intelligence tasks such as gardening, arguing or inventing a children’s story.