13 Predictions on Artificial Intelligence

Posted on: Dec 03, 2016

We have discussed several AI topics in previous posts, and by now the extraordinary disruptive impact AI has had over the past few years should seem obvious. What everyone is now wondering, however, is where AI will be in five years' time. I therefore find it useful to describe a few emerging trends we are starting to see today, as well as to make a few predictions about future developments in machine learning. The following list is meant to be neither exhaustive nor set in stone; it comes from a series of personal considerations that might be useful when thinking about the impact of AI on our world.

The 13 Forecasts on AI

1. AI is going to require less data to work

Companies like Vicarious and Geometric Intelligence are working to reduce the data burden needed to train neural networks. The amount of data required today represents the major barrier to the spread of AI (and the major competitive advantage for those who hold it), and the use of probabilistic induction (Lake et al., 2015) could solve this major problem on the way to AGI. A less data-intensive algorithm might eventually use the concepts it has learned and assimilated in richer ways, whether for action, imagination, or exploration.
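
To make the idea of learning from very little data concrete, here is a minimal sketch of few-shot classification with a nearest-prototype rule. This is a generic illustration, not the Bayesian program induction of Lake et al. (2015) or any company's actual method; all names and data are invented for the example.

    import numpy as np

    def fit_prototypes(examples_by_class):
        """Summarize each class by the mean of its few labeled examples."""
        return {label: np.mean(vectors, axis=0)
                for label, vectors in examples_by_class.items()}

    def classify(x, prototypes):
        """Assign x to the class whose prototype is nearest (Euclidean)."""
        return min(prototypes, key=lambda label: np.linalg.norm(x - prototypes[label]))

    # Toy 2-D "support set": only three labeled examples per class.
    support = {
        "cat": [np.array([1.0, 1.2]), np.array([0.8, 1.0]), np.array([1.1, 0.9])],
        "dog": [np.array([3.0, 3.1]), np.array([2.9, 3.3]), np.array([3.2, 2.8])],
    }
    prototypes = fit_prototypes(support)
    print(classify(np.array([1.0, 1.1]), prototypes))  # prints "cat"

Three examples per class are enough here because each concept is summarized by a reusable prototype rather than fitted with millions of parameters.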

2. New types of learning methods are the key

The new incremental learning technique developed by DeepMind, called transfer learning, allows a standard reinforcement-learning system to build on top of knowledge previously acquired, something humans can do effortlessly. MetaMind, instead, is working toward multitask learning, where the same ANN is used to solve different classes of problems and where getting better at one task makes the neural network better at another. A further advance MetaMind is introducing is the dynamic memory network (DMN), which can answer questions and deduce logical connections from a series of statements.
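
As a rough sketch of the general transfer-learning recipe (an illustration, not DeepMind's or MetaMind's actual systems), the code below freezes a network assumed to have been trained on a source task and trains only a small new head on the target task, so previously acquired knowledge is reused rather than relearned. All layer sizes and data are placeholders.

    import torch
    import torch.nn as nn

    # A backbone assumed to have been trained on a source task (placeholder sizes).
    backbone = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64), nn.ReLU())
    for p in backbone.parameters():
        p.requires_grad = False  # freeze the previously acquired knowledge

    head = nn.Linear(64, 10)  # only this small head is trained on the new task
    model = nn.Sequential(backbone, head)
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))  # toy batch
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

The frozen backbone plays the role of knowledge previously acquired; only the head's few parameters need data from the new task.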

3. AI will eliminate human biases and make us more “artificial”

Human nature will change because of AI. Simon (1955) argues that humans do not make fully rational choices, because optimization is costly and because their computational abilities are limited (Lo, 2004). What they do instead is “satisfice”, i.e., choose what is at least satisfactory to them. Introducing AI into our daily lives would probably put an end to satisficing. Becoming once and for all independent of computational effort would finally answer the question of whether behavioral biases exist and are intrinsic to human nature, or whether they are only shortcuts for making decisions in limited-information environments or constrained problems. Lo (2004) states that the satisficing point is reached through evolutionary trial and error and natural selection: individuals make choices based on past data and experience, and make their best guess. They learn by receiving positive and negative feedback, and they build heuristics to solve those problems quickly. When the environment changes, however, there is some latency in adaptation, and old habits no longer fit the new conditions: these are behavioral biases. AI would shrink those latency times to zero, virtually eliminating any behavioral bias. Furthermore, by learning over time from experience, AI is setting itself up as a new evolutionary tool: we usually do not evaluate all the alternatives because we cannot see all of them (our knowledge space is bounded).
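
Simon's distinction can be made concrete with a toy sketch, under purely illustrative assumptions: an optimizer evaluates every alternative to find the best one, while a satisficer stops at the first alternative that clears an aspiration threshold, trading outcome quality for computational effort.

    def optimize(options, utility):
        """Fully rational choice: evaluate every option, pick the best."""
        return max(options, key=utility)

    def satisfice(options, utility, aspiration):
        """Simon-style choice: stop at the first option that is good enough."""
        for option in options:
            if utility(option) >= aspiration:
                return option
        return options[-1]  # fall back if nothing clears the threshold

    payoffs = {"a": 0.4, "b": 0.7, "c": 0.9}  # invented payoffs
    print(optimize(list(payoffs), payoffs.get))        # "c": best, but scans everything
    print(satisfice(list(payoffs), payoffs.get, 0.6))  # "b": first good-enough option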

4. AI can be fooled

AI nowadays is far from perfect, and many researchers are focusing on how it can be deceived or cheated. Recently, a first method for misleading computer vision was demonstrated, using what are called adversarial examples (Papernot et al., 2016; Kurakin et al., 2016). Intelligent image-recognition software can indeed be fooled by subtly modifying a picture in such a way that the software classifies the data point as belonging to a different class. Interestingly enough, this method would not trick a human mind.
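
A minimal sketch of one well-known attack in this family, the fast gradient sign method (FGSM) of Goodfellow et al., on which the iterative methods of Kurakin et al. (2016) build: the image is nudged in the direction that most increases the model's loss, by an amount small enough to be roughly imperceptible. The model below is a stand-in, not a system from the cited papers.

    import torch
    import torch.nn as nn

    def fgsm(model, x, y, epsilon):
        """Fast gradient sign method: perturb x to increase the model's loss."""
        x = x.clone().detach().requires_grad_(True)
        nn.CrossEntropyLoss()(model(x), y).backward()
        x_adv = x + epsilon * x.grad.sign()  # tiny step that most hurts the model
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

    # Stand-in classifier on flattened 28x28 images (illustrative only).
    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
    x, y = torch.rand(1, 1, 28, 28), torch.tensor([3])
    x_adv = fgsm(model, x, y, epsilon=0.05)
    print((x_adv - x).abs().max())  # per-pixel change bounded by epsilon

Because the perturbation is bounded by epsilon per pixel, the adversarial image looks unchanged to a human while the model's prediction can flip.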

5. There are risks associated with AI development

It is becoming mainstream to look at AI as potentially catastrophic for mankind. If (or when) an ASI is created, its intelligence will largely exceed the human one, and it will be able to think and do things we cannot predict today. In spite of this, though, we think there are a few risks associated with AI in addition to the notorious existential threat. First, there is the risk that we will not be able to understand and fully comprehend what the ASI builds and how, whether positive or negative for the human race. Second, the transition period between narrow AIs and AGI/ASI will generate an intrinsic liability risk: who would be responsible in case of mistakes or malfunctions? Furthermore, there is of course the risk of who will hold the power of AI and how that power will be used. In this sense, we truly believe that AI should be run as a utility (a public service available to everyone), leaving some degree of decision power to humans to help the system manage the rare exceptions.

6. Real general AI will likely be a collective intelligence

It is quite likely that an ASI will not be a single terminal able to make complex decisions, but rather a collective intelligence. A swarm or collective intelligence (Rosenberg, 2015; 2016) can be defined as “a brain of brains”. So far, we have simply asked individuals to provide inputs and then aggregated those inputs after the fact into a sort of “average sentiment” intelligence. According to Rosenberg, the existing methods for forming a human collective intelligence do not even allow users to influence each other, and when they do, the influence happens only asynchronously, which causes herding biases. An AI, on the other hand, will be able to fill the connectivity gaps and create a unified collective intelligence, very similar to the ones other species have. A good inspirational example from the natural world is the bee, whose decision-making process highly resembles the human neurological one. Both use large populations of simple excitable units working in parallel to integrate noisy evidence, weigh alternatives, and finally reach a specific decision. According to Rosenberg, this decision is achieved through a real-time closed-loop competition among sub-populations of distributed excitable units. Each sub-population supports a different choice, and consensus is reached not by majority or unanimity, as in the average-sentiment case, but by a “sufficient quorum of excitation” (Rosenberg, 2015). An inhibition mechanism against the alternatives proposed by other sub-populations prevents the system from reaching a sub-optimal decision.
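
To illustrate the quorum mechanism Rosenberg describes, here is a toy simulation under assumed dynamics (not his actual model): sub-populations integrate noisy evidence for competing options while inhibiting their rivals, and a decision is reached only when one sub-population's excitation passes a quorum threshold, rather than by averaging votes.

    import random

    def quorum_decision(evidence, quorum=50.0, inhibition=0.2, seed=0):
        """Sub-populations integrate noisy evidence and inhibit their rivals
        until one reaches a 'sufficient quorum of excitation'."""
        rng = random.Random(seed)
        excitation = {option: 0.0 for option in evidence}
        while True:
            for option, strength in evidence.items():
                rivals = sum(v for k, v in excitation.items() if k != option)
                delta = rng.gauss(strength, 1.0) - inhibition * rivals / len(evidence)
                excitation[option] = max(excitation[option] + delta, 0.0)
            for option, level in excitation.items():
                if level >= quorum:
                    return option  # first sub-population to pass the quorum wins

    # Three nest sites with different underlying evidence strengths (invented).
    print(quorum_decision({"site_a": 1.0, "site_b": 0.6, "site_c": 0.3}))

The mutual inhibition term is what keeps weakly supported alternatives from reaching the quorum, which is the role Rosenberg attributes to it in preventing sub-optimal decisions.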