Differentiating between AI, Machine Learning and Deep Learning

Posted on: May 22, 2017

With all the quickly evolving jargon in the industry today, it's important to be able to differentiate between AI, machine learning and deep learning. The easiest way to think of their relationship is as a set of concentric circles. AI, the idea that came first, has the largest area; machine learning, which blossomed later, is a subset of AI; and deep learning, which is driving today's AI explosion, fits inside both.

Machine learning takes some of the core ideas of AI and focuses them on solving real-world problems with neural networks designed to mimic our own decision-making. Deep learning focuses even more narrowly on a subset of machine learning tools and techniques, and applies them to just about any problem that requires "thought", whether human or artificial.

Machine learning is well-suited for problem domains typically found in the enterprise, such as making predictions with supervised learning methods (e.g., regression and classification) and knowledge discovery with unsupervised methods (e.g., clustering). Deep learning is an area of machine learning that has achieved significant progress in application areas such as pattern recognition, image classification, natural language processing (NLP), and autonomous driving. Machine learning techniques like random forests and gradient boosting often perform better in the enterprise problem space than deep learning.
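To make the supervised/unsupervised distinction concrete, here is a minimal sketch using scikit-learn on synthetic tabular data; the dataset, hyperparameters, and choice of library are illustrative assumptions, not part of the article.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

# Synthetic tabular data standing in for a typical enterprise problem.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=42)

# Supervised learning: a random forest is trained on labeled rows and
# evaluated on held-out rows (a prediction task).
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("classification accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Unsupervised learning: k-means groups the same rows without using the
# labels at all (a knowledge-discovery task).
km = KMeans(n_clusters=2, n_init=10, random_state=42)
clusters = km.fit_predict(X)
print("cluster sizes:", [int((clusters == c).sum()) for c in set(clusters)])

The random forest here is the same kind of ensemble method the paragraph above mentions as often outperforming deep learning on enterprise-style tabular problems.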

Deep learning attempts to learn multiple levels of features from large data sets with multi-layer neural networks and to make predictive decisions for new data. This implies two phases: first, the neural network is "trained" on a large number of input examples; second, the trained network is used for "inference", making predictions on new data. Because of the large number of parameters and the size of the training set, the training phase requires tremendous amounts of computing power.
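The two phases can be sketched with a small multi-layer network; PyTorch is used here purely as an assumed framework (the article names none), and the data and network sizes are toy values.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic training set: 1,000 samples with 32 features and 3 classes.
X_train = torch.randn(1000, 32)
y_train = torch.randint(0, 3, (1000,))

# Multi-layer network: stacked hidden layers learn successive levels of features.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Phase 1: training -- repeatedly adjust all parameters to fit the data.
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    optimizer.step()

# Phase 2: inference -- the trained network predicts labels for new data.
model.eval()
with torch.no_grad():
    X_new = torch.randn(5, 32)
    predictions = model(X_new).argmax(dim=1)
print("predicted classes for new samples:", predictions.tolist())

Even in this toy sketch, training loops over the whole data set and updates every parameter many times, while inference is a single forward pass, which is why the training phase dominates the compute cost.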