
Machine Learning and AI Have Roots in Neural Networks

Posted on Apr 20, 2017


Artificial intelligence (AI) and machine learning are surging in popularity as these technologies become the foundation for making networks smarter, faster, and more intuitive. Today, machine learning and AI are being touted as key to making the Internet of Things (IoT) and 5G a success.

In fact, at the recent Mobile World Congress in Barcelona, Spain, carriers like SK Telecom and Reliance talked about feeding machine learning systems with analytics from network monitoring.

Plus, big-name companies like IBM are incorporating AI into well-known projects like Watson, which is being used for everything from security to IoT to the cloud.

But AI and machine learning aren’t new technologies. They are based upon deep learning neural networks, a technology first conceived more than 70 years ago.

According to MIT News, back in 1944 Warren McCullough and Walter Pitts, two researchers from the University of Chicago, first proposed the idea of neural networks, an approach in which a computer learns to perform tasks by analyzing patterns in example data. McCullough and Pitts went on to help found MIT's cognitive science department.

Neural networks are loosely modeled on the human brain: thousands or even millions of processing nodes are interconnected. Most neural networks are organized into layers of nodes, and an individual node may be connected to several nodes in adjacent layers, receiving data from the layer below it and sending data to the layer above it.
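To make that layered structure concrete, here is a minimal sketch (our illustration, not code from the article): each node sums the weighted data it receives from every node in the layer below and passes the result through an activation function to the layer above. The layer sizes, weights, and tanh activation are illustrative assumptions.

```python
import numpy as np

def layer(inputs, weights, biases):
    """One layer of nodes: every node aggregates weighted inputs from the layer below."""
    return np.tanh(weights @ inputs + biases)

rng = np.random.default_rng(0)

# A toy network: 4 input nodes -> 5 hidden nodes -> 2 output nodes.
# Sizes and random weights are assumptions for illustration only.
w1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
w2, b2 = rng.normal(size=(2, 5)), np.zeros(2)

x = rng.normal(size=4)          # data arriving at the input layer
hidden = layer(x, w1, b1)       # each hidden node combines all 4 inputs
output = layer(hidden, w2, b2)  # each output node combines all 5 hidden values
print(output)
```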

The original neural networks conceived by McCullough and Pitts weren’t arranged in layers, and the researchers didn’t have any specific training mechanism, but they did show that a neural network could compute any function that a digital computer could compute, and that the human brain could be thought of as a computing device.
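One way to see why such simple units are computationally powerful is that a threshold node can act as a logic gate. The sketch below is a modern reconstruction in the spirit of the McCullough-Pitts model, not their original formulation: a node "fires" (outputs 1) when the sum of its inputs reaches a fixed threshold, and choosing the threshold yields basic gates like AND and OR.

```python
def mp_neuron(inputs, threshold):
    """A McCulloch-Pitts-style unit: fires if the input sum meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

AND = lambda a, b: mp_neuron([a, b], threshold=2)  # fires only if both inputs fire
OR = lambda a, b: mp_neuron([a, b], threshold=1)   # fires if either input fires

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```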

The Perceptron, demonstrated by Cornell University psychologist Frank Rosenblatt in 1957, is considered the first trainable neural network. It was similar to a modern neural net, except that it had only one layer with adjustable weights and thresholds.
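A minimal sketch of how such a single-layer network can be trained follows; it is an illustrative reconstruction of the perceptron learning rule, not Rosenblatt's original implementation, and the OR-style toy data, learning rate, and epoch count are assumptions.

```python
import numpy as np

def predict(w, b, x):
    """Single layer: weighted sum plus threshold (here, fire if the sum is positive)."""
    return 1 if w @ x + b > 0 else 0

def train_perceptron(X, y, lr=0.1, epochs=20):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            error = yi - predict(w, b, xi)  # nonzero only when the prediction is wrong
            w += lr * error * xi            # adjust weights toward the correct label
            b += lr * error                 # adjust the threshold the same way
    return w, b

# Toy linearly separable data (assumed for illustration): learn logical OR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([predict(w, b, xi) for xi in X])  # expected: [0, 1, 1, 1]
```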

But at the time, researchers noted that the Perceptron was limited: a single layer of adjustable weights can perform only a narrow range of computations, and most practical tasks require networks with more layers.

Neural networks experienced a resurgence around the year 2000, thanks in large part to the computer gaming industry, whose complex imagery and rapid pace drove the development of the powerful graphics chips that deep learning tools now rely on. Deep learning is the underlying technology behind much of today's artificial intelligence.

Today, the technology is again being lauded thanks to research from the Center for Brains, Minds, and Machines (CBMM), which has conducted a three-part study on neural networks. The first part, published last month in the International Journal of Automation and Computing, discusses the type and range of computations that deep-learning networks can execute. The second and third parts address the problem of guaranteeing that a network has found the settings that best fit its training data, and the cases in which a network is so closely tuned to the specifics of its training data that it fails to generalize to other instances of the same category.
