How can machine learning create features in human-understandable ways?

Posted on: Apr 27, 2017

Why explain and re-explain logic when you can design a machine learning system to learn it for you automatically? Performing activities that add value requires access to data and intelligence. Start by defining the data you need to make intelligent decisions.

Without enough data, we have problems that not even the most intelligent machine learning systems can solve. Simple directions become extremely difficult without a destination. Processing a healthcare claim is impossible without an identified payer. Finding the best vet for a pet is difficult without knowing the species.

Machine learning is about intelligence, but that intelligence requires data. Drug design, ad placement and web search can all improve dramatically with intelligent agents that adapt and make decisions as their environments change. This is where we enter the space of agent-based modeling (ABM). The difference between an agent that appears to have humanistic characteristics and an agent that keeps running into the wall, determined to clean that one-inch spot it missed, is the ability to adapt.

Effective agents can devise a new strategy and have the rules to act on that new information. Complex adaptive systems (CAS) are agents, acting individually or as a system (e.g., a swarm of drones), whose behavior changes, evolves or adapts. It's almost as if these agents can think in teams.

Machine learning and agent-based modeling

We wish machines could think, but we know they can't. A plane flying on autopilot intercepts the final course to the runway almost flawlessly. Cars that parallel-park automatically appear to float into parking spots. Smart rooms "understand" when you want the lights dim and the music low, or the lights on and the music raging. At least it seems as though planes sense, cars feel and rooms think. In reality, these actions are observations designed around defined workflows of operations. Machines work on patterns, and the better we comprehend how these operational flows are modeled, the faster we can apply the value of automated operations to our businesses.

The models for autonomous operations

The first operational flow is the agent-based modeling cycle.

  1. Create an initial internal model.
  2. Observe the world and take note of rewards received.
  3. Update the internal model.
  4. Take action based on the internal model and the current observations; go back to Step 2 and repeat.

After Step 4 the cycle loops back on itself, making the operational flow adaptive and autonomous: it can process updates based on "learned information."
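
In code, the loop might look like the minimal Python sketch below. The class name, the actions and the bandit-style reward update are illustrative assumptions, not anything the cycle itself prescribes; the point is the shape of the loop, not the particular update rule.

    import random

    class AdaptiveAgent:
        """Illustrative agent that follows the four-step cycle above."""

        def __init__(self, actions):
            # Step 1: create an initial internal model
            # (an estimated reward for each available action).
            self.estimates = {a: 0.0 for a in actions}
            self.last_action = random.choice(actions)

        def step(self, reward):
            # Step 2: observe the world and note the reward received.
            # Step 3: update the internal model toward that reward.
            old = self.estimates[self.last_action]
            self.estimates[self.last_action] = old + 0.1 * (reward - old)
            # Step 4: act on the updated model, then go back to Step 2.
            self.last_action = max(self.estimates, key=self.estimates.get)
            return self.last_action

    # The reward would come from whatever environment the agent is in.
    agent = AdaptiveAgent(["dock", "keep_cleaning"])
    print(agent.step(reward=1.0))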

The second operational flow is the machine learning cycle.

  1. Create an internal model.
  2. Observe the world and record observations in history.
  3. Update your internal model based on history.
  4. Take action and record it in history; go back to Step 2 and repeat.

They look almost identical. They're not. The agent acts alone, while the machine learning cycle makes recommendations for action and records them in history. We have to consider more than just past actions when designing agent actions.
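
A matching sketch makes the difference concrete. The mean-reward model and the method names are again illustrative assumptions; what matters is the history list, and that the model is refit from history rather than patched in place.

    from collections import defaultdict

    class LearningCycle:
        """Illustrative machine learning cycle that records everything."""

        def __init__(self):
            # Step 1: create an internal model (mean reward per action).
            self.estimates = {}
            self.history = []  # the ingredient the agent loop above lacks

        def record(self, action, reward):
            # Step 2: observe the world and write the result into history.
            self.history.append((action, reward))
            # Step 3: update the internal model based on the whole history.
            totals = defaultdict(lambda: [0.0, 0])
            for a, r in self.history:
                totals[a][0] += r
                totals[a][1] += 1
            self.estimates = {a: s / n for a, (s, n) in totals.items()}

        def recommend(self, default):
            # Step 4: recommend an action; something else performs it, and
            # the outcome comes back through record() on the next pass.
            if not self.estimates:
                return default
            return max(self.estimates, key=self.estimates.get)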

A machine walks into a bar

Another interesting challenge is how we factor in the effect of the agent in its environment. An agent that performs a recommended machine learning action affects the surrounding world. By performing an action, the agent also affects future outcomes. This concept is captured in game theory's El Farol Bar problem, which was put forward by W. Brian Arthur in 1994, based on the early work of B.A. Huberman and Tad Hogg.

El Farol is a bar on Canyon Road in Santa Fe, New Mexico. The problem assumes a finite population, and Thursday night is the big night at the bar. Everybody wants to go. The challenge is that it's a small place. If no more than 60 percent of the town's residents go, they'll have a better time than if they had stayed home. If more than 60 percent go, the bar is crowded enough that they would have preferred to stay home. You don't have the luxury of waiting to see whether others go; the entire population must decide at the same time. Do you go?

The logic is that if everyone uses the same strategy, then everyone is bound to be unhappy. Herbert Gintis, in the book Game Theory Evolving, explores many variants of this problem. One common approach is the mixed-strategy Nash equilibrium, in which each patron randomizes over going and staying so that no one can do better by unilaterally changing strategy. This yields probability estimates of how crowded the bar might be, given the total number of patrons and the utility of going or staying. Other approaches let potential patrons communicate with each other before deciding to go to the bar (telling the truth is not required).
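
A toy simulation shows why a single shared strategy struggles. Here every resident independently goes with probability 0.6, a rough stand-in for the symmetric mixed strategy; the population size, number of weeks and random seed are assumptions of ours, not part of the original problem.

    import random

    POPULATION = 100
    THRESHOLD = 0.6   # the bar is fun while at most 60% of town shows up
    WEEKS = 52

    def simulate(p_go=0.6, seed=1):
        """Every resident goes with probability p_go each Thursday."""
        rng = random.Random(seed)
        good_nights = 0
        for _ in range(WEEKS):
            attendance = sum(rng.random() < p_go for _ in range(POPULATION))
            if attendance <= THRESHOLD * POPULATION:
                good_nights += 1  # those who went were glad they did
        return good_nights / WEEKS

    print(f"share of uncrowded Thursdays: {simulate():.0%}")

Because everyone plays the same strategy, attendance hovers right at the threshold and roughly half the Thursdays end up crowded. No record of past attendance alone fixes this: each agent's decision changes the very quantity it is trying to predict, which is exactly the feedback effect described above.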

As we design actions for robots, architect machine intelligence and build agent-based models, a simple database of past observations isn't enough. We must build in logic that handles these environmental feedback effects.

Practical decisions and novelty

Have you read a good novel lately? Novels are very enjoyable, but reading a work of fiction holds little practical utility. Integrating agent-based models with machine intelligence requires policies, procedures and guidelines for designing algorithms that are situationally aware and offer practical utility.

A rekindled interest in best practices will produce new guidelines for future development, eventually helping you find articles like this one faster on the web. Before that happens, we'll need more data based on observations from our designed intelligence models, not from data puddles we happened to step in.