Machine Learning Audits in the 'Big Data Age'

Posted on: Apr 24, 2017

Machine learning, in essence software that recognizes patterns without being explicitly programmed, is revolutionizing many industries. It lets us find answers and unexpected relationships in data that were impossible to uncover with the “cookbook recipe” style of programming that powers most of our software today.

However, there is a downside to the use of machine learning: the “black box effect.” In traditional programming built on the recipe approach, if a decision-maker or assurance professional wanted to know why a decision was being made, software engineers or analysts could peek inside the program and see that threshold X was reached, which triggered the effect. With many machine learning models, by contrast, it is extremely difficult to look inside and ascertain why a certain result was returned.
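To make the contrast concrete, here is a minimal sketch in Python; the loan-approval scenario, thresholds, and feature names are hypothetical, invented purely for illustration. The rule-based function exposes its logic directly, while a trained model answers the same question with no inspectable threshold to point at.

```python
# Recipe-style logic: every outcome traces to an explicit, readable threshold.
def approve_loan_rules(income: float, debt_ratio: float) -> bool:
    # An auditor can point to the exact condition that triggered the decision.
    return income > 40_000 and debt_ratio < 0.35

# Machine-learning alternative: a model trained on past decisions answers the
# same question, but its reasoning is spread across hundreds of fitted trees
# rather than a line of code anyone can read.
from sklearn.ensemble import RandomForestClassifier

X = [[55_000, 0.20], [30_000, 0.50], [80_000, 0.10], [25_000, 0.60]]  # [income, debt_ratio]
y = [1, 0, 1, 0]  # past approve (1) / deny (0) decisions

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(approve_loan_rules(45_000, 0.30))    # True, and we can say exactly why
print(model.predict([[45_000, 0.30]])[0])  # a prediction with no threshold to cite
```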

For instance, borrowing an example from Carlos Guestrin, the Amazon Professor of Machine Learning at the University of Washington, a model “trained” on a set of images can tell whether a given image shows a husky or a wolf with a high degree of accuracy. Unfortunately, the algorithm has one major problem that is unbeknownst to its trainers: all the wolf pictures on which the model was trained had snow in the background. So, when an image of a husky with snow in the background appears, it is classified as a wolf.

A misclassification in this direction would not necessarily be catastrophic, but consider an algorithm that determines which animals may enter a children’s park: the result could be disastrous if a wolf appears on a sunny, snowless day and is classified as a husky.
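The trap is easy to reproduce. Below is a toy sketch, not Guestrin's actual experiment: each “image” is reduced to two made-up features, one summarizing the animal itself and one flagging snow in the background. Because every wolf in this flawed training set has snow and every husky does not, the classifier leans on the snow flag, and both failure directions described above follow.

```python
from sklearn.linear_model import LogisticRegression

# Toy "images": [animal_feature, snow_in_background].
# The animal feature overlaps between the classes, but the snow flag is
# perfectly correlated with "wolf" in this flawed training set.
X_train = [[0.62, 1], [0.55, 1], [0.70, 1], [0.48, 1],  # wolves, all with snow
           [0.45, 0], [0.52, 0], [0.38, 0], [0.50, 0]]  # huskies, all without
y_train = [1, 1, 1, 1, 0, 0, 0, 0]                      # 1 = wolf, 0 = husky

model = LogisticRegression().fit(X_train, y_train)

# The spurious snow feature now drives the prediction in both directions:
print(model.predict([[0.40, 1.0]])[0])  # husky in snow -> typically labeled wolf (1)
print(model.predict([[0.65, 0.0]])[0])  # wolf, no snow -> typically labeled husky (0)
```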

Are Algorithms Providing the Intended Service?

With algorithms increasingly dictating our credit scores, which jobs we can interview for, and even, heaven forbid, what our jail sentences will be, how can companies gain assurance that the algorithms they employ are providing the intended service and not implicitly reinforcing policies they never intended? With the majority of algorithms in use today, there is a significant “black box effect” and a dearth of assurance around the operating intricacies of these models.

Going back to our wolf/husky example, it is extremely difficult to find out why the algorithm made the decisions it did. However, machine learning researchers are actively developing techniques to address such scenarios.
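One prominent answer comes from Guestrin's own group: LIME (Local Interpretable Model-agnostic Explanations), which perturbs a single input and fits a simple surrogate model around it to show which features drove that one prediction; the husky/wolf snow experiment itself appears in the LIME paper. Here is a minimal sketch applying the open-source lime package to the toy model from the previous sketch; the feature names are our own illustrative labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from lime.lime_tabular import LimeTabularExplainer

# Re-fit the toy husky/wolf model from the sketch above.
X_train = np.array([[0.62, 1], [0.55, 1], [0.70, 1], [0.48, 1],
                    [0.45, 0], [0.52, 0], [0.38, 0], [0.50, 0]])
y_train = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = wolf, 0 = husky
model = LogisticRegression().fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["animal_feature", "snow_in_background"],
    class_names=["husky", "wolf"],
    mode="classification",
)

# Explain the husky-in-snow prediction by perturbing this single input
# and fitting a simple, inspectable surrogate model around it.
explanation = explainer.explain_instance(
    np.array([0.40, 1.0]), model.predict_proba, num_features=2
)

# Lists each feature's weight in the local surrogate; a large weight on
# snow_in_background exposes the spurious cue the model learned.
print(explanation.as_list())
```

The printed weights make the audit question answerable: if snow_in_background carries most of the weight, the model is not really recognizing wolves at all, which is precisely the kind of finding a machine learning audit is meant to surface.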