
Shining a Light on the Black Box of Machine Learning: Do You Need to Know Why Your Model Makes its Predictions?

by Danielle Baghernejad on January 22, 2018 at 3:50 PM


In pursuit of value, healthcare stakeholders are chasing big data and using advanced analytics to validate and improve their financial, clinical and operational effectiveness. The role of data scientists is growing as they develop algorithms to turn chaotic data into organized insight. However, as machine learning algorithms grow in complexity, they are also becoming harder to understand.

How can we measure the usefulness of a machine learning model?

Traditionally, we have answered this question through accuracy, engineering models to maximize predictive performance. In doing so, we inevitably build more complex models, and as complexity increases, our ability to understand the model decreases. Many of the popular models now being deployed are less a well-understood model than a black box: we put in data and receive accurate predictions with little understanding of the algorithm's inner workings.
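To make the accuracy-only view concrete, here is a minimal sketch (my own illustration, not from the article) that fits an interpretable logistic regression and a more complex gradient-boosted ensemble and compares them purely on test accuracy. The scikit-learn breast-cancer dataset and the specific model choices are assumptions standing in for any tabular healthcare problem.

```python
# A minimal sketch (not from the article) comparing an interpretable model
# with a more complex "black box" when accuracy is the only yardstick.
# Assumes scikit-learn is available; the breast-cancer dataset stands in
# for any tabular healthcare problem.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Interpretable baseline: one signed weight per feature.
linear = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# More complex ensemble: hundreds of trees, no single set of weights to read.
boosted = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:",
      accuracy_score(y_test, linear.predict(X_test)))
print("gradient boosting accuracy:  ",
      accuracy_score(y_test, boosted.predict(X_test)))
```

Judged on these two numbers alone, the more complex model often looks like the better choice, which is exactly the pressure that pushes practice toward black-box models.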

For example, neural networks have seen a rise in popularity lately, and with the recent success of AlphaGo defeating the reigning world Go champion, this trend is unlikely to change. Like a brain, a neural network has layers of artificial neurons, firing off signals and pruning away unneeded neurons as learning occurs. While their performance is spectacular, the resulting model is shrouded in mystery: even the researchers building it do not fully understand how the neurons are working together.

If accuracy is all we care about, then models like these are exactly what we need. However, for many machine learning applications, we not only care what the model predicts, but we also want to know why it makes the predictions it does.

In the case of medical prognosis, for example, we want to know not only whether the model detects a disease, but why, so we can treat patients effectively. Once a disease is detected, what should the doctor do? We cannot see the nuances in the data that the model picks up on, which makes it harder to fully utilize the model in practice. The doctor must fall back on standard training, which might not address the specific signs in the data that the model found significant. While helpful, these models still cannot fully deliver what is needed to be effective in situations like these.

Thus, the ability to augment models with layers of understanding is the next milestone to strive toward in machine learning. Current research is exploring ways to take those highly accurate models and build a better understanding of their behavior.

For example, in a recent paper by Intermedix, we define Class Variable Importance (CVI) in tree-based models to help measure relationships for important variables. Linear regression, one of the most interpretable algorithms, is so readily understood because it generates a model with a numerical weight for each variable in the data set. The larger the weight, the more important the variable, and the sign of the weight indicates whether the variable is positively or negatively correlated with the target variable. Tree-based models cannot provide the same interpretation; currently they can only measure how important a variable is to the model overall. CVI is designed to measure not just whether a variable is important, but whether it is important to a particular outcome.
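The contrast described above can be seen directly in code. The sketch below (an assumption-laden illustration using scikit-learn's breast-cancer dataset, not the CVI implementation from the paper) prints the signed weights of a logistic regression next to the unsigned feature_importances_ of a random forest: the linear weights say which outcome each variable pushes toward, while the tree importances only say how much a variable matters overall.

```python
# A minimal sketch (assumes scikit-learn and the breast-cancer dataset;
# this is NOT the CVI implementation from the Intermedix paper, only an
# illustration of the contrast between linear weights and tree importances).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names

# Linear model: each feature gets a signed weight, so we can read both
# how important it is and which outcome it pushes toward.
linear = LogisticRegression(max_iter=5000).fit(X, y)
top = np.argsort(np.abs(linear.coef_[0]))[::-1][:5]
for i in top:
    direction = "positive" if linear.coef_[0][i] > 0 else "negative"
    print(f"{names[i]:25s} weight={linear.coef_[0][i]:+.3f} ({direction})")

# Tree ensemble: feature_importances_ ranks features overall, but the
# numbers are unsigned and say nothing about which class a feature favors.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][:5]
for i in top:
    print(f"{names[i]:25s} importance={forest.feature_importances_[i]:.3f}")
```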

Similar ideas are being explored with other classes of models, the goal being to produce reasoning as well as a prediction. From detecting significant sections of a picture in a computer vision model to listing key interactions in a neural net, adding a layer for human understanding increases the success of machine learning models, pairing people with machines to augment both.
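As one hypothetical illustration of attaching class-level reasoning to a prediction, the sketch below computes a per-class permutation importance for a black-box classifier: it shuffles each feature among the instances of a chosen class and records how much the predicted probability of that class drops. This is a generic stand-in, not the paper's CVI definition or any specific published method; the helper name class_importance and the dataset are my own assumptions.

```python
# A minimal sketch of one generic way to get class-specific explanations
# from a black-box classifier: per-class permutation importance.
# Assumes scikit-learn; this is not the CVI method from the paper.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def class_importance(model, X, y, target_class, n_top=5, seed=0):
    """Average drop in predicted probability of `target_class` (on that
    class's own instances) when each feature is shuffled."""
    rng = np.random.default_rng(seed)
    mask = y == target_class
    base = model.predict_proba(X[mask])[:, target_class].mean()
    drops = []
    for j in range(X.shape[1]):
        Xp = X[mask].copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-class link
        drops.append(base - model.predict_proba(Xp)[:, target_class].mean())
    return sorted(zip(names, drops), key=lambda t: -t[1])[:n_top]

# Which features matter most for predicting class 1 (benign in this dataset)?
for feature, drop in class_importance(model, X, y, target_class=1):
    print(f"{feature:25s} probability drop={drop:.3f}")
```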

For organizations looking to grow their use of data science, it’s important to have a clear goal in mind for how the insights gleaned from machine learning will be used. First, your team will need to determine which matters more: transparency or accuracy. Second, you also need to assess whether machine learning is practical for the problem at hand. The more complex the problem, the more time it takes to build the algorithm and develop a prediction that can be acted upon.

Finally, if data science is valuable to your organization, keep in mind that technology is always growing and changing. Make a goal to prioritize the use of data science in the future and invest in the infrastructure and talent necessary to address the financial, clinical and operational problems machine learning can solve.


This post was written by Danielle Baghernejad

Danielle Baghernejad is a Data Scientist at Intermedix. Prior to joining Intermedix, Danielle was a Data Analyst at TechnologyAdvice. Danielle obtained her bachelor's degree in mathematics and computer science at the University of Tennessee-Knoxville and her master of science in mathematics at Middle Tennessee State University.
