Google’s “What-If” Tool Lets You Analyze ML Models Without Writing Code

The What-If Tool For Probing Machine Learning Models

When it comes to building a Machine Learning (ML) system, training a model is not enough. Rather than behaving like a typical programmer, you need to act as a detective and ask lots of questions. By being inquisitive, you will gain a better understanding of how the model works.

Some of the questions you need to ask include: Do changes to a datapoint affect the predictions the model makes? Does the model behave differently for different groups? Is the dataset I am testing my model on diverse, and if so, how diverse is it?

As you can see, getting concrete answers to these kinds of questions is not easy. Most ML practitioners end up writing one-off code to analyze a particular model. This approach is inefficient and leaves gaps: for instance, it locks out non-programmers, who cannot participate in the process even when their input is needed.

This is one of the problems that Google AI's PAIR (People + AI Research) initiative aims to address. It wants to bring a broader range of people into the process of examining, evaluating and debugging machine learning systems.

Google has already taken a first step toward achieving this goal by launching the What-If Tool, a new feature of the TensorBoard application that allows interested users to analyze a machine learning model without writing a single line of code. Given a dataset and a pointer to a TensorFlow model, the What-If Tool produces an interactive interface that can be used to explore the model's results.
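As a hedged illustration of what that setup can look like, the sketch below uses the notebook flavor of the tool (the witwidget package) rather than the TensorBoard dashboard. The toy examples and the predict_fn are made-up stand-ins for a real dataset and a trained model, not anything from the original article.

    # Notebook variant of the What-If Tool; the TensorBoard version is
    # configured through the dashboard UI instead of code.
    import numpy as np
    import tensorflow as tf
    from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

    def make_example(age, income, label):
        # Pack one row of toy data into a tf.train.Example proto.
        return tf.train.Example(features=tf.train.Features(feature={
            'age': tf.train.Feature(float_list=tf.train.FloatList(value=[age])),
            'income': tf.train.Feature(float_list=tf.train.FloatList(value=[income])),
            'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
        }))

    # Hypothetical toy dataset.
    examples = [make_example(a, i, l) for a, i, l in
                [(25.0, 40000.0, 0), (47.0, 92000.0, 1),
                 (35.0, 61000.0, 1), (52.0, 30000.0, 0)]]

    def predict_fn(examples):
        # Stand-in for an already-trained binary classifier:
        # returns [P(negative), P(positive)] for each example.
        p = np.random.default_rng(0).uniform(size=len(examples))
        return np.stack([1.0 - p, p], axis=1)

    config = (WitConfigBuilder(examples)
              .set_custom_predict_fn(predict_fn)
              .set_label_vocab(['denied', 'approved']))
    WitWidget(config, height=600)  # renders the interactive interface

Inside TensorBoard itself none of this code is needed: the dashboard only asks for a file of serialized examples and the address of a model served through TensorFlow Serving.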

Finding the Counterfactuals

The What-If Tool can visualize your dataset on its own and also lets you edit the examples in it. With the click of a button, you can find the nearest point at which the model gives a different prediction. Such points are known as ‘counterfactuals’, and they play a critical role in revealing the model’s decision boundaries.
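To make the idea concrete, here is a rough, hypothetical sketch of what a counterfactual search amounts to: given a query point, find the nearest dataset example that the model classifies differently. The toy classifier and the choice of L1 distance are illustrative assumptions, not the tool's exact implementation.

    import numpy as np

    def nearest_counterfactual(query, dataset, predict):
        # Return the closest example whose predicted class differs
        # from the prediction for the query point.
        query_class = predict(query)
        best, best_dist = None, np.inf
        for x in dataset:
            if predict(x) != query_class:
                dist = np.sum(np.abs(x - query))  # L1 distance
                if dist < best_dist:
                    best, best_dist = x, dist
        return best

    # Toy classifier: positive if the feature sum crosses a threshold.
    predict = lambda x: int(x.sum() > 1.0)
    data = np.random.default_rng(0).uniform(size=(100, 2))
    print(nearest_counterfactual(np.array([0.9, 0.05]), data, predict))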

Analyzing Performance and Determining the Fairness of Algorithms

You can also use the What-If Tool to explore the effects of different classification thresholds, taking into account constraints such as numerical fairness criteria.
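As a rough illustration of the concept (not the tool's own code), the snippet below sweeps a decision threshold and compares positive-prediction rates across two groups; the scores and group labels are made-up stand-ins for real model output and a sensitive attribute.

    import numpy as np

    rng = np.random.default_rng(1)
    scores = rng.uniform(size=200)               # model scores in [0, 1]
    groups = rng.choice(['A', 'B'], size=200)    # a hypothetical sensitive attribute

    def positive_rate(threshold, group):
        # Fraction of a group's examples classified as positive at this threshold.
        mask = groups == group
        return float(np.mean(scores[mask] >= threshold))

    for t in (0.3, 0.5, 0.7):
        print(f"threshold={t}: "
              f"group A rate={positive_rate(t, 'A'):.2f}, "
              f"group B rate={positive_rate(t, 'B'):.2f}")

Comparing such per-group rates while moving the threshold is one simple way to reason about a fairness constraint like demographic parity.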

Demos of the What-If Tool

To show the effectiveness of the What-If Tool, Google has released several demos that use pre-trained models. The demos cover:

  • Detecting misclassifications in a model that classifies plants
  • Assessing the fairness of binary classification models
  • Investigating model performance across different subgroups

Putting the What-If Tool in Practice

To ensure that the What-If Tool is effective in real-life ML applications, Google tested it internally with several teams on different applications. One team discovered that their model was ignoring an entire feature of the dataset, which led them to fix a bug in the model. Another team used the tool to organize their examples visually and, in doing so, uncovered patterns in them. Overall, the hope is that the What-If Tool will give people a better understanding of their ML models.
