Inside the black box: understanding Deep Convolutional Neural Networks through visualization

healthy-neuron-NIH

Credit: National Institute on Aging, National Institutes of Health

Curses and Blessings of Deep Convolutional Neural Networks

Deep Convolutional Neural Networks (CNNs) are nowadays the state of the art in pattern recognition and achieve results comparable to humans in classifying images. The other side of the coin, however, is the complexity of deep CNNs: it remains difficult to understand how these networks work and what exactly they have learned. As described in our recent publication ‘Understanding Regularization to Visualize Convolutional Neural Networks’, better knowledge about these topics helps to improve the learning procedure and to detect corruptions in data sets.

Why Visualize Features?

Feature visualization methods address the issue of extracting knowledge from a deep CNN in an elegant way. As human perception is strongly influenced by visual impressions, representing complex relationships and high-dimensional data visually seems a natural and logical way to go.

We aim to apply feature visualization in order to analyze CNNs. Visual analysis in general tries to extract knowledge from (high-dimensional) data by visualizing it. In the case of visualizing CNNs, our goals are:

  1. Assessing quality
  2. Building trust

Quality Assessment
When assessing quality, we can distinguish between the quality of the trained model and the quality of the data. The quality of the model can, for example, be assessed by visualizing the network’s weights, which makes it possible to evaluate the convergence of the network’s filters. An example is shown in the following figure: the filters in the left image have converged in a meaningful manner; structures like edge or blob detectors are visible. In contrast, the filters in the right image have not converged and lack clear geometric interpretability.

convergence-of-the-networks-filters

An example of bad data quality is an unintended bias, and feature visualization can expose this kind of flaw in the data. The following figure (first row, left) shows the visualization of a CNN’s internal representation of a king crab, obtained by activity maximization. The network (VGG19) was trained on images of the ImageNet data set. As you can see, some human-like figures appear in the background. The reason is that most of the images from which the CNN learned to detect a king crab had a human in the background; an exemplary image is shown in the second row on the left. The network therefore also associated humans in the background with the respective class. In cases like this, feature visualization techniques help to expose biases in the data. Another example of this behavior is shown in the first row on the right, where we can see the network’s internal representation of the class dumbbell. Some arm-like structures appear, so the network seems to associate a strong arm with the class dumbbell, which indicates that the network has mostly seen images of dumbbells together with a strong arm holding them (example image in the second row on the right). On the one hand this is good, because it shows that the network is able to learn context information. On the other hand, it shows that there is a certain bias in the data, and we can use this knowledge to improve our data set and remove it.

feature-visualization-techniques-01
internal-representation-feature-visualization-techniques-01
feature-visualization-techniques-02
internal-representation-feature-visualization-techniques-02

Building Trust
Feature visualization can also help us build trust in our trained model. It helps us understand how a CNN makes its decision and whether the basis for that decision is plausible. One example is to look for the regions in an image which the CNN has used to classify it. We show this by visualizing the decision of a CNN for classifying a dog. As can be seen in the image below on the left, the network makes its decision by focusing on the head of the dog, which seems plausible for classifying it as a dog. Another possibility is to analyze the learned features.

If we regard the learned features (visualized by maximizing the internal representation of a CNN for the class tarantula), we can conclude that the main concept of the class spider (hairy legs, a body similar to a spider body, etc.) has been learned by the network. Of course, not all details of a spider are represented, such as the exact number of eight legs, but these features are good enough to recognize an image of a tarantula and classify it correctly. Moreover, the appearance of features across the whole image (instead of just in the middle of the image) indicates invariance learned by the network: it is able to detect the spider in different positions and orientations in the image, which makes its classification ability more robust.

Building-Trust
Building-Trust-

In many cases it is not that easy to interpret the results of feature visualization methods. The following picture shows the internal class representations of a neural network for a breast cancer classification task on histopathology images. In the group of four smaller images, the left image in the first row shows the representation of the class ‘benign’ and the right image represents the class ‘invasive carcinoma’; the left image in the second row represents the class ‘in situ carcinoma’ and the right image shows the representation of the ‘normal’ class, i.e. tissue which has not been affected. As can be seen, these images differ very much from example images (larger image on the right). At least we can see some differences between the classes, such as their color or the shape of the structures.

neural-network-for-a-breast-cancer-classification

We can try to interpret these features and verify whether they make sense. In the example above it makes sense that the representation of the invasive class is more bluish than, e.g., the internal representation of the class ‘normal tissue’: the invasive case shows cancer that has grown uncontrollably and is characterized by a higher cell density. This leads to a larger number of cells, whose nuclei are colored blue by the standard staining procedure in histopathology (Hematoxylin and Eosin staining). However, it remains difficult to extract deep knowledge from this kind of image.

In the following sections, I want to introduce how some of these feature visualization methods work.

Filter visualization of the first layer

Classical image processing approaches often use digital filters to process digital images with respect to some goal like noise reduction, edge detection or image smoothing. Many of these filters are convolution-based: a filter mask (kernel) is ‘pulled’ over the image and convolved with the respective image region.
The first layer in common Convolutional Neural Networks works in a similar way. The difference is that its kernels are learned directly by the network from data. The learned filters help to extract the information that is relevant for the network to fulfill its task. By interpreting the weights of a filter as an image (after normalizing them), we can visualize the task which a certain kernel fulfills. The following figure shows the visual representation of the kernels in the first layer of a convolutional neural network which was trained on images of the ImageNet challenge. It can be seen that some of the filters are activated by edges, some by blob-like structures and some respond to certain colors.

For a mathematical intuition of why this method works, see the end of the post.

Filter-visualization-of-the-first-layer
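As a minimal sketch of how such a figure can be produced, assuming a PyTorch/torchvision setup (the original post does not name a framework; model and layer choices are illustrative):

    # Sketch: show the first-layer convolution kernels of a pretrained network as small RGB images.
    import torch
    import torchvision
    import matplotlib.pyplot as plt

    model = torchvision.models.alexnet(pretrained=True)
    kernels = model.features[0].weight.detach().clone()      # shape: (64, 3, 11, 11)

    # Normalize each kernel independently to [0, 1] so it can be displayed as an image.
    k_min = kernels.amin(dim=(1, 2, 3), keepdim=True)
    k_max = kernels.amax(dim=(1, 2, 3), keepdim=True)
    kernels = (kernels - k_min) / (k_max - k_min + 1e-8)

    fig, axes = plt.subplots(8, 8, figsize=(6, 6))
    for kernel, ax in zip(kernels, axes.flat):
        ax.imshow(kernel.permute(1, 2, 0).numpy())            # (3, k, k) -> (k, k, 3)
        ax.axis("off")
    plt.show()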

Occlusion

Occlusion aims to detect the regions in an input image which form an important basis for a neural network’s classification decision. Given a trained network and its decision for a certain input image, we can determine how important a certain region is to the network (for making its classification decision) by covering the respective region with an occlusion mask (e.g. simply drawing it gray). We generate the occluded inputs by sliding the occluding mask over the whole image.

Occlusion-aims-to-detect-regions-in-an-input-image

After each occlusion step, we classify the image again and record how the classification score has changed compared to the original score. The smaller the predicted value for the chosen class is after covering a region, the more important this region is for the network’s classification of the image. We can normalize, store and reshape the classification scores in order to create a heat map. After that, the heat map is interpolated and resized to the original image shape. For our example, we obtain the heat map shown in the next figure.
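A minimal sketch of this procedure, assuming a PyTorch classifier that returns logits and a preprocessed input tensor (all names and parameter values are illustrative):

    # Sketch: occlusion sensitivity map for a single image and target class.
    import torch
    import torch.nn.functional as F

    def occlusion_heatmap(model, image, target_class, patch=32, stride=16, fill=0.5):
        """image: tensor of shape (1, 3, H, W); returns a (H, W) heat map in [0, 1]."""
        model.eval()
        _, _, H, W = image.shape
        with torch.no_grad():
            base = torch.softmax(model(image), dim=1)[0, target_class].item()
            heat = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
            for i, y in enumerate(range(0, H - patch + 1, stride)):
                for j, x in enumerate(range(0, W - patch + 1, stride)):
                    occluded = image.clone()
                    occluded[:, :, y:y + patch, x:x + patch] = fill       # gray patch
                    score = torch.softmax(model(occluded), dim=1)[0, target_class].item()
                    heat[i, j] = base - score            # large drop = important region
        heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
        # Interpolate the coarse map back to the input resolution.
        return F.interpolate(heat[None, None], size=(H, W), mode="bilinear",
                             align_corners=False)[0, 0]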

We can now mask our original image with the aid of the computed heat map by point-wise multiplication of the pixel values with the heat map values of the respective regions. After applying this operation, we get an image which shows the regions that are important for the neural network’s classification decision.
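Reusing the tensors from the sketch above, this masking is a single point-wise multiplication (again only illustrative):

    # Sketch: weight each pixel with the heat map to highlight important regions.
    heat = occlusion_heatmap(model, image, target_class)   # (H, W), values in [0, 1]
    masked = image * heat[None, None, :, :]                # broadcast over batch and channels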

occlusion-step-classify-the-image
Building-Trust

Neuron Activity Maximization

Activity maximization approaches aim to compute an input image which maximizes the classification score for a certain class or the activity of a certain neuron. It can be seen as asking the network to draw an image of what it has learned. Due to the complexity of deep neural networks, such an image cannot be computed directly; it has to be generated iteratively, which leads to an optimization procedure similar to the standard backpropagation algorithm for training neural networks.

One of the main principles of the classical backpropagation algorithm is the propagation of the partial derivatives of a specified error term through all layers of a deep neural network. This error term, or loss, indicates in which direction the network’s weights have to be changed in order to decrease the error for the given sample.

Neuron activity maximization exploits the same properties of error backpropagation, but instead of analyzing how the weights of the network have to be changed in order to reach better results, the input image is optimized while the filter weights are kept fixed. After initializing the input image randomly, we determine in each iteration step the direction in which the image has to be updated in order to achieve a higher classification score, and then we update the image a little bit in this direction. The next figure visualizes the iterative image update procedure. We plotted the images at different iteration steps during the maximization procedure (from left to right and top to bottom): step 1, step 7, step 15, step 60, step 90 and step 150.

Activity-Maximization
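A minimal sketch of this gradient ascent on the input, assuming a PyTorch model with frozen weights (names and hyperparameters are illustrative; the regularization discussed below is omitted here):

    # Sketch: maximize the score of one class by gradient ascent on a random input image.
    import torch

    def maximize_class(model, target_class, steps=150, lr=1.0, shape=(1, 3, 224, 224)):
        model.eval()
        for p in model.parameters():
            p.requires_grad_(False)                          # keep the filter weights fixed
        image = torch.randn(shape, requires_grad=True)       # random initialization
        optimizer = torch.optim.SGD([image], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            score = model(image)[0, target_class]
            (-score).backward()                              # ascend on the class score
            optimizer.step()
        return image.detach()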

A problem with the randomly initialized input image is that the error terms, and therefore the update directions, start out randomly. We can think of it as all pixel values of the generated image being treated independently of each other, which leads to a noisy, non-interpretable result. To get meaningful visual results like in the shown example, we have to enforce structures and relationships in the image. This can be achieved by applying regularization methods to the image.
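As an illustration, one possible way to add such regularization to the sketch above is a small L2 penalty on the image combined with an occasional Gaussian blur between update steps (only one of many strategies; names are again illustrative):

    # Sketch: one regularized update step; L2 decay keeps pixel values small,
    # an occasional blur enforces smooth, connected structures.
    import torch
    import torchvision.transforms.functional as TF

    def regularized_step(model, image, target_class, optimizer, step, blur_every=10):
        optimizer.zero_grad()
        score = model(image)[0, target_class]
        loss = -score + 1e-4 * image.norm()                  # L2 decay on the image
        loss.backward()
        optimizer.step()
        if step % blur_every == 0:                           # occasional Gaussian blur
            with torch.no_grad():
                image.copy_(TF.gaussian_blur(image, kernel_size=3))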

The results of the class activity maximization approach for the six classes ‘daisy’, ‘fire-salamander’, ‘gondola’, ‘Indian cobra’, ‘jellyfish’ and ‘monarch’ are shown in the next figure.

daisy
fire-salamander
gondola
Indian-cobra
jellyfish
monarch

Feature Inversion

The feature inversion approach is similar to the activity maximization approach. The difference is that it does not aim at maximizing the activity of a certain neuron or class, but at finding an image which produces the same neural activities (features) in a certain layer as a given test image. We again try to find this image by iteratively updating a randomly initialized input image in the direction of an image which produces features more similar to those of the test image. We thereby try to reconstruct the test image from the features in an intermediate layer, which shows us which information is stored in the respective layer and which information has been lost due to learned invariances.
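A minimal sketch of this reconstruction, assuming a PyTorch model whose activations up to a chosen layer are exposed by a feature function (names are illustrative; a simple total-variation penalty stands in for the regularization strategies discussed below):

    # Sketch: reconstruct an image from the activations of an intermediate layer.
    import torch

    def total_variation(x):
        # Encourages neighboring pixels to have similar values (piecewise smoothness).
        return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
               (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

    def invert_features(feature_fn, test_image, steps=200, lr=0.05, tv_weight=1e-2):
        target = feature_fn(test_image).detach()              # features of the test image
        recon = torch.randn_like(test_image, requires_grad=True)
        optimizer = torch.optim.Adam([recon], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            loss = (feature_fn(recon) - target).pow(2).mean() \
                   + tv_weight * total_variation(recon)
            loss.backward()
            optimizer.step()
        return recon.detach()

    # Example: activations up to layer 16 of a (hypothetical) VGG-style model
    # feature_fn = lambda x: model.features[:16](x)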

In order to get meaningful results, the same principle as in the activity maximization approach holds: we have to apply regularization strategies. In the following figure, a set of original images and the corresponding images reconstructed by feature inversion with two different kinds of regularization strategies are shown. It can be seen that the first regularization strategy produces finer structures in the image, whereas the second strategy is better able to reconstruct the original colors.

Feature-Inversion

Mathematical intuition for the first-layer visualization

vector-picture

A mathematical intuition for why the weights of the filters can be interpreted as an image (or at least as small image patches) comes from a property of the inner product. Let us consider two vectors in two-dimensional Euclidean space. The inner product between these vectors can be defined in a geometric way, as shown in the picture on the right.
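For two vectors w and x enclosing the angle θ, this geometric definition is the standard inner-product identity in Euclidean space:

    \langle w, x \rangle = \|w\| \, \|x\| \, \cos(\theta)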

Assuming the lengths of the vectors to be norm-constrained, the inner product is maximized when the angle θ = 0 and therefore cos(θ) = 1. In a geometric interpretation, this means that the inner product is maximized when the two vectors are parallel.

By interpreting a filter as a vector, we can also try to find a (norm-constrained) vectorized input signal which maximizes the filter response, i.e. which maximizes the inner product between the two vectors. Such an input vector is parallel to our filter vector and can therefore be found by simply normalizing the filter vector.
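In symbols, for a filter vector w, the norm-constrained input that maximizes the response is simply the normalized filter itself:

    x^* = \arg\max_{\|x\| = 1} \langle w, x \rangle = \frac{w}{\|w\|}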
