Computing Receptive Fields of Convolutional Neural Networks

Detailed derivations and open-source code to analyze the receptive fields of convnets.

The Paths Perspective on Value Learning

A closer look at how Temporal Difference Learning merges paths of experience for greater statistical efficiency.

A Discussion of ‘Adversarial Examples Are Not Bugs, They Are Features’

Six comments from the community and responses from the original authors.

Open Questions about Generative Adversarial Networks

What we’d like to find out about GANs that we don’t know yet.

A Visual Exploration of Gaussian Processes

How to turn a collection of small building blocks into a versatile tool for solving regression problems.

Visualizing memorization in RNNs

Inspecting gradient magnitudes in context can be a powerful tool to see when recurrent units use short-term or long-term contextual understanding.

Activation Atlas

By using feature inversion to visualize millions of activations from an image classification network, we create an explorable activation atlas of the features the network has learned, showing which concepts it typically represents.

AI Safety Needs Social Scientists

If we want to train AI to do what humans want, we need to study humans.

Distill Update 2018

An Update from the Editorial Team

Differentiable Image Parameterizations

A powerful, under-explored tool for neural network visualizations and art.

Feature-wise transformations

A simple and surprisingly effective family of conditioning mechanisms.

The Building Blocks of Interpretability

Interpretability techniques are normally studied in isolation. We explore the powerful interfaces that arise when you combine them — and the rich structure of this combinatorial space.

Using Artificial Intelligence to Augment Human Intelligence

By creating user interfaces which let us work with the representations inside machine learning models, we can give people new tools for reasoning.

Sequence Modeling with CTC

A visual guide to Connectionist Temporal Classification, an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems.

Feature Visualization

How neural networks build up their understanding of images.

Why Momentum Really Works

We often think of optimization with momentum as a ball rolling down a hill. This isn’t wrong, but there is much more to the story.

Research Debt

Science is a human activity. When we fail to distill and explain research, we accumulate a kind of debt...

Experiments in Handwriting with a Neural Network

Several interactive visualizations of a generative model of handwriting. Some are fun, some are serious.

Deconvolution and Checkerboard Artifacts

When we look very closely at images generated by neural networks, we often see a strange checkerboard pattern of artifacts.

How to Use t-SNE Effectively

Although extremely useful for visualizing high-dimensional data, t-SNE plots can sometimes be mysterious or misleading.

Attention and Augmented Recurrent Neural Networks

A visual overview of neural attention, and the powerful extensions of neural networks being built on top of it.

Distill is dedicated to clear explanations of machine learning.