Interpretability techniques are normally studied in isolation. We explore the powerful interfaces that arise when you combine them — and the rich structure of this combinatorial space.
By creating user interfaces that let us work with the representations inside machine learning models, we can give people new tools for reasoning.
A visual guide to Connectionist Temporal Classification, an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems.
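At the heart of CTC is a many-to-one collapsing rule that maps a per-frame label sequence to an output string: merge repeated labels, then drop the blank symbol. A minimal sketch, using `-` to stand in for the blank (an assumption of this sketch; in practice the blank is just a reserved index in the label alphabet):

```python
# Sketch of CTC's collapsing rule: merge consecutive repeats, drop blanks.
# "-" denotes the blank symbol here (illustrative choice, not part of CTC).

def ctc_collapse(frames, blank="-"):
    """Collapse a frame-level labeling into its output sequence."""
    out = []
    prev = None
    for label in frames:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return "".join(out)

print(ctc_collapse("hheel-lloo"))  # "hello": repeats merged, blank separates the l's
print(ctc_collapse("aa-a"))       # "aa": the blank keeps the two a's distinct
```

Note how the blank lets CTC emit the same character twice in a row, which is why many frame labelings map to the same output, and why training sums over all of them.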
How neural networks build up their understanding of images
We often think of optimization with momentum as a ball rolling down a hill. This isn’t wrong, but there is much more to the story.
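The "ball rolling down a hill" picture corresponds to a simple two-line update. A minimal sketch, assuming the classic formulation `v ← βv − α∇f(x)`, `x ← x + v`, on a toy quadratic chosen for illustration:

```python
# Sketch of gradient descent with momentum on a toy objective f(x) = x**2.
# The velocity v accumulates past gradients, like a ball gathering speed.

def minimize_with_momentum(grad, x0, alpha=0.1, beta=0.9, steps=500):
    """Run momentum gradient descent from x0; return the final iterate."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v - alpha * grad(x)  # velocity: decayed memory of gradients
        x = x + v                       # the "ball" moves by its velocity
    return x

# Gradient of f(x) = x**2 is 2x; the minimum is at x = 0.
x_final = minimize_with_momentum(lambda x: 2 * x, x0=5.0)
print(abs(x_final) < 1e-3)  # True: the iterate has converged near 0
```

The decay factor β controls the ball's "mass": larger β remembers more of the past, which speeds progress along shallow directions but can cause oscillation, which is exactly the richer story the article tells.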
Science is a human activity. When we fail to distill and explain research, we accumulate a kind of debt...
Several interactive visualizations of a generative model of handwriting. Some are fun, some are serious.
When we look very closely at images generated by neural networks, we often see a strange checkerboard pattern of artifacts.
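The artifacts come from uneven overlap in transposed convolutions: when the kernel size is not divisible by the stride, some output pixels receive contributions from more kernel taps than others. A minimal 1-D sketch (the function name and setup are illustrative, not from the article):

```python
# Sketch of uneven overlap in a 1-D transposed convolution: count how many
# kernel taps write to each output position. Uneven counts produce a periodic
# over-/under-painting pattern -- the checkerboard artifact in 2-D.

def overlap_counts(n_in, kernel_size, stride):
    """Per-output-pixel count of contributing kernel taps."""
    n_out = (n_in - 1) * stride + kernel_size
    counts = [0] * n_out
    for i in range(n_in):              # each input pixel paints a kernel-sized patch
        for k in range(kernel_size):
            counts[i * stride + k] += 1
    return counts

print(overlap_counts(5, kernel_size=3, stride=2))  # [1, 1, 2, 1, 2, 1, 2, 1, 2, 1, 1]
print(overlap_counts(5, kernel_size=4, stride=2))  # interior is a constant 2
```

With kernel size 3 and stride 2, interior pixels alternate between one and two contributions, a period-2 pattern; choosing a kernel size divisible by the stride (or resizing before a plain convolution) evens out the coverage.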
Although extremely useful for visualizing high-dimensional data, t-SNE plots can sometimes be mysterious or misleading.
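Much of that mystery traces back to one knob: perplexity. For each point, t-SNE converts distances to neighbors into a Gaussian distribution and tunes the bandwidth so that the distribution's perplexity, 2 to the power of its entropy, matches a user-set value, roughly the effective number of neighbors. A minimal sketch of that quantity (the helper name and toy distances are illustrative):

```python
import math

# Sketch of perplexity: turn neighbor distances into a Gaussian distribution
# with bandwidth sigma; perplexity = 2**entropy ~ effective neighbor count.

def perplexity(dists, sigma):
    """Effective number of neighbors induced by bandwidth sigma."""
    weights = [math.exp(-d * d / (2 * sigma * sigma)) for d in dists]
    total = sum(weights)
    probs = [w / total for w in weights]
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return 2 ** entropy

dists = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]  # three near neighbors, three far ones
print(perplexity(dists, sigma=1.0))   # close to 3: mass concentrates on near neighbors
print(perplexity(dists, sigma=10.0))  # close to 6: mass spreads over all six
```

A perplexity that is too small or too large relative to the data's cluster sizes is a common source of the misleading plots the article dissects.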
A visual overview of neural attention, and the powerful extensions of neural networks being built on top of it.
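The core mechanism is small: score each key against a query, softmax the scores into weights, and return the weighted sum of the values. A minimal dot-product sketch in plain Python (the function name and toy vectors are illustrative):

```python
import math

# Sketch of dot-product attention: softmax over query-key scores,
# then a weighted sum of the value vectors.

def attend(query, keys, values):
    """Return the attention-weighted combination of `values`."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                            # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

# One key matches the query far better than the other, so the output
# lands close to that key's value.
out = attend([1.0, 0.0], keys=[[10.0, 0.0], [0.0, 10.0]], values=[[1.0], [0.0]])
print(out[0] > 0.99)  # True
```

Because the weights are differentiable, the whole lookup can be trained end to end, which is what the extensions built on attention exploit.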