How to tune hyperparameters for your machine learning model using Bayesian optimization.
By focusing on linear dimensionality reduction, we show how to visualize many dynamic phenomena in neural networks.
What can we learn if we invest heavily in reverse engineering a single neural network?
Training an end-to-end differentiable, self-organising cellular automata model of morphogenesis, able to both grow and regenerate specific patterns.
Exploring the baseline input hyperparameter, and how it impacts interpretations of neural network behavior.
Detailed derivations and open-source code to analyze the receptive fields of convnets.
A closer look at how Temporal Difference Learning merges paths of experience for greater statistical efficiency.
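As a minimal illustration of the path-merging idea mentioned above, here is a TD(0) sketch on a toy three-state chain (states 0 → 1 → 2, with reward 1.0 on reaching the terminal state 2). The chain, reward, and learning rate are illustrative assumptions, not taken from the article itself.

```python
def td0(episodes=500, alpha=0.1, gamma=1.0):
    V = [0.0, 0.0, 0.0]              # value estimate for each state
    for _ in range(episodes):
        s = 0
        while s != 2:                # walk forward to the terminal state
            s_next = s + 1
            r = 1.0 if s_next == 2 else 0.0
            # TD(0) bootstraps from the next state's current estimate;
            # this reuse of V[s_next] is how experience from different
            # paths gets merged into a shared value estimate.
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V
```

On this chain, both state values converge toward 1.0; the key point is that each state's update leans on its successor's estimate rather than waiting for a full return.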
Six comments from the community and responses from the original authors.
What we’d like to find out about GANs that we don’t know yet.
How to turn a collection of small building blocks into a versatile tool for solving regression problems.
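To make the regression idea above concrete, here is a tiny Gaussian-process posterior computed by hand with an RBF kernel and only two training points, so the 2×2 kernel matrix can be inverted without a linear-algebra library. The kernel width and noise level are illustrative assumptions.

```python
import math

def rbf(a, b, length=1.0):
    # Squared-exponential (RBF) kernel between two scalar inputs.
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def gp_predict(x_star, xs, ys, noise=1e-6):
    # Kernel matrix K with a small noise term on the diagonal.
    k11 = rbf(xs[0], xs[0]) + noise
    k12 = rbf(xs[0], xs[1])
    k22 = rbf(xs[1], xs[1]) + noise
    det = k11 * k22 - k12 * k12
    inv = [[k22 / det, -k12 / det],      # closed-form 2x2 inverse
           [-k12 / det, k11 / det]]
    k_star = [rbf(x_star, xs[0]), rbf(x_star, xs[1])]
    # Posterior mean: k_*^T K^{-1} y
    alpha = [inv[0][0] * ys[0] + inv[0][1] * ys[1],
             inv[1][0] * ys[0] + inv[1][1] * ys[1]]
    mean = k_star[0] * alpha[0] + k_star[1] * alpha[1]
    # Posterior variance: k(x*, x*) - k_*^T K^{-1} k_*
    var = rbf(x_star, x_star) - (
        k_star[0] * (inv[0][0] * k_star[0] + inv[0][1] * k_star[1])
        + k_star[1] * (inv[1][0] * k_star[0] + inv[1][1] * k_star[1]))
    return mean, var
```

Predicting at a training point recovers its observed value with near-zero variance, while uncertainty grows away from the data, which is exactly the behavior the building blocks compose into.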
Inspecting gradient magnitudes in context can be a powerful tool to see when recurrent units use short-term or long-term contextual understanding.
By using feature inversion to visualize millions of activations from an image classification network, we create an explorable activation atlas of features the network has learned and what concepts it typically represents.
If we want to train AI to do what humans want, we need to study humans.
A powerful, under-explored tool for neural network visualizations and art.
Interpretability techniques are normally studied in isolation. We explore the powerful interfaces that arise when you combine them — and the rich structure of this combinatorial space.
By creating user interfaces which let us work with the representations inside machine learning models, we can give people new tools for reasoning.
A visual guide to Connectionist Temporal Classification, an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems.
We often think of optimization with momentum as a ball rolling down a hill. This isn’t wrong, but there is much more to the story.
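A minimal sketch of the rolling-ball picture above: gradient descent with momentum on the 1-D quadratic f(w) = w²/2. The step size and momentum coefficient are illustrative assumptions; with these values the iterate overshoots the minimum and oscillates, one of the behaviors the simple picture leaves out.

```python
def momentum_descent(w0=10.0, lr=0.1, beta=0.9, steps=200):
    w, v = w0, 0.0
    trajectory = [w]
    for _ in range(steps):
        grad = w                  # f'(w) = w for f(w) = w^2 / 2
        v = beta * v + grad       # velocity accumulates past gradients
        w = w - lr * v
        trajectory.append(w)
    return trajectory
```

Plotting the trajectory shows damped oscillation around the minimum rather than a monotone slide downhill.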
Science is a human activity. When we fail to distill and explain research, we accumulate a kind of debt...
Several interactive visualizations of a generative model of handwriting. Some are fun, some are serious.
When we look very closely at images generated by neural networks, we often see a strange checkerboard pattern of artifacts.
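A 1-D sketch of one common cause of that pattern: a transposed convolution whose kernel size is not divisible by its stride. With stride 2 and kernel size 3 (sizes chosen here for illustration), interior output positions alternate between receiving one and two input contributions, a period-2 unevenness.

```python
def overlap_counts(n_in=8, kernel=3, stride=2):
    # Count how many input pixels contribute to each output position
    # of a 1-D transposed convolution (no padding).
    n_out = (n_in - 1) * stride + kernel
    counts = [0] * n_out
    for i in range(n_in):              # each input pixel...
        for k in range(kernel):        # ...smears over `kernel` outputs
            counts[i * stride + k] += 1
    return counts

counts = overlap_counts()
# Interior positions alternate 1, 2, 1, 2, ... — the 1-D analogue
# of a checkerboard.
```

In 2-D the same mismatch happens along both axes, producing the familiar checkerboard of bright and dim pixels.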
Although extremely useful for visualizing high-dimensional data, t-SNE plots can sometimes be mysterious or misleading.