[ ] We consider how a prediction service can offer "insurance" against misprediction, so that its customers can treat each prediction as if it were perfectly correct. This insurance can be provided at provably no long-run cost to the prediction service.
[ ] We propose a framework to systematically investigate bias and generalization in deep generative models of images. Inspired by experimental methods from cognitive psychology, we probe each learning algorithm with carefully designed training datasets to characterize when and how existing models generate novel attributes and their combinations.
[ ] This tutorial discusses the MMD variational autoencoder (MMD-VAE), a member of the InfoVAE family. The MMD-VAE is an alternative to the traditional variational autoencoder that is fast to train, stable, and easy to implement.
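The MMD-VAE replaces the KL divergence term of a standard VAE with the maximum mean discrepancy (MMD) between encoded latents and the prior. A minimal sketch of the (biased) MMD estimate with an RBF kernel, assuming NumPy and an illustrative `sigma` bandwidth chosen here for demonstration:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd(x, y, sigma=1.0):
    # Biased estimate of squared MMD: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
# Samples from the same distribution yield a small MMD;
# samples from well-separated distributions yield a larger one.
same = mmd(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd(rng.normal(size=(200, 2)), rng.normal(loc=3.0, size=(200, 2)))
```

In an MMD-VAE, `x` would be a batch of encoder outputs and `y` a batch of prior samples, with this quantity added to the reconstruction loss.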