Blogs

A Mechanism Design Alternative to Individual Calibration

We consider how a prediction service can offer “insurance” against misprediction, so that its customers can act on each prediction as if it were perfectly correct. The insurance can be implemented at provably no cost to the prediction service in the long run.
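A toy simulation can illustrate the general idea (this is only a sketch, not the actual mechanism from the post: the binary-decision setup, the uniform forecasts, and the premium rule are all assumptions made for illustration). The customer pays the forecast-implied expected loss as a premium and is reimbursed for the loss actually incurred; when the forecasts are calibrated, premiums and payouts balance out on average.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: the service forecasts the probability p that a binary
# outcome is 1, and the customer takes the 0/1 action that would be optimal
# if p were exactly correct (loss = 1 for a wrong action, 0 otherwise).
p = rng.uniform(size=n)                # forecasts
y = rng.binomial(1, p)                 # outcomes; forecasts are calibrated here
action = (p > 0.5).astype(int)         # customer trusts the forecast
actual_loss = (action != y).astype(float)

# "Insurance": the premium equals the expected loss implied by the forecast,
# and the service reimburses whatever loss actually occurs.
premium = np.minimum(p, 1 - p)
payout = actual_loss

print(f"avg premium : {premium.mean():.4f}")
print(f"avg payout  : {payout.mean():.4f}")
print(f"service P&L : {(premium - payout).mean():+.5f}  (approaches 0 in the long run)")
```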

Bias and Generalization in Deep Generative Models

We propose a framework to systematically investigate bias and generalization in deep generative models of images. Inspired by experimental methods from cognitive psychology, we probe each learning algorithm with carefully designed training datasets to characterize when and how existing models generate novel attributes and their combinations.
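A minimal sketch of what such a probe might look like (the shape/color attributes, the held-out combination, and the placeholder sampler are assumptions made purely for illustration; the actual datasets and models are described in the post). The idea is to withhold one attribute combination from training and then measure how often samples exhibit the unseen combination.

```python
from itertools import product
from collections import Counter
import random

random.seed(0)

# Hypothetical probe: train only on images whose (shape, color) combinations
# exclude one held-out pair, then measure how often generated samples show
# the combination that was never seen during training.
shapes = ["circle", "square", "triangle"]
colors = ["red", "green", "blue"]
held_out = ("triangle", "blue")
train_combos = [c for c in product(shapes, colors) if c != held_out]

def sample_from_model(n):
    """Placeholder for drawing labelled samples from a trained generative
    model; it resamples training combinations with a small chance of the
    novel one, purely to make the sketch runnable."""
    samples = []
    for _ in range(n):
        if random.random() < 0.05:          # pretend the model occasionally generalizes
            samples.append(held_out)
        else:
            samples.append(random.choice(train_combos))
    return samples

counts = Counter(sample_from_model(10_000))
print(f"fraction of samples with the unseen combination: {counts[held_out] / 10_000:.3f}")
```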