About Me

I am a fifth-year PhD student in Computer Science at Stanford University, advised by Stefano Ermon. I study trustworthy machine learning and uncertainty quantification. My central research question is how a forecaster can guarantee or certify the quality of its predictions, so that users can confidently rely on them for high-stakes decision making. I approach this problem from a variety of angles, including online learning, game theory, conformal prediction, finance and insurance, generative models, and causal inference. I am also broadly interested in other applications of uncertainty quantification, such as Bayesian optimization and high-dimensional probabilistic modeling.

Research Interests

(Ranked by the amount of time I spend thinking about each topic, as of Apr 2021)

  • Uncertainty quantification, risk, and calibration
  • Deep generative models and high-dimensional distribution modeling
  • Active learning and Bayesian optimization
  • Variational definitions of information and machine learning quantities

Contact: sjzhao at stanford dot edu

Publications by Topic

Uncertainty Quantification and Trustworthy Machine Learning

  • Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration
    Shengjia Zhao, Michael P. Kim, Roshni Sahoo, Tengyu Ma, Stefano Ermon [arXiv] (New work in submission)

  • Right Decisions from Wrong Predictions: A Mechanism Design Alternative to Individual Calibration
    Shengjia Zhao, Stefano Ermon [arXiv] (AISTATS’2021 Oral 3.1%)

  • Individual Calibration with Randomized Forecasting
    Shengjia Zhao, Tengyu Ma, Stefano Ermon [arXiv] (ICML’2020)

  • A Framework for Sample Efficient Interval Estimation with Control Variates
    Shengjia Zhao, Christopher Yeh, Stefano Ermon [arXiv] (AISTATS’2020)

Information Theory and Decision Theory

  • H-divergence: A Decision-Theoretic Probability Discrepancy Measure
    Shengjia Zhao*, Abhishek Sinha*, Yutong He*, Aidan Perreault, Jiaming Song, Stefano Ermon (New work in submission)

  • A Theory of Usable Information under Computational Constraints
    Yilun Xu, Shengjia Zhao, Jiaming Song, Russell Stewart, Stefano Ermon [arXiv] (ICLR’2020 Oral 1.9%)

Generative Models

  • Improved Autoregressive Modeling with Distribution Smoothing
    Chenlin Meng, Jiaming Song, Yang Song, Shengjia Zhao, Stefano Ermon [openreview] (ICLR’2021 Oral 1.8%)

  • Permutation Invariant Graph Generation via Score-Based Generative Modeling
    Chenhao Niu, Yang Song, Jiaming Song, Shengjia Zhao, Aditya Grover, Stefano Ermon [arXiv] (AISTATS’2020)

  • InfoVAE: Balancing Learning and Inference in Variational Autoencoders
    Shengjia Zhao, Jiaming Song, Stefano Ermon [arXiv] (AAAI’2019)

  • Bias and Generalization in Deep Generative Models: An Empirical Study
    Shengjia Zhao*, Hongyu Ren*, Arianna Yuan, Jiaming Song, Noah Goodman, Stefano Ermon [arXiv] (NeurIPS’2018 Spotlight 3%)

  • A Lagrangian Perspective on Latent Variable Generative Models
    Shengjia Zhao, Jiaming Song, Stefano Ermon [arXiv] (UAI’2018 Oral 8.6%)

  • Learning Hierarchical Features from Generative Models
    Shengjia Zhao, Jiaming Song, Stefano Ermon [arXiv] (ICML’2017)

Improving Classical Algorithms with Learning

  • Adaptive Antithetic Sampling for Variance Reduction
    Hongyu Ren*, Shengjia Zhao*, Stefano Ermon [paper] (ICML’2019)

  • Learning Neural PDE Solvers with Convergence Guarantees
    Jun-Ting Hsieh*, Shengjia Zhao*, Lucia Mirabella, Stefano Ermon [arXiv] (ICLR’2019)

  • A-NICE-MC: Adversarial Training for MCMC
    Jiaming Song, Shengjia Zhao, Stefano Ermon [arXiv] [code] (NeurIPS’2017)

Miscellaneous Topics

  • Privacy Preserving Recalibration under Domain Shift
    Rachel Luo, Shengjia Zhao, Jiaming Song, Jonathan Kuck, Stefano Ermon, Silvio Savarese [arXiv]

  • Cross-Domain Imitation Learning
    Kun Ho Kim, Yihong Gu, Jiaming Song, Shengjia Zhao, Stefano Ermon [arXiv] (ICML’2020)

  • Adaptive Hashing for Model Counting
    Jonathan Kuck, Tri Dao, Shengjia Zhao, Burak Bartan, Ashish Sabharwal, Stefano Ermon [paper] (UAI’2020)

  • Learning Controllable Fair Representations
    Jiaming Song, Pratyusha Kalluri, Aditya Grover, Shengjia Zhao, Stefano Ermon [paper] (AISTATS’2019)

  • Amortized Inference Regularization
    Rui Shu, Hung H. Bui, Shengjia Zhao, Stefano Ermon [arXiv] (NeurIPS’2018)

  • Adaptive Concentration Inequalities for Sequential Decision Problems
    Shengjia Zhao, Enze Zhou, Ashish Sabharwal, Stefano Ermon [pdf] (NeurIPS’2016)

  • Closing the Gap Between Short and Long XORs for Model Counting
    Shengjia Zhao, Sorathan Chaturapruek, Ashish Sabharwal, Stefano Ermon [arXiv] (AAAI’2016)

Awards and Fellowships

  • JP Morgan PhD Fellowship (2019)
  • Qualcomm Innovation Fellowship (QInF) (2018)
  • Qualcomm Scholarship (2016)
  • Google Excellence Scholarship (2015)

Teaching and Service

  • Reviewer: NeurIPS (2017, 2019, 2020, 2021), ICLR (2019, 2020, 2021), ICML (2019, 2020, 2021)
  • Organizer: Information Theory and Machine Learning (ITML) Workshop (NeurIPS’2019)
  • Teaching: Head TA for CS228 (2019, 2021)