Nov 23, 2021

Talk by Victor Veitch

Foundations of Data Science Seminar Series

November 23, 2021

3:30 PM - 4:30 PM

Location

SEO 1000

Address

Science and Engineering Offices, 851 S Morgan St., Chicago, IL 60607

Title: Counterfactual Invariance to Spurious Correlations: Why and How to Pass Stress Tests
Speaker: Victor Veitch, University of Chicago
Abstract: Informally, a "spurious correlation" is the dependence of a model on some aspect of the input data that an analyst thinks shouldn't matter. In machine learning, these have a know-it-when-you-see-it character; e.g., changing the gender of a sentence's subject changes a sentiment predictor's output. To check for spurious correlations, we can "stress test" models by perturbing irrelevant parts of the input data and seeing whether model predictions change. In this paper, we study stress testing using the tools of causal inference. We introduce counterfactual invariance as a formalization of the requirement that changing irrelevant parts of the input shouldn't change model predictions. We connect counterfactual invariance to out-of-domain model performance, and we provide practical schemes for learning (approximately) counterfactually invariant predictors without access to counterfactual examples. It turns out that both the means and implications of counterfactual invariance depend fundamentally on the true underlying causal structure of the data, in particular on whether the label causes the features or the features cause the label. Distinct causal structures require distinct regularization schemes to induce counterfactual invariance. Similarly, counterfactual invariance implies different domain shift guarantees depending on the underlying causal structure. This theory is supported by empirical results on text classification.
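As a concrete illustration of the stress test described in the abstract, the sketch below perturbs an "irrelevant" part of the input (the gendered pronouns in a sentence) and checks whether a sentiment model's prediction changes. This is only a minimal sketch of the idea, not the paper's method: the model object, its predict method, and the pronoun-swap table are hypothetical placeholders for whatever classifier and perturbation are being audited.

# Minimal sketch of a counterfactual stress test for a text classifier.
# Assumes `model.predict(text)` is a hypothetical API returning a scalar score.

GENDER_SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
                "his": "her", "hers": "his"}

def swap_gender(sentence: str) -> str:
    """Return a counterfactual sentence with gendered pronouns swapped.

    Crude by design: capitalization and possessive/object ambiguity
    (e.g. "her") are not handled in this sketch.
    """
    return " ".join(GENDER_SWAPS.get(w.lower(), w) for w in sentence.split())

def stress_test(model, sentences, tol=1e-6):
    """Flag sentences whose prediction changes under the perturbation."""
    failures = []
    for s in sentences:
        original = model.predict(s)
        counterfactual = model.predict(swap_gender(s))
        if abs(original - counterfactual) > tol:
            failures.append((s, original, counterfactual))
    return failures

# Example usage, assuming sentiment_model returns a score in [0, 1]:
# stress_test(sentiment_model, ["He loved the movie.", "She wrote the report."])

A counterfactually invariant predictor, in the sense of the abstract, is one for which this kind of test never flags a failure: changing the irrelevant attribute leaves the prediction unchanged.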

Contact

Elena Zheleva

Date posted

Nov 9, 2021

Date updated

Nov 9, 2021

Speakers

Victor Veitch | Assistant Professor | Data Science and Statistics, University of Chicago

Victor Veitch is an assistant professor of Data Science and Statistics at the University of Chicago and a research scientist at Google Brain. His main recent research interests lie at the intersection of machine learning and causal inference, and in the design and evaluation of trustworthy AI systems. He has also dabbled in models for network data, the foundations of learning, and quantum computing. Previously, he completed a PhD at the University of Toronto and was a Distinguished Postdoctoral Researcher at Columbia University. He is an unusually poor juggler.