
Please visit IDEAL

In 2022, we won Phase II NSF funding and became part of the Chicago-wide IDEAL institute. This site is therefore no longer maintained.

Talk by Sara Magliacane

Foundations of Data Science Seminar Series

November 24, 2020

3:30 PM - 4:30 PM

Location

online

Address

Chicago, IL 60607

Title: Unsupervised domain adaptation by inferring untestable conditional independences through causal inference
Speaker: Sara Magliacane, University of Amsterdam
Abstract: An important goal common to domain adaptation and causal inference is to make accurate predictions when the distributions for the source (or training) domain(s) and target (or test) domain(s) differ. In many cases, these different distributions can be modeled as different contexts of a single underlying system, in which each distribution corresponds to a different perturbation of the system, or in causal terms, an intervention. We focus on a class of such causal domain adaptation problems, where features and labels for one or more source domains are given, and the task is to predict the labels in a target domain with a possibly very different distribution. In particular, we consider the case in which there are no labels in the target domain (unsupervised domain adaptation) and the underlying causal graph, the intervention types, and the intervention targets are unknown.

In this setting, a stable predictor would use a subset of features for which the conditional distribution of the label is invariant in the source and target domains, which can be expressed as a conditional independence. On the other hand, since there are no labels in the target domain, this conditional independence is untestable from the data. We propose an approach based on a theorem prover that can infer certain untestable conditional independences from other testable ones using ideas from causal inference, but without recovering the causal graph. Under mild assumptions, this allows us to find a subset of features that are provably stable under arbitrarily large distribution shifts. We demonstrate our approach by evaluating a possible implementation on simulated and real-world data.
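To give a flavor of the invariance idea in the abstract (this is an illustrative toy sketch, not the authors' theorem-prover approach; all variable names and the regression-based invariance check are assumptions), one can screen feature subsets across source domains and keep only those whose regression of the label on the subset looks identical in every domain. A causal parent of the label passes; a feature that is an effect of the label, whose mechanism is intervened on per domain, fails:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def make_domain(shift, n=2000):
    # Toy linear-Gaussian system (an assumption for illustration):
    # x1 is a causal parent of y (stable mechanism);
    # x2 is an effect of y whose intercept is intervened on per domain.
    x1 = rng.normal(0.0, 1.0, n)
    y = 2.0 * x1 + rng.normal(0.0, 0.5, n)
    x2 = y + shift + rng.normal(0.0, 0.5, n)   # unstable mechanism
    return np.column_stack([x1, x2]), y

# Three "source domains" = three interventions on x2's mechanism.
domains = [make_domain(s) for s in (0.0, 3.0, -2.0)]

def fit(X, y):
    # Least-squares fit of y on X with an intercept column.
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def invariant(subset, tol=0.15):
    # Proxy for invariance of p(y | x_subset): the per-domain
    # regression coefficients must agree across all source domains.
    fits = [fit(X[:, subset], y) for X, y in domains]
    return all(np.allclose(fits[0], f, atol=tol) for f in fits[1:])

stable = [s for r in (1, 2)
          for s in itertools.combinations([0, 1], r)
          if invariant(list(s))]
print(stable)  # → [(0,)]: only the parent x1 gives a stable predictor
```

Note that any subset containing x2 fails: its fitted intercept absorbs the per-domain shift, so the conditional distribution of y given those features is not invariant. The paper's contribution is inferring this kind of invariance for the *unlabeled target* domain, where it cannot be tested directly as above.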

Contact

Elena Zheleva

Date posted

Nov 10, 2020

Date updated

Nov 10, 2020

Speakers

Sara Magliacane | Assistant Professor and Researcher | University of Amsterdam and MIT-IBM Watson AI Lab

Sara Magliacane is an assistant professor at the University of Amsterdam and a researcher at the MIT-IBM Watson AI Lab. She received her PhD at the VU Amsterdam on logics for causal inference under uncertainty, and then joined IBM Research in Yorktown Heights as a postdoc. Her current research focuses on several aspects of causal inference and symbolic approaches, ranging from causal structure learning across multiple datasets to active learning of causal graphs and applications of causal inference ideas to transfer learning.