A REMEDY FOR DISTRIBUTIONAL SHIFTS THROUGH EXPECTED DOMAIN TRANSLATION
Jean-Christophe Gagnon-Audet, Irina Rish, Soroosh Shahtalebi, Frank Rudzicz
Machine learning models often fail to generalize to unseen domains due to distributional shifts. One family of such shifts, "correlation shifts," arises from spurious correlations in the data and is studied under the overarching topic of "domain generalization." In this work, we employ multi-modal translation networks to tackle the correlation shifts that appear when data is sampled out-of-distribution. Learning a generative model from the training domains enables us to translate each training sample into the characteristic style of the other possible domains. We show that by training a predictor solely on the generated samples, the spurious correlations in the training domains average out, and the invariant features corresponding to the true correlations emerge. Our proposed technique, Expected Domain Translation (EDT), is benchmarked on the Colored MNIST dataset and improves state-of-the-art classification accuracy by 38% under train-domain validation model selection.
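The training recipe described above can be sketched in a few lines. The following PyTorch snippet is a minimal, hypothetical reconstruction of the idea, not the authors' implementation: `ToyTranslator` stands in for a pretrained multi-modal translation network, and `edt_step` trains the classifier only on samples translated into each training domain, so that the per-domain spurious features average out in the loss.

```python
# Illustrative sketch of the Expected Domain Translation (EDT) idea.
# Assumptions (not from the paper): a frozen translator conditioned on a
# domain index, a linear classifier, and random data standing in for
# flattened Colored MNIST images.
import torch
import torch.nn as nn

class ToyTranslator(nn.Module):
    """Stand-in for a multi-modal translation network (hypothetical).
    Applies a learned per-domain shift to mimic domain-specific style."""
    def __init__(self, dim, n_domains):
        super().__init__()
        self.shift = nn.Embedding(n_domains, dim)

    def forward(self, x, d):
        idx = torch.full((x.size(0),), d, dtype=torch.long)
        return x + self.shift(idx)

def edt_step(classifier, translator, optimizer, x, y, n_domains):
    """One update: average the classification loss over translations of the
    batch into every training domain (the 'expectation' in EDT)."""
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    loss = sum(criterion(classifier(translator(x, d)), y)
               for d in range(n_domains)) / n_domains
    loss.backward()
    optimizer.step()  # only the classifier is updated; the translator is fixed
    return loss.item()

# Toy usage with two training domains, as in Colored MNIST.
dim, n_domains, n_classes = 3 * 28 * 28, 2, 2
classifier = nn.Linear(dim, n_classes)
translator = ToyTranslator(dim, n_domains)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
x, y = torch.randn(32, dim), torch.randint(0, n_classes, (32,))
print(edt_step(classifier, translator, opt, x, y, n_domains))
```

In a faithful setup, the translator would first be trained as a generative model on the training domains and then frozen, so the classifier never sees raw samples, only their translations across domains.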