Causal Discovery in Linear Latent Variable Models Subject to Measurement Error
Abstract
We focus on causal discovery in the presence of measurement error in linear systems where the mixing matrix, i.e., the matrix relating the independent exogenous noise terms to the observed variables, is identified up to permutation and scaling of its columns. We demonstrate a somewhat surprising connection between this problem and causal discovery in the presence of unobserved parentless causes: there is a mapping, given by the mixing matrix, between the underlying models to be inferred in the two problems. Consequently, any identifiability result based on the mixing matrix for one model translates to an identifiability result for the other. We characterize the extent to which the causal models can be identified under a two-part faithfulness assumption. Under only the first part of the assumption (corresponding to the conventional definition of faithfulness), the structure can be learned up to the causal ordering among an ordered grouping of the variables, but not all the edges across the groups can be identified. We further show that if both parts of the faithfulness assumption are imposed, the structure can be learned up to a more refined ordered grouping. As a result of this refinement, for the latent variable model with unobserved parentless causes, the structure can be identified. Based on our theoretical results, we propose causal structure learning methods for both models and evaluate their performance on synthetic data.
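The mixing-matrix ambiguity the abstract refers to can be illustrated with a small simulation. This is a minimal sketch, not the paper's method: the three-variable graph, the uniform noise, and the use of FastICA as the estimator are all assumptions made for illustration. In a linear SEM x = Bx + e, the variables satisfy x = (I - B)^{-1} e, so A = (I - B)^{-1} is the mixing matrix; ICA-based estimation recovers A only up to permutation and scaling of its columns.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Hypothetical 3-variable linear SEM x = B x + e, i.e., x = (I - B)^{-1} e.
# A = (I - B)^{-1} is the mixing matrix mapping exogenous noise to variables.
B = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [0.4, 0.5, 0.0]])  # edges x1 -> x2, x1 -> x3, x2 -> x3
A = np.linalg.inv(np.eye(3) - B)

# Non-Gaussian (uniform) exogenous noise, as ICA-based identification requires.
n = 50_000
e = rng.uniform(-1.0, 1.0, size=(n, 3))
x = e @ A.T

# FastICA recovers the mixing matrix only up to permutation and scaling
# of its columns -- exactly the ambiguity discussed in the abstract.
ica = FastICA(n_components=3, whiten="unit-variance", random_state=0)
ica.fit(x)
A_hat = ica.mixing_

# Normalize columns to remove the scale ambiguity, then match each true
# column to the estimated column it best aligns with (cosine similarity).
norm = lambda M: M / np.linalg.norm(M, axis=0)
corr = np.abs(norm(A).T @ norm(A_hat))
perm = corr.argmax(axis=1)
print("column match scores:", corr.max(axis=1))
```

With enough samples the match scores are close to 1 and `perm` is a permutation of the columns, showing that the column order and signs/scales are the only remaining indeterminacy.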
Recommended Citation
Y. Yang et al., "Causal Discovery in Linear Latent Variable Models Subject to Measurement Error," Advances in Neural Information Processing Systems, vol. 35, The MIT Press, Jan 2022.
Department(s)
Electrical and Computer Engineering
International Standard Book Number (ISBN)
978-171387108-8
International Standard Serial Number (ISSN)
1049-5258
Document Type
Article - Conference proceedings
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2024 the Authors; Neural Information Processing Systems Foundation Inc. All rights reserved.
Publication Date
01 Jan 2022
Comments
National Science Foundation, Grant 2134901