Preprints, Working Papers, ... Year: 2023

Learning Large Causal Structures from Inverse Covariance Matrix via Matrix Decomposition

Abstract

Learning causal structures from observational data is a fundamental yet highly complex problem when the number of variables is large. In this paper, we start from linear structural equation models (SEMs) and investigate ways of learning causal structures from the inverse covariance matrix. The proposed method, called O-ICID (for Independence-preserving Decomposition from Oracle Inverse Covariance matrix), is based on continuous optimization of a type of matrix decomposition that preserves the nonzero patterns of the inverse covariance matrix. We show that O-ICID provides an efficient way to identify the true directed acyclic graph (DAG) when the noise variances are known. With weaker prior information, the proposed method gives directed-graph solutions that are useful for more refined causal discovery. The method has low complexity when the true DAG has bounded node degrees, as reflected by its time efficiency in experiments compared with state-of-the-art algorithms.
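
As a brief illustration of the setting described in the abstract, the following LaTeX sketch states the standard identity linking a linear SEM to its inverse covariance (precision) matrix. The notation used here (weighted adjacency matrix B, diagonal noise covariance Omega, covariance Sigma, precision Theta) is assumed for illustration and need not match the paper's; it only motivates why a decomposition of the precision matrix can preserve independence structure.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch of the standard linear-SEM identity behind precision-matrix
% decompositions; the symbols B, Omega, Sigma, Theta are illustrative,
% not necessarily the paper's own notation.
Consider a linear SEM $X = B^{\mathsf{T}} X + N$, where $B$ is the weighted
adjacency matrix of a DAG over $d$ variables and the noise $N$ has diagonal
covariance $\Omega = \operatorname{diag}(\sigma_1^2, \dots, \sigma_d^2)$. Then
\begin{align*}
  X &= (I - B)^{-\mathsf{T}} N,
  &
  \Sigma = \operatorname{Cov}(X) &= (I - B)^{-\mathsf{T}} \Omega (I - B)^{-1},
\end{align*}
so the inverse covariance (precision) matrix factors as
\[
  \Theta = \Sigma^{-1} = (I - B)\, \Omega^{-1} (I - B)^{\mathsf{T}},
\]
and, generically, the support of $\Theta$ coincides with the moral graph of
the DAG. Recovering a DAG-structured $B$ from $\Theta$ (for instance, given
the noise variances) is a matrix decomposition of the kind referred to in the
abstract.
\end{document}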
Main file
2211.14221.pdf (928.96 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03885791, version 1 (23-10-2023)

Licence

Attribution

Identifiers

hal-03885791
arXiv: 2211.14221

Cite

Shuyu Dong, Kento Uemura, Akito Fujii, Shuang Chang, Yusuke Koyanagi, et al. Learning Large Causal Structures from Inverse Covariance Matrix via Matrix Decomposition. 2023. ⟨hal-03885791⟩
