
Released

Conference Paper

Invariant Gaussian Process Latent Variable Models and Application in Causal Discovery

MPS-Authors
/persons/resource/persons84328

Zhang, K.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84193

Schölkopf, B.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons75626

Janzing, D.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (public)

Pub3_[0].pdf (Any fulltext), 325KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Zhang, K., Schölkopf, B., & Janzing, D. (2010). Invariant Gaussian Process Latent Variable Models and Application in Causal Discovery. In P. Grünwald & P. Spirtes (Eds.), 26th Conference on Uncertainty in Artificial Intelligence (UAI 2010) (pp. 717-724). Corvallis, OR, USA: AUAI Press.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-BF40-9
Abstract
In nonlinear latent variable models or dynamic models, if we consider the latent variables as confounders (common causes), the noise dependencies imply further relations between the observed variables. Such models are then closely related to causal discovery in the presence of nonlinear confounders, which is a challenging problem. However, in such models the observation noise is generally assumed to be independent across data dimensions, and consequently the noise dependencies are ignored. In this paper we focus on the Gaussian process latent variable model (GPLVM), from which we develop an extended model called the invariant GPLVM (IGPLVM), which can adapt to arbitrary noise covariances. Because the Gaussian process prior is placed on a particular transformation of the latent nonlinear functions, instead of on the original ones, the algorithm for IGPLVM involves almost the same computational load as that for the original GPLVM. Besides its potential application in causal discovery, IGPLVM has the advantage that its estimated latent nonlinear manifold is invariant to any nonsingular linear transformation of the data. Experimental results on both synthetic and real-world data show its encouraging performance in nonlinear manifold learning and causal discovery.
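
To fix ideas, here is a minimal sketch (NumPy/SciPy; the function names and the known-noise-covariance assumption are ours, not the paper's) of the quantities the abstract refers to: the standard GPLVM marginal log-likelihood, and a schematic version of the whitening idea by which correlated observation noise reduces to the independent-noise case. The actual IGPLVM places the GP prior on a transformed latent function and handles the noise covariance within the model; this sketch simply assumes the covariance is given.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular, sqrtm
from scipy.spatial.distance import cdist

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel over the latent coordinates X (N x q)."""
    sq = cdist(X, X, "sqeuclidean")
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gplvm_log_likelihood(X, Y, noise_var=0.1):
    """Standard GPLVM marginal log-likelihood: each of the D data dimensions
    is an independent GP over the latent coordinates, so
    log p(Y|X) = -DN/2 log(2*pi) - D/2 log|K| - 1/2 tr(K^{-1} Y Y^T)."""
    N, D = Y.shape
    K = rbf_kernel(X) + noise_var * np.eye(N)
    L = cholesky(K, lower=True)
    alpha = solve_triangular(L, Y, lower=True)   # L^{-1} Y, shape (N, D)
    log_det_K = 2.0 * np.sum(np.log(np.diag(L)))
    return (-0.5 * D * N * np.log(2.0 * np.pi)
            - 0.5 * D * log_det_K
            - 0.5 * np.sum(alpha**2))

def correlated_noise_log_likelihood(X, Y, Sigma):
    """Hypothetical illustration of the whitening idea (not the authors'
    IGPLVM algorithm): if the D-dimensional observation noise has a known
    covariance Sigma, transform the data by Sigma^{-1/2} so the standard
    independent-noise, unit-variance GPLVM likelihood applies, and correct
    with the change-of-variables term -N/2 log|Sigma|."""
    N, _ = Y.shape
    W = np.linalg.inv(sqrtm(Sigma).real)         # Sigma^{-1/2}, Sigma assumed SPD
    Yw = Y @ W.T                                 # whitened observations
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    return gplvm_log_likelihood(X, Yw, noise_var=1.0) - 0.5 * N * logdet_Sigma
```

Because the likelihood depends on the whitened data only through Yw Yw^T, replacing Y with Y A^T (and Sigma with A Sigma A^T) for any nonsingular A leaves it unchanged up to a constant, which mirrors the invariance property stated in the abstract.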