
Estimation of Causal Direction
in the Presence of Latent Confounders
Using a Bayesian LiNGAM Mixture Model
Naoki Tanaka, Shohei Shimizu, Takashi Washio
The Institute of Scientific and Industrial Research,
Osaka University
Outline
1. Motivation
2. Background
3. Our Approach
4. Our Model: Bayesian LiNGAM Mixture
5. Simulation Experiments
Motivation
• Recently, estimation of causal structure has been attracting much attention in machine learning.
  – Epidemiology
  – Genetics
• The estimation results can be biased if there are latent confounders.
  [Figure: observed variables x1 (sleep problems) and x2 (depression), with a cause–effect arrow between them and a latent confounder (mood) affecting both.]
• Latent confounder: an unobserved variable that has more than one observed child variable.
• We propose a new estimation approach that can solve this problem.
LiNGAM (Linear Non-Gaussian Acyclic Model) [Shimizu et al., 2006]
• The relations between variables are linear.
• Observed variables are generated from a DAG (directed acyclic graph), e.g.:
  x1 = 1.4 x3 + e1
  x2 = -0.8 x1 + 0.5 x3 + e2
  x3 = e3
  [Figure: the corresponding DAG with edge weights 1.4 (x3 → x1), 0.5 (x3 → x2), and -0.8 (x1 → x2).]
• External influences e_i are non-Gaussian.
• No latent confounders → the e_i are mutually independent.
• LiNGAM is an identifiable causal model.
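As a sanity check, the slide's example can be simulated directly. The coefficients follow the slide's graph (with b21 = -0.8 as drawn); Laplace noise is one convenient non-Gaussian choice, not the paper's prescription:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000

# Non-Gaussian (Laplace) external influences, mutually independent.
e1, e2, e3 = rng.laplace(size=(3, n))

# Generate variables in causal order (x3, then x1, then x2), as on the slide.
x3 = e3
x1 = 1.4 * x3 + e1
x2 = -0.8 * x1 + 0.5 * x3 + e2

data = np.column_stack([x1, x2, x3])
print(data.shape)  # (10000, 3)
```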
A Problem of LiNGAM
• Latent confounders make the external influences e_i dependent.
  → The estimation results can be biased.
• Example: if x3 becomes unobserved (x3 = f3, a latent confounder), the model
  x1 = 1.4 x3 + e1
  x2 = -0.8 x1 + 0.5 x3 + e2
  reduces to
  x1 = e1'
  x2 = -0.8 x1 + e2',   where e1' and e2' are dependent.
  [Figure: patients' condition (mild vs. serious) acts as a latent confounder of medicine A and the survival rate.]
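The bias can be reproduced numerically: regressing x2 on x1 while ignoring a latent confounder f mixes the direct effect with the confounding path. All weights below are illustrative, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100000

# Latent confounder f affects both x1 and x2 (illustrative weights).
f = rng.laplace(size=n)
x1 = 1.4 * f + rng.laplace(size=n)
x2 = 0.8 * x1 + 0.5 * f + rng.laplace(size=n)

# Regressing x2 on x1 alone absorbs the confounder's path, so the
# estimate drifts away from the true direct effect 0.8.
b_biased = np.cov(x1, x2)[0, 1] / np.var(x1)
print(round(b_biased, 2))
```

With these weights the population value of the biased coefficient is about 1.04 rather than 0.8.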
LiNGAM with Latent Confounders [Hoyer et al., 2008]
• LvLiNGAM (latent variable LiNGAM):
  x_i = Σ_{k(j)<k(i)} b_ij x_j + Σ_k λ_ik f_k + e_i
• f_k: latent variables (confounders), mutually independent and non-Gaussian.
• λ_ik: represents the effect of f_k on x_i.
A Problem in Estimation of LiNGAM with Latent Confounders
• Existing methods:
  – An estimation method using overcomplete ICA [Hoyer et al., 2008]
    → suffers from local optima and requires large sample sizes.
  – Methods that estimate only unconfounded causal relations [Entner and Hoyer, 2011; Tashiro et al., 2012]
    → cannot estimate the causal direction of two observed variables that are affected by latent confounders.
• We propose an alternative that is
  – computationally simpler, and
  – capable of finding a causal direction in the presence of latent confounders.
Basic Idea of Our Approach
• Assumption
  – Continuous latent confounders can be approximated by discrete variables.
  → LiNGAM with latent confounders reduces to a LiNGAM mixture model [Shimizu et al., 2008].
• Estimation
  – Existing estimation of the LiNGAM mixture [Mollah et al., 2006] also suffers from local optima.
  – We propose to use a Bayesian approach, following the Bayesian approach for basic LiNGAM [Hoyer et al., 2009].
LiNGAM Mixture Model [Shimizu et al., 2008]
• The data-generating model of observed variable x_i within class c is
  x_i = Σ_{k(j)<k(i)} b_ij^(c) (x_j - μ_j^(c)) + μ_i^(c) + e_i^(c)
  or, in matrix form,
  x = B^(c) (x - μ^(c)) + μ^(c) + e^(c).
  [Figure: scatter plot of data from a two-class mixture over x1 and x2, both classes sharing b21 = 0.8, with class 1 centered at mean 0 and class 2 at mean 7.]
• Existing estimation methods of the LiNGAM mixture model also suffer from local optima [Mollah et al., 2006].
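The two-class picture above can be simulated with a small sketch. The class means (0 and 7) and b21 = 0.8 follow the slide's figure; Laplace noise and equal mixing weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Two-class LiNGAM mixture over (x1, x2): same b21 = 0.8 in both classes,
# class means read from the slide's picture (class 1 at 0, class 2 at 7).
mus = np.array([[0.0, 0.0], [7.0, 7.0]])
pi = np.array([0.5, 0.5])        # mixing weights (illustrative)

c = rng.choice(2, size=n, p=pi)  # class label per observation
e1 = rng.laplace(size=n)
e2 = rng.laplace(size=n)

x1 = mus[c, 0] + e1
x2 = mus[c, 1] + 0.8 * (x1 - mus[c, 0]) + e2
```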
Relation of Latent Variable LiNGAM and LiNGAM Mixture (1)
• We assume that continuous latent confounders can be approximated, with good precision, by discrete variables taking several values.
  – The combination of the discrete values determines which "class" an observation belongs to.
  → Within the same class, the e_i are mutually independent.
  → This is simpler than incorporating latent confounders into LiNGAM directly.
• Example: when the latent confounder f3 is (approximately) constant within a class,
  x1 = 1.4 f3 + e1
  x2 = -0.8 x1 + 0.5 f3 + e2
  becomes
  x1 = μ1^(c) + e1
  x2 = -0.8 x1 + μ2^(c) + e2,   with e1 and e2 independent within each class.
Relation of Latent Variable LiNGAM and LiNGAM Mixture (2)
• A simple example: if the latent confounders f3 and f4 can be approximated by the values 0 and 1, the latent variable LiNGAM
  x_i = Σ_{k(j)<k(i)} b_ij x_j + Σ_k λ_ik f_k + e_i
  reduces to the LiNGAM mixture
  x_i = Σ_{k(j)<k(i)} b_ij^(c) (x_j - μ_j^(c)) + μ_i^(c) + e_i^(c).
• Each of the four combinations of (f3, f4) in {0, 1}² defines one class c, with class means μ_i^(c) = λ_i3 f3 + λ_i4 f4. With the loadings in the figure this gives
  μ1^(1) = 0, μ1^(2) = 0.9, μ1^(3) = 0.3, μ1^(4) = 1.2 and
  μ2^(1) = 0, μ2^(2) = 0.6, μ2^(3) = 0.7, μ2^(4) = 1.3.
  [Figure: the latent variable LiNGAM graph (loadings 0.9 and 0.3 on x1, 0.6 and 0.7 on x2) and the resulting four-class LiNGAM mixture.]
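Under this reading of the example (loadings λ13 = 0.9, λ14 = 0.3, λ23 = 0.6, λ24 = 0.7, reconstructed from the slide and possibly inexact), the four class means can be enumerated directly:

```python
from itertools import product

# Loadings as read from the slide's example (reconstructed; illustrative).
lam = {("x1", "f3"): 0.9, ("x1", "f4"): 0.3,
       ("x2", "f3"): 0.6, ("x2", "f4"): 0.7}

# Each (f3, f4) combination in {0, 1}^2 defines one mixture class; the class
# mean of x_i is the confounder contribution lam_i3 * f3 + lam_i4 * f4.
classes = []
for f3, f4 in product([0, 1], repeat=2):
    mu1 = lam[("x1", "f3")] * f3 + lam[("x1", "f4")] * f4
    mu2 = lam[("x2", "f3")] * f3 + lam[("x2", "f4")] * f4
    classes.append((round(mu1, 2), round(mu2, 2)))

print(classes)  # [(0.0, 0.0), (0.3, 0.7), (0.9, 0.6), (1.2, 1.3)]
```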
Bayesian LiNGAM Mixture Model (1)
• The data within class c are assumed to be generated by the LiNGAM model.
• The connection strengths b_ij and the densities p_i of the e_i have no relation to the latent confounders f_k, so they do not differ between classes; only the class means μ^(c) change. For example, in
  x1 = 1.4 f3 + e1
  x2 = -0.8 x1 + 0.5 f3 + e2,
  b21 and the densities of e1, e2 do not change across classes, while μ^(c) does.
• We therefore replace b_ij^(c) and e_i^(c) of the LiNGAM mixture model by b_ij and e_i:
  x_i = Σ_{k(j)<k(i)} b_ij (x_j - μ_j^(c)) + μ_i^(c) + e_i
• The probability density within class c is then
  p(x | c) = Π_i p_i( x_i - μ_i^(c) - Σ_{k(j)<k(i)} b_ij (x_j - μ_j^(c)) ).
Bayesian LiNGAM Mixture Model (2)
• The probability densities of the data x within the classes are mixed according to weights:
  p(x | θ) = Σ_{c=1}^{C} p(x | c) P(c)   (C: the number of classes)
• P(c): multinomial distribution.
• Prior on the parameters of the multinomial distribution: Dirichlet distribution.
  – A typical prior for the parameters of a multinomial distribution.
  – Conjugate prior for the multinomial distribution.
Comparing Three LiNGAM Mixture Models
• Select the model with the largest log-marginal likelihood.
• Because of the acyclicity assumption, there are only three models (M1, M2, and M3) for two observed variables:
  M1: no direct effect between x1 and x2;  M2: x1 → x2;  M3: x1 ← x2  (each holding within every class).
Log-marginal Likelihood of Our Model
• Bayes' theorem, with data D = {x(1), ..., x(n)} (n: sample size):
  P(M | D) = P(D | M) P(M) / P(D)
• The log-marginal likelihood is calculated as
  log P(D | M) = log ∫ P(D | M, θ) p(θ | M) dθ,
  where P(D | M, θ) is the LiNGAM mixture likelihood and p(θ | M) the prior distribution.
• We use Monte Carlo integration to compute the integral.
• By the assumption of i.i.d. data,
  P(D | M, θ) = Π_{t=1}^{n} Σ_{c=1}^{C} P(c) Π_i p_i( x_i(t) - μ_i^(c) - Σ_{k(j)<k(i)} b_ij (x_j(t) - μ_j^(c)) ).
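The Monte Carlo estimate log p(D) ≈ log( (1/S) Σ_s p(D | θ_s) ), with θ_s drawn from the prior, can be sketched as follows. The simplified priors (standard normals for b21 and the class means, fixed equal weights, Laplace noise) are assumptions for illustration, not the paper's exact choices:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(4)

def log_lik(data, b21, mus, pis):
    """Log-likelihood of i.i.d. data under a 2-class mixture for x1 -> x2,
    with unit-scale Laplace noise (stand-in for the generalized Gaussian)."""
    x1, x2 = data[:, 0], data[:, 1]
    per_class = []
    for (m1, m2), pi_c in zip(mus, pis):
        e1 = x1 - m1
        e2 = x2 - m2 - b21 * (x1 - m1)
        per_class.append(np.log(pi_c) - np.abs(e1) - np.abs(e2) - 2 * np.log(2.0))
    # Sum over observations of log sum over classes.
    return logsumexp(per_class, axis=0).sum()

def log_marginal(data, n_draws=500):
    """Monte Carlo estimate: average the likelihood over prior draws of theta,
    done in log space with logsumexp for numerical stability."""
    draws = []
    for _ in range(n_draws):
        b21 = rng.normal()
        mus = rng.normal(size=(2, 2))
        draws.append(log_lik(data, b21, mus, [0.5, 0.5]))
    return logsumexp(draws) - np.log(n_draws)
```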
Distribution of e_i
• e_i follows a generalized Gaussian distribution with zero mean.
  → Includes Gaussian, Laplace, continuous uniform, and many other non-Gaussian distributions.
  – p(e_i) = β_i / (2 α_i Γ(1/β_i)) · exp( -(|e_i| / α_i)^β_i )
  – var(e_i) = α_i² Γ(3/β_i) / Γ(1/β_i)
  – Γ(·) is the Gamma function.
  [Figure: densities with var(e_i) = 1 for β_i = 1 (Laplace), β_i = 2 (Gaussian), and β_i = 10.]
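The generalized Gaussian density above can be checked numerically: β = 2 with α = √2 recovers the standard Gaussian, and β = 1 the Laplace.

```python
import numpy as np
from scipy.special import gamma

def gg_pdf(e, alpha, beta):
    """Generalized Gaussian density with zero mean:
    p(e) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|e|/alpha)**beta).
    Its variance is alpha**2 * Gamma(3/beta) / Gamma(1/beta)."""
    return beta / (2 * alpha * gamma(1 / beta)) * np.exp(-(np.abs(e) / alpha) ** beta)

# beta = 2 with alpha = sqrt(2) gives variance 1 and the standard Gaussian.
x = np.linspace(-3, 3, 7)
gaussian = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
print(np.allclose(gg_pdf(x, alpha=np.sqrt(2), beta=2), gaussian))  # True
```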
Prior Distributions and the Number of Classes
• Prior distributions:
  – b_ij and μ_i^(c) follow Gaussian priors N(μ0, σ0²).
  – var(e_i) and the prior variances follow Inv-Gamma(3, 3).
  – α_i can then be calculated from the equation for var(e_i).
• How to select the number of classes C (note that a "true C" does not exist under the approximation):
  ① For each candidate number of classes, select the best of the three models by log-marginal likelihood.
  ② Select the best number of classes among the candidates.
  – Candidates C = 1, ..., 2 log n, motivated by the Dirichlet process mixture result that the number of classes grows like log n as n → ∞ [Antoniak, 1974].
  [Table: illustrative log-marginal-likelihood-based selection over models and numbers of classes.]
Simulation Settings (1)
• Generated data using a LiNGAM with latent confounders [Hoyer et al., 2008].
• 100 trials.
  [Figure: the data-generating graph (model M2): latent confounders f3, f4, f5 over the observed pair x1 → x2 (b21 = 0.8), with loadings 0.7, 0.9, -1, 0.6, and 0.3.]
• The distributions of the latent variables (e1, e2, f3, f4, and f5) are randomly selected from the following three non-Gaussian distributions: a Laplace distribution, a symmetric mixture of two Gaussians, and an asymmetric mixture of two Gaussians.
Simulation Settings (2)
• Two methods for comparison:
  – Pairwise likelihood ratios for estimation of non-Gaussian SEMs [Hyvärinen et al., 2013]
    → assumes no latent confounders.
  – PairwiseLvLiNGAM [Entner et al., 2011]
    → finds variable pairs that are not affected by latent confounders and then estimates a causal ordering of one to the other.
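The pairwise likelihood-ratio idea can be illustrated with a simplified sketch: regress each way, score regressor and residual with a Laplace likelihood, and pick the direction with the higher total. This is a stand-in under an assumed Laplace noise model, not the exact estimator of Hyvärinen et al. (2013):

```python
import numpy as np

def laplace_loglik(r):
    """Laplace log-likelihood of residuals, with MLE scale b = mean(|r|)."""
    b = np.mean(np.abs(r))
    return -len(r) * np.log(2 * b) - np.sum(np.abs(r)) / b

def direction(x, y):
    """Compare the likelihoods of x -> y and y -> x for zero-mean data.
    A simplified stand-in for the pairwise likelihood ratio of
    Hyvarinen et al. (2013), not their exact estimator."""
    bxy = np.dot(x, y) / np.dot(x, x)   # regress y on x
    byx = np.dot(x, y) / np.dot(y, y)   # regress x on y
    ll_xy = laplace_loglik(x) + laplace_loglik(y - bxy * x)
    ll_yx = laplace_loglik(y) + laplace_loglik(x - byx * y)
    return "x->y" if ll_xy > ll_yx else "y->x"

rng = np.random.default_rng(5)
x = rng.laplace(size=20000)
y = 0.8 * x + rng.laplace(size=20000)
print(direction(x, y))
```

With non-Gaussian noise the forward direction attains the higher likelihood, which is exactly the identifiability that Gaussian noise would destroy.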
Simulation Results
[Figure: number of correct answers (out of 100 trials) vs. sample size (50, 100, 200) for the three true models M1 (no direct effect), M2 (x1 → x2), and M3 (x1 ← x2), comparing our method, the pairwise measure, and PairwiseLvLiNGAM (with its number of outputs).]
• "(Number of outputs)" is the number of estimates made by PairwiseLvLiNGAM. In detail (correct answers / number of outputs):

  Sample size | M1    | M2   | M3
  50          | 64/64 | 6/12 | 6/16
  100         | 52/52 | 7/20 | 5/24
  200         | 42/42 | 0/14 | 2/14

• Our method is the most robust against latent confounders.
Conclusions and Future Work
• A challenging problem: estimation of causal direction in the presence of latent confounders.
  – Latent confounders violate the assumptions of LiNGAM and can bias the estimation results.
• Proposed a Bayesian LiNGAM mixture approach.
  – Capable of finding a causal direction in the presence of latent confounders.
  – Computationally simpler: no iterative estimation in the parameter space.
• In our simulations, our method performed better than the two existing methods.
• Future work
  – Test our method on a wide variety of real datasets.
Histograms of the Selected Number of Classes
[Figure: histograms of the selected number of classes for the three true models (G1, G2, G3) at sample sizes 50 (range 1–7), 100 (range 1–9), and 200 (range 1–10).]
Density of a Transformation [Hyvärinen et al., 2001]
• e.g., x = (x1, ..., xd)ᵀ, e = (e1, ..., ed)ᵀ; p_x is the density of x and p_e is the density of e.
  – The e_i are mutually independent, so p_e(e) = Π_i p_i(e_i).
• We can rewrite LiNGAM in matrix form:
  x = B x + e  ⇔  x = (I - B)⁻¹ e
• By the change-of-variables formula,
  p_x(x) = p_e( (I - B) x ) / |det (I - B)⁻¹|.
• B can be permuted, by simultaneous equal row and column permutations, to be strictly lower triangular due to the acyclicity assumption [Bollen, 1989].
  → (I - B)⁻¹ is lower triangular with all diagonal elements equal to 1.
• The determinant of a lower triangular matrix equals the product of its diagonal elements.
  → |det (I - B)⁻¹| = 1, so p_x(x) = p_e( (I - B) x ) = Π_i p_i( x_i - Σ_j b_ij x_j ).
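A quick numerical check of the last step, with an illustrative strictly lower triangular B (the coefficients are not from the slides):

```python
import numpy as np

# B strictly lower triangular (acyclic, variables already in causal order),
# so I - B is lower triangular with unit diagonal and det(I - B) = 1:
# the density transform needs no Jacobian factor.
B = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [0.5, 1.4, 0.0]])
det = np.linalg.det(np.eye(3) - B)
print(det)  # 1.0 up to floating point
```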
Gaussian vs. Non-Gaussian
[Figure: scatter plots of (x1, x2) under the models x1 → x2 and x1 ← x2. With Gaussian external influences the two models yield the same joint distribution; with non-Gaussian (uniform) influences the two joints differ, so the causal direction is identifiable.]