This page contains the LDM EM results obtained in MATLAB.

In this experiment, we use a 2-dimensional hidden state X[] and a 3-dimensional observation Y[].

EM experiment result:
- No_EM is the number of EM iterations.
- State initialization: each X[i] is initialized as [0.3; 0.5].
(I will try to find a better initialization method later.)
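The MATLAB scripts themselves are not shown on this page, so here is a minimal NumPy sketch of generating synthetic data from an LDM with these dimensions (2-d state, 3-d observation). The specific values of F, H, w, v and the noise covariances D, C are made up for illustration; only the dimensions and the [0.3; 0.5] initialization come from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 100                       # sequence length (assumed; not stated in the notes)
F = np.array([[0.9, 0.1],     # 2x2 state transition matrix (illustrative values)
              [0.0, 0.8]])
H = np.array([[1.0, 0.0],     # 3x2 observation matrix (illustrative values)
              [0.0, 1.0],
              [0.5, 0.5]])
w = np.array([0.1, -0.1])     # state noise mean, with covariance D
D = 0.01 * np.eye(2)
v = np.zeros(3)               # observation noise mean, with covariance C
C = 0.05 * np.eye(3)

X = np.zeros((T, 2))
Y = np.zeros((T, 3))
X[0] = [0.3, 0.5]             # same vector the notes use to initialize the states
Y[0] = H @ X[0] + rng.multivariate_normal(v, C)
for t in range(1, T):
    X[t] = F @ X[t - 1] + rng.multivariate_normal(w, D)
    Y[t] = H @ X[t] + rng.multivariate_normal(v, C)
```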

No_EM = 1

As predicted, a single EM iteration gives a poor estimate of the real parameters/states. The reason is that the initialized parameters/states are far from the real ones.



No_EM = 2

Surprisingly, 2 EM iterations already start converging to the real parameters/states. I ran the 2-iteration experiment many times, and about 95% of the runs produced good curves.



No_EM = 4

In general, 4 EM iterations give a better estimate than 2. The estimated curve is comparable to the real state curve.



No_EM = 10

By eye, I cannot tell whether 10 EM iterations are better than 4. A promising observation: LDM EM seems to converge very fast, so we do not need many iterations to get a usable estimate.



No_EM = 100

This seems to give a worse result than No_EM = 10. My guess is that this is due to instability or noise inside the LDM model.



No_EM = 300

This gives an even worse result; sometimes the estimates diverge to NaN.



No_EM = 1000

The estimates always diverge to NaN.
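One plausible source of the NaN divergence at high iteration counts is the covariance estimates (D, C) losing symmetry or positive definiteness through accumulated round-off in the M-step. A sketch of a guard that could be applied after each covariance update; this helper is hypothetical and not part of the current MATLAB implementation (shown here in NumPy):

```python
import numpy as np

def symmetrize_and_floor(S, eps=1e-8):
    """Keep a covariance estimate symmetric and positive definite.

    Hypothetical safeguard: round-off in repeated M-step updates can
    make a covariance slightly asymmetric or indefinite, which later
    produces NaN in matrix inversions.
    """
    S = 0.5 * (S + S.T)                  # restore exact symmetry
    vals, vecs = np.linalg.eigh(S)       # eigendecomposition of symmetric S
    vals = np.maximum(vals, eps)         # floor the eigenvalues away from zero
    return vecs @ np.diag(vals) @ vecs.T
```

Calling this on D and C after every M-step would keep the subsequent inversions well defined, at the cost of a small bias when the floor is active.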



Summary
(1): EM did not recover the actual X[i] values well. However, the estimated X[i] has a trend and variance very similar to the real hidden X[i].
(2): EM gives reasonable estimates of the parameters H, v, C, but the estimated F, w, D are quite different from the real model values.

LDM equations:
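The figure with the equations appears to be missing from this page. Based on the parameter names used in the summary (F, w, D for the state; H, v, C for the observation), the model is presumably the standard linear dynamical model; the following reconstruction is an assumption:

```latex
x_{t+1} = F x_t + \eta_t, \qquad \eta_t \sim \mathcal{N}(w, D) \\
y_t = H x_t + \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(v, C)
```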


Estimated parameters vs. real parameters (4 EM iterations)




Analysis and plan

The evaluation of the current EM implementation will depend on the Likelihood Calculation result. One possibility is that the Likelihood Calculation gives similar values for the EM-estimated parameters and the real model parameters. The other possibility is that it gives quite different values for the two. In that case we need more discussion and a deeper dig into the Likelihood Calculation algorithm: there could be bugs in the current EM implementation, or the current Likelihood Calculation algorithm may not reflect the LDM well. That might be a good opportunity to propose a modified Likelihood Calculation algorithm. Details are given below.
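For reference, the standard way to compute an LDM likelihood is the Kalman-filter prediction-error decomposition; whether the current Likelihood Calculation implementation follows exactly this form is an assumption. A self-contained NumPy sketch:

```python
import numpy as np

def ldm_loglik(Y, F, w, D, H, v, C, x0, P0):
    """Log-likelihood of an observation sequence under an LDM via the
    Kalman-filter prediction-error decomposition.

    A sketch of the textbook algorithm; not the notes' actual
    "Likelihood Calculation" code, which is not shown on this page.
    """
    x, P = x0, P0
    ll = 0.0
    for y in Y:
        # predicted observation and its covariance
        y_pred = H @ x + v
        S = H @ P @ H.T + C
        e = y - y_pred                          # innovation
        ll += -0.5 * (len(y) * np.log(2 * np.pi)
                      + np.linalg.slogdet(S)[1]
                      + e @ np.linalg.solve(S, e))
        # Kalman measurement update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ e
        P = P - K @ H @ P
        # time update
        x = F @ x + w
        P = F @ P @ F.T + D
    return ll
```

Running this with the EM-estimated parameters and with the real parameters on the same held-out sequences would decide between the two possibilities above.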

(1):
- If the Likelihood Calculation gives similar values for the EM-estimated and the real model parameters, the EM implementation is correct and we can start testing the LDM on real speech data.
- LDM EM converges very quickly: 2 iterations already give a reasonable estimate, and 10 give a fair one. If the LDM turns out to classify speech as well as an HMM (i.e. comparably), the LDM is quite possibly more efficient than the HMM. If so, it would be a better model for computation-sensitive applications such as speech recognition on a cellphone/PDA.
- We might get a paper out of the efficiency improvement.

(2):
- If the Likelihood Calculation gives quite different values for the EM-estimated and the real model parameters, I do not want to simply conclude that our EM implementation is wrong. We can dig deeper into the EM model itself and the current Likelihood Calculation algorithm.
- In my view, the parameters of an LDM are supposed to drive the trend and variance of the hidden states X[i], not to be tied directly to the values of X[i].
- Based on this assumption, should the ideal Likelihood Calculation algorithm give more weight to H, v, C than to F, w, D? How about modifying the Likelihood Calculation algorithm to do exactly that?
- We might be able to propose a modified Likelihood Calculation algorithm based on this weighting idea, then run experiments comparing the two algorithms.
- If we get a positive result, that theoretical contribution would definitely make a good paper.



--
April 06, 2007 by Tao.