Expectation Maximization for Logistic Mixture Autoregressive (LMAR) model. Attempt #2

Some time ago I wrote an article describing my unsuccessful attempt to implement the Expectation Maximization (EM) algorithm for estimating the parameters of the Logistic Mixture Autoregressive (LMAR) model. In this article I describe how I was able to fix the problem and provide a correct implementation of the algorithm. I will also provide code for estimating the order of the model and replicate some of the simulation studies from the paper this article is based on (‘Basket trading under co-integration with the logistic mixture autoregressive model’, Cheng et al. 2011). I am not going to describe the model again here; please read the previous article to get familiar with it. Let’s start right away with the problem and how to fix it.


The implementation of the EM algorithm in the previous article worked quite well when some of the parameters were treated as known. I performed several experiments treating different sets of parameters as known, and in all of those cases I was able to get good estimates of the true parameters. But when I tried estimating all of the parameters together, everything fell apart: all estimates were completely wrong.

I believe the problem was that the model was not identifiable. There was no way for the EM algorithm to distinguish between the two regimes: sometimes the first regime was determined to be stationary (which is correct) and sometimes the second regime was (which is wrong). In some simulations both regimes were probably stationary.

To fix this problem, a simple additional convergence condition was needed: the requirement that the second regime is non-stationary. It takes only a couple of lines of code.

The parameters of the model, along with the code used to generate and plot a sample from it, are shown below.
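
Below is a minimal sketch of what that generation code can look like. The parameter values here are purely illustrative (not necessarily the ones used in the article), and the mixing probability is assumed to be a logistic function of the previous observation; the exact parameters and logistic link are given in the previous article and in the notebook.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Illustrative parameters; the first entry of each zeta vector is the intercept,
# the remaining entries are the AR lag coefficients.
zeta_1 = np.array([0.0, 0.5])   # regime 1: stationary AR(1)
zeta_2 = np.array([0.0, 1.0])   # regime 2: AR(1) with a unit root (non-stationary)
sigma_1, sigma_2 = 1.0, 1.0     # regime noise standard deviations
delta = np.array([1.0, -0.5])   # logistic mixing coefficients (intercept + one lag)

T = 1000
w = np.zeros(T)
for t in range(1, T):
    # probability that observation t comes from regime 1
    p_t = 1.0 / (1.0 + np.exp(-(delta[0] + delta[1] * w[t - 1])))
    if rng.random() < p_t:
        w[t] = zeta_1[0] + zeta_1[1] * w[t - 1] + sigma_1 * rng.standard_normal()
    else:
        w[t] = zeta_2[0] + zeta_2[1] * w[t - 1] + sigma_2 * rng.standard_normal()

plt.plot(w)
plt.title('Sample from a two-regime LMAR-type model')
plt.show()
```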

As can be seen from the parameter vector zeta_2, the second regime is an AR(1) model. For an AR(1) model to be non-stationary, its lag coefficient must be greater than or equal to 1 in absolute value; here it is enough to check that the coefficient is at least 1, and that is what we check for.

The new implementation of the EM algorithm is shown below. The only difference is on lines 96–97, where we check that the second regime is non-stationary. If this condition is not satisfied, the algorithm is restarted with new initial parameters.
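
For the AR(1) second regime the added check boils down to something like the sketch below; zeta2_hat and the restart comment are stand-ins for the variables and logic used in the actual function.

```python
import numpy as np

def regime2_is_nonstationary(zeta2_hat):
    """Identifiability check for an AR(1) second regime.

    zeta2_hat is assumed to hold [intercept, lag coefficient].
    """
    return np.abs(zeta2_hat[1]) >= 1.0

# Inside the EM loop, after the parameter estimates have converged:
# if not regime2_is_nonstationary(zeta2_hat):
#     draw new random initial parameters and restart the iterations
```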

To check that the algorithm works correctly we can perform some simulation studies. I am not going to perform exactly the same studies as in the paper because that would take too much time; I will use fewer simulations and a larger sample size in each simulation.

The results of each simulation can be combined into a dataframe for further study.
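
A rough sketch of such a simulation loop is shown below; generate_lmar_sample and em_algorithm are placeholders for the generation and fitting functions defined in the notebook, so their exact signatures may differ.

```python
import pandas as pd

n_sims = 100   # number of simulations
T = 1000       # sample size in each simulation

rows = []
for i in range(n_sims):
    w = generate_lmar_sample(T)   # hypothetical generator of an LMAR sample
    params = em_algorithm(w)      # hypothetical fit returning a dict of estimates
    rows.append(params)

results = pd.DataFrame(rows)
results.head()
```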

Let’s look at the first several rows of the dataframe.

Simulation study results

It seems that the estimates are more or less correct, at least in the ballpark. Now let’s look at the mean and standard deviation across all 100 simulations.

Mean and standard deviation of parameters

Here we can see some problems with the delta parameters: the means are far from the true values and the standard deviations are too high. What is going on here? Let’s look at the results in a little more detail.

Results dataframe descriptive statistics

If we look at the min and max statistics, we see that the delta parameters have outliers that are very different from the estimates in most other simulations. In fact, if we just use the median instead of the mean as the parameter estimate, we get numbers that are really close to the true values.

Median values of the estimated parameters

So I think that this implementation of the EM algorithm does a sufficiently good job of estimating the model parameters.


Now we need to generalize it to work with a model of any order (m1, m2, n). The code is shown below. Most of the differences are minor and can be easily understood from the code. One important thing to note is how the convergence condition changes. Recall that we need to check whether the second regime is non-stationary. In the case of an AR(1) model we did that by checking the lag coefficient. For the general case of an AR(p) model we need to compute the reciprocal roots of the characteristic polynomial and check that at least one of them has modulus greater than or equal to one. This is done on lines 110–117.
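
A self-contained version of that check might look like the sketch below (the function name is mine; in the actual implementation the check is inlined inside em_algorithm).

```python
import numpy as np

def ar_is_nonstationary(phi, tol=1e-8):
    """Return True if an AR(p) process with lag coefficients phi (intercept
    excluded) is non-stationary.

    The process is stationary iff all roots of the characteristic polynomial
    1 - phi_1*z - ... - phi_p*z^p lie outside the unit circle, i.e. iff all
    reciprocal roots have modulus strictly less than one.
    """
    phi = np.asarray(phi, dtype=float)
    # the reciprocal roots are the roots of z^p - phi_1*z^(p-1) - ... - phi_p
    recip_roots = np.roots(np.r_[1.0, -phi])
    return bool(np.any(np.abs(recip_roots) >= 1.0 - tol))

print(ar_is_nonstationary([0.5]))         # stationary AR(1)      -> False
print(ar_is_nonstationary([1.0]))         # random walk           -> True
print(ar_is_nonstationary([1.2, -0.2]))   # AR(2) with unit root  -> True
```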

I will run a small simulation study just to check that the code works fine. I am going to use the same parameters as before, but run just 10 simulations. The only difference in the code is that now we need to pass the order of the model to the em_algorithm function.

After running the simulations we check the results to confirm that the estimates are close to real values of the parameters.

Median values of the estimated parameters

On the screenshot above we can see that everything seems to work fine.


Now we are able to fit a model given its order, but how do we know which order to select? In the paper the authors propose using the Bayesian Information Criterion (also known as the Schwarz Information Criterion). It is calculated as follows.

Bayesian Information Criterion

Note that the formula above is different from the one presented in the paper. The last term involves the number of parameters in the model. In the paper the authors start by generating three cointegrated time series and then use OLS to estimate the coefficients used to construct the spread, so they have three more parameters to estimate. I start by directly generating the spread (a time series following the LMAR model), so I don’t need to estimate the cointegration coefficients, and the total number of parameters is just the sum of the model dimensions m1+m2+n.
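
Putting this together, the criterion being minimized here is, I believe, of the form

$$\mathrm{BIC} = -2\,Q + (m_1 + m_2 + n)\,\log T,$$

where Q is the maximized expected log-likelihood discussed below and T is the sample size.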

Now we need to compute the maximized log-likelihood Q. It is explained in the paper and I described how to do it in the previous article, but I’m going to repeat it here briefly.

We have a mixture model, so we don’t really know which regime a given data point comes from. If we had that information (if we had complete data), the log-likelihood would be computed as follows:

Log-likelihood (complete data)
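
For a two-regime mixture with Gaussian errors this is the standard complete-data expression, which I would write as

$$\log L_c = \sum_t \left[ Z_t \left( \log p_t + \log\phi\!\left(\frac{w_t - \mu_{1,t}}{\sigma_1}\right) - \log\sigma_1 \right) + (1 - Z_t)\left( \log(1 - p_t) + \log\phi\!\left(\frac{w_t - \mu_{2,t}}{\sigma_2}\right) - \log\sigma_2 \right) \right],$$

where $\mu_{k,t}$ is the conditional AR mean of regime k at time t, $p_t$ is the logistic mixing weight of regime 1, $\phi$ is the standard normal density, and the sum runs over the time steps that are actually modelled.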

The variable Z above is an indicator of the current regime. These variables are not known, but we can replace them with their expected values, tau.

Expected values of variables Z
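
Conditional on the observed data and the current parameter estimates, these expected values are the usual posterior regime probabilities,

$$\tau_t = E\left[Z_t \mid w, \hat\theta\right] = \frac{p_t\, f_1(w_t)}{p_t\, f_1(w_t) + (1 - p_t)\, f_2(w_t)},$$

where $f_k$ denotes the conditional density of regime k.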

If we plug in tau in place of Z above, we get the expected log-likelihood function Q. The EM algorithm works by maximizing that function, and we have everything we need to calculate it. I’m going to provide the code for the whole em_algorithm function again, but the only thing that changes is at the end (lines 132–137), where we calculate Q and return it along with the estimated parameters.
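
The calculation itself amounts to a few lines. Below is a sketch, assuming the E-step has already produced the conditional means, the mixing weights and tau for every time step (all names are mine, not the ones used in the notebook).

```python
import numpy as np
from scipy.stats import norm

def expected_loglik(w, mu1, mu2, sigma1, sigma2, p, tau):
    """Expected complete-data log-likelihood Q for the two-regime mixture.

    w        : observed series (only the time steps actually modelled)
    mu1, mu2 : conditional AR means of regimes 1 and 2 at each time step
    sigma1, sigma2 : regime noise standard deviations
    p        : logistic mixing weight of regime 1 at each time step
    tau      : posterior probability of regime 1 at each time step (E-step)
    """
    ll1 = np.log(p) + norm.logpdf(w, loc=mu1, scale=sigma1)
    ll2 = np.log(1.0 - p) + norm.logpdf(w, loc=mu2, scale=sigma2)
    return float(np.sum(tau * ll1 + (1.0 - tau) * ll2))
```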

With that function we can perform simulation studies of model selection. These simulations are even more time-consuming than the previous ones, so I’m not going to perform as many simulations as in the paper, and I will only use two possible values for each of the dimensions (m1 = 1, 2; m2 = 1, 2; n = 1, 2). This means that in each simulation we fit 8 different models and select the one with the smallest BIC. Then we will see in what percentage of the simulations the model dimensions are identified correctly.

Code for running such simulations is shown below.
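
A rough sketch of one such run is below; as before, generate_lmar_sample and em_algorithm are placeholders for the functions defined in the notebook, and the BIC is computed as described above.

```python
from itertools import product

import numpy as np
import pandas as pd

def select_order(w, candidate_orders=(1, 2)):
    """Fit all candidate LMAR(m1, m2; n) models to w and pick the smallest BIC."""
    best_bic, best_order = np.inf, None
    for m1, m2, n in product(candidate_orders, repeat=3):
        params, Q = em_algorithm(w, order=(m1, m2, n))   # hypothetical signature
        bic = -2.0 * Q + (m1 + m2 + n) * np.log(len(w))
        if bic < best_bic:
            best_bic, best_order = bic, (m1, m2, n)
    return best_order

rows = []
for i in range(50):                   # 50 simulations
    w = generate_lmar_sample(1000)    # hypothetical generator
    rows.append(select_order(w))

selection_results = pd.DataFrame(rows, columns=['m1', 'm2', 'n'])
```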

The first several lines of the resulting dataframe are shown on the screenshot below.

Now let’s see in what percentage of the simulations each individual dimension was identified correctly and in what percentage of the simulations all of the dimensions were identified correctly.

Model selection simulation results

78% of the simulations identified parameter m1 correctly. 48% identified parameter m2 correctly. 40% identified parameter n correctly. And only 22% of the simulations were able to correctly identify all the dimensions.

These numbers are lower than the ones reported in the paper, but that is probably just a statistical fluke, since I didn’t run enough simulations (only 50 versus 300 in the paper). We also need to keep in mind that, when applied to real-world data, no model will be an ideal fit; we just need to identify one that is good enough.

Now that we have a working implementation of the EM algorithm, we can try to implement a trading strategy based on the LMAR model, but I’ll leave that for another article.


A Jupyter notebook with the source code is available here.

If you have any questions, suggestions or corrections please post them in the comments. Thanks for reading.


References

[1] Cheng, X., Yu, P.L.H. and Li, W.K. (2011). Basket trading under co-integration with the logistic mixture autoregressive model.

[2] Wong, C.S. and Li, W.K. (2001). On a logistic mixture autoregressive model.

[3] https://en.wikipedia.org/wiki/Identifiability

[4] https://en.wikipedia.org/wiki/Bayesian_information_criterion
