
Introduction to partial autoregressive (PAR) model

This article is based on the paper ‘Modeling Time Series with both Permanent and Transient Components using the Partially Autoregressive Model’ (Clegg, 2015). I am going to describe the model from that paper, show how to implement the estimation techniques it proposes in Python, and try to replicate some of its studies using both synthetic and real data.

UPDATE 21.02.2023: I found a typo in one of the functions for estimating parameters using lagged variances (lagvar_estimate_par). After fixing it, I had to rewrite the procedure for fitting the PAR model, and the results I get now are slightly different from what I got before. I have updated the article and the corresponding Jupyter notebook.


The partial autoregressive (PAR) model is designed to describe time series that have both a permanent and a transient component. It can be described with the following equations:

$$X_t = M_t + R_t \tag{1}$$
$$M_t = \rho M_{t-1} + \varepsilon_{M,t}, \qquad \varepsilon_{M,t} \sim N(0, \sigma_M^2)$$
$$R_t = R_{t-1} + \varepsilon_{R,t}, \qquad \varepsilon_{R,t} \sim N(0, \sigma_R^2)$$
$$-1 < \rho < 1$$

The time series X_t consists of two components: a random walk component R_t (permanent) and a mean-reverting component M_t (transient), where rho is the coefficient of mean reversion.

Authors claim that:

the model presented in equation (1) is perhaps the simplest possible model having both a mean-reverting and a random walk component. This makes it the natural starting point for a study of such time series.

Let’s try to generate and plot a sample from the PAR model to see how it looks. Code for it is shown below.

First we set the model parameters (rho, sigma_M, sigma_R) and the number of datapoints we want to generate. Then we generate each component separately, according to the model specification presented above. After that we simply sum the two components to get a sample from the PAR model. To generate the mean-reverting and random walk components I’m using a function from the statsmodels library. Documentation for it is available here.
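Here is a sketch of that code. I’m using arma_generate_sample from statsmodels for both components; the parameter values are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.arima_process import arma_generate_sample

# model parameters (illustrative values)
rho = 0.9      # coefficient of mean reversion
sigma_M = 1    # st. dev. of innovations of the mean-reverting component
sigma_R = 1    # st. dev. of innovations of the random walk component
n = 1000       # number of datapoints to generate

np.random.seed(1)

# mean-reverting component: AR(1) process with coefficient rho
M = arma_generate_sample(ar=[1, -rho], ma=[1], nsample=n, scale=sigma_M)
# random walk component: AR(1) process with a unit root
R = arma_generate_sample(ar=[1, -1], ma=[1], nsample=n, scale=sigma_R)
# PAR sample is the sum of the two components
X = M + R

fig, axs = plt.subplots(3, 1, figsize=(10, 8), sharex=True)
for ax, series, title in zip(axs, [M, R, X],
                             ['Mean-reverting component',
                              'Random walk component',
                              'PAR sample']):
    ax.plot(series)
    ax.set_title(title)
plt.show()
```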

Plots of PAR sample

Above you can see the plots of both components and of the resulting time series. Note that the bottom plot closely resembles the random walk component, with some additional noise from the mean-reverting component.

We can check that the empirical parameters calculated from the sample are close to the real ones.
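Since we simulated the components ourselves, one way to perform this check (a sketch) is to back out the innovations directly from M and R:

```python
# innovations of the mean-reverting component: eps_M = M_t - rho*M_{t-1}
eps_M = M[1:] - rho * M[:-1]
# innovations of the random walk component: eps_R = R_t - R_{t-1}
eps_R = np.diff(R)

print('empirical sigma_M:', eps_M.std())  # should be close to sigma_M
print('empirical sigma_R:', eps_R.std())  # should be close to sigma_R
```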

Empirical standard deviations sigma_M and sigma_R

Another interesting metric we can calculate is the proportion of variance attributable to mean reversion. Formula for it is provided below.

$$R^2_{MR} = \frac{\operatorname{Var}[(1-B)M_t]}{\operatorname{Var}[(1-B)X_t]} = \frac{2\sigma_M^2/(1+\rho)}{2\sigma_M^2/(1+\rho) + \sigma_R^2}$$

where B denotes the backshift operator.

The results of this calculation are shown on the screenshot below.

Calculating R_squared

I am going to implement two functions based on the code we used so far: one to generate a PAR sample and another to calculate R squared.
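A sketch of the two functions (the names generate_par_sample and par_rsq are my own):

```python
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample


def generate_par_sample(rho, sigma_M, sigma_R, n):
    """Generate a sample of length n from the PAR model."""
    M = arma_generate_sample(ar=[1, -rho], ma=[1], nsample=n, scale=sigma_M)
    R = arma_generate_sample(ar=[1, -1], ma=[1], nsample=n, scale=sigma_R)
    return M + R


def par_rsq(rho, sigma_M, sigma_R):
    """Proportion of variance attributable to mean reversion."""
    var_dM = 2 * sigma_M**2 / (1 + rho)  # Var[(1-B)M]
    return var_dM / (var_dM + sigma_R**2)
```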


Now we can try estimating the model parameters. The first method proposed by the authors uses lagged variances. It doesn’t work very well for small sample sizes, but it’s a good starting point. The formulas and code for calculating the model parameters are provided below.

Let $v_k = \operatorname{Var}[X_t - X_{t-k}]$ denote the variance of the k-lagged differences. The model parameters can then be recovered from the sample estimates $\hat{v}_1, \hat{v}_2, \hat{v}_3$:

$$\hat\rho = -\frac{\hat{v}_1 - 2\hat{v}_2 + \hat{v}_3}{2\hat{v}_1 - \hat{v}_2}$$
$$\hat\sigma_M^2 = \frac{1}{2}\cdot\frac{\hat\rho + 1}{\hat\rho - 1}\cdot\frac{\hat{v}_1 - 2\hat{v}_2 + \hat{v}_3}{\hat\rho}$$
$$\hat\sigma_R^2 = \frac{1}{2}\,\hat{v}_2 - \hat\sigma_M^2$$
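In code, this might look as follows (a sketch; X is a long sample generated with the function above, since the method needs a large sample):

```python
X = generate_par_sample(rho=0.9, sigma_M=1, sigma_R=1, n=100_000)

# sample variances of the 1-, 2- and 3-lagged differences
v1 = np.var(X[1:] - X[:-1])
v2 = np.var(X[2:] - X[:-2])
v3 = np.var(X[3:] - X[:-3])

rho_hat = -(v1 - 2*v2 + v3) / (2*v1 - v2)
sigma_M_hat = np.sqrt(0.5 * (rho_hat + 1) / (rho_hat - 1)
                      * (v1 - 2*v2 + v3) / rho_hat)
sigma_R_hat = np.sqrt(0.5 * v2 - sigma_M_hat**2)

print(rho_hat, sigma_M_hat, sigma_R_hat)
```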
Estimation results

As you can see above, with large enough sample size we are able to get estimates that are pretty close to the true values of the parameters.

Let’s create a function for estimating the parameters of the model using lagged variances. Note that when calculating the variances we need to check that the value under the square root is non-negative, otherwise we will get an error.
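A sketch of lagvar_estimate_par with these checks in place:

```python
def lagvar_estimate_par(X):
    """Estimate (rho, sigma_M, sigma_R) of the PAR model from lagged variances."""
    X = np.asarray(X)
    v1 = np.var(X[1:] - X[:-1])
    v2 = np.var(X[2:] - X[:-2])
    v3 = np.var(X[3:] - X[:-3])

    rho = -(v1 - 2*v2 + v3) / (2*v1 - v2)

    # guard the square roots: in small samples the moment estimates
    # of the variances can come out negative
    sigma2_M = 0.5 * (rho + 1) / (rho - 1) * (v1 - 2*v2 + v3) / rho
    sigma_M = np.sqrt(sigma2_M) if sigma2_M > 0 else 0.0

    sigma2_R = 0.5 * v2 - sigma_M**2
    sigma_R = np.sqrt(sigma2_R) if sigma2_R > 0 else 0.0

    return rho, sigma_M, sigma_R
```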


Another estimation technique proposed in the paper is based on maximum likelihood estimation (MLE) using a Kalman filter. We reformulate the PAR model in state-space form and design a Kalman filter for it. Then we maximize the corresponding log-likelihood function with respect to the parameters of the model (rho, sigma_M, sigma_R). You can learn more about state space models and Kalman filters in my other article.

Let’s try to implement this procedure step by step. First we need to write a function for Kalman filtering. Code for the function in the R language is provided in the paper; I’m just rewriting it in Python with some minor changes. Note that in addition to the mean-reverting and random walk components (M and R), my function also returns the prediction errors eps, because they are needed for the log-likelihood function.
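Below is a sketch of such a filter. Instead of the closed-form steady-state gains used in the paper’s R listing, it runs the standard covariance recursion, which converges to the same steady state; as mentioned, it also returns the prediction errors.

```python
def kalman_filter_par(X, rho, sigma_M, sigma_R):
    """Kalman filter for the PAR model in state-space form.

    Returns filtered estimates of M and R and the one-step
    prediction errors eps.
    """
    X = np.asarray(X)
    n = len(X)
    F = np.array([[rho, 0.0],
                  [0.0, 1.0]])              # state transition matrix
    Q = np.diag([sigma_M**2, sigma_R**2])   # state noise covariance
    H = np.array([[1.0, 1.0]])              # observation: X = M + R (no noise)

    M = np.zeros(n)
    R = np.zeros(n)
    eps = np.zeros(n)

    state = np.array([0.0, X[0]])           # initialize M_0 = 0, R_0 = X_0
    R[0] = X[0]
    P = Q.copy()

    for t in range(1, n):
        # prediction step
        state_pred = F @ state
        P_pred = F @ P @ F.T + Q
        # one-step prediction error and its variance
        eps[t] = X[t] - (H @ state_pred)[0]
        S = (H @ P_pred @ H.T)[0, 0]
        # Kalman gain and update step
        K = (P_pred @ H.T / S).ravel()
        state = state_pred + K * eps[t]
        P = P_pred - np.outer(K, H @ P_pred)
        M[t], R[t] = state

    return M, R, eps
```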

When the parameters of the PAR model are known, we can apply the Kalman filter to the data and obtain estimates of the components and the prediction errors. Having calculated the prediction errors, we can calculate the log-likelihood as follows.

$$\ell(\rho, \sigma_M, \sigma_R) = -\frac{n}{2}\ln\!\left(2\pi\hat\sigma_\varepsilon^2\right) - \frac{1}{2\hat\sigma_\varepsilon^2}\sum_{t}\varepsilon_t^2, \qquad \hat\sigma_\varepsilon^2 = \frac{1}{n}\sum_{t}\varepsilon_t^2$$

i.e. the Gaussian log-likelihood of the one-step prediction errors.

Initially the parameters of the model are not known to us. We use the lagged variances method to calculate initial estimates. Then we use a numerical optimization method (quasi-Newton) to obtain the parameter values that maximize the log-likelihood function. Let’s see how this is implemented in code.

We need to implement a function to calculate the log-likelihood. Code for it is shown below.
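A sketch, treating the one-step prediction errors as Gaussian with variance estimated from the errors themselves (matching the formula above):

```python
def par_loglikelihood(X, rho, sigma_M, sigma_R):
    """Log-likelihood of the PAR model based on Kalman filter prediction errors."""
    _, _, eps = kalman_filter_par(X, rho, sigma_M, sigma_R)
    eps = eps[1:]                  # the first error is an artifact of initialization
    n = len(eps)
    sigma2_eps = np.mean(eps**2)   # ML estimate of the prediction error variance
    return -0.5 * n * np.log(2 * np.pi * sigma2_eps) - 0.5 * n
```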

I am going to use the scipy library to perform the numerical optimization. The objective function that we are going to minimize should follow a certain template (see here), so let’s create a function that we can pass to scipy minimize.

The first argument of the objective function contains the parameters with respect to which the function is minimized. The second argument contains our data. We want to maximize the likelihood function, which is the same as minimizing the negative likelihood function.
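A sketch of such an objective function:

```python
from scipy.optimize import minimize


def neg_loglik_par(params, X):
    """Objective for scipy minimize: negative log-likelihood of the PAR model."""
    rho, sigma_M, sigma_R = params
    return -par_loglikelihood(X, rho, sigma_M, sigma_R)
```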

Let’s see how this estimation technique works from the beginning. First we generate a sample with given parameters, as shown in the code below. Here we don’t need a very big sample, so I’m generating just 250 datapoints.
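For example (the true parameter values here are illustrative):

```python
rho_true, sigma_M_true, sigma_R_true = 0.5, 1.0, 1.0
np.random.seed(7)
X = generate_par_sample(rho_true, sigma_M_true, sigma_R_true, n=250)
```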

Then we use the lagged variances method to get initial estimates of the parameters. The results are shown in the screenshot below.
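In code:

```python
rho0, sigma_M0, sigma_R0 = lagvar_estimate_par(X)
print('initial estimates:', rho0, sigma_M0, sigma_R0)
```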

Initial estimates

As you can see above, the estimates are not very accurate. Let’s see if we can get better estimates with maximum likelihood estimation. Code for performing MLE is shown below along with the results. The numbers we get are closer to the true values than our initial estimates.
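A sketch of the MLE step, using BFGS (a quasi-Newton method):

```python
res = minimize(neg_loglik_par,
               x0=[rho0, sigma_M0, sigma_R0],
               args=(X,),
               method='BFGS')   # quasi-Newton method

rho_mle, sigma_M_mle, sigma_R_mle = res.x
# the sigmas enter the likelihood only through their squares,
# so we report their absolute values
print('MLE estimates:', rho_mle, abs(sigma_M_mle), abs(sigma_R_mle))
```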

MLE results

Now we are ready to create a function implementing this estimation technique and perform some tests using synthetic data.


There are two special cases of the PAR model that we need to keep in mind during implementation. The first is the pure random walk model, which occurs when sigma_M=0. The second is the pure mean-reverting model, which occurs when sigma_R=0. We can still use the same estimation techniques; we just need to explicitly set the respective parameter to zero. The code for parameter estimation in these special cases is shown below.
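A sketch of these special-case estimators (the function names are my own):

```python
def lagvar_estimate_rw(X):
    """Pure random walk (sigma_M = 0): sigma_R is the st. dev. of the differences."""
    X = np.asarray(X)
    sigma_R = np.std(np.diff(X))
    return 0.0, 0.0, sigma_R


def lagvar_estimate_mr(X):
    """Pure mean reversion (sigma_R = 0): X itself is an AR(1) process."""
    X = np.asarray(X)
    rho = np.corrcoef(X[1:], X[:-1])[0, 1]   # lag-1 autocorrelation
    sigma_M = np.std(X[1:] - rho * X[:-1])   # st. dev. of AR(1) residuals
    return rho, sigma_M, 0.0
```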

We also need to create different objective functions to use in numerical optimization. Code is provided below.
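A sketch of the corresponding objective functions, with the respective parameter fixed at zero:

```python
def neg_loglik_rw(params, X):
    """Objective for the pure random walk model: only sigma_R is free."""
    (sigma_R,) = params
    return -par_loglikelihood(X, 0.0, 0.0, sigma_R)


def neg_loglik_mr(params, X):
    """Objective for the pure mean reversion model: sigma_R is fixed at 0."""
    rho, sigma_M = params
    return -par_loglikelihood(X, rho, sigma_M, 0.0)
```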

Sometimes using the estimates calculated from lagged variances as initial guesses leads to suboptimal results, so the fitting function below also restarts the optimization from randomly perturbed initial values.

Now we are ready to write a function for MLE estimation.
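Here is a sketch of such a function (I call it fit_par here); it dispatches to the appropriate initial estimates and objective function depending on the mode:

```python
def fit_par(X, mode='par', n_attempts=10):
    """Fit the PAR model by MLE. mode: 'par', 'rw' (sigma_M=0) or 'mr' (sigma_R=0).

    Returns (rho, sigma_M, sigma_R, loglik) of the best attempt.
    """
    X = np.asarray(X)
    if mode == 'par':
        x0, fun = np.array(lagvar_estimate_par(X)), neg_loglik_par
    elif mode == 'rw':
        x0, fun = np.array([lagvar_estimate_rw(X)[2]]), neg_loglik_rw
    else:
        rho0, sigma_M0, _ = lagvar_estimate_mr(X)
        x0, fun = np.array([rho0, sigma_M0]), neg_loglik_mr

    best = None
    for i in range(n_attempts):
        # the first attempt starts from the lagged-variance estimates,
        # the rest from randomly perturbed versions of them
        start = x0.copy() if i == 0 else x0 + np.random.normal(0, 0.5, len(x0))
        if mode in ('par', 'mr'):
            start[0] = np.clip(start[0], -0.99, 0.99)  # keep rho inside (-1, 1)
        res = minimize(fun, start, args=(X,), method='BFGS')
        if best is None or res.fun < best.fun:
            best = res

    loglik = -best.fun
    if mode == 'par':
        rho, sigma_M, sigma_R = best.x
    elif mode == 'rw':
        rho, sigma_M, sigma_R = 0.0, 0.0, best.x[0]
    else:
        rho, sigma_M = best.x
        sigma_R = 0.0
    return rho, abs(sigma_M), abs(sigma_R), loglik
```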

Note that I repeat the minimization procedure 10 times and return the parameters that give the largest log-likelihood. The reason for doing this is that a numerical minimization procedure only guarantees finding a local minimum. To increase our chances of finding the global minimum, we repeat the procedure several times with different initial values and return the best estimates out of all attempts.


Let’s perform some tests of the function to make sure it works as expected. Code for testing the function and plotting the results is shown below. The tqdm library imported at the beginning is used to show a progress bar that lets us monitor the progress of the test.
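A sketch of such a test (the number of trials and the parameter values are illustrative):

```python
from tqdm import tqdm
import matplotlib.pyplot as plt

rho_true, sigma_M_true, sigma_R_true = 0.5, 1.0, 1.0

estimates = []
for _ in tqdm(range(200)):   # progress bar over the simulation trials
    X = generate_par_sample(rho_true, sigma_M_true, sigma_R_true, n=250)
    rho, sigma_M, sigma_R, _ = fit_par(X)
    estimates.append((rho, sigma_M, sigma_R))
estimates = np.array(estimates)

# histogram of the estimates of each parameter, true value in red
fig, axs = plt.subplots(1, 3, figsize=(12, 3))
for ax, vals, true, name in zip(axs, estimates.T,
                                [rho_true, sigma_M_true, sigma_R_true],
                                ['rho', 'sigma_M', 'sigma_R']):
    ax.hist(vals, bins=25)
    ax.axvline(true, color='red')
    ax.set_title(name)
plt.show()
```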

MLE tests

As you can see above, most estimates are concentrated around the true values of the parameters, which means that our estimation function works as expected. I also tested it in the other modes (assuming either sigma_M or sigma_R equals zero) and it works as expected. The results are available in the Jupyter notebook (link below).


I have failed to replicate the simulation studies with the likelihood ratio tests. For some reason my critical values are too different from the ones provided in the paper. I believe it has something to do with the particular implementation of the numerical optimization procedure. In some cases it might not converge to the true global minimum, resulting in outliers that shift the quantiles in the wrong direction. Even in the paper, the quantiles from the simulation are different from the ones predicted by Wilks’ theorem. I will leave the code for the simulation studies in the Jupyter notebook, but I won’t post it here.

Instead of relying on the likelihood ratio test, I’m going to use the Akaike Information Criterion (AIC) to determine which model better describes the data. Some statisticians even suggest that this approach is better than using statistical tests (e.g. here).

I will try to fit the PAR model to real world data. I am going to use the prices of some small cap stocks from 2002 to 2014. As in the paper, I will divide the dataset into six non-overlapping periods of 2 years each. For each time series, three different models will be tested: PAR, pure mean reversion (AR(1) with |rho|<1) and pure random walk. Out of the three models, the one with the smallest AIC will be selected. For the time series identified as PAR, the average value of the mean reversion coefficient rho and the average proportion of variance attributable to mean reversion will be calculated.

First let’s try to work with simple price series. Code for doing it is provided below.
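A sketch of the selection loop. Here prices is assumed to be a pandas DataFrame of daily prices with a DatetimeIndex and tickers in columns; the period boundaries are placeholders.

```python
import pandas as pd


def best_model_aic(x):
    """Fit PAR, pure mean reversion and pure random walk models to x
    and return the label of the model with the smallest AIC."""
    aic = {}
    for mode, k in [('par', 3), ('mr', 2), ('rw', 1)]:  # k = number of free parameters
        *_, loglik = fit_par(x, mode=mode)
        aic[mode] = 2 * k - 2 * loglik                  # AIC = 2k - 2*loglik
    return min(aic, key=aic.get)


# classify every stock within one 2-year period
period = prices.loc['2002':'2003']
labels = {ticker: best_model_aic(period[ticker].dropna().values)
          for ticker in period.columns}
```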

Fitting models to prices

As you can see on the screenshot above, approximately 30–40% of the time series are found to be PAR. The average percentage of variance attributable to mean reversion is less than 50% in all periods except one, where it is only slightly above 50%. Let’s test the persistence of the PAR property: given that a time series is PAR in the current period, how likely is it to be PAR in the next period? I am going to construct a table similar to table 5 in the paper.
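A sketch of how such a table could be built. Here labels_by_period is assumed to be a list with one {ticker: label} dict per 2-year period, produced by the loop above.

```python
def par_persistence(labels_by_period):
    """P(PAR in next period | PAR / not PAR in the current period)."""
    rows = []
    for prev, curr in zip(labels_by_period[:-1], labels_by_period[1:]):
        common = set(prev) & set(curr)   # tickers present in both periods
        was_par = [t for t in common if prev[t] == 'par']
        not_par = [t for t in common if prev[t] != 'par']
        rows.append((np.mean([curr[t] == 'par' for t in was_par]),
                     np.mean([curr[t] == 'par' for t in not_par])))
    return pd.DataFrame(rows, columns=['P(PAR | PAR)', 'P(PAR | not PAR)'])
```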

Persistence tests (normal prices)

As you can see in the last two columns above, a time series is about 1.5 times as likely to be PAR in the current period if it was also found to be PAR in the previous period. The results in the paper look similar.


Now let’s perform similar tests, but with log prices. The code is exactly the same; we just apply the log function to the prices before trying to fit a model. Results are provided below.

Fitting models to log prices

Here approximately the same proportion of time series is identified as PAR. The average proportion of variance attributable to mean reversion is also similar — less than 50% for each time period. What about persistence?

Persistence tests (log prices)

The results of the persistence tests also look very similar to those for the normal price series. It seems that the PAR model works equally well for normal price series and for log price series. However, using this model in practice doesn’t seem feasible, because the proportion of variance attributable to mean reversion is too small (less than 50%). To use it for trading we would need to further isolate the mean-reverting component (e.g. using hedging), but that is a topic for another article.


In this article I explained the basic idea behind the partial autoregressive model and implemented functions for estimating the model parameters. We also performed several tests of the estimation procedure to make sure that everything works as expected.

Even though the model might not be very useful in its current form, it serves as a building block for a partial cointegration model that I will describe in the next article.


Jupyter notebook with source code is available here.

If you have any questions, suggestions or corrections please post them in the comments. Thanks for reading.


References

[1] Modeling Time Series with Both Permanent and Transient Components Using the Partially Autoregressive Model

[2] Why I don’t like statistical tests

[3] AIC versus Likelihood Ratio Test in Model Variable Selection

[4] https://github.com/matthewclegg/partialAR
