Lagrange multiplier test of residual serial correlation
Note that both the ARCH and White tests outlined below can be seen as Breusch-Pagan-Godfrey type tests, since both are auxiliary regressions of the squared residuals on a set of regressors and a constant.
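As a hedged illustration of this shared structure, the following Python sketch (statsmodels rather than EViews; the simulated data and variable names are mine) runs a Breusch-Pagan-Godfrey-type auxiliary regression of the squared residuals on the original regressors and a constant and reports the LM and F forms of the test.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 5, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 * x)   # error s.d. grows with x

X = sm.add_constant(x)                # regressors including a constant
res = sm.OLS(y, X).fit()

# Auxiliary regression of the squared residuals on X (constant plus x).
lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(res.resid, X)
print(f"LM = {lm_stat:.3f} (p = {lm_pval:.4f}), F = {f_stat:.3f} (p = {f_pval:.4f})")
```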
The Harvey test for heteroskedasticity is similar to the Breusch-Pagan-Godfrey test. However, Harvey tests a null hypothesis of no heteroskedasticity against heteroskedasticity of the form $\sigma_t^2 = \exp(z_t'\gamma)$, where, again, $z_t$ is a vector of independent variables. To test for this form of heteroskedasticity, an auxiliary regression of the log of the original equation's squared residuals on $(1, z_t)$ is performed. The LM statistic is then the explained sum of squares from the auxiliary regression divided by $\psi'(1/2) = \pi^2/2$, the trigamma function (the second derivative of the log gamma function) evaluated at 1/2. This statistic is distributed as a $\chi^2$ with degrees of freedom equal to the number of variables in $z$.
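A minimal sketch of this auxiliary regression, assuming Python with statsmodels and scipy rather than EViews; the simulated data and variable names below are mine, not part of the original text. The log of the squared OLS residuals is regressed on a constant and $z$, and the explained sum of squares is divided by $\psi'(1/2) = \pi^2/2 \approx 4.93$.

```python
import numpy as np
import statsmodels.api as sm
from scipy.special import polygamma
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 300
z = rng.normal(size=n)
# Multiplicative heteroskedasticity: var(e_t) = exp(0.2 + 0.8 * z_t)
e = rng.normal(scale=np.exp(0.5 * (0.2 + 0.8 * z)))
y = 1.0 + 2.0 * z + e

X = sm.add_constant(z)
resid = sm.OLS(y, X).fit().resid

# Harvey auxiliary regression: log(e^2) on a constant and z.
aux = sm.OLS(np.log(resid**2), X).fit()
lm = aux.ess / polygamma(1, 0.5)   # ESS divided by trigamma(1/2) = pi^2/2
df = X.shape[1] - 1                # number of variables in z
print(f"Harvey LM = {lm:.3f}, p-value = {chi2.sf(lm, df):.4f}")
```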
The Glejser test is also similar to the Breusch-Pagan-Godfrey test. This test tests against an alternative hypothesis of heteroskedasticity of the form $\sigma_t^2 = (\sigma^2 + z_t'\gamma)^m$ with $m = 1, 2$. The auxiliary regression that Glejser proposes regresses the absolute value of the residuals from the original equation upon $(1, z_t)$. An LM statistic can be formed by dividing the explained sum of squares from this auxiliary regression by $(1 - 2/\pi)\hat{\sigma}^2$. As with the previous tests, this statistic is distributed as a chi-squared distribution with degrees of freedom equal to the number of variables in $z$.
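Under the same assumptions (Python/statsmodels, simulated data, my own names), a sketch of the Glejser variant: regress $|e|$ on a constant and $z$, then divide the explained sum of squares by $(1 - 2/\pi)\hat{\sigma}^2$.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(2)
n = 300
z = rng.uniform(1, 4, n)
e = rng.normal(scale=0.3 + 0.5 * z)   # error s.d. linear in z
y = 1.0 + 2.0 * z + e

X = sm.add_constant(z)
resid = sm.OLS(y, X).fit().resid

# Glejser auxiliary regression: |e| on a constant and z.
aux = sm.OLS(np.abs(resid), X).fit()
sigma2 = np.mean(resid**2)            # ML estimate of the error variance
lm = aux.ess / ((1.0 - 2.0 / np.pi) * sigma2)
df = X.shape[1] - 1
print(f"Glejser LM = {lm:.3f}, p-value = {chi2.sf(lm, df):.4f}")
```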
The ARCH LM test is a Lagrange multiplier test for autoregressive conditional heteroskedasticity (ARCH) in the residuals (Engle, 1982). This particular heteroskedasticity specification was motivated by the observation that in many financial time series, the magnitude of residuals appeared to be related to the magnitude of recent residuals. To test the null hypothesis that there is no ARCH up to order $q$ in the residuals, we run the regression
$$ e_t^2 = \beta_0 + \sum_{s=1}^{q} \beta_s e_{t-s}^2 + v_t , $$
where $e$ is the residual. This is a regression of the squared residuals on a constant and lagged squared residuals up to order $q$.
The F-statistic is an omitted variable test for the joint significance of all lagged squared residuals. The exact finite sample distribution of the F-statistic under $H_0$ is not known, but the LM test statistic is asymptotically distributed as a $\chi^2(q)$ under quite general conditions.
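A hedged sketch of this test using statsmodels' het_arch, which runs the auxiliary regression of the squared residuals on a constant and their own lags; the simulated ARCH(1) series and the choice of four lags are illustrative only.

```python
import numpy as np
from statsmodels.stats.diagnostic import het_arch

rng = np.random.default_rng(3)
n = 500
# Simulated residuals with ARCH(1) effects: var_t = 0.2 + 0.6 * e_{t-1}^2
e = np.zeros(n)
for t in range(1, n):
    e[t] = rng.normal(scale=np.sqrt(0.2 + 0.6 * e[t - 1] ** 2))

# LM and F statistics from the regression of e_t^2 on a constant and q lags of e^2.
lm_stat, lm_pval, f_stat, f_pval = het_arch(e, nlags=4)
print(f"ARCH LM = {lm_stat:.3f} (p = {lm_pval:.4f}), F = {f_stat:.3f} (p = {f_pval:.4f})")
```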
White's test is a test of the null hypothesis of no heteroskedasticity against heteroskedasticity of unknown, general form. The test statistic is computed by an auxiliary regression, where we regress the squared residuals on all possible nonredundant cross products of the regressors. For example, suppose we estimated the following regression:
$$ y_t = b_1 + b_2 x_t + b_3 z_t + e_t . $$
The test statistic is then based on the auxiliary regression:
$$ e_t^2 = \alpha_0 + \alpha_1 x_t + \alpha_2 z_t + \alpha_3 x_t^2 + \alpha_4 z_t^2 + \alpha_5 x_t z_t + v_t . $$
Prior to EViews 6, White tests always included the level values of the regressors (i.e., the cross product of the regressors and a constant), whether or not the original regression included a constant term.
This is no longer the case—level values are only included if the original regression included a constant. EViews reports three test statistics from the test regression.
The F-statistic is a redundant variable test for the joint significance of all cross products, excluding the constant. It is presented for comparison purposes. The Obs*R-squared statistic is White's test statistic, computed as the number of observations times the R-squared from the auxiliary regression; it is asymptotically distributed as a $\chi^2$ with degrees of freedom equal to the number of slope coefficients (excluding the constant) in the auxiliary regression. The third statistic, an LM statistic, is the explained sum of squares from the auxiliary regression divided by $2\hat{\sigma}^4$. This, too, is distributed as a chi-squared distribution with degrees of freedom equal to the number of slope coefficients (excluding the constant) in the auxiliary regression.
White also describes this approach as a general test for model misspecification, since the null hypothesis underlying the test assumes that the errors are both homoskedastic and independent of the regressors, and that the linear specification of the model is correct. Failure of any one of these conditions could lead to a significant test statistic. Conversely, a non-significant test statistic implies that none of the three conditions is violated.
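To make the auxiliary regression concrete for the two-regressor example above, here is a sketch (Python, simulated data, variable names of my own choosing) that builds the nonredundant cross products by hand and reports both the Obs*R-squared form and the ESS-based LM form of the statistic.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(4)
n = 400
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 1.0 + 0.5 * x - 0.3 * z + rng.normal(scale=np.exp(0.4 * x), size=n)

X = sm.add_constant(np.column_stack([x, z]))
resid = sm.OLS(y, X).fit().resid

# White auxiliary regression: e^2 on a constant, levels, squares and the cross product.
W = sm.add_constant(np.column_stack([x, z, x**2, z**2, x * z]))
aux = sm.OLS(resid**2, W).fit()

df = W.shape[1] - 1                # slope coefficients, excluding the constant
obs_r2 = aux.nobs * aux.rsquared   # Obs*R-squared form of the statistic
sigma2 = np.mean(resid**2)
lm = aux.ess / (2.0 * sigma2**2)   # explained sum of squares divided by 2*sigma_hat^4
print(f"Obs*R^2 = {obs_r2:.3f} (p = {chi2.sf(obs_r2, df):.4f})")
print(f"LM      = {lm:.3f} (p = {chi2.sf(lm, df):.4f})")
```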
When there are redundant cross-products, EViews automatically drops them from the test regression. For example, the square of a dummy variable is the dummy variable itself, so EViews drops the squared term to avoid perfect collinearity. Selecting the heteroskedasticity tests view for the equation brings up the test dialog. You may choose which type of test to perform by clicking on the name in the Test type box. The remainder of the dialog will change, allowing you to specify various options for the selected test.
The BPG, Harvey and Glejser tests allow you to specify which variables to use in the auxiliary regression. Note that you may choose to add all of the variables used in the original equation by pressing the Add equation regressors button.
If the original equation was non-linear, this button will add the coefficient gradients from that equation. Individual gradients can be added by using the grad keyword to add the i-th gradient. The White test lets you choose whether to include cross terms or no cross terms using the Include cross terms checkbox.
The cross terms version of the test is the original version of White's test that includes all of the cross product terms. However, the number of cross-product terms increases with the square of the number of right-hand side variables in the regression; with large numbers of regressors, it may not be practical to include all of these terms. The no cross terms specification runs the test regression using only squares of the regressors.

Re: LM test for serial correlation

I am new to programming in EViews.
My intention was to conduct the LM test for lag orders from 1 to 6, so I use a for loop. In the attached code, NAs could not be used with the lag order, so I generate 6 residual series with a zero presample for each lag. The following works with the serial correlation test, and I have since discovered the following testing methods.

In statistics, the Breusch-Godfrey test, named after Trevor S. Breusch and Leslie G. Godfrey, [1] [2] is used to assess the validity of some of the modelling assumptions inherent in applying regression-like models to observed data series.
In particular, it tests for the presence of serial correlation that has not been included in a proposed model structure and which, if present, would mean that incorrect conclusions would be drawn from other tests, or that sub-optimal estimates of model parameters are obtained if it is not taken into account.
The regression models to which the test can be applied include cases where lagged values of the dependent variables are used as independent variables in the model's representation for later observations. This type of structure is common in econometric models. Because the test is based on the idea of Lagrange multiplier testing, it is sometimes referred to as an LM test for serial correlation.
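In the spirit of the forum question above (running the LM test for lag orders 1 through 6 in a loop), here is a hedged Python sketch using statsmodels' acorr_breusch_godfrey in place of the EViews commands; the regression and data are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(5)
n = 300
x = rng.normal(size=n)
# Errors with AR(1) serial correlation, so the test has something to detect.
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.5 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

res = sm.OLS(y, sm.add_constant(x)).fit()

# Breusch-Godfrey LM test for serial correlation, lag orders 1 through 6.
for p in range(1, 7):
    lm, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=p)
    print(f"lags={p}: LM={lm:7.3f} (p={lm_pval:.4f})  F={f_stat:7.3f} (p={f_pval:.4f})")
```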
The Durbin-Watson (DW) statistic is a test for first-order serial correlation in the residuals. As a rule of thumb, with 50 or more observations and only a few independent variables, a DW statistic below about 1.5 is a strong indication of positive first-order serial correlation. See Johnston and DiNardo (1997), Chapter 6. There are three main limitations of the DW test as a test for serial correlation.
First, the distribution of the DW statistic under the null hypothesis depends on the data matrix. The usual approach to handling this problem is to place bounds on the critical region, creating a region where the test results are inconclusive. Second, if there are lagged dependent variables on the right-hand side of the regression, the DW test is no longer valid.
Lastly, you may only test the null hypothesis of no serial correlation against the alternative hypothesis of first-order serial correlation. Two other tests of serial correlation—the Q-statistic and the Breusch-Godfrey LM test—overcome these limitations, and are preferred in most applications. If there is no serial correlation in the residuals, the autocorrelations and partial autocorrelations at all lags should be nearly zero, and all Q-statistics should be insignificant with large p-values.
Note that the p-values of the Q-statistics will be computed with the degrees of freedom adjusted for the inclusion of ARMA terms in your regression. There is evidence that some care should be taken in interpreting the results of a Ljung-Box test applied to the residuals from an ARMAX specification (see Dezhbakhsh, 1990, for simulation evidence on the finite sample performance of the test in this setting).
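Finally, a sketch of the Durbin-Watson statistic and the Ljung-Box Q-statistics discussed above, again in Python/statsmodels on simulated residuals rather than EViews output; the model_df argument is where a degrees-of-freedom adjustment for fitted ARMA terms would be supplied (zero here, since no ARMA model was estimated).

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(6)
n = 250
# AR(1) residuals: positively serially correlated, so DW should fall well below 2.
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal()

print(f"Durbin-Watson: {durbin_watson(e):.3f}")   # values near 2 indicate no first-order correlation

# Ljung-Box Q-statistics up to lag 10; recent statsmodels returns a DataFrame
# with lb_stat and lb_pvalue columns. Set model_df=p+q for ARMA(p, q) residuals.
print(acorr_ljungbox(e, lags=10, model_df=0))
```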