Applied Stochastic Models in Business and Industry. Trend estimation of financial time series, page 3

The minimization problem can be written in matrix notation as

                              M(s) = (Y − s)′(Y − s) + λ(K1N s)′(K1N s)                              (4)

with Y = (Y1, ..., YN)′, s = (s1, ..., sN)′ and K1N the (N −1)×N matrix given by

                                  ⎛ −1   1   0   0  ...   0   0   0 ⎞
                         K1N =    ⎜  0  −1   1   0  ...   0   0   0 ⎟                                (5)
                                  ⎜  ⋮    ⋮   ⋮   ⋮   ⋱    ⋮   ⋮   ⋮ ⎟
                                  ⎝  0   0   0   0  ...   0  −1   1 ⎠

Therefore, the minimum of M is obtained by calculating the derivative of M(s) with respect to s, equating it to zero (evaluated at s = sˆ) and solving the resulting equation. By doing so, we get

                                       sˆ = (IN + λK1N′K1N)⁻¹ Y                                      (6)

where (IN + λK1N′K1N)⁻¹ is a symmetric positive definite matrix. As the second derivative of M(s) evaluated at s = sˆ can be shown to be a symmetric and positive definite matrix, we know this estimator minimizes M(s). It should be noticed that expression (6) is well known in the filtering and data graduation fields (e.g. Hodrick and Prescott [3]) as well as in the penalized splines literature (see Ruppert et al. [12]). The PLS approach has the advantage of showing explicitly the roles played by λ, F and S. However, to obtain sˆ with (6) it is necessary to invert an N ×N matrix, which may cause instability and lack of precision in the numerical solution when N is large. To avoid this problem it is preferable to use the Kalman filter with smoothing.
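As a concrete illustration, expression (6) can be evaluated directly for a short series. The following NumPy sketch (the function and variable names are illustrative, not from the paper) builds K1N and solves the linear system instead of forming the matrix inverse explicitly:

```python
import numpy as np

def es_trend(y, lam):
    """Trend estimate of expression (6): s_hat = (I_N + lam * K'K)^{-1} Y,
    with K the (N-1) x N first-difference matrix of expression (5).
    Illustrative sketch; 'es_trend' and 'lam' are not names from the paper.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    # Row t of K has -1 in column t and +1 in column t+1.
    K = np.diff(np.eye(n), axis=0)
    A = np.eye(n) + lam * K.T @ K
    # Solving the system is cheaper and numerically safer than np.linalg.inv.
    return np.linalg.solve(A, y)
```

With lam = 0 the estimate reproduces the data exactly, while for very large lam the trend approaches the sample mean, since first differences are penalized ever more heavily.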

Kalman filtering requires the formulation of a state-space model that uses a state equation and a measurement equation, given, respectively, by

                       Xt = At Xt−1 + Wt      and      Yt = ct Xt + εt      for t =1,..., N                            (7)

with {Wt} and {εt} two independent zero-mean white noise processes, that is, sequences of serially uncorrelated and identically distributed random errors. For the ES filter we have

                       Xt = st,      At = 1,      Wt = ηt      and      ct = 1      for t =1,..., N                          (8)

with Var(ηt) = σ²η and Var(εt) = σ²ε. Thus, the state and measurement equations for the ES filter are

                                st = st−1 + ηt      and      Yt = st + εt                                                    (9)

Besides, λ is given by the variance ratio λ = σ²ε/σ²η. Thus, to equate the results of the Kalman filter to those obtained with (6) we assume σ²η = 1 and σ²ε = λ.
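A minimal sketch of that equivalence, assuming the diffuse initial level is approximated by a large prior variance (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def kalman_smooth_local_level(y, lam):
    """Kalman filter plus RTS smoother for the ES state-space form (8)-(9):
    s_t = s_{t-1} + eta_t,  Y_t = s_t + eps_t,  Var(eta) = 1, Var(eps) = lam.
    The diffuse initial condition is approximated by a large prior variance.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    q, r = 1.0, lam                      # state / measurement noise variances
    a = np.zeros(n); p = np.zeros(n)     # filtered means and variances
    ap = np.zeros(n); pp = np.zeros(n)   # one-step-ahead predictions
    m, v = 0.0, 1e8                      # (approximately) diffuse prior
    for t in range(n):
        ap[t], pp[t] = m, v + q                    # prediction step (A_t = 1)
        k = pp[t] / (pp[t] + r)                    # Kalman gain (c_t = 1)
        m = ap[t] + k * (y[t] - ap[t])             # update with observation Y_t
        v = (1.0 - k) * pp[t]
        a[t], p[t] = m, v
    s = a.copy()                                   # RTS backward smoothing pass
    for t in range(n - 2, -1, -1):
        g = p[t] / pp[t + 1]
        s[t] = a[t] + g * (s[t + 1] - ap[t + 1])
    return s
```

For a linear Gaussian model the smoothed mean coincides with the posterior mode, so the output agrees with the PLS solution of (6) up to the diffuse approximation, while requiring only O(N) operations.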

Another equivalent method is the WK filter, which arises by assuming that expressions (9) are true. As shown by King and Rebelo [8] the formula for this filter can be obtained from the first-order condition of the minimization problem, that is,

                       Yt = st + λ(2st − st−1 − st+1)      for t =2,..., N −1                     (10)
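Together with the two boundary conditions at t = 1 and t = N, these first-order conditions form a tridiagonal linear system, so the finite-sample trend can be computed in O(N) operations without forming an N ×N matrix. A sketch using the Thomas algorithm (the implementation and names are illustrative, not from the paper):

```python
import numpy as np

def es_trend_tridiag(y, lam):
    """Solve the first-order conditions as a tridiagonal system in O(N).
    Interior rows are (-lam, 1 + 2*lam, -lam); the boundary rows have
    diagonal 1 + lam, matching (I_N + lam * K'K) for the matrix K of (5).
    Illustrative sketch, not code from the paper.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    d = np.full(n, 1.0 + 2.0 * lam)      # main diagonal
    d[0] = d[-1] = 1.0 + lam             # boundary rows
    off = -lam                           # constant sub/super-diagonal
    # Forward elimination (Thomas algorithm).
    c = np.zeros(n); g = np.zeros(n)
    c[0] = off / d[0]; g[0] = y[0] / d[0]
    for t in range(1, n):
        denom = d[t] - off * c[t - 1]
        c[t] = off / denom
        g[t] = (y[t] - off * g[t - 1]) / denom
    # Back substitution.
    s = np.zeros(n); s[-1] = g[-1]
    for t in range(n - 2, -1, -1):
        s[t] = g[t] - c[t] * s[t + 1]
    return s
```

The result matches a dense solve of (6) to machine precision while touching each observation only twice.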

By solving this equation we obtain the symmetric WK filter

                                sˆt = [1 + λ(1 − B)(1 − B⁻¹)]⁻¹ Yt                                           (11)

This formula is best justified by the result that follows from Assumption A in Bell [13], which states that the initial values of the process are independent of the differenced signal and noise. As with the Kalman filter, the estimated trend for the WK filter is obtained by assuming σ²η = 1 and σ²ε = λ, so that sˆt = [1 + λ(1 − B)(1 − B⁻¹)]⁻¹ Yt, where B⁻¹ is such that B⁻¹Xt = Xt+1 for every variable X and index t. This filter produces an estimator of st with minimum mean-square error (MSE) if a complete realization (from t =−∞ to t =∞) of the series {Yt} is available.
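The weights of this doubly-infinite symmetric filter can be inspected numerically: substituting B = e^(−iω) gives the frequency response 1/(1 + 2λ(1 − cos ω)), whose inverse Fourier transform yields the two-sided weights. A sketch (the grid size and function name are illustrative assumptions):

```python
import numpy as np

def wk_weights(lam, m, n_grid=4096):
    """Two-sided weights of the symmetric WK filter for lags -m..m,
    recovered from the frequency response 1 / (1 + 2*lam*(1 - cos w)).
    Illustrative computation; n_grid controls the (tiny) aliasing error.
    """
    w = 2.0 * np.pi * np.fft.fftfreq(n_grid)
    gain = 1.0 / (1.0 + 2.0 * lam * (1.0 - np.cos(w)))
    h = np.real(np.fft.ifft(gain))          # h[k] is the weight at lag k
    # Reorder to lags -m, ..., -1, 0, 1, ..., m.
    return np.concatenate([h[-m:], h[:m + 1]])
```

The weights are symmetric around lag zero, sum to one, and decay geometrically, which is consistent with the observation below that extending the series by only a few forecasts and backcasts reproduces the infinite filter well.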

Otherwise, we could extend the observed series with a few backcasts and forecasts, as Kaiser and Maravall [14] did. For the HP filter, they found that only four backcasts and forecasts are required to reproduce the effect of the infinite filter. It is possible to combine the forecasting formula with the WK formula to obtain an exact finite-sample filter; another approach is to use the matrix formulas provided by McElroy [15].