Although the ANN model is expected to perform better in-sample, since it nests the linear model, there is no guarantee that it will dominate the linear model out-of-sample (Donaldson and Kamstra, 1996).
The method of estimation adopted is on-line error backpropagation, which mimics a learning behavior. In this method, the weights of the signals are updated after the presentation of each combination of input and output variables. Using the first set of observations, at the initial stage the method does a forward and backward pass through the network, computes initial values for the weights, and determines the value of the error function $E = \sum_k (t_k - y_k)^2$, where $t_k$ are the target (i.e. observed) values of the output variable fed into the network and $y_k$ are the estimated values.4 At the next stage, it uses the second set of observations, does a forward and backward pass, recomputes the weights, and redetermines the value of the error function $E$, and so on. The learning algorithm converges, and the process stops, when the value of the error function falls below a predetermined convergence criterion.5
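To make the updating scheme concrete, the sketch below implements on-line backpropagation for a one-hidden-layer MLP with the squared-error criterion $E = \sum_k (t_k - y_k)^2$. It is a minimal illustration under stated assumptions, not the paper's estimation code: the logistic hidden activations, linear output, learning rate `eta`, and network sizes are choices made for the example.

```python
import numpy as np

def train_online_backprop(X, t, n_hidden=3, eta=0.01, tol=1e-4,
                          max_epochs=10_000, seed=0):
    """On-line error backpropagation for a one-hidden-layer MLP.

    Weights are updated after each (input, target) pair; training stops
    once the sum-of-squared-errors E over a full pass through the data
    falls below the convergence criterion `tol`. Hidden units use
    logistic activations and the output is linear (assumptions for
    this sketch).
    """
    rng = np.random.default_rng(seed)
    n_obs, n_in = X.shape
    W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))  # input-to-hidden weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=n_hidden)          # hidden-to-output weights
    b2 = 0.0

    for _ in range(max_epochs):
        E = 0.0
        for x_k, t_k in zip(X, t):
            # Forward pass
            h = 1.0 / (1.0 + np.exp(-(W1 @ x_k + b1)))  # logistic hidden units
            y_k = W2 @ h + b2                            # linear output
            err = t_k - y_k
            E += err ** 2
            # Backward pass: gradient of (t_k - y_k)^2 w.r.t. each weight
            dW2 = -2.0 * err * h
            db2 = -2.0 * err
            dh = -2.0 * err * W2 * h * (1.0 - h)         # back through the logistic
            dW1 = np.outer(dh, x_k)
            # On-line update: weights change after every single pattern
            W2 -= eta * dW2
            b2 -= eta * db2
            W1 -= eta * dW1
            b1 -= eta * dh
        if E < tol:  # convergence criterion met
            break
    return (W1, b1, W2, b2), E
```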
The MLPs are trained over a training period (i.e. training sample). To improve the in-sample fitting performance of the MLP, the estimated set of weights is used as a set of initial values for further training, while stricter convergence criteria are set for the learning algorithm to meet. However, as this approach may cause overfitting (i.e. exceptionally good in-sample performance but poor out-of-sample performance), we adopt the following cross-validation strategy in training.6 The ANN model is initially trained on the subset of the in-sample data from January 1980 to December 1993. We then use the estimated model to generate output (‘forecasted’ values) for the remaining portion of the in-sample data, i.e. January 1994–December 1994. This output is compared to the original values of the output variable (the validation sample) by computing the root mean squared error (RMSE). We repeat this procedure, setting stricter convergence criteria each time. As the procedure is repeated, the RMSE declines over successive trainings. At some point, however, the RMSE reaches a minimum and then starts increasing, which indicates that overfitting may be setting in. Training is therefore stopped at the nth training if the RMSE of the (n+1)th training is higher than the RMSE of the nth training. On the basis of the estimated weights from the nth training over the training period, out-of-sample forecasts are generated for the subsequent ‘test’ period. A sketch of this stopping rule follows.
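The sketch below illustrates the validation-based stopping rule just described. The functions `train_fn` and `predict_fn` are hypothetical placeholders standing in for the backpropagation routine and the network's forward pass; only the stop-when-validation-RMSE-rises logic follows the text.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between observed and forecasted values."""
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def train_with_validation_stopping(train_fn, predict_fn,
                                   X_train, t_train, X_val, t_val,
                                   tolerances):
    """Retrain with successively stricter convergence criteria, stopping
    at the nth round once the (n+1)th round's validation RMSE exceeds
    the nth round's (taken as the onset of overfitting).

    `train_fn(X, t, tol, init_weights)` and `predict_fn(weights, X)`
    are placeholders for the estimation routine; each round reuses the
    previous round's weights as starting values, as in the text.
    """
    best_weights, best_rmse = None, np.inf
    weights = None
    for tol in tolerances:                  # stricter criterion each round
        weights = train_fn(X_train, t_train, tol, init_weights=weights)
        val_rmse = rmse(t_val, predict_fn(weights, X_val))
        if val_rmse > best_rmse:            # RMSE turned upward: stop
            break
        best_weights, best_rmse = weights, val_rmse
    return best_weights, best_rmse
```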
The empirical results from training the ANN model given in Equation (1) over the training period (i.e. January 1980 to December 1994) are given in Table 2. The table reports the estimated coefficients $a_j$, $b_i$, and $c_{j,i}$ for both the DJ and the FT, along with the RMSE of the generated output for the in-sample validation period, namely January 1994–December 1994. Further training of the ANN model would result in a higher RMSE, thereby indicating overfitting. The reported coefficients were used to generate the out-of-sample forecasts from the neural network model.
A competitor to the nonlinear ANN model is the linear model for stock returns and fundamentals given by Equation (2):
$$y_t = \beta_0 + \beta_1 Z_{1,t-1} + \beta_2 Z_{2,t-1} + w_t \qquad (2)$$
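For completeness, the linear benchmark in Equation (2) can be estimated by, for example, ordinary least squares. The snippet below is a minimal sketch under that assumption; the argument names for the lagged fundamentals are illustrative.

```python
import numpy as np

def fit_linear_benchmark(y, Z1_lag, Z2_lag):
    """OLS estimates for y_t = beta0 + beta1*Z1_{t-1} + beta2*Z2_{t-1} + w_t (Eq. 2)."""
    X = np.column_stack([np.ones(len(y)), Z1_lag, Z2_lag])  # constant + lagged regressors
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # (beta0, beta1, beta2)

def forecast_linear(beta, Z1_t, Z2_t):
    """One-step-ahead forecast of y_{t+1} from the current fundamentals."""
    return beta[0] + beta[1] * Z1_t + beta[2] * Z2_t
```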
Table 2. ANN model estimation, period: January 1980–December 1994