MAPE is determined for each of the subseries using the same forecast horizons that were used in the M-competition (6, 8 and 18 periods for yearly, quarterly and monthly time series, respectively). The accuracy measure is calculated for each of the nine forecast methods available for all 180 series. The forecasting method resulting in the minimum MAPE among the nine methods is
Table III. Training statistics for stage I network (NN1)

| Statistic | Value |
|---|---|
| Number of data sets | 180 |
| Training tolerance | 0.5 |
| Number of good classificationsᵃ achieved | 153 |
| Percentage of good classifications achieved | 85% |
| Learning rate used | 0.9 (starting), 0.02 (ending) |
| Number of hidden neurons | 23 |
ᵃ A classification is considered good if the output neuron representing the correct category has a value ≥ 0.5 and the other output neurons have values ≤ 0.5.
Table IV. Testing results for stage I network (NN1)

| Stage I testing (NN1) | Testing set 1 | Testing set 2 |
|---|---|---|
| Size of data set | 86 | 86 |
| Number of good classifications | 59 | 62 |
| Percentage of good classifications achieved | 68.6% | 72.1% |
| Testing tolerance | 0.5 | 0.5 |
identified and labelled as one of the three forecasting groups shown in Table II. The training results are shown in Table III. Table IV shows the results of testing the network with two samples of 86 series each. The two test sets were chosen randomly from the main set of 1001 series, excluding the 180 series used for training. Based on a testing tolerance of 0.5, the network gives an average testing accuracy of 70%.
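The labelling step described above can be sketched in a few lines: compute the MAPE of each candidate method over the forecast horizon and tag the series with the method that minimises it. This is an illustrative sketch under our own naming; the function names, the toy data, and the two-method dictionary are ours, not the paper's.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error over the forecast horizon."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def best_method(actual, forecasts_by_method):
    """Label a series with the method whose forecasts minimise MAPE."""
    scores = {m: mape(actual, f) for m, f in forecasts_by_method.items()}
    return min(scores, key=scores.get)

# Toy illustration over a 6-period horizon with two hypothetical methods:
actual = [100, 110, 120, 130, 140, 150]
label = best_method(actual, {
    "HOL": [102, 111, 119, 131, 138, 152],   # close to the actuals
    "WIN": [90, 100, 140, 120, 160, 130],    # farther off
})
# label == "HOL"
```

In the paper this comparison runs over all nine methods for each of the 180 training series; the sketch above shows only the selection rule.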
Figure 4 shows the architecture of the stage II networks. While the input neurons are the same as in the stage I network, the number of hidden neurons in these networks is reduced to 11 (based on optimal performance). The output neurons of each network represent the three specific forecasting methods of the corresponding group from stage I. To select the training data set for the NN2 network, a subset of the original 1001 series is first formed from those series whose best forecasting method (based on the MAPE criterion) is known to be HOL, WIN, or BRT. From this subset, a random sample of 150 series is chosen for training and three samples of 20 series each are selected for testing, ensuring that no series used for training appears in any test set. The same procedure is used to select the training and testing data sets for the NN3 and NN4 networks.
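The sampling procedure for NN2 can be sketched as a simple disjoint split: shuffle the eligible series, take 150 for training, and carve three non-overlapping test sets of 20 from the remainder. The function name, arguments, and fixed seed below are our own assumptions for illustration.

```python
import random

def split_series(series_ids, n_train=150, n_test_sets=3, test_size=20, seed=0):
    """Draw a random training sample and disjoint test samples.

    Mirrors the NN2 selection described in the text: 150 series for
    training and three test sets of 20 series each, with no overlap
    between the training and test series.
    """
    rng = random.Random(seed)
    pool = list(series_ids)
    rng.shuffle(pool)
    train = pool[:n_train]
    rest = pool[n_train:]
    tests = [rest[i * test_size:(i + 1) * test_size]
             for i in range(n_test_sets)]
    return train, tests

# Hypothetical pool of 400 eligible series identifiers:
train, tests = split_series(range(400))
# len(train) == 150; each test set holds 20 series, none of them in train
```

The same split logic would be reused for NN3 and NN4, each time starting from the subset of series whose best method belongs to that network's group.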
Table V shows the training results for the stage II networks. The average training accuracy (at the 0.5 tolerance level) is about 73%. The task given to the neural networks is classification (selecting a forecasting group) rather than prediction. For each data set presented during training and testing, the target is a value of 1 for the output neuron representing the correct group and a value of 0 for each of the other two neurons. To measure the test accuracy of the neural network, a test tolerance of 0.5 is used. Accordingly, during testing, the network determines that a classification is good when the output neuron
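The tolerance rule just described can be written as a one-line check: with targets of 1 for the correct neuron and 0 for the others, a tolerance of 0.5 means the correct neuron must read at least 0.5 and every other neuron at most 0.5. The function below is our own sketch of that rule, not code from the paper.

```python
def is_good_classification(outputs, correct_idx, tol=0.5):
    """True when the output neuron for the correct group is within `tol`
    of its target 1 (i.e. >= 0.5) and every other neuron is within `tol`
    of its target 0 (i.e. <= 0.5)."""
    return all(
        out >= 1.0 - tol if i == correct_idx else out <= tol
        for i, out in enumerate(outputs)
    )

is_good_classification([0.8, 0.3, 0.1], correct_idx=0)  # True
is_good_classification([0.4, 0.6, 0.2], correct_idx=0)  # False
```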
Table V. Training results for stage II networks (NN2, NN3, and NN4)