(a) 1: REAL; 2: NORM; 3: UNIF; 4: LOG; 5: LAP.
(b) The mean difference is significant at the 0.05 level.
Table XVII. Pairwise comparisons for the ‘technique’ factor

TECHNIQUE(a)      Mean difference   SE      Sig.    95% confidence interval for difference
I     J           (I − J)                           Lower bound       Upper bound
1     2           0.499(b)          0.014   0.000   0.471             0.527
2     1           −0.499(b)         0.014   0.000   −0.527            −0.471

(a) 1: GA; 2: RT.
(b) The mean difference is significant at the 0.05 level.
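The confidence bounds in Table XVII follow directly from the reported mean difference and its standard error. As a quick check, the following minimal sketch reproduces the interval using the large-sample normal quantile; SPSS-style output uses the exact t quantile for the error degrees of freedom, which shifts the bounds by about 0.001:

# Reproduce the 95% confidence interval in Table XVII from the
# reported mean difference and standard error (normal approximation).
from scipy.stats import norm

mean_diff = 0.499  # GA minus RT mean performance (I - J)
se = 0.014         # standard error of the difference

z = norm.ppf(0.975)         # ~1.96 for a two-sided 95% interval
lower = mean_diff - z * se  # ~0.472, vs. the tabulated 0.471
upper = mean_diff + z * se  # ~0.526, vs. the tabulated 0.527
print(f"95% CI: [{lower:.3f}, {upper:.3f}]")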
was to refine them. We find no evidence for the fourth and fifth hypotheses: all crossover operators (H4) and all RT mechanisms (H5) achieve comparable results.
For the pairwise comparisons on testing performance we obtain a similar result: all mean differences are statistically significant. Only the ranking of the best performers changes slightly: ‘maximum of absolute values’, ‘normalization’ and ‘no preprocessing’ for the ‘preprocessing’ factor, and uniform, normal, real, Laplace and logistic for the ‘distribution’ factor. Once again our first hypothesis is confirmed, with GAs performing better on the test data as well.
We investigated the influence of three factors and their combinations on the prediction performance of ANN classification models. The three factors are the preprocessing method (none, division by the maximum absolute values and normalization), the data distribution (the real data and the uniform, normal, logistic and Laplace distributions) and the training mechanism (a gradient-descent-like mechanism improved by an RT procedure and a natural-evolution-based mechanism known as a GA).
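To make the first two factors concrete, the following minimal sketch illustrates the two non-trivial preprocessing options and the sampling of the four synthetic distributions; the min-max form of ‘normalization’ and the function names are illustrative assumptions, not exact reproductions of the experimental procedures:

# Illustrative implementations of the two non-trivial preprocessing
# options and of sampling from the four synthetic distributions.
import numpy as np

def divide_by_abs_max(x: np.ndarray) -> np.ndarray:
    # Scale each column by its maximum absolute value.
    return x / np.abs(x).max(axis=0)

def normalize(x: np.ndarray) -> np.ndarray:
    # Min-max scaling to [0, 1]; one common reading of 'normalization'.
    mins, maxs = x.min(axis=0), x.max(axis=0)
    return (x - mins) / (maxs - mins)

rng = np.random.default_rng(seed=1)
n = 1000  # illustrative sample size
synthetic = {
    'uniform':  rng.uniform(-1.0, 1.0, size=n),
    'normal':   rng.normal(0.0, 1.0, size=n),
    'logistic': rng.logistic(0.0, 1.0, size=n),
    'laplace':  rng.laplace(0.0, 1.0, size=n),
}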
Few studies have examined the individual influence of the preprocessing method and the data distribution on the prediction performance of ANNs. Koskivaara (2000) investigated the impact of four preprocessing techniques on the forecasting capability of ANNs. Other studies (Pendharkar, 2002; Pendharkar and Rodger, 2004) investigated the combined influence of further factors, such as distribution kurtosis, variance heterogeneity, network size and noise in the inputs and weights, on ANN classification performance. After examining Alander (1995), we could find no report in the literature analysing the influence of the data distribution, the preprocessing method, the training mechanism and their combinations on the classification performance of ANNs. In this study we are concerned with the choice of factor–factor pairs when the third factor is fixed, e.g. which preprocessing method–training mechanism combination is the most suitable given that the distribution of the data is known.
As shown in Section 4, this study takes a different perspective from other studies that use GAs to train neural networks. A major difference from related studies (Schaffer et al., 1992; Sexton and Gupta, 2000; Sexton and Sikander, 2001; Pendharkar, 2002; Pendharkar and Rodger, 2004) is that both ANN training mechanisms are used to refine the initial solution (the ANN set of weights). Rather than being generated randomly, the initial solution is obtained while determining the ANN architecture, which is then kept fixed during refinement for both training mechanisms. An empirical procedure for determining a suitable ANN architecture is introduced. Problem complexity (the number of variables and output neurons) is another difference from related studies, which usually consider the two-class discrimination problem. In our prediction models the number of financial performance classes is set to seven; this parameter can easily be changed to simulate the binary classification problem, allowing precise and detailed comparisons with related studies. A final distinction concerns the types of test used to validate the hypotheses: we rely on non-parametric tests to validate the individual influence of the factors, and we additionally performed a three-way ANOVA to validate the main hypothesis without violating its assumptions.
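As an outline of this two-stage validation, the following minimal sketch assumes Kruskal-Wallis as the non-parametric test and uses illustrative column names (‘performance’, ‘preprocessing’, ‘distribution’, ‘technique’); these choices are assumptions for illustration rather than the exact tests and labels of the experiments:

# Two-stage validation: a non-parametric test per factor, followed by
# a three-way ANOVA over the full factorial design.
import pandas as pd
from scipy.stats import kruskal
import statsmodels.api as sm
import statsmodels.formula.api as smf

def validate(df: pd.DataFrame) -> None:
    # Stage 1: individual factor influence, no distributional assumptions.
    for factor in ('preprocessing', 'distribution', 'technique'):
        groups = [g['performance'].to_numpy() for _, g in df.groupby(factor)]
        h, p = kruskal(*groups)
        print(f'Kruskal-Wallis for {factor}: H = {h:.3f}, p = {p:.4f}')

    # Stage 2: three-way ANOVA with all interactions, run only after
    # checking that the ANOVA assumptions hold.
    model = smf.ols(
        'performance ~ C(preprocessing) * C(distribution) * C(technique)',
        data=df,
    ).fit()
    print(sm.stats.anova_lm(model, typ=2))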