5. The weights are updated by the following rule: wi,t = wi,t−1 + η(∂E/∂wi,t−1) + α(wi,t−1 − wi,t−2), where η is the learning rate and α is the momentum coefficient. The learning rate scales the size of each weight adjustment in the direction of the error gradient. Thus, a high learning rate may speed up convergence, but it may also lead to over-correction and failure to converge; by contrast, a low learning rate may prolong convergence. The momentum coefficient, α, determines how much of the previous update is carried over to the current one: the larger the momentum value, the greater the influence of the last update. The final values of the weights, once convergence is achieved, may be sensitive to the choice of the learning rate and momentum parameters. In this study, the learning rate was set to 0.5 and the momentum to 0.8.
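As a minimal sketch, the update rule in note 5 can be written as follows. The function name and the toy quadratic error surface are illustrative assumptions, not part of the original study; the gradient term is entered with a negative sign so that the rule descends the error surface, which is the usual convention (the note's notation leaves the sign of ∂E/∂w implicit).

```python
def momentum_update(w, w_prev, grad, lr=0.5, momentum=0.8):
    """One weight update with a momentum term, following note 5:
    w_t = w_{t-1} - lr * dE/dw + momentum * (w_{t-1} - w_{t-2}).
    lr=0.5 and momentum=0.8 match the values used in the study.
    """
    return w - lr * grad + momentum * (w - w_prev)


# Illustrative use: minimize the toy error E(w) = (w - 2)^2,
# whose gradient is dE/dw = 2 * (w - 2).
w_prev, w = 5.0, 5.0
for _ in range(50):
    grad = 2.0 * (w - 2.0)
    w, w_prev = momentum_update(w, w_prev, grad), w
print(round(w, 4))  # converges toward the minimum at w = 2
```

A larger momentum value lets past updates keep pushing the weight in the same direction, which can smooth oscillations but, as the note observes, makes the converged weights sensitive to the parameter choice.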
6. For a complete discussion of this model selection procedure and its optimality properties, see Kavalieris (1989).
7. The inclusion of the same explanatory variables in the linear model does not by itself entail nesting of the linear model by the nonlinear model. The nonlinear ANN model will nest the linear one only if all three of the following conditions are met. First, the explanatory variables are the same. Second, in Equation (1) the function f(·) is not the identity function. Third, in Equation (1), bi = 0.
8. The forecast encompassing test has an easily derivable distribution when applied to out-of-sample data, but not when applied to in-sample data (Donaldson and Kamstra, 1997). Therefore, we present results only for the out-of-sample forecast encompassing tests.
9. Using different specifications, Kanas and Yannopoulos (1999) reached similar conclusions.
Adya M, Collopy F. 1998. How effective are neural networks at forecasting and prediction? A review and evaluation. Journal of Forecasting 17: 481–495.
Brock WA. 1993. Pathways to randomness in the economy: Emergent nonlinearity and chaos in economics and finance. Working Paper, University of Wisconsin, Madison.
Campbell JY, Lo AW, MacKinlay AC. 1998. The Econometrics of Financial Markets. Princeton University Press: Princeton, NJ.
Campbell JY, Grossman SJ, Wang J. 1993. Trading volume and serial correlation in stock returns. Quarterly Journal of Economics November: 905–939.
Clemen RT. 1989. Combining forecasts: A review and annotated bibliography. International Journal of Forecasting 5: 671–690.
Clements MP, Hendry DF. 1993. On the limitations of comparing mean square forecast errors. Journal of Forecasting 5: 559–583.
Clements MP, Hendry DF. 1998. Forecasting Economic Time Series. Cambridge University Press: Cambridge, UK.
Donaldson RG, Kamstra M. 1996. Forecast combining with Neural Networks. Journal of Forecasting 15: 49–61.
Donaldson RG, Kamstra M. 1997. An artificial neural network—GARCH model for international stock return volatility. Journal of Empirical Finance 4: 17–46.
Froot KA, Obstfeld M. 1991. Intrinsic bubbles: the case of stock prices. American Economic Review 81: 831–842.
Granger CWJ. 1989. Invited review: combining forecasts—twenty years later. Journal of Forecasting 8: 167–173.
Granger CWJ, Newbold P. 1986. Forecasting Economic Time Series (2nd edn). Academic Press: Orlando, FL.
Kanas A, Yannopoulos A. 1999. Comparing linear and nonlinear forecasts of stock returns. Working Paper, University of Crete.
Kavalieris L. 1989. The estimation of the order of an autoregression using recursive residuals and crossvalidation. Journal of Time Series Analysis 10: 178–271.
Pesaran MH, Timmermann A. 1992. A simple nonparametric test of predictive performance. Journal of Business and Economic Statistics 10: 461–465.
Summers LH. 1986. Does the stock market rationally reflect fundamental values? Journal of Finance XLI(3): 591–603.
Swanson N, White H. 1995. A model selection approach to assessing the information in the term structure using linear models and artificial neural networks. Journal of Business and Economic Statistics 13: 265–275.
van Norden S, Schaller H. 1994. Fads or bubbles? Working Paper, Bank of Canada.
White H. 1992. Artificial Neural Networks: Approximation and Learning Theory. Blackwell Publishers: Cambridge.