Abstract
A two-stage algorithm combining the advantages of an adaptive genetic algorithm and a modified Newton method is developed for the effective training of feedforward neural networks. The genetic algorithm, with adaptive reproduction, crossover, and mutation operators, searches for the initial weights and biases of the neural network, while the modified Newton method, similar to the BFGS algorithm, improves network training performance. Benchmark tests show that the two-stage algorithm outperforms several conventional methods, including steepest descent, steepest descent with an adaptive learning rate, conjugate gradient, and Newton-based methods, and is well suited to small networks in engineering applications. In addition to numerical simulation, the effectiveness of the two-stage algorithm is validated by experiments on system identification and vibration suppression.
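The abstract does not give implementation details, so the following is only a minimal sketch of the two-stage idea, not the authors' algorithm: a toy 1-4-1 network, a truncation-selection genetic algorithm whose mutation scale shrinks over generations (an assumed form of adaptation; the paper adapts reproduction, crossover, and mutation), and SciPy's standard BFGS as a stand-in for the paper's modified Newton method.

```python
# Hedged sketch of a two-stage trainer: GA search for initial weights,
# then quasi-Newton (BFGS) refinement. All details here are illustrative
# assumptions, not the algorithm from the paper.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Tiny 1-4-1 feedforward network; all weights and biases packed into one vector.
N_IN, N_HID, N_OUT = 1, 4, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

def unpack(w):
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)      # hidden layer
    return h @ W2 + b2            # linear output

def mse(w, X, y):
    return float(np.mean((forward(w, X) - y) ** 2))

# Toy regression problem: approximate y = sin(x).
X = np.linspace(-np.pi, np.pi, 60).reshape(-1, 1)
y = np.sin(X)

# Stage 1: genetic search for a good starting point.
POP, GENS = 40, 60
pop = rng.normal(scale=1.0, size=(POP, DIM))
for gen in range(GENS):
    fit = np.array([mse(ind, X, y) for ind in pop])
    parents = pop[np.argsort(fit)[:POP // 2]]       # truncation selection
    # Assumed adaptation: mutation scale decays over the generations.
    sigma = 0.5 * (1.0 - gen / GENS) + 0.05
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(DIM) < 0.5                # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(scale=sigma, size=DIM))
    pop = np.vstack([parents, children])

best0 = pop[np.argmin([mse(ind, X, y) for ind in pop])]

# Stage 2: quasi-Newton refinement starting from the GA's best individual.
res = minimize(mse, best0, args=(X, y), method="BFGS")
print(f"GA best MSE: {mse(best0, X, y):.4e} -> after BFGS: {res.fun:.4e}")
```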
| Original language | English |
| --- | --- |
| Pages (from-to) | 12189-12194 |
| Number of pages | 6 |
| Journal | Expert Systems With Applications |
| Volume | 38 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 2011 Sept 15 |
All Science Journal Classification (ASJC) codes
- General Engineering
- Computer Science Applications
- Artificial Intelligence