On the convergence speed of artificial neural networks in solving linear systems
Subject area: International Journal of Industrial Mathematics
1 - Department of Mathematics, Urmia Branch, Islamic Azad University, Urmia, Iran.
Keywords: System of linear equations, Quasi-Newton method, Steepest descent method, Cost function, Learning algorithm
Abstract:
Artificial neural networks offer advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. This paper examines how different learning methods affect the convergence speed of neural networks. To this end, we first introduce a perceptron model based on artificial neural networks for solving a non-singular system of linear equations. Two well-known learning techniques, the steepest descent and quasi-Newton methods, are then employed to adjust the connection weights of the neural net. The main aim of this study is to compare the ability and efficiency of these techniques with respect to the convergence speed of the proposed neural net. Finally, we illustrate the results on several numerical examples with computer simulations.
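The paper's exact network architecture is not given in the abstract, but the core idea of the steepest descent learning rule can be sketched as follows: treat the solution vector x as the connection weights, define the cost function E(x) = ½‖Ax − b‖², and update the weights along the negative gradient. The learning rate, tolerance, and stopping rule below are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def steepest_descent_solve(A, b, lr=0.05, tol=1e-10, max_iter=20000):
    """Solve Ax = b by minimizing the cost E(x) = 0.5 * ||Ax - b||^2
    with steepest (gradient) descent; x plays the role of the
    connection weights being adjusted by the learning rule."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        residual = A @ x - b
        grad = A.T @ residual          # gradient of the cost function
        if np.linalg.norm(grad) < tol:
            break                      # weights have converged
        x -= lr * grad                 # steepest-descent weight update
    return x

# Small non-singular example system (illustrative data).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = steepest_descent_solve(A, b)
```

A quasi-Newton method replaces the fixed learning rate with a step based on a running approximation of the inverse Hessian (here A^T A), which is what typically yields the faster convergence the paper sets out to measure.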