A Hybrid Optimization Algorithm for Learning Deep Models
Authors: Farnaz Hoseini 1, Asadollah Shahbahrami 2, Peyman Bayat 3
1 - Department of Computer Engineering, Rasht Branch, Islamic Azad University, Rasht, Iran
2 - Department of Computer Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran
3 - Department of Computer Engineering, Rasht Branch, Islamic Azad University, Rasht, Iran
Keywords: Adam, Optimization Algorithms, Deep Learning, Stochastic Gradient Descent, Momentum, Nesterov
Abstract:
Deep learning is a subset of machine learning that is widely used in Artificial Intelligence (AI) fields such as natural language processing and machine vision. Learning algorithms require optimization in multiple respects, and model-based inference generally reduces to solving an optimization problem. In deep learning, the most important problem addressed by optimization is neural network training, yet training a neural network can occupy thousands of computers for months. In the present study, the basic optimization algorithms used in deep learning were evaluated. First, a performance criterion was defined on a training dataset; together with a regularization term, it forms the objective function. The optimization process then seeks parameters that minimize this objective function. Finally, to evaluate the performance of different optimization algorithms, recent algorithms for training neural networks were compared on the segmentation of brain images. The results showed that the proposed hybrid optimization algorithm outperformed the other tested methods because of its hierarchical and deeper feature extraction.
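To make concrete what such an optimizer iterates, the following is a minimal sketch of a single Adam update step, one of the keyword algorithms above. The function name `adam_step`, the toy quadratic objective, and all hyperparameter values are illustrative assumptions, not details taken from the paper; the update rule itself follows the standard Adam formulation (first- and second-moment estimates with bias correction).

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, parameter step.

    Note: hyperparameter defaults here are the commonly used ones, not
    values reported in the paper.
    """
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy check: minimize f(theta) = theta^2, whose gradient is 2*theta.
theta = np.array([1.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
print(float(theta[0]))  # should approach the minimizer at 0
```

In practice such a step would be applied to the gradients of the network's objective function (the performance criterion plus regularization term described above) rather than to a scalar toy problem.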