Bassim A. Hassan, Issam A. R. Moghrabi, Alaa Luqman Ibrahim, Hawraz N. Jabbar
ABSTRACT This research introduces two conjugate gradient methods, BIV1 and BIV2, designed to solve unconstrained optimization problems efficiently using only first-derivative information. The study derives new conjugate gradient parameters and investigates their practical performance. The proposed BIV1 and BIV2 methods are compared with the classical Hestenes-Stiefel (HS) method through a series of numerical experiments on test problems drawn from the CUTE library and other unconstrained problem collections. Key performance metrics, including the number of iterations, function evaluations, and CPU time, show that both BIV1 and BIV2 outperform the HS method in efficiency and effectiveness. The methods are further applied to training artificial neural networks, where experimental results show competitive performance in terms of convergence rate and accuracy.
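Since the abstract only names the methods, the sketch below illustrates the nonlinear conjugate gradient framework being compared, using the classical Hestenes-Stiefel parameter β_k = g_{k+1}ᵀy_k / (d_kᵀy_k) with y_k = g_{k+1} − g_k. The BIV1 and BIV2 parameters derived in the paper are not stated in this abstract and are not reproduced here; the Armijo backtracking line search and the descent-restart safeguard are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def backtracking_line_search(f, grad, x, d, alpha=1.0, rho=0.5, c=1e-4):
    """Simple Armijo backtracking; a stand-in for a Wolfe line search."""
    fx, gxd = f(x), grad(x) @ d
    while alpha > 1e-12 and f(x + alpha * d) > fx + c * alpha * gxd:
        alpha *= rho
    return alpha

def cg_hs(f, grad, x0, tol=1e-6, max_iter=1000):
    """Nonlinear conjugate gradient with the Hestenes-Stiefel parameter.

    A minimal sketch for illustration only; it does not implement the
    paper's BIV1/BIV2 updates, whose formulas are not given in the abstract.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = backtracking_line_search(f, grad, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                       # y_k = g_{k+1} - g_k
        denom = d @ y
        beta = (g_new @ y) / denom if denom != 0 else 0.0   # HS parameter
        d = -g_new + beta * d               # new search direction
        if g_new @ d >= 0:                  # safeguard: restart if not descent
            d = -g_new
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    # Usage example on the Rosenbrock function; iterates should approach [1, 1].
    rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    rosen_grad = lambda x: np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])
    print(cg_hs(rosenbrock, rosen_grad, np.array([-1.2, 1.0])))
```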