A Limited-Memory Quasi-Newton Optimization Method with Parallel Computing Capability

Document Type : Original Paper

Authors

1 Department of Computer Engineering, Engineering Faculty, Ferdowsi University of Mashhad, Mashhad, Iran

2 Department of Computer Engineering, Engineering Faculty, Ferdowsi University of Mashhad, Mashhad, Iran

10.22055/jamm.2025.48314.2330

Abstract

Newton's method is often preferred over first-order methods because it approximates the objective function more accurately; however, it requires explicit computation of the Hessian matrix. Quasi-Newton methods, by contrast, are popular among researchers because they do not require computing the Hessian matrix directly.
Here, we propose a quasi-Newton method designed to achieve high computational speed while accurately approximating the Hessian. Our approach views the Hessian matrix from an operational perspective: a matrix acts as an operator that transforms an input vector into an output vector. From this viewpoint, the Hessian matrix takes a gradient vector as input and produces a corresponding output vector. The method is designed to perform its calculations in parallel, enabling high execution speed, and it operates with limited memory, making it well suited to large-scale problems. Our experimental results indicate that the proposed method is faster than the quasi-Newton L-BFGS method; for instance, on quadratic optimization problems it executes approximately ten times faster than L-BFGS. Furthermore, on problems with a large number of variables, our method is more computationally efficient than BFGS.
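The operator view described above, in which the Hessian acts on a vector without ever being formed explicitly, can be illustrated with a finite-difference Hessian-vector product. This is a standard illustrative technique, not the paper's proposed method; the function names and the quadratic test problem are our own assumptions.

```python
import numpy as np

def hessian_vector_product(grad, x, v, eps=1e-6):
    """Approximate H(x) @ v using only gradient evaluations.

    Treats the Hessian as an operator: H(x) v ≈ (∇f(x + eps*v) - ∇f(x)) / eps,
    so the full n-by-n matrix is never stored.
    """
    return (grad(x + eps * v) - grad(x)) / eps

# Example: quadratic f(x) = 0.5 x^T A x, whose gradient is A x and Hessian is A.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
grad = lambda x: A @ x

x = np.array([1.0, -1.0])
v = np.array([0.5, 2.0])
hv = hessian_vector_product(grad, x, v)  # matches A @ v without forming A
```

For a quadratic, the gradient is linear, so the finite difference is exact up to rounding; for general functions it yields an O(eps) approximation.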
