Convergence properties of an Objective-Function-Free Optimization regularization algorithm, including an $\mathcal{O}(\epsilon^{-3/2})$ complexity bound

Preprint, working paper. Year: 2022

Abstract

An adaptive regularization algorithm for unconstrained nonconvex optimization is presented in which the objective function is never evaluated; only its derivatives are used. This algorithm belongs to the class of adaptive regularization methods, for which optimal worst-case complexity results are known in the standard framework where the objective function is evaluated. It is shown in this paper that these excellent complexity bounds remain valid for the new algorithm, even though significantly less information is used. In particular, it is shown that, if derivatives of degree one to $p$ are used, the algorithm finds an $\epsilon_1$-approximate first-order minimizer in at most $\mathcal{O}(\epsilon_1^{-(p+1)/p})$ iterations, and an $(\epsilon_1,\epsilon_2)$-approximate second-order minimizer in at most $\mathcal{O}(\max[\epsilon_1^{-(p+1)/p},\epsilon_2^{-(p+1)/(p-1)}])$ iterations. As a special case, the new algorithm using first and second derivatives, when applied to functions with Lipschitz continuous Hessian, finds an iterate $x_k$ at which the gradient's norm is less than $\epsilon_1$ in at most $\mathcal{O}(\epsilon_1^{-3/2})$ iterations.
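For readability, the optimality notions appearing above can be read with the standard criticality measures (recalled here for context; this is our reading, since the abstract itself does not define them, and the paper may use a more general measure for $p>2$): $x_k$ is an $\epsilon_1$-approximate first-order minimizer, respectively an $(\epsilon_1,\epsilon_2)$-approximate second-order minimizer, when
\[
  \|\nabla_x^1 f(x_k)\| \le \epsilon_1,
  \qquad\text{respectively}\qquad
  \|\nabla_x^1 f(x_k)\| \le \epsilon_1
  \;\text{ and }\;
  \lambda_{\min}\!\left[\nabla_x^2 f(x_k)\right] \ge -\epsilon_2 .
\]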

Dates and versions

hal-03718813, version 1 (09-07-2022)

License

Attribution

Identifiers

HAL Id: hal-03718813, version 1

Cite

Serge Gratton, Sadok Jerad, Philippe L. Toint. Convergence properties of an Objective-Function-Free Optimization regularization algorithm, including an $\mathcal{O}(\epsilon^{-3/2})$ complexity bound. 2022. ⟨hal-03718813⟩
