It is shown that the steepest-descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to O(ε⁻²) to drive the norm of the gradient below ε. This shows that the upper bound of O(ε⁻²) evaluations known for steepest descent is tight and that Newton's method may be as slow as the steepest-descent method in the worst case. The improved evaluation complexity bound of O(ε …
On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems.
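The abstract's termination criterion is driving the gradient norm below a tolerance ε. A minimal sketch of fixed-step steepest descent under that criterion is shown below; the function names, step size, and test problem are illustrative assumptions, not the paper's construction (the paper's worst-case examples are specially built functions, not this quadratic):

```python
import numpy as np

def steepest_descent(grad, x0, eps=1e-2, step=0.1, max_iter=100_000):
    """Fixed-step steepest descent (illustrative sketch, not the paper's
    algorithm). Stops when the gradient norm falls below eps and returns
    the final iterate together with the iteration count."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps:  # the ε-criterion from the abstract
            return x, k
        x = x - step * g  # move along the negative gradient
    return x, max_iter

# Toy example: f(x) = 0.5 * ||x||², whose gradient is grad(x) = x.
x_star, iters = steepest_descent(lambda x: x, np.array([1.0, 1.0]))
```

On this strongly convex toy problem convergence is fast; the paper's point is that on suitably constructed nonconvex instances the iteration count can approach ε⁻² as ε shrinks.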