If, instead of the classical $1/L$ constant step size, we use adaptive step sizes chosen by exact line search or Armijo backtracking (say), can this alter the big-O complexity of the convergence rate?
In *Searching for Optimal Per-Coordinate Step Sizes with Multidimensional Backtracking*, the authors note that "... even an exact line-search cannot improve the convergence rate beyond what is achievable with a fixed step-size". But this is for strongly convex functions - would the same be true in the general non-convex case?
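For concreteness, here is a minimal sketch (my own illustration, not from the paper) of the two schemes being compared: gradient descent with a fixed $1/L$ step versus an Armijo backtracking line search, on a toy quadratic where $L$ is the largest eigenvalue. The function, constants, and helper names are all made up for the example:

```python
import numpy as np

def armijo_step(f, grad_f, x, alpha0=1.0, beta=0.5, c=1e-4):
    """Backtracking (Armijo) line search: shrink the trial step until the
    sufficient-decrease condition f(x - a*g) <= f(x) - c*a*||g||^2 holds."""
    g = grad_f(x)
    a = alpha0
    while f(x - a * g) > f(x) - c * a * np.dot(g, g):
        a *= beta
    return a

def gradient_descent(f, grad_f, x0, steps=200, fixed_step=None):
    """Run gradient descent with either a fixed step size or Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        a = fixed_step if fixed_step is not None else armijo_step(f, grad_f, x)
        x = x - a * grad_f(x)
    return x

# Toy smooth (strongly) convex example: f(x) = 0.5 * x^T A x, with L = lambda_max(A).
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad_f = lambda x: A @ x
L = 10.0

x_fixed = gradient_descent(f, grad_f, [1.0, 1.0], fixed_step=1.0 / L)
x_armijo = gradient_descent(f, grad_f, [1.0, 1.0])
```

Both runs converge to the minimizer; the backtracking version can take larger steps along the way, but (as the quoted result says for the strongly convex case) it stays in the same big-O rate class as the fixed step - the question is whether that equivalence survives without convexity.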