For example, if the algorithm needs $O(1/\varepsilon)$ iterations to reach a point where $\|\nabla f(x)\| \leq \varepsilon$, can this Big-O complexity change when we instead stop once $\|x_{k+1} - x_{k}\| \leq \varepsilon$ (with variable gradient descent step sizes), or once the difference between successive function values satisfies $|f(x_{k}) - f(x_{k+1})| \leq \varepsilon$ (assuming both $f(x)$ and $\nabla f(x)$ are Lipschitz continuous)?
EDIT: I had not fully thought this through: of course we could always construct a step-size schedule that makes the complexity worse. But if we use exact line search at each step, for example, or some inexact search based on the Armijo condition, could the choice of termination condition change the convergence rate?
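For concreteness, here is a rough sketch (in Python) of the setup I have in mind: gradient descent with Armijo backtracking, where the stopping test can be switched between the three criteria above. The function name, the `stop` parameter, and the quadratic test problem are just illustrative choices, not part of any particular reference.

```python
import numpy as np

def grad_descent(f, grad, x0, eps=1e-6, max_iter=10_000,
                 stop="grad", c=1e-4, beta=0.5):
    """Gradient descent with Armijo backtracking line search.

    stop selects the termination test being compared:
      "grad" : ||grad f(x_k)||       <= eps
      "step" : ||x_{k+1} - x_k||     <= eps
      "fval" : |f(x_k) - f(x_{k+1})| <= eps
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if stop == "grad" and np.linalg.norm(g) <= eps:
            return x, k
        # Armijo backtracking: shrink t until sufficient decrease holds,
        # i.e. f(x - t g) <= f(x) - c * t * ||g||^2
        t = 1.0
        fx = f(x)
        while f(x - t * g) > fx - c * t * np.dot(g, g):
            t *= beta
        x_new = x - t * g
        if stop == "step" and np.linalg.norm(x_new - x) <= eps:
            return x_new, k
        if stop == "fval" and abs(fx - f(x_new)) <= eps:
            return x_new, k
        x = x_new
    return x, max_iter

# Compare iteration counts under the three stopping rules on a
# simple ill-conditioned quadratic (illustrative only).
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
for rule in ("grad", "step", "fval"):
    _, iters = grad_descent(f, grad, x0=[1.0, 1.0], eps=1e-6, stop=rule)
    print(rule, iters)
```

The question, in these terms, is whether the iteration counts returned under `"step"` or `"fval"` can differ from the `"grad"` count by more than a constant factor as $\varepsilon \to 0$, given a reasonable line search.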