After seeing some epsilon–delta proofs of continuity for $x^2$, $e^x$, and $x^3$, I notice a common pattern: we take $|f(x)-f(c)|$ and find some expression $h(c,\delta)$, with $h: \mathbb{R} \times \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$, which is surjective (or better yet invertible) when viewed as a function of $\delta$ alone, holding $c$ constant, so that we have:
$$|f(x)-f(c)| < h(c,\delta)$$
And there are a few cases for how the proof proceeds from here. If we didn't restrict $\delta$'s domain at any point in the proof (e.g., as with $x^2$), then by surjectivity of $h$ in its second argument, for every $\epsilon > 0$ there is a $\delta$ such that $\epsilon = h(c,\delta)$, and this is the $\delta$ required to satisfy the continuity criterion.
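For concreteness, here is my own working of the $x^2$ case (the constants may differ slightly from the proofs you saw):
$$|x^2 - c^2| = |x-c|\,|x+c| \leq |x-c|\bigl(|x-c| + 2|c|\bigr) < \delta(\delta + 2|c|) =: h(c,\delta).$$
For fixed $c$, the map $\delta \mapsto \delta^2 + 2|c|\delta$ is a continuous, strictly increasing bijection from $\mathbb{R}_{\geq 0}$ onto itself, hence surjective; solving $h(c,\delta) = \epsilon$ explicitly gives $\delta = \sqrt{c^2 + \epsilon} - |c|$.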
In case we can somehow get an invertible $h$, we set $\delta = h^{-1}(\epsilon)$ (inverting in the $\delta$ argument, for fixed $c$). For continuity of monomial terms, we also incorporate the trick of restricting $\delta$ (say, $\delta \leq 1$) to get such an invertible $h$, and then define the final $\delta$ through a minimum function, so that the assignment of $\delta$ we get from $\epsilon$ stays consistent with the restriction on $\delta$ we imposed earlier.
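The $x^3$ case illustrates this restriction trick, if it helps (again my own working, so the bounds are just one possible choice). Restricting $\delta \leq 1$ gives $|x| \leq |c| + 1$, so
$$|x^3 - c^3| = |x-c|\,|x^2 + xc + c^2| \leq |x-c| \cdot 3(|c|+1)^2 < 3(|c|+1)^2\,\delta =: h(c,\delta),$$
which is linear, hence invertible, in $\delta$. Setting $h(c,\delta) = \epsilon$ gives $\delta = \frac{\epsilon}{3(|c|+1)^2}$, and the final choice
$$\delta = \min\left(1,\ \frac{\epsilon}{3(|c|+1)^2}\right)$$
keeps the assignment consistent with the restriction $\delta \leq 1$ used in the bounding.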
So far so good. Now I have only one question about the final quantity we get from all this hard work of bounding: why is it $h(c,\delta)$ and, in almost every case, not $h(c,\delta,x)$? That is, why do we aim to make the final bound $h$ (the quantity we equate to $\epsilon$, and which dominates $|f(x)-f(c)|$) independent of $x$?
I've been struggling to answer this for myself. My weak intuition is this: $\delta$ directly restricts what values $x$ can take. If $h$ somehow depended on $x$, then so would $h^{-1}$, in whichever case we are in. Now, it would be weird if the quantity which tells us where $x$ is allowed to be depended on $x$ itself... but then again, recursive definitions do exist in mathematics... so maybe there are better reasons to give for this.
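If it helps to see this intuition written against the quantifier structure of the definition I am using:
$$\forall \epsilon > 0\ \ \exists \delta > 0\ \ \forall x:\quad |x - c| < \delta \implies |f(x) - f(c)| < \epsilon,$$
where $\delta$ is chosen before $x$ is quantified, so a formula like $\delta = h^{-1}(\epsilon)$ (for fixed $c$) cannot mention $x$, while the bounding work happens inside the scope of $\forall x$.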
Note: I am very aware of the trivial answer "because it works".