It's very common in numerical analysis to have an approximation method whose error is estimated by truncating a Taylor series and applying a remainder theorem. E.g. the composite trapezoid rule on $N$ subintervals approximates the integral of $f$ on $[a, b]$ with error
$$|E| = \frac{(b - a)^3}{12N^2}|f''(\xi)|,$$
for some point $\xi \in [a, b]$. We can get a uniform error bound by taking an upper bound of $|f''|$ over the whole interval. My question is: is there a way to generically compute such an upper bound, as sharp as possible, for all (computable) continuous functions, or do we have to supply and prove individual bounds ad hoc?
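To fix notation, here is a minimal sketch of the rule together with its a priori bound. The parameter `d2_bound` is a hypothetical stand-in for a user-supplied bound on $\sup_{[a,b]} |f''|$, which is exactly the ingredient I don't know how to produce generically:

```python
import math

def trapezoid_with_error(f, a, b, n, d2_bound):
    """Composite trapezoid rule on n subintervals, plus the a priori
    error bound (b - a)^3 / (12 n^2) * sup|f''|.

    d2_bound must be a valid upper bound for |f''| on [a, b]; producing
    it generically is the open question here.
    """
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    approx = h * total
    err_bound = (b - a) ** 3 / (12 * n ** 2) * d2_bound
    return approx, err_bound

# Example: integrate sin on [0, pi], where |sin''| = |sin| <= 1.
approx, err = trapezoid_with_error(math.sin, 0.0, math.pi, 100, 1.0)
# approx ~ 1.99984 (true value 2); err ~ 2.58e-4 certifies |2 - approx|.
```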
Obviously, we know classically that every continuous function on a compact domain is bounded. Heck, my old analysis text even proved this by explicitly computing the upper and lower bounds. However, its first step is to invoke the equivalence of continuity and uniform continuity on compact domains, and use the uniform $\epsilon$-$\delta$ for the computation. And the proof of that equivalence depends in an essential way on compactness; if there is a way to "compute" a finite subcover, and so translate ordinary pointwise $\epsilon$-$\delta$s into uniform ones, it's not obvious to me. Is there such a way? Is there an approach I'm not thinking of that is less directly transcribed from classical math?
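One direction I've toyed with (hedging: I'm not sure it answers the question, and `sup_bound` is just a name I made up for the sketch) is that if we're handed an *interval extension* $F$ of $f$, i.e. $F$ maps $[lo, hi]$ to an interval containing $f([lo, hi])$ and its output shrinks as its input does, then bisection with pruning yields a certified upper bound with no subcover extraction at all:

```python
import math

def sup_bound(F, a, b, tol=1e-6):
    """Certified upper bound on sup of f over [a, b], sharp to within
    roughly tol, given an interval extension F of f.

    Assumption: F((lo, hi)) returns (Lo, Hi) with Lo <= f(x) <= Hi for
    every x in [lo, hi], and (Hi - Lo) -> 0 as the input shrinks.
    """
    mid = 0.5 * (a + b)
    incumbent = F((mid, mid))[0]   # f(mid) >= incumbent, so sup f >= incumbent
    leftover = -math.inf           # Hi of boxes too narrow to bisect further
    work = [(a, b)]
    while work:
        lo, hi = work.pop()
        Lo, Hi = F((lo, hi))
        if Hi <= incumbent + tol:  # box cannot push the sup past incumbent + tol
            continue
        m = 0.5 * (lo + hi)
        if m <= lo or m >= hi:     # hit the floating-point grid; keep Hi as-is
            leftover = max(leftover, Hi)
            continue
        incumbent = max(incumbent, F((m, m))[0])
        work.append((lo, m))
        work.append((m, hi))
    # Every box was pruned with Hi <= incumbent + tol, or its Hi retained:
    return max(incumbent + tol, leftover)

def F(box):
    """Naive interval extension of f(x) = x * (1 - x)."""
    lo, hi = box
    u_lo, u_hi = 1.0 - hi, 1.0 - lo              # interval for (1 - x)
    prods = (lo * u_lo, lo * u_hi, hi * u_lo, hi * u_hi)
    return (min(prods), max(prods))

print(sup_bound(F, 0.0, 1.0))  # ~0.250001; the true sup is 0.25
```

Of course this just pushes the problem into obtaining $F$ in the first place, which is what leads me to the next thought.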
My other thought, somewhat unrelated to the approach above, is to represent functions as maps from closed intervals to closed intervals, with only an indirect pointwise action. But that's almost verbatim the Scott domain of continuous functions, which I've already evaluated and discarded for other reasons.
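For concreteness, the indirect pointwise action can be recovered from the interval map alone by shrinking nested intervals around the point (again assuming a convergent extension like the `F` from the previous sketch; `point_value` is my own name for the helper):

```python
def point_value(F, x, tol=1e-9):
    """Recover f(x) indirectly from the interval-to-interval action by
    evaluating F on shrinking intervals around x.  Assumes F's output
    converges as its input shrinks toward the point x."""
    w = 1.0
    lo, hi = F((x - w, x + w))
    while hi - lo > tol:
        w *= 0.5
        lo, hi = F((x - w, x + w))
    return 0.5 * (lo + hi)

print(point_value(F, 0.3))  # ~0.21 == 0.3 * (1 - 0.3)
```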