Channel: Active questions tagged real-analysis - Mathematics Stack Exchange

pow and its relative error


While investigating floating-point implementations of $\operatorname{pow}(x,b)=x^b$ with $x,b\in\Bbb R$ in some libraries, I found that several pow implementations are inexact.

The implementation is essentially$$\operatorname{pow}(x,b)= \exp(b\ln x). \tag 1$$Now assume $x>0$, and that the relative error introduced by $\exp$ and by $\ln$ is the same $\Delta$ with $|\Delta| \ll 1$.

Then the result of the computation is:$$\begin{align}(1+\Delta_\text{pow}) \operatorname{pow}(x,b) &= (1+\Delta)\exp (b(1+\Delta)\ln x) \\&=(1+\Delta)\exp(b\ln x)\underbrace{\exp(\Delta b\ln x)}_{\textstyle\approx 1+\Delta b\ln x} \\&\approx (1+\Delta)\operatorname{pow}(x,b)(1+\Delta b\ln x) \\&\approx (1+\Delta + \Delta b\ln x)\operatorname{pow}(x,b) \tag 2\end{align}$$This means that the relative error of pow is not $\Delta$ but up to$$|\Delta_\text{pow}| \approx |\Delta|\,(1+|b\ln x|) \tag 3$$So it is clear why an implementation according to $(1)$ becomes inexact, but I don't see how pow could be implemented with a smaller relative error. (The targeted precision is relative because this is in the context of a floating-point implementation.)
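The amplification in $(3)$ is easy to observe numerically: a minimal sketch (my own, not from any particular library) that computes $10^{100}$ as $\exp(b\ln x)$ in double precision and compares against the exact integer value. The measured relative error is platform-dependent, but it is typically orders of magnitude above the machine epsilon $2^{-53}\approx 1.1\cdot10^{-16}$, consistent with $|b\ln x|\approx 230$ acting as an error amplifier.

```python
import math

x, b = 10.0, 100
# Naive pow via exp(b * ln x), as in (1): the rounding error of
# ln(x) is multiplied by b, and any absolute error in the exponent
# becomes a relative error of the result.
approx = math.exp(b * math.log(x))
exact = 10**100                      # exact integer reference for 10^100
rel_err = abs(approx - exact) / exact
print(rel_err)                       # platform-dependent, typically well above 2**-53
```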

I am not asking for a specific implementation of course, but for a different representation of pow that has a better error behaviour. In the above implementation, the relative error of pow is unbounded even when the relative errors of exp and ln are bounded and well-behaved.
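For what it's worth, one standard remedy (a hedged sketch of the general technique, not a claim about any specific library) is to form the exponent $b\ln x$ with extra working precision, e.g. in double-double arithmetic, so that its *absolute* error stays far below $2^{-53}$; then the factor $|b\ln x|$ in $(3)$ no longer matters and $\Delta_\text{pow}$ is again bounded by roughly one rounding of the final result. Here Python's `decimal` module stands in for the extended-precision arithmetic:

```python
from decimal import Decimal, getcontext

# ~10 guard digits beyond double precision, so the absolute error
# in b*ln(x) is negligible compared to 2**-53.
getcontext().prec = 30

def pow_extended(x: float, b: float) -> float:
    """pow(x, b) for x > 0 via exp(b ln x), exponent carried in extended precision."""
    xd, bd = Decimal(x), Decimal(b)          # exact conversions from binary floats
    return float((bd * xd.ln()).exp())       # round back to double only once, at the end

# Same test case as above: 10**100 is now correctly rounded to double.
rel_err = abs(pow_extended(10.0, 100.0) - 10**100) / 10**100
```

Any fixed extended-precision format works here; the point is only that $b\ln x$ is computed with enough absolute accuracy before the single final rounding, which caps the relative error at about half an ulp independently of $|b\ln x|$.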

