# How are exponents calculated?

I've had a chance to look at fdlibm's implementation. The comments describe the algorithm used:

```
* Method: Let x = 2**n * (1+f)
*   1. Compute and return log2(x) in two pieces:
*           log2(x) = w1 + w2,
*      where w1 has 53-24 = 29 bit trailing zeros.
*   2. Perform y*log2(x) = n+y' by simulating multi-precision
*      arithmetic, where |y'| <= 0.5.
*   3. Return x**y = 2**n*exp(y'*log2)
```

followed by a listing of all the special cases handled (0, 1, inf, nan).

The most intense sections of the code, after all the special-case handling, involve the `log2` and `2**` calculations. And there are no loops in either of those. So, the complexity of floating-point primitives notwithstanding, it looks like an asymptotically constant-time algorithm.

Floating-point experts (of which I'm not one) are welcome to comment. :-)

Unless they've discovered a better way to do it, I believe that approximate values for trig, logarithmic and exponential functions (for exponential growth and decay, for example) are generally calculated using arithmetic rules and **Taylor Series** expansions to produce an approximate result accurate to within the requested precision. (See any calculus book for details on power series, Taylor series, and Maclaurin series expansions of functions.) Please note that it's been a while since I did any of this, so I couldn't tell you, for example, exactly how to calculate the number of terms in the series you need to include to guarantee an error small enough to be negligible in a double-precision calculation.

For example, the Taylor/Maclaurin series expansion for e^x is this:

```
       +inf [ x^k ]            x^2    x^3      x^4        x^5
e^x  =  SUM [ --- ]  = 1 + x + --- + ----- + ------- + --------- + ....
       k=0  [ k!  ]            2*1   3*2*1   4*3*2*1   5*4*3*2*1
```

If you take all of the terms (k from 0 to infinity), this expansion is exact and complete (no error).

However, if you don't take all the terms going to infinity, but you stop after say 5 terms or 50 terms or whatever, you produce an **approximate** result that differs from the actual e^x function value by a remainder which is fairly easy to calculate.

The good news for exponentials is that the series converges nicely and the terms of its polynomial expansion are fairly easy to code iteratively, so you **might** (repeat, **MIGHT** - remember, it's been a while) not even need to pre-calculate how many terms you need to guarantee your error is less than your precision, because you can test the size of the contribution at each iteration and stop when it becomes close enough to zero. In practice, I do not know if this strategy is viable or not - I'd have to try it. There are important details I have long since forgotten about, stuff like: machine precision, machine error and rounding error, etc.
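The "stop when the contribution is close enough to zero" strategy can be sketched as follows. This is a hedged illustration, not what any library actually ships; the function name `exp_taylor` and the `tol` parameter are my own invention, and `math.exp` is used only to check the result.

```python
import math

def exp_taylor(x, tol=1e-16):
    """Approximate e^x by summing Taylor terms until the next term
    is negligible relative to the running total (a sketch of the
    adaptive-stopping idea described above)."""
    term = 1.0    # current term x^k / k!, starting with k = 0
    total = 1.0
    k = 1
    while abs(term) > tol * abs(total):
        term *= x / k     # x^k/k! = (x^(k-1)/(k-1)!) * (x/k)
        total += term
        k += 1
    return total
```

Note that each term is obtained from the previous one with a single multiply and divide, which is why the iterative coding is easy; for large or negative x, real libraries do range reduction first rather than summing the raw series.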

Also, please note that if you are not using e^x, but you are doing growth/decay with another base like 2^x or 10^x, the approximating polynomial function changes.
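Concretely, the change of base just rescales the argument of the same series, since b^x = e^(x·ln b). A hedged sketch (the name `exp_base` is mine, not a library function):

```python
import math

def exp_base(b, x, tol=1e-16):
    """b**x via the e^y Taylor series with y = x * ln(b), b > 0.
    Same expansion as e^x, with the argument rescaled by ln(b)."""
    y = x * math.log(b)
    term, total, k = 1.0, 1.0, 1
    while abs(term) > tol * abs(total):
        term *= y / k
        total += term
        k += 1
    return total
```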

The usual approach, to raise a to the b, for an integer exponent, goes something like this:

```
result = 1
while b > 0
    if b is odd
        result *= a
        b -= 1
    b /= 2
    a = a * a
```

It is generally logarithmic in the size of the exponent. The algorithm is based on the invariant "a^b * result = a0^b0", where a0 and b0 are the initial values of a and b.
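The pseudocode above translates directly to runnable Python (a sketch for non-negative integer exponents; the name `ipow` is mine):

```python
def ipow(a, b):
    """Exponentiation by squaring for integer b >= 0.
    Invariant at the top of each iteration: a**b * result == a0**b0."""
    result = 1
    while b > 0:
        if b % 2 == 1:   # b is odd
            result *= a
            b -= 1
        b //= 2
        a = a * a
    return result
```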

For negative or non-integer exponents, logarithms and approximations and numerical analysis are needed. The running time will depend on the algorithm used and what precision the library is tuned for.

Edit: Since there seems to be some interest, here's a version without the extra multiplication.

```
result = 1
while b > 0
    while b is even
        a = a * a
        b = b / 2
    result = result * a
    b = b - 1
```
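That variant, as runnable Python (again a sketch for non-negative integer exponents; `ipow2` is my name for it):

```python
def ipow2(a, b):
    """Exponentiation by squaring, squaring out all factors of two in b
    first, so result is only multiplied when b has become odd."""
    result = 1
    while b > 0:
        while b % 2 == 0:   # b is even
            a = a * a
            b //= 2
        result *= a
        b -= 1
    return result
```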

You can use exp(n*ln(x)) for calculating x^n. Both x and n can be double-precision floating-point numbers. The natural logarithm and exponential function can be calculated using Taylor series. Here you can find formulas: http://en.wikipedia.org/wiki/Taylor_series
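As a minimal sketch of that identity (valid for x > 0; here `math.exp` and `math.log` stand in for the Taylor-series evaluations, and the name `pow_via_logs` is mine):

```python
import math

def pow_via_logs(x, n):
    """x**n computed as exp(n * ln(x)), for x > 0."""
    return math.exp(n * math.log(x))
```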

If I were writing a pow function targeting Intel, I would return exp2(log2(x) * y). Intel's microcode for log2 is surely faster than anything I'd be able to code, even if I could remember my first year calculus and grad school numerical analysis.

e^x = (1 + fraction) * 2^exponent,  1 <= 1 + fraction < 2

x * log2(e) = log2(1 + fraction) + exponent,  0 <= log2(1 + fraction) < 1

exponent = floor(x * log2(e))

1 + fraction = 2^(x * log2(e) - exponent) = e^((x * log2(e) - exponent) * ln2) = e^(x - exponent * ln2),  0 <= x - exponent * ln2 < ln2
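The range reduction above can be sketched in Python. This is a hedged illustration of the derivation, not anyone's production code: `exp_range_reduced` and `terms` are my own names, and a short fixed-length Taylor series is used for e^r since r lies in [0, ln 2), where it converges very fast.

```python
import math

def exp_range_reduced(x, terms=20):
    """e^x = 2^k * e^r with k = floor(x * log2(e)) and
    r = x - k*ln2 in [0, ln2), per the derivation above."""
    k = math.floor(x * math.log2(math.e))
    r = x - k * math.log(2)          # 0 <= r < ln 2
    term, total = 1.0, 1.0           # Taylor series for e^r
    for i in range(1, terms):
        term *= r / i
        total += term
    return math.ldexp(total, k)      # total * 2**k, exact scaling
```

Scaling by 2^k via `math.ldexp` just adjusts the floating-point exponent field, which is why splitting out the integer part of x * log2(e) is attractive.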
