First digit of an exponential?
Given two positive integers a,b > 1, is there a formula to obtain the most significant decimal digit (or for any base m, for that matter) of a^b, without computing a^b? The number of digits in a^b? Please provide proof.
A proof to the contrary would also be awesome.
Apparently, I should have said "for an arbitrary base m".
Thank you very much smci. I still wonder, though - can anyone prove that it can't be done without logarithms, or computation of a^b?
1 Answer
smci · Lv 7 · 1 decade ago · Favorite Answer
Well, since log_m(a^b) = b log_m(a),
this boils down to getting a "good-enough" approximation to log_m(a).
If you want the FORMULA approach: write b log_m(a) = k + f, where k is the integer part and f ∈ [0,1) is the fractional part. The integer part k is what you're estimating when you ask which of the following bins b log_m(a) falls into:
(b-1) ≤ b log_m(a)
(b-2) ≤ b log_m(a) < (b-1)
(b-3) ≤ b log_m(a) < (b-2)
...
2 ≤ b log_m(a) < 3
1 ≤ b log_m(a) < 2
and the fractional part f is what pins down the most significant digit: the MSD is d exactly when log_m(d) ≤ f < log_m(d+1), for d = 1, ..., m-1 (no '0' case cos we know the MSD can't be 0).
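To make the formula view concrete, here is a minimal sketch (my own illustration, not part of the original answer) that reads the leading digit off the fractional part and the digit count off the integer part. It cheats by calling the library log at full double precision; the rest of this answer is about how you'd estimate that log yourself, and how accurately.

```python
import math

def leading_digit_and_length(a, b, m=10):
    # Everything we need about a**b lives in x = b * log_m(a):
    # the integer part gives the digit count, the fractional part the MSD.
    x = b * math.log(a, m)
    k, frac = divmod(x, 1.0)
    msd = math.floor(m ** frac)   # log_m(d) <= frac < log_m(d+1)  <=>  MSD is d
    return msd, int(k) + 1        # (leading digit, number of digits in base m)

# e.g. 2**1000 ~ 1.07e301: leading digit 1, 302 decimal digits
print(leading_digit_and_length(2, 1000))   # (1, 302)
```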
Whereas if you want an ITERATIVE METHOD (not a formula) implementation, below I describe an adaptation of a well-known algorithm for computing this,
and the NUMBER OF DIGITS (in base m) of a^b will be floor( b log_m(a) ) + 1.
Obviously the larger the exponent b is, the more accurately we are going to need to estimate log_m(a), because any error in it gets multiplied by b before it reaches the fractional part of b log_m(a).
Now we only need to estimate log_m(a) closely enough to land that fractional part in the right leading-digit bin. We can bound the accuracy needed by noting that the narrowest bin is the last one, log_m(m-1) ≤ f < 1, whose width is log_m( m/(m-1) ), about 0.046 in base 10. So the error in our estimate of log_m(a) has to stay comfortably below log_m( m/(m-1) ) / b.
This tells us how accurate our estimate of log_m(a) needs to be.
(This assumes full precision, i.e. no early termination in the algorithm - you would need to modify the following algorithm for that. I suspect the early-termination version needs a tolerance of about half the above, i.e. twice as tight as you would normally need.
Also, for reasons below, you'll want to use natural logs, then convert from base-e to base-m (with some rounding-to-furthest).)
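Here's a rough numeric illustration of that accuracy requirement (again my own sketch, not from the answer): a fixed error in log_10(a) is harmless for small b, but once multiplied by b it is enough to push the fractional part into the wrong leading-digit bin.

```python
import math

def msd_from_log(log_m_a, b, m=10):
    # Leading digit implied by an *estimate* of log_m(a).
    frac = (b * log_m_a) % 1.0
    return math.floor(m ** frac)

a, m = 7, 10
true_log = math.log(a, m)
eps = 1e-6                                  # a fixed estimation error in log_10(a)
for b in (10, 1000, 100000):
    ok = msd_from_log(true_log, b, m) == msd_from_log(true_log + eps, b, m)
    tol = math.log(m / (m - 1), m) / b      # narrowest bin width (~0.046) over b
    print(b, "digit unchanged:", ok, " rough tolerance:", tol)
# With b = 100000 the 1e-6 error already flips the predicted leading digit.
```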
One well-known iterative algorithm for elementary-function evaluation, in use since the 1960s, is CORDIC ( http://en.wikipedia.org/wiki/CORDIC ).
See the Volder and Walther papers cited for proofs.
Since you only need the MSD, you can figure out an early-termination condition. Search the literature; there have been many papers on that.
To get the natural log [see Volder eqn (20)] you use Hyperbolic mode (vectoring), initializing x = (w+1), y = (w-1), z = 0; the iteration drives y toward 0 and gradually computes
z → atanh( (w-1)/(w+1) ) = ln(w) / 2
with increasing precision, so 2z converges to ln(w).
As I commented, you'll need to figure out your early-termination condition based on the worst-case in base-m, then convert that to base-e.
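For what it's worth, here is a toy double-precision sketch of that hyperbolic-vectoring iteration (my own code, not from the Volder/Walther papers, and without the fixed-point, guard-bit and early-termination machinery a real implementation would have). The standard caveat: plain hyperbolic CORDIC only converges for arguments roughly in (0.1, 9.4), so outside that range you'd pre-scale w by a power of the base and add the scale factor's log back at the end.

```python
import math

def cordic_ln(w, iterations=40):
    # Hyperbolic CORDIC, vectoring mode: start at x = w+1, y = w-1, z = 0;
    # driving y -> 0 accumulates z -> atanh((w-1)/(w+1)) = ln(w)/2.
    # Iterations 4, 13, 40, ... are repeated, as the convergence proofs require.
    # math.atanh(2**-i) stands in for the usual precomputed constant table.
    x, y, z = w + 1.0, w - 1.0, 0.0
    repeat = 4
    for i in range(1, iterations + 1):
        for _ in range(2 if i == repeat else 1):
            d = -1.0 if y >= 0.0 else 1.0        # pick the rotation that shrinks |y|
            x, y, z = (x + d * y * 2.0 ** -i,
                       y + d * x * 2.0 ** -i,
                       z - d * math.atanh(2.0 ** -i))
        if i == repeat:
            repeat = 3 * repeat + 1
    return 2.0 * z                               # 2 * atanh((w-1)/(w+1)) = ln(w)

print(cordic_ln(3.0), math.log(3.0))   # both ~1.0986...
```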
Implementations of CORDIC are usually done in radix-2, radix-4 or radix-2^k, but there's no reason you can't do it in radix-m (the implementation won't be the most efficient, but you don't care.) However if you do it outside radix-2^k, there is hardly any literature on it. So I suggest doing it in radix-2^k then convert to radix-m.
CORDIC will give you arbitrary precision, but the computation is bit-serial, so e.g. if you wanted 128-bit precision it might take slightly over 128 iterations (read the literature to see why guard bits and occasional repeated iterations are needed).
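And a hypothetical end-to-end use of that sketch, gluing it to the bin logic above (the values are purely illustrative; note that m = 10 itself sits just outside the toy routine's convergence range, so its log is assembled from ln 5 + ln 2):

```python
import math   # only for floor; the logs come from the cordic_ln sketch above

a, b, m = 7, 100, 10
ln_a = cordic_ln(a)                      # from the sketch above
ln_m = cordic_ln(5.0) + cordic_ln(2.0)   # ln(10) = ln(5) + ln(2), both in range
x = b * ln_a / ln_m                      # b * log_m(a)
frac = x - math.floor(x)
print("digits:", math.floor(x) + 1)      # 85
print("MSD:", math.floor(m ** frac))     # 3   (7**100 ≈ 3.23e84)
```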
(By the way, a smartass special case is base 2, where the MSD of any non-zero number is (obviously) always 1.)
____
@torquestomp:
we DON'T compute logarithms or a^b, we only do PARTIAL COMPUTATION.
In that respect it's no different from any other approach; it's all conceptually the same thing.
Source(s): I did my MSEE thesis on CORDIC. I interchanged the terms 'precision' and 'accuracy' above sloppily in a way that I should be shot for, but it's late and you get the idea.
* John S. Walther, "A Unified Algorithm for Elementary Functions", Proc. Spring Joint Computer Conference, pp. 379-385, May 1971 [linked from Wikipedia]