2012-04-14

Iteration: 0

π = 0

π/2 = Σ_{k=0}^{∞} (2*k*)! 1^{2k+1} / (2^{2k} (*k*!)^{2} (2*k* + 1))

1/π = Σ_{k=0}^{∞} ((2*k*)!)^{3} (42*k* + 5) / ((*k*!)^{6} 16^{3k+1})

1/π = (2√2 / 9801) Σ_{k=0}^{∞} (4*k*)! (1103 + 26390*k*) / ((*k*!)^{4} 396^{4k})

426880 √10005 / π = Σ_{k=0}^{∞} (6*k*)! (13591409 + 545140134*k*) / ((3*k*)! (*k*!)^{3} (-640320)^{3k})

π^{2}/6 = Σ_{k=1}^{∞} 1/*k*^{2}

π/4 = 4 atan(1/5) − atan(1/239), where atan *z* = Σ_{k=0}^{∞} (-1)^{k} *z*^{2k+1} / (2*k* + 1)

Click **Iterate Once** to get started. Then keep
clicking. Notice how the result slowly converges on 3.14159265358979.
If you get tired, you can switch to **10 Times**. Then switch from
**Zeta** to **Machin** and click **Iterate Once** several
times. See how much more quickly it converges? Then try the other
algorithms! Note: this app is crippled by the fact that JavaScript only
performs 64-bit math. [JavaScript source]

[*Disclaimer: I'm not a math guy, so there are probably all kinds
of inexactitudes in here. Feel free to add corrections to the
comments.*]

A post went by on reddit recently showing an employer trying to weed out candidates. The idea was that you were to find the first 7-digit palindrome in π, and that was the email address to send your resumestuffs. (The first 10-digit palindrome is 0136776310.)

Now, searching a string for palindromes is a pretty trivial exercise, as is finding the first million digits of π on the Internet.

So the only thing that's left is to write a program to compute π for us! How do we do that? Let us count the ways!

No, let's not. There are too many. We'll just cover a few, and then show an implementation of the fastest in C using the GNU MultiPrecision library, GMP.

People have been figuring out ways to compute π for ages, and most of
them were very very tedious and time-consuming. Archimedes figured it
out to 3 decimal places. About 750 years later, we managed to get 4
digits. Another 900 years passed, and we got to 16. In the late 1800s,
hapless William
Shanks busted his buns *for 15 years* to compute π to 707
places using John
Machin's formula (seen in the app, above). In a stroke of bad luck,
however, he blew it 527 digits in, so the remainder of the estimate is
incorrect. *[Nooooooo...!]* Fortunately, no one was willing to
spend the time to double-check his work, and he died in the blissful
happiness of a job well-done.

Archimedes had used a geometric calculation to estimate π, but later on we mathed-up and figured out infinite series. It turns out, this opens up some serious doors, since π has a habit of showing up all over the place.

The "go to" formula for many years was John Machin's. Following that
in the early 20th century, *super
genius* Srinivasa
Ramanujan came up with some good speed-demons. In 1987 the Chudnovsky
brothers came up with the blistering Chudnovsky algorithm, which
computes over 14 digits of π per iteration! (These three algorithms are
featured in the app at the top of this page.)

If we have a formula that we can solve for π, and there is an infinite series that can produce a result for that solution, then we're golden.

What does that mean?

Here's an example: we know sin π/2 = 1, so we can solve for π like so: π = 2 arcsin 1. Now... is there an infinite series for arcsin? Sure, it's:

arcsin *z* = Σ_{k=0}^{∞} (2*k*)! *z*^{2k+1} / (2^{2k} (*k*!)^{2} (2*k* + 1))

Notes for non-math types: sigma, Σ, is a summation, and basically means we're going to loop *k* from 0 to infinity, keeping a running sum of the results of the formula inside the summation. The exclamation point, !, is the factorial, which is the product of all integers between that number and 1. And, of course, the superscripts are exponents.

So the basic plan of attack is to write a for-loop to do the summation, and then compute the final result, like so:

Pseudocode:

```
// compute pi = 2 * arcsin(1)
sum = 0
z = 1  // since we're computing arcsin(1)

for (k = 0; k < ∞; k++) {
    sum = sum + giant_arcsin_formula(k, z)  // from above
}

pi = 2 * sum  // and there was much rejoicing
```

You might have noticed a slight *implementation detail* in the
above code. Notably, it literally has an infinite loop in it, and you
have to be somewhere by sometime, so who can wait that long?

So we give up after a while. We run it for a few thousand
iterations, and we decide that's good enough, and call it a day. But as
you saw with the app at the top of this page, some series
*converge* (approach π) very very VERY slowly. So the quest has
been on for the last few hundred years to come up with equations that
converge more and more quickly so we can wrap it up and have some
dinner.

Using arcsin to compute π this way is really abysmally slow, so we'll be focusing on Chudnovsky instead.

You might have noticed that the app at the top of this page gives out after just a few digits. This is because JavaScript uses 64-bit IEEE-754 floating point numbers, and that's as much precision as they offer. Also, some intermediate calculations might eventually overflow and fail, as well (since there are a lot of powers and factorials involved.)

How do we get more precision? How do we get precision of our choice, say, 4000 bits instead of just 64? We need a library that does Arbitrary Precision Arithmetic. One of my favorites is the GNU MultiPrecision Library, AKA gmplib or libgmp or GMP.

This library allows you to set the precision of your computations, and then has a variety of useful functions to help (e.g. arithmetic, exponentiation, factorial, etc.) Here's an example that computes √2 to 1024 bits:

C: sqrt2.c

```c
// compile with: gcc -Wall -o sqrt2 sqrt2.c -lgmp
#include <stdio.h>
#include <stdlib.h>
#include <gmp.h>

int main(void)
{
    mpf_t result;   // floating point variable
    char *output;   // holds the output result string
    mp_exp_t exp;   // holds the exponent for the result string

    mpf_set_default_prec(1024);  // set default precision in bits

    mpf_init(result);            // allocate and init variable
    mpf_sqrt_ui(result, 2);      // compute sqrt(2)

    // convert to string, and print:
    output = mpf_get_str(NULL, &exp, 10, 0, result);
    printf("%.*s.%s\n", (int)exp, output, output + exp);

    // clean up
    free(output);                // mpf_get_str() calls malloc()
    mpf_clear(result);

    return 0;
}
```

And here is the result of a run:

```
$ ./sqrt2
1.4142135623730950488016887242096980785696718753769480731766797379907
324784621070388503875343276415727350138462309122970249248360558507372
126441214970999358314132226659275055927557999505011527820605714701095
599716059702745345968620147285174186408891986095523292304843087143214
50839762603627995251407989687253396
```

As you can see, we have way more digits than will fit in a standard
`long double`; we have 1024-bits-worth, to be precise.

How many decimal digits are there in 1024 bits, anyway? How can we figure it out? (Fun Fact: did you know "bit" is a contraction of "binary digit"? Of course you did!)

Thinking about it intuitively, we know that 1 bit can encode two values: 1 or 0. We know that 2 bits can store 4 values: 0, 1, 2, and 3. But for decimal numbers, we need to store the values 0-through-9... so we need more bits.

3 bits gets us 0-7 (2^{3} different values), which is almost there, but 4 bits gets us 0-15 (2^{4}), which is too many. Sure, we can encode decimal numbers at one decimal digit per nibble (that's called Binary Coded Decimal), but it wastes part of a bit.

So intuitively, it's between 3 and 4 bits. Like 3.4 bits per decimal digit or something. How can we do better?

In general, an *x*-decimal-digit number can encode 10^{x} different values. Similarly, a *y*-bit (binary digit) number can encode 2^{y} different values. And for any *y*-bit value, there is some *x*-decimal-digit number that can hold the same value.

And we know the relation between *x* and *y* is some constant factor (we speculated 3.4 bits-per-decimal-digit, earlier). So we can say that *y* = *fx*, where *f* is probably around 3.4, and have fun:

10^{x} = 2^{y} = 2^{fx}   (since we declared *y* = *fx*)

*fx* = log_{2}(10^{x})   (solve for *fx* using the definition of the logarithm)

*fx* = *x* log_{2} 10

*f* = log_{2} 10 = 3.3219280948873626

Hey, so it's 3.32192-blahblahblah bits-per-decimal-digit! Our 3.4 guess wasn't bad at all. And now we know how many bits we need to hold a certain number of digits:

```c
double bits_per_digit = 3.3219280948873626;  // roughly

int digits = 100;
int bits = digits * bits_per_digit + 1;  // +1 in case of round-off
```

GMP can do a lot more than just square roots. There are oodles of functions, some of which will make you especially happy if you're trying to implement RSA. For Chudnovsky, we just need square root, exponent, factorial, and arithmetic.

I'm not even going to begin to pretend I know how the Chudnovsky equation came about:

426880 √10005 / π = Σ_{k=0}^{∞} (6*k*)! (13591409 + 545140134*k*) / ((3*k*)! (*k*!)^{3} (-640320)^{3k})

but there it is in all its glory. Solve for π, and you're done!

Again, it's just like the arcsine example, above, where we run the series for so many iterations until we feel the estimate is good enough. Now, for Chudnovsky, we know how many decimal digits it generates per iteration, so it's easy to compute the number of iterations needed for a desired result. (I don't know how they figured how many digits per iteration; maybe someone can enlighten us.)

Here's the inner part of the loop from the source file, `chudnovsky.c`, that runs the
summation. Obviously the larger `iterations` is, the longer it's
going to run, and the better the π approximation will be.

C

```c
for (k = 0; k < iterations; k++) {
    threek = 3 * k;

    mpz_fac_ui(a, 6 * k);              // (6k)!

    mpz_set_ui(b, 545140134);          // 13591409 + 545140134k
    mpz_mul_ui(b, b, k);
    mpz_add_ui(b, b, 13591409);

    mpz_fac_ui(c, threek);             // (3k)!

    mpz_fac_ui(d, k);                  // (k!)^3
    mpz_pow_ui(d, d, 3);

    mpz_ui_pow_ui(e, 640320, threek);  // (-640320)^(3k)
    if ((threek & 1) == 1) {
        mpz_neg(e, e);
    }

    // numerator (in a)
    mpz_mul(a, a, b);
    mpf_set_z(A, a);

    // denominator (in c)
    mpz_mul(c, c, d);
    mpz_mul(c, c, e);
    mpf_set_z(B, c);

    // result
    mpf_div(F, A, B);

    // add on to sum
    mpf_add(sum, sum, F);
}
```

You should be able to match the comments in the source up to the equation to see the individual bits at work. (Note this code is not optimized for speed in the least; it merely matches the equation as closely as possible.)

```
$ ./chudnovsky 20000
3.14159265358979323846264338327950288419716939937510582097
4944592307816406286208998628034825342117067982148086513282
3066470938446095505822317253594081284811174502841027019385
2110555964462294895493038196442881097566593344612847564823
...etc.
```

One cool thing about the summation is that you can break it up between cores. You could have one core do half the work, and another core do the other half, and then add their results together when they're both done. It's embarrassingly parallel. It should be possible to take something like OpenMP, which I blogged about earlier, and speed up the computation considerably.

I'm not going to bother implementing that, since Hanhong Xue and David Carver did it better. Links to that source are below.

- π on Wikipedia
- GMP home page
- GMP page about π, with link to Hanhong Xue's Chudnovsky estimator
- David Carver's OpenMP mods to Hanhong Xue's estimator
- A million digits of π at piday.org
- My cheesy C program that uses Chudnovsky—this is demonstration code, optimized for clarity, not speed or correctness.