Python 59 bytes
print reduce(lambda x,p:p/2*x/p+2*10**999,range(6637,1,-2))
This prints out 1000 digits; slightly more than the required 5. Instead of using the prescribed iteration, it uses this:
pi = 2 + 1/3*(2 + 2/5*(2 + 3/7*(2 + 4/9*(2 + 5/11*(2 + ...)))))
The 6637 (the innermost denominator) can be formulated as approximately:
digits * 2 * log2(10)
This implies linear convergence: each deeper level of nesting produces roughly one more binary bit of pi.
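For clarity, here is a minimal ungolfed sketch of what the reduce call computes (Python 2, same scaling by 10^999). The one liberty taken is seeding the accumulator with 2*10**999 rather than with 6637 (the first element of the range, which reduce uses as its starting value); the seed gets multiplied by roughly 2^-3317 over the course of the loop, so either choice vanishes from the printed digits.
acc = 2 * 10**999                     # innermost "2", scaled by 10^999
for p in range(6635, 1, -2):          # denominators 6635, 6633, ..., 5, 3
    k = p / 2                         # Python 2 integer division: p = 2k + 1, so p/2 == k
    acc = k * acc / p + 2 * 10**999   # acc <- 2 + k/(2k+1) * acc, still scaled
print acc                             # 1000 digits, starting 3141592653...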
If, however, you insist on using the tan⁻¹ identity, a similar rate of convergence can be achieved, if you don't mind going about the problem slightly differently. Taking a look at the partial sums:
4.0, 2.66667, 3.46667, 2.89524, 3.33968, 2.97605, 3.28374, ...
it is apparent that each term jumps back and forth to either side of the convergence point; the series has alternating convergence. Additionally, each term is closer to the convergence point than the previous term was; it is absolutely monotonic with respect to its convergence point. The combination of these two properties implies that the arithmetic mean of any two neighboring terms is closer to the convergence point than either of the terms themselves. To give you a better idea of what I mean, consider the following image:
The outer series is the original, and the inner series is found by taking the average of each pair of neighboring terms. A remarkable difference. But what's truly remarkable is that this new series also has alternating convergence, and is absolutely monotonic with respect to its convergence point. That means this process can be applied over and over again, ad nauseam.
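To make a single averaging pass concrete, here is a small sketch (Python 2) that recomputes the partial sums listed above and averages each neighboring pair:
sums = []
s = 0.0
for k in range(8):                    # first 8 partial sums of 4 - 4/3 + 4/5 - ...
    s += 4.0 * (-1) ** k / (2 * k + 1)
    sums.append(s)
averaged = [(a + b) / 2 for a, b in zip(sums, sums[1:])]
print sums                            # 4.0, 2.666..., 3.466..., 2.895..., ...   (outer series)
print averaged                        # 3.333..., 3.066..., 3.180..., 3.117..., ...  (inner series)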
Ok. But how?
Some formal definitions. Let P1(n) be the nth term of the first sequence, P2(n) be the nth term of the second sequence, and similarly Pk(n) the nth term of the kth sequence as defined above.
P1 = [P1(1), P1(2), P1(3), P1(4), P1(5), ...]
P2 = [(P1(1) + P1(2))/2, (P1(2) + P1(3))/2, (P1(3) + P1(4))/2, (P1(4) + P1(5))/2, ...]
P3 = [(P1(1) + 2P1(2) + P1(3))/4, (P1(2) + 2P1(3) + P1(4))/4, (P1(3) + 2P1(4) + P1(5))/4, ...]
P4 = [(P1(1) + 3P1(2) + 3P1(3) + P1(4))/8, (P1(2) + 3P1(3) + 3P1(4) + P1(5))/8, ...]
Not surprisingly, these coefficients follow exactly the binomial coefficients, and can be expressed as a single row of Pascal's Triangle. Since an arbitrary row of Pascal's Triangle is trivial to calculate, an arbitrarily 'deep' series can be found simply by taking the first k partial sums, multiplying each by the corresponding entry in the kth row of Pascal's Triangle, and dividing by 2^(k-1).
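To make the weighting concrete, here is a direct, deliberately naive sketch of that combination at depth k = 36 (the same depth used below). Like the incremental loop that follows, it combines the partial sums S_2 through S_37 with the coefficients C(35, i) and divides by 2^35, so the two should agree to roughly full double precision:
from math import factorial

def binom(n, r):                      # C(n, r), one entry of Pascal's Triangle
    return factorial(n) // (factorial(r) * factorial(n - r))

k = 36
sums, s = [], 0.0
for i in range(k + 1):                # partial sums S_1 .. S_37 of the Leibniz series
    s += 4.0 * (-1) ** i / (2 * i + 1)
    sums.append(s)
pi_est = sum(binom(k - 1, i) * sums[i + 1] for i in range(k)) / 2.0 ** (k - 1)
print "%.14f" % pi_est                # ~3.14159265358979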
In this way, full double-precision (64-bit) floating-point accuracy (~14 decimal places) can be achieved with just 36 iterations, at which point the partial sums themselves haven't even converged on the second decimal place. This is obviously not golfed:
# used for Pascal's triangle (the running binomial coefficient, pre-divided by 2^35)
t = 36; v = 1.0/(1<<t-1); e = 1
# used for the partial sums of pi
p = 4; d = 3; s = -4.0
x = 0
while t:
    t -= 1
    p += s/d; d += 2; s *= -1   # next partial sum of 4 - 4/3 + 4/5 - ...
    x += p*v                    # accumulate the binomially weighted partial sum
    v = v*t/e; e += 1           # advance to the next binomial coefficient
print "%.14f"%x
If you want arbitrary precision, that can be achieved with a little modification; here it is, once again calculating 1000 digits:
# used for Pascal's triangle (the running binomial coefficient, kept exact as an integer)
f = t = 3318; v = 1; e = 1
# used for the partial sums of pi, scaled up by 2^10 * 10^999
p = 4096*10**999; d = 3; s = -p
x = 0
while t:
    t -= 1
    p += s/d; d += 2; s *= -1   # next (scaled) partial sum of the Leibniz series
    x += p*v                    # accumulate the binomially weighted partial sum
    v = v*t/e; e += 1           # advance to the next binomial coefficient
print x>>f+9                    # drop the 2^3317 binomial scale and the 2^10 guard bits
The initial value of p begins 2^10 times larger, to counteract the integer-division error of s/d as d becomes larger, which would otherwise keep the last few digits from converging. Notice here again that 3318 is also approximately:
digits * log2(10)
This is the same number of iterations as the first algorithm (the constant is half of 6637 because t decreases by 1 each iteration instead of 2). Once again, this indicates linear convergence: one binary bit of pi per iteration. In both cases, 3318 iterations are required to calculate 1000 digits of pi, a slightly better quota than 1 million iterations to calculate 5.
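As a rough sanity check on both formulas (the constants used above, 6637 and 3318, sit slightly below the values the formulas give):
from math import log
digits = 1000
print int(digits * 2 * log(10, 2))    # ~6643: innermost denominator for the one-liner
print int(digits * log(10, 2))        # ~3321: iteration count for the arbitrary-precision loop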
"It can be answered in about 10 lines with c#" someone please beat that – MCMastery – 2015-07-24T15:14:52.043
You should probably add some more rules, otherwise you will get answers like (python) p=lambda:3.14159 – Matt – 2012-08-29T15:44:18.090
Have you seen http://codegolf.stackexchange.com/questions/506/calculate-500-digits-of-pi, which is very similar? At the very least, trig functions should be banned for this problem because they allow for trivial solutions such as this QBASIC program: ?INT(4E5*ATN(1))/1E5 – PleaseStand – 2012-08-29T17:30:45.183
I think you should require that the algorithm be one of successive approximation: the longer you compute, the closer you get to pi. – DavidC – 2012-08-29T22:59:33.267
@DavidCarraher, although that's mathematically inevitable using this series, from a numerical analytical point of view it's highly dubious. A slowly converging alternating series is a poster child for loss of significance. – Peter Taylor – 2012-08-30T07:03:51.860
@PeterTaylor "Slowly converging" is an understatement. It took roughly one million places to attain the desired precision! – DavidC – 2012-08-30T11:14:34.537
huh, a friend of mine posted something about this problem on facebook last night. he didn't golf it though. – acolyte – 2012-08-30T16:59:47.627
Dupe, but it's so old it's not here: http://stackoverflow.com/q/407518/12274 – J B – 2012-09-01T21:42:57.663
Later near-duplicate: http://codegolf.stackexchange.com/q/22009 – msh210 – 2014-02-25T21:29:33.363