Arithmetic-Geometric Mean of Gauss



Contrary to the popular belief that mathematics is the most dreaded subject, many people are struck in their younger years by mathematical curiosities. Some use these curiosities as puzzles for friends; others try to find the reason behind them and are satisfied once they find it. But the great heroes of mathematics are those who, being intrigued by a mathematical curiosity, develop the idea in a systematic manner and connect it to other ideas of existing mathematical knowledge.

The following examples will illustrate my point clearly:
1) $ 987654321 - 123456789 = 864197532$: Here the same digits appear after subtraction, and people often pose it as a puzzle: subtract $ 45$ from $ 45$ and get $ 45$. (The sum of the digits of each of the three numbers is $ 45$.)

2) Divisibility rule for $ 7$: Split the number into blocks of 3 digits starting from the least significant digit. If the difference between the sum of the even-numbered blocks and the sum of the odd-numbered blocks is divisible by $ 7$, then the number is divisible by $ 7$. The reason behind the rule is that $ 10^3 + 1 = 1001$ is divisible by $ 7$, and since $ 1001$ is also divisible by $ 13$ (indeed $ 1001 = 7 \cdot 11 \cdot 13$) the rule applies to $ 13$ as well.
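The block rule is easy to mechanize. Here is a small Python sketch (function names are my own) built on the fact that $ 1001 = 7 \cdot 11 \cdot 13$, so $ 10^3 \equiv -1$ modulo each of $ 7$, $ 11$ and $ 13$:

```python
# Divisibility test for 7 via blocks of 3 digits, using
# 1001 = 7 * 11 * 13, i.e. 10^3 = -1 (mod 7), (mod 11) and (mod 13).

def alternating_block_sum(n):
    """Sum 3-digit blocks (least significant first) with alternating signs."""
    total, sign = 0, 1
    while n > 0:
        total += sign * (n % 1000)
        sign = -sign
        n //= 1000
    return total

def divisible_by_7(n):
    # n and its alternating block sum leave the same remainder mod 7
    return alternating_block_sum(n) % 7 == 0
```

The same `alternating_block_sum` works for $ 11$ and $ 13$ with the modulus changed accordingly.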

3) Process of square root extraction by long division: The method seems quite weird and strange when presented to students at the age of 12-13 years. I scratched my head sometime during my 15th year and found the reason why it works. The method is based on the following simple formula (again studied at about 12 years of age) $$ (x + h)^2 = x^2 + 2xh + h^2 = x^2 + h (2x + h)$$ which allows us to write (the factor $10$ appears because the long division method works with decimal digits) $$ (10x + h)^2 - a = 0 \Rightarrow (2 \cdot 10x + h)h = a - (10x)^2$$ Here $ a$ is the number whose square root we need, $ x$ is the number formed by removing the least significant digit from the square root, and $ h$ is the least significant digit of the square root. This is a special (and easy) case of Horner's method of finding numerical roots of polynomial equations, where we find the root digit by digit.
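The recipe can be sketched in a few lines of Python (an integer-only version; the function name is mine). Each step brings down a pair of digits and finds the largest digit $ h$ with $ h(20x + h) \leq$ the current remainder, which is exactly the relation $ h(2 \cdot 10x + h) = a - (10x)^2$ above:

```python
# Digit-by-digit square root (integer version of the long-division
# method).  With x the part of the root found so far, we seek the
# largest digit h such that h * (20x + h) <= current remainder.

def isqrt_longhand(a):
    groups = []                 # 2-digit groups, most significant first
    while a > 0:
        groups.append(a % 100)
        a //= 100
    groups.reverse()

    x, remainder = 0, 0
    for g in groups:
        remainder = remainder * 100 + g        # bring down next pair
        h = 9
        while h * (20 * x + h) > remainder:    # largest feasible digit
            h -= 1
        remainder -= h * (20 * x + h)
        x = 10 * x + h
    return x
```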


After this prelude we now come to the topic of this post. I first studied the concept of the Arithmetic-Geometric mean in an exercise problem on sequences in an average-quality book on infinite series when I was in 11th grade (i.e. it was presented just as a curious mathematical problem with no special significance other than being related to the arithmetic and geometric means).

To begin our journey let's start with two positive real numbers $ a$ and $ b$ and define two sequences $ \{a_n\}$ and $ \{b_n\}$ as follows \begin{align} a_{0} &= a, b_{0} = b\notag\\ a_{n + 1} &= \frac{a_{n} + b_{n}}{2}\notag\\ b_{n + 1} &= \sqrt{a_{n}b_{n}}\notag \end{align} Thus we start with $ a_{0} = a$ and $ b_{0} = b$ and find their arithmetic mean as $ a_{1}$ and their geometric mean as $ b_{1}$. The process is then repeated to get $ a_{2}$ and $ b_{2}$ from $ a_{1}$ and $ b_{1}$. This process generates the sequences $ \{a_{n}\}$ and $ \{b_{n}\}$. The exercise problem in my book was to prove that these two sequences converge to the same limit as $ n \rightarrow \infty$.

The solution to the problem is not that complicated. In the trivial case when $ a = b$ both sequences are constant and their common limit is $ a = b$. Assume then that $ a > b$ (if instead $ a < b$, the AM-GM inequality still gives $ a_{1} > b_{1}$ after the first step, so nothing is lost). We can easily see that the sequence $ \{a_{n}\}$ is strictly decreasing and bounded below by $ b$, and the sequence $ \{b_{n}\}$ is strictly increasing and bounded above by $ a$. Therefore both sequences tend to limits, say $ A$ and $ B$. Since we have $$ a_{n + 1} = \frac{a_{n} + b_{n}}{2}$$ it is clear that $ A = (A + B) / 2$ and so $ A = B$. It is also interesting to see that $$ a_{n + 1} - b_{n + 1} = \frac{{(\sqrt{a_{n}} - \sqrt{b_{n}})}^2}{2}$$ which leads to $$ a_{n + 1} - b_{n + 1} = \frac{{(a_{n} - b_{n})}^2}{2 ({\sqrt{a_{n}} + \sqrt{b_{n}})}^2}$$ Since all the $ a_{n}$ and $ b_{n}$ are greater than or equal to $ b$, we finally get $$ a_{n + 1} - b_{n + 1} \leq \frac{{(a_{n} - b_{n})}^2}{8b}$$ Applying this recursively we get $$ a_{n + p} - b_{n + p} \leq 8b\left(\frac{a_{n} - b_{n}}{8b}\right)^{2^{p}}$$ Once $ n$ is large enough that $ a_{n} - b_{n} < 8b$, this bound shows that the termwise difference between the sequences converges very fast to $ 0$. In fact the convergence is quadratic, meaning that each iteration roughly doubles the number of decimal digits in which $ a_{n}$ and $ b_{n}$ agree.

The common limit of these two sequences $ \{a_{n}\}$ and $ \{b_{n}\}$ is called the Arithmetic-Geometric Mean of $ a$ and $ b$ and is denoted by $ M(a, b)$. We shall use the abbreviation AGM for Arithmetic-Geometric Mean in what follows. I played around with this concept of AGM using my calculator and found that, for numbers up to around $ 10^{10}$, the AGM was usually found in fewer than 10 iterations. Further iterations gave the same result due to the calculator's limit of 10 decimal digits. For me the journey of the AGM finished here, and I remembered it just as a very peculiar mathematical curiosity.
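The iteration is trivial to program today. The following Python sketch (helper names are my own) mirrors the calculator experiment and exhibits the digit-doubling behaviour:

```python
# AGM iteration: each step replaces (a, b) with their arithmetic
# and geometric means; the gap a_n - b_n shrinks quadratically.

from math import sqrt

def agm(a, b, tol=1e-14):
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2, sqrt(a * b)
    return (a + b) / 2

# watch the gap shrink for a = 1, b = 2
gaps = []
a, b = 1.0, 2.0
for _ in range(4):
    a, b = (a + b) / 2, sqrt(a * b)
    gaps.append(a - b)
# each gap is roughly the square of the previous one, divided by 8b
```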

Gauss on AGM

From here Gauss takes the stage. He calculated the AGM of $ 1$ and $ \sqrt{2}$ up to 11 decimal places and found that it agreed with the ratio of $ \pi$ and a certain number $ \omega$. This number $ \omega$ is related to the following elliptic integral $$ \omega = 2 \int_{0}^{1} \frac{dx}{\sqrt{1 - x^{4}}}$$ which is involved in the calculation of the arc length of a curve called the lemniscate, given by the equation $$ (x^{2} + y^{2})^{2} = x^{2} - y^{2}$$ and, by the way, $ \pi$ is given by the following integral: $$ \pi = 2 \int_{0}^{1} \frac{dx}{\sqrt{1 - x^{2}}}$$ Such was the genius and intuition of Gauss that he recognized that such an agreement up to 11 decimal places between two numbers arising out of entirely different contexts could not be a mere coincidence. I am still struck by awe at how Gauss could think of such a thing. Seeing numerical equality to a certain number of decimal places between two numbers in the vast uncountable sea of reals and interpreting it as a true equality is an unparalleled feat of genius. Gauss mentioned in his diary that this connection would open up new fields of study in mathematical analysis. Gauss' prediction came true (with contributions from himself, Abel and Jacobi) in the creation of the theory of elliptic functions and integrals.
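Gauss's numerical coincidence is easy to reproduce. In the sketch below (helper names mine), $\omega$ is computed from the equivalent smooth integral $\int_{0}^{\pi/2} d\theta/\sqrt{1 + \cos^{2}\theta}$, obtained from the $\omega$ integral by the substitution $ x = \cos\theta$ so that the singularity at $ x = 1$ disappears:

```python
# Reproduce Gauss's observation: M(1, sqrt(2)) agrees with pi/omega,
# where omega = 2 * Int_0^1 dx / sqrt(1 - x^4).

from math import sqrt, pi, cos

def agm(a, b):
    for _ in range(10):          # 10 iterations are ample for doubles
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def simpson(f, lo, hi, n=2000):  # basic Simpson rule, n even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

omega = 2 * simpson(lambda t: 1 / sqrt(1 + cos(t) ** 2), 0, pi / 2)
# pi / omega matches the AGM to many decimal places
```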

AGM and Elliptic Integrals

Gauss proceeded to prove that these two numbers were in fact equal. In other words he established the following identity $$ \int_{0}^{1} \frac{dx}{\sqrt{1 - x^{4}}} = \frac{\pi}{2M(1, \sqrt{2})}$$ To do so he first started a systematic study of AGM and its properties. From the definition it is immediately obvious that the following hold true
a) $ M(a, a) = a$
b) $ M(a, b) = M(b, a)$
c) $ M(a, b) = M(a_{0}, b_{0}) = M(a_{1}, b_{1}) = \ldots = M(a_{n}, b_{n})$, in particular we have $$ M(a, b) = M\left(\frac{a + b}{2}, \sqrt{ab}\right)$$
d) $ M(ka, kb) = kM(a, b)$
Note that we can invert c) and write
e) $$ M(a, b) = M(a + \sqrt{a^{2} - b^{2}}, a - \sqrt{a^{2} - b^{2}})$$

First Proof: Stage 1

Gauss begins his analysis starting from the function $ M(1 + x, 1 - x)$ for $ 0 \leq x < 1$. In the following we are going to use the properties of AGM mentioned above $$ M(1 + x, 1 - x) = (1 + x)M\left(1, \frac{1 - x}{1 + x}\right)$$ (using (d)). Noting that $$ 1 - \left(\frac{1 - x}{1 + x}\right)^{2} = \frac{4x}{(1 + x)^{2}}$$ we get using (e) $$ M(1 + x, 1 - x) = (1 + x)M\left(1 + \frac{2\sqrt{x}}{1 + x}, 1 - \frac{2\sqrt{x}}{1 + x}\right)$$ Taking reciprocals we get $$\dfrac{1}{M(1 + x, 1 - x)} = \left(\dfrac{1}{1 + x}\right)\left(\dfrac{1}{M\left(1 + \dfrac{2\sqrt{x}}{1 + x}, 1 - \dfrac{2\sqrt{x}}{1 + x}\right)}\right)$$
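The identity just derived can be checked numerically before proceeding; this small Python sketch (with a throwaway `agm` helper of my own) confirms it for sample values of $ x$:

```python
# Numerical check of the stage-1 identity
# M(1+x, 1-x) = (1+x) * M(1 + 2*sqrt(x)/(1+x), 1 - 2*sqrt(x)/(1+x)).

from math import sqrt

def agm(a, b):
    for _ in range(10):
        a, b = (a + b) / 2, sqrt(a * b)
    return a

x = 0.37                       # any 0 <= x < 1 works
lhs = agm(1 + x, 1 - x)
u = 2 * sqrt(x) / (1 + x)      # note u < 1 since (1 - sqrt(x))^2 > 0
rhs = (1 + x) * agm(1 + u, 1 - u)
```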

First Proof: Stage 2

Now Gauss uses a very powerful technique of power series expansion. Since $ 1 / M(1 + x, 1 - x)$ is an even function of $ x$ (its value does not change when the sign of $ x$ is changed, because $ M$ is symmetric in its two arguments), it can be expressed as a power series in $ x$ consisting of only even powers of $ x$.

Thus we assume along with Gauss that $$ \frac{1}{M(1 + x, 1 - x)} = 1 + p_{1}x^{2} + p_{2}x^{4} + \cdots + p_{n}x^{2n} + \cdots $$ where coefficients $ p_{1}, p_{2}, \ldots, p_{n}, \ldots$ are to be determined. The initial coefficient $ 1$ comes from the fact that for $ x = 0$ we get $ M(1, 1) = 1$.

On using this power series we get \begin{align} 1 + p_{1}x^{2} &+ p_{2}x^{4} + \cdots + p_{n}x^{2n} + \cdots\notag\\ &= \frac{1}{1 + x} + \frac{2^{2}p_{1}x}{(1 + x)^{3}} + \frac{2^{4}p_{2}x^{2}}{(1 + x)^{5}} + \cdots + \frac{2^{2n}p_{n}x^{n}}{(1 + x)^{2n + 1}} + \cdots\notag \end{align} Equating coefficients of various powers of $ x$ one can get the values of $ p_{n}$ for all $ n$. By hand calculation it can be checked that \begin{align} p_{1} &= \frac{1}{4} = \left(\frac{1}{2}\right)^{2}\notag\\ p_{2} &= \frac{9}{64} = \left(\frac{3}{8}\right)^{2}\notag\\ p_{3} &= \frac{25}{256} = \left(\frac{5}{16}\right)^{2}\notag\\ p_{4} &= \frac{1225}{16384} = \left(\frac{35}{128}\right)^{2}\notag \end{align} and it seems that all $ p_{n}$ are fractions, and perfect squares at that, but no pattern is immediately apparent. Gauss finds the pattern by taking successive ratios of the coefficients $ p_{n}$ as follows \begin{align} \frac{p_{2}}{p_{1}} &= \left(\frac{3}{4}\right)^{2}\notag\\ \frac{p_{3}}{p_{2}} &= \left(\frac{5}{6}\right)^{2}\notag\\ \frac{p_{4}}{p_{3}} &= \left(\frac{7}{8}\right)^{2}\notag \end{align} so that we have finally $$ \frac{p_{n}}{p_{n - 1}} = \left(\frac{2n - 1}{2n}\right)^{2}$$ and therefore $$ p_{n} = \left(\frac{1 \cdot 3 \cdot 5 \cdots (2n - 1)}{2 \cdot 4 \cdot 6 \cdots 2n}\right)^{2}$$ Gauss does not stop here and he justifies this pattern by actually doing algebraic manipulations while equating coefficients of both power series (these computations are quite time consuming and need patience, and are therefore omitted from this discussion). Thus Gauss finally arrives at the following result $$\frac{1}{M(1 + x, 1 - x)} = \sum_{n = 0}^{\infty}\left(\frac{1 \cdot 3 \cdot 5 \cdots (2n - 1)}{2 \cdot 4 \cdot 6 \cdots 2n}\right)^{2}x^{2n}$$ (where the first term in the series on right is 1 corresponding to $ n = 0$).
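Gauss's closed form for $ p_{n}$ can be verified by machine. The sketch below (helpers mine) builds the $ p_{n}$ exactly with `fractions.Fraction` and compares the truncated series against a direct AGM computation:

```python
# Check the series 1/M(1+x, 1-x) = sum p_n x^(2n) with
# p_n = ((1*3*...*(2n-1)) / (2*4*...*2n))^2.

from math import sqrt
from fractions import Fraction

def agm(a, b):
    for _ in range(10):
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def p(n):
    r = Fraction(1)
    for k in range(1, n + 1):
        r *= Fraction(2 * k - 1, 2 * k) ** 2
    return r

x = 0.3
series = sum(float(p(n)) * x ** (2 * n) for n in range(60))
direct = 1 / agm(1 + x, 1 - x)
```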

First Proof: Stage 3

The other development about the $ \omega$ integral is simple. Gauss starts with the famous elliptic integral (such integrals were already being studied by Legendre) $ K(x)$ defined by $$ K(x) = \int_{0}^{\pi / 2}\frac{d\theta}{\sqrt{1 - x^{2}\sin^{2}\theta}}$$ and expands the integrand in a power series using binomial theorem to get $$ K(x) = \sum_{n = 0}^{\infty}\frac{1 \cdot 3 \cdot 5 \cdots (2n - 1)}{2^{n}n!}x^{2n}\int_{0}^{\pi / 2} \sin^{2n}\theta\,\,d\theta $$ Now the integral for powers of sines and cosines is easily evaluated using the following formula $$\int_{0}^{\pi / 2} \sin^{2n}\theta\,\,d\theta = \frac{2n - 1}{2n}\int_{0}^{\pi / 2} \sin^{2n - 2}\theta\,\,d\theta$$ Repeated application of this formula gives us \begin{align} \int_{0}^{\pi / 2} \sin^{2n}\theta\,\,d\theta &= \frac{1 \cdot 3 \cdot 5 \cdots (2n - 1)}{2 \cdot 4 \cdot 6 \cdots 2n}\int_{0}^{\pi / 2}\,d\theta\notag\\ &= \frac{1 \cdot 3 \cdot 5 \cdots (2n - 1)}{2 \cdot 4 \cdot 6 \cdots 2n} \cdot \frac{\pi}{2}\notag \end{align} Thus we get $$ K(x) = \frac{\pi}{2}\sum_{n = 0}^{\infty}\left(\frac{1 \cdot 3 \cdot 5 \cdots (2n - 1)}{2 \cdot 4 \cdot 6 \cdots 2n}\right)^{2}x^{2n}$$
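Both expressions for $ K(x)$ agree numerically, as the following sketch shows; `simpson` here is a bare-bones quadrature helper of my own, not a library routine:

```python
# Compare K(x) by direct numerical integration against its power
# series (pi/2) * sum p_n x^(2n).

from math import sqrt, sin, pi

def simpson(f, lo, hi, n=2000):
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def K_integral(x):
    return simpson(lambda t: 1 / sqrt(1 - x * x * sin(t) ** 2), 0, pi / 2)

def K_series(x, terms=80):
    total, coeff = 1.0, 1.0
    for n in range(1, terms):
        coeff *= ((2 * n - 1) / (2 * n)) ** 2   # ratio p_n / p_{n-1}
        total += coeff * x ** (2 * n)
    return pi / 2 * total
```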

First Proof: Finally Putting Pieces Together

The connection between AGM and elliptic integral is now put in place $$ K(x) = \frac{\pi}{2M(1 + x, 1 - x)}$$ Noting that $ M(1 + x, 1 - x) = M(1, \sqrt{1 - x^{2}})$ we get $$ K(x) = \frac{\pi}{2M(1, \sqrt{1 - x^{2}})}$$ Putting $ y = \sqrt{1 - x^{2}}$ we get $$\frac{\pi}{2M(1, y)} = K(\sqrt{1 - y^{2}}) = \int_{0}^{\pi / 2}\frac{d\theta}{\sqrt{1 - (1 - y^{2}) \sin^{2}\theta}}$$ Putting $ y = \sqrt{2}$ and $ x = \sin \theta$ in the integral we get $$ \int_{0}^{1}\frac{dx}{\sqrt{1 - x^{4}}} = \frac{\pi}{2M(1, \sqrt{2})}$$ In our analysis we neglected the issue of convergence of various power series involved. However it can be shown easily that they converge for $ |x| < 1$. Also under these conditions we have $ |y| < 1$ and therefore setting $ y = \sqrt{2}$ is not allowed. However, the formula connecting the elliptic integral and the AGM holds for all values of the variables involved. The restrictions on them were needed because of the particular power series approach used in the proof.
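Here is a quick numerical confirmation of $ K(x) = \pi/(2M(1, \sqrt{1 - x^{2}}))$, with throwaway helpers of my own:

```python
# Numerical check of the identity K(x) = pi / (2 * M(1, sqrt(1 - x^2))).

from math import sqrt, sin, pi

def agm(a, b):
    for _ in range(10):
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def simpson(f, lo, hi, n=2000):
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def K(x):
    return simpson(lambda t: 1 / sqrt(1 - x * x * sin(t) ** 2), 0, pi / 2)

x = 0.6
lhs = K(x)
rhs = pi / (2 * agm(1, sqrt(1 - x * x)))
```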


Great mathematical ideas often spring from simple concepts and their greatness lies in the way they get connected to other prevailing ideas and concepts of mathematics. However it is also true that establishing a connection between otherwise disconnected ideas almost always requires a genius. Even the first step of recognizing a probable connection requires great intuition, but this is possible for many people if they are observant enough. The really hard part is trying to establish the connection using a mathematical proof.

Gauss' Second Proof

Gauss developed his theory of elliptic integrals further and proved the famous integral formula which shows the connection with AGM even more explicitly. He proved that the integral $$ \int_{0}^{\pi / 2} \frac{d\theta}{\sqrt{a^{2} \cos^{2}\theta + b^{2} \sin^{2}\theta}}$$ is invariant under the Arithmetic-Geometric transformation given by $$ a \rightarrow \frac{a + b}{2}, \,\,\, b \rightarrow \sqrt{ab}$$ and therefore the integral is easily evaluated using AGM as follows $$\int_{0}^{\pi / 2} \frac{d\theta}{\sqrt{a^{2} \cos^{2}\theta + b^{2} \sin^{2}\theta}} = \frac{\pi}{2M(a, b)}$$ Gauss was such an algebraist that he proved the invariance of the integral directly using just one substitution $$ \sin\theta = \frac{2a \sin\phi}{a + b + (a - b)\sin^{2}\phi}$$ This substitution involves heavy algebraic manipulation to express the integrand in terms of $ \phi$ and to calculate the derivative $ d\theta / d\phi$. It is not clear how Gauss found this substitution formula. Later authors simplified the substitution into two parts.
First a substitution $ t = b\tan\theta$ reduces the integral to $$ I(a, b) = \frac{1}{2}\int_{-\infty}^{\infty}\frac{dt}{\sqrt{(t^{2} + a^{2})(t^{2} + b^{2})}}$$ Setting $ c = (a + b) / 2$ and $ d = \sqrt{ab}$ and using the substitution $ t = (x - ab / x) / 2$ one gets \begin{align}t^{2} + c^{2} &= \frac{(x^{2} - ab)^{2}}{4x^{2}} + \frac{(a + b)^{2}}{4}\notag\\ &= \frac{x^4 - 2abx^{2} + a^{2}b^{2} + (a^{2} + b^{2})x^{2} + 2abx^{2}}{4x^{2}}\notag\\ &= \frac{x^4 + (a^{2} + b^{2})x^{2} + a^{2}b^{2}}{4x^{2}}\notag\\ &= \frac{(x^{2} + a^{2})(x^{2} + b^{2})}{4x^{2}}\notag\end{align} and \begin{align}t^{2} + d^{2} &= \frac{(x^{2} - ab)^{2}}{4x^{2}} + (\sqrt{ab})^{2}\notag\\ &= \frac{x^4 - 2abx^{2} + a^{2}b^{2} + 4abx^{2}}{4x^{2}}\notag\\ &= \frac{x^4 + 2abx^{2} + a^{2}b^{2}}{4x^{2}}\notag\\ &= \frac{(x^{2} + ab)^{2}}{4x^{2}}\notag\end{align} and $$ dt = \frac{x^{2} + ab}{2x^{2}}dx $$ so that we get (note also that as $ x$ ranges over $ (0, \infty)$ the variable $ t$ ranges over $ (-\infty, \infty)$) \begin{align}I(c, d) &= \frac{1}{2}\int_{-\infty}^{\infty}\frac{dt}{\sqrt{(t^{2} + c^{2})(t^{2} + d^{2})}}\notag\\ &= \frac{1}{2}\int_{0}^{\infty}\frac{4x^{2}}{\sqrt{(x^{2} + a^{2})(x^{2} + b^{2})}}\,\frac{1}{x^{2} + ab}\,\frac{x^{2} + ab}{2x^{2}}\,dx\notag\\ &= \int_{0}^{\infty}\frac{dx}{\sqrt{(x^{2} + a^{2})(x^{2} + b^{2})}}\notag\\ &= \frac{1}{2}\int_{-\infty}^{\infty}\frac{dx}{\sqrt{(x^{2} + a^{2})(x^{2} + b^{2})}} = I(a, b)\notag\end{align} Thus the invariance of the integral under the Arithmetic-Geometric transformation is established.
Using this invariance it is easy to prove the identity we started with namely $$ \int_{0}^{1} \frac{dx}{\sqrt{1 - x^{4}}} = \frac{\pi}{2M(1, \sqrt{2})}$$ The integral on the left after applying substitution $ x = \cos\theta$ becomes $$ \int_{0}^{\pi / 2}\frac{d\theta}{\sqrt{1 + \cos^{2}\theta}}$$ and using $ 1 = \cos^{2}\theta + \sin^{2}\theta$ we get $$\int_{0}^{\pi / 2}\frac{d\theta}{\sqrt{1 + \cos^{2}\theta}} = \int_{0}^{\pi / 2}\frac{d\theta}{\sqrt{2\cos^{2}\theta + \sin^{2}\theta}} = \frac{\pi}{2M(1, \sqrt{2})}$$
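The invariance, and the final identity, can also be confirmed numerically with a short sketch (the quadrature and `agm` helpers are my own):

```python
# Check that I(a, b) is unchanged by (a, b) -> ((a+b)/2, sqrt(ab)),
# and that it equals pi / (2 * M(a, b)).

from math import sqrt, sin, cos, pi

def simpson(f, lo, hi, n=2000):
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def I(a, b):
    return simpson(lambda t: 1 / sqrt((a * cos(t)) ** 2 + (b * sin(t)) ** 2),
                   0, pi / 2)

def agm(a, b):
    for _ in range(10):
        a, b = (a + b) / 2, sqrt(a * b)
    return a
```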

What Next

In the next post we will study the famous formula of Richard P. Brent (discovered independently by Eugene Salamin around the same time, in 1976) for calculating $\pi$ using the AGM. Needless to say, being based on the AGM, the formula converges very rapidly and is well suited to computer-based calculation.


Comments


  1. In the last stage of the first proof we established that

    $\displaystyle K(x) = \frac{\pi}{2M(1 + x, 1 - x)}$

    Also comparing this with the identity established in stage 1 of first proof

    $\displaystyle \dfrac{1}{M(1 + x, 1 - x)} = \left(\dfrac{1}{1 + x}\right)\left(\dfrac{1}{M\left(1 + \dfrac{2\sqrt{x}}{1 + x}, 1 - \dfrac{2\sqrt{x}}{1 + x}\right)}\right)$

    we get

    $\displaystyle K(x) = \left(\dfrac{1}{1 + x}\right)K\left(\dfrac{2\sqrt{x}}{1 + x}\right)$

    This identity is called Landen’s transformation and can be proved directly (with some heavy algebraical manipulation) using the substitution

    $\displaystyle x\sin\theta = \sin(2\phi - \theta)$

    in the integral defining $K(x)$

    $\displaystyle K(x) = \int_{0}^{\pi / 2}\frac{d\theta}{\sqrt{1 - x^{2}\sin^{2}\theta}}$
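    Landen's transformation is easy to confirm numerically; the sketch below uses a simple Simpson-rule helper of my own rather than the substitution itself.

```python
# Numerical check of Landen's transformation
# K(x) = (1/(1+x)) * K(2*sqrt(x)/(1+x)).

from math import sqrt, sin, pi

def simpson(f, lo, hi, n=2000):
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def K(x):
    return simpson(lambda t: 1 / sqrt(1 - x * x * sin(t) ** 2), 0, pi / 2)

x = 0.25
lhs = K(x)
rhs = K(2 * sqrt(x) / (1 + x)) / (1 + x)
```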

  2. Paramanand,
    Thank you for your clear comments on this subject, the best I've found after quite a lot of searching on the web. I am working to get a better understanding of the AGM<>elliptic integral connection, and am finding it hard going. I've gotten stuck at least 10 times on various dead ends, but with your blog I did make some progress. At the moment, however, I'm stuck in 2 places. I don't see how you calculate the polynomial coefs in the 2nd part of Gauss's 1st proof, and I can't figure out how the "t=b*tan(theta)" substitution results in the transformed integral below it. I also tried Gauss' original substitution and got completely lost on that one.
    I've ordered a couple of the books you recommended, maybe that will help. However, maybe I'm just missing some obvious (to you) techniques to get past the current stuck places.

    Thanks for your blog, --Glenn Keller

  3. @Glenn Keller
    Thanks for reading this post. The calculation of $p_{1}, p_{2}, \ldots$ is difficult and Gauss did considerable algebraic manipulation to calculate $p_{n}$ in general. But finding first few coefficients is easy. For example the coefficient of $x$ on LHS is $0$. On RHS we have the coefficient of $x$ as $-1 + 4p_{1}$. This leads to $4p_{1} - 1 = 0$ so that $p_{1} = 1/4$.

    Again for $p_{2}$ we note that coefficient of $x^{2}$ on LHS is $p_{1}$. On RHS the coefficient of $x^{2}$ is $1 - 12p_{1} + 16p_{2}$ so we get $16p_{2} - 2 = 1/4$ or $p_{2} = 9/64$. Note that the calculation of coefficient of powers of $x$ on RHS requires the use of binomial theorem to handle $(1 + x)^{2n + 1}$ in the denominator.

    Regarding your second point about the substitution $t = b\tan \theta$, we can see that $dt = b\sec^{2}\theta\, d\theta$. And we have $$\begin{aligned}\frac{d\theta}{\sqrt{a^{2}\cos^{2}\theta + b^{2}\sin^{2}\theta}} &= \frac{\sec\theta\,\, d\theta}{\sqrt{a^{2} + b^{2}\tan^{2}\theta}}\\
    &= \frac{b\sec^{2}\theta\,\, d\theta}{\sqrt{(a^{2} + b^{2}\tan^{2}\theta)(b^{2}\sec^{2}\theta)}}\\
    &= \frac{b\sec^{2}\theta\,\, d\theta}{\sqrt{(a^{2} + b^{2}\tan^{2}\theta)(b^{2} + b^{2}\tan^{2}\theta)}}\\
    &= \frac{dt}{\sqrt{(a^{2} + t^{2})(b^{2} + t^{2})}}\end{aligned}$$ In the above substitution the interval of integration changes to $[0, \infty)$ for $t$ corresponding to $[0, \pi/2]$ for $\theta$. Since the function of $t$ is even we can change the interval to $(-\infty, \infty)$ and add a factor of $1/2$.
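    To automate the coefficient matching described above, one can expand $(1 + x)^{-(2n+1)}$ with the binomial theorem and solve for the $p_{j}$ one at a time, in exact rational arithmetic. A Python sketch (my own code, not from the post):

```python
# Solve for p_1, p_2, ... by equating coefficients of x^j in
#   sum_n p_n x^(2n)  =  sum_n 4^n p_n x^n / (1+x)^(2n+1).
# The coefficient of x^k in (1+x)^(-m) is (-1)^k * C(m+k-1, k).

from fractions import Fraction
from math import comb

def inv_binom(m, k):
    return Fraction((-1) ** k * comb(m + k - 1, k))

N = 8
p = [Fraction(1)]                       # p_0 = 1
for j in range(1, N + 1):
    # LHS coefficient of x^j is p_{j/2} for even j, zero for odd j
    lhs = p[j // 2] if j % 2 == 0 else Fraction(0)
    known = sum(Fraction(4) ** n * p[n] * inv_binom(2 * n + 1, j - n)
                for n in range(j))
    # the only new unknown at order j is p_j, with coefficient 4^j
    p.append((lhs - known) / Fraction(4) ** j)
```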


  4. Hello, Paramanand,
    Thanks for your help. I now understand the t=b*tan(theta) substitution completely, and will follow that further later. Next I need to go back and understand the binomial theorem again. It has been quite a while, but I can do that piece on my own.

    Thanks again, --Glenn Keller

  5. Hello, Paramanand,
    I was able to get through everything on this page now, thanks. With the binomial multiplying it was not too bad once I got a pattern going, although I had to be very careful about sign mistakes when getting up around (1+x)^13.

    It bothered me about the y=sqrt(2) not in the series convergence area. It took a long time to think of another way, but I think if you choose x=cos(theta) and y=1/sqrt(2) instead, you can get to the same place using:
    M(1,sqrt(2)) = M(sqrt(2),1) = M(1,1/sqrt(2)) * sqrt(2).

    I was confused by the b*tan(theta) proof final conclusion, but when I read something in Borwein's "PI & the AGM" book the penny dropped. I didn't realize that the invariance under the AGM transformation proved this. However, it is obvious now, and probably was to you all along:
    M(a,b)= M( M(a,b),M(a,b) ) == q.
    Substituting q for a & b into the eqn with the "a^2*cos^2" gives the result desired. Such a wonderfully simple proof, or so it seems now.

    Looks like it is going to take a bit to get through the 2nd kind ellipticals. The end result of the pi calc works pretty beautifully, it will take a while to understand the path to that reasonably.

    Best Regards, --Glenn Keller

  6. @Glenn Keller,
    I am glad that you took some effort to understand the calculations done in this post, especially going till $(1 + x)^{13}$. Given the effort you put, I think you will not have a serious difficulty in dealing with the elliptic integrals of second kind (especially the second proof is much easier to handle compared to the first proof for formula of elliptic integrals of second kind).


  7. It'd be awesome if you could show how Gauss worked out the power series. I've read quite a few papers in this area, and most authors skip that part.

  8. @Anonymous,
    If you read the comments you will find how one can compute the coefficients for the series of $1/M(1 - x, 1 + x)$ by hand calculation. But to calculate the general term $p_{n}$ you need to read the paper "Gauss, recurrence relations, and the agM" by Stacy G. Langton. Unfortunately this paper does not appear to be available online for free now. It was available when I wrote this post, but I am unable to find that copy right now.