In the previous post we established the general binomial theorem using Taylor's theorem, which relies on derivatives in a crucial manner. In this post we present another approach to the general binomial theorem by studying the properties of the binomial series itself. Needless to say, this approach requires some basic understanding of infinite series, and we will assume that the reader is familiar with the ideas of convergence/divergence of an infinite series and with some of the tests for convergence of a series.
Apart from the basic understanding of infinite series we will require a fundamental theorem concerning the multiplication of two infinite series. This we present next.
Multiplication of Infinite Series
Consider two infinite series \sum a_{n} = a_{1} + a_{2} + \cdots + a_{n} + \cdots and \sum b_{n} = b_{1} + b_{2} + \cdots + b_{n} + \cdots If we multiply them term by term (i.e. multiply each term of the first series by the first term of the second series, then each term of the first series by the second term of the second series, and so on) we get a sum (whose meaning we are yet to define) like the following \begin{align} a_{1}b_{1} &+ a_{1}b_{2} + a_{1}b_{3} + \cdots +\notag\\ a_{2}b_{1} &+ a_{2}b_{2} + a_{2}b_{3} + \cdots +\notag\\ a_{3}b_{1} &+ a_{3}b_{2} + a_{3}b_{3} + \cdots +\notag\\ \cdots &+ \cdots +\notag\\ a_{n}b_{1} &+ a_{n}b_{2} + \cdots +\notag\\ \cdots &+ \cdots +\notag \end{align} In order to give this two-dimensional expression a meaning it is better to regroup the terms and arrange them in a linear fashion to make a new infinite series. One particularly nice way to group the terms is to add the diagonal entries and get terms like (a_{1}b_{1}), (a_{1}b_{2} + a_{2}b_{1}), \cdots and in general the n^{\text{th}} group is c_{n} = a_{1}b_{n} + a_{2}b_{n - 1} + \cdots + a_{n - 1}b_{2} + a_{n}b_{1} and then we have another series \sum c_{n} = c_{1} + c_{2} + \cdots + c_{n} + \cdots Since the series \sum c_{n} eventually captures every product a_{i}b_{j} of terms of \sum a_{n} and \sum b_{n}, it is reasonable to expect that the sum of \sum c_{n} will be the product of the sums of \sum a_{n} and \sum b_{n}. This is in fact true under very general circumstances and we will establish a result to that effect. Before we proceed to do that we first establish certain lemmas on sequences.

Lemma 1: If \lim_{n \to \infty}a_{n} = A then \lim_{n \to \infty}\frac{a_{1} + a_{2} + \cdots + a_{n}}{n} = A

Let t_{n} = a_{n} - A so that t_{n} \to 0 and we can see that \frac{a_{1} + a_{2} + \cdots + a_{n}}{n} = \frac{t_{1} + t_{2} + \cdots + t_{n}}{n} + A and hence it suffices to prove that \frac{t_{1} + t_{2} + \cdots + t_{n}}{n} \to 0 under the condition that t_{n} \to 0. Consider s_{n} = [\sqrt{n}] (greatest integer not exceeding \sqrt{n}) so that s_{n} is an integer and s_{n} \to \infty, s_{n}/n \to 0 as n \to \infty. Let \epsilon > 0 be given. Since t_{n} \to 0 there is a positive integer n_{0} such that |t_{n}| < \epsilon / 2 for all n \geq n_{0}. Further, since s_{n} \to \infty, there is a positive integer m_{1} such that s_{n} > n_{0} for n \geq m_{1}. Thus it follows that |t_{n}| < \epsilon/2 for n \geq s_{m_{1}}. Again t_{n} \to 0, hence the sequence t_{n} is bounded, say |t_{n}| < K for all n. Further note that s_{n}/n \to 0, hence there is a positive integer m_{2} such that |s_{n}/n| < \epsilon/(2K) for all n \geq m_{2}. Let T_{n} = \frac{t_{1} + t_{2} + \cdots + t_{n}}{n} Then for n \geq m = \max(m_{1}, m_{2}) (note that for such n every index i > s_{m} satisfies i > s_{m} \geq s_{m_{1}}, so |t_{i}| < \epsilon/2) \begin{align} |T_{n}| &= \left|\frac{t_{1} + t_{2} + \cdots + t_{n}}{n}\right|\notag\\ &\leq \frac{|t_{1}| + |t_{2}| + \cdots + |t_{s_{m}}|}{n} + \frac{|t_{s_{m} + 1}| + |t_{s_{m} + 2}| + \cdots + |t_{n}|}{n}\notag\\ &< \frac{s_{m}K}{n} + \frac{n - s_{m}}{n}\cdot\frac{\epsilon}{2}\notag\\ &< \frac{s_{n}}{n}\cdot K + \frac{\epsilon}{2}\notag\\ &< \frac{\epsilon}{2K}\cdot K + \frac{\epsilon}{2}\notag\\ &= \epsilon\notag \end{align} and therefore T_{n} \to 0 as n \to \infty. This completes the proof of the lemma.
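For readers who like to experiment, Lemma 1 (a form of Cesàro's theorem on averages) is easy to check numerically. Here is a minimal Python sketch, purely illustrative and not part of the argument; the sample sequence a_n = 1 + (-1)^n/n and the helper name cesaro_means are my own choices.

```python
# Numerical sketch of Lemma 1: if a_n -> A then the running averages
# (a_1 + a_2 + ... + a_n)/n also tend to A.

def cesaro_means(seq):
    """Return the list of averages (a_1 + ... + a_n)/n for n = 1, 2, ..."""
    means = []
    total = 0.0
    for n, a in enumerate(seq, start=1):
        total += a
        means.append(total / n)
    return means

# Example sequence: a_n = 1 + (-1)^n / n, which converges to A = 1.
a = [1 + (-1) ** n / n for n in range(1, 10001)]
m = cesaro_means(a)
print(a[-1], m[-1])  # both values are close to 1
```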
Using this lemma we prove that:
Lemma 2: If a_{n} \to A, b_{n} \to B as n \to \infty then \lim_{n \to \infty}\frac{a_{1}b_{n} + a_{2}b_{n - 1} + \cdots + a_{n - 1}b_{2} + a_{n}b_{1}}{n} = AB

Let a_{n} = A + t_{n} so that t_{n} \to 0 as n \to \infty. Then \begin{align} c_{n} &= \frac{a_{1}b_{n} + a_{2}b_{n - 1} + \cdots + a_{n - 1}b_{2} + a_{n}b_{1}}{n}\notag\\ &= A\cdot\frac{b_{1} + b_{2} + \cdots + b_{n}}{n} + \frac{t_{1}b_{n} + t_{2}b_{n - 1} + \cdots + t_{n - 1}b_{2} + t_{n}b_{1}}{n}\notag \end{align} Since b_{n} \to B the sequence b_{n} is bounded, say |b_{n}| < K for all n. Now the first term in the above equation tends to AB (by Lemma 1) and the absolute value of the second fraction is at most K\cdot\frac{|t_{1}| + |t_{2}| + \cdots + |t_{n}|}{n} which tends to K \cdot 0 = 0 by Lemma 1. Hence it follows that c_{n} \to AB as n \to \infty.
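Lemma 2 can likewise be checked numerically. The sketch below is only an illustration; the two sample sequences and the helper name diagonal_mean are arbitrary choices of mine.

```python
# Numerical sketch of Lemma 2: if a_n -> A and b_n -> B then
# (a_1*b_n + a_2*b_{n-1} + ... + a_n*b_1)/n -> A*B.

def diagonal_mean(a, b, n):
    """Return (a[0]*b[n-1] + a[1]*b[n-2] + ... + a[n-1]*b[0]) / n."""
    return sum(a[i] * b[n - 1 - i] for i in range(n)) / n

N = 5000
a = [2 + 1 / k for k in range(1, N + 1)]   # a_n -> A = 2
b = [3 - 1 / k for k in range(1, N + 1)]   # b_n -> B = 3
print(diagonal_mean(a, b, N))  # close to A*B = 6
```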
We are now ready to prove Abel's theorem on multiplication of infinite series:
Abel's Theorem: Let a_{n}, b_{n} be any two sequences and let \begin{align} A_{n} &= a_{1} + a_{2} + \cdots + a_{n}\notag\\ B_{n} &= b_{1} + b_{2} + \cdots + b_{n}\notag\\ c_{n} &= a_{1}b_{n} + a_{2}b_{n - 1} + \cdots + a_{n - 1}b_{2} + a_{n}b_{1}\notag\\ C_{n} &= c_{1} + c_{2} + \cdots + c_{n}\notag \end{align} If A_{n} \to A, B_{n} \to B, C_{n} \to C as n \to \infty then C = AB.
The theorem says that if \sum a_{n}, \sum b_{n} are two convergent series and \sum c_{n} is the series formed by term by term multiplication of \sum a_{n}, \sum b_{n} (with terms grouped in a manner explained above) and further if \sum c_{n} is convergent then \sum c_{n} = (\sum a_{n})(\sum b_{n}).
The proof of the theorem is based on the simple idea that we can write a_{n} = A_{n} - A_{n - 1}, b_{n} = B_{n} - B_{n - 1} for n > 1 and a_{1} = A_{1}, b_{1} = B_{1}. We make the convention that A_{0} = B_{0} = 0 so that the above relations hold for all n. Using these relations (and collecting in C_{n} = c_{1} + c_{2} + \cdots + c_{n} all the products which contain a given a_{i}, and similarly for b_{i}) it is easily proved that \begin{align} C_{n} &= a_{1}B_{n} + a_{2}B_{n - 1} + \cdots + a_{n}B_{1}\notag\\ &= b_{1}A_{n} + b_{2}A_{n - 1} + \cdots + b_{n}A_{1}\notag\\ \sum_{i = 1}^{n}C_{i} &= A_{1}B_{n} + A_{2}B_{n - 1} + \cdots + A_{n}B_{1}\notag \end{align} By Lemma 2 it is clear that \frac{1}{n}\cdot\sum_{i = 1}^{n}C_{i} = \frac{A_{1}B_{n} + A_{2}B_{n - 1} + \cdots + A_{n}B_{1}}{n} \to AB and further C_{n} \to C, therefore by Lemma 1 \frac{1}{n}\cdot\sum_{i = 1}^{n}C_{i} \to C and therefore C = AB.
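As a small sanity check of Abel's theorem, here is an illustrative Python sketch of my own, using geometric series so that all three series converge quickly; the value x = 0.5 and the truncation at 60 terms are arbitrary.

```python
# Check of Abel's theorem: take a_n = b_n = x^(n-1) with x = 0.5, so that
# sum a_n = sum b_n = 2.  The diagonally grouped product series then has
# c_n = a_1*b_n + ... + a_n*b_1 = n * x^(n-1), and C should equal A*B = 4.

x = 0.5
N = 60

a = [x ** (n - 1) for n in range(1, N + 1)]
b = a[:]
c = [sum(a[i] * b[n - 1 - i] for i in range(n)) for n in range(1, N + 1)]

A, B, C = sum(a), sum(b), sum(c)
print(A, B, C)         # approximately 2, 2, 4
print(abs(C - A * B))  # very small
```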
Multiplication of Power Series
The above rule for multiplication of infinite series can be used to multiply special kinds of series called power series. A power series is of the form f(x) = \sum_{n = 0}^{\infty}a_{n}x^{n} = a_{0} + a_{1}x + a_{2}x^{2} + \cdots\tag{1} where a_{n} is a sequence and x is a variable. If for some values of x lying in a certain set A the above series is convergent then its sum defines a function from A \to \mathbb{R}. Clearly the series is convergent if x = 0 and it is possible that it may be convergent for other values of x. Power series have the special property that if they are convergent for x = r then they are convergent for all values of x with |x| < |r|. This is not difficult to prove.

Let the series (1) be convergent for x = r \neq 0. Then a_{n}r^{n} \to 0 and hence |a_{n}r^{n}| < K for all n and some K. Let |x| = r' < |r|; then |a_{n}x^{n}| = |a_{n}r^{n}|\left(\frac{r'}{|r|}\right)^{n} < K\left(\frac{r'}{|r|}\right)^{n} and the series is convergent (indeed absolutely convergent) by comparison with the geometric series \sum(r'/|r|)^{n}.
Thus the region of convergence of a power series is always an interval of the form (-R, R) (possibly together with one or both endpoints), where R may also be \infty. The power series may or may not converge for x = \pm R, and R is called the radius of convergence (the word radius comes from the fact that if x is treated as a complex variable then the region of convergence turns out to be a disc of radius R).
Using the rule for multiplication of infinite series we can multiply two power series f(x) = \sum_{n = 0}^{\infty}a_{n}x^{n}, g(x) = \sum_{n = 0}^{\infty}b_{n}x^{n} to get another power series h(x) = \sum_{n = 0}^{\infty}c_{n}x^{n} where c_{n} = a_{0}b_{n} + a_{1}b_{n - 1} + \cdots + a_{n - 1}b_{1} + a_{n}b_{0} Whether this new series is convergent or not needs to be determined separately.
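Here is a minimal sketch of this coefficient rule in Python; the helper name cauchy_product and the example series are my own, and truncating to finitely many coefficients is of course only an approximation of the infinite series.

```python
# Multiply two power series via their coefficient lists:
# c_n = a_0*b_n + a_1*b_{n-1} + ... + a_n*b_0.

def cauchy_product(a, b):
    """Coefficients of the product series, truncated to the shorter input."""
    n_terms = min(len(a), len(b))
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(n_terms)]

# Example: 1/(1-x) = 1 + x + x^2 + ... ; squaring it should give the
# coefficients of 1/(1-x)^2, namely 1, 2, 3, ...
ones = [1] * 10
print(cauchy_product(ones, ones))  # [1, 2, 3, ..., 10]
```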
Exponential Property of the Binomial Series
We will now study a very important property of the binomial series f(x, n) = 1 + \binom{n}{1}x + \binom{n}{2}x^{2} + \cdots = \sum_{m = 0}^{\infty}\binom{n}{m}x^{m}\tag{2} where n is a real number. If p, q are real and we multiply the two binomial series f(x, p), f(x, q) then we get another series h(x) = \sum_{m = 0}^{\infty}c_{m}x^{m} where c_{m} = \binom{p}{0}\binom{q}{m} + \binom{p}{1}\binom{q}{m - 1} + \cdots + \binom{p}{m - 1}\binom{q}{1} + \binom{p}{m}\binom{q}{0}\tag{3} The challenge is to evaluate the coefficient c_{m} in terms of a simple formula.

Note that if p, q are positive integers then both the series f(x, p), f(x, q) turn out to be finite because \binom{p}{m}, \binom{q}{m} become 0 as soon as m exceeds p or q respectively. Moreover in this case we know that f(x, p) = (1 + x)^{p}, f(x, q) = (1 + x)^{q} and therefore f(x, p)f(x, q) = (1 + x)^{p + q}, and the coefficient of x^{m} in (1 + x)^{p + q} is \binom{p + q}{m} (it becomes 0 as soon as m exceeds p + q). Thus it follows from multiplication of series that \binom{p}{0}\binom{q}{m} + \binom{p}{1}\binom{q}{m - 1} + \cdots + \binom{p}{m - 1}\binom{q}{1} + \binom{p}{m}\binom{q}{0} = \binom{p + q}{m} for all positive integers p, q and all non-negative integers m. Note that by definition the expression \binom{p}{m} is a polynomial in p, so the above equation is an identity between polynomials in the two variables p, q which holds for all positive integral values of p, q. Since two polynomials in p which agree for infinitely many values of p must be identical (and similarly for q), the identity must hold for all real values of p, q. The evaluation of c_{m} from equation (3) is thus complete and we have c_{m} = \binom{p}{0}\binom{q}{m} + \binom{p}{1}\binom{q}{m - 1} + \cdots + \binom{p}{m - 1}\binom{q}{1} + \binom{p}{m}\binom{q}{0} = \binom{p + q}{m}\tag{4} for all real values of p, q. The series h(x) = \sum c_{m}x^{m} thus turns out to be the binomial series f(x, p + q). From Abel's theorem on multiplication of infinite series it follows that f(x, p)f(x, q) = f(x, p + q) whenever all the three binomial series involved are convergent. By the ratio test it is easily proved that the binomial series is convergent whenever |x| < 1. Thus we have the following result:
The binomial series f(x, n) = 1 + \binom{n}{1}x + \binom{n}{2}x^{2} + \cdots = \sum_{m = 0}^{\infty}\binom{n}{m}x^{m} is convergent for all values of x with |x| < 1 and if |x| < 1 then it satisfies the following functional equation f(x, p)f(x, q) = f(x, p + q)\tag{5} for all real values of p, q.
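Identity (4) (Vandermonde's identity extended to real indices) is also easy to test numerically. The following sketch is illustrative only; the values p = 0.5, q = -1.3, m = 6 are arbitrary, and the helper gen_binom simply implements the generalized binomial coefficient from its definition.

```python
# Numerical check of identity (4):
# sum_{k=0}^{m} C(p, k) * C(q, m-k) = C(p+q, m) for real p, q.

def gen_binom(n, k):
    """Generalized binomial coefficient n(n-1)...(n-k+1)/k! for real n."""
    result = 1.0
    for i in range(k):
        result *= (n - i) / (i + 1)
    return result

p, q, m = 0.5, -1.3, 6
lhs = sum(gen_binom(p, k) * gen_binom(q, m - k) for k in range(m + 1))
rhs = gen_binom(p + q, m)
print(lhs, rhs)  # the two values agree up to rounding
```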
The functional equation (5) reminds us of the property satisfied by the exponential function, namely F(x + y) = F(x)F(y), as far as the parameter p of f(x, p) is concerned. By simple algebraic arguments we can thus establish that f(x, n) = \{f(x, 1)\}^{n} = (1 + x)^{n} for all rational values of n. This completes the proof of the general binomial theorem when the index n is rational. The extension to irrational values of n is achieved by noting that f(x, n) is a continuous function of n for a fixed value of x with |x| < 1.
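A quick numerical sketch of the functional equation (5) and of the conclusion f(x, n) = (1 + x)^n; the truncation at 200 terms and the sample values of x, p, q are arbitrary illustrative choices of mine.

```python
# For |x| < 1 the partial sums of f(x, p) * f(x, q) should match those of
# f(x, p+q), and both should agree with the closed form (1 + x)^(p+q).

def f(x, n, terms=200):
    """Partial sum of the binomial series 1 + C(n,1)x + C(n,2)x^2 + ..."""
    total, coeff = 0.0, 1.0
    for m in range(terms):
        total += coeff * x ** m
        coeff *= (n - m) / (m + 1)   # C(n, m+1) = C(n, m) * (n-m)/(m+1)
    return total

x, p, q = 0.4, 1.7, -0.9
print(f(x, p) * f(x, q))   # approximately f(x, p+q)
print(f(x, p + q))
print((1 + x) ** (p + q))  # the closed form
```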
Behavior of Binomial Series at x = \pm 1
For the sake of completeness it is best to discuss the behavior of the binomial series f(x, n) of equation (2) for x = \pm 1. Let's first consider x = 1, where we need to check the convergence (and find the sum if it is convergent) of the series f(1, n) = 1 + \binom{n}{1} + \binom{n}{2} + \cdots = \sum_{m = 0}^{\infty}\binom{n}{m} The general term of this series is a_{m} = \binom{n}{m} = \frac{n(n - 1)(n - 2)\cdots (n - m + 1)}{m!} and clearly if n \leq -1 then \left|\binom{n}{m}\right| \geq 1 so that the general term of the series does not tend to 0. Therefore the binomial series does not converge if n \leq -1. If n > -1 then we can see that \frac{a_{m + 1}}{a_{m}} = \frac{n - m}{m + 1} = -\left(1 - \frac{n + 1}{m + 1}\right) so that a_{m} ultimately alternates in sign and, since n > -1, the terms decrease in absolute value after a certain value of m. Next we note that for m > n \log\left|\frac{a_{m + 1}}{a_{m}}\right| = \log\left(\frac{m - n}{m + 1}\right) = \log\left(1 - \frac{n + 1}{m + 1}\right) < - \frac{n + 1}{m + 1} and hence \log\left|\frac{a_{m + p}}{a_{m}}\right| < - (n + 1)\sum_{i = 1}^{p}\frac{1}{m + i} and the expression on the right tends to -\infty as p \to \infty. It follows that a_{m + p} \to 0 as p \to \infty, i.e. a_{m} \to 0 as m \to \infty. It follows from the Leibniz test for alternating series that the series under discussion is convergent if n > -1. To calculate its sum we apply Taylor's theorem to the function g(x) = (1 + x)^{n} and we have 2^{n} = g(1) = 1 + \binom{n}{1} + \binom{n}{2} + \cdots + \binom{n}{m - 1} + R_{m} where the remainder R_{m} in Lagrange's form is given by R_{m} = \frac{g^{(m)}(\theta)}{m!} = \binom{n}{m}(1 + \theta)^{n - m} for some \theta with 0 < \theta < 1, and hence for m > n we have |R_{m}| < \left|\binom{n}{m}\right| = |a_{m}| and since a_{m} \to 0 as m \to \infty for n > -1, it follows that R_{m} \to 0 as m \to \infty for n > -1. Thus we have the following result:

The binomial series f(x, n) = 1 + \binom{n}{1}x + \binom{n}{2}x^{2} + \cdots + \binom{n}{m}x^{m} + \cdots for x = 1 is convergent precisely when n > - 1 and then its sum, as expected, is equal to 2^{n}.
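A rough numerical check of this result (illustrative only; convergence at x = 1 is slow when n is close to -1, so the partial sums only approximate 2^n, and the sample values of n are arbitrary):

```python
# Partial sums of f(1, n) = 1 + C(n,1) + C(n,2) + ... compared with 2^n.

def binom_series_at_one(n, terms):
    """Partial sum of sum_m C(n, m), using C(n, m+1) = C(n, m)*(n-m)/(m+1)."""
    total, coeff = 0.0, 1.0
    for m in range(terms):
        total += coeff
        coeff *= (n - m) / (m + 1)
    return total

for n in (-0.5, 0.5, 2.5):
    print(n, binom_series_at_one(n, 100000), 2 ** n)
```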
Next we consider the behavior of the series f(x, n) for x = -1. If we write n = -p then we can see that the general term is given by (-1)^{m}\binom{n}{m} = (-1)^{m}\binom{-p}{m} = \frac{p(p + 1)\cdots (p + m - 1)}{m!} and we can sum the series directly to get \begin{align} 1 + p &+ \frac{p(p + 1)}{2!} + \cdots + \frac{p(p + 1)\cdots (p + m - 1)}{m!}\notag\\ &= \frac{(p + 1)(p + 2)\cdots (p + m)}{m!}\notag\\ &= (-1)^{m}\binom{n - 1}{m}\notag \end{align} and we have seen earlier that this final expression tends to 0 as m \to \infty if and only if n - 1 > -1, i.e. n > 0; for n \leq 0 the partial sums fail to converge (apart from the trivial case n = 0, when the series reduces to its first term 1). Hence the binomial series for x = -1 is convergent for n > 0 and then its sum is 0.
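The closed form for the partial sums at x = -1 can also be checked directly; in this sketch the values n = 0.7 and m = 500 are arbitrary, and gen_binom is the same generalized binomial coefficient helper used earlier.

```python
# The m-th partial sum of sum_k (-1)^k C(n, k) equals (-1)^m C(n-1, m),
# which tends to 0 as m -> infinity precisely when n > 0.

def gen_binom(n, k):
    result = 1.0
    for i in range(k):
        result *= (n - i) / (i + 1)
    return result

n, m = 0.7, 500
partial = sum((-1) ** k * gen_binom(n, k) for k in range(m + 1))
print(partial, (-1) ** m * gen_binom(n - 1, m))  # the two values agree
```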
To summarize, the binomial series f(x, n) = 1 + nx + \frac{n(n - 1)}{2!}x^{2} + \frac{n(n - 1)(n - 2)}{3!}x^{3} + \cdots has the following behavior:
- if n is 0 or a positive integer then the series terminates at the (n + 1)-th term and its sum is (1 + x)^{n} (which equals 1 when n = 0) regardless of the value of x.
- it is convergent for all real values of x with |x| < 1 and all real values of n and its sum is (1 + x)^{n}.
- for x = 1 it is convergent for all real values of n with n > -1 and its sum is 2^{n}.
- for x = -1 it is convergent for all real values of n with n > 0 and its sum is 0.
- for any other combination of real values of x, n the series does not converge.