Abel's test
In mathematics, Abel's test (also known as Abel's criterion) is a method of testing for the convergence of an infinite series. The test is named after mathematician Niels Henrik Abel. There are two slightly different versions of Abel's test – one is used with series of real numbers, and the other is used with power series in complex analysis. Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions dependent on parameters.
Abel's test in real analysis
Suppose the following statements are true:

1. $\sum a_n$ is a convergent series,
2. $\{b_n\}$ is a monotone sequence, and
3. $\{b_n\}$ is bounded.

Then $\sum a_n b_n$ is also convergent.
It is important to understand that this test is mainly pertinent and useful in the context of non-absolutely convergent series. For absolutely convergent series this theorem, albeit true, is almost self-evident.
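As a concrete numerical illustration (an example chosen here, not taken from the article): take $a_n = (-1)^{n+1}/n$, whose series converges by the alternating series test, and $b_n = 1 + 1/n$, which is monotone decreasing and bounded. Abel's test then guarantees that $\sum a_n b_n$ converges; in this particular case the sum can be split and checked against the closed form $\ln 2 + \pi^2/12$.

```python
import math

# Illustration of Abel's test (hypothetical example):
#   a_n = (-1)^(n+1)/n  -> sum a_n converges (alternating harmonic series)
#   b_n = 1 + 1/n       -> monotone decreasing and bounded
# so Abel's test gives convergence of sum a_n * b_n.
# Here the sum splits as sum a_n + sum (-1)^(n+1)/n^2 = ln 2 + pi^2/12.

N = 100_000
partial = sum((-1) ** (n + 1) / n * (1 + 1 / n) for n in range(1, N + 1))

closed_form = math.log(2) + math.pi ** 2 / 12
print(partial, closed_form)  # agree to roughly 1/N
```

The closed form is only available because this example was chosen to be summable explicitly; Abel's test itself asserts convergence without identifying the limit.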
Abel's test in complex analysis
A closely related convergence test, also known as Abel's test, can often be used to establish the convergence of a power series on the boundary of its circle of convergence. Specifically, Abel's test states that if a sequence of positive real numbers $a_n$ is decreasing monotonically (or at least that for all $n$ greater than some natural number $m$, we have $a_{n+1} \le a_n$) with

$$\lim_{n \to \infty} a_n = 0,$$

then the power series

$$f(z) = \sum_{n=0}^{\infty} a_n z^n$$

converges everywhere on the closed unit circle, except when $z = 1$. Abel's test cannot be applied when $z = 1$, so convergence at that single point must be investigated separately. Notice that Abel's test implies in particular that the radius of convergence is at least 1. It can also be applied to a power series with radius of convergence $R \ne 1$ by a simple change of variables $\zeta = z/R$.^{[1]} Notice that Abel's test is a generalization of the Leibniz criterion by taking $z = -1$.
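A numerical sketch of the boundary statement (an example chosen here for illustration): with $a_n = 1/n$, which decreases monotonically to zero, the series $\sum_{n \ge 1} z^n/n$ converges at every point of the unit circle except $z = 1$; its value is $-\log(1-z)$, which we can check at $z = i$.

```python
import cmath

# Abel's test on the unit circle (illustrative example): a_n = 1/n decreases
# to 0, so sum_{n>=1} z^n / n converges for every |z| = 1 with z != 1.
# The limit is -log(1 - z); we check the boundary point z = i.

z = 1j
N = 100_000
partial = 0j
zp = 1 + 0j
for n in range(1, N + 1):
    zp *= z               # zp = z^n, accumulated to avoid repeated powers
    partial += zp / n

limit = -cmath.log(1 - z)
print(partial, limit)  # agree closely for large N
```

At $z = 1$ the same series is the harmonic series, which diverges, matching the exclusion of that point in the statement.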
Proof of Abel's test: Suppose that $z$ is a point on the unit circle, $z \ne 1$. For each $n \ge 1$, we define

$$f_n(z) := \sum_{k=0}^{n} a_k z^k.$$

By multiplying this function by $(1 - z)$, we obtain

$$(1-z) f_n(z) = \sum_{k=0}^{n} a_k (1-z) z^k = \sum_{k=0}^{n} a_k z^k - \sum_{k=0}^{n} a_k z^{k+1} = a_0 - a_n z^{n+1} + \sum_{k=1}^{n} (a_k - a_{k-1}) z^k.$$

The first summand is constant, the second converges uniformly to zero (since by assumption the sequence $a_n$ converges to zero). It only remains to show that the series $\sum_{k=1}^{\infty} (a_k - a_{k-1}) z^k$ converges. We will show this by showing that it even converges absolutely:

$$\sum_{k=1}^{\infty} \left| (a_k - a_{k-1}) z^k \right| = \sum_{k=1}^{\infty} |a_k - a_{k-1}| \, |z|^k \le \sum_{k=1}^{\infty} (a_{k-1} - a_k),$$

where the last sum is a converging telescoping sum. The absolute value vanished because the sequence $a_n$ is decreasing by assumption.

Hence, the sequence $(1-z) f_n(z)$ converges (even uniformly) on the closed unit disc. Since $z \ne 1$, we may divide by $(1 - z)$ and obtain the result.
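The summation-by-parts identity at the heart of the proof can be verified numerically (a small sanity check; the choice $a_k = 1/(k+1)$ and the particular boundary point are assumptions made here for illustration):

```python
import cmath

# Check of the identity used in the proof:
#   (1 - z) * f_n(z) = a_0 - a_n z^(n+1) + sum_{k=1}^{n} (a_k - a_{k-1}) z^k,
# where f_n(z) = sum_{k=0}^{n} a_k z^k.
# Here a_k = 1/(k+1) (a decreasing sequence chosen for illustration)
# and z is a point on the unit circle other than 1.

n = 50
a = [1 / (k + 1) for k in range(n + 1)]
z = cmath.exp(2j * cmath.pi / 7)  # |z| = 1, z != 1

f_n = sum(a[k] * z ** k for k in range(n + 1))
lhs = (1 - z) * f_n
rhs = a[0] - a[n] * z ** (n + 1) + sum(
    (a[k] - a[k - 1]) * z ** k for k in range(1, n + 1)
)

print(abs(lhs - rhs))  # ~0, up to floating-point roundoff
```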
Abel's uniform convergence test
Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions or an improper integration of functions dependent on parameters. It is related to Abel's test for the convergence of an ordinary series of real numbers, and the proof relies on the same technique of summation by parts.
The test is as follows. Let {g_{n}} be a uniformly bounded sequence of real-valued continuous functions on a set E such that g_{n+1}(x) ≤ g_{n}(x) for all x ∈ E and positive integers n, and let {f_{n}} be a sequence of real-valued functions such that the series Σf_{n}(x) converges uniformly on E. Then Σf_{n}(x)g_{n}(x) converges uniformly on E.
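The hypotheses can be illustrated numerically (an example constructed here, not from the article): take f_n(x) = (−1)^{n+1}/n, whose series converges uniformly because it does not depend on x, and g_n(x) = x^n on E = [0, 1], which is uniformly bounded by 1 and pointwise decreasing in n. The test then gives uniform convergence of Σ(−1)^{n+1} x^n/n, whose sum is ln(1 + x).

```python
import math

# Abel's uniform convergence test, illustrative example:
#   f_n(x) = (-1)^(n+1) / n   -> sum f_n converges uniformly (constant in x)
#   g_n(x) = x^n on E = [0,1] -> uniformly bounded by 1, g_{n+1}(x) <= g_n(x)
# The test gives uniform convergence of sum f_n(x) g_n(x) = ln(1 + x) on [0,1].

def partial_sum(x, N):
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, N + 1))

N = 5_000
grid = [i / 100 for i in range(101)]  # sample points of [0, 1]
worst = max(abs(partial_sum(x, N) - math.log(1 + x)) for x in grid)
print(worst)  # small uniformly in x (alternating-series bound <= 1/(N+1))
```

The sampled supremum of the error shrinks like 1/N independently of x, which is what uniform convergence means in practice; note that at x = 1 the series is only conditionally convergent, so no comparison with a convergent series of constants would give this conclusion.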
Notes
 ^ (Moretti, 1964, p. 91)
References
 Gino Moretti, Functions of a Complex Variable, Prentice-Hall, Inc., 1964
 Apostol, Tom M. (1974), Mathematical analysis (2nd ed.), Addison-Wesley, ISBN 978-0-201-00288-1
 Weisstein, Eric W. "Abel's uniform convergence test". MathWorld.
External links
 Proof (for real series) at PlanetMath.org