Thursday, February 26, 2009

The methods of Borel, Abel and Cesaro, Voronoi & Norlund part II

The methods we encountered previously, our P definition and the B and B' definitions of Borel to which it led, are quite powerful, capable of summing rapidly divergent and unruly series. Indeed, using the P method with a suitable auxiliary function one may give rise to further powerful methods. However, as Hardy notes in his Divergent Series, this power comes at a cost: these methods may be prone to failure in more subtle cases, such as slowly diverging series or ones with oscillating terms. Prior to further discussion, we submit for your inspection the following method-

Summation by Abelian means

For 0 <= λ_0 < λ_1 < λ_2 < ... with (λ_n) tending to infinity, suppose

f(t) = sum_{n = 0}^{infinity} (a_n).exp [-(λ_n)t] converges for all positive real values of t.

If,
lim_{t->0^+} f(t) = s

then,

a_0 + a_1 + a_2 + ... = s , (A, λ)

Summation by Abelian means (A, λ) embodies a whole class of methods (which may indeed be further generalised, see Hardy, Divergent Series, pp. 71-73) depending on the sequence (λ_n) chosen, though each such method can be shown to be regular, linear and stable. We are interested in the simplest case, where λ_n = n, whereupon we have

f(x) = sum_{n = 0}^{infinity} (a_n).x^n

where x = e^(-t), and hence convergent for |x| < 1

if,

lim_{x -> 1^-} f(x) = s
(this notation denotes the left-hand limit)
then,

a_0 + a_1 + a_2 + ... = s , (A)
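As a numerical sketch (the code and the helper name here are illustrative, not from the text): for Grandi's series 1 - 1 + 1 - 1 + ..., we have f(x) = sum (-1)^n x^n = 1/(1+x), so the A sum is 1/2, and truncated power sums at x close to 1 make the limit visible.

```python
# Numerical sketch of the (A) method on Grandi's series 1 - 1 + 1 - 1 + ...
# Here f(x) = sum (-1)^n x^n = 1/(1+x), so the Abel sum is 1/2.
# (abel_partial is an illustrative helper name, nothing standard.)

def abel_partial(coeffs, x):
    """Truncated power series sum a_0 + a_1 x + a_2 x^2 + ..."""
    return sum(a * x**n for n, a in enumerate(coeffs))

grandi = [(-1)**n for n in range(100000)]
for x in (0.9, 0.99, 0.999):
    print(x, abel_partial(grandi, x))  # tends toward 0.5 as x -> 1^-
```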

The 'A' stands for Abel summation, which one might find a curious choice of name given Abel's attitude towards divergent series. Indeed, the present method fits the legacy of Euler or Poisson better than that of Abel (we shall later see the intimate connection between a more powerful summation method attributed to Euler and Abel summation). However, it was Abel who employed a similar limiting argument, under different conditions to do with convergent partial sums of course, in his theorem that a series convergent in the Cauchy sense has the corresponding limit of its power series as its value, and so the method has come to bear his name.

The method is less powerful than the (B) and (B') methods discussed in the previous post, failing even to sum the geometric series outside its circle of convergence. However, it possesses the stability that Borel's methods in general lack, and it also generalises the Cesaro summation which we shall discuss in the next post.

Note: We may also mention in passing the method of Lambert summation, which likewise relies on a trick in the limit, as with Borel's integral method, but in a form closer to Abel summation.

We say,

a_0 + a_1 + a_2 + ... = s , (L)
if,
lim_{x -> 1^-} (1 - x). [sum_{n = 1}^{infinity} n.(a_n).(x^n) / (1 - x^n)] = s

Evaluating the limit of (1 - x) / (1 - x^n) at each n (it tends to 1/n as x -> 1^-, cancelling the factor of n) shows how this method works.
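A small numerical sketch (the helper name is illustrative): we take the series with a_n = (-1)^(n-1) for n >= 1, i.e. 1 - 1 + 1 - ..., whose Lambert sum should agree with its Abel sum, 1/2.

```python
# Numerical sketch of the (L) method: (1 - x) sum n a_n x^n / (1 - x^n)
# evaluated near x = 1 for a_n = (-1)^(n-1), n >= 1, i.e. 1 - 1 + 1 - ...,
# whose Lambert sum should agree with its Abel sum, 1/2.

def lambert_mean(a, x, terms):
    return (1 - x) * sum(n * a(n) * x**n / (1 - x**n)
                         for n in range(1, terms + 1))

alt = lambda n: (-1)**(n - 1)
for x in (0.9, 0.99, 0.999):
    print(x, lambert_mean(alt, x, 50000))  # approaches 0.5 as x -> 1^-
```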

Monday, February 23, 2009

The methods of Borel, Abel and Cesaro, Voronoi & Norlund part I

As promised, we shall consider some classic resummation procedures in concise detail.

Recall that the sum of an infinite series in the Cauchy sense is the limit of its partial sums.

We consider a very general and powerful method which relies on an alternative way of taking that limit.

Suppose s_n is the nth partial sum of the series a_0 + a_1 + a_2 + ... under consideration, that S(t) = (p_0)(s_0) + (p_1)(s_1).t + (p_2)(s_2).t^2 + .... is entire for non-negative coefficients p_n, and that P(t) = (p_0) + (p_1).t + (p_2).t^2 + ...

If lim_{t->infinity} S(t) / P(t) = s

then we say,

a_0 + a_1 + a_2 + ... = s , (P)

(while we have used P to refer to this summation method, it ought not be confused with Abel summation, for which P is sometimes used after Poisson, who also employed it in his investigations into Fourier series. Hardy in Divergent Series calls this the J method, pp. 79-80)

We put p_n = 1/(n!), and thus P(t) = e^t, to define the method of Borel summation. That is, if,

lim_{t->infinity} e^(-t) S(t)
= lim_{t->infinity} e^(-t) [(s_0) + (s_1).t + (s_2).(t^2)/2! + ... + (s_n).(t^n)/n! + ...] = s

then,

a_0 + a_1 + a_2 + ... = s , (B)
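As a numerical sketch of the (B) limit (the code and names are illustrative, not from the text): take the geometric series with z = -2, which diverges in the Cauchy sense; its partial sums are s_n = (1 - z^(n+1))/(1 - z), and e^(-t).S(t) should tend to 1/(1 - z) = 1/3.

```python
import math

# Numerical sketch of the (B) method on the geometric series with z = -2.
# Partial sums: s_n = (1 - z^(n+1)) / (1 - z); the Borel limit should be
# 1 / (1 - z) = 1/3, even though the series diverges in the Cauchy sense.

def borel_value(partial_sum, t, terms):
    """Truncation of e^(-t) * sum s_n t^n / n!, with t^n/n! built iteratively."""
    total, weight = 0.0, 1.0            # weight holds t^n / n!
    for n in range(terms):
        total += partial_sum(n) * weight
        weight *= t / (n + 1)
    return math.exp(-t) * total

z = -2.0
s = lambda n: (1 - z**(n + 1)) / (1 - z)
for t in (1.0, 5.0, 10.0):
    print(t, borel_value(s, t, 200))    # tends toward 1/3 as t grows
```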

Since, by a property of the Gamma function, we have

n! = integral_{t = 0}^{infinity} (t^n).e^(-t) dt

we may write a similar summation,

a_0 + a_1 + a_2 + ...
= sum_{n = 0}^{infinity} (a_n).[integral_{t = 0}^{infinity} (t^n).e^(-t) dt] / n!
= integral_{t = 0}^{infinity} [sum_{n = 0}^{infinity} (a_n).(t^n)/n!] e^(-t) dt = s

(the interchange of summation and integration here is formal; the resulting integral is taken as a definition in its own right below)

Should this last integral converge and the series within it have a non-zero radius of convergence, we say,

a_0 + a_1 + a_2 + ... = s , (B')

Should e^(-t).[sum_{n = 0}^{infinity} (a_n).(t^n)/n!] -> 0 as t -> infinity, the methods B and B' may easily be shown to be equivalent.

See Hardy, Divergent Series, pp. 182-183.


The Borel methods certainly allow us to sum more types of series than we had been able to thus far; for instance, it is easily shown that the B sum of 1 + z + z^2 + z^3 + ... is 1 / (1 - z) throughout the half-plane Re(z) < 1.

Sunday, February 22, 2009

A Diverging Appeal...

From abhorred 'inventions of the devil' to rigorously defined entities, we have briefly glimpsed the passage of divergent series through history. What the episode highlights is the freedom that comes of definitions that need not abide by intuitive notions on how a mathematical entity ought to be.

Restrictions, however, are not to be entirely disposed of when interesting results are pursued. A method of summation is certainly appealing when it is general and not overly contrived, as Hardy remarks in 'Divergent Series'.

There is also the matter of 'regularity', which is a first useful criterion by which to classify a given method. By claiming a method to be regular we mean that the sum it associates with a series will correspond to the sum in the traditional sense whenever the series converges. This serves to 'generalise' the notion of a sum.

We immediately note that the Ramanujan sense of sum we met previously is not a regular method. The finer points of that method are best seen in connection with the Euler-Maclaurin summation formula, which we shall come across later.

Are there any more discriminating properties by which we can classify our study of divergent series?

The following postulates are often useful:

1) First postulate of linearity
If a_0 + a_1 + a_2 + ... = S, then
k.a_0 + k.a_1 + k.a_2 + .... = kS

2) Second postulate of linearity
If a_0 + a_1 + a_2 + ... = S and b_0 + b_1 + b_2 + ... = T, then
(a_0 + b_0) + (a_1 + b_1) + ... = S + T

3) Postulate of stability
If a_0 + a_1 + a_2 + ... = S then a_1 + a_2 + ... = S - a_0 and vice versa

They are embodied in a large number of the methods we employ.
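As a numerical sketch of the stability postulate at work (using the Abel power-series method discussed elsewhere in these posts; the helper name is illustrative): Grandi's series 1 - 1 + 1 - ... has Abel sum 1/2, so removing its first term should leave a series summing to 1/2 - 1 = -1/2.

```python
# Numerical sketch of the stability postulate under Abel summation.
# Grandi's series 1 - 1 + 1 - ... has Abel sum 1/2; dropping a_0 = 1
# should leave -1 + 1 - 1 + ... with Abel sum 1/2 - 1 = -1/2.

def abel_value(a, x, terms):
    """Truncated Abel mean: sum_{n < terms} a(n) x^n."""
    return sum(a(n) * x**n for n in range(terms))

grandi = lambda n: (-1)**n
tail = lambda n: grandi(n + 1)        # the series with its first term removed

x = 0.9999
print(abel_value(grandi, x, 200000))  # near  0.5
print(abel_value(tail, x, 200000))    # near -0.5
```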

We shall conclude this discussion for now upon considering three interesting senses of sum.

The powers of definition!

Of course, using a summation technique to give the sum of a divergent series a meaning is the same sort of thing Euler was scoffed at for when he employed geometric sums.

Say,
S = 1 + r + r^2 + r^3 + r^4 + .... + r^(n-1)

(we could have made the initial term something other than 1, say 'a', but this only requires multiplying by 'a' wherever S arises)

Then,

S - Sr = S (1 - r) = 1 + r + r^2 + .... + r^(n-1) - r - r^2 - .... - r^n = 1 - r^n
(since all the terms in between 'telescope' out)

Thus, for r not equal to 1,

S = (1 - r^n) / (1 - r)
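A quick sanity check of the closed form (illustrative code, not from the text):

```python
# Check S = (1 - r^n) / (1 - r) against the direct sum 1 + r + ... + r^(n-1).
r, n = 0.5, 20
direct = sum(r**k for k in range(n))
closed = (1 - r**n) / (1 - r)
print(direct, closed)  # the two agree (up to floating-point rounding)
```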

Now for the infinite sum of S we need to find the limit of the sequence of partial sums,

lim_{n->infinity} S

Noting that r^n will only go to a finite value (zero to be precise) as n gets large without bound when -1 < r < 1, we then have

lim_{n->infinity} S = 1/ (1 - r)

as the infinite sum in the traditional sense!

Euler, in playing around with this identity, sought to remove the restriction of -1 < r < 1 and declared that

1 + r + r^2 + .... + r^n + .... ad inf. := 1 / (1 - r) ---------- (1)

is valid for all values of r save for r = 1

This leads to some intriguing examples, for instance, r = 2 gives,

1 + 2 + 4 + 8 + ..... ad inf. = -1

and similar seemingly irreconcilable identities for other values of r.

However, as Hardy fittingly remarks in his seminal work on this subject 'Divergent Series', it is a mistake to think of Euler as a 'loose mathematician'. He acknowledged that what he was doing was no longer the standard summation procedure that was in use (this was before Cauchy made it all rigorous) and treated these identities almost as they would be treated in the modern theory of divergent series. If anything, he had the proper sort of reservations about the matter, as his dictum 'summa cujusque seriei est valor expressionis illius finitae,...' leads us to believe.

His belief that a divergent series must always be assigned the same sum by different expressions kept him from the rigorous theory, though his ideas were well in advance of their time and his reasons were different from those of many of his more orthodox contemporaries.

Euler thought in terms of limits of power series in his consideration of these series. Essentially, where a power series represented the infinite series, and the corresponding function attained a limit, one would be able to associate that limit with the series as its sum.

Consider for instance,

1 + x + x^2 + .... + x^n + .... ad inf. := 1 / (1 - x)

valid for all |x| < 1 and, under say the A sense of summation, at boundary points other than x = 1.

The finite values that appear in the right hand side even when the left hand side expression diverges in the traditional sense can be made sense of by considering it as the remainder from a process of algebraic long division (note that this is the same sort of singularity we removed in finding the Ramanujan sum of H).

This sort of consideration, while it can be made rigorous, is not without counter-intuitive consequences. For instance, it implies that,

1 - 1 + 1 - 1 +....

and

1 + 0 - 1 + 1 + 0 - 1 + ....

while having essentially the same terms, yield different sums since the power series have different 'gaps'!
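This can be watched numerically (a sketch; we assume the second series repeats the pattern 1, 0, -1, and the helper name is illustrative). The first power series is 1/(1+x) -> 1/2, while the gapped one is (1 - x^2)/(1 - x^3) -> 2/3:

```python
# Numerical sketch of the 'gaps' phenomenon under Abel summation,
# assuming the second series repeats the pattern 1, 0, -1.
# 1 - 1 + 1 - ...           has power series 1/(1+x)         -> 1/2
# 1 + 0 - 1 + 1 + 0 - ...   has power series (1-x^2)/(1-x^3) -> 2/3

def abel_mean(coeff, x, terms):
    return sum(coeff(n) * x**n for n in range(terms))

plain = lambda n: (-1)**n
gapped = lambda n: (1, 0, -1)[n % 3]

x = 0.9999
print(abel_mean(plain, x, 200000))   # near 1/2
print(abel_mean(gapped, x, 200000))  # near 2/3
```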

But then, considering the rearrangement theorem of Riemann we encountered early on for the Cauchy sense of convergence, this doesn't seem so exaggerated a bullet to bite after all.

Euler's justification would come much later with the advent of analytic continuation in the theory of functions of a complex variable, whereby the domain of a given function can be extended under certain conditions.

If f and g are holomorphic on domains A and B respectively, and the functions coincide in the non-empty intersection of A and B, g is termed the analytic continuation of f to B and f is in turn termed the analytic continuation of g to A.

What is most striking here is that the analytic continuation, whenever it exists, is unique! It would seem Euler's attitude almost foreshadowed the implications of this powerful result.

What need be kept in mind is that while rigorous argument in mathematics makes for the most refined of intellectual achievements, it should never bog us down and force us along a single linear track in which we might find a naive comfort.

In retrospect, Abel sounds disappointed as he writes divergent series off while ushering in a new dawn of rigour-

“Divergent series are, in general, something terrible and it is a shame to base any proof on them. We can prove anything by using them and they have caused so much misery and created so many paradoxes. . . . . Finally my eyes were suddenly opened since, with the exception of the simplest cases, for instance the geometric series, we hardly find, in mathematics, any infinite series whose sum may be determined in a rigorous fashion, which means the most essential part of mathematics has no foundation. For the most part, it is true that the results are correct, which is very strange. I am working to find out why, a very interesting problem.”

Certainly, from this statement it is clear that Abel saw a significant enough problem to want to pursue it, though exactly the scarcity he remarks upon would become crucial for the more complete theory of functions we have today (thanks to the efforts of Abel, Cauchy, et al.).

To sum or not to sum...

Of course, the excursion taken in studying the harmonic series promises to be most fruitful in understanding how summation fits into the broader mathematical programme.

We have already seen the traditional idea of summation, where the sum of some terms (a_k) is,

S = a_0 + a_1 + a_2 + a_3 + ..... + a_(n-1) + a_n

and that taking the limit of S as n -> infinity gives us a way to sum the infinite series. This is essentially the idea behind the Cauchy sense of sum, a summation so familiar that it becomes second nature to think of it as synonymous with the very concept of summation.

If we were to stop here then we would have succeeded at the original goal, in that we have now defined to an acceptable precision what we mean by summation. However, this playing field in particular, perhaps more than any other in contemporary mathematics (even counting non-Euclidean geometries), opens doors to much more exotic possibilities.

It became plain to see that just as we have no general formula that can offer a closed form for each and every sum (even in the regular sense), there is no obligation that should limit us in extending the notion of a sum in different directions.

As a case in point, let us define the Ramanujan sum of a sequence (a_n) as,

S := lim_{N->infinity} [sum_{n = 1}^{N} a_n  - integral_{1}^{N} a(t) dt]   , (R)

Whereby we immediately have,

H := gamma , (R)

by just a matter of definition!
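Numerically (illustrative code, not from the text): with a_n = 1/n, the (R) expression is H_N - ln N, which indeed settles on gamma = 0.5772...:

```python
import math

# Numerical sketch of the (R) definition applied to the harmonic series:
# sum_{n=1}^{N} 1/n - integral_1^N dt/t = H_N - ln N -> gamma = 0.5772...

def ramanujan_harmonic(N):
    return sum(1.0 / n for n in range(1, N + 1)) - math.log(N)

for N in (10, 1000, 1000000):
    print(N, ramanujan_harmonic(N))  # approaches gamma
```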

Most interesting is that the Laurent expansion of Riemann's zeta function,

Z(s) = sum_{n = 1}^{infinity} n^(-s)

(note that Z(1) = H)

gives,

Z(s) = 1/(s-1)  + gamma + ....
(where the terms represented by the trailing dots vanish as s->1, see http://en.wikipedia.org/wiki/Riemann_zeta_function#Laurent_series)

Thus, the Ramanujan summation procedure may be considered as removing the particular singularity that made the sum diverge while retaining a constant that is then given the function of the sum of the series.

Indeed, Ramanujan's idea was to treat the constant that arose from the Euler-Maclaurin summation formula as a representative quantity, or more loosely, the 'sum' of the series.

Saturday, February 21, 2009

The Euler-Mascheroni constant

We saw before that the natural logarithm function has the Maclaurin expansion,

ln (1+x) = x - x^2/2 + x^3/3 - x^4/4 +..... ---------------------(2)

which is valid, converging absolutely, within the unit circle. Outside this we will begin seeing a discrepancy between the sum of a truncation of the series on the right hand side of (2) and ln (1+x). Using Taylor's theorem the error term can actually be calculated and the truncated series treated as an approximation to the function. Then, as x grows large without bound, what becomes of this discrepancy?

We define,
gamma = lim_{n->infinity} [1 + 1/2 + 1/3 + .... + 1/n - ln n]
 
We immediately see, by the bounds the Taylor series (2) imposes, that gamma exists.

Since gamma exists, we in turn have another way to prove that the harmonic series diverges, given that the natural logarithm does!