Friday, May 28, 2010

The Irwin-Hall distribution: A Problem to do with a Sum of Uniform Distributions

The following question appears at http://www.spellscroll.com/questionfull/184/ and seems to trace back to http://www.wilmott.com/

X_1, X_2,..., X_n are independent random variables, uniformly distributed on [0,1].
What is the probability that X_1 +X_2 +... +X_n < 1 ?

An attempt:

The sum of independent, identically distributed U[0,1] random variables is said to follow the Irwin-Hall distribution. It is clearly continuous and of interest, at least in the general problem of finding the distribution of a sum of random variables.

We can see that a general X_k, k= 1,...,n has characteristic function (given as a Fourier transform of its probability density function),

ϕ_k (t) = E[exp (it.X_k)] = (exp(it) - 1) / (it)

We have that,

E[exp (it.(X_1 +X_2 +... +X_n))] = E[(exp (it.X_1)) ... (exp (it.X_n))] = E[exp (it.X_1)] ... E[exp (it.X_n)]

(this last pair of equalities hold for any independent X_1, ..., X_n in fact)

Whence the characteristic function of our sum,

ϕ(t) = E[exp (it.(X_1 +X_2 +... +X_n))] = [(exp(it) - 1) / (it)]^n (= ϕ_1 (t) ... ϕ_n (t))

The approach thus far is clear: the characteristic function was invoked because it turns a sum of independent variables into a product.
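
Before continuing, a quick numerical sanity check: the value we should expect is 1/n!, the volume of the corner simplex of the unit cube. A Monte Carlo sketch in Python (the helper name is mine):

import random
from math import factorial

def p_sum_less_than_one(n, trials=10**5):
    # estimate P(X_1 + ... + X_n < 1) for X_i ~ U[0,1]
    hits = sum(1 for _ in range(trials)
               if sum(random.random() for _ in range(n)) < 1)
    return hits / trials

for n in (2, 3, 4):
    print(n, p_sum_less_than_one(n), 1/factorial(n))   # estimate vs 1/n!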

(continued...)

Monday, April 26, 2010

Solving cubics - Cardano's Method

With apologies for the lack of updates, now for something completely different!

Below is a general method of tackling cubics. I have seen the method explained in many an account, but conciseness and clarity are often amiss (ironically, in the mistaken hope that needless additions and illustrations further understanding!).
This is interesting not for actual computational use (though perhaps some will like that), but for the idea behind it. Simply put, we reduce the problem to solving a quadratic, then extract cube roots at the end.

So suppose our cubic is of the form,
ax^3 + bx^2 + cx + d = 0
First off we use the substitution y = x + b/(3a) (that is, x = y - b/(3a)) to reduce this to what's known as a 'depressed cubic' of the form,
y^3 = 3Gy + H

Now recall the expansion,
(p+q)^3 = 3pq(p+q) + p^3 + q^3

We note on comparison then that y=p+q is a solution to our cubic if we take,
pq = G and p^3 + q^3 = H

Now form the quadratic with p^3 and q^3 as roots, that is,

t^2 - Ht + G^3 = 0

Solving this we get two values which we can arbitrarily call p^3 and q^3 respectively.
Now we take cube roots for each, and in each case we get three cube roots (call the cube roots of unity 1, w and w^2 say, where w = (-1+i.sqrt(3))/2 or (-1-i.sqrt(3))/2, which you can verify for yourself).

Since y=p+q we need to be careful about which pairs of cube roots to match up, but on recalling that pq=G we get,
p+q, wp+(w^2)q and (w^2)p+wq

as the three desired roots of the cubic!
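
For those who wish to experiment, here is a minimal sketch of the method in Python (complex arithmetic via the standard cmath module; the function name solve_cubic is mine):

import cmath

def solve_cubic(a, b, c, d):
    # depress: x = y - b/(3a) turns ax^3 + bx^2 + cx + d = 0 into y^3 = 3Gy + H
    e = b / (3*a)
    G = e*e - c / (3*a)
    H = -2*e**3 + (c/a)*e - d/a
    # quadratic with roots p^3 and q^3: t^2 - H.t + G^3 = 0
    disc = cmath.sqrt(H*H - 4*G**3)
    p3 = (H + disc) / 2
    p = p3 ** (1/3)                      # principal complex cube root
    q = G / p if p != 0 else ((H - disc) / 2) ** (1/3)
    w = (-1 + cmath.sqrt(-3)) / 2        # primitive cube root of unity
    return [y - e for y in (p + q, w*p + w*w*q, w*w*p + w*q)]

print(solve_cubic(1, -6, 11, -6))        # (x-1)(x-2)(x-3): roots ~ 1, 2, 3

Recovering q as G/p, rather than as an independent cube root, enforces the pairing pq = G automatically.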

Thursday, September 24, 2009

Analytic Continuation of the Lerch Transcendent

We define the following important Dirichlet series as the Lerch transcendent,
Φ(z,s,a) := sum_[n=0 to +infinity] (z^n)/(n+a)^s , a ∉ {0}∪Z^- ; s ∈ C for |z|<1, and Re(s)>1 when |z|=1

Note that z=1 gives the Hurwitz zeta function, and a=1 gives z^(-1) times the s-logarithm of z, as notable special cases (and with z=1 and a=1 this is just the Riemann zeta function).
In the case when z=exp (2πiλ) we have the Lerch (or periodic) zeta function,
L(λ,s,a) := sum_[n=0 to +infinity] (e^(2πiλn))/(n+a)^s

We have the following integral representation for Φ,
Φ(z,s,a)Γ(s) = sum_[n=0 to +infinity] (z^n) integral_[0 to +infinity] (t^(s-1)).e^(-(n+a)t) dt , for Re(a)>0, Re(s)>0
with the change of variable t := (n+a)t in the Euler integral representation for the Gamma function.
We now interchange sum and integral (which is justified here) to get,
Φ(z,s,a)Γ(s) = integral_[0 to +infinity] (t^(s-1)).e^(-at) sum_[n=0 to +infinity] (z.e^(-t))^n dt
The sum within the integral is a geometric series, and on summing we have our identity,

Φ(z,s,a)Γ(s) = integral_[0 to +infinity] [(t^(s-1)).e^(-at) / (1 - z.e^(-t))] dt
for Re(a)>0; |z|≤1, z≠1, Re(s)>0; z=1, Re(s)>1
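
As a numerical sanity check of this identity, a sketch assuming the mpmath library (whose lerchphi is Φ):

from mpmath import mp, mpf, quad, gamma, lerchphi, exp, inf

mp.dps = 25
z, s, a = mpf('0.5'), mpf('2.5'), mpf('1.5')     # |z| < 1, Re(s) > 0, Re(a) > 0
lhs = lerchphi(z, s, a) * gamma(s)
rhs = quad(lambda t: t**(s-1) * exp(-a*t) / (1 - z*exp(-t)), [0, inf])
print(lhs, rhs)                                   # agree to working precision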

This suggests a contour integral of the form,

(2πi).I(z,s,a) := integral_[H] (t^(s-1)).e^(at) / (1 - z.e^t) dt

where H is a Hankel contour,
H = C_1 ∪ C_2 ∪ C_3
and C_1 is the portion of H which comes in from infinity just below the negative real axis (with r running from +infinity down to ρ), connecting to C_2, which traverses the circle of radius ρ > 0 about the origin, which in turn connects to C_3, returning to infinity just above the negative real axis.

So with C_1 parametrised by t=r.e^(-πi) , C_2 parametrised by t=ρ.e^(iθ) and C_3 parametrised by t=r.e^(πi) , we have,

(2πi).I(z,s,a) = integral_[+infinity to ρ] [[r^(s-1).e^(-πis).e^(πi).e^(-πi).e^(-ra)] / (1 - z.e^(-r))] dr +
integral_[-π to +π] [[ρ^(s-1).e^(sθi).e^(-θi).e^(aρ.e^(iθ)).ρi.e^(iθ)] / (1 - z.e^(ρ.e^(iθ)))] dθ +
integral_[ρ to +infinity] [[r^(s-1).e^(πis).e^(-πi).e^(πi).e^(-ra)] / (1 - z.e^(-r))] dr

On simplifying,
(2πi).I(z,s,a) = (e^(πis) - e^(-πis)).integral_[ρ to +infinity] [r^(s-1).e^(-ra) / (1 - z.e^(-r))] dr + i(ρ^s). integral_[-π to +π] [e^(iθs + aρ.e^(iθ)) / (1 - z.e^(ρe^(iθ)))] dθ

The last integral in θ tends to 0 as ρ -> 0 (in the region to which we continue; details to be supplied).
We then have,

lim_[ρ->0] I(z,s,a) = (sin (πs) / π) Φ(z,s,a)Γ(s)

Recalling the formula Γ(s)Γ(1-s) = (π / sin (πs)) gives the contour integral representation,

Φ(z,s,a) = [Γ(1-s) / (2πi)] integral_[H] [t^(s-1).e^(at) / (1 - z.e^t)] dt ------- (1)
Re(a)>0, |arg(-t)| ≤ π


Monday, September 14, 2009

A formal summation formula

Define the Bernoulli numbers B_n by,
t / (e^t - 1) = sum_[n=0 to +infinity] (B_n).t^n / n!

Let ∂ f(t) = d[f(t)]/dt and put ∂^(-1) f(t) = integral f(t) dt

Note that,
exp [n∂] f(t) = sum_[k=0 to +infinity] (n^k).∂^(k) f(t) / k! = f(t+n)
where the last equality is by taking Taylor's theorem as a formal identity.

Then,
sum_[n=0 to +infinity] f(t+n) = sum_[n=0 to +infinity] exp [n∂ f(t)] = (1 - e^∂)^(-1) f(t) , by summing the geometric series.
Then,
sum_[n=0 to +infinity] f(t+n) = -sum_[n=0 to +infinity] (B_n)(∂^(n-1) f(t)) / n!

Let t->0 and we have,
sum_[n=0 to +infinity] f(n) = -sum_[n=0 to +infinity] (B_n)(∂^(n-1) f(0)) / n! -------- (1)
Put f(t) = t^k for k ∈ Z+ and we have all terms vanishing on the right hand side except that at n=k+1
For n=0 on the left there is no term so the left hand side is just the Riemann zeta function at -k, and we have,
Z(-k) = -(B_(k+1)).k! / (k+1)! = -(B_(k+1)) / (k+1)

We define the Bernoulli polynomials B_n (x) = sum_[k=0 to n] (nCk).(B_k).x^(n-k)
These have generating function,
t.e^(xt) / (e^t - 1) = sum_[n=0 to +infinity] (B_n (x)).t^n / n!

Put f(t) = (t+μ)^k for k ∈ Z+ and μ a constant in (1) and we have the Hurwitz zeta function Z on the left hand side and,

Z(-k, μ) = -sum_[r=0 to k+1] (B_r)(k!).μ^(k-r+1) / (r!)(k-r+1)!

then,
Z(-k, μ) = -(1/(k+1)) sum_[r=0 to k+1] ((k+1)Cr).(B_r).μ^(k-r+1) = -(B_(k+1) (μ)) / (k+1)

Which is an identity we needed in our proof of the Gauss multiplication theorem for the Gamma function.
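
A quick numerical check of this identity, assuming mpmath (there zeta(s, a) is the Hurwitz zeta function and bernpoly(n, x) the n th Bernoulli polynomial):

from mpmath import mp, mpf, zeta, bernpoly

mp.dps = 20
k, mu = 4, mpf('0.3')
print(zeta(-k, mu), -bernpoly(k+1, mu) / (k+1))   # the two values agree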
We also remark that a two-variable generalisation is possible by considering f(t+y) rather than f(t) and deriving,
sum_[k=0 to n-2] f(t+y+k) = [sum_[k=0 to n-2] e^((y+k)∂)] f(t) = [e^(y∂) / (e^∂ - 1)] (f(t+n-1) - f(t))
(by summing the geometric series as before).
Whence by using the definition of the Bernoulli polynomials the operator in square brackets can be expanded as an infinite series. Putting f(t) = log (t) we can obtain an asymptotic expansion for log Γ(n+y) in powers of the reciprocal of n, which is Stirling's approximation (for y=0) with the constant involved expressed in series form!
Our formal manipulations can be shown to match up with the analytic extensions of these functions.

See G H Hardy, Divergent Series, 2nd ed. AMS Chelsea, 1991 (cf. sec.13.12) for more on the Euler-Maclaurin summation formula.

The Polylogarithm

Define the k-logarithm, L_k by,
L_k (z) = sum_[n=1 to +infinity] z^n / n^k , for |z|<1

We will derive an interesting identity for this function which illustrates the discrete Fourier transform.

Let ζ be a primitive N th root of unity (so ζ ≠ 1, and every N th root of unity is a power of ζ). Consider the sum,

N^(k-1) sum_[ζ^N = 1] L_k (ζz)

Using the definition of L_k (z) we can write this as,
N^(k-1) sum_[n=1 to +infinity] (z^n / n^k) sum_[ζ^N = 1] ζ^n

Note that, sum_[j=0 to N-1] ζ^(jn) = (1 - ζ^(nN)) / (1 - ζ^n) = 0 whenever ζ^n ≠ 1,
by summing the geometric progression. Since ζ is primitive, ζ^n ≠ 1 exactly when N does not divide n; when N divides n every term equals 1. So,
sum_[ζ^N = 1] ζ^n = N if N divides n, and 0 otherwise.

Then in our sum,
N^(k-1) sum_[ζ^N = 1] L_k (ζz)
only the terms with n a multiple of N survive, say n = mN, each contributing
(N^(k-1)).N.z^(mN) / (mN)^k = z^(mN) / m^k , for m=1,2,...
Then,

N^(k-1) sum_[ζ^N = 1] L_k (ζz) = L_k (z^N)
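
The identity is easy to test numerically; a sketch assuming mpmath's polylog:

from mpmath import mp, mpc, pi, exp, polylog, fsum

mp.dps = 20
N, k, z = 3, 2, mpc('0.4', '0.2')                 # |z| < 1
roots = [exp(2j*pi*m/N) for m in range(N)]        # all N th roots of unity
lhs = N**(k-1) * fsum(polylog(k, r*z) for r in roots)
rhs = polylog(k, z**N)
print(lhs, rhs)                                   # agree up to rounding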

Saturday, September 12, 2009

A 'Natural Proof' of the Gauss Multiplication formula III: Theorem and Proof

We are now ready to prove the result:

Theorem (Gauss's Multiplication formula for Γ)

Γ(Nt) = (2π)^((1-N)/2) . N^(Nt - 1/2) . prod_[k=0 to N-1] Γ(t + k/N) ------- (2)

Proof:

From (1) we have that,
(2π)^(1/2) / Γ(Nt) = prodz_[m=0 to +infinity] (m + Nt) = prod_[k=0 to N-1] prodz_[m=0 to +infinity] ((mN + k) + Nt)

It is easy to see that there is a bijection between the product indices on either side of the last equality, since m=0 on the right gives the terms of the product on the left from m=0 up to m=N-1, and so on.

Then,
(2π)^(1/2) / Γ(Nt) = prod_[k=0 to N-1] prodz_[m=0 to +infinity] N.(m + (t + k/N))

From the lemma we established in the last post we have,

(2π)^(1/2) / Γ(Nt) = prod_[k=0 to N-1] N^[Z(0, t + k/N)] prodz_[m=0 to +infinity] (m + (t + k/N))

We then recognise the regularised product from (1) again and along with the value of Z(0, t + k/N), have,

(2π)^(1/2) / Γ(Nt) = prod_[k=0 to N-1] N^[1/2 - (t + k/N)] . (2π)^(1/2) / Γ(t + k/N) = N^(1/2 - Nt) . (2π)^(N/2) . prod_[k=0 to N-1] 1/Γ(t + k/N)

Which gives the required formula on rearrangement.

QED.
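
A numerical sanity check of (2), assuming mpmath:

from mpmath import mp, mpf, gamma, pi, fprod

mp.dps = 20
N, t = 3, mpf('0.6')
lhs = gamma(N*t)
rhs = (2*pi)**(mpf(1-N)/2) * N**(N*t - mpf('0.5')) * fprod(gamma(t + mpf(k)/N) for k in range(N))
print(lhs, rhs)                                   # agree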

The proof flows naturally from the regularized determinant expression for the Gamma function, unlike standard proofs which match up zeros and poles on either side of (2); this is why the term 'natural' was used to describe it. We also note that the simple functional equation Γ(z+1) = z.Γ(z) is obvious from (1).
I am much indebted to Professors Kurokawa and Wakayama for their wonderful paper.

For more on regularization see Basic Analysis Of Regularized Series And Products by Jay Jorgenson and Serge Lang.

A 'Natural Proof' of the Gauss Multiplication formula II: A Lemma

We continue with our treatment of the multiplication formula for the Gamma function following Kurokawa and Wakayama.

We will require the following:

Lemma
prodz_[n ∈ J] μ.(a_n + b) = [μ^(Z_[a_n] (0, b))].prodz_[n ∈ J] (a_n + b)
for μ, b constants.

Proof:

Z_[μ.a_n] (s, t) = sum_[n ∈ J] (μ.a_n + t)^(-s) = [μ^(-s)].[Z_[a_n] (s, t/μ)] , by taking μ^(-s) out of the sum and noting the resulting zeta function.

Then,
Z_[μ.a_n] '(s, t) = -(μ^(-s)).(log μ).[Z_[a_n] (s, t/μ)] + (μ^(-s)).[Z_[a_n] '(s, t/μ)] , by differentiation using the product rule.

With s=0,
-Z_[μ.a_n] '(0, t) = log [μ^(Z_[a_n] (0, t/μ))] - Z_[a_n] '(0, t/μ)

Which gives,
exp (-Z_[μ.a_n] '(0, t)) = [μ^(Z_[a_n] (0, t/μ))]. exp (-Z_[a_n] '(0, t/μ))

t=μb gives the desired result.

We note that, in the classical case,
Z(-n, b) = -B_(n+1) (b) / (n+1) , where B_n (x) is the n th Bernoulli polynomial in x, for n a natural number and b a constant. Then, Z(0, b) = (1/2) - b , since B_1 (b) = b - 1/2.
(a divergent series based proof of this will be given in a later post)

We finish the proof with the coming post!

Thursday, September 10, 2009

A 'Natural Proof' of the Gauss Multiplication formula I: Preliminaries

The following proof of the Gauss Multiplication formula for the Gamma function is an elaboration of the original that appears in the excellent paper 'Zeta Regularizations' by N Kurokawa and M Wakayama in Acta Applicandae Mathematicae Volume 81, Number 1 / March, 2004. We follow their exposition throughout.
http://www.springerlink.com/content/q185862551668217/

It is by far the most natural proof I have come across of this theorem even though the fundamental idea is from the theory of regularized determinants.

A few preliminaries

The notation introduced here will be used throughout.
Observe that for a given Hurwitz zeta function, Z_λ (s, t) = sum_[λ_n] (λ_n + t)^(-s),
∂[Z_λ (0, t)]/∂s = - sum_[λ_n] log (λ_n + t)

So we define the zeta-regularized product for the sequence (λ_n)_{n=0,...},
prodz_[λ_n] (λ_n + t) := exp (- ∂[Z_λ (0, t)]/∂s)

For convenience when λ_n = n we shall denote the resulting classical Hurwitz zeta function by Z.

The main result that will be employed is Lerch's formula,

∂[Z(0, t)]/∂s = -(1/2).log (2π) + log Γ(t)

where log is the natural logarithm and Γ is the Gamma function as defined at
http://en.wikipedia.org/wiki/Gamma_function
Note that Z'(0) = -(1/2).log (2π) where Z is the Riemann zeta function.

For a proof the reader can refer to
http://ocw.nctu.edu.tw/upload/fourier/supplement/Zeta-Function.pdf or the chapter on the Zeta and Gamma functions in S Lang's Complex Analysis (4th ed.), Springer Graduate Texts in Mathematics.

We now have the immediate corollary that,

prodz_[n=0 to +infinity] (n + t) = (2π)^(1/2) / Γ(t) ------- (1)
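
Corollary (1) also lends itself to a numerical check: mpmath's zeta(s, a, 1) returns the derivative of the Hurwitz zeta function with respect to s, so (a sketch):

from mpmath import mp, mpf, zeta, gamma, sqrt, pi, exp

mp.dps = 25
t = mpf('0.7')
lhs = exp(-zeta(0, t, 1))        # the zeta-regularized product over (n + t)
rhs = sqrt(2*pi) / gamma(t)
print(lhs, rhs)                  # agree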

(await latter half for proof)...

Tuesday, June 09, 2009

Three ways about a Curious Integral!

I present for your inspection the integral over the real line of,

1/(x^2 + a^2)^2

with respect to x. At first sight this does not seem to admit an elementary solution. Indeed it is often used in examples of residue calculus to demonstrate the power of the method in calculating real integrals of rational functions! However, we note two other methods of dealing with it-

1) We employ the substitution x = a.tan u, whence the integral becomes (1/a^3) times the integral from -pi/2 to pi/2 of cos^2 (u) with respect to u. Using 2.cos^2 (u) = cos (2u) + 1 we then have the value to be pi/(2.a^3) (taking a > 0).

2) Note that the integral over the real line of,
1/(x^2 + a^2) = (1/a).[arctan (x/a)]_{x=-infinity}^{x=infinity} = pi/a
We now differentiate this relation with respect to a, treating it as a parameter integral: the left hand side gives the integral of -2a/(x^2 + a^2)^2 and the right hand side gives -pi/a^2, whence the integral is again pi/(2.a^3)
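
All three methods can be checked against direct numerical quadrature; a sketch assuming mpmath:

from mpmath import mp, mpf, quad, pi, inf

mp.dps = 15
a = mpf(2)
val = quad(lambda x: 1 / (x**2 + a**2)**2, [-inf, inf])
print(val, pi / (2*a**3))        # both ~ 0.19635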

Sunday, March 01, 2009

The methods of Borel, Abel and Cesaro, Voronoi & Norlund part III

We now come to the last part of the present series regarding summability methods. The methods we shall presently consider are due to Norlund and are known as Norlund methods. The method of Cesaro means, which we shall consider later, is an elementary case of these. A nearly equivalent definition had already been given by Voronoi, which is why we mention him here.

Say we have a sequence of weights, (w_k), k = 0, 1, 2, ..., with only the restriction that each term is non-negative and, for our convenience, that w_0 is not zero.

We consider the sequence of partial sums of the form s_n = a_0 + a_1 + a_2 + ... + a_n
and construct the sequence t_n such that,

t_n = [(w_n).(s_0) + (w_(n-1)).(s_1) + (w_(n-2)).(s_2) + ... + (w_0).(s_n)] / (w_0 + w_1 + w_2 + ... + w_n)

and consider the limiting value of this new sequence as the sum, i.e-

a_0 + a_1 + a_2 + ... = s , (N, w_k)

if,

lim_{n -> infinity} t_n = s

Of course, an immediate concern here is whether regularity holds, given the freedom we have in assigning the weights and thus the generality of this method. However, it may be shown that regularity holds given,
w_n / (w_0 + w_1 + w_2 + ... + w_n) -> 0

and that any two such regular Norlund methods are consistent with each other, i.e. they sum a series to the same value whenever both limits exist.

Indeed, for a regular Norlund method, an Abelian theorem holds which draws the connection between Abel's method and Norlund's.

i.e-
If (N, w_k) is regular and a_0 + a_1 + a_2 + ... = s , (N, w_k), then the series given by,

f(x) = a_0 + (a_1).x + (a_2).x^2 + ... + (a_n).x^n + ...

converges for 0 ≤ x < 1, where f(x) is analytic and regular, and,

lim_{x -> 1^-} f(x) = s

This means that whenever a series is summed by a regular (N, w_k) method, the A method gives the same sum.

For w_k = 1 for all k we have a special case we call Cesaro summation, (C, 1), and with,

w_n = (n + m - 1)_C_(m - 1)

we have the more general (C, m) method of Cesaro. (note that the notation n_C_k stands for the binomial coefficient nCk)

The (C, 1) method, for instance, sums the series,

1 - 1 + 1 - 1 + ... to 1/2 by averaging the partial sums when taking the 'Cesaro limit'.
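
A small Python sketch of the Norlund means t_n (the function name is mine), applied with the (C, 1) weights to this last example:

from itertools import accumulate

def norlund_means(a, w):
    # t_n = [w_n.s_0 + w_(n-1).s_1 + ... + w_0.s_n] / (w_0 + ... + w_n)
    s = list(accumulate(a))                   # partial sums s_n
    return [sum(w[n - m] * s[m] for m in range(n + 1)) / sum(w[:n + 1])
            for n in range(len(s))]

a = [(-1)**n for n in range(200)]             # 1 - 1 + 1 - 1 + ...
w = [1] * 200                                 # w_k = 1, i.e. the (C, 1) method
print(norlund_means(a, w)[-1])                # ~ 0.5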

Thursday, February 26, 2009

The methods of Borel, Abel and Cesaro, Voronoi & Norlund part II

The methods we previously encountered, our P definition and the B and B' definitions of Borel that it led to, are quite powerful methods, capable of summing rapidly divergent and unruly series. Indeed, using the P method and a suitable auxiliary function one may derive further powerful methods. However, as Hardy notes in his Divergent Series, this power comes at a cost, in that these methods may fail in more subtle cases, such as slowly diverging series or ones with oscillating terms. Prior to further discussion, we submit for your inspection the following method-

Summation by Abelian means

For 0 ≤ λ_0 < λ_1 < λ_2 < ... with (λ_n) tending towards infinity, and

f(t) = sum_{n = 0}^{infinity} (a_n).exp [-(λ_n)t] converges for all positive real values of t,

If,
lim_{t->0^+} f(t) = s

then,

a_0 + a_1 + a_2 + ... = s , (A, λ)

Summation by Abelian means (A, λ) embodies a whole class of methods (which may indeed be further generalised, see Hardy, Divergent Series, pp. 71-73) depending on the sequence (λ_n) chosen, though each such method can be shown to be regular, linear and stable. We are interested in the simplest case where λ_n = n, whereupon we have,

f(x) = sum_{n = 0}^{infinity} (a_n).x^n

where x = e^(-t), the series hence being convergent for |x| < 1

if,

lim_{x -> 1^-} f(x) = s
(this notation implies the left hand limit)
then,

a_0 + a_1 + a_2 + ... = s , (A)

The 'A' stands for Abel summation, which one might find a curious choice of name given Abel's attitude towards divergent series. Indeed, the present method is more fitting for the legacy of Euler or Poisson than Abel (we shall later see the intimate connection between a more powerful summation method attributed to Euler and Abel summation). However, it was Abel who employed a similar argument (under different conditions, concerning the limits of partial sums) in his theorem on series convergent in the Cauchy sense, and the proper articulation of the method has associated it with his name.

The method is less powerful than the (B) and (B') methods discussed in the previous post, failing even to sum the geometric series outside its radius of convergence. However, it possesses the stability that Borel's methods in general do not, and it also generalises the Cesaro summation which we shall discuss in the next post.
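
As a concrete illustration of the A method: with a_n = (-1)^n the power series is f(x) = 1/(1 + x), so the A sum of 1 - 1 + 1 - 1 + ... is 1/2. A direct numerical sketch (the helper name is mine):

def abel_partial(terms, x):
    # evaluate f(x) = sum a_n x^n by direct truncation
    return sum(a * x**n for n, a in enumerate(terms))

terms = [(-1)**n for n in range(5000)]        # 1 - 1 + 1 - 1 + ...
for x in (0.9, 0.99, 0.999):
    print(x, abel_partial(terms, x))          # tends to 1/2 as x -> 1^-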

Note: we may also mention in passing the method of Lambert summation, which also relies on a trick in the limit, as with Borel's integral method, but in a form closer to Abel summation.

We say,

a_0 + a_1 + a_2 + ... = s , (L)
if,
lim_{x -> 1^-} (1 - x). [sum_{n = 1}^{infinity} n.(a_n).(x^n) / (1 - x^n)] = s

Evaluating the limit of (1 - x) / (1 - x^n) at each n (it tends to 1/n as x -> 1^-, so the n th term tends to a_n) shows how this method works.

Monday, February 23, 2009

The methods of Borel, Abel and Cesaro, Voronoi & Norlund part I

As promised, we shall consider some classic resummation procedures in concise detail.

One can recall that the sum of an infinite series in the Cauchy sense was the limit of partial sums.

We consider a very general and powerful method which relies on an alternative way of taking the limit.

Suppose s_n is the n th partial sum of the series a_0 + a_1 + a_2 + ... under consideration, that S(t) = (p_0)(s_0) + (p_1)(s_1).t + (p_2)(s_2).t^2 + ... is entire, with non-negative coefficients p_n, and that P(t) = (p_0) + (p_1).t + (p_2).t^2 + ...

If lim_{t->infinity} S(t) / P(t) = s

then we say,

a_0 + a_1 + a_2 + ... = s , (P)

(while we have used P to refer to this summation method, it ought not be confused with Abel summation, for which P is also sometimes used, after Poisson, who employed it in his investigations into Fourier series. Hardy, in Divergent Series, calls this the J method, pp. 79-80)

We put p_n = 1/ (n!) and thus P(t) = e^t to define the method of Borel summation. That is if,

lim_{t->infinity} e^(-t) S(t)
= lim_{t->infinity} e^(-t) [(s_0) + (s_1).t + (s_2).(t^2)/2! + ... + (s_n).(t^n)/n! + ...] = s

then,

a_0 + a_1 + a_2 + ... = s , (B)

Since we have,

n! = integral_{t = 0}^{infinity} (t^n).e^(-t) dt
, by employing a property of the Gamma function we have a similar summation,

a_0 + a_1 + a_2 + ...
= sum_{n = 0}^{infinity} (a_n).[integral_{t = 0}^{infinity} (t^n).e^(-t) dt] / n!
= integral_{t = 0}^{infinity} [sum_{n = 0}^{infinity} (a_n).(t^n)/n!] e^(-t) dt = s

(the interchange of summation and integration here is formal, and is justified whenever the resulting integral converges)

Should this last integral converge, and the series within have a non-zero radius of convergence, we say,

a_0 + a_1 + a_2 + ... = s , (B')

Should e^(-t).[sum_{n = 0}^{infinity} (a_n).(t^n)/n!] -> 0 as t -> infinity, the methods B and B' may easily be shown to be equivalent.

See Hardy, Divergent Series pp182-183


The Borel methods certainly allow us to sum more types of series than we had been able to thus far; for instance, it is easily shown that the B sum of 1 + z + z^2 + z^3 + ... is 1 / (1 - z) throughout the half plane Re(z) < 1
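
For instance, the B' integral reproduces 1/(1 - z) at a point where the geometric series diverges in the Cauchy sense; a sketch assuming mpmath:

from mpmath import mp, mpc, quad, exp, inf

mp.dps = 15
z = mpc(-3, 1)                                 # |z| > 1, but Re(z) < 1
lhs = quad(lambda t: exp(-t) * exp(z*t), [0, inf])   # sum (z^n).(t^n)/n! = e^(zt)
print(lhs, 1 / (1 - z))                        # agree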

Sunday, February 22, 2009

A Diverging Appeal...

From abhorred 'inventions of the devil' to rigorously defined entities, we have briefly glimpsed the passage of divergent series through history. What the episode highlights is the freedom that comes of definitions that need not abide by intuitive notions on how a mathematical entity ought to be.

Restrictions however, are not to be entirely disposed of whenever it is sought to pursue interesting results. A method of summation is certainly appealing when it is general and not overly contrived as Hardy remarks in 'Divergent Series'.

There is also the matter of 'regularity', which is a first useful criterion by which to classify a given method. By claiming a method to be regular we mean that the sum it associates with a series will correspond to the sum in the traditional sense whenever the latter exists. This serves to 'generalise' the notion of a sum.

We immediately note that the Ramanujan sense of sum we met previously is not a regular method. The finer points of that method are best seen in connection with the Euler-Maclaurin summation formula, which we shall come across later.

Are there any more discriminating properties by which we can classify our study of divergent series?

The following postulates are often useful:

1) First postulate of linearity
If a_0 + a_1 + a_2 + ... = S, then
k.a_0 + k.a_1 + k.a_2 + .... = kS

2) Second postulate of linearity
If a_0 + a_1 + a_2 + ... = S and b_0 + b_1 + b_2 + ... = T, then
(a_0 + b_0) + (a_1 + b_1) + ... = S + T

3) Postulate of stability
If a_0 + a_1 + a_2 + ... = S then a_1 + a_2 + ... = S - a_0 and vice versa

They are embodied in a large number of the methods we employ.

We shall conclude this discussion for now upon considering three interesting senses of sum.

The powers of definition!

Of course, using a summation technique to give the sum of a divergent series a meaning is the same sort of thing Euler was scoffed at for when he employed geometric sums.

Say,
S = 1 + r + r^2 + r^3 + r^4 + .... + r^(n-1)

(we could've made the initial term something other than 1, say 'a', but this only requires multiplying by 'a' wherever S arises)

Then,

S - Sr = S (1 - r) = 1 + r + r^2 + .... + r^(n-1) - r - r^2 - .... - r^n = 1 - r^n
(since all the terms in between 'telescope' out)

Thus, for r not equal to 1,

S = (1 - r^n) / (1 - r)

Now for the infinite sum of S we need to find the limit of the sequence of partial sums,

lim_{n->infinity} S

Noting how r^n goes to a finite value (zero, to be precise) as n grows without bound only when -1 < r < 1, we have,

lim_{n->infinity} S = 1/ (1 - r)

as the infinite sum in the traditional sense!

Euler, in playing around with this identity, sought to remove the restriction -1 < r < 1, declaring that,

1 + r + r^2 + .... + r^n + .... ad inf. := 1 / (1 - r) ---------- (1)

is valid for all values of r save for r = 1

This leads to some intriguing examples, for instance, r = 2 gives,

1 + 2 + 4 + 8 + ..... ad inf. = -1

and similar seemingly irreconcilable identities for other values of r.

However, as Hardy fittingly remarks in his seminal work on this subject 'Divergent Series', it is a mistake to think of Euler as a 'loose mathematician'. He acknowledged that what he was doing was no longer the standard summation procedure that was in use (this was before Cauchy made it all rigorous) and treated these identities almost as they would be treated in the modern theory of divergent series. If anything, he had the proper sort of reservations about the matter, as his dictum 'summa cujusque seriei est valor expressionis illius finitae,...' leads us to believe.

His belief that a divergent series must always be assigned the same sum by different expressions kept him from the rigorous theory, though his ideas were well in advance of their time and his reasons differed from those of many of his more orthodox contemporaries.

Euler thought in terms of limits of power series in his consideration of these series. Essentially, where a power series represented the infinite series, and the corresponding function attained a limit, one would associate that limit to the series as its sum.

Consider for instance,

1 + x + x^2 + .... + x^n + .... ad inf. := 1 / (1 - x)

valid for all x under say, the A sense of summation.

The finite values that appear in the right hand side even when the left hand side expression diverges in the traditional sense can be made sense of by considering it as the remainder from a process of algebraic long division (note that this is the same sort of singularity we removed in finding the Ramanujan sum of H).

This sort of consideration, while it can be made rigorous, is not without counter-intuitive backlashes. For instance, this implies that,

1 - 1 + 1 - 1 +....

and

1 + 0 - 1 + 1 + 0 - 1 + ...

while having essentially the same terms, yield different sums since the power series have different 'gaps'!

But then, considering the rearrangement theorem of Riemann we encountered early on for the Cauchy sense of convergence, this doesn't seem as exaggerated a bullet to bite after all.

Euler's justification would come much later with the advent of analytic continuation in the theory of functions of a complex variable, whereby the domain of a given function can be extended under certain conditions.

If f and g are holomorphic on domains A and B respectively, and the functions coincide in the non-empty intersection of A and B, g is termed the analytic continuation of f to B and f is in turn termed the analytic continuation of g to A.

What is most striking here is that the analytic continuation, whenever it exists, is unique! It would seem Euler's attitude almost foreshadowed the implications of this powerful result.

What need be kept in mind is that while rigorous argument in mathematics makes for the most refined of intellectual achievements, it should never bog us down and force us along a single linear track in which we might find a naive comfort.

In retrospect, Abel sounds disappointed as he writes them off in ushering in a new dawn of rigour-

“Divergent series are, in general, something terrible and it is a shame to base any proof on them. We can prove anything by using them and they have caused so much misery and created so many paradoxes. ... Finally my eyes were suddenly opened since, with the exception of the simplest cases, for instance the geometric series, we hardly find, in mathematics, any infinite series whose sum may be determined in a rigorous fashion, which means the most essential part of mathematics has no foundation. For the most part, it is true that the results are correct, which is very strange. I am working to find out why, a very interesting problem.”

Certainly, from this statement it is clear that Abel saw significant enough a problem to want to pursue it, though exactly the scarcity he remarks upon would become crucial for a more complete theory of functions as we have today (which was thanks to the efforts of Abel, Cauchy, et al.).

To sum or not to sum...

Of course, the excursion taken in studying the harmonic series promises to be most fruitful in understanding how summation fits into the broader mathematical programme.

We have already seen the traditional idea of summation, where the sum of some terms (a_k) is,

S = a_0 + a_1 + a_2 + a_3 + ..... + a_(n-1) + a_n

and that taking the limit of S as n -> infinity gives us a way to sum the infinite series. This is essentially the idea behind the Cauchy sense of sum, a summation so familiar that it becomes second nature to think of it as synonymous with the very concept of summation.

If we were to stop here then we would have succeeded at the original goal, in that we have now defined to an acceptable precision what we mean by summation. However, this playing field, perhaps more than any other in contemporary mathematics (even counting non-Euclidean geometries), opens doors to much more exotic possibilities.

It became plain that, just as we have no general formula offering a closed form for each and every sum (even in the regular sense), there is nothing that obliges us to limit ourselves when extending the notion of a sum in different directions.

As a case in point, let us define the Ramanujan sum of a sequence (a_n) as,

S := lim_{N -> infinity} [ sum_{n = 1}^{N} a_n - integral_{1}^{N} a(t) dt ] , (R)
(where a(t) is a function interpolating the terms a_n)

Whereby we immediately have,

H := gamma , (R)

by just a matter of definition!
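
Numerically, the truncated difference behind this definition settles on Euler's constant:

from math import log

N = 10**6
print(sum(1/n for n in range(1, N + 1)) - log(N))   # ~ 0.57721..., i.e. gamma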

Most interesting is that the Laurent expansion of Riemann's zeta function,

Z(s) = sum_{n = 1}^{infinity} n^(-s)

(note that Z(1) = H)

gives,

Z(s) = 1/(s-1) + gamma + ...
(where the terms represented by the trailing dots become zero as s->1, see http://en.wikipedia.org/wiki/Riemann_zeta_function#Laurent_series)

Thus, the Ramanujan summation procedure may be considered as removing the particular singularity that made the sum diverge, while retaining a constant that is then given the role of the sum of the series.

Indeed, Ramanujan's idea was to treat the constant that arose from the Euler-Maclaurin summation formula as a representative quantity, or more loosely, the 'sum' of the series.