Abel Summation and an Analog for Calculus
by always_correct, Nov 22, 2016, 2:02 AM
For those acquainted with integral calculus, you should be familiar with the identity commonly known as Integration by Parts, which states that for any two differentiable functions $f$ and $g$:
$$\int f(x)g'(x)\,dx = f(x)g(x) - \int f'(x)g(x)\,dx$$
Now, when we think of derivatives, we think of the difference quotient of a function. This looks like:
$$\frac{f(x+h) - f(x)}{h}$$
We take the limit of this as $h \to 0$ to get a derivative. Now, what if we didn't take a limit? What if we said $h = 1$ was good enough? Then we get what is called a finite difference of $f$, which in this case is $f(x+1) - f(x)$. We denote this particular finite difference as $\Delta f(x)$. Why would we want to use a finite difference? Think sequences. Let $a_n = f(n)$ for all (or some) $n \in \mathbb{Z}$, with $f : \mathbb{R} \to \mathbb{R}$. Now, taking a derivative of a sequence sounds silly, but we can take a finite difference! For the rest of the post I will denote the sequence of finite differences as $\Delta a_n = a_{n+1} - a_n$. This will make things a bit more readable.
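As a quick sanity check (my own illustration, not part of the original post), here is the finite difference computed in Python for the sequence $a_n = n^2$, where the discrete analog of $\frac{d}{dx}x^2 = 2x$ shows up as the odd numbers:

```python
# Finite difference of a sequence: (Δa)_n = a_{n+1} - a_n.
def finite_difference(a):
    """Return the sequence of finite differences of a."""
    return [a[n + 1] - a[n] for n in range(len(a) - 1)]

# For a_n = n^2, we get Δa_n = (n+1)^2 - n^2 = 2n + 1.
squares = [n * n for n in range(8)]  # [0, 1, 4, 9, 16, 25, 36, 49]
print(finite_difference(squares))    # [1, 3, 5, 7, 9, 11, 13]
```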
Similarly, when we think of definite integration, we think of areas under curves, and of the Riemann sums that approximate them. Note that a sum can be an analog to an integral. As we know from the Fundamental Theorem of Calculus:
$$\int_a^b f'(x)\,dx = f(b) - f(a)$$
Using the analogs we discussed, we realize the same is true for sequences with finite differences:
$$\sum_{n=j}^{k-1} \Delta a_n = a_k - a_j$$
To see this, note that the sum telescopes. Interesting. One more thing: we need a compact way to represent summation that is analogous to taking an integral. For a sequence $a_n$, we denote the sequence that represents the integral as $A_n = \sum_{k=1}^{n} a_k$, with $A_0 = 0$. We use the Fundamental Theorem once more and note that, letting $F(x) = \int_a^x f(t)\,dt$, we have $F'(x) = f(x)$. Looking at the analog, we see that it is also in a sense true, that
$$\Delta A_n = A_{n+1} - A_n = a_{n+1}$$
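Both discrete facts are easy to check numerically. Here is a small Python verification (my own sketch, with an arbitrary sequence) of the telescoping sum and of the partial-sum sequence acting like an antiderivative:

```python
# Check the discrete analog of the Fundamental Theorem on an arbitrary sequence.
a = [3, 1, 4, 1, 5, 9, 2, 6]

# The sum of finite differences telescopes: sum_{n=j}^{k-1} Δa_n = a_k - a_j.
j, k = 1, 6
assert sum(a[n + 1] - a[n] for n in range(j, k)) == a[k] - a[j]

# Partial sums act as an antiderivative: with A[0] = 0 and
# A[n] = a[0] + ... + a[n-1], we get A[n+1] - A[n] = a[n]
# (the off-by-one matches ΔA_n = a_{n+1} in the post's 1-indexed notation).
A = [0]
for x in a:
    A.append(A[-1] + x)
assert all(A[n + 1] - A[n] == a[n] for n in range(len(a)))
print("both identities check out")
```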
Keeping integration by parts in mind, I introduce Abel summation. I first present it in the commonly used notation, then in my own. For any two sequences $a_n$ and $b_n$, with $A_n = \sum_{k=1}^{n} a_k$:
$$\sum_{k=1}^{n} a_k b_k = A_n b_n - \sum_{k=1}^{n-1} A_k (b_{k+1} - b_k)$$
Here it is again using my notation; note I have swapped the roles of $a$ and $b$ so that $a_n$ plays the part of $f(x)$:
$$\sum_{n=1}^{N-1} a_n \Delta b_n = a_N b_N - a_1 b_1 - \sum_{n=1}^{N-1} b_{n+1} \Delta a_n$$
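Before proving the formula, it is reassuring to test it numerically. The following Python snippet (my own sketch, with arbitrarily chosen sequences) checks the common-notation form:

```python
from itertools import accumulate

# Verify sum_{k=1}^{n} a_k b_k = A_n b_n - sum_{k=1}^{n-1} A_k (b_{k+1} - b_k).
# The lists are padded at index 0 so that indices match the 1-indexed math.
a = [0, 2, -1, 3, 5, -2, 4]
b = [0, 1, 4, 1, 5, 9, 2]
n = 6

# A[k] = a_1 + ... + a_k, with A[0] = 0 (the padding makes accumulate line up).
A = list(accumulate(a))

lhs = sum(a[k] * b[k] for k in range(1, n + 1))
rhs = A[n] * b[n] - sum(A[k] * (b[k + 1] - b[k]) for k in range(1, n))
assert lhs == rhs
print(lhs, rhs)  # both sides agree
```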
Compare this to the integration by parts formula; it's very similar! How can we prove this? Let's look at the simple proof of integration by parts. From the Fundamental Theorem it follows that:
\begin{align*}
\frac{d}{dx}[f(x)g(x)] &= f'(x)g(x) + f(x)g'(x) \\
f(x)g(x) &= \int f'(x)g(x)\,dx + \int f(x)g'(x)\,dx \\
\int f(x)g'(x)\,dx &= f(x)g(x) - \int f'(x)g(x)\,dx
\end{align*}
Note that I swapped the locations of $f'(x)g(x)$ and $f(x)g'(x)$ in the last line. We should look to mimic this proof to show that Abel summation works; while there are somewhat simpler methods, I feel this shows how it really is an analog of integration by parts. We see that $a_n$ will represent $f(x)$ and $b_n$ will represent $g(x)$. We start our proof just like the previous one, by finding the "derivative" of the product of $a_n$ and $b_n$. Note that as $\Delta$ will represent $\frac{d}{dx}$, $\Delta(a_n b_n)$ will represent $\frac{d}{dx}[f(x)g(x)]$:
$$\Delta(a_n b_n) = a_{n+1}b_{n+1} - a_n b_n$$
How can we rewrite this? Looking at the fact that $\Delta(a_n b_n) = a_{n+1}b_{n+1} - a_n b_n$, we might look at the expansion of $a_n \Delta b_n$:
$$a_n \Delta b_n = a_n b_{n+1} - a_n b_n$$
We see that this is not enough. One might also look at the expansion of $b_{n+1} \Delta a_n$:
$$b_{n+1} \Delta a_n = a_{n+1} b_{n+1} - a_n b_{n+1}$$
This is also not enough. However, the keen will realize that adding both works:
$$a_n \Delta b_n + b_{n+1} \Delta a_n = a_{n+1} b_{n+1} - a_n b_n$$
Using the analogs we previously noted, we simplify this to
$$\Delta(a_n b_n) = a_n \Delta b_n + b_{n+1} \Delta a_n$$
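This discrete product rule is an algebraic identity, so it holds for any pair of sequences; a quick Python check (my own sketch) makes that concrete:

```python
# Check the discrete product rule Δ(a_n b_n) = a_n Δb_n + b_{n+1} Δa_n
# at every index of two arbitrary sequences.
a = [5, 2, 7, 1, 8, 3]
b = [2, 6, 4, 9, 1, 7]

for n in range(len(a) - 1):
    lhs = a[n + 1] * b[n + 1] - a[n] * b[n]                       # Δ(a_n b_n)
    rhs = a[n] * (b[n + 1] - b[n]) + b[n + 1] * (a[n + 1] - a[n])  # a_n Δb_n + b_{n+1} Δa_n
    assert lhs == rhs
print("discrete product rule holds at every index")
```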
We are closer to finishing. In the original proof, we next took the indefinite integral of both sides, and the definite integral was also valid. So let us do the analog: we will sum up both sides from $n = 1$ to $N - 1$:
$$\sum_{n=1}^{N-1} \Delta(a_n b_n) = \sum_{n=1}^{N-1} a_n \Delta b_n + \sum_{n=1}^{N-1} b_{n+1} \Delta a_n$$
Note that the left-hand side telescopes to $a_N b_N - a_1 b_1$. Rearranging, we arrive at the desired equality:
$$\sum_{n=1}^{N-1} a_n \Delta b_n = a_N b_N - a_1 b_1 - \sum_{n=1}^{N-1} b_{n+1} \Delta a_n$$
Expanding to the usual notation, we get (we swapped $a$ and $b$ and made it negative):
$$\sum_{n=1}^{N-1} b_{n+1} \Delta a_n = a_N b_N - a_1 b_1 - \sum_{n=1}^{N-1} a_n \Delta b_n$$
Now substitute the partial-sum sequence $A_n$ for $a_n$. Since $\Delta A_n = a_{n+1}$, the left-hand sum becomes $\sum_{k=2}^{N} a_k b_k$, and since $A_1 = a_1$, we then add $a_1 b_1$ to both sides, effectively shifting the $k = 2$ in the first sigma to $k = 1$:
$$\sum_{k=1}^{N} a_k b_k = A_N b_N - \sum_{n=1}^{N-1} A_n (b_{n+1} - b_n)$$
This is exactly Abel summation.
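As a small worked example of the formula in action (my own addition, not from the original post), Abel summation evaluates $\sum_{k=1}^{n} k\,2^k$ in closed form:

```latex
% Take a_k = 2^k and b_k = k, so A_k = 2^{k+1} - 2 and b_{k+1} - b_k = 1.
\begin{align*}
\sum_{k=1}^{n} k\,2^k
  &= A_n b_n - \sum_{k=1}^{n-1} A_k (b_{k+1} - b_k) \\
  &= n\left(2^{n+1} - 2\right) - \sum_{k=1}^{n-1} \left(2^{k+1} - 2\right) \\
  &= n\,2^{n+1} - 2n - \left(2^{n+1} - 4\right) + 2(n - 1) \\
  &= (n - 1)\,2^{n+1} + 2.
\end{align*}
```

For $n = 2$ this gives $2 + 8 = 10 = 1 \cdot 2^3 + 2$, as expected.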
Abel summation is very powerful, but it is important to understand its connection to calculus. The close relation between derivatives, integrals, finite differences, and sums is very interesting. What can you prove with it?
Thank you for reading.