Calculus of Variations: Part II
by greenturtle3141, Jul 31, 2022, 9:11 PM
Calculus of Variations Series
Part I: https://artofproblemsolving.com/community/c2532359h2742841
Part II: This
Reading Level: 6/5
Prerequisites:
You must be familiar with the following:
It is highly recommended that you are familiar with:
The following would be nice to be familiar with, but we're going to review them regardless:
Part 1: The Direct Method
Last time, we left off with the following issue: Using the Euler-Lagrange equation, we managed to find pretty convincing candidates for the minimizer of integral functionals such as
. However, who is to say that the candidate is actually a minimizer?
Typically, the process for actually finding a minimizer for, say, a function like
, goes as follows:
, we can do (1) by using some kind of extreme value argument. Step (2) is done by solving
. (3) is done simply.
In the case of a function(al) in infinite dimensions like
, we're kinda in trouble. Step (2) is handled by the Euler-Lagrange equation, and (3) is a trivial process. But how do we go about (1)?
The Direct Method of the Calculus of Variations handles this. I'm going to state it just so you know what's coming, so don't run away if you have no idea what any of it means:
Let's translate this procedure to English.
A Stupid Example
Let our space be
, and let
. We claim that
has a minimum. Shocking, I know.
Our recipe above will be overkill for this example, but let's just do it so you're more familiar with how this flows.
As you can see, the work we need to do ourselves boils down to (1) finding a lower bound for
, (2) proving a compactness result with respect to an appropriate notion of convergence
, and (3) proving lower-semicontinuity with respect to
.
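As a toy numerical sketch of steps (1)-(3): everything below is a stand-in, with a hypothetical function $f$ chosen only because it is bounded below and easy to analyze (it is not the functional from this post).

```python
# Direct-method skeleton on a *hypothetical* function f: R^2 -> R.

def f(x, y):
    return (x - 1.0) ** 2 + y ** 2

# (1) Lower bound: f >= 0 everywhere, so inf f exists.
# (2) A minimizing sequence: f(x_n, y_n) decreases to inf f = 0.
def minimizing_sequence(n):
    return (1.0 + 1.0 / n, 1.0 / n)

values = [f(*minimizing_sequence(n)) for n in range(1, 10001)]

# (3) The sequence converges to (1, 0); by (lower-semi)continuity of f,
# that limit point attains the infimum, so the minimum exists.
```

Here $f(1 + 1/n, 1/n) = 2/n^2$, so the values shrink toward the infimum $0$, which is attained at $(1,0)$.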
A Not So Stupid Example...?
Problem 1: Let
be an open and bounded interval, and let
be an integral functional defined as:
Prove that
has a minimum.
(From now on, we will abuse notation by dropping the
when the meaning is clear, so you'll see stuff like
.)
Well ok, let's try applying the Direct Method (even though, again, it is very unnecessary...).
Lower Bound
Since
for all
, it is clear that
for all
, hence
is a lower bound.
Compactness
Suppose
. Then
for all
. In particular
is bounded in
. Now we just need to find a subsequence that... uh... er... eh? What convergence do we even choose, and how in the world do we find a converging subsequence? The last example was easy because of Bolzano-Weierstrass, but
is an infinite-dimensional normed space! What in the world could we possibly do???
Sequential Lower-Semicontinuity
Er... maybe we can try and take a stab at this. Let's choose
to be "convergence in
" and see if it works.
Suppose
in
. Let
. We want to prove that
.
By properties of the liminf, there is a subsequence
for which
. We still have
, so by some random theorem ("converse of dominated convergence"), there exists a further subsequence
that converges almost-everywhere to
. Now by Fatou's Lemma:
Half yay?
...
Well that didn't go very well. The problem is that infinite-dimensional calculus is not easy (wow what a surprise).
By choosing
to be
convergence, we managed to prove the sequential lower-semicontinuity, but we were unable to show compactness. In fact, this convergence is actually too strong to get a compactness result! Finding a counterexample is left as an exercise.
What about convergence in measure?
So, we need to choose some convergence weaker than
... and I have just the thing!
Part 2: Weak
Convergence and Weak Compactness
I'm going to try and motivate where the heck this comes from. This motivation is not essential, so feel free to skip straight to the definition of weak convergence if this flies over your head.
One of the biggest secrets of math is that being an object is closely tied to how you are related to other objects.
As an example in linear algebra, consider a point
. By giving you this point
, you... know what
. I mean, duh. But what if I only told you how
is related to other things? Specifically, what if for every element
in the dual space of
(i.e. the space of all linear maps
), I told you the result of
? Can you deduce
? Of course! If we take the map
(the linear map that just spits out the
th coordinate) for each
, then this tells us all the components of
, hence the information we probe about
by attacking it with dual-space elements tells us everything about
.
There is an analogue of this for
space. If
and
is the Holder conjugate of
(so that
), then the dual space of
is the space of linear functions that take the form
, where
. By Holder's inequality,
is always real (i.e. the integral converges in
) because
, hence why we use the Holder conjugate. Now, something called the Riesz Representation Theorem essentially says that being a function
is essentially the same as being a linear map
. What this implies is that a function
is uniquely determined by all the information by attacking
with an element
via
.
We're going to need Riesz-Representation later, so I'm going to state it formally here.
Theorem (RRT):
Continuity?
Anyways, the motto here is that looking at
is closely tied with looking at
for each
. And hopefully this is enough to motivate the star of the show...
Definition (Weak
convergence): Let
be measurable, let
, and consider a sequence
. We say that
converges weakly in
to some function
if
for all
, where
is the Holder conjugate of
. This is equivalent to
If
converges weakly to
in
, we write "
in
".
(You can also define this in the same way for
but it's not as wholesome so we write
instead. We won't be needing this.)
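A classic concrete example (standard in the literature, not taken from this post): on $(0,1)$, the functions $u_n(x) = \sin(n\pi x)$ converge weakly to $0$ in $L^2$, yet each has $\|u_n\|_{L^2}^2 = 1/2$, so they do not converge strongly. A quick numerical check, using midpoint-rule integration and the arbitrary test function $g(x) = x$:

```python
import math

def integrate(h, m=20000):
    # midpoint rule on [0, 1]
    dx = 1.0 / m
    return sum(h((k + 0.5) * dx) for k in range(m)) * dx

def g(x):
    return x  # one fixed test function from the dual space

# pairings <u_n, g> with u_n(x) = sin(n pi x): these tend to 0 ...
weak = [integrate(lambda x, n=n: math.sin(n * math.pi * x) * g(x))
        for n in (1, 10, 100)]

# ... while the L^2 norms do not budge: each integral of sin^2 is exactly 1/2
norm2 = [integrate(lambda x, n=n: math.sin(n * math.pi * x) ** 2)
         for n in (1, 10, 100)]
```

Exactly: $\int_0^1 \sin(n\pi x)\,x\,dx = -(-1)^n/(n\pi) \to 0$, while $\int_0^1 \sin^2(n\pi x)\,dx = 1/2$ for every $n$.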
Well ok, interesting. But before we accept
as a member of the "notion of convergence family", we gotta make sure that it's a sane notion of convergence. You know how there are some weird topological spaces in which limits aren't unique? Let's make sure that doesn't happen.
Proposition: If
and
, then
almost everywhere.
Proof.
Fix
. Then
and
. By adding these together, we get that
. But there's no
in this equation, so this is just saying that
, and this holds for all
.
We can conclude by Riesz-Representation. Alternatively...
Alright cool. Great. But if I'm calling this weird thing weak
convergence, then it better be literally weaker than
convergence (which we sometimes call "strong"
convergence). Let's check that.
Proposition: If
in
, then
in
.
Proof. Fix an arbitrary
. Then by Holder:
But
because
and
by virtue of
. 
Wonderful! But why do we care about weak convergence? Remember, we need a sufficiently weak notion of convergence to get a sort of compactness result. Indeed, we get it!
Theorem (Weak Compactness in
): Let
be a sequence for which
. Then there exists a subsequence
for which
in
.
Proof.
This is a tough proof. Two key ideas:
Alright so first, instead of dealing with
, let's deal with the linear map
. The sketch here is that we're going to find a subsequence of
that converges at enough points to some
. Then by Riesz-Representation, we'll switch back from
to
.
By enough points, I just mean the dense subset
! Let's do this step-by-step.
Hm, doing this forever isn't really helpful. Since this process never ends, I don't end up with a good sequence... or do I? Analysis aficionados know what's coming next: A classic diagonalization argument! Ultimately we now take the subsequence
. Convince yourself that
for all
!
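Here is a toy model of that diagonalization (an invented setup for illustration, not the actual $L^p$ situation): a "sequence of sequences" in which the $m$-th nested subsequence controls only the $m$-th coordinate, so only the diagonal handles all coordinates at once.

```python
# Toy diagonalization: x_n has countably many coordinates x_n[m] = (-1)^(n >> m).

def x(n, m):
    return (-1) ** (n >> m)

def subseq_index(m, k):
    # k-th index of the m-th nested subsequence: multiples of 2^(m+1),
    # along which coordinate m is constantly +1
    return k * 2 ** (m + 1)

# The diagonal: k-th index of the k-th subsequence.
diag = [subseq_index(k, k) for k in range(12)]

# No single level handles all coordinates, but along the diagonal EVERY
# coordinate m settles to +1 once k >= m.
```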
Now we want to define the "limiting linear map"
. Our natural guess is:
Does this work? Well, for all
, we get that
. So this certainly defines
over all
, which is a dense subset of
. Is this enough to make
well-defined over all of
? (We're in some danger because the limit in the definition of
might not exist...)
It indeed does! I mean, it's pretty believable. Here are the gory details.
Lemma: Continuity
T is well-defined
T is linear and bounded
Now we need to switch back to functions. By Riesz Representation, there is a unique
such that
. That is,
for all
. At last, we claim that
in
. Indeed, for any
we have:
Whew! 
That's one piece of the puzzle down: compactness. The other half, sequential lower-semicontinuity, still needs to be handled. Of course, whether or not we can get sequential lower-semicontinuity will also depend on the functional
, not just which convergence we choose (which, again, will be the weak
convergence because it's gotta match what we've been working with above). Naturally, we'll only be able to prove the sequential lower-semicontinuity for some suitably well-behaved functionals.
Researchers in the area, like my advisor Leoni, have proved some different theorems regarding which functionals
will be sequentially lower-semicontinuous with respect to weak convergence. The proofs are completely above my paygrade. In this post, we'll prove a lower-semicontinuity result that arises from convexity. With that...
Intermission: Convexity
You may know a standard definition of convexity from calculus, but we're going to work with a more general notion of convexity.
Definition (Convexity): Let
be a normed space, and let
. We say that
is convex if, for all
, we have that
for all
.
"Hey isn't that just Jens-" Yes.
Let's connect this to the notion of convexity that you may be more familiar with.
Theorem: Let
be twice-differentiable. Then
is convex iff
for all
. For higher dimensions, if
is twice-differentiable, then
is convex iff
is positive semi-definite.
Proof. See https://math.stackexchange.com/questions/2083629/why-does-positive-semi-definiteness-imply-convexity.
Here's what we actually need though.
Theorem:
To visualize what this theorem is saying, take
. Then the motto to imagine is that "you can make any convex graph by drawing a lot of lines".
Proof.
The first point is a boring exercise. The converse is more interesting, and it's what we're actually going to use in the next part.
To proceed, we're going to need to invoke some functional analysis. It's one of those "obvious theorems" that are harder to prove than they look. Essentially: Imagine two convex sets in
. If they don't intersect, then surely you can draw a line separating the two, so that one convex set is on one side of the line and the other convex set is on the other side... right?
The theorem we need is essentially just that but in arbitrarily many dimensions.
Theorem (Geometric Hahn-Banach): Let
be a normed space, and suppose
are disjoint, non-empty convex sets, with
open. Then
and
are separated by a closed hyperplane. That is, there exists a continuous linear map
and a constant
such that
for all
and
.
The "closed hyperplane" that this theorem refers to is basically the set of points
for which
.
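A finite-dimensional sanity check (a completely invented example): in $\mathbb{R}^2$, take one convex set to be the open unit disk around $(-2,0)$ and the other the unit disk around $(2,0)$; then the continuous linear map $\varphi(x,y) = x$ with constant $c = 0$ separates them, and the closed hyperplane is the line $x = 0$.

```python
import random

# phi is a continuous linear map R^2 -> R; {phi = 0} is the line x = 0
def phi(p):
    return p[0]

def sample_disk(cx, cy, r=1.0):
    # rejection-sample a point of the open disk of radius r around (cx, cy)
    while True:
        x, y = random.uniform(-r, r), random.uniform(-r, r)
        if x * x + y * y < r * r:
            return (cx + x, cy + y)

A = [sample_disk(-2.0, 0.0) for _ in range(500)]
B = [sample_disk(2.0, 0.0) for _ in range(500)]

# every point of A lies on one side of the hyperplane, every point of B on the other
assert all(phi(a) <= 0.0 for a in A) and all(0.0 <= phi(b) for b in B)
```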
Why do I want to use this theorem? The key idea is that for
, I want to find an affine function
that passes through
, such that
just "touches" the graph of
at
. That is,
, and
for all
. This indeed sounds like a job for Geometric Hahn-Banach!
Firstly, what is the space
? Since I want to think about the "graph" of
, just using the space
won't do. I really want to think about the product space
, endowed with the norm
.
Next, the convex sets
are as follows:
and
are disjoint, and
must be open (why?). Moreover, both
and
are convex (...why?). Thus we may apply the Geometric Hahn-Banach theorem to find a continuous and linear
and a constant
such that
for all
and
.
Now:
.
We are now ready to choose our affine function. The key idea is that we want to take
to be the real number which satisfies
(because the hyperplane we want the affine function to represent is given by the set of points where
). To make this more well-defined, we can manipulate this desired equality into
by linearity of
, and "solving for
" we obtain the definition we want:
But is this well-defined? We must check that
. Indeed, note that
and
, and subtracting these we get
. Hence we may take this equation to be the definition of
and nobody can complain.
Since
is a continuous linear function
, we conclude that
is affine. Moreover:

So our affine function indeed passes through
. Hence, for this choice of
to be the one we want, it remains to verify that
for all
. Manipulating:




And this is indeed true because
for all
for which
, and sending
we get
. Brilliant.
To summarize: For every
, we have found an affine
for which
for all
, and
. I now claim that the supremum over this family of affine functions is literally just
. Indeed, the inequality
holds for all
, and for all
. So we may take the sup to obtain:
But for
we obtain equality, hence
for all
. 
Wonderful.
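To see the motto in action numerically (with $x^2$ as an arbitrary convex example): the tangent line of $x^2$ at the touch point $t$ is $\ell_t(x) = 2tx - t^2$, and taking a sup over many tangent lines rebuilds the parabola.

```python
# Rebuild the convex function f(x) = x^2 as a sup of its tangent lines:
# l_t(x) = 2 t x - t^2 = x^2 - (x - t)^2 <= x^2, with equality at x = t.

def tangent(t):
    return lambda x: 2.0 * t * x - t * t

lines = [tangent(k / 100.0) for k in range(-300, 301)]  # sampled touch points

def sup_of_lines(x):
    return max(line(x) for line in lines)

# At a sampled touch point the sup is exact; between samples it undershoots
# x^2 by at most (spacing / 2)^2 = 2.5e-5.
```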
Part 3: Weak Sequential Lower-Semicontinuity
Let's recap: We're looking for a notion of convergence that gets us both a compactness result and a sequential lower-semicontinuity result.
convergence is strong enough to get a sequential lower-semicontinuity, but too strong to get a compactness result. By switching to weak
convergence instead, we managed to get a compactness result. But, is it too weak to get a sequential-lower-semicontinuity result?
For some integral functionals, we might not get weak sequential lower-semicontinuity. However, by imposing a convexity condition, we shall!
Theorem: Let
and let
be open and bounded. Suppose that
is convex. Then the integral functional
defined by
is sequentially lower-semicontinuous with respect to weak
convergence.
Proof.
To properly use our digression into convexity, we first need to prove that...
CLAIM:
is convex.
To see this, take
. Take a
. Then, by convexity of
, we have that:
Integrating in
:

But this is just
. This holds for all
and all
, hence
is convex.
Alright, now let's prove the theorem. Take a sequence
with
in
. What we need to prove is that:
By the intermission, and the fact that
is convex, we may write
for some family of affine functions
.
What do such affine functions look like? Well, they're just continuous linear maps plus a constant. Hence
for some continuous linear
and some
.
But hold on, what does
look like? By Riesz-Representation, it must take the form
for some
, where
is the Holder conjugate of
!
Lit. Let's start by going down easily:
Now let us take the liminf:
Hold on, what's going on on the right side? I feel like I should know what
is... aha! By the weak convergence
, we actually have
, because
! Therefore:
Finally, as
was arbitrary, we may take the sup over all indices
to obtain:
Boom. 
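In symbols (the names here, $F = \sup_j \ell_j$ with $\ell_j(u) = \int_I g_j u\,dx + c_j$ and $u_n \rightharpoonup u$, are my own stand-ins for the objects just described), the whole argument compresses to:

```latex
\begin{align*}
F(u_n) \;&\ge\; \ell_j(u_n) \;=\; \int_I g_j\,u_n\,dx + c_j,\\
\liminf_{n\to\infty} F(u_n) \;&\ge\; \lim_{n\to\infty}\left(\int_I g_j\,u_n\,dx + c_j\right)
  \;=\; \int_I g_j\,u\,dx + c_j \;=\; \ell_j(u),\\
\liminf_{n\to\infty} F(u_n) \;&\ge\; \sup_j \ell_j(u) \;=\; F(u).
\end{align*}
```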
Looking back at the functional
defined as
from the beginning, we are finally ready to use the direct method to prove that this has a minimum.
As you can see, we now have the advanced technology necessary to prove that certain functionals of the form
for convex
have a minimum! Unfortunately, this isn't really that amazing (why?).
What about this next functional, defined over differentiable functions
?
Hm, the technology we have so far isn't quite advanced enough to deal with derivatives whatsoever...
...we'll talk about how to deal with this in Part 3.
Prerequisites:
You must be familiar with the following:
- Basic topology (open/closed sets)
- liminf and limsup
space
It is highly recommended that you are familiar with:
- Bolzano-Weierstrass; sequential compactness
- Fatou's Lemma
- Holder's Inequality
The following would be nice to be familiar with, but we're going to review them regardless:
- Convex functions
- Riesz Representation Theorem for
spaces
- Geometric Hahn-Banach theorem
Part 1: The Direct Method
Last time, we left off with the following issue: Using the Euler-Lagrange equation, we managed to find pretty convincing candidates for the minimizer of integral functionals such as

Typically, the process for actually finding a minimizer for, say, a function like

- Prove that a minimum actually exists.
- Using a calculus argument, obtain a list of possible candidates for the minimizer.
- Pick the candidate that produces the minimum value of all the candidates, and conclude by (1) that it produces the minimum value, and hence is a minimizer.


In the case of a function(al) in infinite dimensions like

The Direct Method of the Calculus of Variations handles this. I'm going to state it just so you know what's coming, so don't run away if you have no idea what any of it means:
- Obtain a lower bound on the functional
.
- Deduce that
has an infimum, and thus admits a minimizing sequence
such that
.
- Find a notion of convergence
such that we get the following compactness result: If
is a sequence such that
is a bounded sequence, then there exists a subsequence
such that
for some
. Apply this compactness result to the sequence
from the previous step.
- Prove that
is sequentially lower semi-continuous with respect to
.
- We're done!
Let's translate this procedure to English.
- This is exactly what it sounds like: Find some really small
such that
for all
.
- Any set that is bounded from below has an infimum, so there must exist the infimum
. Of course, this does not necessarily mean that infimum can be obtained... after all, that's precisely what we're trying to prove! What this does imply is that for all
, I can find some
for which
. In particular, this means that
, and we call
a minimizing sequence.
- Let's shelve the motivation for compactness for now. Once we have a minimizing sequence, the idea is to prove that
is sequentially lower-semicontinuous, so that in theory we can turn the
into
for some
.
As a reminder, a function is sequentially lower-semicontinuous at some
if, for every
, we have that
.
This idea doesn't make too much sense yet though, for two reasons. The first problem is that the minimizing sequence we have doesn't necessarily converge to some
. The point of step 3 of the procedure is to fix this! In this step, we show that there is a subsequence
that does converge to some
(and in theory this should be a minimizer of
). This is what it means to prove a compactness result: Given that
is bounded, prove that there is a convergent subsequence of
.
But there is a second issue. We keep talking about "convergence of", both in regard to the compactness result and the "sequential lower-semicontinuity". What kind of convergence is this? All this discussion is moot if I don't specify what notion of convergence this is.
If we're working in, then there isn't much of a choice of convergence besides the standard one. But for function spaces like
, you have a number of choices, like convergence in measure and convergence in
. Some of these choices for convergence are stronger than others. The weaker the convergence you choose is, the easier it will be to prove the compactness result. So we choose some convergence
that is weak enough for
to admit a convergence subsequence
, provided that
is bounded.
Why the Greek letter tau? By "choosing a convergence", what's really going on behind the scenes is that I'm choosing a topology.
(Doesn't make 100% sense because almost-everywhere convergence does not induce a topology, but whatever.)
- Now the final nail in the coffin is to prove that
is sequentially lower-semicontinuous with respect to the convergence
that we chose. That is, if
, then
. The stronger the convergence
is, the easier it is to prove sequential lower-semicontinuity. This means that there is a delicate balance in choosing
: It cannot be too weak nor too strong.
- Now we're obviously done! ... I'm kidding, it's not that obvious. Here's why.
- We have a minimizing sequence
with
.
- By boundedness of
, we may apply the compactness result proved in Step 3 to deduce that there exists a subsequence
such that
for some
.
- By sequential lower-semicontinuity of
with respect to
, we now get that:
- On one hand,
since
is an infimum (in particular it is a lower bound). On the other hand,
, because
(extracting a subsequence doesn't change the limit). Thus
, meaning that
. So indeed there exists
for which
is the global minimum of
, i.e. the minimum is obtained, i.e.
has a minimum. This is precisely what we needed to prove!
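With stand-in names ($F$ for the functional, $(u_n)$ the minimizing sequence, and a subsequence $u_{n_k}$ converging to some $\bar u$ in the chosen convergence), this last bullet is just the chain:

```latex
\inf F \;\le\; F(\bar u)
\;\le\; \liminf_{k\to\infty} F(u_{n_k})
\;=\; \lim_{n\to\infty} F(u_n)
\;=\; \inf F
\qquad\Longrightarrow\qquad F(\bar u) = \inf F.
```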
A Stupid Example
Let our space be



Our recipe above will be overkill for this example, but let's just do it so you're more familiar with how this flows.
- Is
bounded from below? Yep, because
and
, so
for all
.
- Blah blah minimizing sequence blah, we don't need to do any work here, it's all automatic.
- Now we need a compactness result of the following form: If
is a sequence for which
is bounded (which we may write as "$\sup_{n \in \mathbb{N}} f(x_n,y_n) < \infty$" to sound like the cool kids), then we may find a subsequence
for which
.
What convergence should we choose? ...well, we really don't have much of a choice here, do we? Let's just pick the standard Euclidean metric/norm
, i.e. the most obvious choice for convergence in
that there is.
Alright, now suppose. Then that means that
for all
, so
for all
. In particular the sequence
must be bounded, hence by Bolzano-Weierstrass we may find a converging subsequence
. Tada!
- Lastly we need sequential lower-semicontinuity of
. But uh, I mean, it's obviously continuous with respect to the standard Euclidean notion of convergence, so it's certainly lower-semicontinuous and hence sequentially lower-semicontinuous.
- We're done, a minimum exists! We don't need to do work here either.
As you can see, the work we need to do ourselves boils down to (1) finding a lower bound for



A Not So Stupid Example...?
Problem 1: Let




(From now on, we will abuse notation by dropping the


Well ok, let's try applying the Direct Method (even though, again, it is very unnecessary...).
Lower Bound
Since





Compactness
Suppose






Sequential Lower-Semicontinuity
Er... maybe we can try and take a stab at this. Let's choose


Suppose




By properties of the liminf, there is a subsequence






...
Well that didn't go very well. The problem is that infinite-dimensional calculus is not easy (wow what a surprise).
By choosing


What about convergence in measure?
I'm too lazy to think about it, mostly because I find it pretty unwieldy. You might get something using Vitali's theorem. Anyways, the convergence we're going to end up choosing happens to be even weaker than convergence in measure!
So, we need to choose some convergence weaker than

Part 2: Weak

I'm going to try and motivate where the heck this comes from. This motivation is not essential, so feel free to skip straight to the definition of weak convergence if this flies over your head.
One of the biggest secrets of math is that being an object is closely tied to how you are related to other objects.
As an example in linear algebra, consider a point

There is an analogue of this for

We're going to need Riesz-Representation later, so I'm going to state it formally here.
Theorem (RRT):
- For every
, the map
is a linear map
(duh), and is continuous (why?).
- Conversely (!!!), for every continuous linear map
, there is a unique
for which
.
Continuity?
"Aren't all linear maps continuous?"
Somehow, no.
Some people write "bounded" instead of "continuous", which is basically an equivalent condition.
Anyways, the motto here is that looking at



Definition (Weak

(You can also define this in the same way for


Well ok, interesting. But before we accept

Proposition: If



Proof.
Fix







We can conclude by Riesz-Representation. Alternatively...
Define the signed measure
. Clearly
, thus by Radon-Nikodym there exists a unique
for which
for all measurable
. But
for all
of finite measure (since
), and a subadditivity argument lets us extend this to all measurable
. In particular we can write the silly equation
for all
. By the same argument, we see that
for all
. By uniqueness it must follow that
up to almost-everywhere equivalence, as needed.

Alright cool. Great. But if I'm calling this weird thing weak



Proposition: If




Proof. Fix an arbitrary







Wonderful! But why do we care about weak convergence? Remember, we need a sufficiently weak notion of convergence to get a sort of compactness result. Indeed, we get it!
Theorem (Weak Compactness in






Proof.
This is a tough proof. Two key ideas:
- Infinite-dimensional spaces are hard. But
is separable! This means that there exists a sequence
that is dense in
. Countable is good! We like countable. There is hope.
- Instead of dealing with functions, what if we played with linear maps instead? Riesz Representation says that it's basically the same thing, and moreover linear maps output real numbers, which we know a lot about. (This is what weakening the convergence allows us to exploit!!!)
Alright so first, instead of dealing with






By enough points, I just mean the dense subset

- First handle
.
, so the sequence of real numbers
is bounded! By Bolzano-Weierstrass, we may now find a subsequence
of
such that
.
- Next up is
. By the same argument,
is a bounded sequence, so we may find a subsequence
of
such that
. Note that since subsequence extraction preserves limits, we still have
.
- Next is
. By the same argument, we find a subsequence
of
such that
.
- etc. etc. etc., do this forever.
Hm, doing this forever isn't really helpful. Since this process never ends, I don't end up with a good sequence... or do I? Analysis aficionados know what's coming next: A classic diagonalization argument! Ultimately we now take the subsequence



Now we want to define the "limiting linear map"










It indeed does! I mean, it's pretty believable. Here are the gory details.
Lemma: Continuity
CLAIM:
is continuous for all
, with
for all
, for some constant
.
Proof.
So we just choose
.





Proof.


T is well-defined
Take
. We wish to prove that the limit
exists.
By density there exists a subsequence
for which
in
. We first claim that
is convergent. It suffices to prove that it is Cauchy. Take
. Then by the lemma:
Sending
, we get:
Using this inequality, we see that since
is Cauchy in
, it must follow that
is Cauchy. Hence indeed,
for some
.
Now we claim that
, finishing the argument. Indeed:

We are free to choose
. Choosing a large
so that
, we get:
Sending
on both sides, we obtain:
As
was arbitrary, we may send
so that
.


By density there exists a subsequence

Now we claim that

T is linear and bounded
Linearity just follows from linearity of limits (all of which are well-defined, as we have just proven!). For boundedness, just write


Now we need to switch back to functions. By Riesz Representation, there is a unique

That's one piece of the puzzle down: compactness. The other half, sequential lower-semicontinuity, still needs to be handled. Of course, whether or not we can get sequential lower-semicontinuity will also depend on the functional


Researchers in the area, like my advisor Leoni, have proved some different theorems regarding which functionals

Intermission: Convexity
You may know a standard definition of convexity from calculus, but we're going to work with a more general notion of convexity.
Definition (Convexity): Let





$t \in [0,1]$
"Hey isn't that just Jens-" Yes.
Let's connect this to the notion of convexity that you may be more familiar with.
Theorem: Let







Proof. See https://math.stackexchange.com/questions/2083629/why-does-positive-semi-definiteness-imply-convexity.
Here's what we actually need though.
Theorem:
- Let
be a family of affine maps on a normed space
. That is,
for some linear
and some
. Then
defined by
is convex.
- Conversely (!!!), every convex
may be written as
for some family
of affine maps.
To visualize what this theorem is saying, take

Proof.
The first point is a boring exercise. The converse is more interesting, and it's what we're actually going to use in the next part.
To proceed, we're going to need to invoke some functional analysis. It's one of those "obvious theorems" that are harder to prove than they look. Essentially: Imagine two convex sets in

The theorem we need is essentially just that but in arbitrarily many dimensions.
Theorem (Geometric Hahn-Banach): Let

The "closed hyperplane" that this theorem refers to is basically the set of points


Why do I want to use this theorem? The key idea is that for

Firstly, what is the space





Next, the convex sets

is the supergraph of
. That is, it is the set of all
for which
.
is the ray pointing downward from
. That is, it is the set
.

Now:
for all
for which
. Taking
, we have
for all
. Sending
, we deduce by continuity of
that
.
for all
. Sending
, we deduce that
.

We are now ready to choose our affine function. The key idea is that we want to take

Since

To summarize: For every

Wonderful.
Part 3: Weak Sequential Lower-Semicontinuity
Let's recap: We're looking for a notion of convergence that gets us both a compactness result and a sequential lower-semicontinuity result.


For some integral functionals, we might not get weak sequential lower-semicontinuity. However, by imposing a convexity condition, we shall!
Theorem: Let






Proof.
To properly use our digression into convexity, we first need to prove that...
CLAIM:

To see this, take

$t \in [0,1]$






$t \in [0,1]$


Alright, now let's prove the theorem. Take a sequence

What do such affine functions look like? Well, they're just continuous linear maps plus a constant. Hence



But hold on, what does





Lit. Let's start by going down easily:

Looking back at the functional


- Clearly
, so it is bounded from below.
- Suppose a sequence
satisfies
. Then
is bounded in
, so by Part 2 there exists a subsequence
in
. So we have a compactness result.
- Since
is convex, the theorem we just proved tells us that
is sequentially lower-semicontinuous with respect to weak
convergence.
- Therefore, by the direct method,
has a minimum.
As you can see, we now have the advanced technology necessary to prove that certain functionals of the form


What about this next functional, defined over differentiable functions


...we'll talk about how to deal with this in Part 3.

This post has been edited 6 times. Last edited by greenturtle3141, Aug 1, 2022, 6:27 AM