Infinite Dimensional Derivatives and the Calculus of Variations: Part I
by greenturtle3141, Dec 22, 2021, 6:04 AM
Calculus of Variations Series
Part I: This
Part II: https://artofproblemsolving.com/community/c2532359h2896086
Reading Difficulty: 5/5
Prerequisites: Vector space, norm, limits, integration
Useful to know: Integration by parts, directional derivatives, gradients, multidimensional chain rule, differential equations, second-order ODEs
Notational Note: Sometimes I write
as a shortcut/abuse of notation for
.
This is one of my favorite applications of analysis. Buckle your seatbelts!
Part 0: Directional Derivatives
Let $X$ be a normed space (like $\mathbb{R}^n$). Suppose $F: E \to \mathbb{R}$ where $E \subseteq X$, and $x_0 \in E$ is an interior point (i.e. $F$ is defined not just at $x_0$, but also "around" $x_0$). Then we can try and find the "derivative" of $F$ at $x_0$.
Specifically we can take a "directional derivative", but that means that a "direction" for this derivative must be specified. Namely, it is the "rate of change moving along some line through $x_0$". To wit, let $v \in X$ with $v \neq 0$ be a "direction". Imagine this as an arrow that points in the direction of the "derivative".
Definition (Directional Derivative): The directional derivative of $F$ at $x_0$ along the direction $v$ is:
$$\partial_v F(x_0) := \lim_{t \to 0} \frac{F(x_0 + tv) - F(x_0)}{t}$$
For example, let
, and let
, defined on all of
. We can take the directional derivative along
at
:


For fun, you can try and see what happens when you take
, or similar.
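If you want to see the definition in action, here's a quick numerical sketch in Python; the particular $F$, point, and direction below are my own choices for illustration, not necessarily the post's:

```python
def directional_derivative(F, x0, v, t=1e-6):
    """Symmetric difference quotient approximating lim (F(x0+tv) - F(x0-tv)) / (2t)."""
    xp = [a + t * b for a, b in zip(x0, v)]
    xm = [a - t * b for a, b in zip(x0, v)]
    return (F(xp) - F(xm)) / (2 * t)

# Stand-in example: F(x, y) = x^2 + 3y on R^2.
F = lambda p: p[0] ** 2 + 3 * p[1]

# At (1, 1) the gradient is (2, 3), so along v = (1, 2) we expect 2*1 + 3*2 = 8.
print(directional_derivative(F, [1.0, 1.0], [1.0, 2.0]))  # ≈ 8
```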
Part 1: Differentiating in Infinite Dimensions
This might sound scary, but keep in mind that NOTHING changes from the previous example. It's the same exact definition. We're just going to have fun with it! Let's mess around.
We're going to take a spicier vector space. Let $X = C_b(\mathbb{R})$, the normed vector space of bounded continuous functions over $\mathbb{R}$. Of course, if I say it's a normed space, then I have to give you a norm. We're going to take the supremum norm, i.e. for a continuous function $f$:
$$\|f\|_\infty := \sup_x |f(x)|$$
If you don't know what $\sup$ means, you can instead replace it with $\max$ (not technically correct, but good enough for intuition). Convince yourself that it is a norm! That is, we should have these properties:
- $\|f\|_\infty \geq 0$, and $\|f\|_\infty = 0$ if and only if $f = 0$.
- For a real number $c$, we have that $\|cf\|_\infty = |c|\,\|f\|_\infty$.
- $\|f+g\|_\infty \leq \|f\|_\infty + \|g\|_\infty$ (this might be the trickiest one to verify).
Alrighty! We have a vector space. Note that this vector space is of infinite dimension... Uh, what do functions on this vector space look like? These "functions" take in other functions, and output real numbers. Wow! So these are like, "higher-order functions", eh?
Example 1 (Evaluation Map): Let's try this cool function:
For example,
, and so we can compute
because
.
Test your understanding real quick:
We have a function to differentiate, so now we want a direction $v$ to differentiate towards, and a point $f_0$ to take the derivative at... For the sake of generality, let's just let them be $v$ and $f_0$. (I'm calling it $f_0$ to remind myself that it's a function)
Wow this is wacky and I love it.

Interesting!
Bonus Problem: Is
differentiable? That is, does there exist a linear
such that
? If so, what is
? Hint
How can we use the directional derivatives to deduce what it would have to be?
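To make the evaluation map concrete, here's a numerical sketch (the evaluation point $x_0 = 0$ and the particular functions are my own choices): since evaluation is linear, the difference quotient comes out to exactly $v(0)$.

```python
import math

# An evaluation map: F takes in a function and spits out its value at a point.
# The point x0 = 0 is an assumed choice for illustration.
def F(f):
    return f(0.0)

def directional_derivative(F, f0, v, t=1e-6):
    # (F(f0 + t*v) - F(f0)) / t, where f0 + t*v means pointwise addition
    bumped = lambda x: f0(x) + t * v(x)
    return (F(bumped) - F(f0)) / t

f0 = math.cos                    # the "point": a bounded continuous function
v = lambda x: math.exp(-x * x)   # the "direction", with v(0) = 1

# Since F is linear, the difference quotient equals v(0) = 1 for every t:
print(directional_derivative(F, f0, v))  # ≈ 1.0
```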
Example 2 (Integral Functionals): Let's start throwing in the dinosaurs!
$$F(f) := \int_{-1}^1 f(x)\,dx$$
For example, taking $f_0(x) = x^2$ we'd get $F(f_0) = \frac{2}{3}$, or something. Let's try computing the directional derivative of $F$ at $f_0$ in the direction $v$:

$$= \lim_{t \to 0} \frac{1}{t}\left[\int_{-1}^1 f_0(x)+tv(x)\,dx - \int_{-1}^1 f_0(x)\,dx\right]$$
$$= \lim_{t \to 0} \frac{1}{t}\left[\int_{-1}^1 f_0(x)+tv(x) - f_0(x)\,dx\right]$$
$$= \lim_{t \to 0} \int_{-1}^1 v(x)\,dx = \int_{-1}^1 v(x)\,dx$$
Example 3: That was kinda lame because everything just cancelled too well. Let's get spicier:
$$F(f) := \int_{-1}^1 f(x)^2\,dx$$
What happens now?
$$\partial_v F(f_0) = \lim_{t \to 0} \frac{1}{t}\left[\int_{-1}^1 (f_0(x)+tv(x))^2\,dx - \int_{-1}^1 f_0(x)^2\,dx\right] = \lim_{t \to 0} \int_{-1}^1 2f_0(x)v(x) + tv(x)^2\,dx$$
So close! How do we deal with this wacky $tv(x)^2$ term? Intuitively, since $tv(x)^2$ should go to $0$ as $t \to 0$, we would predict that this limit is $\int_{-1}^1 2f_0(x)v(x)\,dx$. This is actually true! This is proven using Lebesgue Dominated Convergence. No worries if you're unfamiliar, but just note that you can't always "shove the limit inside the integral". It's fine here, though! Therefore:
$$\partial_v F(f_0) = \int_{-1}^1 2f_0(x)v(x)\,dx$$
Spicy!
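Here's a quick numerical sanity check of this computation (the base point $f_0(x) = x^2$ and constant direction $v \equiv 1$ are my own choices):

```python
def integral(g, a=-1.0, b=1.0, n=20000):
    # midpoint Riemann sum
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

F = lambda f: integral(lambda x: f(x) ** 2)   # F(f) = int_{-1}^1 f(x)^2 dx

f0 = lambda x: x * x   # base point
v = lambda x: 1.0      # direction

t = 1e-5
numeric = (F(lambda x: f0(x) + t * v(x)) - F(f0)) / t
predicted = integral(lambda x: 2 * f0(x) * v(x))   # = 2 * int_{-1}^1 x^2 dx = 4/3
print(abs(numeric - predicted) < 1e-3)  # True
```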
Bonus Exercise: Generalize the above argument to compute the directional derivative for functionals of the form $F(f) = \int_{-1}^1 g(f(x))\,dx$, where $g$ is a differentiable function with continuous derivative. If you don't know Lebesgue Dominated Convergence, go ahead and swap limits and integrals without proof.
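If you want to check your answer to the bonus exercise numerically, here's a sketch with an assumed concrete choice $g = \sin$; the claim to test is that the directional derivative is $\int_{-1}^1 g'(f_0(x))v(x)\,dx$:

```python
import math

def integral(g, a=-1.0, b=1.0, n=20000):
    # midpoint Riemann sum
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

g, gprime = math.sin, math.cos             # assumed concrete choice of g
F = lambda f: integral(lambda x: g(f(x)))  # F(f) = int_{-1}^1 g(f(x)) dx

f0 = lambda x: x
v = lambda x: x * x

t = 1e-5
numeric = (F(lambda x: f0(x) + t * v(x)) - F(f0)) / t
predicted = integral(lambda x: gprime(f0(x)) * v(x))
print(abs(numeric - predicted) < 1e-3)  # True
```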
Fooling around in infinite dimensions was fun! Here's how we can practically apply this... it's about to get even more fun!
Part 2: The Calculus of Variations
Consider the following four problems:
- What function $f$, subject to given boundary conditions, will minimize a given integral quantity?
- What is the shortest path between two points in the plane?
- Suppose a cliff is $h$ meters high. What is the shape of the track that minimizes the time it would take for a roller coaster starting at the top of the cliff to reach the ground?
- Consider two circular rings positioned "coaxially" in $\mathbb{R}^3$. What is the shape of minimal surface that connects the boundaries of the two rings?
These four problems are all minimization problems. If you're reading this, you've probably found minimums before. It was easy! To find the minimum of something like
, you'd just take the derivative and set it to zero. For those problems, you were finding the point that minimizes a function
. But for the above four problems, we're trying to find a certain function that minimizes some quantity in terms of that function... like, a function of a function.
Hm... Perhaps it would be easier to see where we're headed if we considered the first problem first.
Example 1: Over all differentiable functions $f:[0,1] \to [0,1]$ satisfying
and
, what is the minimum possible value of
? For what
is this achieved?
Well, taking inspiration from calculus, what if we just, like... took the derivative of
, and set it to zero...? Well, we're trying to minimize over a space of functions, which is like, an infinite-dimensional vector space, so the derivative would have to be like, some kind of infinite-dimensional derivative, or something...
Oh wait.
Some Tools We'll Need
We need an analog for "if $g$ is minimized at $x_0$ then $g'(x_0) = 0$."
Theorem 1: Let $F: E \to \mathbb{R}$ where $E \subseteq X$ for $X$ a normed vector space. Suppose $F$ has a local minimum/maximum at $x_0$, where $x_0$ is an interior point and all the directional derivatives at $x_0$ exist. Then we MUST have $\partial_v F(x_0) = 0$ for EVERY direction $v$.
Proof
If it's a local minimum, then surely $t = 0$ is a local minimum of each single-variable function $t \mapsto F(x_0 + tv)$. The derivative of that function at $t = 0$ is exactly the directional derivative along $v$, so it must be $0$.
Next, a small note on integration by parts.
Lemma 2: Let $\varphi \in C_0^1([a,b];\mathbb{R})$. That is, $\varphi$ is continuously differentiable and compactly supported (you can interpret this as $\varphi(a) = \varphi(b) = 0$). Then, for any differentiable, integrable $f$, we have:
$$\int_a^b f'(x)\varphi(x)\,dx = -\int_a^b f(x)\varphi'(x)\,dx$$
Proof
Apply the integration by parts: since $\varphi$ is compactly supported, the boundary term $f\varphi\big|_a^b$ is just $0$.
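Lemma 2 can be sanity-checked numerically; the particular $f$ and $\varphi$ below are my own choices:

```python
def integral(g, a=-1.0, b=1.0, n=20000):
    # midpoint Riemann sum
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x ** 3                   # any differentiable, integrable f
fp = lambda x: 3 * x ** 2              # its derivative
phi = lambda x: (1 - x * x) ** 2       # vanishes at x = ±1 (compactly supported)
phip = lambda x: -4 * x * (1 - x * x)  # its derivative

lhs = integral(lambda x: fp(x) * phi(x))
rhs = -integral(lambda x: f(x) * phip(x))
print(abs(lhs - rhs) < 1e-6)  # True: no boundary term survives
```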
Next, recall (or learn right now) the chain rule in multiple dimensions.
Theorem 3 (Chain Rule): Let $g: \mathbb{R}^n \to \mathbb{R}$ and $\gamma: \mathbb{R} \to \mathbb{R}^n$ be differentiable. Then:
$$\frac{d}{dt} g(\gamma(t)) = \nabla g(\gamma(t)) \cdot \gamma'(t)$$
Proof omitted, but here's an example
Lastly, we'll need this cool trick.
Lemma 4 (Fundamental Lemma of the Calculus of Variations): Let $h:[a,b] \to \mathbb{R}$ be a continuous function, such that $\int_a^b h(x)\varphi(x)\,dx = 0$ for all compactly supported and continuously differentiable $\varphi$ satisfying $\|\varphi\|_\infty \leq 1$. Then $h = 0$.
Proof
Suppose otherwise. Then there exists $x_0 \in [a,b]$ for which $h(x_0) \neq 0$. WLOG $h(x_0) > 0$, and for ease let us assume that $x_0$ is an interior point (the proof is more or less the same otherwise). If we let $\varepsilon = h(x_0)/2$, then by continuity, there exists $\delta > 0$ so small that $h(x) > \varepsilon$ for all $x$ within $\delta$ of $x_0$. Now, it is not hard to construct a smooth $\varphi \geq 0$ that is non-zero only in $(x_0-\delta, x_0+\delta)$. We then obtain:
$$\int_a^b h\varphi\,dx = \int_{(x_0-\delta,x_0+\delta)} h\varphi\,dx + \int_{[a,b] \setminus (x_0-\delta,x_0+\delta)} h\varphi\,dx > 0$$
Contradiction. (We can easily scale $\varphi$ up/down to get $\|\varphi\|_\infty \leq 1$, so we still win.)
Back to the First Example
Firstly, we need to specify the vector space of functions we're using. We will pick $X = C^1([0,1];\mathbb{R})$, which is the normed space of functions $f:[0,1] \to \mathbb{R}$ with continuous derivative. As before, the norm we take is the supremum/maximum norm.
Let's suppose that
is minimized at some
. Then, by Theorem 1, we have for every direction
that:



Switch the limit and integral with a domination argument:

Now let's split this integral into two parts. The key idea is to remove the
and turn it into a
.
This equality holds for all directions
. We're allowed to apply more conditions to
if we want (as long as it's still a direction, it works!), particularly we may assume that
is compactly supported in
. Then, magic happens when we apply integration by parts / Theorem 2 on the second integral! This gives us:

We conclude that
for all
. By the Fundamental Lemma of Calculus of Variations, we find that actually,
. Tada!
This is now just a differential equation! We can solve this
. Plugging in the constraints
and
, we find that
and
. Therefore, the minimizing function is
.
By taking an "infinite-dimensional" derivative, we discovered the minimal value of an "infinite-dimensional" function! Wasn't that fun?
Welcome to the Calculus of Variations.
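You can also watch this happen numerically. The sketch below discretizes an assumed model functional, the Dirichlet energy $\int_0^1 f'(x)^2\,dx$ with $f(0) = 0$ and $f(1) = 1$ (whose Euler-Lagrange equation is $f'' = 0$), and runs gradient descent; the discrete minimizer flattens out to the linear function, just as setting the infinite-dimensional derivative to zero predicts:

```python
# Discretize f on a grid and minimize E(f) = sum_i ((f[i+1]-f[i])/h)^2 * h
# over the interior values, holding the endpoints fixed.
n = 20
h = 1.0 / n
f = [0.0] * (n + 1)
f[n] = 1.0          # boundary conditions f(0) = 0, f(1) = 1

step = 0.01
for _ in range(20000):
    # gradient of E with respect to each interior value f[i]
    grad = [0.0] * (n + 1)
    for i in range(1, n):
        grad[i] = (2.0 / h) * (2 * f[i] - f[i - 1] - f[i + 1])
    for i in range(1, n):
        f[i] -= step * grad[i]

# The limit is the straight line f(x) = x:
print(max(abs(f[i] - i * h) for i in range(n + 1)))  # ~ 0
```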
Part 3: The Euler-Lagrange Equation
You can totally solve the other three problems that I have posed by using this method, but it can get a bit annoying. Instead, let's generalize in order to make a shortcut!
General Problem: Let $L = L(p, z, x)$ be a twice-differentiable function $\mathbb{R}^3 \to \mathbb{R}$. The $p$ component represents $f'(x)$, the $z$ component represents $f(x)$, and the $x$ component represents $x$. Here's an example to explain what that means: If I'm trying to represent the expression $f'(x)^2 + x f(x)$ in the form $L(f'(x), f(x), x)$, then $L(p, z, x) = p^2 + xz$.
Let $F(f) = \int_a^b L(f'(x), f(x), x)\,dx$. What differential equation must be satisfied by an $f$ that minimizes $F$?
Note that the first problem is equivalent to solving this general problem for
.
Step 1: Take the directional derivative
If
is a local minimum, then the directional derivatives at
must all be zero. For any direction
compactly supported in
, we have that:


Step 1.5: Shove the Limit In
This isn't necessarily fun so I'm hiding it
By the Mean Value Theorem, the difference quotient inside the integral equals a derivative of $L$ evaluated at some intermediate point. Since $L$ is twice-differentiable, that derivative is continuous, so it attains a maximum on a compact set, and the integrand is bounded uniformly in $t$. By Arzelà and/or Lebesgue domination, we may conclude that the limit may be passed through the integral.
Step 2: Prep the Chain Rule
The integrand looks like a derivative, and that's because it totally is. It's just:
I personally find it hard to see where chain rule is applied here. To help myself out, I like to define a "component accumulator" function. To wit, let
. Then this is:

Step 3: Apply Chain Rule



Putting back the integral, we have for every
and compactly supported
that:

Step 4: Integrate by Parts
We don't like
. We want to replace it with
. So split the integral:
And apply integration by parts, and the fact that
is compactly supported, on the second term:


Step 5: The Fundamental Lemma
Since this holds for all compactly supported directions $v$, we may use the Fundamental Lemma of Calculus of Variations, and we finally obtain that the factor multiplying $v$ in the integrand vanishes. Ergo:
$$\frac{\partial L}{\partial z}(f'(x), f(x), x) = \frac{d}{dx}\left[\frac{\partial L}{\partial p}(f'(x), f(x), x)\right]$$
This is the Euler-Lagrange Equation.
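Here's a numerical way to see the equation at work, using an assumed example Lagrangian $L(p,z,x) = p^2 + z^2$, whose Euler-Lagrange equation is $f'' = f$: the first variation vanishes along every direction at a solution like $\sinh$, and fails to vanish at a non-solution:

```python
import math

def integral(g, a=0.0, b=1.0, n=20000):
    # midpoint Riemann sum
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# Assumed example Lagrangian: L(p, z, x) = p^2 + z^2, so the Euler-Lagrange
# equation dL/dz = d/dx [dL/dp] reads 2f = 2f'', i.e. f'' = f.
def first_variation(f, fp, v, vp):
    # directional derivative of F(f) = int L(f', f, x) dx along direction v
    return integral(lambda x: 2 * f(x) * v(x) + 2 * fp(x) * vp(x))

v = lambda x: (x * (1 - x)) ** 2           # direction vanishing at the endpoints
vp = lambda x: 2 * x * (1 - x) * (1 - 2 * x)  # its derivative

# sinh solves f'' = f, so every first variation should vanish:
print(abs(first_variation(math.sinh, math.cosh, v, vp)) < 1e-6)          # True
# x^2 does not solve f'' = f, so the first variation is nonzero:
print(abs(first_variation(lambda x: x * x, lambda x: 2 * x, v, vp)) > 1e-3)  # True
```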
Part 4: Euler-Lagrange Abuse
There's a mess of variables flying around, so let's make it really clear how we use Euler-Lagrange.
- First, look at your integrand, and figure out what $L(p, z, x)$ is. This is done by replacing $f(x)$ with $z$ and $f'(x)$ with $p$.
- To get the left side, differentiate $L$ with respect to the $z$ variable, then plug back in $f(x)$ and $f'(x)$. (You can think of this as treating $f(x)$ as a "variable" and differentiating with respect to it.)
- To get the right side, differentiate $L$ with respect to the $p$ variable, and plug back in $f(x)$ and $f'(x)$ ("Differentiate with respect to the $f'(x)$ variable"). Now, take the derivative with respect to $x$.
Our first victim/example will be the shortest distance between two points.
Victim 1 (Shortest Distance): Let
and
be points in
, and let's assume that
. What is the shortest path between these two points?
The Glitch Mob suggests that it is a line, but let's demonstrate that.
We can (ish) model the shortest path as a continuous function $f:[a,c] \to \mathbb{R}$ that satisfies
and
. So we're trying to figure out what function
satisfies these conditions, such that its length (i.e. "arc length") is minimized.
The arc-length formula states that the "length" of this path is given by
(slight warning: We're kinda assuming that
is differentiable... Why we're fine
We don't need differentiable here, we just need "absolutely continuous", and then $f'$ exists "almost everywhere", making this quantity well-defined. This actually makes sense because I'm moderately certain that we don't even define length for curves that aren't absolutely continuous.) Thus here's the problem:
Find the differentiable function
satisfying
and
that minimizes the quantity
.
Now let's follow the Euler-Lagrange formula...
- Looks like our $L$ is $L(p, z, x) = \sqrt{1+p^2}$.
- Let's differentiate with respect to $z$... to get... uh, $0$. This is the LHS.
- Now let's differentiate with respect to $p$ to get $\frac{p}{\sqrt{1+p^2}}$. Plugging back in stuff, this turns into $\frac{f'(x)}{\sqrt{1+f'(x)^2}}$. We're not done yet: We need to differentiate this with respect to $x$ to get $\frac{d}{dx}\left[\frac{f'(x)}{\sqrt{1+f'(x)^2}}\right]$. This is the RHS.
Hence the Euler-Lagrange equation for this minimization problem is given by $0 = \frac{d}{dx}\left[\frac{f'(x)}{\sqrt{1+f'(x)^2}}\right]$. Notice that I didn't bother evaluating this derivative. That's because if the derivative of something is zero, then the something must be a constant function! Thus:
$$\frac{f'(x)}{\sqrt{1+f'(x)^2}} = C$$
For a constant $C$. Solving, we get:
$$f'(x) = \pm\frac{C}{\sqrt{1-C^2}}$$
So $f(x) = mx + b$ for constants $m$ and $b$ and a choice of sign. This is linear, and by applying the original conditions, this must interpolate to the line connecting the two points.
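As a sanity check on this conclusion, here's a numerical comparison showing that bending the straight line only ever increases arc length (the particular perturbations are my own choices):

```python
import math

def arc_length(fp, a=0.0, b=1.0, n=20000):
    # length of the graph of f: int_a^b sqrt(1 + f'(x)^2) dx (midpoint sum)
    h = (b - a) / n
    return sum(math.sqrt(1 + fp(a + (i + 0.5) * h) ** 2) for i in range(n)) * h

# Connect (0,0) to (1,1): the line f(x) = x has f'(x) = 1 and length sqrt(2).
line = arc_length(lambda x: 1.0)

# Any perturbation f(x) = x + s*sin(pi x) still hits both endpoints, but is longer:
for s in (0.05, 0.2, 0.5):
    bent = arc_length(lambda x: 1.0 + s * math.pi * math.cos(math.pi * x))
    assert bent > line

print(abs(line - math.sqrt(2)) < 1e-6)  # True
```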
Victim 2 (Rollercoaster Brachistochrone): You've been hired as an engineer to build a new rollercoaster at the Grand Canyon, which probably violates like 50 laws. It's going to start really high up at
, and its ending point is going to be on the ground at
. To maximize profits, your company wants the ride to take the least time possible. Given that the only force acting on the system is gravity (i.e. gravity is the only source of speed), what shape should the track be?
First, we posit that the track can be modelled as a continuous function $f:[0,h] \to [0,k]$... unless you really, really want loops. How do we compute the time the rollercoaster would take to get to the end given that
is the function modelling the track? The unfortunate answer is physics.
Bad Physics
We need physics to compute the rollercoaster's speed when it reaches a certain point on the track. Fortunately this is simple: The total energy at the top of the track is all gravitationally-sourced. Once the coaster has descended to a given point, some vertical displacement $d$ has occurred, meaning that $mgd$ of gravitational potential energy has been lost. This must all be converted to kinetic energy, hence if $v$ is the speed at that point then $\frac{1}{2}mv^2 = mgd$, i.e. $v = \sqrt{2gd}$.
Which leads into some messy analysis...
Messy Analysis
To be honest, these manipulations don't come naturally to me, so don't fret if you can't follow (alternatively, Google some alternative approaches). Impose the condition that $f$ be absolutely continuous (so that the curve traced is rectifiable), hence $f$ is differentiable almost everywhere and we may define $s(t)$, the length along the curve travelled after a time $t$. Then $s$ has an inverse, and by the chain rule the total time $T$ can be written as an integral along the curve. Now note that the speed is exactly the quantity from the physics above, and writing the arc-length element in terms of $f'$, a substitution turns $T$ into an integral in $x$.
And eventually we find that the time taken
given a track
is given by:

Er, looks like the letter
has been stolen by physics. Oh well. But when has that ever stopped math? The left side of the Euler-Lagrange equation is:
The right side is:
Now set them equal and solve, easy!
...
...If I go through the computations here then both my readers and I will go insane. Fortunately, Mathematica comes to the rescue! This reduces to:
The
term massively upsets me, so if we let
then this is:
It's time for a slick trick. Multiply each side by
to get:
But why would I ever? If we massage this a bit...
...notice that we can apply the product rule for differentiation in reverse!

Thus
for a constant
. One layer down, one to go! Solving for
:
Think: Is
going up or down? It should be going up because
is going down! So we must take the positive solution.
Let's throw away some rigor and "separate and integrate":


(If you're concerned, it's not too hard to restore rigor.)
If integrating this is your cup of tea, go ahead. A trig sub or two should do the trick. Unfortunately I have other things to do, so by stuffing this into Mathematica we get:
Not the prettiest solution, but it will do.
If we restore
then the curve looks something like this:

This curve is called the Brachistochrone!
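The "one layer down" conservation law can be verified directly on the cycloid. In the standard brachistochrone solution it amounts to the combination $f(1+f'^2)$ being constant (with $f$ measuring the drop), and the parametrization $x = a(\theta - \sin\theta)$, $f = a(1 - \cos\theta)$ satisfies this exactly:

```python
import math

# Cycloid through the origin, parametrized by theta (a is an arbitrary scale):
#   x(theta) = a*(theta - sin(theta)),   y(theta) = a*(1 - cos(theta))
# where y measures the drop below the starting height.
a = 1.7
for theta in (0.3, 1.0, 2.0, 3.0):
    y = a * (1 - math.cos(theta))
    dydx = math.sin(theta) / (1 - math.cos(theta))  # dy/dx = y'(theta) / x'(theta)
    conserved = y * (1 + dydx ** 2)
    assert abs(conserved - 2 * a) < 1e-9            # constant, equal to 2a

print("y(1 + y'^2) is constant along the cycloid")
```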
The last victim, I leave to you as an exercise.
Victim 3 (The Cow Containment Catenoid): Space Farmer John got his hands on the elusive Space Cow, and wants to build a special enclosure for it. He insists that the room for the cow consists of a circular floor of radius
and a circular ceiling
, such that the centers of these circles lie directly above each other at a distance
.
Space Farmer John is running out of material, though, and has hired you (yes, you!) to finish the enclosure by building the walls out of the super expensive FlexiMetal material. The mission is to figure out what shape the walls need to be in order to minimize the amount of FlexiMetal used (i.e. minimize the surface area of the walls).
His first thought was to just build the walls up "linearly" to make a truncated cone. But I suspect that he doesn't have the mathematical background to back up his claim. Fortunately, you do now! Is the truncated cone truly the optimal shape? Or does it turn out to be a more surprising shape...? Perhaps, some mysterious surface called a "Catenoid"? Only one way to find out!
If you want, I can start you off:
Start
Tilt the room on its side. We'll make the walls by drawing a continuous function $f$ connecting the edges of the two circles, then rotating this function around the $x$ axis. Of course, this assumes that the best surface is rotationally symmetric. Don't ask me for a proof, though!
For such a function $f$, I'll tell you that the surface area of the revolved surface is given by:
$$2\pi \int f(x)\sqrt{1+f'(x)^2}\,dx$$
Now have at it!
Takeaways
- Function spaces are valid vector spaces, and you can even do "Calculus" on them!
- Having a deep understanding of analysis, in its most general forms, leads to some remarkable results and methodologies.
- The "set the derivative to $0$" philosophy has an infinite-dimensional analogue, which leads to the Euler-Lagrange equation.
For that last point though... we all know that solving $f'(x) = 0$ doesn't necessarily give you a minimum/maximum... sometimes you can get tricked. To wit, must the Euler-Lagrange equation give a minimum? Unfortunately no! We need to ensure, somehow, that a minimum even exists in the first place. But how? ...and other questions that we shall resolve in Part II+.
Enjoy Christmas!
This post has been edited 4 times. Last edited by greenturtle3141, Jul 31, 2022, 9:18 PM