Holomorphic function

A holomorphic function $f: \mathbb{C} \to \mathbb{C}$ is a differentiable complex function. That is, just as in the real case, $f$ is holomorphic at $z$ if $\lim_{h\to 0} \frac{f(z+h)-f(z)}{h}$ exists. This is much stronger than in the real case since we must allow $h$ to approach zero from any direction in the complex plane.

Usually, we speak of functions as holomorphic on (open) sets, rather than at points, for when we consider the behavior of a function at a point, we prefer to consider it in the context of the points nearby.

Cauchy-Riemann Equations

We can obtain an equivalent definition if we break $f$ and $z$ into real and imaginary components.

Specifically, let $u, v : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be defined by \[u(x,y) = \text{Re}\,f(x+iy), \qquad v(x,y) = \text{Im}\,f(x+iy) .\] If $z = x+iy$, then \[f(z) = u(x,y) + i v(x,y).\]

It turns out that we can express the idea "$f$ is holomorphic" entirely in terms of partial derivatives of $u$ and $v$.

Theorem. Let $D$ be an open, connected subset of $\mathbb{C}$. Let us abbreviate $x = \text{Re}\, z$ and $y = \text{Im}\, z$. Then the function $f$ is holomorphic on $D$ if and only if all the partial derivatives of $u$ and $v$ with respect to $x$ and $y$ are continuous on $D$, and the following system holds for every point $z \in D$: \begin{align*} \frac{\partial u}{\partial x} &= \frac{\partial v}{\partial y} ,\\ \frac{\partial u}{\partial y} &= -\frac{\partial v}{\partial x}.  \end{align*} These equations are called the Cauchy-Riemann Equations.

For convenience, we may abbreviate \[\frac{\partial f}{\partial x} = \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x}, \qquad \frac{\partial f}{\partial y} = \frac{\partial u}{\partial y} + i \frac{\partial v}{\partial y} .\] With this abuse of notation, we may rewrite the Cauchy-Riemann equations thus: \[\frac{\partial f}{\partial y} = i \frac{\partial f}{\partial x} .\]
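As a concrete check of the equations above, here is a minimal numeric sketch using the example function $f(z) = z^2$ (my choice of example, not from the article), whose components are $u(x,y) = x^2 - y^2$ and $v(x,y) = 2xy$. The partial derivatives are approximated by central differences.

```python
# Numeric sketch: verify the Cauchy-Riemann equations for f(z) = z**2,
# whose real and imaginary parts are u(x, y) = x**2 - y**2 and v(x, y) = 2*x*y.

def u(x, y):
    return (complex(x, y) ** 2).real   # u = Re f

def v(x, y):
    return (complex(x, y) ** 2).imag   # v = Im f

def partial(g, x, y, wrt, h=1e-6):
    """Central-difference approximation to a partial derivative of g at (x, y)."""
    if wrt == 'x':
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 1.3, -0.7
u_x, u_y = partial(u, x0, y0, 'x'), partial(u, x0, y0, 'y')
v_x, v_y = partial(v, x0, y0, 'x'), partial(v, x0, y0, 'y')

assert abs(u_x - v_y) < 1e-6   # u_x = v_y
assert abs(u_y + v_x) < 1e-6   # u_y = -v_x
```

The same check fails for a non-holomorphic function such as $f(z) = \bar{z}$, for which $u_x = 1$ but $v_y = -1$.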

Proof of theorem

First, suppose that $f$ is complex-differentiable at $z$. Then at $z$, \begin{align*} \frac{\partial f}{\partial y} = \lim_{h\to 0} \frac{f(z+ih)-f(z)}{h} &= i \cdot \lim_{h\to 0} \frac{f(z+ih) - f(z)}{ih} \\ &= i \cdot f'(z) \\ &= i \cdot \lim_{h\to 0} \frac{f(z+h)-f(z)}{h}  = i \cdot \frac{\partial f}{\partial x} .  \end{align*} Breaking $f$ into real and imaginary components, we see \[\frac{\partial u}{\partial y} + i \frac{\partial v}{\partial y} = \frac{\partial f}{\partial y} = i \frac{\partial f}{\partial x} = -\frac{\partial v}{\partial x} + i \frac{\partial u}{\partial x}.\] Setting real and imaginary components equal, we obtain the Cauchy-Riemann equations. It follows from the Cauchy Integral Formula that the second derivative of $f$ exists at $z$; thus the derivative of $f$ is continuous at $z$, and so are the partial derivatives of $u$ and $v$.

Now, suppose the Cauchy-Riemann equations hold at a point $z$, and that the partial derivatives of $u$ and $v$ exist and are continuous in a neighborhood of $z$. Let $h = h_1 + i h_2$ be an arbitrarily small complex number, with $h_1, h_2 \in \mathbb{R}$. Then \begin{align*} \frac{f(z + h) - f(z)}{h} &= \frac{f(z+h_1+ih_2)-f(z+h_1)}{h_1+ih_2} + \frac{f(z+h_1)-f(z)}{h_1+ih_2} \\ &\approx \frac{ih_2}{h_1+ih_2} \frac{\partial f}{\partial y}(z+h_1) + \frac{h_1}{h_1 + ih_2} \frac{\partial f}{\partial x}(z) \\ &\approx \frac{ih_2}{h_1+ih_2} \frac{\partial f}{\partial y}(z) + \frac{h_1}{h_1 + ih_2} \frac{\partial f}{\partial x}(z) , \end{align*} with the first approximation from the definition of the partial derivatives and the second from the continuity of the partial derivatives. We may force $h$ to be small enough that both approximations are arbitrarily accurate. Now, by the Cauchy-Riemann equations, \[\frac{i h_2}{h_1+ih_2} \frac{\partial f}{\partial y}(z) + \frac{h_1}{h_1 + ih_2} \frac{\partial f}{\partial x}(z) = \frac{\partial f}{\partial x} (z) .\] Therefore \[\lim_{h\to 0} \frac{f(z+h)-f(z)}{h} = \frac{\partial f}{\partial x} (z) .\] In particular, the limit exists, so $f$ is differentiable at $z$. Since $z$ was arbitrary, it follows that $f$ is differentiable everywhere in $D$. $\blacksquare$
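The heart of the proof is that the difference quotient tends to the same value no matter how $h$ approaches $0$ in the complex plane. A small numeric sketch, using $f(z) = z^3$ as an example (my choice, not from the article), illustrates this by sending $h \to 0$ along eight different directions:

```python
import cmath

# Sketch: for the holomorphic function f(z) = z**3, the difference quotient
# (f(z+h) - f(z))/h approaches f'(z) = 3*z**2 regardless of the direction
# from which h approaches 0.

def f(z):
    return z ** 3

z = 0.5 + 0.25j
exact = 3 * z ** 2          # f'(z)
t = 1e-6                    # magnitude of the increment h

for k in range(8):          # eight directions spaced around the unit circle
    h = t * cmath.exp(1j * k * cmath.pi / 4)
    quotient = (f(z + h) - f(z)) / h
    assert abs(quotient - exact) < 1e-4
```

By contrast, for $f(z) = \bar{z}$ the quotient along direction $e^{i\theta}$ is $e^{-2i\theta}$, which depends on $\theta$, so no complex derivative exists.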

Analytic Functions

A related notion to that of holomorphicity is that of analyticity. A function $f:\mathbb{C}\to\mathbb{C}$ is said to be analytic at $z$ if $f$ has a convergent power series expansion on some neighborhood of $z$. Amazingly, it turns out that a function is holomorphic at $z$ if and only if it is analytic at $z$. Furthermore, the radius of convergence of this power series is the distance from $z$ to the nearest singularity of $f$.

This is not the case with real functions. Consider, for example, the real function \[f(x) = \frac{1}{x^2 + 1} .\] It is infinitely differentiable along the entire real line, yet its power series diverges when $\lvert x \rvert > 1$. But in the complex plane we see that \[\frac{1}{z^2 +1} = \frac{1}{(z-i)(z+i)}\] has singularities at $z = \pm i$, so the power series must clearly diverge when $\lvert z \rvert > 1$.
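The behavior described above can be checked numerically. The Taylor series of $1/(1+x^2)$ about $0$ is $\sum_{n \ge 0} (-1)^n x^{2n}$; a short sketch (values chosen here for illustration) shows the partial sums converging inside $\lvert x \rvert < 1$ and the terms blowing up outside:

```python
# Numeric sketch: the Taylor series of 1/(1 + x**2) about 0 is
# sum of (-1)**n * x**(2n).  It converges for |x| < 1 and diverges for
# |x| > 1, even though the real function is smooth on the whole real line.

def partial_sum(x, terms):
    return sum((-1) ** n * x ** (2 * n) for n in range(terms))

# Inside the radius of convergence, partial sums approach 1/(1 + x**2):
x = 0.5
assert abs(partial_sum(x, 60) - 1 / (1 + x ** 2)) < 1e-12

# Outside, the individual terms grow without bound, so the series
# cannot converge:
x = 1.5
assert abs(x ** 100) > 1e10   # the n = 50 term already exceeds 10**10
```

The divergence for $\lvert x \rvert > 1$ is invisible from the real graph of the function; it is explained only by the complex singularities at $\pm i$.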

Equivalence of Analytic and Holomorphic Functions

We now prove that all holomorphic functions behave in this orderly way.

Theorem. Let $D$ be a connected, open subset of $\mathbb{C}$, and let $f$ be a holomorphic function on $D$. Then for any $z_0 \in D$, the power series expansion of $f$ about $z_0$ converges, and its radius of convergence is the greatest quantity $R$ for which there exists a holomorphic continuation of $f$ to the set \[\{ z \in \mathbb{C} : \lvert z - z_0 \rvert < R \} .\]

Proof. Since $z_0$ is in $D$, there is some $R_0 >0$ such that $f$ is holomorphic within $R_0+\epsilon$ of $z_0$. Suppose that $\lvert z - z_0 \rvert < R_0$. Let $C$ be the simple, positively oriented circle of radius $R_0$ about $z_0$, and let $M$ be an upper bound on $\lvert f(z) \rvert$ for $z\in C$. By the Cauchy Integral Formula, \[\left\lvert \frac{f^{(n)}(z_0)}{n!}(z-z_0)^n \right\rvert = \biggl\lvert \frac{(z-z_0)^n}{2\pi i} \int\limits_C \frac{f(w)} {(w-z_0)^{n+1}}dw \biggr\rvert \le M \cdot \left\lvert \frac{z-z_0}{R_0} \right\rvert^n .\] The series thus converges geometrically. It follows that if there is a holomorphic extension of $f$ to the set \[\{ z \in \mathbb{C} : \lvert z - z_0 \rvert < R \},\] then the power series of $f$ about $z_0$ converges with radius at least $R$.

Conversely, suppose that the power series expansion of $f$ diverges at some $z$ of distance less than $R$ from $z_0$. Then by the previous paragraph, there is no holomorphic extension of $f$ to all points of distance less than $R$. It follows that the radius of convergence of the Taylor series expansion of $f$ about $z_0$ is indeed the quantity stated in the theorem. $\blacksquare$
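The Cauchy Integral Formula at the center of this proof can be verified numerically: the $n$-th Taylor coefficient $a_n = f^{(n)}(z_0)/n!$ equals $\frac{1}{2\pi i}\int_C \frac{f(w)}{(w-z_0)^{n+1}}\,dw$. The sketch below (example function $e^z$ chosen here, not from the article) approximates the contour integral by a Riemann sum over equally spaced points on the circle:

```python
import cmath
import math

# Sketch: approximate the n-th Taylor coefficient of f about z0 via the
# Cauchy Integral Formula, a_n = (1/(2*pi*i)) * integral_C f(w)/(w-z0)**(n+1) dw,
# with the contour integral discretized as a Riemann sum on a circle.

def taylor_coefficient(f, z0, n, R0=1.0, samples=2000):
    total = 0j
    for k in range(samples):
        theta = 2 * cmath.pi * k / samples
        w = z0 + R0 * cmath.exp(1j * theta)
        dw = 1j * R0 * cmath.exp(1j * theta) * (2 * cmath.pi / samples)
        total += f(w) / (w - z0) ** (n + 1) * dw
    return total / (2j * cmath.pi)

# For f = exp and z0 = 0, the coefficients should be 1/n!:
for n in range(6):
    a_n = taylor_coefficient(cmath.exp, 0, n)
    assert abs(a_n - 1 / math.factorial(n)) < 1e-9
```

For a periodic integrand like this one, the equally spaced Riemann sum is the trapezoid rule, which converges extremely fast for analytic integrands, so 2000 sample points give essentially machine-precision coefficients.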

Strange Consequences of Extension

In some cases, repeated extension of a function may lead to bizarre consequences. For example, we may define a square-root function $f(z)$ that is holomorphic and defined everywhere except on the set of non-positive real numbers. Any power series expansion that avoids the origin will converge. However, if we try to cross the negative real axis with a power series expansion, we will find that our power series expansion gives different results from our original function on the other side of the axis! This is because the square root "function" is in fact a multifunction that can restrict to a holomorphic function on any open subset of $\mathbb{C}$ that does not include a closed path about the origin.
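This jump across the negative real axis is visible in any implementation of the principal square root. A minimal sketch using Python's `cmath.sqrt` (whose branch cut lies along the negative real axis):

```python
import cmath

# Sketch: cmath.sqrt computes the principal branch of the square root,
# with its branch cut on the negative real axis.  Approaching -1 from just
# above the axis gives a value near i; from just below, a value near -i.

above = cmath.sqrt(complex(-1.0, 1e-12))    # just above the cut
below = cmath.sqrt(complex(-1.0, -1e-12))   # just below the cut

assert abs(above - 1j) < 1e-6
assert abs(below + 1j) < 1e-6
```

The two limits differ by a factor of $-1$, exactly the mismatch described above when a power series expansion is continued across the negative real axis.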
