**Download:**
PatchPae2.zip (1663672 downloads)

**Source code:** https://github.com/wj32/PatchPae2

Before using this patch, make sure you have fully removed any other “RAM patches” you may have used. This patch does NOT enable test signing mode and does NOT add any watermarks.

Note: I do not offer **any** support for this. If this did not work for you, either:

- You cannot follow instructions correctly, or
- You cannot use more than 4GB of physical memory on 32-bit Windows due to hardware/software conflicts. See the comments on this page for more information.

In this post, we will memoize a function of type

`f : ('a -> 'b) -> 'a -> 'b`

in F#, along with some interesting applications. Most versions of this code that you can find online only deal with the simple case `'a -> 'b`, which doesn’t allow the function to call a memoized form of itself for recursion. This version does. The trick is to define mutually recursive functions `g : 'a -> 'b` and `h : ('a -> 'b) -> 'a -> 'b`, where `g` is the memoized form of `f` that will get passed to `f` itself, and `h` does the actual work.

```fsharp
open System.Collections.Generic

// memoize : (('a -> 'b) -> 'a -> 'b) -> ('a -> 'b)
let memoize f =
    let mem = Dictionary<'a, 'b>()
    let rec g key = h g key
    and h r key =
        match mem.TryGetValue(key) with
        | (true, value) -> value
        | _ ->
            let value = f g key
            mem.Add(key, value)
            value
    g
```

**Example** (Fibonacci numbers):

```fsharp
let fib r n =
    match n with
    | 0 -> 0
    | 1 -> 1
    | n -> r (n - 1) + r (n - 2)

printfn "%d" ((memoize fib) 20)
```
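The same open-recursion trick carries over to other languages. Here is a rough Python equivalent (an illustrative sketch, not part of the original F# post), where the memoized wrapper `g` is what gets passed back into `f` as the recursion parameter:

```python
def memoize(f):
    """Memoize an openly recursive function f(r, key), where f calls r
    (rather than itself) for its recursive steps."""
    mem = {}
    def g(key):
        # g plays the role of the F# function g: consult the cache,
        # otherwise let f do the actual work, passing g back in.
        if key not in mem:
            mem[key] = f(g, key)
        return mem[key]
    return g

# Fibonacci, written against the recursion parameter r rather than itself.
fib = memoize(lambda r, n: n if n < 2 else r(n - 1) + r(n - 2))
print(fib(20))  # 6765
```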

The generic `memoize` function can work with any key type `'a` that `Dictionary` supports, but this can be unnecessarily slow if we only need `int` keys and we have a known bound for the key. In that case, using an array is substantially faster than using `memoize` with an `int` key:

```fsharp
let memoize1D ni invalid (f : (int -> 'a) -> int -> 'a) =
    let mem = Array.create ni invalid
    let rec g i = h g i
    and h r i =
        if 0 <= i && i < ni then
            match mem.[i] with
            | value when value <> invalid -> value
            | _ ->
                let value = f g i
                mem.[i] <- value
                value
        else
            f g i
    g
```

**Example** (Unbounded knapsack problem):

```fsharp
// Computes the maximum possible total that is less than or equal to n,
// given positive weights w.
let knapsack w n =
    let memoized =
        memoize1D (n + 1) (-1) (fun r k ->
            Array.fold (fun acc x -> if x <= k then max acc (x + r (k - x)) else acc) 0 w)
    memoized n
```

F# has 2D and 3D arrays, which we can take advantage of when we need `int * int` or `int * int * int` keys:

```fsharp
let memoize2D ni nj invalid (f : (int -> int -> 'a) -> int -> int -> 'a) =
    let mem = Array2D.create ni nj invalid
    let rec g i j = h g i j
    and h r i j =
        if 0 <= i && i < ni && 0 <= j && j < nj then
            match mem.[i, j] with
            | value when value <> invalid -> value
            | _ ->
                let value = f g i j
                mem.[i, j] <- value
                value
        else
            f g i j
    g

let memoize3D ni nj nk invalid (f : (int -> int -> int -> 'a) -> int -> int -> int -> 'a) =
    let mem = Array3D.create ni nj nk invalid
    let rec g i j k = h g i j k
    and h r i j k =
        if 0 <= i && i < ni && 0 <= j && j < nj && 0 <= k && k < nk then
            match mem.[i, j, k] with
            | value when value <> invalid -> value
            | _ ->
                let value = f g i j k
                mem.[i, j, k] <- value
                value
        else
            f g i j k
    g
```

**Example** (Money changing problem):

```fsharp
// How many ways are there to make change for n cents, given an unlimited
// supply of coins with the specified values?
let waysToMakeChange (coins : int array) n =
    let memoized =
        memoize2D coins.Length (n + 1) (-1L) (fun r minCoin remaining ->
            if remaining = 0 then 1L
            else
                let mutable sum = 0L
                for i = minCoin to coins.Length - 1 do
                    if remaining >= coins.[i] then
                        sum <- sum + r i (remaining - coins.[i])
                sum)
    memoized 0 n
```

For U.S. coin denominations and a total value of $1, we get:

```
> waysToMakeChange [|1; 5; 10; 25; 50; 100|] 100;;
val it : int64 = 293L
```

The above code doesn’t work for $30 because we quickly run out of stack space:

```
> waysToMakeChange [|1; 5; 10; 25; 50; 100|] 3000;;
Process is terminated due to StackOverflowException.
```

This occurs because the expression `memoized 0 n` expands to an extremely long chain of recursive calls, which we don’t have enough stack space for. To fix this, we can use what we know about the problem to enforce a specific order of execution. Since `memoized i n` calls `memoized i' n'` for \(n' < n\) only, we should insert the following block just before `memoized 0 n`:

```fsharp
for i = 0 to n - 1 do
    memoized 0 i |> ignore
```

Now we get:

```
> waysToMakeChange [|1; 5; 10; 25; 50; 100|] 3000;;
val it : int64 = 379747086L
> waysToMakeChange [|1; 5; 10; 25; 50; 100|] 10000;;
val it : int64 = 139946140451L
```
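Filling the table in a known-safe order is exactly what a bottom-up dynamic program does. As a cross-check (in Python, for illustration only), the same counts can be computed iteratively with no recursion at all:

```python
def ways_to_make_change(coins, n):
    # dp[k] = number of ways to make k using the coins considered so far.
    # Processing one coin at a time plays the role of the minCoin
    # parameter in the F# version, so each combination is counted once.
    dp = [1] + [0] * n
    for c in coins:
        for k in range(c, n + 1):
            dp[k] += dp[k - c]
    return dp[n]

us_coins = [1, 5, 10, 25, 50, 100]
print(ways_to_make_change(us_coins, 100))   # 293
print(ways_to_make_change(us_coins, 3000))  # 379747086
```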

Given coin values \(1 \le c_1 < \cdots < c_m\), the solution to the money changing problem is the coefficient of \(x^n\) in the generating function $$
\frac{1}{(1-x^{c_1})\cdots(1-x^{c_m})}.
$$ When the number of coins is small, a closed form solution can easily be found by hand using partial fraction decomposition and the formula $$
\frac{1}{(1-x)^{k+1}} = \sum_{n\ge 0}\binom{n+k}{k}x^n.$$ This quickly becomes infeasible as the number of coins increases, while the algorithm in `waysToMakeChange` scales fairly well.

We can extend the algorithm to work with an infinite sequence of coins with values \(1 \le c_1 < c_2 < \cdots\), which is possible because F# sequences (really just `IEnumerable`s) are lazy by design:

```fsharp
// Note: coins must be a strictly increasing, possibly infinite sequence.
let waysToMakeChangeSeq (coins : int seq) n =
    let memoized =
        memoize2D (n + 1) (n + 1) (-1L) (fun r minCoin remaining ->
            if remaining = 0 then 1L
            else
                let mutable sum = 0L
                coins
                |> Seq.skip minCoin
                |> Seq.takeWhile ((>=) remaining)
                |> Seq.iteri (fun i coin -> sum <- sum + r (minCoin + i) (remaining - coin))
                sum)
    for i = 0 to n - 1 do
        memoized 0 i |> ignore
    memoized 0 n
```

The associated generating function is $$
\frac{1}{(1-x^{c_1})(1-x^{c_2})\cdots}.
$$

A **partition** of an integer \(n\) is a representation of \(n\) as an unordered sum of positive integers. (An empty sum evaluates to \(0\), so the number of partitions of \(0\) is \(1\).) The number of partitions of \(n\) is really just the money changing problem with an infinite series of coins \((1,2,\dots)\):

```
> let positiveIntegers = Seq.initInfinite ((+) 1);;
val positiveIntegers : seq<int>
> positiveIntegers;;
val it : seq<int> = seq [1; 2; 3; 4; ...]
> waysToMakeChangeSeq positiveIntegers 0;;
val it : int64 = 1L
> waysToMakeChangeSeq positiveIntegers 5;;
val it : int64 = 7L
> waysToMakeChangeSeq positiveIntegers 100;;
val it : int64 = 190569292L
```

So there are \(190,569,292\) different ways of writing \(100\) as an unordered sum of positive integers.
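Those partition numbers are easy to double-check. The following Python sketch (for illustration; the post itself uses F#) computes \(p(n)\) as change-making with the "coins" \(1,2,\dots,n\):

```python
def partitions(n):
    # Number of partitions of n: count ways to make n from parts 1..n,
    # processing one part size at a time so order does not matter.
    dp = [1] + [0] * n
    for c in range(1, n + 1):
        for k in range(c, n + 1):
            dp[k] += dp[k - c]
    return dp[n]

print(partitions(0), partitions(5), partitions(100))  # 1 7 190569292
```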

\(\DeclareMathOperator{\tr}{tr}\DeclareMathOperator{\sgn}{sgn}\DeclareMathOperator{\Id}{Id}\)Horn’s inequality states that for any two compact operators \(\sigma,\tau\) on a Hilbert space \(E\), $$\prod_{k=1}^n s_k(\sigma\tau) \le \prod_{k=1}^n s_k(\sigma)s_k(\tau)$$ where \(s_1(\tau),s_2(\tau),\dots\) are the singular values of \(\tau\) arranged in descending order. Alfred Horn’s original 1950 paper provides a short proof that relies on the following result:

**Theorem.** *If \(H\) is a positive, symmetric, completely continuous operator whose first \(n\) eigen-values are \(\lambda_1,\dots,\lambda_n\), then $$\det[(Hy_i,y_j)] \le \lambda_1\cdots\lambda_n\det[(y_i,y_j)]$$ for any elements \(y_1,\dots,y_n\).*

It seems that back in 1950, they used the term “completely continuous” instead of the more modern “compact”. Horn also states:

Weyl’s elegant proof uses an appeal to the theory of \(n\)-tensors. A straightforward proof may be given by using the relation \((Hy_i,y_j)=\sum_k\lambda_k(y_i,x_k)(x_k,y_j)\), where the \(x_k\) form a complete ortho-normal set.

I couldn’t really understand Weyl’s paper, and if you try to expand \(Hy_i\) as suggested, things get messy very quickly. (You will probably get there eventually!) In the process of trying to come up with a proof, I discovered that the theorem is actually a very intuitive and cleverly disguised statement about the \(n\)th exterior power of \(E\).

I’m going to assume that you know what the terms “exterior algebra” and “\(k\)th exterior power” mean. Let \(E\) be a Hilbert space over \(K\), where \(K=\mathbb{R}\) or \(K=\mathbb{C}\). We will denote the \(k\)th exterior power of \(E\) by \(\Lambda^k E\). The basic idea is that the element \(v_1\wedge\cdots\wedge v_k \in \Lambda^k E\) represents an oriented \(k\)-parallelotope spanned by \(v_1,\dots,v_k \in E\). The vector space \(\Lambda^k E\) consists of all possible linear combinations of such elements. Since \(E\) has an inner product, we can give \(\Lambda^k E\) an inner product that satisfies \begin{align}
\langle u_1\wedge\cdots\wedge u_k, v_1\wedge\cdots\wedge v_k\rangle
&= \det((\langle u_i,v_j \rangle)_{i,j}) \\
&= \det\begin{bmatrix}
\langle u_1,v_1 \rangle & \cdots & \langle u_1,v_k \rangle \\
\vdots & \ddots & \vdots \\
\langle u_k,v_1 \rangle & \cdots & \langle u_k,v_k \rangle
\end{bmatrix}.
\end{align}

To show that such an inner product exists, use the fact that the determinant of a matrix is alternating and multilinear in the rows and columns, and apply the universal property of the \(k\)th exterior power twice. The next theorem proves that this inner product is in fact positive definite.

When \(u_i=v_i\) for \(i=1,\dots,k\), the matrix on the right hand side above is called the **Gram matrix** of \(v_1,\dots,v_k\). Geometrically, \(|v_1\wedge\cdots\wedge v_k|\) is the volume of the \(k\)-parallelotope spanned by \(v_1,\dots,v_k\). Note that \(|v_1\wedge\cdots\wedge v_k|=|v_1|\cdots|v_k|\) whenever \(v_1,\dots,v_k\) are orthogonal.

**Theorem 1.** *If \(v_1,\dots,v_k \in E\) then the Gram matrix $$G(v_1,\dots,v_k)=(\langle v_i,v_j \rangle)_{i,j}$$ is positive semidefinite. It is invertible if and only if \(v_1 \wedge\cdots\wedge v_k \ne 0\).*

*Proof.* For all \(x=(x_1,\dots,x_k)\in K^k\), \begin{align}
\langle G(v_1,\dots,v_k)x,x \rangle
&= \sum_{i=1}^k\sum_{j=1}^k\langle v_i,v_j \rangle x_i\overline{x_j} = \sum_{i=1}^k\sum_{j=1}^k\langle x_i v_i,x_j v_j \rangle \\
&= \left\vert\sum_{i=1}^k x_i v_i\right\vert^2 \ge 0. \tag{*}
\end{align} It is a well-known result that \(v_1\wedge\cdots\wedge v_k=0\) if and only if \(v_1,\dots,v_k\) are linearly dependent. If \(v_1,\dots,v_k\) are linearly dependent then \(G(v_1,\dots,v_k)\) is not invertible because its columns are linearly dependent. Conversely, if \(G(v_1,\dots,v_k)\) is not invertible then \(G(v_1,\dots,v_k)x=0\) for some \(x\ne 0\). (*) shows that \(\sum_{i=1}^k x_i v_i = 0\), so \(v_1,\dots,v_k\) are linearly dependent. \(\square\)

**Theorem 2** (Hadamard’s inequality). *For all \(v_1,\dots,v_k\in E\) we have $$|v_1\wedge\cdots\wedge v_k|\le|v_1|\cdots|v_k|,$$ with equality if and only if \(v_1,\dots,v_k\) are orthogonal or \(v_i=0\) for some \(i\).*

*Proof.* We can assume that \(v_1,\dots,v_k\ne 0\). Let \(u_i=v_i/|v_i|\) for \(i=1,\dots,k\). Let \(\lambda_1,\dots,\lambda_k\ge 0\) be the eigenvalues of \(G(u_1,\dots,u_k)\), with repetitions. Using the AM-GM inequality, \begin{align}
|u_1\wedge\cdots\wedge u_k|^2
&= \det(G(u_1,\dots,u_k)) = \lambda_1\cdots\lambda_k \\
&\le \left(\frac{\lambda_1+\cdots+\lambda_k}{k}\right)^k = \left(\frac{\tr G(u_1,\dots,u_k)}{k}\right)^k \\
&= \left(\frac{|u_1|^2+\cdots+|u_k|^2}{k}\right)^k = 1,
\end{align} with equality if and only if \(\lambda_1=\cdots=\lambda_k\). This is true if and only if \(G(u_1,\dots,u_k)\) is the identity matrix, i.e., \(\{u_1,\dots,u_k\}\) is an orthonormal set. \(\square\)
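For a concrete feel for Hadamard’s inequality in \(\mathbb{R}^3\), here is a small numerical check (a Python sketch for illustration; the Gram determinant is computed straight from the definition):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def det3(m):
    # Cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def gram_volume(vs):
    # |v1 ^ v2 ^ v3| = sqrt(det G(v1, v2, v3)).
    g = [[dot(u, v) for v in vs] for u in vs]
    return math.sqrt(det3(g))

vs = [(1.0, 2.0, 2.0), (0.0, 3.0, 4.0), (1.0, 0.0, 1.0)]
vol = gram_volume(vs)                                  # volume of the parallelotope
bound = math.prod(math.sqrt(dot(v, v)) for v in vs)    # |v1||v2||v3|
print(vol <= bound)  # True
```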

An important consequence that we will be using is:

**Corollary 3.** *The alternating multilinear map \((v_1,\dots,v_k)\mapsto v_1\wedge\cdots\wedge v_k\) is continuous.*

We have turned \(\Lambda^k E\) into an inner product space, but in general \(\Lambda^k E\) is not complete (when \(E\) is infinite-dimensional). From now on, we will be working with its completion \(\overline{\Lambda^k}E\), which is a Hilbert space by definition.

The formula below is the key to understanding the proof of the theorem stated at the beginning of this post. We denote the set of continuous linear operators on \(E\) by \(L(E)\).

**Theorem 4.** *Let \(E\) be a Hilbert space and let \(\{e_\alpha\}_{\alpha\in A}\) be an ordered Hilbert basis for \(E\). If \(f\in L(E)\) and \(fe_\alpha=\lambda_\alpha e_\alpha\) for all \(\alpha\in A\), then for all \(v_1,\dots,v_k\in E\), $$
fv_1\wedge\cdots\wedge fv_k = \sum_{\alpha_1<\cdots<\alpha_k} \lambda_{\alpha_1}\cdots\lambda_{\alpha_k} \langle v_1\wedge\cdots\wedge v_k,e_{\alpha_1}\wedge\cdots\wedge e_{\alpha_k} \rangle e_{\alpha_1}\wedge\cdots\wedge e_{\alpha_k}.$$*

*Proof.* Due to Corollary 3, we can apply the usual Fourier expansion formula to get \begin{align}
& fv_1\wedge\cdots\wedge fv_k \\
&= \sum_{\alpha_1\in A} \langle v_1,e_{\alpha_1}\rangle fe_{\alpha_1} \wedge\cdots\wedge \sum_{\alpha_k\in A} \langle v_k,e_{\alpha_k}\rangle fe_{\alpha_k} \\
&= \sum_{\alpha_1\in A}\cdots\sum_{\alpha_k\in A} \lambda_{\alpha_1}\cdots\lambda_{\alpha_k} \langle v_1,e_{\alpha_1}\rangle\cdots\langle v_k,e_{\alpha_k}\rangle e_{\alpha_1}\wedge\cdots\wedge e_{\alpha_k} \\
&= \sum_{\alpha_1<\cdots<\alpha_k} \lambda_{\alpha_1}\cdots\lambda_{\alpha_k} \sum_{\sigma\in S_k} \langle v_1,e_{\alpha_{\sigma(1)}}\rangle\cdots\langle v_k,e_{\alpha_{\sigma(k)}}\rangle e_{\alpha_{\sigma(1)}}\wedge\cdots\wedge e_{\alpha_{\sigma(k)}} \\
&= \sum_{\alpha_1<\cdots<\alpha_k} \lambda_{\alpha_1}\cdots\lambda_{\alpha_k} \sum_{\sigma\in S_k} \sgn(\sigma) \langle v_1,e_{\alpha_{\sigma(1)}}\rangle\cdots\langle v_k,e_{\alpha_{\sigma(k)}}\rangle e_{\alpha_1}\wedge\cdots\wedge e_{\alpha_k} \\
&= \sum_{\alpha_1<\cdots<\alpha_k} \lambda_{\alpha_1}\cdots\lambda_{\alpha_k} \det((\langle v_i,e_{\alpha_j} \rangle)_{i,j}) e_{\alpha_1}\wedge\cdots\wedge e_{\alpha_k} \\
&= \sum_{\alpha_1<\cdots<\alpha_k} \lambda_{\alpha_1}\cdots\lambda_{\alpha_k} \langle v_1\wedge\cdots\wedge v_k,e_{\alpha_1}\wedge\cdots\wedge e_{\alpha_k} \rangle e_{\alpha_1}\wedge\cdots\wedge e_{\alpha_k}.
\end{align} \(\square\)

If we apply the theorem to \(f=\Id_E\), then we have the formula $$
v_1\wedge\cdots\wedge v_k = \sum_{\alpha_1<\cdots<\alpha_k} \langle v_1\wedge\cdots\wedge v_k,e_{\alpha_1}\wedge\cdots\wedge e_{\alpha_k} \rangle e_{\alpha_1}\wedge\cdots\wedge e_{\alpha_k}.$$ This shows that the set $$\mathcal{B}=\{e_{\alpha_1}\wedge\cdots\wedge e_{\alpha_k}:\alpha_1<\cdots<\alpha_k\}$$ is a Hilbert basis for \(\overline{\Lambda^k}E\), where \(\{\alpha_1,\dots,\alpha_k\}\) ranges over the subsets of \(A\) with \(k\) elements. Also note that $$|v_1\wedge\cdots\wedge v_k|^2 = \sum_{\alpha_1<\cdots<\alpha_k} |\langle v_1\wedge\cdots\wedge v_k,e_{\alpha_1}\wedge\cdots\wedge e_{\alpha_k} \rangle|^2.$$

Theorem 4 states that if \(f\in L(E)\) is diagonalizable with respect to \(\{e_\alpha\}_{\alpha\in A}\), then the induced map \(\Lambda^k f\in L(\overline{\Lambda^k}E)\) (satisfying \((\Lambda^k f)(v_1\wedge\cdots\wedge v_k) = fv_1\wedge\cdots\wedge fv_k\)) is diagonalizable with respect to \(\mathcal{B}\). Furthermore, the eigenvalue corresponding to \(e_{\alpha_1}\wedge\cdots\wedge e_{\alpha_k}\) is \(\lambda_{\alpha_1}\cdots\lambda_{\alpha_k}\).

We now return to the theorem stated at the beginning of this post, using our new notation.

**Theorem 5.** *Let \(\rho\in L(E)\) be a positive compact operator and let \(\lambda_1\ge\lambda_2\ge\cdots\) be the eigenvalues of \(\rho\) (with repetition). If \(v_1,\dots,v_n\in E\) then $$\langle \rho v_1\wedge\cdots\wedge\rho v_n,v_1\wedge\cdots\wedge v_n\rangle \le \lambda_1\cdots\lambda_n|v_1\wedge\cdots\wedge v_n|^2.$$*

*Proof.* Write \(\rho=\sum_{k=1}^\infty \lambda_k e_k e_k^*\) for some countable orthonormal set \(\{e_k\}\) in \(E\), and choose an ordered Hilbert basis \(\{e_\alpha\}_{\alpha\in A}\) for \(E\) that contains \(\{e_k\}\). Since \(\rho e_\alpha=0\) whenever \(e_\alpha\notin\{e_k\}\), Theorem 4 shows that \begin{align}
& \rho v_1\wedge\cdots\wedge\rho v_n \\
&= \sum_{k_1 < \cdots < k_n} \lambda_{k_1}\cdots\lambda_{k_n} \langle v_1\wedge\cdots\wedge v_n,e_{k_1}\wedge\cdots\wedge e_{k_n} \rangle e_{k_1}\wedge\cdots\wedge e_{k_n}.
\end{align} Therefore \begin{align}
& \langle \rho v_1\wedge\cdots\wedge\rho v_n,v_1\wedge\cdots\wedge v_n\rangle \\
&= \sum_{k_1 < \cdots < k_n} \lambda_{k_1}\cdots\lambda_{k_n} |\langle v_1\wedge\cdots\wedge v_n,e_{k_1}\wedge\cdots\wedge e_{k_n} \rangle|^2 \\
&\le \lambda_1\cdots\lambda_n \sum_{k_1 < \cdots < k_n} |\langle v_1\wedge\cdots\wedge v_n,e_{k_1}\wedge\cdots\wedge e_{k_n} \rangle|^2 \\
&\le \lambda_1\cdots\lambda_n \sum_{\alpha_1<\cdots<\alpha_n} |\langle v_1\wedge\cdots\wedge v_n,e_{\alpha_1}\wedge\cdots\wedge e_{\alpha_n} \rangle|^2 \\
&= \lambda_1\cdots\lambda_n|v_1\wedge\cdots\wedge v_n|^2.
\end{align} \(\square\)

**Lemma 6.** *If \(f\in L(E)\), \(u_1,\dots,u_k\in E\), and \(v_1,\dots,v_k\in E\), $$\langle fu_1\wedge\cdots\wedge fu_k, v_1\wedge\cdots\wedge v_k\rangle = \langle u_1\wedge\cdots\wedge u_k, f^* v_1\wedge\cdots\wedge f^* v_k\rangle.$$*

*Proof.* This follows directly from the fact that \(\det((\langle fu_i,v_j \rangle)_{i,j}) = \det((\langle u_i,f^* v_j \rangle)_{i,j})\). \(\square\)

Finally, we can prove Horn’s inequality.

**Theorem 7** (Horn’s inequality). *Let \(\sigma,\tau\in L(E)\) be compact. For all \(n\), $$\prod_{k=1}^n s_k(\sigma\tau) \le \prod_{k=1}^n s_k(\sigma)s_k(\tau).$$*

*Proof.* Let \(\sigma\tau = \sum_{k=1}^\infty s_k(\sigma\tau) v_k u_k^*\) be a singular value decomposition of \(\sigma\tau\). Clearly $$|\sigma\tau u_1\wedge\cdots\wedge\sigma\tau u_n| = \prod_{k=1}^n s_k(\sigma\tau).$$ By Theorem 5 and Lemma 6, \begin{align}
\prod_{k=1}^n s_k(\sigma\tau)^2
&= |\sigma\tau u_1\wedge\cdots\wedge\sigma\tau u_n|^2 \\
&= \langle (\sigma^*\sigma)\tau u_1\wedge\cdots\wedge (\sigma^*\sigma)\tau u_n, \tau u_1\wedge\cdots\wedge \tau u_n\rangle \\
&\le \left(\prod_{k=1}^n s_k(\sigma)^2\right) |\tau u_1\wedge\cdots\wedge \tau u_n|^2 \\
&= \left(\prod_{k=1}^n s_k(\sigma)^2\right) \langle (\tau^*\tau) u_1\wedge\cdots\wedge (\tau^*\tau) u_n, u_1\wedge\cdots\wedge u_n\rangle \\
&\le \left(\prod_{k=1}^n s_k(\sigma)^2\right) \left(\prod_{k=1}^n s_k(\tau)^2\right).
\end{align} \(\square\)
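Horn’s inequality is easy to test numerically. Below is a small Python sketch (an illustration, not part of the post) for \(2\times 2\) real matrices, computing singular values from the eigenvalues of \(A^{T}A\) via the quadratic formula:

```python
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def singular_values(a):
    # Singular values = square roots of the eigenvalues of A^T A,
    # in descending order.
    at = [[a[j][i] for j in range(2)] for i in range(2)]
    m = matmul(at, a)
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return [math.sqrt((tr + disc) / 2), math.sqrt(max((tr - disc) / 2, 0.0))]

A = [[2.0, 0.0], [1.0, 1.0]]
B = [[1.0, 2.0], [0.0, 3.0]]
sA, sB, sAB = singular_values(A), singular_values(B), singular_values(matmul(A, B))

# prod_{k<=n} s_k(AB) <= prod_{k<=n} s_k(A) s_k(B) for n = 1, 2
print(sAB[0] <= sA[0] * sB[0] + 1e-9)                            # True
print(sAB[0] * sAB[1] <= sA[0] * sA[1] * sB[0] * sB[1] + 1e-9)   # True
```

For \(n=2\) the two sides are both \(|\det A\,\det B| = 6\), so Horn’s inequality holds with equality there.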


\(\newcommand{\bbc}{\mathbb{C}}\newcommand{\Int}{\operatorname{Int}}\)

Navigation: 1. Exact, conservative and closed forms | 2. Locally exact forms and singular homology

Let \(F\) be a complex Banach space, let \(U\subseteq\bbc\) be an open set, and let \(f:U\to F\). Recall that the complex Fréchet derivative of \(f\) at \(z\in U\), if it exists, is a \(\bbc\)-linear map \(Df(z):\bbc\to F\). Not all real differentiable functions on \(\bbc\) are *complex* differentiable: for example, the (real) derivative of \(f(z)=\overline{z}\) (i.e. \(f(x+yi)=x-yi\)) at any point is represented by the matrix $$\begin{bmatrix}1 & 0 \\ 0 & -1\end{bmatrix},$$ but \(Df(z)\) is clearly not a \(\bbc\)-linear map.
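The direction-dependence of the difference quotient for \(f(z)=\overline{z}\) is easy to see numerically. In this Python sketch (for illustration), approaching along the real axis gives \(1\) while approaching along the imaginary axis gives \(-1\), so no single complex number can serve as the derivative:

```python
def quotient(f, z, h):
    # Difference quotient (f(z + h) - f(z)) / h for complex step h.
    return (f(z + h) - f(z)) / h

conj = lambda z: z.conjugate()
z = 1 + 1j
print(quotient(conj, z, 1e-8))   # ≈ 1  (real direction)
print(quotient(conj, z, 1e-8j))  # ≈ -1 (imaginary direction)
```

Indeed, the quotient is \(\overline{h}/h\), which takes every value on the unit circle as \(h\to 0\).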

Since \(\bbc\) is a one-dimensional vector space over itself, \(Df(z)\) is completely determined by the value \(Df(z)(1)\in F\): $$
Df(z)w=wDf(z)(1).
$$ We can therefore identify \(Df(z)\) with \(Df(z)(1)\), and from now on we will use the notation \(f'(z)=Df(z)(1)\in F\). For example, if \(f(z)=z^2\) then the Fréchet derivative is \(Df(z)w=2zw\), and \(f'(z)=Df(z)(1)=2z\) as usual.

**Theorem 18.** *Let \(U\subseteq\bbc\) be an open set. A function \(f:U\to F\) is complex differentiable at \(z\in U\) if and only if the limit $$c=\lim_{h\to 0}\frac{f(z+h)-f(z)}{h}$$ exists. In that case, \(c=f'(z)\).*

*Proof.* We have $$
\lim_{h\to 0}\frac{f(z+h)-f(z)}{h} = c \Leftrightarrow \lim_{h\to 0}\frac{f(z+h)-f(z)-ch}{|h|} = 0,
$$ and the right-hand side is precisely the statement that \(f\) is complex differentiable at \(z\) with \(Df(z)h=ch\). \(\square\)

If \(f:U\to F\) is complex differentiable at all \(z\in U\), then we say that \(f\) is **holomorphic on \(U\)** or simply **holomorphic**. If \(A\subseteq\bbc\) is any set, we say that \(f\) is **holomorphic on \(A\)** if it is holomorphic on an open set containing \(A\). If \(z\in\bbc\) and \(f\) is holomorphic on a neighborhood of \(z\), we say that \(f\) is **holomorphic at \(z\)**.

Using the basic properties of the derivative, we have:

**Theorem 19** (Chain rule). *Let \(U,V\subseteq\bbc\) be open sets. Let \(f:U\to\bbc\) and \(g:V\to F\) with \(f(U)\subseteq V\). If \(f\) is complex differentiable at \(z\) and \(g\) is complex differentiable at \(f(z)\), then \(g\circ f\) is complex differentiable at \(z\) and $$(g\circ f)'(z)=g'(f(z))f'(z).$$*

**Theorem 20.** *Let \(U\subseteq\bbc\) be an open set, let \(F_1,F_2\) be complex Banach spaces, and let \(f:U\to F_1\) and \(g:U\to F_2\) be complex differentiable at \(z\in U\).*

- *If \(f\) is constant then \(f'(z)=0\).*
- *If \(F_1=F_2\) then \((f+g)'(z)=f'(z)+g'(z)\).*
- *\((cf)'(z)=cf'(z)\) for all \(c\in\bbc\).*
- *If \(F_1=\bbc\) or \(F_2=\bbc\), then \((fg)'(z)=f'(z)g(z)+f(z)g'(z)\).*
- *If \(F_2=\bbc\) and \(g(z)\ne 0\) then \((f/g)'(z)=[f'(z)g(z)-f(z)g'(z)]/g(z)^2\).*

**Theorem 21.** *Let \(U\subseteq\bbc\) be an open set. A function \(f:U\to\bbc\) is complex differentiable at \(z\in U\) if and only if it is real differentiable at \(z\) and the real derivative \(Df(z)\) is represented by a matrix of the form $$\begin{bmatrix}a & -b \\ b & a\end{bmatrix}.$$ In that case, \(Df(z)\) is the matrix representation of the complex number \(f'(z)\). Furthermore, \(\det Df(z)=|f'(z)|^2\).*

The preceding theorem shows that a holomorphic function \(f:U\to\bbc\) is simply a real differentiable function with the property that its derivative is a scalar times a rotation matrix at every point of \(U\). (That is, the derivative is \(\bbc\)-linear.) Suppose that \(f(x+yi)=u(x,y)+v(x,y)i\) where \(u,v\) are real valued functions. If \(f\) is holomorphic then the theorem implies that $$
\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}\quad\mathrm{and}\quad\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}.
$$ These are known as the **Cauchy-Riemann equations**.

We now define complex line integrals as in part 1, taking \(E=\bbc\). If \(U\subseteq\bbc\) is an open set and \(f:U\to F\) is continuous, then we define its **associated form** \(\omega_f:U\to L(\bbc,F)\) by $$\omega_f(z)w=wf(z).$$ If \(\gamma\) is a curve in \(U\) then the **integral of \(f\) along \(\gamma\)** is defined by $$
\int_\gamma f = \int_\gamma f(z)\,dz = \int_\gamma \omega_f.
$$ Note that if \(\gamma:[a,b]\to U\) is a curve with partition \(\{a_0,\dots,a_k\}\), then $$
\int_\gamma f = \sum_{i=1}^k \int_{a_{i-1}}^{a_i} f(\gamma(t))\gamma'(t)\,dt.
$$ The usual properties in Theorem 1 apply. A holomorphic function \(g:U\to F\) satisfying \(f=g'\) is called a **primitive** of \(f\). It is easy to check that any potential function for \(\omega_f\) (a function \(g\) such that \(\omega_f=Dg\)) is a primitive for \(f\).

**Example 22.** Let \(n\) be an integer and define a curve \(\gamma:[0,2\pi]\to\bbc\) by \(\gamma(t)=e^{it}\). Then \begin{align}
\int_\gamma z^n\,dz &= i\int_0^{2\pi} e^{(n+1)it}\,dt \\
&= \begin{cases}2\pi i, & n=-1, \\ 0, & n\ne -1.\end{cases}
\end{align}

Suppose \(f:U\to F\) is holomorphic. Since \(\omega_f=\ell\circ f\) where \(\ell:F\to L(\bbc,F)\) is the linear map given by \(\ell(x)h=hx\), we have \begin{align}
D\omega_f(z)(u,v) &= (D\ell(f(z))Df(z)u)(v) \\
&= \ell(Df(z)u)(v) \\
&= uvf'(z).
\end{align} Clearly, \(D\omega_f(z)\) is symmetric for all \(z\in U\). Thus \(\omega_f\) is closed, and Goursat’s theorem (Corollary 8) shows that \(\omega_f\) is locally exact. As a consequence, we can define the integral of a holomorphic function \(f\) along any path (see Lemma 9).

**Theorem 23** (Cauchy’s theorem, local version). *Let \(U\subseteq\bbc\) be an open set, let \(\gamma_1,\gamma_2\) be paths in \(U\) that are homotopic, and let \(f\) be holomorphic on \(U\). Then $$\int_{\gamma_1} f = \int_{\gamma_2} f.$$ In particular, if \(U\) is simply connected then $$\int_\gamma f=0$$ for any closed path \(\gamma\) in \(U\).*

*Proof.* Apply Theorem 10. \(\square\)

If \(C\) is a circle, we write $$\int_C f$$ for the integral of \(f\) along \(C\), taken counterclockwise.

**Theorem 24** (Cauchy’s integral formula, local version). *Let \(D\) be a closed disc and let \(f\) be holomorphic on \(D\). Then $$f(z)=\frac{1}{2\pi i}\int_{\partial D} \frac{f(\zeta)}{\zeta-z}\,d\zeta$$ for every \(z\in\Int D\).*

*Proof.* Let \(U\) be an open set containing \(D\) on which \(f\) is holomorphic. For small \(r > 0\), the circle \(C_r\) of radius \(r\) around \(z\) is contained in \(D\). Note that \(C_r\) is homotopic to \(\partial D\) in \(U\setminus\{z\}\), so using Example 22 and Theorem 23 we have \begin{align}
\left\vert \frac{1}{2\pi i}\int_{\partial D}\frac{f(\zeta)}{\zeta-z}\,d\zeta-f(z)\right\vert &= \left\vert \frac{1}{2\pi i}\int_{C_r}\frac{f(\zeta)}{\zeta-z}\,d\zeta - \frac{1}{2\pi i}\int_{C_r}\frac{f(z)}{\zeta-z}\,d\zeta \right\vert \\
&= \left\vert \frac{1}{2\pi i}\int_{C_r}\frac{f(\zeta)-f(z)}{\zeta-z}\,d\zeta\right\vert \\
&\le \frac{1}{2\pi} 2\pi r \sup_{\zeta\in C_r} \left\vert \frac{f(\zeta)-f(z)}{\zeta-z} \right\vert \\
&\to 0
\end{align} as \(r\to 0\) since \(f\) is complex differentiable at \(z\). \(\square\)

The **open disc** \(D_r(z_0)\) is the set \(\{z\in\bbc:|z-z_0| < r\}\), and the **closed disc** \(\overline{D}_r(z_0)\) is the set \(\{z\in\bbc:|z-z_0| \le r\}\).

**Theorem 25.** *Let \(D=\overline{D}_r(z_0)\) be a closed disc and let \(f\) be holomorphic on \(D\). Then $$
f(z)=\sum_{n=0}^\infty a_n(z-z_0)^n \tag{*}
$$ for every \(z\in\Int D\), where $$
a_n = \frac{1}{2\pi i} \int_{\partial D} \frac{f(\zeta)}{(\zeta-z_0)^{n+1}}\,d\zeta = \frac{1}{n!} f^{(n)}(z_0).
$$ We have $$
|a_n| \le \frac{1}{r^n} \sup_{\zeta\in\partial D} |f(\zeta)|,
$$ so the power series in (*) has a radius of convergence of at least \(r\).*

*Proof.* By Theorem 24, we have $$f(z)=\frac{1}{2\pi i}\int_{\partial D} \frac{f(\zeta)}{\zeta-z}\,d\zeta.$$ Let \(0 < s < r\) and let \(D'=\overline{D}_s(z_0)\). For all \(z\in D'\) and \(\zeta\in\partial D\) we have \begin{align}
\frac{1}{\zeta-z} &= \frac{1}{\zeta-z_0}\left(\frac{1}{1-\frac{z-z_0}{\zeta-z_0}}\right) \\
&= \frac{1}{\zeta-z_0}\sum_{n=0}^\infty \left(\frac{z-z_0}{\zeta-z_0}\right)^n,
\end{align} where the geometric series converges absolutely and uniformly for \(\zeta\in\partial D\) since $$
\left\vert\frac{z-z_0}{\zeta-z_0}\right\vert \le \frac{s}{r} < 1.
$$ Therefore \begin{align}
f(z) &= \frac{1}{2\pi i}\int_{\partial D} \frac{f(\zeta)}{\zeta-z_0} \sum_{n=0}^\infty \left(\frac{z-z_0}{\zeta-z_0}\right)^n\,d\zeta \\
&= \sum_{n=0}^\infty \left[ \frac{1}{2\pi i}\int_{\partial D}\frac{f(\zeta)}{(\zeta-z_0)^{n+1}}\,d\zeta \right] (z-z_0)^n
\end{align} for all \(z\in D'\). Since \(0 < s < r\) was arbitrary, (*) holds for all \(z\in\Int D\). \(\square\)

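The coefficient formula lends itself to a numerical experiment. The following Python sketch (illustration only) approximates \(a_n = \frac{1}{2\pi i}\int_{\partial D}\frac{f(\zeta)}{(\zeta-z_0)^{n+1}}\,d\zeta\) for \(f=\exp\), \(z_0=0\), \(r=1\) by a Riemann sum over the unit circle, and recovers \(a_3=1/3!\):

```python
import cmath, math

def taylor_coefficient(f, n, steps=4096):
    # Riemann sum for (1 / (2*pi*i)) * integral of f(z) / z^(n+1) dz
    # over the unit circle; for smooth periodic integrands this sum
    # converges extremely fast.
    total = 0j
    for j in range(steps):
        z = cmath.exp(2j * cmath.pi * j / steps)
        dz = 2j * cmath.pi * z / steps
        total += f(z) / z ** (n + 1) * dz
    return total / (2j * cmath.pi)

a3 = taylor_coefficient(cmath.exp, 3)
print(abs(a3 - 1 / math.factorial(3)) < 1e-9)  # True
```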
A function \(f:\bbc\to F\) is called **entire** if it is holomorphic on \(\bbc\). From Theorem 25 we can see that a function is entire if and only if it is represented by a power series with infinite radius of convergence.

**Theorem 26.** *Let \(f\) be an entire function. If there is a constant \(c\) and a positive integer \(k\) such that $$\sup_{|z|=r} |f(z)| \le cr^k$$ for all \(r > 0\), then \(f\) is a polynomial of degree \(k\) or less (with coefficients in \(F\)).*

*Proof.* Write \(f(z)=\sum_{n=0}^\infty a_n z^n\) where \(a_n\in F\). By Theorem 25, we have $$
|a_n| \le \frac{1}{r^n} \sup_{|z|=r} |f(z)| \le cr^{k-n}.
$$ If \(n > k\), we can take \(r\to\infty\) to deduce that \(a_n=0\). \(\square\)

**Corollary 27** (Liouville’s theorem). *Any bounded entire function is constant.*

As a simple application, we show that the spectrum of any element of a complex Banach algebra is nonempty. Let \(A\) be a complex unital Banach algebra and let \(x\in A\). We define the **spectrum** of \(x\), denoted by \(\sigma(x)\), to be the set of numbers \(\lambda\in\bbc\) such that \(x-\lambda 1\) is not invertible (where \(1\) is the unit in \(A\)).

**Theorem 28.** *The spectrum of any \(x\in A\) is nonempty.*

*Proof.* Suppose that \(\sigma(x)\) is empty. The map \(f:\bbc\to A\) given by \(z\mapsto (x-z1)^{-1}\) is entire, since \begin{align}
Df(z)w &= -(x-z1)^{-1}(-w1)(x-z1)^{-1} \\
&= w(x-z1)^{-2}.
\end{align} (See this result.) If \(|z| > |x|\) then \(x-z1=-z(1-x/z)\) is invertible and $$
|(x-z1)^{-1}|=|z|^{-1}|(1-x/z)^{-1}|\le\frac{|z|^{-1}}{1-|x/z|} \to 0
$$ as \(|z|\to\infty\). Since \(f\) is entire and vanishes at infinity, it is bounded, so \(f\) is constant by Liouville’s theorem; hence \(f=0\). But \(f(z)=(x-z1)^{-1}\ne 0\) for any \(z\in\bbc\), which is a contradiction. \(\square\)

An important consequence of Theorem 17 is the following result, which is the global version of Cauchy’s theorem (Theorem 23):

**Theorem 29** (Cauchy’s theorem). *Let \(U\subseteq\bbc\) be an open set, let \(\gamma_1,\gamma_2\) be 1-cycles in \(U\) that are homologous, and let \(f\) be holomorphic on \(U\). Then $$
\int_{\gamma_1} f = \int_{\gamma_2} f.
$$*

Usually, Cauchy’s theorem is stated in terms of winding numbers (which will be defined shortly). Our goal is to prove the following:

**Theorem 30** (Cauchy’s theorem with winding numbers). *Let \(U\subseteq\bbc\) be an open set, let \(\gamma_1,\gamma_2\) be 1-cycles in \(U\) such that \(W(\gamma_1,z)=W(\gamma_2,z)\) for all \(z\in\bbc\setminus U\), and let \(f\) be holomorphic on \(U\). Then $$\int_{\gamma_1} f = \int_{\gamma_2} f.$$*

We recall some concepts and theorems from algebraic topology. For any topological space \(X\), we define a **loop** in \(X\) to be a continuous map \(\gamma:[0,1]\to X\) with \(\gamma(0)=\gamma(1)\), and we say that \(\gamma\) is **based** at \(\gamma(0)\). Let \(\mathbb{S}^1=\{z\in\bbc:|z|=1\}\) be the circle. The map \(q:\mathbb{R}\to\mathbb{S}^1\) given by \(s\mapsto e^{2\pi is}\) is a universal covering of \(\mathbb{S}^1\). Let \(f:[0,1]\to\mathbb{S}^1\) be a loop based at a point \(z_0\in\mathbb{S}^1\). We define the **winding number** of \(f\) by \(\widetilde{f}(1)-\widetilde{f}(0)\), where \(\widetilde{f}:[0,1]\to\mathbb{R}\) is any lift of \(f\). Since any two lifts of \(f\) differ by a constant, the winding number is well-defined. Since \(\widetilde{f}(1)\) and \(\widetilde{f}(0)\) are both elements of the fiber \(q^{-1}(\{z_0\})\), they differ by an integer; thus the winding number of a loop is always an integer.

**Theorem 31.** *Let \(f,g\) be loops in \(\mathbb{S}^1\) based at the same point. Then \(f\) and \(g\) are (path) homotopic if and only if they have the same winding number.*

Now let \(z_0\in\bbc\) and let \(\gamma:[0,1]\to\bbc\setminus\{z_0\}\) be a closed path. Define a retraction \(r:\bbc\setminus\{z_0\}\to\mathbb{S}^1\) by $$r(z)=\frac{z-z_0}{|z-z_0|}.$$ Then \(r\circ\gamma\) is a loop in \(\mathbb{S}^1\), and we can define the **winding number of \(\gamma\) with respect to \(z_0\)** to be the winding number of \(r\circ\gamma\). We denote this integer by \(W(\gamma,z_0)\). If we consider \(\gamma\) as a singular 1-cycle in the homology group \(H_1(\bbc\setminus\{z_0\})\cong\mathbb{Z}\), then \([\gamma]=W(\gamma,z_0)[\alpha]\) where \(\alpha\) is the generator of \(H_1(\bbc\setminus\{z_0\})\) defined by \(\alpha(s)=z_0+e^{2\pi is}\). Therefore, for any 1-cycle \(\gamma\) in \(\bbc\setminus\{z_0\}\) we define the **winding number of \(\gamma\) with respect to \(z_0\)** to be the unique integer \(W(\gamma,z_0)\) such that \([\gamma]=W(\gamma,z_0)[\alpha]\).

**Theorem 32.**

- *If \(\gamma\) is homologous to \(\eta\) in \(\bbc\setminus\{z_0\}\), then \(W(\gamma,z_0)=W(\eta,z_0)\).*
- *If \(\gamma_1,\dots,\gamma_k\) are closed paths and \(n_1,\dots,n_k\) are integers, then $$W(n_1\gamma_1+\cdots+n_k\gamma_k,z_0)=n_1 W(\gamma_1,z_0)+\cdots+n_k W(\gamma_k,z_0).$$*

Note that we have a convenient expression for the winding number of a 1-cycle as an integral:

**Theorem 33.** *For every 1-cycle \(\gamma\) in \(\bbc\setminus\{z_0\}\), we have $$
W(\gamma,z_0)=\frac{1}{2\pi i}\int_\gamma \frac{1}{z-z_0}\,dz.
$$*

*Proof.* We first prove the result for closed paths in \(\bbc\setminus\{z_0\}\). By Theorem 12, we may assume that \(\gamma\) is a closed curve. By linearity, we may also assume that \(\gamma\) is a \(C^1\) path. Let \(\widetilde{\gamma}:[0,1]\to\mathbb{R}\) be a lift of \(r\circ\gamma\); then \(\widetilde{\gamma}\) is \(C^1\) and $$

e^{2\pi i\widetilde{\gamma}(s)} = \frac{\gamma(s)-z_0}{f(s)},

$$ where \(f(s)=|\gamma(s)-z_0|\). We compute \begin{align}

\frac{1}{2\pi i}\int_\gamma \frac{1}{z-z_0}\,dz &= \frac{1}{2\pi i}\int_0^1 \frac{\gamma'(s)}{\gamma(s)-z_0}\,ds \\

&= \frac{1}{2\pi i}\int_0^1 \frac{2\pi if(s)\widetilde{\gamma}'(s)e^{2\pi i\widetilde{\gamma}(s)}+f'(s)e^{2\pi i\widetilde{\gamma}(s)}}{f(s)e^{2\pi i\widetilde{\gamma}(s)}}\,ds \\

&= \frac{1}{2\pi i}\int_0^1 \left(2\pi i\widetilde{\gamma}'(s)+\frac{f'(s)}{f(s)}\right)\,ds \\

&= \frac{1}{2\pi i}[2\pi i\widetilde{\gamma}(s)+\log f(s)]_0^1 \\

&= \widetilde{\gamma}(1)-\widetilde{\gamma}(0) \quad \text{(since } f(0)=f(1)\text{)} \\

&= W(\gamma,z_0).

\end{align} Now let \(\gamma\) be a 1-cycle in \(\bbc\setminus\{z_0\}\). By Theorem 15, \(\gamma\) is homologous to a sum \(\sum_{j=1}^k c_j \gamma_j\) where each \(\gamma_j\) is a closed path. Then $$

W(\gamma,z_0)=\sum_{j=1}^k c_j W(\gamma_j,z_0) = \sum_{j=1}^k c_j \frac{1}{2\pi i} \int_{\gamma_j} \frac{1}{z-z_0}\,dz = \frac{1}{2\pi i}\int_\gamma \frac{1}{z-z_0}\,dz.

$$ \(\square\)
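As a sanity check, the integral formula of Theorem 33 is easy to evaluate numerically. The following sketch (my own illustration; the function and variable names are not from the text) approximates \(\frac{1}{2\pi i}\int_\gamma\frac{1}{z-z_0}\,dz\) by a Riemann sum over small increments of \(\gamma\):

```python
import cmath

def winding_number(gamma, z0, n=20000):
    """Approximate (1/(2 pi i)) * integral of dz/(z - z0) along
    gamma: [0, 1] -> C by summing increments against midpoint values."""
    total = 0j
    prev = gamma(0.0)
    for k in range(1, n + 1):
        cur = gamma(k / n)
        mid = (prev + cur) / 2  # sample 1/(z - z0) at the midpoint of the step
        total += (cur - prev) / (mid - z0)
        prev = cur
    return total / (2j * cmath.pi)

# The unit circle traversed twice winds twice around 0 ...
double_loop = lambda s: cmath.exp(4j * cmath.pi * s)
print(round(winding_number(double_loop, 0).real))  # 2
# ... and zero times around a point far outside the loop.
print(round(winding_number(double_loop, 5).real))  # 0
```

The rounded value is an integer exactly as the theory predicts, even though the Riemann sum itself is only approximate.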

**Lemma 34.** *Let \(\gamma:[a,b]\to\bbc\) be a curve and let \(A=\gamma([a,b])\). The function $$\alpha\mapsto\int_\gamma \frac{1}{z-\alpha}\,dz$$ is continuous on \(\bbc\setminus A\).*

*Proof.* Let \(\alpha_0\in\bbc\setminus A\). The function \(t\mapsto|\alpha_0-\gamma(t)|\) is positive and continuous on \([a,b]\), so it attains a minimum \(r > 0\). For all \(|\alpha-\alpha_0| < r/2\) and \(t\in[a,b]\) we have $$
|\alpha-\gamma(t)| \ge |\alpha_0-\gamma(t)|-|\alpha-\alpha_0| \ge r/2,
$$ so \begin{align}
\left\vert \int_\gamma\left(\frac{1}{z-\alpha}-\frac{1}{z-\alpha_0}\right)\,dz \right\vert &\le L(\gamma) \sup_{t\in[a,b]} \left\vert\frac{\alpha-\alpha_0}{(\gamma(t)-\alpha)(\gamma(t)-\alpha_0)}\right\vert \\
&\le L(\gamma)\frac{4}{r^2}|\alpha-\alpha_0| \\
&\to 0
\end{align} as \(\alpha\to\alpha_0\). \(\square\)

**Corollary 35.** *Let \(\gamma:[a,b]\to\bbc\) be a closed curve and let \(A=\gamma([a,b])\). If \(E\) is a connected subset of \(\bbc\setminus A\), then \(z\mapsto W(\gamma,z)\) is constant on \(E\). If \(E\) is unbounded, then \(W(\gamma,z)=0\) for all \(z\in E\).*

*Proof.* The first claim follows from Theorem 33 and Lemma 34: \(z\mapsto W(\gamma,z)\) is continuous and integer-valued, hence constant on the connected set \(E\). Let \(n\) be the winding number of \(\gamma\) with respect to any point of \(E\). We have $$

n=\frac{1}{2\pi i}\int_\gamma\frac{1}{\zeta-z}\,d\zeta

$$ for all \(z\in E\), so \(n=0\) since $$

\left\vert \int_\gamma\frac{1}{\zeta-z}\,d\zeta \right\vert \to 0

$$ as \(|z|\to\infty\). \(\square\)

We now come to the fundamental theorem that links singular homology and winding numbers. We will provide a proof later.

**Theorem 36.** *Let \(U\subseteq\bbc\) be an open set and let \(\gamma\) be a 1-cycle in \(U\). If \(W(\gamma,z)=0\) for all \(z\in\bbc\setminus U\), then \(\gamma\) is a boundary, i.e. \(\gamma=\partial b\) for some 2-chain \(b\).*

**Corollary 37.** *Let \(\gamma,\eta\) be 1-cycles in \(U\). Then \(\gamma\) and \(\eta\) are homologous if and only if \(W(\gamma,z)=W(\eta,z)\) for all \(z\in\bbc\setminus U\).*

Clearly, Theorem 30 (our goal) follows directly from Corollary 37.

If \(c=\sum_{i=1}^k c_i\sigma_i\) is a \(p\)-chain where \(c_i\ne 0\), we define the **image** of \(c\) to be the set \(\bigcup_{i=1}^k \sigma_i(\triangle_p)\).

**Theorem 38** (Cauchy’s integral formula). *Let \(U\subseteq\bbc\) be an open set, let \(\gamma\) be a 1-cycle in \(U\) homologous to 0, and let \(f\) be holomorphic on \(U\). For all \(z\in U\) not in the image of \(\gamma\) we have $$
W(\gamma,z)f(z)=\frac{1}{2\pi i}\int_\gamma\frac{f(\zeta)}{\zeta-z}\,d\zeta.
$$*

*Proof.* Write \(f(\zeta)=\sum_{n=0}^\infty a_n(\zeta-z)^n\) in a neighborhood of \(z\). Let \(C\) be a small circle centered at \(z\), contained in this neighborhood. By Theorem 30, \begin{align}

\frac{1}{2\pi i}\int_\gamma\frac{f(\zeta)}{\zeta-z}\,d\zeta &= \frac{1}{2\pi i}\int_{W(\gamma,z)C}\frac{f(\zeta)}{\zeta-z}\,d\zeta \\

&= \frac{1}{2\pi i}\sum_{n=0}^\infty \int_{W(\gamma,z)C} a_n(\zeta-z)^{n-1}\,d\zeta \\

&= a_0\frac{1}{2\pi i}\int_{W(\gamma,z)C} \frac{1}{\zeta-z}\,d\zeta \\

&= W(\gamma,z)f(z).

\end{align} \(\square\)

If \(\gamma:[a,b]\to\bbc\) is a closed curve and there exists a partition \(\{a_0,\dots,a_k\}\) of \([a,b]\) such that \(\gamma|_{[a_{j-1},a_j]}\) is a horizontal or vertical line segment for each \(j\), then we say that \(\gamma\) is **rectangular**. A **rectangular** 1-cycle is a 1-cycle that can be written as a sum of rectangular closed curves. A **grid** is a union of finitely many vertical or horizontal lines in \(\bbc\). Every grid partitions \(\bbc\) into a finite number of rectangular regions, some bounded and some unbounded. Then it is clear that for any rectangular 1-cycle \(\gamma\) there is a grid \(G\) for which \(\gamma=\sum_{i=1}^k c_i\sigma_i\), where each \(\sigma_i\) is an edge of a bounded rectangle. We say that \(G\) is a **grid for \(\gamma\)**.

**Lemma 39.** *Let \(\gamma\) be a rectangular 1-cycle in \(\bbc\), let \(G\) be a grid for \(\gamma\), and let \(R_1,\dots,R_n\) be the bounded rectangles. For each \(i\), choose some \(p_i\in\Int R_i\). Then $$
\gamma=\sum_{i=1}^n W(\gamma,p_i)\partial R_i.
$$ (Each \(\partial R_i\) is oriented counterclockwise.)*

*Proof.* Let \(\eta=\gamma-\sum_{i=1}^n W(\gamma,p_i)\partial R_i\); it is clear that \(W(\eta,p)=0\) for any \(p\) not on the grid (i.e. not on the boundary of some bounded or unbounded rectangle). Suppose that \(\eta\ne 0\) and write \(\eta=m\sigma+\eta'\), where \(m\ne 0\), \(\sigma\) is an edge of a bounded rectangle \(R\), and \(\eta'\) is some 1-chain not containing \(\sigma\). Then \(\sigma\) is also an edge of exactly one other rectangle \(R'\), which is either bounded or unbounded. Choose \(p\in\Int R\) and \(p'\in\Int R'\). Then \(W(\partial R,p)=1\) and \(W(\partial R,p')=0\), so \begin{align}

W(\eta-m\partial R,p) &= W(\eta,p)-mW(\partial R,p)=-m, \\

W(\eta-m\partial R,p') &= W(\eta,p')-mW(\partial R,p')=0.

\end{align} But the image \(E\) of \(\eta-m\partial R\) does not contain the edge \(\sigma\), so \(p\) and \(p'\) are in the same connected component of \(\bbc\setminus E\). Therefore $$

W(\eta-m\partial R,p)=W(\eta-m\partial R,p')

$$ by Corollary 35, which is a contradiction. \(\square\)

*Proof of Theorem 36.* By using an argument similar to that of Theorem 12, we may assume that \(\gamma\) is a rectangular 1-cycle in \(U\). Let \(G\) be a grid for \(\gamma\), let \(R_1,\dots,R_n\) be the bounded rectangles, and choose some \(p_i\in\Int R_i\) for each \(i\). By Lemma 39, we have $$\gamma=\sum_{i=1}^n W(\gamma,p_i)\partial R_i.$$ Suppose some \(R_i\) contains a point \(p\in\bbc\setminus U\); then \(W(\gamma,p)=0\). Note that \(p\) cannot be in the image of \(\gamma\), since that image is contained in \(U\). If \(p\in\Int R_i\), then \(W(\gamma,p_i)=W(\gamma,p)=0\) since \(\Int R_i\) is connected. If \(p\in\partial R_i\), then \(p\) and \(p_i\) lie in the same connected component of the complement of the image of \(\gamma\), so again we have \(W(\gamma,p_i)=W(\gamma,p)=0\). Therefore \(R_i\subseteq U\) whenever \(W(\gamma,p_i)\ne 0\), and \(\gamma\) is the boundary of the 2-chain $$

\sum_{i=1}^n W(\gamma,p_i) R_i.

$$ \(\square\)

Navigation: 1. Exact, conservative and closed forms | 2. Locally exact forms and singular homology | **3. Applications to complex analysis**

As in part 1, \(E,F\) are Banach spaces and \(U\subseteq E\) is an open set. Recall that a form \(\omega:U\to L(E,F)\) is **exact** if \(\omega=Df\) for some \(f:U\to F\), **closed** if it is differentiable and \(D\omega(x)\in L(E,E;F)\) is symmetric for every \(x\in U\), and **locally exact** if it is closed and for every \(x\in U\) there is a neighborhood \(V\subseteq U\) of \(x\) on which \(\omega\) is exact. Last time we showed that every closed \(C^1\) form is locally exact, and that every closed form on an open subset of \(\mathbb{R}^2\) is locally exact (even if it is differentiable but not \(C^1\)).

So far, we have only defined line integrals along \(C^1\) paths and curves. It turns out that for locally exact forms, we can extend the definition to paths that are merely continuous (and not necessarily differentiable).

Let \(\omega\) be a locally exact form on \(U\) and let \(\gamma:[a,b]\to U\) be a path. Since \(\gamma([a,b])\) is compact, there exists a partition \(P=\{a_0,\dots,a_k\}\) of \([a,b]\) and open balls \(B_1,\dots,B_k\) such that \(\omega\) is exact on \(B_i\) and \(\gamma([a_{i-1},a_i])\subseteq B_i\) for each \(i=1,\dots,k\). We define the **integral of \(\omega\) along \(\gamma\)** by $$

\int_\gamma \omega = \sum_{i=1}^k [g_i(\gamma(a_i))-g_i(\gamma(a_{i-1}))],

$$ where \(g_i\) is any potential for \(\omega\) on \(B_i\).

**Lemma 9.** *The above integral is well-defined.*

*Proof.* Suppose we are given different open balls \(\widetilde{B}_1,\dots,\widetilde{B}_k\) with the above properties, as well as corresponding potential functions \(\widetilde{g}_i\). For each \(i\) the functions \(g_i\) and \(\widetilde{g}_i\) are both potentials for \(\omega\) on the connected set \(B_i\cap \widetilde{B}_i\), so \(g_i-\widetilde{g}_i\) is constant and $$

g_i(\gamma(a_i))-g_i(\gamma(a_{i-1})) = \widetilde{g}_i(\gamma(a_i))-\widetilde{g}_i(\gamma(a_{i-1})).

$$ Next, we show that choosing a refinement of \(P\) does not change the value of the integral. Since any refinement of \(P\) can be obtained by adding finitely many points to \(P\), it suffices to show that adding a single point \(c\) to \(P\) does not change the value of the integral. Let $$

P'=\{a_0,\dots,a_{j-1},c,a_j,\dots,a_k\}

$$ where \(a_{j-1} < c < a_j\). We can use the same open balls and potential functions as before, with \(B_j\) being used for the intervals \([a_{j-1},c]\) and \([c,a_j]\). Then the term $$
g_j(\gamma(a_j))-g_j(\gamma(a_{j-1}))
$$ is replaced by $$
g_j(\gamma(a_j))-g_j(\gamma(c))+g_j(\gamma(c))-g_j(\gamma(a_{j-1})),
$$ which does not change the value of the integral. This completes the proof, for if \(Q\) is another partition of \([a,b]\) then taking the common refinement \(P\cup Q\) does not change the value of the integral. \(\square\)

It is easy to check that the fundamental theorem for line integrals (Theorem 2) still holds when \(\gamma\) is a path.
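To see the definition in action for a concrete locally exact form, consider \(\frac{1}{z}\,dz\) on \(\bbc\setminus\{0\}\): a branch of \(\log\) is a potential on any ball avoiding \(0\), so the sum of potential differences becomes a sum of principal logarithms of successive quotients. A minimal numerical sketch of my own (assuming the partition is fine enough that consecutive points have a quotient near \(1\)):

```python
import cmath

def integrate_dz_over_z(gamma, k=4096):
    """Integrate the locally exact form (1/z) dz along a continuous path
    gamma: [0, 1] -> C (avoiding 0).  On each small subarc a branch of log
    is a potential, and the difference of its values at the endpoints is
    the principal log of the quotient of the endpoints."""
    total = 0j
    for i in range(1, k + 1):
        total += cmath.log(gamma(i / k) / gamma((i - 1) / k))
    return total

loop = lambda s: cmath.exp(2j * cmath.pi * s)  # once around the unit circle
print(abs(integrate_dz_over_z(loop) - 2j * cmath.pi) < 1e-9)  # True
```

Only continuity of \(\gamma\) is used here; the same sum computes the integral along a polygonal or even nowhere-differentiable loop.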

Let \(X,Y\) be topological spaces and let \(f,g:X\to Y\) be continuous maps. A **homotopy** from \(f\) to \(g\) is a continuous map \(H:X\times[0,1]\to Y\) such that \(H(s,0)=f(s)\) and \(H(s,1)=g(s)\) for all \(s\in X\). Let \(\gamma_1,\gamma_2:[a,b]\to X\) be paths (i.e. continuous maps) and let \(H\) be a homotopy from \(\gamma_1\) to \(\gamma_2\). If \(t\mapsto H(a,t)\) and \(t\mapsto H(b,t)\) are constant, we say that \(H\) is a **path homotopy**. If there is a *path* homotopy from \(\gamma_1\) to \(\gamma_2\), we say that \(\gamma_1\) is **homotopic** to \(\gamma_2\).

**Theorem 10** (Line integrals along homotopic paths). *Let \(\omega\) be a locally exact form on \(U\). If \(\gamma_1,\gamma_2\) are paths in \(U\) that are homotopic, then $$\int_{\gamma_1} \omega = \int_{\gamma_2} \omega.$$*

*Proof.* Let \(H:[a,b]\times[0,1]\to U\) be a path homotopy from \(\gamma_1\) to \(\gamma_2\). Since \([a,b]\times[0,1]\) is compact, there are partitions \begin{align}

a&=s_0\le\cdots\le s_m=b, \\

0&=t_0\le\cdots\le t_n=1

\end{align} such that for each rectangle \(R_{ij}=[s_{i-1},s_i]\times[t_{j-1},t_j]\) there is an open ball \(B_{ij}\subseteq U\) with \(H(R_{ij})\subseteq B_{ij}\) on which \(\omega\) is exact. For each \(j=0,\dots,n\), let \(\gamma^{(j)}(s)=H(s,t_j)\). Since \(\gamma^{(0)}=\gamma_1\) and \(\gamma^{(n)}=\gamma_2\), it suffices to show that $$

\int_{\gamma^{(j)}} \omega = \int_{\gamma^{(j-1)}} \omega

$$ for each \(j=1,\dots,n\). Fix some \(j\). For each \(i=1,\dots,m\), let \(g_i\) be a potential for \(\omega\) on \(B_{ij}\), and set \(g_0=g_1\). For \(i\ge 2\), the functions \(g_i\) and \(g_{i-1}\) are both potentials for \(\omega\) on \(B_{ij}\cap B_{(i-1)j}\), so they differ by a constant there. Therefore $$

g_i(\gamma^{(j)}(s_{i-1}))-g_i(\gamma^{(j-1)}(s_{i-1}))=g_{i-1}(\gamma^{(j)}(s_{i-1}))-g_{i-1}(\gamma^{(j-1)}(s_{i-1}))

$$ for each \(i=1,\dots,m\). We have \begin{align}

& \int_{\gamma^{(j)}} \omega - \int_{\gamma^{(j-1)}} \omega \\

&= \sum_{i=1}^m [g_i(\gamma^{(j)}(s_{i}))-g_i(\gamma^{(j)}(s_{i-1}))-g_i(\gamma^{(j-1)}(s_{i}))+g_i(\gamma^{(j-1)}(s_{i-1}))] \\

&= \sum_{i=1}^m [g_i(\gamma^{(j)}(s_{i}))-g_i(\gamma^{(j-1)}(s_{i}))-(g_{i-1}(\gamma^{(j)}(s_{i-1}))-g_{i-1}(\gamma^{(j-1)}(s_{i-1})))] \\

&= g_m(\gamma^{(j)}(b))-g_m(\gamma^{(j-1)}(b))-(g_0(\gamma^{(j)}(a))-g_0(\gamma^{(j-1)}(a))) \\

&= 0

\end{align} since \(\gamma^{(j)}\) and \(\gamma^{(j-1)}\) have the same starting and ending points. \(\square\)

An open set \(U\) is said to be **simply connected** if it is connected and every closed path in \(U\) is homotopic to a point (i.e. homotopic to a constant path).

**Corollary 11.** *Every locally exact form on a simply connected open set is exact.*

*Proof.* Apply Theorem 10 and Theorem 3. \(\square\)

The next theorem will be used in part 3.

**Theorem 12.** *Every path in \(U\) is homotopic to a curve in \(U\).*

*Proof.* Let \(\gamma:[a,b]\to U\) be a path. Since \(\gamma([a,b])\) is compact, there exists a partition \(P=\{a_0,\dots,a_k\}\) of \([a,b]\) and open balls \(B_1,\dots,B_k\subseteq U\) such that \(\gamma([a_{i-1},a_i])\subseteq B_i\) for each \(i=1,\dots,k\). Define \(\gamma_i:[0,1]\to B_i\) by \(\gamma_i(s)=\gamma(a_{i-1}+s(a_i-a_{i-1}))\); then $$

\gamma|_{[a_{i-1},a_i]}(s)=\gamma_i\left(\frac{s-a_{i-1}}{a_i-a_{i-1}}\right).

$$ For each \(i\) there is a path homotopy \(H_i:[0,1]\times[0,1]\to B_i\) from \(\gamma_i\) to the straight line segment \(\eta_i\) from \(\gamma(a_{i-1})\) to \(\gamma(a_i)\), so we can define a path homotopy \(H:[a,b]\times[0,1]\to U\) by setting $$

H|_{[a_{i-1},a_i]\times[0,1]}(s,t)=H_i\left(\frac{s-a_{i-1}}{a_i-a_{i-1}},t\right).

$$ Therefore \(\gamma\) is homotopic to the curve \(s\mapsto H(s,1)\). \(\square\)

We now turn to singular homology, and extend the line integral to singular 1-chains.

Given \(p+1\) affinely independent points \(\{v_0,\dots,v_p\}\) in \(\mathbb{R}^n\), the **geometric \(p\)-simplex** with **vertices** \(v_0,\dots,v_p\) is the subset of \(\mathbb{R}^n\) defined by $$

[v_0,\dots,v_p]=\left\{\sum_{i=0}^p t_i v_i : 0\le t_i\le 1\:\mathrm{and}\:\sum_{i=0}^p t_i=1\right\}.

$$ The integer \(p\) is called the **dimension** of the simplex. The simplices whose vertices are nonempty subsets of \(\{v_0,\dots,v_p\}\) are called the **faces** of the simplex. The \((p-1)\)-dimensional faces are the **boundary faces** of the simplex. The **standard \(p\)-simplex** is $$

\triangle_p=[e_0,\dots,e_p]\subseteq\mathbb{R}^p,

$$ where \(e_0=0\) and \(e_i\) is the \(i\)th standard basis vector. For each \(i=0,\dots,p\), we define the **\(i\)th face map in \(\triangle_p\)** to be the unique affine map \(F_{i,p}:\triangle_{p-1}\to\triangle_p\) satisfying $$

F_{i,p}(e_0)=e_0,\quad\dots,\quad F_{i,p}(e_{i-1})=e_{i-1},\quad F_{i,p}(e_i)=e_{i+1},\quad\dots,\quad F_{i,p}(e_{p-1})=e_p.

$$ Let \(U\subseteq E\) be an open set. A continuous map \(\sigma:\triangle_p\to U\) is called a **singular \(p\)-simplex in \(U\)**. The **singular chain group of \(U\) in degree \(p\)**, denoted by \(C_p(U)\), is the free abelian group generated by all singular \(p\)-simplices in \(U\). An element of \(C_p(U)\) is called a **singular \(p\)-chain**. The **boundary** of a singular \(p\)-simplex \(\sigma\) is the singular \((p-1)\)-chain defined by $$

\partial\sigma=\sum_{i=0}^p (-1)^i \sigma\circ F_{i,p}.

$$ For example, if \(\sigma\) is the identity map on \(\triangle_2\) then \(\partial\sigma=\sigma\circ F_{0,2}-\sigma\circ F_{1,2}+\sigma\circ F_{2,2}\) where \begin{align}

(\sigma\circ F_{0,2})(e_0)=e_1,\quad & (\sigma\circ F_{0,2})(e_1)=e_2, \\

(\sigma\circ F_{1,2})(e_0)=e_0,\quad & (\sigma\circ F_{1,2})(e_1)=e_2, \\

(\sigma\circ F_{2,2})(e_0)=e_0,\quad & (\sigma\circ F_{2,2})(e_1)=e_1.

\end{align} The map \(\partial\) extends uniquely to a group homomorphism \(\partial_p:C_p(U)\to C_{p-1}(U)\), called the **singular boundary operator**. We write \(\partial=\partial_p\) when the dimension \(p\) is clear.

**Theorem 13.** *For any \(c\in C_p(U)\) we have \(\partial(\partial c)=0\).*
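For affine simplices, which are determined by their vertex lists, the boundary operator can be mechanized and Theorem 13 checked directly. A small sketch of my own (vertex tuples stand in for affine singular simplices; the names are illustrative):

```python
from collections import Counter

def boundary(chain):
    """Boundary operator on formal chains of affine simplices.  A chain is
    a Counter mapping vertex tuples to integer coefficients; the i-th face
    of a simplex is obtained by deleting its i-th vertex, with sign (-1)^i."""
    out = Counter()
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            out[simplex[:i] + simplex[i + 1:]] += (-1) ** i * coeff
    # keep only the faces with nonzero coefficient
    return Counter({s: c for s, c in out.items() if c != 0})

sigma = Counter({("v0", "v1", "v2"): 1})      # a single affine 2-simplex
print(boundary(sigma))                        # its three edges, with signs +1, -1, +1
print(boundary(boundary(sigma)) == Counter()) # True: the boundary of a boundary is 0
```

The first printed chain matches the \(\partial\sigma=\sigma\circ F_{0,2}-\sigma\circ F_{1,2}+\sigma\circ F_{2,2}\) example below, and the second line is Theorem 13 for this simplex.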

We say that a singular \(p\)-chain \(c\) is a **cycle** if \(\partial c=0\), and we say that \(c\) is a **boundary** if \(c=\partial b\) for some \(b\in C_{p+1}(U)\). Let \(Z_p(U)\) be the set of all singular \(p\)-cycles, and let \(B_p(U)\) be the set of all singular \(p\)-boundaries. Then \(Z_p(U)=\ker\partial_p\), and \(B_p(U)=\operatorname{im}\partial_{p+1}\). Since \(\partial_p\circ\partial_{p+1}=0\), we have \(B_p(U)\subseteq Z_p(U)\). The **\(p\)th singular homology group** of \(U\) is the quotient group $$

H_p(U)=Z_p(U)/B_p(U).

$$ The equivalence class in \(H_p(U)\) of a singular \(p\)-cycle \(c\) is called its **homology class**, and is denoted by \([c]\). If \([c]=[c']\), then we say that \(c\) and \(c'\) are **homologous**.

Since \(\triangle_1=[0,1]\), any singular 1-simplex \(\gamma\) is a path in \(U\). Conversely, any path \(\gamma:[a,b]\to U\) can be considered as a singular 1-simplex in \(U\) since we have the reparametrization \(\widehat{\gamma}:[0,1]\to U\) given by \(\widehat{\gamma}(t)=\gamma(a+t(b-a))\).

**Lemma 14.**

1. *If \(\gamma\) is a singular 1-simplex then \([-\gamma]=-[\gamma]\), where \(-\gamma\) is the singular 1-simplex defined by \((-\gamma)(t)=\gamma(1-t)\).*
2. *If \(\gamma_1,\gamma_2\) are singular 1-simplices with \(\gamma_1(1)=\gamma_2(0)\), then \([\gamma_1+\gamma_2]=[\gamma_1\cdot\gamma_2]\), where \(\gamma_1\cdot\gamma_2\) is the singular 1-simplex defined by $$(\gamma_1\cdot\gamma_2)(t)=\begin{cases}\gamma_1(2t), & 0\le t\le \tfrac{1}{2}, \\ \gamma_2(2t-1), & \tfrac{1}{2} < t \le 1. \end{cases}$$*
3. *If \(\gamma,\eta\) are singular 1-simplices that are (path) homotopic, then \([\gamma]=[\eta]\).*

Let us denote a singular 0-simplex \(\sigma\) with \(\sigma(0)=x\) by \(P(x)\). If \(c\) is a singular 1-simplex in \(U\) and \(\partial c=0\) then by definition we have \(P(c(1))-P(c(0))=0\), i.e. \(c\) is a closed path.

**Theorem 15.** *Every 1-cycle \(c\in Z_1(U)\) can be written in the form $$
[c]=\sum_{i=1}^k c_i[\gamma_i],
$$ where each \(\gamma_i:\triangle_1\to U\) is a closed path.*

*Proof.* Suppose not; then we can write $$

c=\partial b+\sum_{i=1}^k c_i\gamma_i+\sum_{i=1}^j c'_i\sigma_i \tag{*}

$$ where \(b\) is a singular 2-chain, \(c_i,c'_i\ne 0\) for all \(i\), each \(\gamma_i\) is a closed path, and each \(\sigma_i\) is a path that is not closed. We can assume that \((j,k)\) is the smallest pair for which \(c\) can be written in this form (where we take \((j,k) < (j',k')\) if \(j < j'\), or \(j=j'\) and \(k < k'\)), and that \(j\ge 1\). Then $$
0=\partial c=\sum_{i=1}^j c'_i[P(\sigma_i(1))-P(\sigma_i(0))].
$$ Suppose \(P(\sigma_i(1))=P(\sigma_1(0))\) for some \(i\ne 1\); then \([\sigma_i]+[\sigma_1]=[\sigma_i\cdot\sigma_1]\) by Lemma 14, so we can reduce either \(j\) or \(k\) in (*). Similarly, if \(P(\sigma_i(0))=P(\sigma_1(0))\) for some \(i\ne 1\), then \([\sigma_1]-[\sigma_i]=[(-\sigma_i)\cdot\sigma_1]\) and we can again reduce \(j\) or \(k\). Both of these cases contradict the minimality of \((j,k)\), so we must have \(P(\sigma_1(0))=P(\sigma_1(1))\) since the coefficient of \(P(\sigma_1(0))\) in the above sum is \(0\). This contradicts the fact that \(\sigma_1\) is not closed. \(\square\)

Let \(\omega\) be a locally exact form on \(U\) and let \(c\in C_1(U)\) be a 1-chain in \(U\). Write \(c=\sum_{i=1}^k c_i\sigma_i\) where \(c_i\in\mathbb{Z}\) and each \(\sigma_i\) is a 1-simplex in \(U\). We define the **integral of \(\omega\) over \(c\)** by $$

\int_c \omega = \sum_{i=1}^k c_i \int_{\sigma_i} \omega.

$$

**Lemma 16.** *Let \(\sigma:\triangle_2\to U\) be a 2-simplex. Then $$\int_{\partial\sigma} \omega=0$$ for every locally exact form \(\omega\) on \(U\).*

*Proof.* It is clear that $$

\int_{\partial\sigma} \omega=\int_{\sigma\circ\gamma} \omega

$$ where \(\gamma:[a,b]\to\triangle_2\) is a path traversing the boundary of \(\triangle_2\) counterclockwise. Since \(\triangle_2\) is convex, there is a path homotopy \(H:[a,b]\times[0,1]\to\triangle_2\) from \(\gamma\) to a point in \(\triangle_2\), so \(\sigma\circ H\) is a path homotopy from \(\sigma\circ\gamma\) to a point in \(\sigma(\triangle_2)\). Therefore $$\int_{\sigma\circ\gamma} \omega=0$$ by Theorem 10. \(\square\)

**Theorem 17.** *If \(c\) and \(c'\) are homologous 1-cycles in \(U\), then $$\int_c \omega=\int_{c'} \omega$$ for every locally exact form \(\omega\) on \(U\). If \(\omega\) and \(\widetilde{\omega}\) are locally exact forms on \(U\) that differ by an exact form, then $$\int_c \omega=\int_c \widetilde{\omega}$$ for every 1-cycle \(c\) in \(U\).*

*Proof.* If \(c\) and \(c'\) are homologous, then \(c-c'=\partial b\) for some 2-chain \(b\). Write \(b=\sum_{i=1}^k b_i\sigma_i\) where each \(\sigma_i\) is a 2-simplex. Then $$

\int_c \omega - \int_{c'} \omega = \int_{\partial b} \omega = \sum_{i=1}^k b_i \int_{\partial\sigma_i} \omega = 0

$$ by Lemma 16. Suppose \(\omega\) and \(\widetilde{\omega}\) are locally exact forms with \(\omega-\widetilde{\omega}=Df\) for some function \(f:U\to F\), and let \(c\) be a 1-cycle. By Theorem 15, \(c\) is homologous to \(\sum_{i=1}^k c_i\gamma_i\) where each \(\gamma_i\) is a closed path. Then $$

\int_c \omega - \int_c \widetilde{\omega} = \int_c Df = \sum_{i=1}^k c_i \int_{\gamma_i} Df = 0.

$$ \(\square\)

Next time, we will look at how the line integral, in this general formulation, is used in complex analysis.

Navigation: 1. Exact, conservative and closed forms | **2. Locally exact forms and singular homology** | 3. Applications to complex analysis

The line integral is a useful tool for working with vector fields on \(\mathbb{R}^n\), (co)vector fields on manifolds, and complex differentiable functions. However, it is often unclear how these different versions of the line integral are related to each other. In the next few posts, I will be presenting a very general form of the line integral along with the standard theorems and some basic applications. Familiarity with the Fréchet derivative is assumed.

Let \(E,F\) be (real or complex) Banach spaces and let \(U\subseteq E\) be an open set. We define a **form on \(U\)** to be a continuous map \(\omega:U\to L(E,F)\), where \(L(E,F)\) is the space of continuous linear maps from \(E\) to \(F\). A **path in \(U\)** is a continuous map \(\gamma:[a,b]\to U\). If \(\gamma(a)=\gamma(b)\) then we say that \(\gamma\) is a **closed path**. We say that a continuous map \(\gamma:[a,b]\to U\) is a **curve** if there is a partition \(a=a_0 < \cdots < a_k=b\) of \([a,b]\) such that \(\gamma|_{[a_{i-1},a_i]}\) is a continuously differentiable path for each \(i\). If \(\gamma(a)=\gamma(b)\) then we say that \(\gamma\) is a **closed curve**.

If \(\gamma:[a,b]\to U\) is a \(C^1\) path and \(\omega\) is a form on \(U\), we define the **(line) integral of \(\omega\) along \(\gamma\)** by $$

\int_\gamma \omega = \int_\gamma \omega(x)\,dx = \int_a^b \omega(\gamma(t))\gamma'(t)\,dt.

$$ (Note that \(\omega(\gamma(t))\) is a linear map from \(E\) to \(F\), and \(\gamma'(t)\in E\) since we are identifying \(\gamma'(t)\) with \(D\gamma(t)(1)\).) If \(\gamma\) is a curve with partition \(\{a_0,\dots,a_k\}\), we define the integral of \(\omega\) along \(\gamma\) by $$

\int_\gamma \omega = \int_\gamma \omega(x)\,dx = \sum_{i=1}^k \int_{\gamma|_{[a_{i-1},a_i]}} \omega.

$$

**Theorem 1** (Properties of line integrals). *Let \(\gamma:[a,b]\to U\) be a curve and let \(\omega,\eta\) be forms on \(U\).*

1. *For any scalars \(a,b\), $$\int_\gamma (a\omega+b\eta)=a\int_\gamma \omega+b\int_\gamma \eta.$$*
2. *For any Banach space \(G\) and any continuous linear map \(f:F\to G\), $$\int_\gamma f\omega = f\left(\int_\gamma \omega\right),$$ where \(f\omega:U\to L(E,G)\) is the form defined by \((f\omega)(x)u=f(\omega(x)u)\).*
3. *If \(\gamma\) is constant, then \(\int_\gamma \omega = 0\). If \(\gamma_1=\gamma|_{[a,c]}\) and \(\gamma_2=\gamma|_{[c,b]}\) where \(a < c < b\), then $$\int_\gamma \omega = \int_{\gamma_1} \omega + \int_{\gamma_2} \omega.$$*
4. *If \(\varphi:[c,d]\to [a,b]\) is a continuously differentiable function with \(\varphi(c)=a\) and \(\varphi(d)=b\), then $$\int_{\gamma\circ\varphi} \omega = \int_\gamma \omega.$$ If \(\varphi(c)=b\) and \(\varphi(d)=a\) (i.e. \(\varphi\) is decreasing), then $$\int_{\gamma\circ\varphi} \omega = -\int_\gamma \omega.$$*
5. *We have $$\left\vert\int_\gamma \omega\right\vert \le L(\gamma) \sup_{t\in[a,b]} |\omega(\gamma(t))|,$$ where \(L(\gamma)\) is the length of \(\gamma\), defined by $$L(\gamma)=\sum_{i=1}^k\int_{a_{i-1}}^{a_i} |\gamma'(t)|\,dt.$$*
6. *If \(\{\omega_n\}\) is a sequence of forms on \(U\) converging uniformly to a form \(\omega\), then $$\int_\gamma \omega = \lim_{n\to\infty}\int_\gamma \omega_n.$$*

*Proof.* By linearity, we can assume that \(\gamma\) is a \(C^1\) path. For (2), we have \begin{align}

\int_\gamma f\omega &= \int_a^b f(\omega(\gamma(t))\gamma'(t))\,dt \\

&= f\left(\int_a^b \omega(\gamma(t))\gamma'(t)\,dt\right) \\

&= f\left(\int_\gamma \omega\right)

\end{align} using one of the basic properties of the regulated integral or Bochner integral. For (4), we have \begin{align}

\int_{\gamma\circ\varphi} \omega &= \int_c^d \omega((\gamma\circ\varphi)(t))(\gamma\circ\varphi)'(t)\,dt \\

&= \int_c^d \omega(\gamma(\varphi(t)))\gamma'(\varphi(t))\varphi'(t)\,dt \\

&= \int_a^b \omega(\gamma(t))\gamma'(t)\,dt \\

&= \int_\gamma \omega

\end{align} by the change of variables formula. For (5), we have \begin{align}

\left\vert\int_\gamma \omega\right\vert &= \left\vert\int_a^b \omega(\gamma(t))\gamma'(t)\,dt\right\vert \\

&\le \int_a^b |\omega(\gamma(t))||\gamma'(t)|\,dt \\

&\le L(\gamma) \sup_{t\in[a,b]} |\omega(\gamma(t))|.

\end{align} For (6), we have \begin{align}

\left\vert\int_\gamma \omega_n - \int_\gamma \omega\right\vert &= \left\vert\int_\gamma (\omega_n-\omega)\right\vert \\

&\le L(\gamma) \sup_{t\in[a,b]} |(\omega_n-\omega)(\gamma(t))| \\

&\to 0

\end{align} as \(n\to\infty\). \(\square\)

**Example** (Line integrals in \(\mathbb{R}^n\)). If \(U\subseteq\mathbb{R}^n\) is an open set and \(F:U\to\mathbb{R}^n\) is a vector field, then its **associated form** \(\omega_F:U\to L(\mathbb{R}^n,\mathbb{R})\) is given by $$\omega_F(x)v=F(x)\cdot v,$$ where \(\cdot\) is the usual dot product on \(\mathbb{R}^n\). We can then define $$\int_\gamma F \cdot dr = \int_\gamma \omega_F = \int_a^b F(\gamma(t)) \cdot \gamma'(t)\,dt$$ for any curve \(\gamma:[a,b]\to U\).

**Example** (Complex line integrals). If \(U\subseteq\mathbb{C}\) is an open set and \(f:U\to\mathbb{C}\) is continuous, then its **associated form** \(\omega_f:U\to L(\mathbb{C},\mathbb{C})\) is given by $$\omega_f(z)w=wf(z).$$ We can then define $$\int_\gamma f(z)\,dz = \int_\gamma \omega_f = \int_a^b f(\gamma(t))\gamma'(t)\,dt$$ for any curve \(\gamma:[a,b]\to U\). Later, we will examine this type of line integral more closely.
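As a concrete instance of this example (a numerical sketch of my own, not part of the text), integrating the powers \(f(z)=z^n\) around the unit circle gives \(0\) for every \(n\) except \(n=-1\), which gives \(2\pi i\):

```python
import cmath

def complex_line_integral(f, gamma, dgamma, n=50000):
    """Riemann sum for the complex line integral of f along
    gamma: [0, 1] -> C with derivative dgamma."""
    return sum(f(gamma(k / n)) * dgamma(k / n) for k in range(n)) / n

gamma = lambda t: cmath.exp(2j * cmath.pi * t)               # the unit circle
dgamma = lambda t: 2j * cmath.pi * cmath.exp(2j * cmath.pi * t)

for power in (-2, -1, 0, 1):
    expected = 2j * cmath.pi if power == -1 else 0
    value = complex_line_integral(lambda z: z ** power, gamma, dgamma)
    print(power, abs(value - expected) < 1e-9)  # True for every power
```

For equally spaced sample points on the circle, the Riemann sum of each power vanishes exactly by symmetry, so the approximation is limited only by floating-point error.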

We have an important generalization of the fundamental theorem of calculus to line integrals.

**Theorem 2** (Fundamental theorem for line integrals). *Let \(f:U\to F\) be continuously differentiable and let \(\gamma:[a,b]\to U\) be a curve. Then $$\int_\gamma Df = f(\gamma(b))-f(\gamma(a)).$$*

*Proof.* First assume that \(\gamma\) is a \(C^1\) path. Then \begin{align}

\int_\gamma Df &= \int_a^b Df(\gamma(t))\gamma'(t)\,dt \\

&= \int_a^b (f\circ\gamma)'(t)\,dt \\

&= f(\gamma(b))-f(\gamma(a))

\end{align} by the fundamental theorem of calculus. If \(\gamma\) is a curve with partition \(\{a_0,\dots,a_k\}\) then $$

\int_\gamma Df = \sum_{i=1}^k [f(\gamma(a_i))-f(\gamma(a_{i-1}))] = f(\gamma(b))-f(\gamma(a)).

$$ \(\square\)

Note that in particular we have $$\int_\gamma Df = 0$$ for any closed curve \(\gamma\) in \(U\). If \(\omega\) is a form on \(U\), a function \(f:U\to F\) satisfying \(\omega=Df\) is called a **potential for \(\omega\)**. We say that a form \(\omega\) is **exact** if it has a potential function. Note that if \(U\) is connected then this result implies that \(f-g\) is constant if \(f,g\) are any two potentials for \(\omega\). If the integral of \(\omega\) along any closed curve is zero, then we say that \(\omega\) is **conservative**. It is easy to see that a form is conservative if and only if it is path-independent, in the sense that $$\int_\gamma \omega=\int_{\widetilde{\gamma}} \omega$$ whenever \(\gamma,\widetilde{\gamma}\) are curves with the same starting and ending points.
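The relationship between exactness and path-independence can be observed numerically. Below is a sketch (my own illustration) that integrates the form associated to \(F=\nabla f\) with \(f(x,y)=x^2y\) along a circular arc, compares with the potential difference from Theorem 2, and checks that the integral around the full closed circle vanishes:

```python
import math

def line_integral(F, gamma, dgamma, a, b, n=200000):
    """Left Riemann sum for the line integral of the form associated to the
    vector field F along the C^1 path gamma (with derivative dgamma) on [a, b]."""
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + k * h
        fx, fy = F(*gamma(t))
        dx, dy = dgamma(t)
        total += (fx * dx + fy * dy) * h
    return total

f = lambda x, y: x * x * y                    # a potential
grad_f = lambda x, y: (2 * x * y, x * x)      # F = grad f, so the form is exact
gamma = lambda t: (math.cos(t), math.sin(t))  # the unit circle
dgamma = lambda t: (-math.sin(t), math.cos(t))

arc = line_integral(grad_f, gamma, dgamma, 0.0, math.pi / 4)
print(abs(arc - (f(*gamma(math.pi / 4)) - f(*gamma(0.0)))) < 1e-4)          # True
print(abs(line_integral(grad_f, gamma, dgamma, 0.0, 2 * math.pi)) < 1e-6)   # True
```

The first check is Theorem 2; the second is the vanishing of an exact form's integral around a closed curve.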

**Theorem 3.** *A form is conservative if and only if it is exact.*

*Proof.* Theorem 2 shows that every exact form is conservative, so it remains to show that every conservative form is exact. Let \(\omega\) be a conservative form on \(U\). We can assume that \(U\) is connected, for otherwise we can obtain a potential function \(f_\alpha\) on each component \(U_\alpha\) of \(U\) and define a potential \(f:U\to F\) for \(\omega\) by setting \(f|_{U_\alpha}=f_\alpha\). Since \(\omega\) is path-independent, for any two points \(x,y\in U\) we can define $$\int_x^y \omega=\int_\gamma \omega$$ where \(\gamma\) is any curve from \(x\) to \(y\). Choose some \(x_0\in U\) and let $$f(x)=\int_{x_0}^x \omega;$$ we want to show that \(\omega=Df\). Let \(x\in U\) and choose \(r > 0\) so that the open ball of radius \(r\) around \(x\) is contained in \(U\). For all \(|h| < r\) the straight line from \(x\) to \(x+h\) is contained in \(U\), so \begin{align}
\frac{1}{|h|} |f(x+h)-f(x)-\omega(x)h| &= \frac{1}{|h|}\left\vert\int_{x_0}^{x+h} \omega - \int_{x_0}^x \omega - \omega(x)h\right\vert \\
&= \frac{1}{|h|}\left\vert\int_x^{x+h} \omega - \omega(x)h\right\vert \\
&= \frac{1}{|h|}\left\vert\int_0^1 \omega(x+th)h\,dt-\int_0^1 \omega(x)h\,dt\right\vert \\
&= \frac{1}{|h|}\left\vert\left(\int_0^1 [\omega(x+th)-\omega(x)]\,dt\right)h\right\vert \\
&\le \sup_{t\in[0,1]} |\omega(x+th)-\omega(x)| \\
&\to 0
\end{align} as \(h\to 0\) since \(\omega\) is continuous. \(\square\)

If \(\omega\) is a differentiable form on \(U\), we say that \(\omega\) is **closed** if \(D\omega(x)\in L(E,E;F)\) is symmetric for every \(x\in U\). If \(\omega=Df\) for some \(C^2\) map \(f:U\to F\) then \(D\omega(x)=D^2 f(x)\) is always symmetric, so we have the following result:

**Theorem 4.** *Every exact \(C^1\) form is closed.* \(\square\)

The converse of Theorem 4 holds for certain kinds of sets. A set \(A\) in a vector space is **star-shaped** with respect to \(x_0\in A\) if the line segment from \(x_0\) to any \(x\in A\) is contained in \(A\).

**Theorem 5** (Poincaré lemma). *Let \(U\subseteq E\) be a star-shaped open set. Every closed \(C^1\) form on \(U\) is exact.*

*Proof.* Suppose that \(U\) is star-shaped with respect to some \(x_0\in U\). By translating \(U\), we can assume that \(x_0=0\). Let \(\omega\) be a closed \(C^1\) form on \(U\). Define $$f(x)=\int_0^1 \omega(tx)x\,dt,$$ which is simply the integral of \(\omega\) along the straight line segment from \(0\) to \(x\). Using differentiation under the integral sign and the symmetry of \(D\omega(tx)\), \begin{align}

Df(x)u &= \int_0^1 [tD\omega(tx)(u,x)+\omega(tx)u]\,dt \\

&= \int_0^1 [tD\omega(tx)(x,u)+\omega(tx)u]\,dt \\

&= \left(\int_0^1 [tD\omega(tx)x+\omega(tx)]\,dt\right)u \\

&= \left(\int_0^1 \frac{d}{dt}(t\omega(tx))\,dt\right)u \\

&= \omega(x)u.

\end{align} \(\square\)
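The potential \(f(x)=\int_0^1 \omega(tx)x\,dt\) from the proof is directly computable. As a sketch (my own illustration, using a closed form on \(\mathbb{R}^2\) whose potential is known in advance), one can evaluate the integral by the midpoint rule and compare with that known potential:

```python
def poincare_potential(F, x, y, n=20000):
    """Evaluate f(x) = integral_0^1 omega(t x) x dt for the form associated
    to the vector field F on R^2, by the midpoint rule along the segment
    from the origin (the star center) to (x, y)."""
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        fx, fy = F(t * x, t * y)
        total += (fx * x + fy * y) / n
    return total

# A closed form on R^2: F = (2xy + y^3, x^2 + 3xy^2), which equals grad g.
F = lambda x, y: (2 * x * y + y ** 3, x ** 2 + 3 * x * y ** 2)
g = lambda x, y: x * x * y + x * y ** 3

print(abs(poincare_potential(F, 1.5, -0.5) - g(1.5, -0.5)) < 1e-6)  # True
```

Since \(g(0,0)=0\), the constructed potential agrees with \(g\) itself, not merely up to a constant.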

**Lemma 6.** *Let \(R=[a,b]\times[c,d]\) be a rectangle and let \(\omega\) be a closed (differentiable) form defined on an open subset of \(\mathbb{R}^2\) containing \(R\). Then $$\int_{\partial R} \omega=0,$$ where the integral is taken counterclockwise along the boundary of \(R\).*

*Proof.* First note that for each \(x\), the linear map \(D\omega(x)\) can itself be regarded as a form on \(\mathbb{R}^2\); its derivative at any \(y\in\mathbb{R}^2\) is the constant \(D(D\omega(x))(y)=D\omega(x)\), which is symmetric since \(\omega\) is closed. The Poincaré lemma therefore shows that the form \(D\omega(x)\) is exact for each \(x\in R\). Decompose \(R\) into the four rectangles \begin{align}

R_1 &= [a,\tfrac{a+b}{2}]\times[c,\tfrac{c+d}{2}], \\
R_2 &= [\tfrac{a+b}{2},b]\times[c,\tfrac{c+d}{2}], \\
R_3 &= [a,\tfrac{a+b}{2}]\times[\tfrac{c+d}{2},d], \\
R_4 &= [\tfrac{a+b}{2},b]\times[\tfrac{c+d}{2},d].

\end{align} Due to the orientations of \(\partial R_1,\dots,\partial R_4\), the inside boundaries cancel and we have $$

\int_{\partial R} \omega = \sum_{i=1}^4 \int_{\partial R_i} \omega \quad\mathrm{and}\quad \left\vert\int_{\partial R} \omega\right\vert \le \sum_{i=1}^4 \left\vert\int_{\partial R_i} \omega\right\vert,$$ so there is a rectangle \(R^{(1)}\) among \(R_1,\dots,R_4\) for which $$\left\vert\int_{\partial R^{(1)}} \omega\right\vert \ge \frac{1}{4}\left\vert\int_{\partial R} \omega\right\vert.$$ Replacing \(R\) with \(R^{(1)}\) in the above, we have a rectangle \(R^{(2)}\subseteq R^{(1)}\) such that $$\left\vert\int_{\partial R^{(2)}} \omega\right\vert \ge \frac{1}{4}\left\vert\int_{\partial R^{(1)}} \omega\right\vert.$$ Repeating this process, we obtain a sequence of rectangles $$R^{(1)}\supseteq R^{(2)}\supseteq \cdots$$ such that $$\left\vert\int_{\partial R^{(n+1)}} \omega\right\vert \ge \frac{1}{4}\left\vert\int_{\partial R^{(n)}} \omega\right\vert$$ for all \(n\). Therefore $$\left\vert\int_{\partial R^{(n)}} \omega\right\vert \ge \frac{1}{4^n}\left\vert\int_{\partial R} \omega\right\vert.$$ If \(L_0\) is the length of \(\partial R\) and \(L_n\) is the length of \(\partial R^{(n)}\), then \(L_n=L_0/2^n\), and if \(\operatorname{diam} R\) is the diameter of \(R\), then \(\operatorname{diam} R^{(n)} = (\operatorname{diam} R)/2^n\). Since every \(R^{(n)}\) is compact and \(\operatorname{diam}R^{(n)} \to 0\) as \(n\to\infty\), there is exactly one point $$x_0\in\bigcap_{n=1}^\infty R^{(n)}.$$ Since \(\omega\) is differentiable at \(x_0\), there exists a neighborhood \(U\) of \(x_0\) such that $$\omega(x)=\omega(x_0)+D\omega(x_0)(x-x_0)+\theta(x-x_0)$$ for every \(x\in U\), where \(\theta\) is a continuous function into \(L(E,F)\) satisfying $$\lim_{x\to x_0} \frac{\theta(x-x_0)}{|x-x_0|} = 0.\tag{*}$$ (This follows directly from the definition of the derivative.) For sufficiently large \(n\) we have \(R^{(n)}\subseteq U\) and \begin{align}

\int_{\partial R^{(n)}} \omega &= \int_{\partial R^{(n)}} \omega(x_0)\,dx + \int_{\partial R^{(n)}} D\omega(x_0)(x-x_0)\,dx + \int_{\partial R^{(n)}} \theta(x-x_0)\,dx \\

&= \int_{\partial R^{(n)}} [\omega(x_0)-D\omega(x_0)x_0]\,dx + \int_{\partial R^{(n)}} D\omega(x_0)x\,dx + \int_{\partial R^{(n)}} \theta(x-x_0)\,dx \\

&= \int_{\partial R^{(n)}} \theta(x-x_0)\,dx

\end{align} since \(x\mapsto [\omega(x_0)-D\omega(x_0)x_0]x\) is a primitive for the constant form \(\omega(x_0)-D\omega(x_0)x_0\) and \(D\omega(x_0)\) is exact. Therefore \begin{align}

\left\vert\int_{\partial R} \omega\right\vert &\le 4^n \left\vert\int_{\partial R^{(n)}} \omega\right\vert \\

&\le 4^n L_n \sup_{x\in R^{(n)}} |\theta(x-x_0)| \\

&\le 4^n L_n (\operatorname{diam} R^{(n)}) \sup_{x\in R^{(n)}\setminus\{x_0\}} \frac{|\theta(x-x_0)|}{|x-x_0|} \\

&\le L_0 (\operatorname{diam} R) \sup_{x\in R^{(n)}\setminus\{x_0\}} \frac{|\theta(x-x_0)|}{|x-x_0|} \\

&\to 0

\end{align} as \(n\to\infty\) by (*). \(\square\)

**Theorem 7** (Morera’s theorem). *If \(\omega\) is a form on a disc $$U=\{x\in\mathbb{R}^2:|x-x_0| < r\}$$ and $$\int_{\partial R} \omega = 0$$ for every rectangle \(R\) contained in \(U\), then \(\omega\) is exact.*

*Proof.* Define $$f(x)=\int_{x_0}^x \omega,$$ where the integral is taken along the sides of a rectangle whose opposite vertices are \(x_0\) and \(x\). By considering an appropriate rectangle we have $$f(x+h)-f(x)=\int_x^{x+h} \omega,$$ so we can use the argument in Theorem 3 to show that \(\omega=Df\). \(\square\)

**Corollary 8** (Goursat’s theorem). *If \(\omega\) is a closed (differentiable) form on a disc $$U=\{x\in\mathbb{R}^2:|x-x_0| < r\},$$ then \(\omega\) is exact.*

*Proof.* Apply Lemma 6 and Theorem 7. \(\square\)

A closed form \(\omega\) on \(U\) is said to be **locally exact** if for every \(x\in U\) there is a neighborhood \(V\subseteq U\) of \(x\) on which \(\omega\) is exact. The Poincaré lemma shows that every closed \(C^1\) form on an open subset of \(E\) is locally exact, and Corollary 8 shows that every closed form (not necessarily of class \(C^1\)) on an open subset of \(\mathbb{R}^2\) is locally exact. This latter result is fundamental to complex analysis.

Next time we will see why locally exact forms are important, and how we can interpret line integrals in the framework of singular homology.

Navigation: **1. Exact, conservative and closed forms** | 2. Locally exact forms and singular homology | 3. Applications to complex analysis

The (matrix) exponential can be used to solve certain types of first-order linear systems of ordinary differential equations with non-constant coefficients: not only can we solve \begin{align}x'(t)&=x(t)+y(t) \\ y'(t)&=x(t)+y(t),\end{align} but we can also solve \begin{align}x'(t)&=tx(t)+y(t) \\ y'(t)&=x(t)+ty(t).\end{align}

**Theorem 1.** *Let \(I\) be a connected open subset of \(\mathbb{R}\), let \(f:I\to E\) be differentiable and suppose that \(f'(t)=A(t)f(t)+b(t)\) for all \(t\in I\), where \(A:I\to L(E)\) and \(b:I\to E\) are continuous. Assume that \(A(s)A(t)=A(t)A(s)\) for all \(s,t\in I\). Choose any \(a\in I\). Then there exists some \(c\in E\) such that $$
f(t)=e^{\widehat{A}(t)} \left( c+\int_a^t e^{-\widehat{A}(s)}b(s)\,ds \right),
$$ where $$
\widehat{A}(t)=\int_a^t A(s)\,ds.
$$*

*Proof.* With \(a\) as in the statement, let $$

g(t)=e^{-\widehat{A}(t)}f(t)-\int_a^t e^{-\widehat{A}(s)}b(s)\,ds.

$$ It is easy to verify that \(A(t)=\widehat{A}'(t)\) commutes with \(\widehat{A}(t)\) for every \(t\in I\), so \begin{align}

g'(t) &= e^{-\widehat{A}(t)}(-\widehat{A}'(t))f(t)+e^{-\widehat{A}(t)}f'(t)-e^{-\widehat{A}(t)}b(t) \\

&= -e^{-\widehat{A}(t)}A(t)f(t)+e^{-\widehat{A}(t)}(A(t)f(t)+b(t))-e^{-\widehat{A}(t)}b(t) \\

&= 0

\end{align} and \(g\) is constant (see this theorem). \(\square\)

Suppose we have a (non-homogeneous) linear system of ODEs with *constant* coefficients. Then \(A(t)\) is constant and \(\widehat{A}(t)=tA\), so $$

f(t)=e^{tA} \left( c+\int_a^t e^{-sA}b(s)\,ds \right).

$$ If the system is homogeneous, then \(b=0\) and we simply have $$

f(t)=e^{tA}c.

$$ On the other hand, if \(E=\mathbb{R}\) then we just have the “integrating factor” method for solving ODEs of the form $$

x'(t)+P(t)x(t)=Q(t).

$$
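As a quick worked example of the scalar case (my own, not from the post): take \(E=\mathbb{R}\), \(A=1\), \(b(s)=1\) and \(a=0\), i.e. the ODE \(x'(t)=x(t)+1\). Then $$
x(t)=e^{t}\left(c+\int_0^t e^{-s}\,ds\right)=e^{t}\left(c+1-e^{-t}\right)=(c+1)e^{t}-1,
$$ where \(c=x(0)\), and indeed \(x'(t)=(c+1)e^{t}=x(t)+1\).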

**Example.** Consider the system \begin{align}

x'(t) &= 2tx(t)+y(t) \\

y'(t) &= x(t)+2ty(t),

\end{align} which can be written as $$

\begin{bmatrix}x'(t) \\ y'(t)\end{bmatrix} = \begin{bmatrix}2t & 1 \\ 1 & 2t\end{bmatrix}\begin{bmatrix}x(t) \\ y(t)\end{bmatrix} = A(t) \begin{bmatrix}x(t) \\ y(t)\end{bmatrix}.

$$ It is easy to see that \(A(t)A(s)=A(s)A(t)\) for all \(s,t\in\mathbb{R}\). We have $$\widehat{A}(t) = \begin{bmatrix}t^2 & t \\ t & t^2\end{bmatrix}.$$ Therefore the general solution is \begin{align}

\begin{bmatrix}x(t) \\ y(t)\end{bmatrix} &= e^{\widehat{A}(t)}\begin{bmatrix}C_1 \\ C_2\end{bmatrix} \\

&= \begin{bmatrix}

\frac{1}{2}C_{1}(e^{(t+1)t}+e^{(t-1)t})+\frac{1}{2}C_{2}(e^{(t+1)t}-e^{(t-1)t}) \\

\frac{1}{2}C_{1}(e^{(t+1)t}-e^{(t-1)t})+\frac{1}{2}C_{2}(e^{(t+1)t}+e^{(t-1)t})

\end{bmatrix}.

\end{align}
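The computed exponential can be double-checked numerically (a sketch of my own, not part of the post): integrate the system with a classical RK4 stepper and compare against the closed-form solution for \(C_1=1\), \(C_2=0\).

```python
import math

# Numerical sanity check: integrate the system with matrix
# A(t) = [[2t, 1], [1, 2t]] using classical RK4 and compare with the
# closed-form solution exp(Â(t))·(C1, C2) for C1 = 1, C2 = 0.

def closed_form(t, c1, c2):
    u = math.exp(t * t + t)
    v = math.exp(t * t - t)
    return (0.5 * c1 * (u + v) + 0.5 * c2 * (u - v),
            0.5 * c1 * (u - v) + 0.5 * c2 * (u + v))

def rk4(c1, c2, t_end=1.0, n=2000):
    def deriv(t, x, y):
        return 2 * t * x + y, x + 2 * t * y
    x, y, t = c1, c2, 0.0
    h = t_end / n
    for _ in range(n):
        k1 = deriv(t, x, y)
        k2 = deriv(t + h / 2, x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = deriv(t + h / 2, x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = deriv(t + h, x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return x, y

print(rk4(1.0, 0.0))              # both ≈ (4.1945, 3.1945)
print(closed_form(1.0, 1.0, 0.0))
```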

We now assume that \(V\) is a finite-dimensional real vector space.

**Lemma 1.** *Let \(U\) be the open set of invertible operators in \(L(V)\). Then \(\det:U\to\mathbb{R}\) is differentiable, and $$
D\det(\tau)u=\det(\tau)\operatorname{tr}(\tau^{-1}u).
$$*

*Proof.* Let \(\iota\) be the identity map on \(V\). It is easy to see that \(\det\) is differentiable, by choosing a basis for \(V\). Let \(f(s)=\det(\tau+su)=\det(\tau)\det(\iota+s\tau^{-1}u)\). Then $$

D\det(\tau)u=f'(0)=\det(\tau)\operatorname{tr}(\tau^{-1}u)

$$ since \(\det(\iota+s\tau^{-1}u)\) is a polynomial in \(s\) where the coefficient of \(s\) is \(\operatorname{tr}(\tau^{-1}u)\). \(\square\)

As a simple application of Theorem 1, we prove a well-known formula:

**Theorem 2.** *For all \(\tau\in L(V)\), we have $$
\det(\exp(\tau))=\exp(\operatorname{tr}(\tau)).
$$*

*Proof.* The exponential function is a map \(\exp:L(V)\to U\). Define \(\gamma:\mathbb{R}\to L(V)\) by \(\gamma(s)=s\tau\). Since \(\tau\) commutes with \(s\tau\), we have \(D\exp(s\tau)\tau=\exp(s\tau)\tau\). Then \begin{align}

(\det\circ\exp\circ\gamma)'(s) &= D\det(\exp(s\tau))(D\exp(s\tau)\tau) \\

&= \det(\exp(s\tau))\operatorname{tr}(\exp(s\tau)^{-1}\exp(s\tau)\tau) \\

&= (\det\circ\exp\circ\gamma)(s)\operatorname{tr}(\tau).

\end{align} Therefore $$

(\det\circ\exp\circ\gamma)(s)=\exp(s\operatorname{tr}(\tau))

$$ by Theorem 1, since \((\det\circ\exp\circ\gamma)(0)=1\). Setting \(s=1\) gives the result. \(\square\)
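Theorem 2 is easy to check numerically for a concrete matrix (a Python sketch of my own; the matrix is arbitrary):

```python
import math

# Verify det(exp(τ)) = exp(tr(τ)) for τ = [[1, 2], [3, 4]] by summing
# the exponential power series for 2×2 matrices.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mexp(a, terms=60):
    result = [[1.0, 0.0], [0.0, 1.0]]  # running sum, starts at the identity
    term = [[1.0, 0.0], [0.0, 1.0]]    # current term a^n / n!
    for n in range(1, terms):
        term = mul(term, [[a[i][j] / n for j in range(2)] for i in range(2)])
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

tau = [[1.0, 2.0], [3.0, 4.0]]
e = mexp(tau)
det_e = e[0][0] * e[1][1] - e[0][1] * e[1][0]
print(det_e, math.exp(1.0 + 4.0))  # both ≈ e^5 ≈ 148.41
```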

In this post we prove the formula $$
D\exp(x)u = \int_0^1 e^{sx}ue^{(1-s)x}\,ds.

$$ This intriguing formula expresses the derivative of the exponential map on a Banach algebra as an integral. In particular, using “matrix calculus” notation we have the formula $$

d\exp(X)= \int_0^1 e^{sX}(dX)e^{(1-s)X}\,ds

$$ when \(X\) is a square matrix. As we’ll see, this is not too hard to prove.

We will assume that all Banach algebras are unital.

**Definition.** Let \(E\) be a Banach algebra. If \(x\in E\), the **exponential** of \(x\) is $$

\exp(x)=e^x=\sum_{n=0}^\infty \frac{x^n}{n!},

$$ which converges absolutely for all \(x\). Thus we have a map \(\exp:E\to E\), called the **exponential function**.

The usual rules for power series apply. In particular, we can differentiate term by term inside the radius of convergence, which is infinite for the exponential function. Before doing this, we need a lemma (the proof is at the end of the post).

**Lemma 1** (Power rule). *Let \(E\) be a Banach algebra, let \(n\ge 0\), and let \(p_n:E\to E\) be the map defined by \(p_n(x)=x^n\). Then \(Dp_n(x)\) is the linear map given by $$
Dp_n(x)u=\sum_{k=0}^{n-1} x^kux^{n-k-1}.
$$ In particular, if \(E\) is commutative then \(Dp_n(x)\) is given by $$
Dp_n(x)u=nx^{n-1}u.
$$*

Applying this lemma to the power series for \(\exp\) gives $$

D\exp(x)u=\sum_{n=1}^\infty \frac{1}{n!} \sum_{k=0}^{n-1}x^kux^{n-k-1}.\tag{*}

$$ Notice that when \(u\) commutes with \(x\), we have \(D\exp(x)u=\exp(x)u=u\exp(x)\). We also need another lemma (again, the proof is at the end of the post):

**Lemma 2.** *For \(m,n\ge 0\), we have $$\int_0^1 s^m(1-s)^n\,ds=\frac{m!n!}{(m+n+1)!}.$$*

Now we can evaluate the integral given at the beginning of the post. We have \begin{align}

\int_0^1 e^{sx}ue^{(1-s)x}\,ds &= \int_0^1 \sum_{m=0}^\infty\frac{s^m x^m}{m!}u\sum_{n=0}^\infty\frac{(1-s)^n x^n}{n!}\,ds \\

&= \sum_{m=0}^\infty\sum_{n=0}^\infty\frac{x^m u x^n}{m!n!}\int_0^1 s^m(1-s)^n\,ds \\

&= \sum_{m=0}^\infty\sum_{n=0}^\infty\frac{x^m u x^n}{(m+n+1)!},

\end{align} which is clearly equal to (*). (The rearrangements are valid because the infinite series are all absolutely convergent.) This proves our formula!
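The formula can also be verified numerically (my own sketch, not from the post), by comparing midpoint quadrature of the integral against a central finite difference of \(\exp\) for 2×2 matrices:

```python
# Numerical check of D exp(x)u = ∫₀¹ e^{sx} u e^{(1-s)x} ds for 2×2
# matrices, comparing midpoint quadrature of the integral against the
# central difference (exp(x + hu) - exp(x - hu)) / (2h).

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def axpy(a, b, c):
    # entrywise a + c*b
    return [[a[i][j] + c * b[i][j] for j in range(2)] for i in range(2)]

def scale(a, c):
    return [[c * a[i][j] for j in range(2)] for i in range(2)]

def mexp(a, terms=40):
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mul(term, scale(a, 1.0 / n))
        result = axpy(result, term, 1.0)
    return result

x = [[0.2, 1.0], [-0.5, 0.3]]
u = [[1.0, 2.0], [0.0, -1.0]]

# central finite difference approximation of D exp(x)u
h = 1e-5
fd = scale(axpy(mexp(axpy(x, u, h)), mexp(axpy(x, u, -h)), -1.0), 1 / (2 * h))

# midpoint quadrature of the integral formula
steps = 1000
integral = [[0.0, 0.0], [0.0, 0.0]]
for k in range(steps):
    s = (k + 0.5) / steps
    piece = mul(mul(mexp(scale(x, s)), u), mexp(scale(x, 1 - s)))
    integral = axpy(integral, piece, 1.0 / steps)

err = max(abs(fd[i][j] - integral[i][j]) for i in range(2) for j in range(2))
print(err)  # small
```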

*Proof of Lemma 1.* We use induction on \(n\). The case \(n=0\) is clear, so suppose that the result holds for \(n-1\). Since \(p_n(x)=xp_{n-1}(x)\), the product rule shows that \(Dp_n(x)\) maps \(u\) to \begin{align}

up_{n-1}(x)+xDp_{n-1}(x)u &= ux^{n-1}+x\sum_{k=0}^{n-2}x^kux^{n-k-2} \\

&= ux^{n-1}+\sum_{k=1}^{n-1}x^kux^{n-k-1} \\

&= \sum_{k=0}^{n-1} x^kux^{n-k-1}.

\end{align} \(\square\)

*Proof of Lemma 2.* We use induction on \(n\). The case \(n=0\) is obvious. Suppose the formula holds for \(n-1\). We have \begin{align}

\int_0^1 s^m(1-s)^n\,ds &= \int_0^1 s^m(1-s)^{n-1}(1-s)\,ds \\

&= \int_0^1 s^m(1-s)^{n-1}\,ds- \int_0^1 s^{m+1}(1-s)^{n-1}\,ds \\

&= \frac{m!(n-1)!}{(m+n)!} - \frac{(m+1)!(n-1)!}{(m+n+1)!} \\

&= \frac{m!(n-1)!(m+n+1)-(m+1)!(n-1)!}{(m+n+1)!} \\

&= \frac{m!n!}{(m+n+1)!}.

\end{align} \(\square\)
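Lemma 2 is also easy to verify numerically (a quick sketch of my own, not from the post):

```python
from math import factorial

# Midpoint-rule approximation of ∫₀¹ s^m (1-s)^n ds, compared with the
# closed form m! n! / (m+n+1)! from Lemma 2.

def beta_int(m, n, steps=200000):
    total = 0.0
    for k in range(steps):
        s = (k + 0.5) / steps
        total += s ** m * (1 - s) ** n / steps
    return total

m, n = 3, 4
print(beta_int(m, n), factorial(m) * factorial(n) / factorial(m + n + 1))
```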