
Selected Solutions to Loring W. Tu’s An Introduction to Manifolds (2nd ed.)

Prepared by Richard G. Ligo

Chapter 1

Problem 1.1: Let $g : \mathbb{R} \to \mathbb{R}$ be defined by
$$g(t) = \int_0^t f(s)\,ds = \int_0^t s^{1/3}\,ds = \frac{3}{4}t^{4/3}.$$
Show that the function $h(x) = \int_0^x g(t)\,dt$ is $C^2$ but not $C^3$ at $x = 0$.

Proof: Note that $h''(x) = g'(x) = f(x) = x^{1/3}$. As $f(x)$ is $C^0$, it follows that $h''(x)$ is $C^0$, so $h(x)$ is $C^2$. Furthermore, we have that
$$h'''(x) = g''(x) = f'(x) = \begin{cases} \tfrac{1}{3}x^{-2/3} & \text{for } x \neq 0, \\ \text{undefined} & \text{for } x = 0. \end{cases}$$
This implies that $h'''(x)$ is not $C^0$ at $x = 0$, so $h(x)$ is not $C^3$. $\square$
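The computation above can also be verified symbolically; the sketch below uses SymPy, with variable names of my own choosing (not part of the solution).

```python
import sympy as sp

x, s, t = sp.symbols('x s t', positive=True)

# g(t) = integral of f(s) = s^(1/3) from 0 to t, giving (3/4) t^(4/3)
g = sp.integrate(s**sp.Rational(1, 3), (s, 0, t))
# h(x) = integral of g(t) from 0 to x, giving (9/28) x^(7/3)
h = sp.integrate(g.subs(t, s), (s, 0, x))

h2 = sp.diff(h, x, 2)  # second derivative: x^(1/3), continuous at 0
h3 = sp.diff(h, x, 3)  # third derivative: (1/3) x^(-2/3), unbounded as x -> 0+

print(sp.simplify(h2))          # x**(1/3)
print(sp.limit(h3, x, 0, '+'))  # oo
```

The diverging one-sided limit of the third derivative is exactly the failure of $C^3$ regularity at the origin.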

Problem 1.2: Let $f(x)$ be the function on $\mathbb{R}$ defined by
$$f(x) = \begin{cases} e^{-1/x} & \text{for } x > 0, \\ 0 & \text{for } x \leq 0. \end{cases}$$

(a) Show by induction that for $x > 0$ and $k \geq 0$, the $k$th derivative $f^{(k)}(x)$ is of the form $p_{2k}(1/x)e^{-1/x}$ for some polynomial $p_{2k}(y)$ of degree $2k$ in $y$.

Proof: Base case: Let $k = 0$. Then we immediately have $f^{(0)}(x) = e^{-1/x} = p_0(1/x)e^{-1/x}$, where $p_0(y) = 1$. Thus, the base case holds.

Inductive step: Assume that $f^{(k)}(x) = p_{2k}(1/x)e^{-1/x}$ for some polynomial $p_{2k}(y)$ of degree $2k$. Writing $p_{2k}(1/x) = a_{2k}(1/x)^{2k} + \cdots$, consider the following:
$$\begin{aligned}
f^{(k+1)}(x) &= \frac{d}{dx}\bigl(f^{(k)}(x)\bigr) && \text{equivalent notation} \\
&= \frac{d}{dx}\bigl(p_{2k}(1/x)e^{-1/x}\bigr) && \text{inductive hypothesis} \\
&= \frac{d}{dx}\bigl(p_{2k}(1/x)\bigr)e^{-1/x} + p_{2k}(1/x)\,\frac{d}{dx}\bigl(e^{-1/x}\bigr) && \text{product rule} \\
&= \Bigl(-2k\,a_{2k}\frac{1}{x^{2k+1}} + \cdots\Bigr)e^{-1/x} + \Bigl(a_{2k}\frac{1}{x^{2k}} + \cdots\Bigr)\frac{1}{x^2}\,e^{-1/x} && \text{evaluate derivatives} \\
&= \Bigl(a_{2k}\frac{1}{x^{2k+2}} - 2k\,a_{2k}\frac{1}{x^{2k+1}} + \cdots\Bigr)e^{-1/x} && \text{distribute and factor}
\end{aligned}$$
Thus $f^{(k+1)}(x) = p_{2k+2}(1/x)e^{-1/x}$ for some polynomial $p_{2k+2}(y)$ of degree $2k + 2$, as desired, so the induction holds. $\square$

(b) Prove that $f$ is $C^\infty$ on $\mathbb{R}$ and that $f^{(k)}(0) = 0$ for all $k \geq 0$.

Proof: In order to show that $f$ is $C^\infty$, we must show that $f^{(k)}$ is continuous for all $k \geq 0$. As both $0$ and $e^{-1/x}$ are $C^\infty$ away from $x = 0$, we must simply show that
$$\lim_{x \to 0^+} \frac{d^k}{dx^k}e^{-1/x} = 0.$$
We know from (a) that $\frac{d^k}{dx^k}e^{-1/x} = p_{2k}(1/x)e^{-1/x}$, where $p_{2k}(y)$ is a polynomial of even degree. The exponential decay of $e^{-1/x}$ toward $0$ dominates any growth coming from the rational expressions in $p_{2k}(1/x)$, so the limit as $x$ approaches $0$ from the right is indeed $0$. If one desires to see this more concretely, it can be accomplished (albeit somewhat messily) by repeated application of l'Hôpital's rule. As $k$ was arbitrary, $f$ is indeed $C^\infty$. It also follows immediately from the work above and the definition of $f$ that $f^{(k)}(0) = 0$ for all $k \geq 0$. $\square$

Problem 1.3: Let $U \subset \mathbb{R}^n$ and $V \subset \mathbb{R}^n$ be open subsets. A $C^\infty$ map $F : U \to V$ is called a diffeomorphism if it is bijective and has a $C^\infty$ inverse $F^{-1} : V \to U$.

(a) Show that the function $f : (-\pi/2, \pi/2) \to \mathbb{R}$, $f(x) = \tan x$, is a diffeomorphism.

Proof: Recall that $\tan x = \frac{\sin x}{\cos x}$, and note that $\cos x > 0$ for all $x \in (-\pi/2, \pi/2)$. Let $k$ be arbitrary; then by repeated application of the quotient rule we have that
$$\frac{d^k}{dx^k}\tan x = \frac{p_k(x)}{\cos^{2k}(x)},$$
where $p_k(x)$ is built from products and sums of $\sin x$ and $\cos x$. As $\cos x > 0$ on $(-\pi/2, \pi/2)$, the denominator is always nonzero. Furthermore, as $\sin x$ and $\cos x$ are $C^\infty$, we know that $p_k(x)$ is always defined. Thus $\frac{d^k}{dx^k}\tan x$ is defined on $(-\pi/2, \pi/2)$ for all $k$, so $\tan x$ is $C^\infty$. It can immediately be seen that $\tan x$ is bijective on $(-\pi/2, \pi/2)$ and has the $C^\infty$ inverse $\tan^{-1} x$. As a result, $\tan x$ is a diffeomorphism on $(-\pi/2, \pi/2)$. $\square$

(b) Let $a, b$ be real numbers with $a < b$. Find a linear function $h : (a, b) \to (-1, 1)$, thus proving that any two finite open intervals are diffeomorphic.

Proof: Define
$$h(x) = \frac{2}{b-a}\,x + \Bigl(1 - \frac{2b}{b-a}\Bigr),$$
and note that $h$ is a linear function such that $h(a) = -1$ and $h(b) = 1$. As $h$ is linear, we immediately have that $h$ is infinitely differentiable, bijective, and invertible (with linear inverse), so $h$ is a diffeomorphism from $(a, b)$ to $(-1, 1)$. $\square$

(c) The exponential function $\exp : \mathbb{R} \to (0, \infty)$ is a diffeomorphism. Use it to show that for any real numbers $a$ and $b$, the intervals $\mathbb{R}$, $(a, \infty)$, and $(-\infty, b)$ are diffeomorphic.

Proof: Define the function $\exp_a : \mathbb{R} \to (a, \infty)$ by $x \mapsto \exp(x) + a$, and note that $\exp_a(\mathbb{R}) = (a, \infty)$. As adding the real number $a$ to $\exp$ does not affect differentiability, invertibility, or bijectivity (onto the new image), $\exp_a$ is a diffeomorphism from $\mathbb{R}$ to $(a, \infty)$. Likewise, we may define the function $-\exp_b : \mathbb{R} \to (-\infty, b)$ by $x \mapsto -\exp(x) + b$, and note that $-\exp_b(\mathbb{R}) = (-\infty, b)$. As multiplying $\exp$ by $-1$ and adding the real number $b$ do not affect differentiability, invertibility, or bijectivity (onto the new image), $-\exp_b$ is a diffeomorphism from $\mathbb{R}$ to $(-\infty, b)$. As compositions of diffeomorphisms are diffeomorphisms, it follows that $-\exp_b \circ (\exp_a)^{-1} : (a, \infty) \to (-\infty, b)$ is a diffeomorphism, so the intervals $(a, \infty)$ and $(-\infty, b)$ are diffeomorphic. $\square$
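A quick numeric check of the maps in parts (b) and (c); the sample endpoints $a = 2$, $b = 5$ are my own choice.

```python
import math

a, b = 2.0, 5.0  # sample endpoints with a < b (my choice)

def h(x):
    # linear diffeomorphism (a, b) -> (-1, 1) from part (b)
    return 2 * x / (b - a) + (1 - 2 * b / (b - a))

def exp_a(x):
    # diffeomorphism R -> (a, oo) from part (c)
    return math.exp(x) + a

def neg_exp_b(x):
    # diffeomorphism R -> (-oo, b) from part (c)
    return -math.exp(x) + b

print(h(a), h(b))                 # approximately -1.0 and 1.0
print(exp_a(0) > a)               # True: image lies above a
print(neg_exp_b(0) < b)           # True: image lies below b
```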

Problem 1.4: Show that the map
$$f : \Bigl(-\frac{\pi}{2}, \frac{\pi}{2}\Bigr)^n \to \mathbb{R}^n, \qquad f(x_1, \ldots, x_n) = (\tan x_1, \ldots, \tan x_n),$$
is a diffeomorphism.

Proof: As $\tan x$ is bijective, $f$ is bijective coordinate-wise, so $f$ itself is bijective. We know that if each of the component functions of $f$ is $C^\infty$, then $f$ is $C^\infty$. As the component functions of $f$ are identical in form, we need only show that an arbitrary component function $f_k$ is $C^\infty$. An arbitrary $r$th partial derivative of $f_k$ is given by
$$\frac{\partial^r f_k}{\partial x_{i_1} \cdots \partial x_{i_r}} = \begin{cases} 0 & \text{if } i_j \neq k \text{ for some } j, \\[2pt] \dfrac{\partial^r(\tan x_k)}{(\partial x_k)^r} & \text{if } i_j = k \text{ for all } j. \end{cases}$$
We know from Problem 1.3 that $\frac{\partial^r(\tan x_k)}{(\partial x_k)^r}$ is continuous. As $0$ is also continuous, it follows that any $r$th partial derivative of $f_k$ is continuous. Because our choices of $r$ and $k$ were arbitrary, $f_k$ is $C^\infty$, so $f$ is $C^\infty$.

The inverse of $f$ is given by $f^{-1}(x_1, \ldots, x_n) = (\tan^{-1}(x_1), \ldots, \tan^{-1}(x_n))$. As in the argument above, we must only show that an arbitrary component function $f_k^{-1}$ is $C^\infty$. An arbitrary $r$th partial derivative of $f_k^{-1}$ is given by
$$\frac{\partial^r f_k^{-1}}{\partial x_{i_1} \cdots \partial x_{i_r}} = \begin{cases} 0 & \text{if } i_j \neq k \text{ for some } j, \\[2pt] \dfrac{\partial^r(\tan^{-1} x_k)}{(\partial x_k)^r} & \text{if } i_j = k \text{ for all } j. \end{cases}$$
Recall that $\frac{d}{dx}\tan^{-1} x = \frac{1}{1+x^2}$, and note that this rational function is infinitely differentiable. As $0$ is also continuous, any $r$th partial derivative of $f_k^{-1}$ is continuous. Because our choices of $r$ and $k$ were arbitrary, $f_k^{-1}$ is $C^\infty$, so $f^{-1}$ is $C^\infty$. As the above properties hold, it follows that $f$ is a diffeomorphism. $\square$

Problem 1.5: Let $0 = (0, 0)$ be the origin and $B(0, 1)$ the open unit disk in $\mathbb{R}^2$. To find a diffeomorphism between $B(0, 1)$ and $\mathbb{R}^2$, we identify $\mathbb{R}^2$ with the $xy$-plane in $\mathbb{R}^3$ and introduce the lower open hemisphere
$$S : x^2 + y^2 + (z-1)^2 = 1, \quad z < 1,$$
in $\mathbb{R}^3$ as an intermediate space (Figure 1.4). First note that the map
$$f : B(0, 1) \to S, \qquad (a, b) \mapsto \bigl(a, b, 1 - \sqrt{1 - a^2 - b^2}\bigr),$$
is a bijection.

(a) The stereographic projection $g : S \to \mathbb{R}^2$ from $(0, 0, 1)$ is the map that sends a point $(a, b, c) \in S$ to the intersection of the line through $(0, 0, 1)$ and $(a, b, c)$ with the $xy$-plane. Show that it is given by
$$(a, b, c) \mapsto \Bigl(\frac{a}{1-c}, \frac{b}{1-c}\Bigr), \qquad c = 1 - \sqrt{1 - a^2 - b^2},$$
with inverse
$$(u, v) \mapsto \Bigl(\frac{u}{\sqrt{1+u^2+v^2}},\; \frac{v}{\sqrt{1+u^2+v^2}},\; 1 - \frac{1}{\sqrt{1+u^2+v^2}}\Bigr).$$

Proof: Let $\gamma(t)$ parameterize the line through $(0, 0, 1)$ and $(a, b, c)$; it is given by $\gamma(t) = (ta, tb, 1 + t(c-1))$. As we wish to determine the first two coordinates when $\gamma(t)$ meets the $xy$-plane, we set $0 = 1 + t(c-1)$ to determine the value of $t$ at which this occurs. Solving for $t$ yields

$t = \frac{1}{1-c}$, which when substituted back into $\gamma$ gives $\frac{a}{1-c}$ and $\frac{b}{1-c}$ for the first two coordinates, as desired.

Now let $u = \frac{a}{1-c}$ and $v = \frac{b}{1-c}$. Consider the following manipulation of the expression in the third coordinate:
$$\begin{aligned}
1 - \frac{1}{\sqrt{1+u^2+v^2}} &= 1 - \frac{1}{\sqrt{1 + \bigl(\frac{a}{1-c}\bigr)^2 + \bigl(\frac{b}{1-c}\bigr)^2}} && \text{substitution} \\
&= 1 - \frac{1}{\sqrt{\frac{(1-c)^2 + a^2 + b^2}{(1-c)^2}}} && \text{common denominator} \\
&= 1 - \frac{1-c}{\sqrt{(1-c)^2 + a^2 + b^2}} && \text{simplify, using } 1 - c > 0 \\
&= 1 - \frac{1-c}{1} && (1-c)^2 + a^2 + b^2 = 1 \text{ on } S \\
&= c && \text{simplify}
\end{aligned}$$
Thus the third coordinate equals $c$, as desired. We may then solve the equation $1 - \frac{1}{\sqrt{1+u^2+v^2}} = c$ for $v^2$, obtaining
$$v^2 = \frac{1}{(1-c)^2} - u^2 - 1.$$
Substituting this expression for $v^2$ into the first coordinate allows us to perform the following manipulation:
$$\begin{aligned}
\frac{u}{\sqrt{1+u^2+v^2}} &= \frac{\frac{a}{1-c}}{\sqrt{1 + u^2 + \frac{1}{(1-c)^2} - u^2 - 1}} && \text{substitution} \\
&= \frac{\frac{a}{1-c}}{\sqrt{\frac{1}{(1-c)^2}}} = \frac{\frac{a}{1-c}}{\frac{1}{1-c}} && \text{simplify} \\
&= a
\end{aligned}$$
Thus the first coordinate equals $a$, as desired. Similarly, the second coordinate equals $b$. As a result, we have verified that the given inverse is correct. $\square$

(b) Composing the two maps $f$ and $g$ gives the map
$$h = g \circ f : B(0, 1) \to \mathbb{R}^2, \qquad h(a, b) = \Bigl(\frac{a}{\sqrt{1-a^2-b^2}},\; \frac{b}{\sqrt{1-a^2-b^2}}\Bigr).$$

Find a formula for $h^{-1}(u, v) = (f^{-1} \circ g^{-1})(u, v)$ and conclude that $h$ is a diffeomorphism of the open disk $B(0, 1)$ with $\mathbb{R}^2$.

Proof: We know that $h^{-1}(a, b)$ is given by the following:
$$\begin{aligned}
h^{-1}(a, b) &= (g \circ f)^{-1}(a, b) = (f^{-1} \circ g^{-1})(a, b) = f^{-1}\bigl(g^{-1}(a, b)\bigr) \\
&= f^{-1}\Bigl(\frac{a}{\sqrt{1+a^2+b^2}},\; \frac{b}{\sqrt{1+a^2+b^2}},\; 1 - \frac{1}{\sqrt{1+a^2+b^2}}\Bigr) \\
&= \Bigl(\frac{a}{\sqrt{1+a^2+b^2}},\; \frac{b}{\sqrt{1+a^2+b^2}}\Bigr).
\end{aligned}$$
As the composition of two diffeomorphisms is a diffeomorphism and $f$ and $g$ are diffeomorphisms, $h$ is also a diffeomorphism. $\square$

(c) Generalize part (b) to $\mathbb{R}^n$.

Proof: All of the methods used in parts (a) and (b) extend to higher dimensions by simply adding more coordinates to our equations and functions: the open unit ball $B(0, 1) \subset \mathbb{R}^n$ maps onto the lower open hemisphere of the unit sphere in $\mathbb{R}^{n+1}$ centered at $(0, \ldots, 0, 1)$, and stereographic projection from $(0, \ldots, 0, 1)$ carries that hemisphere onto $\mathbb{R}^n$. In this manner we can generalize this problem to $\mathbb{R}^n$. $\square$

Problem 1.6: Prove that if $f : \mathbb{R}^2 \to \mathbb{R}$ is $C^\infty$, then there exist $C^\infty$ functions $g_{11}, g_{12}, g_{22}$ on $\mathbb{R}^2$ such that
$$f(x, y) = f(0, 0) + \frac{\partial f}{\partial x}(0, 0)\,x + \frac{\partial f}{\partial y}(0, 0)\,y + x^2 g_{11}(x, y) + xy\,g_{12}(x, y) + y^2 g_{22}(x, y).$$

Proof: As $f$ is $C^\infty$ on $\mathbb{R}^2$ (which is star-shaped), we may apply Taylor's theorem with remainder at $(0, 0)$ to obtain $f(x, y) = f(0, 0) + x f_1(x, y) + y f_2(x, y)$, where $f_1$ and $f_2$ are $C^\infty$ and satisfy $f_1(0, 0) = \frac{\partial f}{\partial x}(0, 0)$ and $f_2(0, 0) = \frac{\partial f}{\partial y}(0, 0)$.

As $f_1$ and $f_2$ are $C^\infty$ on $\mathbb{R}^2$, we may apply Taylor's theorem with remainder again to obtain
$$f_1(x, y) = f_1(0, 0) + x f_{11}(x, y) + y f_{12}(x, y) \quad \text{and} \quad f_2(x, y) = f_2(0, 0) + x f_{21}(x, y) + y f_{22}(x, y).$$
The work above then allows us to perform the following manipulation:
$$\begin{aligned}
f(x, y) &= f(0, 0) + x f_1(x, y) + y f_2(x, y) \\
&= f(0, 0) + x\bigl(f_1(0, 0) + x f_{11}(x, y) + y f_{12}(x, y)\bigr) + y\bigl(f_2(0, 0) + x f_{21}(x, y) + y f_{22}(x, y)\bigr) \\
&= f(0, 0) + x f_1(0, 0) + y f_2(0, 0) + x^2 f_{11}(x, y) + xy\bigl(f_{12}(x, y) + f_{21}(x, y)\bigr) + y^2 f_{22}(x, y) \\
&= f(0, 0) + \frac{\partial f}{\partial x}(0, 0)\,x + \frac{\partial f}{\partial y}(0, 0)\,y + x^2 f_{11}(x, y) + xy\bigl(f_{12}(x, y) + f_{21}(x, y)\bigr) + y^2 f_{22}(x, y).
\end{aligned}$$
Note that $f_{11}$, $f_{12} + f_{21}$, and $f_{22}$ are all $C^\infty$, as $f$ is $C^\infty$. By then defining $g_{11} = f_{11}$, $g_{12} = f_{12} + f_{21}$, and $g_{22} = f_{22}$, we have the desired result
$$f(x, y) = f(0, 0) + \frac{\partial f}{\partial x}(0, 0)\,x + \frac{\partial f}{\partial y}(0, 0)\,y + x^2 g_{11}(x, y) + xy\,g_{12}(x, y) + y^2 g_{22}(x, y),$$
where $g_{11}$, $g_{12}$, and $g_{22}$ are all $C^\infty$. $\square$

Problem 1.7: Let $f : \mathbb{R}^2 \to \mathbb{R}$ be a $C^\infty$ function with $f(0, 0) = \frac{\partial f}{\partial x}(0, 0) = \frac{\partial f}{\partial y}(0, 0) = 0$. Define
$$g(t, u) = \begin{cases} \dfrac{f(t, tu)}{t} & \text{for } t \neq 0, \\ 0 & \text{for } t = 0. \end{cases}$$
Prove that $g(t, u)$ is $C^\infty$ for $(t, u) \in \mathbb{R}^2$.

Proof: Let $x = t$ and $y = tu$, then apply the result of Problem 1.6 to $f(t, tu)$ to obtain
$$f(t, tu) = f(0, 0) + \frac{\partial f}{\partial x}(0, 0)\,t + \frac{\partial f}{\partial y}(0, 0)\,tu + t^2 g_{11}(t, tu) + t^2 u\,g_{12}(t, tu) + t^2 u^2 g_{22}(t, tu),$$
where $g_{11}, g_{12}, g_{22}$ are $C^\infty$ functions on $\mathbb{R}^2$. As we are given $f(0, 0) = \frac{\partial f}{\partial x}(0, 0) = \frac{\partial f}{\partial y}(0, 0) = 0$, this simplifies to
$$f(t, tu) = t^2 g_{11}(t, tu) + t^2 u\,g_{12}(t, tu) + t^2 u^2 g_{22}(t, tu).$$
It then follows that for $t \neq 0$,
$$\frac{f(t, tu)}{t} = t\,g_{11}(t, tu) + tu\,g_{12}(t, tu) + tu^2\,g_{22}(t, tu).$$
As $g_{11}, g_{12}, g_{22}$ are $C^\infty$, the right-hand side is $C^\infty$ on all of $\mathbb{R}^2$; it agrees with $g(t, u)$ for $t \neq 0$, and it vanishes at $t = 0$, where $g(0, u) = 0$ by definition. Thus $g(t, u)$ is $C^\infty$ for all $(t, u)$. $\square$

Problem 1.8: Define $f : \mathbb{R} \to \mathbb{R}$ by $f(x) = x^3$. Show that $f$ is a bijective $C^\infty$ map, but that $f^{-1}$ is not $C^\infty$.

Proof: We immediately know that $f$ is bijective (this can be seen graphically). Additionally,

$f$ is $C^\infty$, as repeated differentiation yields
$$x^3 \xrightarrow{\,d/dx\,} 3x^2 \xrightarrow{\,d/dx\,} 6x \xrightarrow{\,d/dx\,} 6 \xrightarrow{\,d/dx\,} 0 \xrightarrow{\,d/dx\,} 0 \cdots.$$
The inverse of $f$ can be determined to be $f^{-1}(x) = x^{1/3}$. This function is still bijective and continuous, but it is not $C^\infty$. This can be seen by observing that
$$(f^{-1})'(x) = \frac{1}{3x^{2/3}},$$
which is undefined at $x = 0$; indeed $f^{-1}$ fails to be differentiable there. $\square$

Chapter 2

Problem 2.1: Let $X$ be the vector field $x\,\partial/\partial x + y\,\partial/\partial y$ and $f(x, y, z)$ the function $x^2 + y^2 + z^2$ on $\mathbb{R}^3$. Compute $Xf$.

Proof: $Xf$ is given by the following computation:
$$Xf = x\frac{\partial f}{\partial x} + y\frac{\partial f}{\partial y} = x(2x) + y(2y) = 2x^2 + 2y^2. \qquad \square$$
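The computation can be reproduced symbolically; a short SymPy sketch:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2

# Apply the vector field X = x d/dx + y d/dy to f:
Xf = x * sp.diff(f, x) + y * sp.diff(f, y)
print(sp.expand(Xf))  # 2*x**2 + 2*y**2
```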

Problem 2.3: Let $D$ and $D'$ be derivations at $p$ in $\mathbb{R}^n$, and $c \in \mathbb{R}$. Prove that

(a) the sum $D + D'$ is a derivation at $p$.

Proof: As $D$ and $D'$ are derivations, both are linear and satisfy the Leibniz rule. It is then immediate that $D + D'$ is linear, so it remains to show that $D + D'$ satisfies the Leibniz rule. Consider the following:
$$\begin{aligned}
(D + D')(fg) &= D(fg) + D'(fg) && \text{definition of the sum} \\
&= D(f)\,g(p) + f(p)\,D(g) + D'(f)\,g(p) + f(p)\,D'(g) && D \text{ and } D' \text{ are derivations} \\
&= \bigl(D(f) + D'(f)\bigr)g(p) + f(p)\bigl(D(g) + D'(g)\bigr) && \text{factor} \\
&= (D + D')(f)\,g(p) + f(p)\,(D + D')(g) && \text{definition of the sum}
\end{aligned}$$
Thus $D + D'$ satisfies the Leibniz rule, so $D + D'$ is a derivation at $p$. $\square$

(b) The scalar multiple $cD$ is a derivation at $p$.

Proof: As $D$ is a derivation, $D$ is linear, so $cD$ is also linear. It then remains to show that $cD$ satisfies the Leibniz rule. By the fact that $D$ is a derivation, we have
$$(cD)(fg) = c\bigl(D(fg)\bigr) = c\bigl(D(f)\,g(p) + f(p)\,D(g)\bigr) = (cD)(f)\,g(p) + f(p)\,(cD)(g),$$
so it follows that $cD$ is a derivation at $p$. $\square$

Problem 2.4: Let $A$ be an algebra over a field $K$. If $D_1$ and $D_2$ are derivations of $A$, show that $D_1 \circ D_2$ is not necessarily a derivation (it is if $D_1$ or $D_2 = 0$), but $D_1 \circ D_2 - D_2 \circ D_1$ is always a derivation of $A$.

Proof: For a derivation $D$ of an algebra, the Leibniz rule reads $D(fg) = D(f)g + fD(g)$. To see that a composition need not satisfy it, take $A = C^\infty(\mathbb{R})$, $D_1 = D_2 = \frac{d}{dx}$, and $f = g = x$. By way of contradiction, assume the Leibniz rule holds for $D_1 \circ D_2$. Then
$$(D_1 \circ D_2)(fg) = \frac{d}{dx}\Bigl(\frac{d}{dx}(x^2)\Bigr) = \frac{d}{dx}(2x) = 2,$$
while
$$(D_1 \circ D_2)(f)\,g + f\,(D_1 \circ D_2)(g) = \Bigl(\frac{d^2}{dx^2}x\Bigr)x + x\Bigl(\frac{d^2}{dx^2}x\Bigr) = 0 + 0 = 0,$$
implying that $2 = 0$, which is a contradiction. Thus, the Leibniz rule fails for this $D_1 \circ D_2$, so $D_1 \circ D_2$ is not in general a derivation.

We can immediately see that $D_1 \circ D_2 - D_2 \circ D_1$ is linear, as both $D_1$ and $D_2$ are linear. Using this property and the Leibniz rule for $D_1$ and $D_2$, we observe:
$$\begin{aligned}
(D_1 \circ D_2 - D_2 \circ D_1)(fg) &= D_1\bigl(D_2(f)g + fD_2(g)\bigr) - D_2\bigl(D_1(f)g + fD_1(g)\bigr) \\
&= D_1(D_2(f))g + D_2(f)D_1(g) + D_1(f)D_2(g) + fD_1(D_2(g)) \\
&\quad - D_2(D_1(f))g - D_1(f)D_2(g) - D_2(f)D_1(g) - fD_2(D_1(g)) \\
&= \bigl(D_1(D_2(f)) - D_2(D_1(f))\bigr)g + f\bigl(D_1(D_2(g)) - D_2(D_1(g))\bigr) \\
&= (D_1 \circ D_2 - D_2 \circ D_1)(f)\,g + f\,(D_1 \circ D_2 - D_2 \circ D_1)(g).
\end{aligned}$$
Thus the Leibniz rule holds for $D_1 \circ D_2 - D_2 \circ D_1$, so $D_1 \circ D_2 - D_2 \circ D_1$ is indeed a derivation of $A$. $\square$
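The bracket can also be checked symbolically. The sketch below uses the pair $D_1 = \frac{d}{dx}$ and $D_2 = x\frac{d}{dx}$ acting on polynomials; this particular pair is my own illustrative choice, chosen because its commutator is nonzero.

```python
import sympy as sp

x = sp.symbols('x')

D1 = lambda h: sp.diff(h, x)             # D1 = d/dx
D2 = lambda h: x * sp.diff(h, x)         # D2 = x * d/dx
comp = lambda h: D1(D2(h))               # D1 o D2
comm = lambda h: D1(D2(h)) - D2(D1(h))   # [D1, D2]

f, g = x, x

# Leibniz defect D(fg) - (D(f) g + f D(g)), as a polynomial identity:
defect_comp = sp.expand(comp(f * g) - (comp(f) * g + f * comp(g)))
defect_comm = sp.expand(comm(f * g) - (comm(f) * g + f * comm(g)))

print(defect_comp)  # 2*x  (nonzero: the composition is not a derivation)
print(defect_comm)  # 0    (the commutator satisfies the Leibniz rule here)
```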


Chapter 3

Problem 3.1: Let $e_1, \ldots, e_n$ be a basis for a vector space $V$ and let $\alpha^1, \ldots, \alpha^n$ be its dual basis in $V^\vee$. Suppose $[g_{ij}] \in \mathbb{R}^{n \times n}$ is an $n \times n$ matrix. Define a bilinear function $f : V \times V \to \mathbb{R}$ by
$$f(v, w) = \sum_{1 \leq i, j \leq n} g_{ij}\,v^i w^j \quad \text{for } v = \sum v^i e_i \text{ and } w = \sum w^j e_j \text{ in } V.$$
Describe $f$ in terms of the tensor products of $\alpha^i$ and $\alpha^j$, $1 \leq i, j \leq n$.

Proof: We know that $v^i = \alpha^i(v)$ and $w^j = \alpha^j(w)$, and we can use this fact to see that
$$f(v, w) = \sum_{1 \leq i, j \leq n} g_{ij}\,v^i w^j = \sum_{1 \leq i, j \leq n} g_{ij}\,\alpha^i(v)\,\alpha^j(w) = \sum_{1 \leq i, j \leq n} g_{ij}\,(\alpha^i \otimes \alpha^j)(v, w),$$
as desired. $\square$
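A small numeric illustration of this identity; the random sample data below is my own.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
G = rng.standard_normal((n, n))  # the matrix [g_ij]
v, w = rng.standard_normal(n), rng.standard_normal(n)

def alpha(i):
    # dual basis covector: picks out the i-th coordinate
    return lambda u: u[i]

def tensor(s, t):
    # (s (x) t)(u1, u2) = s(u1) * t(u2)
    return lambda u1, u2: s(u1) * t(u2)

f_direct = sum(G[i, j] * v[i] * w[j] for i in range(n) for j in range(n))
f_tensor = sum(G[i, j] * tensor(alpha(i), alpha(j))(v, w)
               for i in range(n) for j in range(n))

print(abs(f_direct - f_tensor) < 1e-12)  # True
```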

Problem 3.2: (a) Let $V$ be a vector space of dimension $n$ and $f : V \to \mathbb{R}$ a nonzero linear functional. Show that $\dim \ker f = n - 1$. A linear subspace of $V$ of dimension $n - 1$ is called a hyperplane in $V$.

Proof: Recall the rank-nullity identity $\dim \ker f + \dim \operatorname{im} f = \dim V$. As $\dim \mathbb{R} = 1$ and $f$ is nonzero, we immediately have $\dim \operatorname{im} f = 1$. Because $\dim V = n$, it then follows immediately that $\dim \ker f = n - 1$. $\square$

(b) Show that a nonzero linear functional on a vector space $V$ is determined up to a multiplicative constant by its kernel, a hyperplane in $V$. In other words, if $f$ and $g : V \to \mathbb{R}$ are nonzero linear functionals and $\ker f = \ker g$, then $g = cf$ for some constant $c \in \mathbb{R}$.

Proof: Let $f, g : V \to \mathbb{R}$ be nonzero linear functionals and assume that $\ker f = \ker g$. As $f$ is nonzero, there exists some $v_0' \in V$ such that $f(v_0') = b \neq 0$. Let $v_0 = \frac{v_0'}{b}$; it then follows from the linearity of $f$ that $f(v_0) = f\bigl(\frac{v_0'}{b}\bigr) = \frac{1}{b}f(v_0') = \frac{1}{b}\,b = 1$. As we have assumed that $\ker f = \ker g$ and $f(v_0) \neq 0$, it follows that $g(v_0) \neq 0$.

Now let $v \in V$, $a = f(v)$, and $w = v - av_0$. Because $f$ is linear we have
$$f(w) = f(v - av_0) = f(v) - af(v_0) = a - a(1) = 0,$$
so $w \in \ker f$ and therefore $w \in \ker g$. As a result, we know from the linearity of $g$ and the work above that
$$0 = g(w) = g(v - av_0) = g(v) - ag(v_0) = g(v) - f(v)\,g(v_0).$$
Thus $g(v) = g(v_0)f(v)$ for all $v \in V$. We may then let $c = g(v_0)$ to observe $g(v) = cf(v)$, as desired. $\square$

Problem 3.3: Let $V$ be a vector space of dimension $n$ with basis $e_1, \ldots, e_n$. Let $\alpha^1, \ldots, \alpha^n$ be the dual basis for $V^\vee$. Show that a basis for the space $L_k(V)$ of $k$-linear functions on $V$ is $\{\alpha^{i_1} \otimes \cdots \otimes \alpha^{i_k}\}$ for all multi-indices $(i_1, \ldots, i_k)$ (not just the strictly ascending multi-indices, as for $A_k(V)$). In particular, this shows that $\dim L_k(V) = n^k$.

Proof: Let $T : V^k \to \mathbb{R}$ be $k$-linear and write $T(e_{j_1}, \ldots, e_{j_k}) = T_{j_1, \ldots, j_k}$. Now construct the function
$$T' = \sum_{1 \leq i_1, \ldots, i_k \leq n} T_{i_1, \ldots, i_k}\,\alpha^{i_1} \otimes \cdots \otimes \alpha^{i_k}$$

and consider the following:
$$\begin{aligned}
T'(e_{j_1}, \ldots, e_{j_k}) &= \sum_{1 \leq i_1, \ldots, i_k \leq n} T_{i_1, \ldots, i_k}\,(\alpha^{i_1} \otimes \cdots \otimes \alpha^{i_k})(e_{j_1}, \ldots, e_{j_k}) && \text{definition of } T' \\
&= \sum_{1 \leq i_1, \ldots, i_k \leq n} T_{i_1, \ldots, i_k}\,\alpha^{i_1}(e_{j_1}) \cdots \alpha^{i_k}(e_{j_k}) && \text{definition of } \otimes \\
&= \sum_{1 \leq i_1, \ldots, i_k \leq n} T_{i_1, \ldots, i_k}\,\delta^{i_1}_{j_1} \cdots \delta^{i_k}_{j_k} && \text{definition of } \delta^i_j \\
&= T_{j_1, \ldots, j_k} && \text{evaluate sum} \\
&= T(e_{j_1}, \ldots, e_{j_k}) && \text{definition of } T
\end{aligned}$$
Thus $T'(e_{j_1}, \ldots, e_{j_k}) = T(e_{j_1}, \ldots, e_{j_k})$. As $e_{j_1}, \ldots, e_{j_k}$ was an arbitrary list of elements from the basis $e_1, \ldots, e_n$ and both functions are $k$-linear, it follows that $T' = T$. As a result, $\{\alpha^{i_1} \otimes \cdots \otimes \alpha^{i_k}\}$ spans $L_k(V)$.

Now suppose $0 = \sum_{1 \leq i_1, \ldots, i_k \leq n} T_{i_1, \ldots, i_k}\,(\alpha^{i_1} \otimes \cdots \otimes \alpha^{i_k})$, and consider the following:
$$\begin{aligned}
0 &= \sum_{1 \leq i_1, \ldots, i_k \leq n} T_{i_1, \ldots, i_k}\,(\alpha^{i_1} \otimes \cdots \otimes \alpha^{i_k})(e_{j_1}, \ldots, e_{j_k}) && \text{evaluate at a point} \\
&= \sum_{1 \leq i_1, \ldots, i_k \leq n} T_{i_1, \ldots, i_k}\,\alpha^{i_1}(e_{j_1}) \cdots \alpha^{i_k}(e_{j_k}) && \text{definition of } \otimes \\
&= \sum_{1 \leq i_1, \ldots, i_k \leq n} T_{i_1, \ldots, i_k}\,\delta^{i_1}_{j_1} \cdots \delta^{i_k}_{j_k} && \text{definition of } \delta^i_j \\
&= T_{j_1, \ldots, j_k} && \text{evaluate sum}
\end{aligned}$$
Thus $0 = T_{j_1, \ldots, j_k}$. As $j_1, \ldots, j_k$ was arbitrary, we have $T_{i_1, \ldots, i_k} = 0$ for all $i_1, \ldots, i_k$, so $\{\alpha^{i_1} \otimes \cdots \otimes \alpha^{i_k}\}$ is a linearly independent set.

As a result, $\{\alpha^{i_1} \otimes \cdots \otimes \alpha^{i_k}\}$ is a basis for $L_k(V)$. $\square$

Problem 3.4: Let $f$ be a $k$-tensor on a vector space $V$. Prove that $f$ is alternating if and only if $f$ changes sign whenever two successive arguments are interchanged:
$$f(\ldots, v_{i+1}, v_i, \ldots) = -f(\ldots, v_i, v_{i+1}, \ldots) \quad \text{for } i = 1, \ldots, k - 1.$$

Proof: ($\Rightarrow$) Assume that $f$ is alternating. Note that the interchange of the $i$th and $(i+1)$th entries is given by the transposition $\sigma = (i \;\; i{+}1)$, and further note that $\operatorname{sgn} \sigma = -1$. Now consider the following:
$$\begin{aligned}
f(v_1, \ldots, v_{i+1}, v_i, \ldots, v_k) &= (\operatorname{sgn} \sigma)\, f(v_1, \ldots, v_i, v_{i+1}, \ldots, v_k) && f \text{ is alternating} \\
&= -f(v_1, \ldots, v_i, v_{i+1}, \ldots, v_k) && \text{evaluate } \operatorname{sgn} \sigma
\end{aligned}$$
as desired.

($\Leftarrow$) Assume that $f$ changes sign whenever two successive arguments are interchanged. Let $\sigma \in S_k$ be an arbitrary permutation, and recall that $\sigma$ can be written as a product of transpositions of adjacent elements; denote these transpositions $\tau_1, \ldots, \tau_r$. We then have two cases:

Case 1: $r$ is even. Then previous results in algebra tell us that $\sigma$ is even, so $\operatorname{sgn} \sigma = 1$. Furthermore, applying our assumption once per adjacent transposition gives
$$f(v_{\sigma(1)}, \ldots, v_{\sigma(k)}) = (-1)^r f(v_1, \ldots, v_k) = f(v_1, \ldots, v_k).$$

Case 2: $r$ is odd. Then $\sigma$ is odd, so $\operatorname{sgn} \sigma = -1$, and likewise
$$f(v_{\sigma(1)}, \ldots, v_{\sigma(k)}) = (-1)^r f(v_1, \ldots, v_k) = -f(v_1, \ldots, v_k).$$

In either case we have $f(v_{\sigma(1)}, \ldots, v_{\sigma(k)}) = (\operatorname{sgn} \sigma)\, f(v_1, \ldots, v_k)$, so it follows that $f$ is alternating. $\square$

Problem 3.5: Let $f$ be a $k$-tensor on a vector space $V$. Prove that $f$ is alternating if and only if $f(v_1, \ldots, v_k) = 0$ whenever two of the vectors $v_1, \ldots, v_k$ are equal.

Proof: ($\Rightarrow$) Assume that $f$ is alternating, and let $v_i = v_j$ with $i \neq j$ and $\sigma = (i \; j)$. Note that $\operatorname{sgn} \sigma = -1$. Because $f$ is alternating we have
$$f(\ldots, v_i, \ldots, v_j, \ldots) = (\operatorname{sgn} \sigma)\, f(\ldots, v_j, \ldots, v_i, \ldots) = -f(\ldots, v_j, \ldots, v_i, \ldots).$$
However, as $v_i = v_j$, we also immediately have $f(\ldots, v_i, \ldots, v_j, \ldots) = f(\ldots, v_j, \ldots, v_i, \ldots)$, which implies $f(\ldots, v_i, \ldots, v_j, \ldots) = -f(\ldots, v_i, \ldots, v_j, \ldots)$. As a result, $f(\ldots, v_i, \ldots, v_j, \ldots) = 0$.

($\Leftarrow$) Assume that $f(v_1, \ldots, v_k) = 0$ whenever two of the vectors $v_1, \ldots, v_k$ are equal. It follows from this assumption that $f(v_1, \ldots, v_i + v_{i+1}, v_{i+1} + v_i, \ldots, v_k) = 0$. Now consider the following manipulation:
$$\begin{aligned}
0 &= f(v_1, \ldots, v_i + v_{i+1}, v_{i+1} + v_i, \ldots, v_k) && \text{given} \\
&= f(v_1, \ldots, v_i, v_{i+1} + v_i, \ldots, v_k) + f(v_1, \ldots, v_{i+1}, v_{i+1} + v_i, \ldots, v_k) && f \text{ is } k\text{-linear} \\
&= f(v_1, \ldots, v_i, v_{i+1}, \ldots, v_k) + f(v_1, \ldots, v_i, v_i, \ldots, v_k) && f \text{ is } k\text{-linear} \\
&\quad + f(v_1, \ldots, v_{i+1}, v_{i+1}, \ldots, v_k) + f(v_1, \ldots, v_{i+1}, v_i, \ldots, v_k) \\
&= f(v_1, \ldots, v_i, v_{i+1}, \ldots, v_k) + 0 + 0 + f(v_1, \ldots, v_{i+1}, v_i, \ldots, v_k) && \text{given} \\
&= f(v_1, \ldots, v_i, v_{i+1}, \ldots, v_k) + f(v_1, \ldots, v_{i+1}, v_i, \ldots, v_k) && \text{simplify}
\end{aligned}$$
As a result, we have $f(v_1, \ldots, v_i, v_{i+1}, \ldots, v_k) = -f(v_1, \ldots, v_{i+1}, v_i, \ldots, v_k)$, i.e. interchanging two successive arguments changes the sign of $f$. It then follows from Problem 3.4 that $f$ is alternating. $\square$

Problem 3.6: Let $V$ be a vector space. For $a, b \in \mathbb{R}$, $f \in A_k(V)$, and $g \in A_\ell(V)$, show that $af \wedge bg = (ab)\, f \wedge g$.

Proof: Consider the following:

$$\begin{aligned}
af \wedge bg &= \frac{1}{k!\,\ell!} A\bigl((af) \otimes (bg)\bigr) && \text{definition of } \wedge \\
&= \frac{1}{k!\,\ell!} A\bigl(ab\,(f \otimes g)\bigr) && \text{bilinearity of } \otimes \\
&= \frac{1}{k!\,\ell!} \sum_{\sigma \in S_{k+\ell}} (\operatorname{sgn} \sigma)\,ab\,\sigma(f \otimes g) && \text{definition of } A \\
&= ab\,\frac{1}{k!\,\ell!} \sum_{\sigma \in S_{k+\ell}} (\operatorname{sgn} \sigma)\,\sigma(f \otimes g) && \text{pull scalars through} \\
&= ab\,\frac{1}{k!\,\ell!} A(f \otimes g) && \text{definition of } A \\
&= (ab)\, f \wedge g && \text{definition of } \wedge
\end{aligned}$$
Thus, we have $af \wedge bg = (ab)\, f \wedge g$, as desired. $\square$

Problem 3.7: Suppose two sets of covectors on a vector space $V$, $\beta^1, \ldots, \beta^k$ and $\gamma^1, \ldots, \gamma^k$, are related by

$$\beta^i = \sum_{j=1}^k a^i_j\,\gamma^j, \quad i = 1, \ldots, k, \quad \text{for a } k \times k \text{ matrix } A = [a^i_j].$$
Show that $\beta^1 \wedge \cdots \wedge \beta^k = (\det A)\,\gamma^1 \wedge \cdots \wedge \gamma^k$.

Proof: Consider the following:
$$\begin{aligned}
\beta^1 \wedge \cdots \wedge \beta^k &= \Bigl(\sum_{j=1}^k a^1_j\,\gamma^j\Bigr) \wedge \cdots \wedge \Bigl(\sum_{j=1}^k a^k_j\,\gamma^j\Bigr) && \text{given} \\
&= \sum_{1 \leq j_1, \ldots, j_k \leq k} a^1_{j_1} \cdots a^k_{j_k}\,\gamma^{j_1} \wedge \cdots \wedge \gamma^{j_k} && \wedge \text{ is distributive} \\
&= \sum_{\sigma \in S_k} a^1_{\sigma(1)} \cdots a^k_{\sigma(k)}\,\gamma^{\sigma(1)} \wedge \cdots \wedge \gamma^{\sigma(k)} && \text{terms with a repeated index vanish} \\
&= \sum_{\sigma \in S_k} a^1_{\sigma(1)} \cdots a^k_{\sigma(k)}\,(\operatorname{sgn} \sigma)\,\gamma^1 \wedge \cdots \wedge \gamma^k && \text{rearrange the } \gamma^i \\
&= \Bigl(\sum_{\sigma \in S_k} (\operatorname{sgn} \sigma)\, a^1_{\sigma(1)} \cdots a^k_{\sigma(k)}\Bigr)\,\gamma^1 \wedge \cdots \wedge \gamma^k && \text{factor} \\
&= (\det A)\,\gamma^1 \wedge \cdots \wedge \gamma^k && \text{definition of } \det A
\end{aligned}$$
Thus, we have $\beta^1 \wedge \cdots \wedge \beta^k = (\det A)\,\gamma^1 \wedge \cdots \wedge \gamma^k$, as desired. $\square$

Problem 3.8: Let $f$ be a $k$-covector on a vector space $V$. Suppose two sets of vectors $u_1, \ldots, u_k$ and $v_1, \ldots, v_k$ in $V$ are related by
$$u_j = \sum_{i=1}^k a^i_j\,v_i, \quad j = 1, \ldots, k, \quad \text{for a } k \times k \text{ matrix } A = [a^i_j].$$
Show that $f(u_1, \ldots, u_k) = (\det A)\, f(v_1, \ldots, v_k)$.

Proof: Consider the following:
$$\begin{aligned}
f(u_1, \ldots, u_k) &= f\Bigl(\sum_{i=1}^k a^i_1\,v_i, \ldots, \sum_{i=1}^k a^i_k\,v_i\Bigr) && \text{definition of } u_j \\
&= \sum_{1 \leq i_1, \ldots, i_k \leq k} a^{i_1}_1 \cdots a^{i_k}_k\, f(v_{i_1}, \ldots, v_{i_k}) && f \text{ is } k\text{-linear} \\
&= \sum_{\sigma \in S_k} a^{\sigma(1)}_1 \cdots a^{\sigma(k)}_k\, f(v_{\sigma(1)}, \ldots, v_{\sigma(k)}) && \text{terms with a repeated index vanish} \\
&= \sum_{\sigma \in S_k} a^{\sigma(1)}_1 \cdots a^{\sigma(k)}_k\,(\operatorname{sgn} \sigma)\, f(v_1, \ldots, v_k) && \text{rearrange arguments} \\
&= \Bigl(\sum_{\sigma \in S_k} (\operatorname{sgn} \sigma)\, a^{\sigma(1)}_1 \cdots a^{\sigma(k)}_k\Bigr) f(v_1, \ldots, v_k) && \text{factor} \\
&= (\det A)\, f(v_1, \ldots, v_k) && \text{definition of } \det A
\end{aligned}$$
Thus, we have $f(u_1, \ldots, u_k) = (\det A)\, f(v_1, \ldots, v_k)$, as desired. $\square$
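Since the determinant is itself a $k$-covector on $\mathbb{R}^k$, this identity can be sanity-checked numerically; the random sample matrices below are my own.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 3
A = rng.standard_normal((k, k))  # change-of-vectors matrix [a^i_j]
V = rng.standard_normal((k, k))  # columns are v_1, ..., v_k

# u_j = sum_i a^i_j v_i means, column by column, U = V A.
U = V @ A

# f = det, evaluated on the columns of each matrix:
print(np.isclose(np.linalg.det(U), np.linalg.det(A) * np.linalg.det(V)))  # True
```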

Problem 3.9: Let $V$ be a vector space of dimension $n$. Prove that if an $n$-covector $\omega$ vanishes on a basis $e_1, \ldots, e_n$ for $V$, then $\omega$ is the zero covector on $V$.

Proof: Let $v_1, \ldots, v_n \in V$, and note that $v_j = \sum_{i=1}^n a^i_j\,e_i$ for all $j$ and some $n \times n$ matrix $A = [a^i_j]$, as $e_1, \ldots, e_n$ is a basis for $V$. It then follows from Problem 3.8 and the hypothesis on $\omega$ that
$$\omega(v_1, \ldots, v_n) = (\det A)\,\omega(e_1, \ldots, e_n) = (\det A)(0) = 0,$$
so $\omega(v_1, \ldots, v_n) = 0$ for any $v_1, \ldots, v_n \in V$. As a result, we may conclude that $\omega$ is the zero covector on $V$. $\square$

Problem 3.10: Let $\alpha^1, \ldots, \alpha^k$ be 1-covectors on a vector space $V$. Show that $\alpha^1 \wedge \cdots \wedge \alpha^k \neq 0$ if and only if $\alpha^1, \ldots, \alpha^k$ are linearly independent in the dual space $V^\vee$.

Proof: ($\Rightarrow$) We shall proceed contrapositively, so assume that $\alpha^1, \ldots, \alpha^k$ are not linearly independent. This implies that, without loss of generality, we may write $\alpha^k$ as a linear combination of the other covectors, say $\alpha^k = \sum_{i=1}^{k-1} c_i\,\alpha^i$ for some scalars $c_1, \ldots, c_{k-1}$. This implies that
$$\alpha^1 \wedge \cdots \wedge \alpha^k = \alpha^1 \wedge \cdots \wedge \alpha^{k-1} \wedge \Bigl(\sum_{i=1}^{k-1} c_i\,\alpha^i\Bigr) = \sum_{i=1}^{k-1} c_i\,\alpha^1 \wedge \cdots \wedge \alpha^{k-1} \wedge \alpha^i,$$
and every term has a repeated $\alpha^i$, hence vanishes. As a result, $\alpha^1 \wedge \cdots \wedge \alpha^k = 0$, so we have shown contrapositively that $\alpha^1 \wedge \cdots \wedge \alpha^k \neq 0$ implies that $\alpha^1, \ldots, \alpha^k$ are linearly independent.

($\Leftarrow$) Assume that $\alpha^1, \ldots, \alpha^k$ are linearly independent. We may then extend them to a basis $\alpha^1, \ldots, \alpha^k, \ldots, \alpha^n$ for the dual space $V^\vee$. Let $v_1, \ldots, v_n$ be the corresponding dual basis for $V$. It then follows from Proposition 3.27 that
$$(\alpha^1 \wedge \cdots \wedge \alpha^k)(v_1, \ldots, v_k) = \det[\alpha^i(v_j)] = \det[\delta^i_j] = 1,$$
so $\alpha^1 \wedge \cdots \wedge \alpha^k \neq 0$.

As a result, $\alpha^1 \wedge \cdots \wedge \alpha^k \neq 0$ if and only if $\alpha^1, \ldots, \alpha^k$ are linearly independent in the dual space $V^\vee$. $\square$

Problem 3.11: Let $\alpha$ be a nonzero 1-covector and $\gamma$ a $k$-covector on a finite-dimensional vector space $V$. Show that $\alpha \wedge \gamma = 0$ if and only if $\gamma = \alpha \wedge \beta$ for some $(k-1)$-covector $\beta$ on $V$.

Proof: ($\Rightarrow$) Assume that $\alpha \wedge \gamma = 0$. We may extend $\alpha$ to a basis $\alpha^1, \ldots, \alpha^n$ for $V^\vee$, where $\alpha^1 = \alpha$. We may then write $\gamma = \sum c_J\,\alpha^J$, where $J$ runs over all strictly ascending multi-indices $1 \leq j_1 < \cdots < j_k \leq n$ and $\alpha^J = \alpha^{j_1} \wedge \cdots \wedge \alpha^{j_k}$. In the sum $\alpha \wedge \gamma = \sum c_J\,\alpha \wedge \alpha^J$, all the terms $\alpha \wedge \alpha^J$ with $j_1 = 1$ vanish, since $\alpha = \alpha^1$. As a result, we have
$$0 = \alpha \wedge \gamma = \sum_{j_1 \neq 1} c_J\,\alpha \wedge \alpha^J.$$
As $\{\alpha \wedge \alpha^J\}_{j_1 \neq 1}$ is a subset of a basis for $A_{k+1}(V)$, it is linearly independent, and so all $c_J$ are $0$ if $j_1 \neq 1$. It then follows that
$$\gamma = \sum_{j_1 = 1} c_J\,\alpha^J = \alpha \wedge \Bigl(\sum_{j_1 = 1} c_J\,\alpha^{j_2} \wedge \cdots \wedge \alpha^{j_k}\Bigr),$$
where $\sum_{j_1 = 1} c_J\,\alpha^{j_2} \wedge \cdots \wedge \alpha^{j_k}$ is the desired $\beta$.

($\Leftarrow$) Assume that $\gamma = \alpha \wedge \beta$ for some $(k-1)$-covector $\beta$ on $V$. Consider the following:
$$\begin{aligned}
\alpha \wedge \gamma &= \alpha \wedge (\alpha \wedge \beta) && \text{initial assumption} \\
&= (\alpha \wedge \alpha) \wedge \beta && \text{associativity of } \wedge \\
&= 0 \wedge \beta && \text{Corollary 3.23} \\
&= 0 && \text{simplify}
\end{aligned}$$
Thus, we have $\alpha \wedge \gamma = 0$. As a result, we have $\alpha \wedge \gamma = 0$ if and only if $\gamma = \alpha \wedge \beta$ for some $(k-1)$-covector $\beta$ on $V$. $\square$


Chapter 4 Problem 4.1: Let ω be the 1-form zdx − dz and let X be the vector field y∂/∂x + x∂/∂y on R3 . Compute ω(X) and dω. ∂ Proof: Recall that di is the covector for ∂i and observe that   ∂ ∂ ω(X) = (zdx − dz) y +x ∂x ∂y         ∂ ∂ ∂ ∂ + zxdx − ydz − xdz = zydx ∂x ∂y ∂x ∂y

= zy(1) + 0 − 0 − 0 = zy. Thus, we have that ω(X) = zy. We may also observe that dω = d(zdx − dz) = d(zdx) − d(dz) = dz ∧ dx + zd(dx) − d(dz) = dz ∧ dx + 0 − 0 = dz ∧ dx. Thus, we have that dω = dz ∧ dx.  Problem 4.3: Suppose the standard coordinates on R2 are called r and θ. (this R2 is the (r, θ)-plane, not the (x, y)-plane). If x = r cos θ and y = r sin θ, calculate dx, dy, and dx ∧ dy in terms of dr and dθ. Proof:

Observe that dx = d(rcosθ) = cos θdr − r sin θdθ. Likewise, we have that dy =

d(r sin θ) = sin θdr + r cos θdθ. We can then also see that dx ∧ dy = (cos θdr − r sin θdθ) ∧ (sin θdr + r cos θdθ) = cos θ sin θdr ∧ dr + cos2 θrdr ∧ dθ − r sin2 θdθ ∧ dr − r2 sin θ cos θdθ ∧ dθ = 0 + cos2 θrdr ∧ dθ + sin2 rdr ∧ dθ − 0 = (cos2 θ + sin2 θ)rdr ∧ dθ = (1)rdr ∧ dθ = rdr ∧ dθ. Thus, we have that dx ∧ dy = rdr ∧ dθ.  Problem 4.4: Suppose the standard coordinates on R3 are called ρ, φ, and θ. If x = ρ sin φ cos θ, y = ρ sin φ sin θ, and z = ρ cos φ, calculated dx, dy, dz, and dx ∧ dy ∧ dz in terms of dρ, dφ, and dθ. Proof: We can immediately observe the following: dx = d(ρ sin φ cos θ) = (sin φ cos θ)dρ + (ρ cos φ cos θ)dφ + (−ρ sin φ sin θ)dθ dy = d(ρ sin φ sin θ) = (sin φ sin θ)dρ + (ρ cos φ sin θ)dφ + (ρ sin φ cos θ)dθ 19

dz = d(ρ cos φ) = (cos φ)dρ + (−ρ sin φ)dφ We can then continue to our final result: dx ∧ dy ∧ dz = (((sin φ cos θ)dρ + (ρ cos φ cos θ)dφ + (−ρ sin φ sin θ)dθ) ∧ ((sin φ sin θ)dρ + (ρ cos φ sin θ)dφ + (ρ sin φ cos θ)dθ)) ∧ dz = ((sin2 φ sin θ cos θ)dρ ∧ dρ + (ρ sin φ cos φ sin θ cos θ)dρ ∧ dφ + (ρ sin2 φ cos2 θ)dρ ∧ dθ + (ρ sin φ cos φ sin θ cos θ)dφ ∧ dρ + (ρ2 cos2 φ sin θ cos θ)dφ ∧ dφ + (ρ2 sin φ cos φ cos2 θ)dφ ∧ dθ + (−ρ sin2 φ sin2 θ)dθ ∧ dρ + (−ρ2 sin φ cos φ sin2 θ)dθ ∧ dφ + (−ρ2 sin2 φ sin θ cos θ)dθ ∧ dθ) ∧ dz = (0 + (ρ sin φ cos φ sin θ cos θ)dρ ∧ dφ + (ρ sin2 φ cos2 θ)dρ ∧ dθ + (ρ sin φ cos φ sin θ cos θ)dφ ∧ dρ + 0 + (ρ2 sin φ cos φ cos2 θ)dφ ∧ dθ + (−ρ sin2 φ sin2 θ)dθ ∧ dρ + (−ρ2 sin φ cos φ sin2 θ)dθ ∧ dφ + 0) ∧ dz = ((ρ sin φ cos φ sin θ cos θ)dρ ∧ dφ + (ρ sin2 φ cos2 θ)dρ ∧ dθ + (−ρ sin φ cos φ sin θ cos θ)dρ ∧ dφ + (ρ2 sin φ cos φ cos2 θ)dφ ∧ dθ + (−ρ sin2 φ sin2 θ)dθ ∧ dρ + (−ρ2 sin φ cos φ sin2 θ)dθ ∧ dφ) ∧ dz = ((ρ sin2 φ cos2 θ)dρ ∧ dθ + (ρ2 sin φ cos φ cos2 θ)dφ ∧ dθ + (−ρ sin2 φ sin2 θ)dθ ∧ dρ + (−ρ2 sin φ cos φ sin2 θ)dθ ∧ dφ) ∧ ((cos φ)dρ + (−ρ sin φ)dφ) = (ρ sin2 φ cos φ cos2 θ)dρ ∧ dθ ∧ dρ + (ρ2 sin φ cos2 φ cos2 θ)dφ ∧ dθ ∧ dρ + (−ρ sin2 φ cos φ sin2 θ)dθ ∧ dρ ∧ dρ + (−ρ2 sin φ cos2 φ sin2 θ)dθ ∧ dφ ∧ dρ + (−ρ2 sin3 φ cos2 θ)dρ ∧ dθ ∧ dφ + (−ρ3 sin2 φ cos φ cos2 θ)dφ ∧ dθ ∧ dφ + (ρ2 sin3 φ sin2 θ)dθ ∧ dρ ∧ dφ + (ρ3 sin2 φ cos φ sin2 θ)dθ ∧ dφ ∧ dφ = 0 + (ρ2 sin φ cos2 φ cos2 θ)dφ ∧ dθ ∧ dρ + 0 + (−ρ2 sin φ cos2 φ sin2 θ)dθ ∧ dφ ∧ dρ + (−ρ2 sin3 φ cos2 θ)dρ ∧ dθ ∧ dφ + 0 + (ρ2 sin3 φ sin2 θ)dθ ∧ dρ ∧ dφ + 0 = (ρ2 sin φ cos2 φ cos2 θ)dρ ∧ dφ ∧ dθ + (ρ2 sin φ cos2 φ sin2 θ)dρ ∧ dφ ∧ dθ + (ρ2 sin3 φ cos2 θ)dρ ∧ dφ ∧ dθ + (ρ2 sin3 φ sin2 θ)dρ ∧ dφ ∧ dθ = ((ρ2 sin φ cos2 φ cos2 θ) + (ρ2 sin φ cos2 φ sin2 θ) + (ρ2 sin3 φ cos2 θ) + (ρ2 sin3 φ sin2 θ))dρ ∧ dφ ∧ dθ = ((ρ2 sin φ cos2 φ)(cos2 θ + sin2 θ) + (ρ2 sin3 φ)(cos2 θ + sin2 θ))dρ ∧ dφ ∧ dθ = (ρ2 sin φ cos2 φ + ρ2 sin3 φ)dρ ∧ dφ ∧ dθ = (ρ2 sin φ(cos2 φ + sin2 φ))dρ ∧ dφ ∧ dθ = (ρ2 sin φ)dρ 
∧ dφ ∧ dθ Thus, we have that dx ∧ dy ∧ dz = ρ2 sin φ dρ ∧ dφ ∧ dθ. 
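The coefficient computed above can be cross-checked symbolically: dx ∧ dy ∧ dz pulls back to det(∂(x, y, z)/∂(ρ, φ, θ)) dρ ∧ dφ ∧ dθ, so the wedge computation should reproduce the Jacobian determinant. A minimal sketch, assuming SymPy is available:

```python
import sympy as sp

rho, phi, theta = sp.symbols('rho phi theta', positive=True)

# Spherical coordinates as above: x = ρ sinφ cosθ, y = ρ sinφ sinθ, z = ρ cosφ.
x = rho * sp.sin(phi) * sp.cos(theta)
y = rho * sp.sin(phi) * sp.sin(theta)
z = rho * sp.cos(phi)

# dx ∧ dy ∧ dz = det(J) dρ ∧ dφ ∧ dθ, where J is the Jacobian matrix.
J = sp.Matrix([x, y, z]).jacobian([rho, phi, theta])
vol_coeff = sp.simplify(J.det())  # should agree with ρ² sin φ
```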


Problem 4.5: Let α be a 1-form and β a 2-form on R3 . Then α = a1 dx1 + a2 dx2 + a3 dx3 β = b1 dx2 ∧ dx3 + b2 dx3 ∧ dx1 + b3 dx1 ∧ dx2 . Simplify the expression α ∧ β as much as possible. Proof: Observe the following: α∧β = (a1 dx1 + a2 dx2 + a3 dx3 ) ∧ (b1 dx2 ∧ dx3 + b2 dx3 ∧ dx1 + b3 dx1 ∧ dx2 ) = (a1 b1 )dx1 ∧ dx2 ∧ dx3 + (a1 b2 )dx1 ∧ dx3 ∧ dx1 + (a1 b3 )dx1 ∧ dx1 ∧ dx2 (a2 b1 )dx2 ∧ dx2 ∧ dx3 + (a2 b2 )dx2 ∧ dx3 ∧ dx1 + (a2 b3 )dx2 ∧ dx1 ∧ dx2 (a3 b1 )dx3 ∧ dx2 ∧ dx3 + (a3 b2 )dx3 ∧ dx3 ∧ dx1 + (a3 b3 )dx3 ∧ dx1 ∧ dx2 = (a1 b1 )dx1 ∧ dx2 ∧ dx3 + 0 + 0 + 0 + (a2 b2 )dx2 ∧ dx3 ∧ dx1 + 0 + 0 + 0 + (a3 b3 )dx3 ∧ dx1 ∧ dx2 = (a1 b1 )dx1 ∧ dx2 ∧ dx3 + (a2 b2 )dx1 ∧ dx2 ∧ dx3 + (a3 b3 )dx1 ∧ dx2 ∧ dx3 = (a1 b1 + a2 b2 + a3 b3 )dx1 ∧ dx2 ∧ dx3 Thus, we have that α ∧ β = (a1 b1 + a2 b2 + a3 b3 )dx1 ∧ dx2 ∧ dx3 . 

Problem 4.6: The correspondence between differential forms and vector fields on an open subset of R3 in Subsection 4.6 also makes sense pointwise. Let V be a vector space of dimension 3 with basis e1 , e2 , e3 , and dual basis α1 , α2 , α3 . To a 1-covector α = a1 α1 + a2 α2 + a3 α3 on V , we associate the vector vα = ⟨a1 , a2 , a3 ⟩ ∈ R3 . To the 2-covector γ = c1 α2 ∧ α3 + c2 α3 ∧ α1 + c3 α1 ∧ α2 on V , we associate the vector vγ = ⟨c1 , c2 , c3 ⟩ ∈ R3 . Show that under this correspondence, the wedge product of 1-covectors corresponds to the cross product of vectors in R3 : if α = a1 α1 + a2 α2 + a3 α3 and β = b1 α1 + b2 α2 + b3 α3 , then vα∧β = vα × vβ .

Proof: Observe the following (the terms αi ∧ αi vanish):

α ∧ β = (a1 b2 )α1 ∧ α2 + (a1 b3 )α1 ∧ α3 + (a2 b1 )α2 ∧ α1 + (a2 b3 )α2 ∧ α3 + (a3 b1 )α3 ∧ α1 + (a3 b2 )α3 ∧ α2

= (a2 b3 − a3 b2 )α2 ∧ α3 + (a3 b1 − a1 b3 )α3 ∧ α1 + (a1 b2 − a2 b1 )α1 ∧ α2 As a result, we have that vα∧β = ha2 b3 − a3 b2 , a3 b1 − a1 b3 , a1 b2 − a2 b1 i. Note that vα = ha1 , a2 , a3 i and vβ = hb1 , b2 , b3 i; it then follows immediately that vα × vβ = ha2 b3 − a3 b2 , a3 b1 − a1 b3 , a1 b2 − a2 b1 i. Thus, we have that vα∧β = ha2 b3 − a3 b2 , a3 b1 − a1 b3 , a1 b2 − a2 b1 i = vα × vβ , as desired. 
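The coefficient formula derived above can be spot-checked numerically against NumPy's cross product; this is only an illustrative check, not part of the original solution:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(3)  # coefficients a1, a2, a3 of α
b = rng.standard_normal(3)  # coefficients b1, b2, b3 of β

# Coefficients of α ∧ β in the basis α2∧α3, α3∧α1, α1∧α2, read off above.
v_wedge = np.array([a[1]*b[2] - a[2]*b[1],
                    a[2]*b[0] - a[0]*b[2],
                    a[0]*b[1] - a[1]*b[0]])

assert np.allclose(v_wedge, np.cross(a, b))
```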

Problem 4.7: Let A = ⊕_{k=−∞}^{∞} Ak be a graded algebra over a field K with Ak = 0 for k < 0. Let m be an integer. A superderivation of A of degree m is a K-linear map D : A → A such that for all k, D(Ak ) ⊂ Ak+m and for all a ∈ Ak and b ∈ Aℓ ,

D(ab) = (Da)b + (−1)^(km) a(Db).

If D1 and D2 are two superderivations of A of respective degrees m1 and m2 , define their commutator to be

[D1 , D2 ] = D1 ◦ D2 − (−1)^(m1 m2) D2 ◦ D1 .

Show that [D1 , D2 ] is a superderivation of degree m1 + m2 .

Proof: As D1 and D2 are superderivations, they are K-linear, so [D1 , D2 ] is immediately

k-linear. Now consider the following: [D1 , D2 ](ab) = (D1 ◦ D2 − (−1)m1 m2 D2 ◦ D1 )(ab) = (D1 ◦ D2 )(ab) − (−1)m1 m2 (D2 ◦ D1 )(ab) = D1 (D2 (ab)) − (−1)m1 m2 (D2 (D1 (ab))) = D1 ((D2 a)b + (−1)km2 (a(D2 b))) − (−1)m1 m2 D2 ((D1 a)b + (−1)km1 (a(D1 b))) = D1 ((D2 a)b) + (−1)km2 D1 (a(D2 b)) − (−1)m1 m2 (D2 ((D1 a)b) + (−1)km1 D2 (a(D1 b))) = ((D1 (D2 a))b + (−1)(k+m2 )m1 (D2 a)(D1 b)) + (−1)km2 ((D1 a)(D − 2b) + (−1)km1 (a)(D1 (D2 b))) − (−1)m1 m2 (((D2 (D1 a))b + (−1)(k+m1 )m2 (D1 a)(D2 b)) + (−1)km1 ((D2 a)(D1 b) + (−1)km2 (a)(D2 (D1 b)))) = (D1 (D2 a))b + (−1)km1 +m1 m2 (D2 a)(D1 b) + (−1)km2 (D1 a)(D2 b) + (−1)km1 +km2 (a)(D1 (D2 b)) − (−1)m1 m2 (D2 (D1 a))b − (−1)km2 +m1 m2 +m1 m2 (D1 a)(D2 b) − (−1)km1 +m1 m2 (D2 a)(D1 b) − (−1)km1 +km2 +m1 m2 (a)(D2 (D1 b)) = (D1 (D2 a))b − (−1)m1 m2 (D2 (D1 a))b 22

+ (−1)^(km1 + m1 m2) (D2 a)(D1 b) − (−1)^(km1 + m1 m2) (D2 a)(D1 b)
+ (−1)^(km2) (D1 a)(D2 b) − (−1)^(km2 + 2m1 m2) (D1 a)(D2 b)
+ (−1)^(km1 + km2) a(D1 (D2 b)) − (−1)^(km1 + km2 + m1 m2) a(D2 (D1 b))
= ((D1 (D2 a)) − (−1)^(m1 m2) (D2 (D1 a)))b + 0 + (−1)^(km2) (D1 a)(D2 b)(1 − (−1)^(2m1 m2)) + (−1)^(k(m1 + m2)) a((D1 (D2 b)) − (−1)^(m1 m2) (D2 (D1 b)))
= ((D1 (D2 a)) − (−1)^(m1 m2) (D2 (D1 a)))b + (−1)^(k(m1 + m2)) a((D1 (D2 b)) − (−1)^(m1 m2) (D2 (D1 b)))
= ((D1 ◦ D2 − (−1)^(m1 m2) D2 ◦ D1 )a)b + (−1)^(k(m1 + m2)) a((D1 ◦ D2 − (−1)^(m1 m2) D2 ◦ D1 )b)
= ([D1 , D2 ]a)b + (−1)^(k(m1 + m2)) a([D1 , D2 ]b)

Thus [D1 , D2 ] raises degree by m1 + m2 and satisfies the graded Leibniz rule with exponent k(m1 + m2 ), so [D1 , D2 ] is a superderivation of degree m1 + m2 . 
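The two cancellations in the computation above come down to parity identities on the exponents of (−1); a quick brute-force sanity check (illustrative only):

```python
# Verify the sign bookkeeping used in the cancellation steps:
#  (i)  (-1)^((k+m2)m1) == (-1)^(k m1 + m1 m2), so the (D2 a)(D1 b) terms cancel;
#  (ii) (-1)^(2 m1 m2) == 1, so the (D1 a)(D2 b) terms cancel.
ok = all(
    (-1) ** ((k + m2) * m1) == (-1) ** (k * m1 + m1 * m2)
    and (-1) ** (2 * m1 * m2) == 1
    for k in range(6) for m1 in range(6) for m2 in range(6)
)
assert ok
```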


Chapter 5 Problem 5.1: Let A and B be two points not on the real line R. Consider the set S = (R − {0}) ∪ {A, B}. For any two positive real numbers c, d, define IA (−c, d) = (−c, 0) ∪ {A} ∪ (0, d) and similarly for IB (−c, d), with B instead of A. Define a topology on S as follows: On (R−{0}), use the subspace topology inherited from R, with open intervals as a basis. A basis of neighborhoods at A is the set {IA (−c, d) | c, d > 0}; similarly, a basis of neighborhoods at B is {IB (−c, d) | c, d > 0}. (a) Prove that the map h : IA (−c, d) → (−c, d) defined by h(x) = x for x ∈ (−c, 0) ∪ (0, d) and h(A) = 0 is a homeomorphism. Proof: (1) h is injective. Let x, y ∈ IA (−c, d) and say that h(x) = h(y). There are two cases: Case i: h(x) 6= 0. Then h(y) 6= 0, so we have that h(x) = x and h(y) = y, so x = y. Case ii: h(x) = 0. Then h(y) = 0, so we have that x = A = y, so x = y. As both cases hold, we have that h is injective. (2) h is surjective. Let y ∈ (−c, d). There are two cases: Case i: y 6= 0. Then y ∈ IA (−c, d) and h(y) = y. Case ii: y = 0. Then h(A) = y. As both cases hold, we have that h is surjective. (3) h is continuous. Note that we may prove continuity by showing that the preimage of basis elements in (−c, d) are open in IA (−c, d). As all open intervals in R serve as a basis for R, the intersection of all open intervals in R with (−c, d) will serve as a basis for (−c, d); in other words, a basis for (−c, d) is given by {(x, y) | (x, y) ⊆ (−c, d)}. We then have two cases: Case i: 0 ∈ / (x, y). Then we know that h−1 ((x, y)) = (x, y) ⊆ IA (−c, d), so h−1 ((x, y)) is open in IA (−c, d). Case ii: 0 ∈ (x, y). Then we know that h−1 ((x, y)) = IA (x, y) ⊆ IA (−c, d), so h−1 ((x, y)) is open in IA (−c, d). 24

As both cases hold, we have that h is continuous. (4) h−1 is continuous. Note that the basis elements of IA (−c, d) are given by (x, y) when 0∈ / (x, y) and IA (x, y) when x ≤ 0 ≤ y. Thus, there are two cases: Case i: h((x, y)). Then we know that h((x, y)) = (x, y) ⊆ (−c, d), so h((x, y)) is open in (−c, d). Case ii: h(IA (x, y)). Then we know that h(IA (x, y)) = (x, y) ⊆ (−c, d), so h(IA (x, y)) is open in (−c, d). As both cases hold, we have that h−1 is continuous. As the above four properties hold, it follows that h is a homeomorphism.  (b) Show that S is locally Euclidean and second countable, but not Hausdorff. Proof: Let p ∈ S and U ⊆ S be an open set containing p. Then there is some basis element Up containing p contained in U . We know from (a) that Up is homeomorphic to an open subset of R, so it follows that S is locally Euclidean. Recall that we may construct a basis B for R using only ball with rational radius centered at rational points (this was shown last semester). As S inherits its topology from R, we need only change those intervals (a, b) where 0 ∈ (a, b) to either IA (a, b) or IB (a, b). This does not change the countablility inherited from B, we know that S is second countable. Consider the points A, B ∈ S, clearly A 6= B. Let A ∈ U and B ∈ V , where U and V are open in S. As we know that any open set containing A will contain a basis element containing A, we may think of U as being of the form U = IA (a1 , a2 ). Similarly, we may say V = IB (b1 , b2 ). Furthermore, note that this implies that a1 , b1 < 0 < a2 , b2 . Let c1 = max{a1 , b1 } and c2 = min{a2 , b2 }, and note that (a1 , a2 ) ∩ (b1 , b2 ) = (c1 , c2 ). As a result, it follows that ((c1 , 0) ∪ (0, c2 )) ⊆ (IA (a1 , a2 ) ∩ IB (b1 , b2 )), so we have that IA (a1 , a2 ) ∩ IB (b1 , b2 ) 6= ∅ for any a1 , a2 , b1 , b2 ∈ R. Thus, S cannot be Hausdorff. 

Problem 5.2 A fundamental theorem of topology, the theorem on invariance of dimension, states that if two nonempty open sets U ⊂ Rn and V ⊂ Rm are homeomorphic, then n = m. Use the idea of Example 5.4 as well as the theorem on invariance of dimension to prove that the sphere with a hair in R3 is not locally Euclidean at q. Hence it cannot be a topological manifold. Proof:

Suppose the sphere with a hair is locally Euclidean of dimension n at the point q.

Then q has a neighborhood U homeomorphic to an open ball B = B(0, ε) ⊆ Rn with q mapping to 0. The homeomorphism U → B restricts to a homeomorphism U \ {q} → B \ {0}. Now B \ {0} is connected if n ≥ 2 and has two connected components if n = 1. Since U \ {q} has two connected components, the only possible homeomorphism would be from U to an open ball in R. However, we know that a neighborhood on the sphere will have dimension 2; as dimension is invariant under homeomorphism, it is not possible that a homeomorphism exists between U and an open ball in R. As a result, there cannot exist a homeomorphism between U and an open ball in Rn for any n, so the sphere with a hair is not locally Euclidean at q. It then follows that the sphere with a hair cannot be a topological manifold. 

Problem 5.3 Let S 2 be the unit sphere x2 + y 2 + z 2 = 1 in R3 . Define in S 2 the six charts corresponding to the six hemispheres – the front, rear, right, left, upper, and lower hemispheres:

U1 = {(x, y, z) ∈ S 2 | x > 0},

φ1 (x, y, z) = (y, z),

U2 = {(x, y, z) ∈ S 2 | x < 0},

φ2 (x, y, z) = (y, z),

U3 = {(x, y, z) ∈ S 2 | y > 0},

φ3 (x, y, z) = (x, z),

U4 = {(x, y, z) ∈ S 2 | y < 0},

φ4 (x, y, z) = (x, z),

U5 = {(x, y, z) ∈ S 2 | z > 0},

φ5 (x, y, z) = (x, y),

U6 = {(x, y, z) ∈ S 2 | z < 0},

φ6 (x, y, z) = (x, y),

Describe the domain φ4 (U14 ) of φ1 ◦ φ4 −1 and show that φ1 ◦ φ4 −1 is C ∞ on φ4 (U14 ). Do the same for φ6 ◦ φ1 −1 .

Proof: We know that U14 = U1 ∩ U4 , so φ4 (U14 ) = φ4 (U1 ∩ U4 ) = {(x, z) | x2 + z 2 < 1, x > 0} is our domain. Note that φ1 ◦ φ4 −1 will be defined by (x, z) ↦ (−√(1 − z 2 − x2 ), z). Observe that

∂/∂x (−√(1 − z 2 − x2 )) = −(1/2) · (1 − z 2 − x2 )^(−1/2) · (−2x).

Furthermore, note that the denominator of the rational expression 1/(1 − z 2 − x2 )^(1/2) is restricted to only positive numbers by our domain, so this partial derivative will be C ∞ . Similarly, the partial derivative with respect to z will also be C ∞ . As a result, we have that the function describing the map in the first component is C ∞ . As the function describing the map in the second component is the identity function, it is immediately C ∞ . Then because φ1 ◦ φ4 −1 is C ∞ in both the first and second component, φ1 ◦ φ4 −1 is C ∞ .

We know that U61 = U6 ∩ U1 , so φ1 (U61 ) = φ1 (U6 ∩ U1 ) = {(y, z) | y 2 + z 2 < 1, z < 0} is our domain. Note that φ6 ◦ φ1 −1 will be defined by (y, z) ↦ (√(1 − z 2 − y 2 ), y). Observe that

∂/∂y √(1 − z 2 − y 2 ) = (1/2) · (1 − z 2 − y 2 )^(−1/2) · (−2y).

Furthermore, note that the denominator of the rational expression 1/(1 − z 2 − y 2 )^(1/2) is restricted to only positive numbers by our domain, so this partial derivative will be C ∞ . Similarly, the partial derivative with respect to z will also be C ∞ . As a result, we have that the function describing the map in the first component is C ∞ . As the function describing the map in the second component is the identity function, it is immediately C ∞ . Then because φ6 ◦ φ1 −1 is C ∞ in both the first and second component, φ6 ◦ φ1 −1 is C ∞ . 
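The smoothness claim for the first component of φ1 ◦ φ4 −1 above can be verified symbolically; the sketch below (assuming SymPy) differentiates it on the stated domain:

```python
import sympy as sp

x, z = sp.symbols('x z', real=True)

# First component of φ1 ∘ φ4⁻¹ on {(x, z) : x² + z² < 1, x > 0}: y = -√(1 - z² - x²).
first = -sp.sqrt(1 - z**2 - x**2)

dfdx = sp.simplify(sp.diff(first, x))  # x / sqrt(1 - x² - z²)
dfdz = sp.simplify(sp.diff(first, z))  # z / sqrt(1 - x² - z²)
```

Both derivatives have a strictly positive expression under the square root on the domain, matching the argument above.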

Problem 5.4 Let {(Uα , φα )} be the maximal atlas on a manifold M . For any open set U in M and a point p ∈ U , prove the existence of a coordinate open set Uα such that p ∈ Uα ⊂ U . Proof: Let Uβ be any coordinate neighborhood of p in the maximal atlas. Any open subset Uβ is again in the maximal atlas, because it is C ∞ compatible with all the open sets in the maximal atlas. Thus Uα = Uβ ∩ U is a coordinate neighborhood such that p ∈ Uα ⊂ U . 


Chapter 6 Problem 6.1: Let R be the real line with the differentiable structure given by the maximal atlas of the chart (R, φ = 1 : R → R), and let R′ be the real line with the differentiable structure given by the maximal atlas of the chart (R, ψ : R → R), where ψ(x) = x^(1/3) .

(a) Show that these two differentiable structures are distinct.

Proof: The manifolds described give rise to the following diagram:

        F
  R −−−−→ R′
  1 ↓        ↓ ψ
  R          R

We know that if R and R′ have the same differentiable structure, then F must be the identity map and ψ ◦ F ◦ 1−1 : R → R must be a diffeomorphism. However, when we let F = id it can immediately be seen for x ∈ R that ψ ◦ id ◦ 1−1 (x) = ψ(x) = x^(1/3) . As we have shown in previous work that x^(1/3) is not C ∞ , it follows that ψ ◦ F ◦ 1−1 cannot be a diffeomorphism. As a result, R and R′ cannot have the same differentiable structure. 

(b) Show that there is a diffeomorphism between R and R′ .

Proof: We now let F : R → R′ be defined by F (x) = x3 . It can then immediately be seen for x ∈ R that ψ ◦ F ◦ 1−1 (x) = ψ(x3 ) = (x3 )^(1/3) = x. As the identity map is a diffeomorphism of R, it follows that F read through the charts is a diffeomorphism, so F is a diffeomorphism between R and R′ . 

Problem 6.2: Let M and N be manifolds and let q0 be a point in N . Prove that the inclusion map iq0 : M → M × N defined by iq0 (p) = (p, q0 ) is C ∞ . Proof:

Let (Uα , φα ) and (Vβ , ψβ ) be charts for M and N respectively; this implies that

(Uα × Vβ , φα × ψβ ) will be a chart for M × N . Then the manifolds described give rise to the following diagram:

        iq0
  M −−−−→ M × N
  φα ↓            ↓ φγ × ψβ
  φα (Uα )    φγ (Uγ ) × ψβ (Vβ )

(Note that we must pick α, γ so that Uα ∩ Uγ ≠ ∅ for this diagram to make sense.) In order to show that iq0 is C ∞ , we must show that (φγ × ψβ ) ◦ iq0 ◦ φα −1 is C ∞ . It can immediately be seen that

(φγ × ψβ ) ◦ iq0 ◦ φα −1 (x) = (φγ × ψβ )(φα −1 (x), q0 ) = ((φγ ◦ φα −1 )(x), ψβ (q0 )).

We can then show that ((φγ ◦ φα −1 )(x), ψβ (q0 )) is C ∞ by showing that its components are C ∞ . We immediately have that φγ ◦ φα −1 is C ∞ for all x, as this is a transition map. Furthermore, ψβ (q0 ) is a constant map, so it is also C ∞ for all x. As a result, we have that ((φγ ◦ φα −1 )(x), ψβ (q0 )) is C ∞ , so iq0 is C ∞ . 

Problem 6.4: Find all points in R3 in a neighborhood of which the functions x, x2 + y 2 + z 2 − 1, z can serve as a local coordinate system.

Proof: Define F : R3 → R3 by F (x, y, z) = (x, x2 + y 2 + z 2 − 1, z). The map F can serve as a coordinate map in a neighborhood of some point p ∈ R3 if and only if it is a local diffeomorphism at p. The Jacobian determinant of F is

∂(F 1 , F 2 , F 3 )/∂(x, y, z) = det [ 1  0  0 ; 2x  2y  2z ; 0  0  1 ] = 2y.

By the inverse function theorem, F is a local diffeomorphism at p = (x, y, z) if and only if y ≠ 0; thus, F can serve as a coordinate system at any point not in the xz-plane. 
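The determinant above is quick to confirm with SymPy (an illustrative check):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
F = sp.Matrix([x, x**2 + y**2 + z**2 - 1, z])

# Jacobian determinant of F(x, y, z) = (x, x² + y² + z² − 1, z); expect 2y.
jac_det = sp.simplify(F.jacobian([x, y, z]).det())
```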


Chapter 7 Problem 7.5: Suppose a right action of a topological group G on a topological space S is continuous; this simply means that the map S × G → S describing the action is continuous. Define two points x, y of S to be equivalent if they are in the same orbit; i.e., there is an element g ∈ G such that y = xg. Let S/G be the quotient space; it is called the orbit space of the action. Prove that the projection map π : S → S/G is an open map. Proof:

Let U be an open subset of S. For each g ∈ G we know that U g will be open because right multiplication by g is a homeomorphism from S to S. We also know that π −1 (π(U )) = ⋃_{g∈G} U g is open, as it is a union of open sets. It then follows from the definition of the quotient topology that π(U ) is open. 

Problem 7.6: Let the additive group 2πZ act on R on the right by x · 2πn = x + 2πn, where n is an integer. Show that the orbit space R/2πZ is a smooth manifold.

Proof: Let π : R → R/2πZ represent the projection map. Also define the chart maps φ1 : π((−π, π)) → (−π, π) by [t] ↦ t ∈ (−π, π) and φ2 : π((0, 2π)) → (0, 2π) by [t] ↦ t ∈ (0, 2π).

R/2πZ is locally Euclidean: The maps described above give rise to the following diagram:

            R/2πZ
      φ1 ↙        ↘ φ2
  (−π, π) −−−−→ (0, 2π)
           φ2 ◦ φ1 −1

We desire to show that φ1 and φ2 are C ∞ -compatible. There are two cases:
Case 1: x ∈ (0, π). Observe that φ2 ◦ φ1 −1 (x) = φ2 ([x]) = x.
Case 2: x ∈ (−π, 0). Observe that φ2 ◦ φ1 −1 (x) = φ2 ([x]) = x + 2π.
The case for φ1 ◦ φ2 −1 is similar. As all of these maps are the identity map or translations, they are C ∞ homeomorphisms. We can then extend these charts to a maximal atlas, so it follows that R/2πZ is locally Euclidean.

R/2πZ is Hausdorff: Let [α], [β] ∈ R/2πZ such that [α] ≠ [β]. Pick representatives a ∈ [α] and b ∈ [β] such that either a, b ∈ (−π, π) or a, b ∈ (0, 2π); in either case, a ≠ b. As R is Hausdorff, we have open

U, V ⊆ R such that U ∩ V = ∅. We can then map U and V forward to π(U ) and π(V ), which will be open, as π is an open map. Furthermore, we have that π(U ) ∩ π(V ) = ∅, as desired. R/2πZ is second countable: Recall that R is second countable; it then follows that a quotient space of R will be second countable, so R/2πZ is second countable. As the above properties hold, we have that R/2πZ is a smooth manifold. 
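The transition maps between the two charts of R/2πZ in Problem 7.6 act by adding a multiple of 2π so that the result lands in the target interval; a minimal numeric sketch (the helper names are hypothetical):

```python
import math

TWO_PI = 2 * math.pi

def to_chart2(x):
    """Send the class [x], x in (-π, π), to its representative in (0, 2π)."""
    return x % TWO_PI

def to_chart1(x):
    """Send the class [x], x in (0, 2π), to its representative in (-π, π)."""
    return x if x < math.pi else x - TWO_PI

assert math.isclose(to_chart2(1.0), 1.0)             # identity on (0, π)
assert math.isclose(to_chart2(-1.0), TWO_PI - 1.0)   # translation by +2π
assert math.isclose(to_chart1(TWO_PI - 1.0), -1.0)   # translation by -2π
```

Each branch is the identity or a translation, hence C∞, matching the compatibility argument above.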

Problem 7.7: (a) Let {(Uα , φα )}2α=1 be the atlas of the circle S 1 in Example 5.7, and let φα be the map φα followed by the projection R → R/2πZ. On U1 ∩ U2 = A t B, since φ1 and φ2 differ by an integer multiple of 2π, φ1 = φ2 . Therefore, φ1 and φ2 piece together to give a well-defined map φ : S 1 → R/2πZ. Prove that φ is C ∞ . Let π : R → R/2πZ be the projection map. Furthermore, let V1 = (−π, π) and V2 = (0, 2π). Now define π1 : V1 → R/2πZ be the restriction π (−π,π) and π2 : V2 → R/2πZ be the restriction π (0,2π) . The map φ described in the problem statement can be described Proof:

as φ = π ◦ φα . These maps then give rise to the following diagram:

         φ
  S 1 −−−−→ R/2πZ
  φα ↓            ↓ πβ −1
  φα (Uα )    Vβ

We have several cases to verify for the different choices of φα and πβ .
Case 1: α = β = 1. Let x ∈ (−π, π). Then we have that π1 −1 ◦ φ ◦ φ1 −1 (x) = π1 −1 ◦ π ◦ φ1 ◦ φ1 −1 (x) = x.

Case 2: α = β = 2. Let x ∈ (0, 2π). Then we have that −1 −1 π2−1 ◦ φ ◦ φ−1 2 (x) = π2 ◦ π ◦ φα ◦ φ2 (t) = x.

Case 3: α = 1 and β = 2. There are two subcases: Subcase i: Let x ∈ (0, π). Then we have that −1 −1 π2−1 ◦ φ ◦ φ−1 1 (x) = π2 ◦ π ◦ φα ◦ φ1 (t) = x.

Subcase ii: Let x ∈ (−π, 0). Then we have that −1 −1 π2−1 ◦ φ ◦ φ−1 1 (x) = π2 ◦ π ◦ φα ◦ φ1 (t) = x + 2π.


Case 4: α = 2 and β = 1. Subcase i: Let x ∈ (0, π). Then we have that −1 −1 π2−1 ◦ φ ◦ φ−1 1 (x) = π2 ◦ π ◦ φα ◦ φ1 (t) = x.

Subcase ii: Let x ∈ (π, 2π). Then we have that −1 −1 π2−1 ◦ φ ◦ φ−1 1 (x) = π2 ◦ π ◦ φα ◦ φ1 (t) = x − 2π.

All of these maps are identities or translations, so they are C ∞ ; thus φ is C ∞ .  (b) The complex exponential R → S 1 , t 7→ eit , is constant on each orbit of the action of 2πZ on R. Therefore, there is an induced map F : R/2πZ → S 1 , F ([t]) = eit . Prove that F is C ∞ . Proof: The maps described give rise to the following diagram: F

S 1 ←−R/2πZ φα ↓ πβ−1 ↓ φ(Uα ) Vβ We have several cases to verify for the different choices of φα and πβ . Case 1: α = β = 1. Let x ∈ (−π, π). Then we have that φ1 ◦ F ◦ π1 (x) = φ1 ◦ F ([x]) = φ1 (eix ) = x. Case 2: α = β = 2. Let x ∈ (0, 2π). Then we have that φ2 ◦ F ◦ π2 (x) = φ2 ◦ F ([x]) = φ2 (eix ) = x. Case 3: α = 1 and β = 2. There are two subcases: Subcase i: Let x ∈ (0, π). Then we have that φ2 ◦ F ◦ π1 (x) = φ2 ◦ F ([x]) = φ2 (eix ) = x. Subcase ii: Let x ∈ (−π, 0). Then we have that φ2 ◦ F ◦ π1 (x) = φ2 ◦ F ([x]) = φ2 (eix ) = x + 2π. Case 4: α = 2 and β = 1. Subcase i: Let x ∈ (0, π). Then we have that φ2 ◦ F ◦ π1 (x) = φ2 ◦ F ([x]) = φ2 (eix ) = x. 32

Subcase ii: Let x ∈ (π, 2π). Then we have that φ2 ◦ F ◦ π1 (x) = φ2 ◦ F ([x]) = φ2 (eix ) = x − 2π. As all of these maps are C ∞ , we have that φα ◦ F ◦ πβ is always C ∞ . It then follows that F is C ∞ .  (c) Prove that F : R/2πZ → S 1 is a diffeomorphism. Proof:

All of the maps described in the cases of (b) are identities or translations, so they

are all diffeomorphisms. Thus, we have that F is a diffeomorphism. 


Chapter 8 Problem 8.1: Let F : R2 → R3 be the map (u, v, w) = F (x, y) = (x, y, xy). Let p = (x, y) ∈ R2 . Compute F∗ (∂/∂x p ) as a linear combination of ∂/∂u, ∂/∂v, and ∂/∂w at F (p). Proof:

To determine the coefficient a in F∗ (∂/∂x) = a∂/∂u + b∂/∂v + c∂/∂w, we apply

both sides to u to obtain       ∂ ∂ ∂ ∂ ∂ ∂ ∂ F∗ u= a +b +c u = a, so a = F∗ u= (u ◦ F ) = (x) = 1. ∂x ∂u ∂v ∂w ∂x ∂x ∂x Similarly, we have that       ∂ ∂ ∂ ∂ ∂ ∂ ∂ F∗ +b +c (v ◦ F ) = (y) = 0, v= a v = b, so b = F∗ v= ∂x ∂u ∂v ∂w ∂x ∂x ∂x as well as       ∂ ∂ ∂ ∂ ∂ ∂ ∂ F∗ w= a +b +c w = c, so c = F∗ w= (w ◦ F ) = (xy) = y. ∂x ∂u ∂v ∂w ∂x ∂x ∂x Thus, it follows that F∗ (∂/∂x) = ∂/∂u + y∂/∂w.  Problem 8.2: Let L : Rn → Rm be a linear map. For any p ∈ Rn , there is a canonical ∼

identification Tp (Rn ) ≅ Rn given by

Σi ai ∂/∂xi |p ↦ a = ⟨a1 , . . . , an ⟩.

Show that the differential L∗,p : Tp (Rn ) → TL(p) (Rm ) is the map L : Rn → Rm itself, with the identification of the tangent spaces as above. P Proof: Let v = ai ∂x∂ i p 7→ a. We then must only observe that L(p + ta) − L(p) L(p) + tL(a) − L(p) L∗ (v p ) = lim = lim = L(a). t→0 t→0 t t  Problem 8.3: Fix a real number α and define F : R2 → R2 by " # " #" # u cos α − sin α x = (u, v) = F (x, y) = . v sin α cos α y 34

Let X = −y ∂/∂x + x ∂/∂y be a vector field on R2 . If p = (x, y) ∈ R2 and F∗ (Xp ) = (a ∂/∂u + b ∂/∂v)|F (p) , find a and b in terms of x, y, and α.

Proof: First observe that

[ cos α  −sin α ] [ x ]   [ x cos α − y sin α ]
[ sin α   cos α ] [ y ] = [ x sin α + y cos α ].

We shall follow a process similar to the one used in 8.1. To determine the coefficient a in F∗ (Xp ) = a ∂/∂u + b ∂/∂v, we apply both sides to u to obtain

F∗ (Xp )u = (a ∂/∂u + b ∂/∂v)u = a, so a = F∗ (Xp )u = Xp (u ◦ F ) = Xp (x cos α − y sin α) = −x sin α − y cos α.

Similarly we have that

F∗ (Xp )v = (a ∂/∂u + b ∂/∂v)v = b, so b = F∗ (Xp )v = Xp (v ◦ F ) = Xp (x sin α + y cos α) = x cos α − y sin α.

Thus, we have that a = −x sin α − y cos α and b = x cos α − y sin α. 

Problem 8.4: Let x, y be the standard coordinates on R2 , and let U be the open set U = R2 − {(x, 0) | x ≥ 0}. On U the polar coordinates r, θ are uniquely defined by x = r cos θ, y = r sin θ, r > 0, 0 < θ < 2π. Find ∂/∂r and ∂/∂θ in terms of ∂/∂x and ∂/∂y.

Proof: Let (U, (r, θ)) and (U, (r cos θ, r sin θ)) be charts on U . Then using Proposition 8.10 we immediately have that

∂/∂r = (∂x/∂r) ∂/∂x + (∂y/∂r) ∂/∂y = cos θ ∂/∂x + sin θ ∂/∂y,

and similarly

∂/∂θ = (∂x/∂θ) ∂/∂x + (∂y/∂θ) ∂/∂y = −r sin θ ∂/∂x + r cos θ ∂/∂y.

Thus, we have ∂/∂r = cos θ ∂/∂x + sin θ ∂/∂y and ∂/∂θ = −r sin θ ∂/∂x + r cos θ ∂/∂y. 
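The chain-rule computation for ∂/∂r and ∂/∂θ in Problem 8.4 can be sanity-checked by applying both sides to a test function (SymPy sketch; the test function is an arbitrary choice):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x, y = sp.symbols('x y', real=True)

f = x**2 * y + sp.sin(y)        # a concrete test function (arbitrary)
fx, fy = sp.diff(f, x), sp.diff(f, y)

polar = {x: r * sp.cos(th), y: r * sp.sin(th)}

# ∂f/∂r via the formula derived above: cosθ ∂f/∂x + sinθ ∂f/∂y.
via_formula_r = (sp.cos(th) * fx + sp.sin(th) * fy).subs(polar)
direct_r = sp.diff(f.subs(polar), r)

# ∂f/∂θ via the formula: -r sinθ ∂f/∂x + r cosθ ∂f/∂y.
via_formula_th = (-r * sp.sin(th) * fx + r * sp.cos(th) * fy).subs(polar)
direct_th = sp.diff(f.subs(polar), th)
```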

Problem 8.5: Prove Proposition 8.15: Let c : (a, b) → M be a smooth curve, and let (U, x1 , . . . , xn ) be a coordinate chart about c(t). Write ci = xi ◦ c for the ith component of c in the chart. Then c′(t) is given by

c′(t) = Σni=1 ċi (t) ∂/∂xi |c(t) .

Thus, relative to the basis {∂/∂xi |c(t)} for Tc(t) M , the velocity c′(t) is represented by the column vector (ċ1 (t), . . . , ċn (t))T .

Proof: We know that c′(t) = Σj aj ∂/∂xj . Now we apply this expression to xi to obtain ai in the following manner:

c′(t)xi = (Σj aj ∂/∂xj )xi            [apply to xi ]
= a1 ∂xi /∂x1 + · · · + ai ∂xi /∂xi + · · · + an ∂xi /∂xn            [expand the sum]
= ai            [since ∂xi /∂xj = δij ]

On the other hand, by the definitions of c′ , c∗ , and ci ,

c′(t)xi = c∗ (d/dt) xi = d(xi ◦ c)/dt = dci /dt = ċi (t).

Thus, we have that ai = ċi , so it follows that c′(t) = Σni=1 ċi (t) ∂/∂xi |c(t) , as desired. 
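Proposition 8.15 can be illustrated on a concrete curve in R², where the chart is the identity and the velocity components are just the derivatives of the component functions; a numeric sketch assuming NumPy (the curve is an arbitrary choice):

```python
import numpy as np

def c(t):
    # A concrete smooth curve in R² (identity chart): c(t) = (cos t, t²).
    return np.array([np.cos(t), t**2])

def velocity(t):
    # Components of c'(t) per Proposition 8.15: (ċ¹(t), ċ²(t)).
    return np.array([-np.sin(t), 2*t])

t0, h = 0.7, 1e-6
finite_diff = (c(t0 + h) - c(t0 - h)) / (2 * h)  # central difference
assert np.allclose(finite_diff, velocity(t0), atol=1e-6)
```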

Problem 8.6: Let p = (x, y) be a point in R2 . Then

cp (t) = [ cos 2t  −sin 2t ; sin 2t  cos 2t ] [ x ; y ],  t ∈ R,

is a curve with initial point p in R2 . Compute the velocity vector cp ′(0).

Proof: First observe that

cp (t) = [ cos 2t  −sin 2t ; sin 2t  cos 2t ] [ x ; y ] = [ x cos 2t − y sin 2t ; x sin 2t + y cos 2t ].

Note that c1p (t) = x cos 2t − y sin 2t and c2p (t) = x sin 2t + y cos 2t, so it follows that c˙1p (t) = −2x sin 2t − 2y cos 2t and c˙2p (t) = 2x cos 2t − 2y sin 2t. By applying Proposition 8.15 we immediately obtain c0p (0)

= ċ1p (0) ∂/∂x|c(0) + ċ2p (0) ∂/∂y|c(0)

∂ ∂ = (−2x sin 2(0) − 2y cos 2(0)) + (2x cos 2(0) − 2y sin 2(0)) ∂x c(0) ∂y c(0) ∂ ∂ = (0 − 2y) + (2x − 0) ∂x c(0) ∂y c(0) ∂ ∂ = 2y + 2x . ∂x c(0) ∂y c(0) ∂ ∂ + 2x .  Thus, we have that c0p (0) = 2y ∂x ∂y c(0) c(0) Problem 8.7: If M and N are manifolds, let π1 : M × N → M and π2 : M × N → N be the two projections. Prove that for (p, q) ∈ M × N , (π1∗ , π2∗ ) : Tp,q (M × N ) → Tp M × Tq N is an isomorphism. Proof: If (U, φ) = (U, x1 , . . . , xm ) and (V, ψ) = (V, y 1 , . . . , y n ) are charts about p in M and q in N repsectively it follows from Proposition 5.18 that a chart about (p, q) in M × N will be given by (U × V, φ × ψ) = (U × V, (π1∗ φ, π2∗ ψ)) = (U × V, x1 , . . . , xn , y 1 , . . . , y n ), P where xi = π1∗ xi and y i = π2∗ y i . Let π1∗ (∂/∂xj ) = aij ∂/∂xi . Then we have that   ∂ ∂ ∂xi i i i aj = π1∗ x = (x ◦ π ) = = δji . 1 ∂xj ∂xj ∂xj As a result, it follows that   X ∂ ∂ ∂ π1∗ = δji i = , which implies π1∗ j j ∂x ∂x ∂x i 37

! ∂ ∂ = . ∂xj p ∂xj (p,q)

We then similarly have that       ∂ ∂ ∂ ∂ = 0, π2∗ = 0, and π2∗ = j. π1∗ i j j y ∂y ∂x ∂y Thus, we have that a basis for T(p,q) (M × N ) will be given by ∂ ∂ ∂ ∂ ,..., m , 1 ,..., n . ∂x (p,q) ∂y (p,q) ∂y (p,q) ∂x1 (p,q) And then a basis for Tp M × Tq N is given by ! ! ! ! ∂ ∂ ∂ ∂ ,0 ,..., , 0 , 0, 1 , . . . , 0, n . ∂x1 p ∂xm p ∂y q ∂y q It then follows that the linear map (π1∗ , π2∗ ) maps a basis of T(p,q) (M × N ) to a basis of Tp M × Tq N . As a result, we have that (π1∗ , π2∗ ) is an isomorphism.  Problem 8.8: Let G be a Lie group with multiplication map µ : G × G → G, the inverse map ı : G → G, and identity element e. (a) Show that the differential at the identity of the multiplication map µ is addition: µ∗,(e,e) : Te G × Te G → Te G, defined by µ∗,(e,e) (Xe , Ye ) = Xe + Ye . Proof: Let c(t) be a curve starting at e ∈ G such that c0 (0) = Xe . Now define α(t) = (c(t), e) and note that α(t) is a curve starting at (e, e) ∈ G × G and α0 (0) = (Xe , 0). Similarly, let b(t) be a curve starting at e ∈ G such that b0 (0) = Ye . Now define β(t) = (e, b(t)) and note that β(t) is a curve starting at (e, e) ∈ G × G and β 0 (0) = (0, Ye ). We shall now compute u∗,(e,e) (Xe , Ye ) using α(t)β(t): d d d µ∗,(e,e) (Xe , Ye ) = (u ◦ αβ)(t) = (u(c(t)e, b(t)e)) = c(t)b(t) dt 0 dt 0 dt 0 = c0 (0)b(0) + c(0)b0 (0) = Xe (e, e) + Ye (e, e) = Xe + Ye . Thus, we have µ∗,(e,e) (Xe , Ye ) = Xe + Ye , as desired.  (b) Show that the differential at the identity of ı is the negative: ı∗,e : Te G → Te G, defined by ı∗,e (Xe ) = −Xe . Proof: Let c(t) be a curve starting at e ∈ G such that c0 (0) = Xe . We shall now compute ı∗,e (Xe ) using c(t): d d i∗,e (Xe ) = (i ◦ c)(t) = (−c(t)) = −c0 (0) = −Xe . dt 0 dt 0 38

Thus, we have i∗,e (Xe ) = −Xe , as desired. 
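For a matrix Lie group the two differentials in Problem 8.8 can be seen concretely: with curves c(t) = I + tX and b(t) = I + tY through the identity, the product curve has derivative X + Y at 0, and the inverse curve has derivative −X. A numeric sketch assuming NumPy (the matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((2, 2))
Y = rng.standard_normal((2, 2))
I = np.eye(2)

h = 1e-6
c = lambda t: I + t * X          # curve with c(0) = I, c'(0) = X
b = lambda t: I + t * Y          # curve with b(0) = I, b'(0) = Y

# μ∗(X, Y): derivative of t ↦ c(t) b(t) at t = 0 should be X + Y.
d_mult = (c(h) @ b(h) - c(-h) @ b(-h)) / (2 * h)
assert np.allclose(d_mult, X + Y, atol=1e-4)

# ι∗(X): derivative of t ↦ c(t)⁻¹ at t = 0 should be -X.
d_inv = (np.linalg.inv(c(h)) - np.linalg.inv(c(-h))) / (2 * h)
assert np.allclose(d_inv, -X, atol=1e-4)
```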

Problem 8.9: Let X1 , . . . , Xn be n vector fields on an open subset U of a manifold of dimension n. Suppose that at p ∈ U , the vectors (X1 )p , . . . , (Xn )p are linearly independent. Show that there is a chart (V, x1 , . . . , xn ) about p such that (Xi )p = (∂/∂xi )p for i = 1, . . . , n. P Proof: Let (V, y 1 , . . . , y n ) be a chart about p. Suppose that (Xj )p = i aij ∂/∂y i |p . As we have that (X1 )p , . . . , (Xn )p are linearly independent, we know that the matrix A = [aij ] is nonsingular. Now define a new coordinate system x1 , . . . , xn by i

y i = Σnj=1 aij xj for i = 1, . . . , n.

It then follows from the chain rule that X ∂y i ∂ X ∂ ∂ = = aij i . j j i ∂x ∂x ∂y ∂y i Which when considered at the point p the above equations can be realized as X ∂ i ∂ = a = (Xj )p . j ∂xj p ∂y i p i Representing this result in matrix notation then gives     y1 x1 . .  ..  = A  ..  , which implies that     n y xn



(x1 , . . . , xn )T = A−1 (y 1 , . . . , y n )T .

As a result, we have that our new coordinate system described above is equivalent to xj = Σni=1 (A−1 )ji y i . 

Problem 8.10: A real-valued function f : M → R on a manifold is said to have a local maximum at p ∈ M if there is a neighborhood U of p such that f (p) ≥ f (q) for all q ∈ U .

(a) Prove that if a differentiable function f : I → R defined on an open interval I has a local maximum at p ∈ I, then f ′(p) = 0.

Proof:

As f has a local maximum at p ∈ I, it follows that f (p) ≥ f (q) for all q in a neighborhood of p in I. Because f is differentiable, the two one-sided limits below exist and both equal f ′(p):

f ′(p) = lim_{x→p−} (f (x) − f (p))/(x − p) ≥ 0   and   f ′(p) = lim_{x→p+} (f (x) − f (p))/(x − p) ≤ 0.

We now have that 0 ≤ f ′(p) ≤ 0, so f ′(p) = 0. 

(b) Prove that a local maximum of a C ∞ function f : M → R is a critical point of f . Proof:

Let p ∈ M be the point at which the local maximum occurs. Now let Xp be a

tangent vector in Tp M and let c(t) be a curve in M starting at p with initial vector Xp . Then it follows that f ◦ c will be a real-valued function with a local maximum at 0. Applying (a) tells us that its derivative at 0 is zero. So then we have that

0 = d/dt|0 (f ◦ c)(t) = f∗,c(0) ◦ c∗,0 = f∗,p (c′(0)) = f∗,p (Xp ),

so f∗,p (Xp ) = 0. Because Xp was arbitrary, f∗,p is the zero map and hence not surjective, so p is a critical point of f . 


Chapter 9 Problem 9.1: Define f : R2 → R by f (x, y) = x3 − 6xy + y 2 . Find all values c ∈ R for which the level set f −1 (c) is a regular submanifold of R2 . Proof:

To find the desired values, we will construct the Jacobian for this map (note that this case is somewhat degenerate):

J(f ) = [ ∂f /∂x  ∂f /∂y ] = [ 3x2 − 6y  −6x + 2y ].

This solution set is not a submanifold because it fails to be locally Euclidean to R at (0, 0). Similarly, the graph of the solutions of x3 − 6xy + y 2 = −108 yields the following set in R2 :

41

This solution set is not a submanifold because its connected components do not all have the same dimension; the left side of the set has dimension 1, while the isolated point at (6, 18) has dimension 0.  Problem 9.2: Let x, y, z, w be the standard coordinates on R4 . Is the solution set of x5 + y 5 + z 5 + w5 = 1 in R4 a smooth manifold? Explain why or why not. (Assume that the subset is given the subspace topology.) Proof:

Let $S \subseteq \mathbb{R}^4$ be the solution set of $x^5 + y^5 + z^5 + w^5 = 1$ and define $f : \mathbb{R}^4 \to \mathbb{R}$ by $(x, y, z, w) \mapsto x^5 + y^5 + z^5 + w^5$. Note that $f^{-1}(1) = S$ and $(1, 0, 0, 0) \in f^{-1}(1)$, so $f^{-1}(1) \neq \emptyset$. To determine the answer to the given question, we construct the Jacobian of $f$:
\[
J(f) = \begin{bmatrix} \dfrac{\partial f}{\partial x} & \dfrac{\partial f}{\partial y} & \dfrac{\partial f}{\partial z} & \dfrac{\partial f}{\partial w} \end{bmatrix} = \begin{bmatrix} 5x^4 & 5y^4 & 5z^4 & 5w^4 \end{bmatrix}.
\]
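This gradient computation can be spot-checked symbolically (an illustrative sympy computation, not part of the original solution):

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w', real=True)
f = x**5 + y**5 + z**5 + w**5

# the 1x4 Jacobian (gradient) of f
grad = [sp.diff(f, v) for v in (x, y, z, w)]
assert grad == [5*x**4, 5*y**4, 5*z**4, 5*w**4]

# each component 5v^4 vanishes iff v = 0, so the only critical point is the
# origin, and the origin does not lie on the level set f = 1
assert sp.solve(5*x**4, x) == [0]
assert f.subs({x: 0, y: 0, z: 0, w: 0}) == 0
```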

As in 9.1, any 4-tuple $(x, y, z, w) \in \mathbb{R}^4$ at which $J(f)$ has rank zero is a critical point of $f$. This occurs only when $x = y = z = w = 0$; however, $(0,0,0,0) \notin S$ because $0^5 + 0^5 + 0^5 + 0^5 = 0 \neq 1$. Thus $S$ is a smooth manifold in $\mathbb{R}^4$, because $S = f^{-1}(1)$ is a nonempty regular level set of $f$. 

Problem 9.3: Is the solution set of the system of equations $x^3 + y^3 + z^3 = 1$ and $z = xy$ in $\mathbb{R}^3$ a smooth manifold? Prove your answer.

Proof:

Let $u(x,y,z) = x^3 + y^3 + z^3$ and $v(x,y,z) = z - xy$, and define $f : \mathbb{R}^3 \to \mathbb{R}^2$ by $(x,y,z) \mapsto (u(x,y,z), v(x,y,z))$. Now let $S \subseteq \mathbb{R}^3$ be the solution set of $f(x,y,z) = (1,0)$. Note that $(1,0,0) \in f^{-1}(1,0)$, so $f^{-1}(1,0) \neq \emptyset$. To determine the answer to the given question, we construct the Jacobian of this map:
\[
J(f) = \begin{bmatrix} \dfrac{\partial u}{\partial x} & \dfrac{\partial u}{\partial y} & \dfrac{\partial u}{\partial z} \\[1ex] \dfrac{\partial v}{\partial x} & \dfrac{\partial v}{\partial y} & \dfrac{\partial v}{\partial z} \end{bmatrix} = \begin{bmatrix} 3x^2 & 3y^2 & 3z^2 \\ -y & -x & 1 \end{bmatrix}.
\]
If the rank of $J(f)$ is less than 2 at some $(x,y,z)$, then $(x,y,z)$ is a critical point of $f$. The rank of $J(f)$ is less than 2 if and only if all of the $2 \times 2$ minors of $J(f)$ are zero. Setting the minors equal to 0 gives the system
\[ 3y^3 - 3x^3 = 0, \qquad 3x^2 + 3yz^2 = 0, \qquad 3y^2 + 3xz^2 = 0. \]
The first equation forces $x = y$, and the second then becomes $3x(x + z^2) = 0$, so the solution set is $Z = \{(0,0,z) \mid z \in \mathbb{R}\} \cup \{(x,y,z) \mid x = y = -z^2\}$. Let $u_1$ and $v_0$ be the solution sets of $u = 1$ and $v = 0$ respectively, and note that $f^{-1}(1,0) = u_1 \cap v_0$.

On the first piece of $Z$ we have $v = z$, so $v = 0$ forces $(0,0,0)$; on the second piece, $v = z - z^4 = 0$ forces $z = 0$ or $z = 1$. It can thus be seen that $Z \cap v_0 = \{(0,0,0), (-1,-1,1)\}$. However, $u(0,0,0) = 0$ and $u(-1,-1,1) = -1$, so $(0,0,0), (-1,-1,1) \notin u_1$. Then $u_1 \cap v_0 \cap Z = \emptyset$, so $f^{-1}(1,0) \cap Z = \emptyset$. As a result, $f^{-1}(1,0)$ does not contain any critical points of $f$, so $S$ is a regular level set and hence a smooth manifold. 

Problem 9.4: Suppose that a subset $S$ of $\mathbb{R}^2$ has the property that locally on $S$ one of the coordinates is a $C^\infty$ function of the other coordinate. Show that $S$ is a regular submanifold of $\mathbb{R}^2$. (Note that the unit circle defined by $x^2 + y^2 = 1$ has this property. At every point of the circle, there is a neighborhood in which $y$ is a $C^\infty$ function of $x$ or $x$ is a $C^\infty$ function of $y$.)

Proof: Let $p \in S$; then there exists some open set $U \subseteq \mathbb{R}^2$ containing $p$ such that one of the coordinates on $U \cap S$ is a $C^\infty$ function of the other coordinate. Without loss of generality assume that $y = f(x)$ for some $C^\infty$ function $f : A \to B$, where $A, B \subseteq \mathbb{R}$. Now let $V = A \times B \subseteq U$ and define $F : V \to \mathbb{R}^2$ by $F(x,y) = (x, y - f(x))$. Since $F$ is a diffeomorphism onto its image, we may use it as a coordinate map. In the chart $(V, x, y - f(x))$ we have that $V \cap S$ is defined by the vanishing of the coordinate $y - f(x)$. This proves that $S$ is a regular submanifold. 

Problem 9.6: A polynomial $F(x_0, \dots, x_n) \in \mathbb{R}[x_0, \dots, x_n]$ is homogeneous of degree $k$ if it is a linear combination of monomials $x_0^{i_0} \cdots x_n^{i_n}$ of degree $\sum_{j=0}^{n} i_j = k$. Let $F(x_0, \dots, x_n)$ be a homogeneous polynomial of degree $k$. Clearly, for any $t \in \mathbb{R}$ we have $F(tx_0, \dots, tx_n) = t^k F(x_0, \dots, x_n)$. Show that $\sum_{i=0}^{n} x_i \frac{\partial F}{\partial x_i} = kF$.

Proof: Define $y_i = tx_i$. Then we have from the given information that $F(y_0, \dots, y_n) = F(tx_0, \dots, tx_n) = t^k F(x_0, \dots, x_n)$. Differentiating both sides with respect to $t$ yields
\[ \sum_{i=0}^{n} \frac{\partial F}{\partial y_i} \frac{dy_i}{dt} = k t^{k-1} F(x_0, \dots, x_n). \]


But we know that $\frac{dy_i}{dt} = x_i$, so in fact
\[ \sum_{i=0}^{n} x_i \frac{\partial F}{\partial y_i} = k t^{k-1} F(x_0, \dots, x_n). \]
As this is true for all $t \in \mathbb{R}$, we may let $t = 1$ (so that $y_i = x_i$) to observe that
\[ \sum_{i=0}^{n} x_i \frac{\partial F}{\partial x_i} = k F(x_0, \dots, x_n), \]

as desired. 

Problem 9.7: On the projective space $\mathbb{RP}^n$ a homogeneous polynomial $F(x_0, \dots, x_n)$ of degree $k$ is not a function, since its value at a point $[a_0, \dots, a_n]$ is not unique. However, the zero set in $\mathbb{RP}^n$ of a homogeneous polynomial $F(x_0, \dots, x_n)$ is well defined, since $F(a_0, \dots, a_n) = 0$ if and only if $F(ta_0, \dots, ta_n) = t^k F(a_0, \dots, a_n) = 0$ for all $t \in \mathbb{R}^\times := \mathbb{R} - \{0\}$. The zero set of finitely many homogeneous polynomials in $\mathbb{RP}^n$ is called a real projective variety. A projective variety defined by a single homogeneous polynomial of degree $k$ is called a hypersurface of degree $k$. Show that the hypersurface $Z(F)$ defined by $F(x_0, x_1, x_2) = 0$ is smooth if $\partial F/\partial x_0$, $\partial F/\partial x_1$, and $\partial F/\partial x_2$ are not simultaneously zero on $Z(F)$.

Proof: We have from Problem 9.6 that $kF = x_0 \frac{\partial F}{\partial x_0} + x_1 \frac{\partial F}{\partial x_1} + x_2 \frac{\partial F}{\partial x_2}$; furthermore, we know that $F = 0$ on $Z(F)$. It then follows that on $Z(F)$ we have
\[ 0 = x_0 \frac{\partial F}{\partial x_0} + x_1 \frac{\partial F}{\partial x_1} + x_2 \frac{\partial F}{\partial x_2}. \]
Work at a point of $Z(F)$ with $x_0 \neq 0$, and by way of contradiction say that $\frac{\partial F}{\partial x_1} = \frac{\partial F}{\partial x_2} = 0$ there; then the identity above gives $x_0 \frac{\partial F}{\partial x_0} = 0$, so $\frac{\partial F}{\partial x_0} = 0$. However, this contradicts our given information that not all of $\frac{\partial F}{\partial x_0}, \frac{\partial F}{\partial x_1}, \frac{\partial F}{\partial x_2}$ are zero on $Z(F)$, so we may say without loss of generality that $\frac{\partial F}{\partial x_2} \neq 0$.

Recall that $U_0 = \{[x_0, x_1, x_2] \mid x_0 \neq 0\}$, and define $x = \frac{x_1}{x_0}$, $y = \frac{x_2}{x_0}$, and $f : \mathbb{R}^2 \to \mathbb{R}$ by $f(x,y) = F(1, x, y)$. Now define $\psi : U_0 \to \mathbb{R}^2$ by $\psi([x_0, x_1, x_2]) = (x, f(x,y))$. It then follows that
\[ (\psi \circ \phi_0^{-1})(x, y) = \psi([1, x, y]) = (x, f(x,y)). \]
We also know that $F(x_0, x_1, x_2) = x_0^k f\!\left(\frac{x_1}{x_0}, \frac{x_2}{x_0}\right)$, so $\frac{\partial F}{\partial x_2} = x_0^k \frac{\partial f}{\partial y} \cdot \frac{1}{x_0} = x_0^{k-1} \frac{\partial f}{\partial y}$. Then because $\frac{\partial F}{\partial x_2} \neq 0$ we have $\frac{\partial f}{\partial y} \neq 0$. It can now be seen that
\[ \det J(\psi \circ \phi_0^{-1}) = \det \begin{bmatrix} 1 & 0 \\[0.5ex] \dfrac{\partial f}{\partial x} & \dfrac{\partial f}{\partial y} \end{bmatrix} = \frac{\partial f}{\partial y} \neq 0, \]
so $\psi \circ \phi_0^{-1}$ is a local diffeomorphism and $\psi$ may be used as a coordinate map near the given point; in this chart $Z(F) \cap U_0$ is the zero set of the second coordinate $f(x,y)$.

We can construct similar arguments on $U_1$ and $U_2$; as $U_0$, $U_1$, and $U_2$ cover $\mathbb{RP}^2$, they cover $Z(F)$. As a result, $Z(F)$ is a regular submanifold, hence smooth. 

Problem 9.10: Let $p \in f^{-1}(S)$ and $(U, x^1, \dots, x^m)$ be an adapted chart centered at $f(p)$ for $M$ relative to $S$ such that $U \cap S = Z(x^{m-k+1}, \dots, x^m)$, the zero set of the functions $x^{m-k+1}, \dots, x^m$. Define $g : U \to \mathbb{R}^k$ to be the map $g = (x^{m-k+1}, \dots, x^m)$.

(a) Show that $f^{-1}(U) \cap f^{-1}(S) = (g \circ f)^{-1}(0)$.

Proof: Simply observe that $f^{-1}(U) \cap f^{-1}(S) = f^{-1}(U \cap S) = f^{-1}(g^{-1}(0)) = (g \circ f)^{-1}(0)$, as desired. 

(b) Show that $f^{-1}(U) \cap f^{-1}(S)$ is a regular level set of the function $g \circ f : f^{-1}(U) \to \mathbb{R}^k$.

Proof:

Let $p \in f^{-1}(U) \cap f^{-1}(S) = f^{-1}(U \cap S)$; then $f(p) \in U \cap S$. Note that in the coordinates $x^1, \dots, x^m$ the differential $g_*$ is given by the $k \times m$ matrix $[\,0 \mid I_k\,]$. Recall that $S \cap U$ is defined by the vanishing of the last $k$ coordinates; thus any curve $\gamma(t)$ in $S \cap U$ has the form $\gamma(t) = (\gamma^1(t), \dots, \gamma^{m-k}(t), 0, \dots, 0)$, and so $\gamma'(t) = ((\gamma^1)'(t), \dots, (\gamma^{m-k})'(t), 0, \dots, 0)$. As a result, we have that
\[ g_*(\gamma'(t)) = [\,0 \mid I_k\,] \begin{pmatrix} (\gamma^1)'(t) \\ \vdots \\ (\gamma^{m-k})'(t) \\ 0 \\ \vdots \\ 0 \end{pmatrix} = 0. \]

As $\gamma(t)$ was arbitrary, it then follows that $g_*(T_{f(p)}S) = 0$. Then because $g : U \to \mathbb{R}^k$ is a projection, we have that $g_*(T_{f(p)}M) = T_0(\mathbb{R}^k)$. By applying $g_*$ to the transversality equation we obtain
\[ g_* f_*(T_p N) + g_*(T_{f(p)} S) = g_*(T_{f(p)} M), \quad \text{i.e.,} \quad g_* f_*(T_p N) + 0 = T_0(\mathbb{R}^k), \quad \text{so} \quad g_* f_*(T_p N) = T_0(\mathbb{R}^k). \]
As a result, it follows that $g \circ f : f^{-1}(U) \to \mathbb{R}^k$ is a submersion at $p$. As $p \in f^{-1}(U) \cap f^{-1}(S) = (g \circ f)^{-1}(0)$ was arbitrary, it follows that this set is a regular level set of $g \circ f$. 

(c) Prove the transversality theorem.

Proof:

It follows from the regular level set theorem that $f^{-1}(U) \cap f^{-1}(S)$ is a regular submanifold of $f^{-1}(U) \subseteq N$. As a result, every $p \in f^{-1}(S)$ has an adapted chart relative to $f^{-1}(S)$ in $N$, so $f^{-1}(S)$ is a regular submanifold of $N$. 

Chapter 11

Problem 11.1: The unit sphere $S^n$ in $\mathbb{R}^{n+1}$ is defined by the equation $\sum_{i=1}^{n+1} (x^i)^2 = 1$. For $p = (p^1, \dots, p^{n+1}) \in S^n$, show that a necessary and sufficient condition for
\[ X_p = \sum_i a^i \frac{\partial}{\partial x^i}\bigg|_p \in T_p(\mathbb{R}^{n+1}) \]
to be tangent to $S^n$ at $p$ is $\sum_i a^i p^i = 0$.

Proof: Let $c : \mathbb{R} \to S^n$ be a curve defined by $c(t) = (x^1(t), \dots, x^{n+1}(t))$ with $c(0) = p$ and $c'(0) = X_p$. Define $H = \{(a^1, \dots, a^{n+1}) \in \mathbb{R}^{n+1} \mid \sum_{i=1}^{n+1} a^i p^i = 0\}$. Differentiating the equation $\sum_{i=1}^{n+1} (x^i(t))^2 = 1$ and evaluating at $t = 0$ yields
\[ \sum_{i=1}^{n+1} 2\, x^i(t)\, \dot{x}^i(t) \bigg|_{t=0} = 2 \sum_{i=1}^{n+1} p^i a^i = 0. \]
This then implies that $T_p(S^n) \subseteq H$; but because $\dim T_p(S^n) = n = \dim H$ and both spaces are linear, $T_p(S^n) = H$. As a result, we have that the condition is both necessary and sufficient, as desired. 
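As an illustrative check (a sympy computation, not part of the original solution), one can differentiate a sample curve on $S^2$ and confirm that $\sum_i a^i p^i = 0$:

```python
import sympy as sp

t = sp.symbols('t', real=True)
# a sample curve on the unit sphere S^2 with c(0) = (1, 0, 0)
c = sp.Matrix([sp.cos(t)*sp.cos(2*t), sp.cos(t)*sp.sin(2*t), sp.sin(t)])
assert sp.simplify(c.dot(c)) == 1     # the curve stays on the sphere

p = c.subs(t, 0)                      # base point p = c(0)
a = c.diff(t).subs(t, 0)              # tangent vector X_p = c'(0)
assert sp.simplify(p.dot(a)) == 0     # sum_i a^i p^i = 0, as Problem 11.1 predicts
```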

Problem 11.2: (a) Let $i : S^1 \hookrightarrow \mathbb{R}^2$ be the inclusion map of the unit circle. In this problem, we denote by $x, y$ the standard coordinates on $\mathbb{R}^2$ and by $\bar{x}, \bar{y}$ their restrictions to $S^1$. Thus, $\bar{x} = i^* x$ and $\bar{y} = i^* y$. On the upper semicircle $U = \{(a,b) \in S^1 \mid b > 0\}$, $\bar{x}$ is a local coordinate, so that $\partial/\partial \bar{x}$ is defined. Prove that for $p \in U$,
\[ i_*\left( \frac{\partial}{\partial \bar{x}}\bigg|_p \right) = \frac{\partial}{\partial x}\bigg|_p + \frac{\partial \bar{y}}{\partial \bar{x}} \frac{\partial}{\partial y}\bigg|_p. \]
Thus, although $i_* : T_p S^1 \to T_p \mathbb{R}^2$ is injective, $\partial/\partial \bar{x}\,|_p$ cannot be identified with $\partial/\partial x\,|_p$.

Proof: It follows from the definition of pullback that $\bar{x} = i^* x = x \circ i$ and $\bar{y} = i^* y = y \circ i$. We also know that
\[ i_*\left( \frac{\partial}{\partial \bar{x}}\bigg|_p \right) = \alpha \frac{\partial}{\partial x}\bigg|_p + \beta \frac{\partial}{\partial y}\bigg|_p \quad \text{for some } \alpha \text{ and } \beta. \]
We can then apply both sides of this equation to $x$ to obtain $\alpha$:
\[ \alpha = i_*\left( \frac{\partial}{\partial \bar{x}}\bigg|_p \right) x = \frac{\partial}{\partial \bar{x}}\bigg|_p (x \circ i) = \frac{\partial \bar{x}}{\partial \bar{x}}\bigg|_p = 1. \]
We can similarly apply both sides of the original equation to $y$ to obtain $\beta$:
\[ \beta = i_*\left( \frac{\partial}{\partial \bar{x}}\bigg|_p \right) y = \frac{\partial}{\partial \bar{x}}\bigg|_p (y \circ i) = \frac{\partial \bar{y}}{\partial \bar{x}}\bigg|_p. \]
As a result, we have that
\[ i_*\left( \frac{\partial}{\partial \bar{x}}\bigg|_p \right) = \frac{\partial}{\partial x}\bigg|_p + \frac{\partial \bar{y}}{\partial \bar{x}}\bigg|_p \frac{\partial}{\partial y}\bigg|_p, \]

as desired. 

Problem 11.3: Show that a smooth map $f$ from a compact manifold $N$ to $\mathbb{R}^m$ has a critical point. (Hint: Let $\pi : \mathbb{R}^m \to \mathbb{R}$ be the projection to the first factor. Consider the composite map $\pi \circ f : N \to \mathbb{R}$. A second proof uses Corollary 11.6 and the connectedness of $\mathbb{R}^m$.)

Proof: By way of contradiction assume that $f$ does not have a critical point; it then follows that $f$ is a submersion. We also know that the projection to the first factor $\pi : \mathbb{R}^m \to \mathbb{R}$ is a submersion, so the composite function $\pi \circ f : N \to \mathbb{R}$ must be a submersion. However, as $N$ is compact, $\pi \circ f$ attains a maximum, and at a maximum point $\pi \circ f$ has a critical point, which is a contradiction. Thus, $f$ must have a critical point. 

Problem 11.4: On the upper hemisphere of the unit sphere $S^2$, we have the coordinate map $\phi = (u, v)$, where $u(a,b,c) = a$ and $v(a,b,c) = b$. So the derivations $\partial/\partial u\,|_p$, $\partial/\partial v\,|_p$ are tangent vectors of $S^2$ at any point $p = (a,b,c)$ on the upper hemisphere. Let $i : S^2 \to \mathbb{R}^3$ be the inclusion and $x, y, z$ the standard coordinates on $\mathbb{R}^3$. The differential $i_* : T_p S^2 \to T_p \mathbb{R}^3$ maps $\partial/\partial u\,|_p$, $\partial/\partial v\,|_p$ into $T_p \mathbb{R}^3$. Thus,
\[ i_*\left(\frac{\partial}{\partial u}\bigg|_p\right) = \alpha^1 \frac{\partial}{\partial x}\bigg|_p + \beta^1 \frac{\partial}{\partial y}\bigg|_p + \gamma^1 \frac{\partial}{\partial z}\bigg|_p, \qquad i_*\left(\frac{\partial}{\partial v}\bigg|_p\right) = \alpha^2 \frac{\partial}{\partial x}\bigg|_p + \beta^2 \frac{\partial}{\partial y}\bigg|_p + \gamma^2 \frac{\partial}{\partial z}\bigg|_p, \]
for some constants $\alpha^i, \beta^i, \gamma^i$. Find $(\alpha^i, \beta^i, \gamma^i)$ for $i = 1, 2$.

Proof: Note that $x \circ i = u$, $y \circ i = v$, and $z \circ i = \sqrt{1 - u^2 - v^2}$ on the upper hemisphere. Then applying the equations given above to $x$, $y$, and $z$ respectively yields the following:
\[ \alpha^1 = i_*\left(\frac{\partial}{\partial u}\bigg|_p\right) x = \frac{\partial}{\partial u}\bigg|_p (x \circ i) = \frac{\partial u}{\partial u}\bigg|_p = 1, \qquad \beta^1 = \frac{\partial}{\partial u}\bigg|_p (y \circ i) = \frac{\partial v}{\partial u}\bigg|_p = 0, \]
\[ \gamma^1 = \frac{\partial}{\partial u}\bigg|_p (z \circ i) = \frac{\partial}{\partial u}\bigg|_p \sqrt{1 - u^2 - v^2} = \frac{-2u}{2\sqrt{1 - u^2 - v^2}}\bigg|_p = -\frac{a}{c}, \]
and similarly $\alpha^2 = 0$, $\beta^2 = 1$, and
\[ \gamma^2 = \frac{\partial}{\partial v}\bigg|_p \sqrt{1 - u^2 - v^2} = \frac{-2v}{2\sqrt{1 - u^2 - v^2}}\bigg|_p = -\frac{b}{c}. \]
As a result, we have determined the coefficients in the expressions
\[ i_*\left(\frac{\partial}{\partial u}\bigg|_p\right) = \frac{\partial}{\partial x}\bigg|_p - \frac{a}{c}\frac{\partial}{\partial z}\bigg|_p \quad \text{and} \quad i_*\left(\frac{\partial}{\partial v}\bigg|_p\right) = \frac{\partial}{\partial y}\bigg|_p - \frac{b}{c}\frac{\partial}{\partial z}\bigg|_p, \]
as desired. 
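The derivative computations for $\gamma^1$ and $\gamma^2$ can be confirmed with sympy (illustrative, not part of the original solution):

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)
z = sp.sqrt(1 - u**2 - v**2)   # z-coordinate on the upper hemisphere, i.e. z o i

# gamma^1 = d(z o i)/du = -u/z and gamma^2 = d(z o i)/dv = -v/z,
# which equal -a/c and -b/c at the point p = (a, b, c)
assert sp.simplify(sp.diff(z, u) + u/z) == 0
assert sp.simplify(sp.diff(z, v) + v/z) == 0
```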

Problem 11.5: Prove that if $N$ is a compact manifold, then a one-to-one immersion $f : N \to M$ is an embedding.

Proof: Note that $f$ is $C^\infty$, and in particular continuous, because $f$ is an immersion. As $N$ is compact and $M$ is Hausdorff, the continuity of $f$ implies that $f$ is a closed map, and hence that $f^{-1} : f(N) \to N$ is continuous. Recall that a function is always surjective onto its image. As a result, $f$ is a continuous bijection onto $f(N)$ with continuous inverse, so $f$ is a homeomorphism from $N$ to $f(N)$. Because $f$ is also an immersion, we have then shown that $f$ is an embedding. 

Chapter 12

Problem 12.2: Let $(U, \phi) = (U, x^1, \dots, x^n)$ and $(V, \psi) = (V, y^1, \dots, y^n)$ be overlapping coordinate charts on a manifold $M$. They induce coordinate charts $(TU, \tilde\phi)$ and $(TV, \tilde\psi)$ on the total space $TM$ of the tangent bundle (see equation (12.1)), with transition function
\[ \tilde\psi \circ \tilde\phi^{-1} : (x^1, \dots, x^n, a^1, \dots, a^n) \mapsto (y^1, \dots, y^n, b^1, \dots, b^n). \]

(a) Compute the Jacobian matrix of the transition function $\tilde\psi \circ \tilde\phi^{-1}$ at $\phi(p)$.

Proof: From the given information we know that the Jacobian of $\tilde\psi \circ \tilde\phi^{-1}$ is of the block form
\[ J(\tilde\psi \circ \tilde\phi^{-1}) = \begin{bmatrix} \left[\dfrac{\partial y^j}{\partial x^i}\right] & \left[\dfrac{\partial y^j}{\partial a^i}\right] \\[2ex] \left[\dfrac{\partial b^j}{\partial x^i}\right] & \left[\dfrac{\partial b^j}{\partial a^i}\right] \end{bmatrix}. \]
We also know that
\[ \tilde\phi^{-1}(x^1, \dots, x^n, a^1, \dots, a^n) = (p, a_p), \quad \text{where } p = \phi^{-1}(x^1, \dots, x^n) \text{ and } a_p = \sum_{i=1}^{n} a^i \frac{\partial}{\partial x^i}\bigg|_p. \]
Furthermore, we have that
\[ \frac{\partial}{\partial x^i}\bigg|_p = \sum_{j=1}^{n} \frac{\partial y^j}{\partial x^i} \frac{\partial}{\partial y^j}\bigg|_p. \]
As a result, we now have that
\[ \tilde\psi(p, a_p) = \left( y^1(x^1, \dots, x^n), \dots, y^n(x^1, \dots, x^n), \sum_{i=1}^{n} \frac{\partial y^1}{\partial x^i} a^i, \dots, \sum_{i=1}^{n} \frac{\partial y^n}{\partial x^i} a^i \right), \]
so $b^j = \sum_{i=1}^{n} \frac{\partial y^j}{\partial x^i} a^i$. The $y^j$ do not depend on the $a^i$, so the upper-right block vanishes, and
\[ \frac{\partial b^j}{\partial a^i} = \frac{\partial y^j}{\partial x^i}, \qquad \frac{\partial b^j}{\partial x^i} = \sum_{l=1}^{n} \frac{\partial^2 y^j}{\partial x^i \partial x^l} a^l. \]
It then follows from our above statements that
\[ J(\tilde\psi \circ \tilde\phi^{-1}) = \begin{bmatrix} \left[\dfrac{\partial y^j}{\partial x^i}\right] & 0 \\[2ex] \left[\displaystyle\sum_{l=1}^{n} \dfrac{\partial^2 y^j}{\partial x^i \partial x^l} a^l\right] & \left[\dfrac{\partial y^j}{\partial x^i}\right] \end{bmatrix}. \]
So we have computed the Jacobian of $\tilde\psi \circ \tilde\phi^{-1}$, as desired. 

(b) Show that the Jacobian determinant of the transition function $\tilde\psi \circ \tilde\phi^{-1}$ at $\phi(p)$ is $(\det[\partial y^i/\partial x^j])^2$.

Proof: Define the $n \times n$ matrices $A$ and $B$ by
\[ A = \left[\frac{\partial y^j}{\partial x^i}\right] \quad \text{and} \quad B = \left[\sum_{l=1}^{n} \frac{\partial^2 y^j}{\partial x^i \partial x^l} a^l\right]. \]
It can then be seen that the Jacobian computed in (a) is the block lower triangular matrix
\[ J(\tilde\psi \circ \tilde\phi^{-1}) = \begin{bmatrix} A & 0 \\ B & A \end{bmatrix}. \]
It then follows from the properties of block lower triangular matrices and the definition of $A$ that
\[ \det J(\tilde\psi \circ \tilde\phi^{-1}) = \begin{vmatrix} A & 0 \\ B & A \end{vmatrix} = \det(A)\det(A) = (\det A)^2 = \left(\det\left[\frac{\partial y^i}{\partial x^j}\right]\right)^2, \]
as desired. 
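The determinant identity can be verified symbolically for a sample change of coordinates on $\mathbb{R}^2$ (an illustrative sympy computation; the map $y(x)$ below is an arbitrary example, not from the text):

```python
import sympy as sp

x1, x2, a1, a2 = sp.symbols('x1 x2 a1 a2')

y = sp.Matrix([x1 + x2**2, x1*x2 + x2])   # sample change of coordinates y = y(x)
A = y.jacobian(sp.Matrix([x1, x2]))       # A = [dy^j/dx^i]

# induced transition map on the tangent bundle: (x, a) |-> (y(x), A(x) a)
b = A * sp.Matrix([a1, a2])
trans = sp.Matrix([y[0], y[1], b[0], b[1]])
J = trans.jacobian(sp.Matrix([x1, x2, a1, a2]))

# the 4x4 Jacobian determinant equals (det A)^2, as shown in part (b)
assert sp.simplify(J.det() - A.det()**2) == 0
```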


Problem 12.4: Let $\pi : E \to M$ be a $C^\infty$ vector bundle and $s_1, \dots, s_r$ a $C^\infty$ frame for $E$ over an open set $U$ in $M$. Then every $e \in \pi^{-1}(U)$ can be written uniquely as a linear combination
\[ e = \sum_{j=1}^{r} c^j(e)\, s_j(p), \qquad p = \pi(e) \in U. \]
Prove that $c^j : \pi^{-1}U \to \mathbb{R}$ is $C^\infty$ for $j = 1, \dots, r$. (Hint: First show that the coefficients of $e$ relative to the frame $t_1, \dots, t_r$ of a trivialization are $C^\infty$.)

Proof: Fix $p \in U$ and choose a trivializing open set $V \subseteq U$ for $E$ containing $p$, with trivialization $\phi : \pi^{-1}(V) \to V \times \mathbb{R}^r$, and let $t_1, \dots, t_r$ be the $C^\infty$ frame of the trivialization $\phi$. We may now write $e$ and $s_j$ in terms of the frame $t_1, \dots, t_r$ as $e = \sum_{i=1}^{r} b^i t_i$ and $s_j = \sum_{i=1}^{r} a^i_j t_i$; note that all of the $b^i$ and $a^i_j$ are $C^\infty$ functions by Lemma 12.11.

Next we express $e$ in terms of the $t_i$'s:
\[ \sum_{i=1}^{r} b^i t_i = e = \sum_{j=1}^{r} c^j s_j = \sum_{1 \le i, j \le r} c^j a^i_j t_i. \]
Comparing the coefficients of $t_i$ gives $b^i = \sum_{j=1}^{r} a^i_j c^j$; represented in matrix notation,
\[ b = \begin{bmatrix} b^1 \\ \vdots \\ b^r \end{bmatrix} = A \begin{bmatrix} c^1 \\ \vdots \\ c^r \end{bmatrix} = Ac, \qquad A = [a^i_j]. \]
At each point of $V$, being the transition matrix between two bases, the matrix $A$ is invertible. By Cramer's rule, $A^{-1}$ is a matrix of $C^\infty$ functions on $V$. Hence, $c = A^{-1}b$ is a column vector of $C^\infty$ functions on $V$. Thus, we have that $c^1, \dots, c^r$ are $C^\infty$ functions at $p \in U$. Since $p$ is an arbitrary point of $U$, the coefficients $c^j$ are $C^\infty$ functions on $U$. 
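The key linear-algebra step, $c = A^{-1}b$, can be illustrated numerically (the matrix $A$ below is an arbitrary invertible example, not from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # sample invertible transition matrix [a^i_j]
c = np.array([3.0, -1.0])                # coefficients c^j of e in the frame s_1, s_2
b = A @ c                                # coefficients b^i of the same e in the frame t_1, t_2

# the c^j are recovered from the b^i via c = A^{-1} b
assert np.allclose(np.linalg.solve(A, b), c)
```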


Chapter 14

Problem 14.1: Show that two $C^\infty$ vector fields $X$ and $Y$ on a manifold $M$ are equal if and only if for every $C^\infty$ function $f$ on $M$, we have $Xf = Yf$.

Proof: ($\Rightarrow$) Assume that $X$ and $Y$ are equal vector fields on $M$. Then it is immediate that $Xf = Yf$.

($\Leftarrow$) Assume that $Xf = Yf$ for every $C^\infty$ function $f$ on $M$. Let $p \in M$ be arbitrary; we shall show that $X_p = Y_p$. To do this, it suffices to show that $X_p[h] = Y_p[h]$ for every germ $[h]$ at $p$. Let $h : U \to \mathbb{R}$ be a $C^\infty$ function representing the germ $[h]$. We may extend $h$ to a $C^\infty$ function $\tilde{h}$ on $M$ by multiplying by a $C^\infty$ bump function supported in $U$ that is identically 1 in a neighborhood of $p$. It then follows from our assumption that $X\tilde{h} = Y\tilde{h}$, so $X_p \tilde{h} = (X\tilde{h})(p) = (Y\tilde{h})(p) = Y_p \tilde{h}$. As $\tilde{h} = h$ in a neighborhood of $p$, we have $X_p h = X_p \tilde{h}$ and $Y_p h = Y_p \tilde{h}$. It then follows that $X_p h = Y_p h$, so $X_p = Y_p$. As $p$ was arbitrary, $X = Y$. 

Problem 14.2: Let $x^1, y^1, \dots, x^n, y^n$ be the standard coordinates on $\mathbb{R}^{2n}$. The unit sphere $S^{2n-1}$ in $\mathbb{R}^{2n}$ is defined by the equation $\sum_{i=1}^{n} (x^i)^2 + (y^i)^2 = 1$. Show that
\[ X = \sum_{i=1}^{n} \left( -y^i \frac{\partial}{\partial x^i} + x^i \frac{\partial}{\partial y^i} \right) \]
is a nowhere-vanishing smooth vector field on $S^{2n-1}$. Since all spheres of the same dimension are diffeomorphic, this proves that on every odd-dimensional sphere there is a nowhere-vanishing smooth vector field. It is a classical theorem of differential and algebraic topology that on an even-dimensional sphere every continuous vector field must vanish somewhere (see [28, Section 5, p. 31] or [16, Theorem 16.5, p. 70]). (Hint: Use Problem 11.1 to show that $X$ is tangent to $S^{2n-1}$.)

Proof: Let $p = (p^1, p^2, \dots, p^{2n})$. We know that
\[ X_p = \sum_{i=1}^{n} \left( -y^i(p) \frac{\partial}{\partial x^i}\bigg|_p + x^i(p) \frac{\partial}{\partial y^i}\bigg|_p \right). \]
If we let $a^i$ denote the $i$th component of $X_p$ in the standard coordinates, we can then observe that
\[ \sum_{i=1}^{2n} a^i p^i = -y^1(p)p^1 + x^1(p)p^2 + \cdots - y^n(p)p^{2n-1} + x^n(p)p^{2n} = -p^2 p^1 + p^1 p^2 + \cdots - p^{2n} p^{2n-1} + p^{2n-1} p^{2n} = 0, \]

so it follows from Problem 11.1 that $X_p$ is tangent to $S^{2n-1}$ for all $p \in S^{2n-1}$. As $p$ was arbitrary, $X$ is a vector field on $S^{2n-1}$.

Let $p \in S^{2n-1}$ and by way of contradiction assume that $X_p = 0$. As $\frac{\partial}{\partial x^1}, \frac{\partial}{\partial y^1}, \dots, \frac{\partial}{\partial x^n}, \frac{\partial}{\partial y^n}$ form a basis, this implies that $x^1(p) = y^1(p) = \cdots = x^n(p) = y^n(p) = 0$. But then $\sum_{i=1}^{n} (x^i(p))^2 + (y^i(p))^2 = 0$, which contradicts the fact that $\sum_{i=1}^{n} (x^i)^2 + (y^i)^2 = 1$ on $S^{2n-1}$, so we have that $X_p \neq 0$.

Define $t^{2i-1} = x^i$ and $t^{2i} = y^i$ for $1 \le i \le n$; we may then rewrite $X$ as
\[ X = \sum_{i=1}^{n} \left( -t^{2i} \frac{\partial}{\partial t^{2i-1}} + t^{2i-1} \frac{\partial}{\partial t^{2i}} \right). \]
We use the atlas on $S^{2n-1}$ whose charts are given by the stereographic projections. To that effect, define $z_i = \frac{t^i}{1 - t^{2n}}$ for $1 \le i \le 2n-1$; these are the components of the stereographic projection from the north pole. For each $i$ it then follows that
\[ \frac{\partial}{\partial t^i} = \sum_{k=1}^{2n-1} \frac{\partial z_k}{\partial t^i} \frac{\partial}{\partial z_k}. \]
Thus, in this chart the coefficient of each $\partial/\partial z_k$ in $X$ is a sum of products of the $t^i$'s and the $\frac{\partial z_k}{\partial t^i}$'s. As the $t^i$ and $\frac{\partial z_k}{\partial t^i}$ are smooth on the chart, each coefficient is smooth, so $X$ is smooth there by Proposition 14.2; the same argument applies to the chart of the stereographic projection from the south pole. Thus, we have that $X$ is a nowhere-vanishing smooth vector field on $S^{2n-1}$. 

Problem 14.3: Let $M$ be $\mathbb{R} \setminus \{0\}$ and let $X$ be the vector field $d/dx$ on $M$. Find the maximal integral curve of $X$ starting at $x = 1$.

Proof: We know that $c'(t) = \dot{x}(t) = 1$. Solving this differential equation with the initial condition $c(0) = 1$ yields $x = t + 1$, so it follows that the maximal integral curve $c : (-1, \infty) \to \mathbb{R}$ is defined by $c(t) = t + 1$. (The domain is bounded below by $-1$ because $c(-1) = 0$ and $0 \notin M$. As we have by definition that the domain of an integral curve must be an interval, hence connected, the domain is $(-1, \infty)$.) 
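The field $X$ of Problem 14.2 can also be spot-checked numerically on $S^3 \subseteq \mathbb{R}^4$ (an illustrative numpy computation, not part of the original solution; the random point is an assumption of the example):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.normal(size=4)
p /= np.linalg.norm(p)                      # a point on S^3, coordinates (x1, y1, x2, y2)

# X_p = -y1 d/dx1 + x1 d/dy1 - y2 d/dx2 + x2 d/dy2
Xp = np.array([-p[1], p[0], -p[3], p[2]])

assert abs(Xp @ p) < 1e-12                  # tangent to S^3, by Problem 11.1
assert np.isclose(np.linalg.norm(Xp), 1.0)  # |X_p| = |p| = 1, so X_p never vanishes
```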


Problem 14.4: Find the integral curves of the vector field
\[ X_{(x,y)} = x\frac{\partial}{\partial x} - y\frac{\partial}{\partial y} = \begin{bmatrix} x \\ -y \end{bmatrix} \quad \text{on } \mathbb{R}^2. \]

Proof: Let $c(t) = (x(t), y(t))$; then $c'(t) = (\dot{x}(t), \dot{y}(t))$. It follows from the given information that $\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix} = \begin{bmatrix} x \\ -y \end{bmatrix}$, so $\dot{x} = x$ and $\dot{y} = -y$. Solving these differential equations yields $x(t) = c_1 e^t$ and $y(t) = c_2 e^{-t}$ for some constants $c_1, c_2$. As a result, the integral curves are the curves $c : \mathbb{R} \to \mathbb{R}^2$ defined by $c(t) = (c_1 e^t, c_2 e^{-t})$. 

Problem 14.5: Find the maximal integral curve $c(t)$ starting at the point $(a, b) \in \mathbb{R}^2$ of the vector field $X_{(x,y)} = \partial/\partial x + x\,\partial/\partial y$ on $\mathbb{R}^2$.

Proof: Let $c(t) = (x(t), y(t))$; then $c'(t) = (\dot{x}(t), \dot{y}(t))$. It follows from the given information that $\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix} = \begin{bmatrix} 1 \\ x \end{bmatrix}$, so $\dot{x} = 1$ and $\dot{y} = x$. Solving these differential equations yields $x(t) = t + c$ and $y(t) = \frac{t^2}{2} + ct + d$. We can then use the initial condition $c(0) = (a, b)$ to determine that $x(t) = t + a$ and $y(t) = \frac{t^2}{2} + at + b$. As a result, the maximal integral curve is $c : \mathbb{R} \to \mathbb{R}^2$ defined by $c(t) = \left(t + a, \frac{t^2}{2} + at + b\right)$. 
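The curve of Problem 14.5 can be re-derived with sympy's ODE solver (an illustrative check, not part of the original solution):

```python
import sympy as sp

t, a, b = sp.symbols('t a b')
x, y = sp.Function('x'), sp.Function('y')

# solve x' = 1 with x(0) = a, then y' = x(t) with y(0) = b
xsol = sp.dsolve(sp.Eq(x(t).diff(t), 1), ics={x(0): a}).rhs
ysol = sp.dsolve(sp.Eq(y(t).diff(t), xsol), ics={y(0): b}).rhs

assert sp.expand(xsol - (t + a)) == 0
assert sp.expand(ysol - (t**2/2 + a*t + b)) == 0
```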

Problem 14.6: (a) Suppose the smooth vector field $X$ on a manifold $M$ vanishes at a point $p \in M$. Show that the integral curve of $X$ with initial point $p$ is the constant curve $c(t) \equiv p$.

Proof: Let $c : \mathbb{R} \to M$ be defined by $c(t) = p$. We immediately see that $c(0) = p$ and $c'(t) = 0 = X_p = X_{c(t)}$, so $c$ is an integral curve of $X$ starting at $p$. Because $X$ is given to be a smooth vector field, the solution of this initial value problem is unique. As a result, the integral curve of $X$ with initial point $p$ must be the constant curve $c(t) \equiv p$. 

(b) Show that if $X$ is the zero vector field on a manifold $M$, and $c_t(p)$ is the maximal integral curve of $X$ starting at $p$, then the one-parameter group of diffeomorphisms $c : \mathbb{R} \to \operatorname{Diff}(M)$ is the constant map $c(t) \equiv 1_M$.

Proof: As $X = 0$ is smooth, we know from (a) that every integral curve on $M$ is constant. This implies that for each fixed $t$ the map $c_t : M \to M$ is given by $c_t(p) = p$. As a result, $c : \mathbb{R} \to \operatorname{Diff}(M)$ is the constant map $c(t) \equiv 1_M$. 

Problem 14.7: Let $X$ be the vector field $x\, d/dx$ on $\mathbb{R}$. For each $p$ in $\mathbb{R}$, find the maximal integral curve $c(t)$ of $X$ starting at $p$.

Proof: The integral curve is of the form $c : \mathbb{R} \to \mathbb{R}$ with $c(t) = x(t)$ and $c'(t) = \dot{x}(t)$. It follows from the given information that $\dot{x} = x$; solving this equation gives $x(t) = ae^t$ for some constant $a$. Using the initial condition $c(0) = p$ then gives $x(t) = pe^t$, so the maximal integral curve $c : \mathbb{R} \to \mathbb{R}$ is defined by $c(t) = pe^t$. 

Problem 14.8: Let $X$ be the vector field $x^2\, d/dx$ on the real line $\mathbb{R}$. For each $p > 0$ in $\mathbb{R}$, find the maximal integral curve of $X$ with initial point $p$.

Proof: The integral curve is of the form $c(t) = x(t)$ with $c'(t) = \dot{x}(t)$. It follows from the given information that $\dot{x} = x^2$; solving this separable equation gives $x(t) = \frac{-1}{t + a}$ for some constant $a$. Using the initial condition $c(0) = p$ then gives $a = -\frac{1}{p}$, so
\[ x(t) = \frac{-1}{t - 1/p} = \frac{p}{1 - pt}; \]
note that this function is smooth everywhere except at $t = \frac{1}{p}$. As we know that $p > 0$, it follows that $0 < \frac{1}{p}$, so the largest connected domain for $x(t)$ containing 0 is the interval $\left(-\infty, \frac{1}{p}\right)$. As a result, the maximal integral curve is given by $c : \left(-\infty, \frac{1}{p}\right) \to \mathbb{R}$ defined by $c(t) = \frac{p}{1 - pt}$. 
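Both of these solutions can be verified directly (an illustrative sympy check, not part of the original solutions):

```python
import sympy as sp

t = sp.symbols('t')
p = sp.symbols('p', positive=True)

c7 = p*sp.exp(t)          # Problem 14.7: solves x' = x with x(0) = p
assert sp.simplify(c7.diff(t) - c7) == 0 and c7.subs(t, 0) == p

c8 = p/(1 - p*t)          # Problem 14.8: solves x' = x**2 with x(0) = p on (-oo, 1/p)
assert sp.simplify(c8.diff(t) - c8**2) == 0 and c8.subs(t, 0) == p
```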

Problem 14.9: Suppose $c : (a, b) \to M$ is an integral curve of the smooth vector field $X$ on $M$. Show that for any real number $s$, the map
\[ c_s : (a + s, b + s) \to M, \qquad c_s(t) = c(t - s), \]
is also an integral curve of $X$.

Proof: We can immediately see that $c_s'(t) = c'(t - s) = X_{c(t-s)} = X_{c_s(t)}$, so $c_s$ is also an integral curve of $X$. 

Problem 14.10: If $f$ and $g$ are $C^\infty$ functions and $X$ and $Y$ are $C^\infty$ vector fields on a manifold $M$, show that $[fX, gY] = fg[X,Y] + f(Xg)Y - g(Yf)X$.

Proof: Let $h$ be an arbitrary $C^\infty$ function and observe the following:
\[
\begin{aligned}
[fX, gY]h &= \big((fX)(gY) - (gY)(fX)\big)h = (fX)(gY)h - (gY)(fX)h \\
&= f\,X\big(g(Yh)\big) - g\,Y\big(f(Xh)\big) \\
&= f\big((Xg)(Yh) + g(XYh)\big) - g\big((Yf)(Xh) + f(YXh)\big) \\
&= fg\big((XYh) - (YXh)\big) + f(Xg)(Yh) - g(Yf)(Xh) \\
&= fg[X,Y]h + f(Xg)(Yh) - g(Yf)(Xh),
\end{aligned}
\]
where the third equality uses the product rule. As $h$ was arbitrary, it then follows that $[fX, gY] = fg[X,Y] + f(Xg)Y - g(Yf)X$. 

Problem 14.11: Compute the Lie bracket $\left[-y\dfrac{\partial}{\partial x} + x\dfrac{\partial}{\partial y},\ \dfrac{\partial}{\partial x}\right]$ on $\mathbb{R}^2$.

Proof: Applying the bracket to an arbitrary $C^\infty$ function $h$, we can observe that
\[
\begin{aligned}
\left[-y\frac{\partial}{\partial x} + x\frac{\partial}{\partial y},\ \frac{\partial}{\partial x}\right]h
&= \left(-y\frac{\partial}{\partial x} + x\frac{\partial}{\partial y}\right)\frac{\partial h}{\partial x} - \frac{\partial}{\partial x}\left(-y\frac{\partial h}{\partial x} + x\frac{\partial h}{\partial y}\right) \\
&= -y\frac{\partial^2 h}{\partial x^2} + x\frac{\partial^2 h}{\partial y\,\partial x} + y\frac{\partial^2 h}{\partial x^2} - \frac{\partial h}{\partial y} - x\frac{\partial^2 h}{\partial x\,\partial y} \\
&= -\frac{\partial h}{\partial y},
\end{aligned}
\]
so the bracket is $-\dfrac{\partial}{\partial y}$. 

Problem 14.12: Consider two $C^\infty$ vector fields $X, Y$ on $\mathbb{R}^n$:
\[ X = \sum a^i \frac{\partial}{\partial x^i}, \qquad Y = \sum b^j \frac{\partial}{\partial x^j}, \]
where $a^i, b^j$ are $C^\infty$ functions on $\mathbb{R}^n$. Since $[X, Y]$ is also a $C^\infty$ vector field on $\mathbb{R}^n$,
\[ [X, Y] = \sum c^k \frac{\partial}{\partial x^k} \]
for some $C^\infty$ functions $c^k$. Find the formula for $c^k$ in terms of $a^i$ and $b^j$.

Proof: We may observe the following:
\[
\begin{aligned}
\sum_k c^k \frac{\partial}{\partial x^k} = [X, Y] &= XY - YX = \sum_i a^i \frac{\partial}{\partial x^i} \sum_j b^j \frac{\partial}{\partial x^j} - \sum_j b^j \frac{\partial}{\partial x^j} \sum_i a^i \frac{\partial}{\partial x^i} \\
&= \sum_{i,j} \left( a^i \frac{\partial b^j}{\partial x^i}\frac{\partial}{\partial x^j} + a^i b^j \frac{\partial^2}{\partial x^i \partial x^j} \right) - \sum_{i,j} \left( b^i \frac{\partial a^j}{\partial x^i}\frac{\partial}{\partial x^j} + b^i a^j \frac{\partial^2}{\partial x^i \partial x^j} \right) \\
&= \sum_{i,j} \left( a^i \frac{\partial b^j}{\partial x^i} - b^i \frac{\partial a^j}{\partial x^i} \right) \frac{\partial}{\partial x^j}.
\end{aligned}
\]
Fixing $k$, we then have that
\[ c^k = \sum_i \left( a^i \frac{\partial b^k}{\partial x^i} - b^i \frac{\partial a^k}{\partial x^i} \right). \]

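This coefficient formula can be sanity-checked symbolically on the fields of Problem 14.11 (an illustrative sympy computation, not part of the original solution):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def c(k, A, B):
    # c^k = sum_i (a^i db^k/dx^i - b^i da^k/dx^i)
    return sum(A[i]*sp.diff(B[k], coords[i]) - B[i]*sp.diff(A[k], coords[i])
               for i in range(len(coords)))

X = [-y, x]   # -y d/dx + x d/dy  (the field of Problem 14.11)
Y = [1, 0]    # d/dx
assert [c(k, X, Y) for k in range(2)] == [0, -1]   # [X, Y] = -d/dy
```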



Problem 14.13: Let $F : N \to M$ be a $C^\infty$ diffeomorphism of manifolds. Prove that if $g$ is a $C^\infty$ function and $X$ a $C^\infty$ vector field on $N$, then $F_*(gX) = (g \circ F^{-1})F_*X$.

Proof: As $F$ is a diffeomorphism, $X$ and $F_*X$ are $F$-related; writing $Y = F_*X$, this means $(Yf) \circ F = X(f \circ F)$ for every $C^\infty$ function $f$ on $M$. Fix $p \in N$ and let $f$ be an arbitrary $C^\infty$ function on $M$. We then have that
\[
\begin{aligned}
\big(((g \circ F^{-1})Y)f\big)(F(p)) &= (g \circ F^{-1})(F(p))\,(Yf)(F(p)) \\
&= g(p)\,X(f \circ F)(p) \\
&= \big((gX)(f \circ F)\big)(p) \\
&= \big((F_*(gX))f\big)(F(p)),
\end{aligned}
\]
where the last equality holds because $gX$ and $F_*(gX)$ are $F$-related. As $p$ and $f$ were arbitrary, $F_*(gX) = (g \circ F^{-1})F_*X$, as desired. 

Problem 14.14: Let $F : N \to M$ be a $C^\infty$ diffeomorphism of manifolds. Prove that if $X$ and $Y$ are $C^\infty$ vector fields on $N$, then $F_*[X, Y] = [F_*X, F_*Y]$.

Proof: It follows from Exercise 14.15 that $X$ and $F_*X$ are $F$-related and $Y$ and $F_*Y$ are $F$-related. Proposition 14.17 then implies that $[X, Y]$ is $F$-related to $[F_*X, F_*Y]$, which tells us that $F_*[X, Y] = [F_*X, F_*Y]$, as desired. 

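The naturality $F_*[X, Y] = [F_*X, F_*Y]$ can be verified on a concrete example (an illustrative sympy computation; the diffeomorphism $F$ and the fields $X$, $Y$ below are sample choices, not from the text):

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

def bracket(A, B, coords):
    # coefficients of [A, B]: c^k = sum_i (a^i dB^k/dx^i - b^i dA^k/dx^i)
    n = len(coords)
    return [sp.simplify(sum(A[i]*sp.diff(B[k], coords[i]) - B[i]*sp.diff(A[k], coords[i])
                            for i in range(n))) for k in range(n)]

# sample diffeomorphism F(x, y) = (x, y + x^2) of R^2, with inverse (u, v - u^2)
F = [x, y + x**2]
Finv = {x: u, y: v - u**2}
J = sp.Matrix([[sp.diff(Fj, c) for c in (x, y)] for Fj in F])

def pushforward(A):
    # (F_* A) at q has coefficient vector J_F(F^{-1}(q)) A(F^{-1}(q)), in coordinates (u, v)
    w = (J * sp.Matrix(A)).subs(Finv)
    return [sp.simplify(w[0]), sp.simplify(w[1])]

X = [1, 0]   # X = d/dx
Y = [0, x]   # Y = x d/dy
lhs = pushforward(bracket(X, Y, (x, y)))
rhs = bracket(pushforward(X), pushforward(Y), (u, v))
assert lhs == rhs
```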

Chapter 15

Problem 15.1: For $X \in \mathbb{R}^{n \times n}$, define the partial sum $s_m = \sum_{k=0}^{m} X^k/k!$.

(a) Show that for $\ell \ge m$,
\[ \| s_\ell - s_m \| \le \sum_{k=m+1}^{\ell} \|X\|^k / k!. \]

Proof: Recalling the properties of the norm (the triangle inequality and submultiplicativity, so that $\|X^k\| \le \|X\|^k$) and the definition of $s_m$, we may observe that
\[ \| s_\ell - s_m \| = \left\| \sum_{k=0}^{\ell} \frac{X^k}{k!} - \sum_{k=0}^{m} \frac{X^k}{k!} \right\| = \left\| \sum_{k=m+1}^{\ell} \frac{X^k}{k!} \right\| \le \sum_{k=m+1}^{\ell} \frac{\|X^k\|}{k!} \le \sum_{k=m+1}^{\ell} \frac{\|X\|^k}{k!}, \]
so $\| s_\ell - s_m \| \le \sum_{k=m+1}^{\ell} \|X\|^k/k!$, as desired. 
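The bound in part (a) can be checked numerically with the Frobenius norm, which is submultiplicative (an illustrative numpy computation; the matrix $X$ is a sample choice, not from the text):

```python
import math

import numpy as np

X = np.array([[0.0, 1.0], [-1.0, 0.5]])   # sample matrix

def partial_sum(m):
    # s_m = sum_{k=0}^m X^k / k!
    s, term = np.zeros_like(X), np.eye(2)
    for k in range(m + 1):
        s = s + term
        term = term @ X / (k + 1)          # term becomes X^(k+1)/(k+1)!
    return s

m, l = 3, 8
lhs = np.linalg.norm(partial_sum(l) - partial_sum(m))                         # ||s_l - s_m||
rhs = sum(np.linalg.norm(X)**k / math.factorial(k) for k in range(m + 1, l + 1))
assert lhs <= rhs
```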

(b) Conclude that $s_m$ is a Cauchy sequence in $\mathbb{R}^{n \times n}$ and therefore converges to a matrix, which we denote by $e^X$. This gives another way of showing that $\sum_{k=0}^{\infty} X^k/k!$ is convergent, without using the comparison test or the theorem that absolute convergence implies convergence in a complete normed vector space.

Proof: Let $\varepsilon > 0$. The series of nonnegative real numbers $\sum_{k=0}^{\infty} \|X\|^k/k!$ converges (to $e^{\|X\|}$), since factorial growth dominates the geometric growth of $\|X\|^k$; hence its tails tend to zero, and we may choose $N$ sufficiently large that
\[ \sum_{k=N}^{\infty} \frac{\|X\|^k}{k!} < \varepsilon. \]
As a result, we then have from (a) that for $\ell \ge m > N$,
\[ \| s_\ell - s_m \| \le \sum_{k=m+1}^{\ell} \frac{\|X\|^k}{k!} \le \sum_{k=N}^{\infty} \frac{\|X\|^k}{k!} < \varepsilon. \]
Thus $s_m$ is a Cauchy sequence in the complete space $\mathbb{R}^{n \times n}$, so it converges to a matrix, which we denote by $e^X$.