Lecture Notes
Vector Analysis
MATH 332
Ivan Avramidi
New Mexico Institute of Mining and Technology
Socorro, NM 87801
May 19, 2004
Contents
1 Linear Algebra
1.1 Vectors in $\mathbb{R}^n$ and Matrix Algebra
1.1.1 Vectors
1.1.2 Matrices
1.1.3 Determinant
1.1.4 Exercises
1.2 Vector Spaces
1.2.1 Exercises
1.3 Inner Product and Norm
1.3.1 Exercises
1.4 Linear Operators
1.4.1 Exercises
2 Vector and Tensor Algebra
2.1 Metric Tensor
2.2 Dual Space and Covectors
2.2.1 Einstein Summation Convention
2.3 General Definition of a Tensor
2.3.1 Orientation, Pseudotensors and Volume
2.4 Operators and Tensors
2.5 Vector Algebra in $\mathbb{R}^3$
3 Geometry
3.1 Geometry of Euclidean Space
3.2 Basic Topology of $\mathbb{R}^n$
3.3 Curvilinear Coordinate Systems
3.3.1 Change of Coordinates
3.3.2 Examples
3.4 Vector Functions of a Single Variable
3.5 Geometry of Curves
3.6 Geometry of Surfaces
4 Vector Analysis
4.1 Vector Functions of Several Variables
4.2 Directional Derivative and the Gradient
4.3 Exterior Derivative
4.4 Divergence
4.5 Curl
4.6 Laplacian
4.7 Differential Vector Identities
4.8 Orthogonal Curvilinear Coordinate Systems in $\mathbb{R}^3$
5 Integration
5.1 Line Integrals
5.2 Surface Integrals
5.3 Volume Integrals
5.4 Fundamental Integral Theorems
5.4.1 Fundamental Theorem of Line Integrals
5.4.2 Green's Theorem
5.4.3 Stokes's Theorem
5.4.4 Gauss's Theorem
5.4.5 General Stokes's Theorem
6 Potential Theory
6.1 Simply Connected Domains
6.2 Conservative Vector Fields
6.2.1 Scalar Potential
6.3 Irrotational Vector Fields
6.4 Solenoidal Vector Fields
6.4.1 Vector Potential
6.5 Laplace Equation
6.5.1 Harmonic Functions
6.6 Poisson Equation
6.6.1 Dirac Delta Function
6.6.2 Point Sources
6.6.3 Dirichlet Problem
6.6.4 Neumann Problem
6.6.5 Green's Functions
6.7 Fundamental Theorem of Vector Analysis
7 Basic Concepts of Differential Geometry
7.1 Manifolds
7.2 Differential Forms
7.2.1 Exterior Product
7.2.2 Exterior Derivative
7.3 Integration of Differential Forms
7.4 General Stokes's Theorem
7.5 Tensors in General Curvilinear Coordinate Systems
7.5.1 Covariant Derivative
8 Applications
8.1 Mechanics
8.1.1 Inertia Tensor
8.1.2 Angular Momentum Tensor
8.2 Elasticity
8.2.1 Strain Tensor
8.2.2 Stress Tensor
8.3 Fluid Dynamics
8.3.1 Continuity Equation
8.3.2 Tensor of Momentum Flux Density
8.3.3 Euler's Equations
8.3.4 Rate of Deformation Tensor
8.3.5 Navier-Stokes Equations
8.4 Heat and Diffusion Equations
8.5 Electrodynamics
8.5.1 Tensor of Electromagnetic Field
8.5.2 Maxwell Equations
8.5.3 Scalar and Vector Potentials
8.5.4 Wave Equations
8.5.5 d'Alembert Operator
8.5.6 Energy-Momentum Tensor
8.6 Basic Concepts of Special and General Relativity
Bibliography
Notation
Index
Chapter 1
Linear Algebra
1.1 Vectors in Rn and Matrix Algebra
1.1.1 Vectors
• $\mathbb{R}^n$ is the set of all ordered $n$-tuples of real numbers, which can be assembled as columns or as rows.
• Let $v_1, \dots, v_n$ be $n$ real numbers. Then the column-vector (or just vector) is an ordered $n$-tuple of the form
$$v = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix},$$
and the row-vector (also called a covector) is an ordered $n$-tuple of the form
$$v^T = (v_1, v_2, \dots, v_n).$$
The real numbers $v_1, \dots, v_n$ are called the components of the vector.
• The operation that converts column-vectors into row-vectors and vice versa, preserving the order of the components, is called the transposition and is denoted by $T$. That is,
$$\begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}^T = (v_1, v_2, \dots, v_n) \quad \text{and} \quad (v_1, v_2, \dots, v_n)^T = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}.$$
Of course, for any vector $v$, $(v^T)^T = v$.
• The addition of vectors is defined by
$$u + v = \begin{pmatrix} u_1 + v_1 \\ u_2 + v_2 \\ \vdots \\ u_n + v_n \end{pmatrix},$$
and, for row-vectors, $u + v = (u_1 + v_1, \dots, u_n + v_n)$.
•Notice that one cannot add a column-vector and a row-vector!
• The multiplication of vectors by a real constant, called a scalar, is defined by
$$av = \begin{pmatrix} av_1 \\ av_2 \\ \vdots \\ av_n \end{pmatrix}, \qquad av^T = (av_1, \dots, av_n).$$
• The vectors that have only zero elements are called zero vectors, that is,
$$0 = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \qquad 0^T = (0, \dots, 0).$$
• The set of column-vectors
$$e_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad \dots, \quad e_n = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 0 \\ 1 \end{pmatrix}$$
and the set of row-vectors
$$e_1^T = (1, 0, \dots, 0), \quad e_2^T = (0, 1, \dots, 0), \quad \dots, \quad e_n^T = (0, 0, \dots, 1)$$
are called the standard (or canonical) bases in $\mathbb{R}^n$.
• There is a natural product of column-vectors and row-vectors that assigns to a row-vector and a column-vector a real number
$$\langle u^T, v \rangle = (u_1, u_2, \dots, u_n) \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = \sum_{i=1}^n u_i v_i = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n.$$
This is the simplest instance of a more general multiplication rule for matrices, which can be summarized by saying that one multiplies row by column.
• The product of two column-vectors and the product of two row-vectors, called the inner product (or the scalar product), is then defined by
$$(u, v) = (u^T, v^T) = \langle u^T, v \rangle = \sum_{i=1}^n u_i v_i = u_1 v_1 + \cdots + u_n v_n.$$
• Finally, the norm (or the length) of both column-vectors and row-vectors is defined by
$$\|v\| = \|v^T\| = \sqrt{\langle v^T, v \rangle} = \left( \sum_{i=1}^n v_i^2 \right)^{1/2} = \sqrt{v_1^2 + \cdots + v_n^2}.$$
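The following short sketch (added here as an illustration; it is not part of the original notes and assumes NumPy) verifies the row-by-column pairing and the norm formula on a concrete pair of vectors.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])

# The pairing <u^T, v> = sum_i u_i v_i ("row times column")
pairing = sum(u[i] * v[i] for i in range(len(u)))

# The inner product (u, v) coincides with the pairing, and ||v|| = sqrt(<v^T, v>)
assert np.isclose(pairing, np.dot(u, v))
norm_v = np.sqrt(np.dot(v, v))
assert np.isclose(norm_v, np.linalg.norm(v))
print(pairing, norm_v)   # 8.0 4.5825...
```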
1.1.2 Matrices
• A set of $n^2$ real numbers $A_{ij}$, $i, j = 1, \dots, n$, arranged in an array that has $n$ columns and $n$ rows,
$$A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{pmatrix},$$
is called a square $n \times n$ real matrix.
• The set of all real square $n \times n$ matrices is denoted by $\mathrm{Mat}(n, \mathbb{R})$.
• The number $A_{ij}$ (also called an entry of the matrix) appears in the $i$-th row and the $j$-th column of the matrix $A$:
$$A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1j} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2j} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots & & \vdots \\ A_{i1} & A_{i2} & \cdots & A_{ij} & \cdots & A_{in} \\ \vdots & \vdots & & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nj} & \cdots & A_{nn} \end{pmatrix}$$
•Remark. Notice that the first index indicates the row and the second index
indicates the column of the matrix.
• The matrix all of whose entries are zero is called the zero matrix.
• The addition of matrices is defined by
$$A + B = \begin{pmatrix} A_{11} + B_{11} & A_{12} + B_{12} & \cdots & A_{1n} + B_{1n} \\ A_{21} + B_{21} & A_{22} + B_{22} & \cdots & A_{2n} + B_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} + B_{n1} & A_{n2} + B_{n2} & \cdots & A_{nn} + B_{nn} \end{pmatrix}$$
and the multiplication by scalars by
$$cA = \begin{pmatrix} cA_{11} & cA_{12} & \cdots & cA_{1n} \\ cA_{21} & cA_{22} & \cdots & cA_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ cA_{n1} & cA_{n2} & \cdots & cA_{nn} \end{pmatrix}.$$
• The numbers $A_{ii}$ are called the diagonal entries. Of course, there are $n$ diagonal entries. The set of diagonal entries is called the diagonal of the matrix $A$.
• The numbers $A_{ij}$ with $i \neq j$ are called off-diagonal entries; there are $n(n-1)$ off-diagonal entries.
• The numbers $A_{ij}$ with $i < j$ are called the upper triangular entries. The set of upper triangular entries is called the upper triangular part of the matrix $A$.
• The numbers $A_{ij}$ with $i > j$ are called the lower triangular entries. The set of lower triangular entries is called the lower triangular part of the matrix $A$.
• The number of upper-triangular entries and the number of lower-triangular entries are the same, equal to $n(n-1)/2$.
• A matrix whose only non-zero entries are on the diagonal is called a diagonal matrix. For a diagonal matrix,
$$A_{ij} = 0 \quad \text{if } i \neq j.$$
• The diagonal matrix
$$A = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}$$
is also denoted by $A = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$.
• A diagonal matrix whose diagonal entries are all equal to 1,
$$I = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix},$$
is called the identity matrix. The elements of the identity matrix are
$$I_{ij} = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \neq j. \end{cases}$$
• A matrix $A$ of the form
$$A = \begin{pmatrix} * & * & \cdots & * \\ 0 & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & * \end{pmatrix},$$
where $*$ represents nonzero entries, is called an upper triangular matrix. Its lower triangular part is zero, that is,
$$A_{ij} = 0 \quad \text{if } i > j.$$
• A matrix $A$ of the form
$$A = \begin{pmatrix} * & 0 & \cdots & 0 \\ * & * & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & * \end{pmatrix},$$
whose upper triangular part is zero, that is,
$$A_{ij} = 0 \quad \text{if } i < j,$$
is called a lower triangular matrix.
• The transpose of a matrix $A$ whose $ij$-th entry is $A_{ij}$ is the matrix $A^T$ whose $ij$-th entry is $A_{ji}$. That is, $A^T$ is obtained from $A$ by switching the roles of the rows and columns of $A$:
$$A^T = \begin{pmatrix} A_{11} & A_{21} & \cdots & A_{j1} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{j2} & \cdots & A_{n2} \\ \vdots & \vdots & \ddots & \vdots & & \vdots \\ A_{1i} & A_{2i} & \cdots & A_{ji} & \cdots & A_{ni} \\ \vdots & \vdots & & \vdots & \ddots & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{jn} & \cdots & A_{nn} \end{pmatrix},$$
or $(A^T)_{ij} = A_{ji}$.
• A matrix $A$ is called symmetric if
$$A^T = A$$
and anti-symmetric if $A^T = -A$.
• The number of independent entries of an anti-symmetric matrix is $n(n-1)/2$.
• The number of independent entries of a symmetric matrix is $n(n+1)/2$.
• Every matrix $A$ can be uniquely decomposed as the sum of its diagonal part $A_D$, its lower triangular part $A_L$ and its upper triangular part $A_U$:
$$A = A_D + A_L + A_U.$$
• For an anti-symmetric matrix,
$$A_U^T = -A_L \quad \text{and} \quad A_D = 0.$$
• For a symmetric matrix, $A_U^T = A_L$.
• Every matrix $A$ can be uniquely decomposed as the sum of its symmetric part $A_S$ and its anti-symmetric part $A_A$:
$$A = A_S + A_A,$$
where
$$A_S = \frac{1}{2}(A + A^T), \qquad A_A = \frac{1}{2}(A - A^T).$$
• The product of matrices is defined as follows. The $ij$-th entry of the product $C = AB$ of two matrices $A$ and $B$ is
$$C_{ij} = \sum_{k=1}^n A_{ik} B_{kj} = A_{i1} B_{1j} + A_{i2} B_{2j} + \cdots + A_{in} B_{nj}.$$
This is again a multiplication of the "$i$-th row of the matrix $A$ by the $j$-th column of the matrix $B$".
• Theorem 1.1.1 The product of matrices is associative, that is, for any matrices $A$, $B$, $C$,
$$(AB)C = A(BC).$$
• Theorem 1.1.2 For any two matrices $A$ and $B$,
$$(AB)^T = B^T A^T.$$
• A matrix $A$ is called invertible if there is another matrix $A^{-1}$ such that
$$AA^{-1} = A^{-1}A = I.$$
The matrix $A^{-1}$ is called the inverse of $A$.
• Theorem 1.1.3 For any two invertible matrices $A$ and $B$,
$$(AB)^{-1} = B^{-1}A^{-1},$$
and $(A^{-1})^T = (A^T)^{-1}$.
• A matrix $A$ is called orthogonal if
$$A^T A = A A^T = I,$$
which means $A^T = A^{-1}$.
• The trace is a map $\mathrm{tr} : \mathrm{Mat}(n, \mathbb{R}) \to \mathbb{R}$ that assigns to each matrix $A = (A_{ij})$ a real number $\mathrm{tr}\, A$ equal to the sum of the diagonal elements of the matrix:
$$\mathrm{tr}\, A = \sum_{k=1}^n A_{kk}.$$
• Theorem 1.1.4 The trace has the properties
$$\mathrm{tr}(AB) = \mathrm{tr}(BA),$$
and $\mathrm{tr}\, A^T = \mathrm{tr}\, A$.
•Obviously, the trace of an anti-symmetric matrix is equal to zero.
•Finally, we define the multiplication of column-vectors by matrices from the left
and the multiplication of row-vectors by matrices from the right as follows.
•Each matrix defines a natural left action on a column-vector and a right action
on a row-vector.
• For each column-vector $v$ and each matrix $A = (A_{ij})$, the column-vector $u = Av$ is given by
$$\begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} A_{11} v_1 + A_{12} v_2 + \cdots + A_{1n} v_n \\ A_{21} v_1 + A_{22} v_2 + \cdots + A_{2n} v_n \\ \vdots \\ A_{n1} v_1 + A_{n2} v_2 + \cdots + A_{nn} v_n \end{pmatrix}.$$
• The components of the vector $u$ are
$$u_i = \sum_{j=1}^n A_{ij} v_j = A_{i1} v_1 + A_{i2} v_2 + \cdots + A_{in} v_n.$$
• Similarly, for a row-vector $v^T$ the components of the row-vector $u^T = v^T A$ are defined by
$$u_i = \sum_{j=1}^n v_j A_{ji} = v_1 A_{1i} + v_2 A_{2i} + \cdots + v_n A_{ni}.$$
1.1.3 Determinant
• Consider the set $Z_n = \{1, 2, \dots, n\}$ of the first $n$ integers. A permutation $\varphi$ of the set $\{1, 2, \dots, n\}$ is an ordered $n$-tuple $(\varphi(1), \dots, \varphi(n))$ of these numbers.
• That is, a permutation is a bijective (one-to-one and onto) function
$$\varphi : Z_n \to Z_n$$
that assigns to each number $i$ from the set $Z_n = \{1, \dots, n\}$ another number $\varphi(i)$ from this set.
•An elementary permutation is a permutation that exchanges the order of only
two numbers.
•Every permutation can be realized as a product (or a composition) of elemen-
tary permutations. A permutation that can be realized by an even number of
elementary permutations is called an even permutation. A permutation that
can be realized by an odd number of elementary permutations is called an odd
permutation.
•Proposition 1.1.1 The parity of a permutation does not depend on the repre-
sentation of a permutation by a product of the elementary ones.
• That is, each representation of an even permutation has an even number of elementary permutations, and similarly for odd permutations.
• The sign of a permutation $\varphi$, denoted by $\mathrm{sign}(\varphi)$ (or simply $(-1)^\varphi$), is defined by
$$\mathrm{sign}(\varphi) = (-1)^\varphi = \begin{cases} +1, & \text{if } \varphi \text{ is even,} \\ -1, & \text{if } \varphi \text{ is odd.} \end{cases}$$
• The set of all permutations of $n$ numbers is denoted by $S_n$.
• Theorem 1.1.5 The cardinality of this set, that is, the number of different permutations, is
$$|S_n| = n!\,.$$
• The determinant is a map $\det : \mathrm{Mat}(n, \mathbb{R}) \to \mathbb{R}$ that assigns to each matrix $A = (A_{ij})$ a real number $\det A$ defined by
$$\det A = \sum_{\varphi \in S_n} \mathrm{sign}(\varphi)\, A_{1\varphi(1)} \cdots A_{n\varphi(n)},$$
where the summation goes over all $n!$ permutations.
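As an added illustration (not part of the original notes), this definition can be transcribed directly into code for small $n$; the helper names perm_sign and det_by_permutations are chosen here, and the sign is computed from the parity of the number of inversions.

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation p of (0, ..., n-1), via the parity of its inversions."""
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det_by_permutations(A):
    """det A = sum over phi of sign(phi) * A[0, phi(0)] * ... * A[n-1, phi(n-1)]."""
    n = A.shape[0]
    return sum(perm_sign(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.isclose(det_by_permutations(A), np.linalg.det(A))
```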
• The most important properties of the determinant are listed below:
Theorem 1.1.6
1. The determinant of the product of matrices is equal to the product of the determinants:
$$\det(AB) = \det A \, \det B.$$
2. The determinants of a matrix $A$ and of its transpose $A^T$ are equal:
$$\det A = \det A^T.$$
3. The determinant of the inverse $A^{-1}$ of an invertible matrix $A$ is equal to the inverse of the determinant of $A$:
$$\det A^{-1} = (\det A)^{-1}.$$
4. A matrix is invertible if and only if its determinant is non-zero.
• The set of real invertible matrices (with non-zero determinant) is denoted by $GL(n, \mathbb{R})$. The set of matrices with positive determinant is denoted by $GL^+(n, \mathbb{R})$.
• A matrix with unit determinant is called unimodular.
• The set of real matrices with unit determinant is denoted by $SL(n, \mathbb{R})$.
• The set of real orthogonal matrices is denoted by $O(n)$.
• Theorem 1.1.7 The determinant of an orthogonal matrix is equal to either $1$ or $-1$.
• An orthogonal matrix with unit determinant (a unimodular orthogonal matrix) is called a proper orthogonal matrix or just a rotation.
• The set of real orthogonal matrices with unit determinant is denoted by $SO(n)$.
• A set $G$ of invertible matrices forms a group if it is closed under taking inverses and under matrix multiplication, that is, if the inverse $A^{-1}$ of any matrix $A$ in $G$ belongs to $G$ and the product $AB$ of any two matrices $A$ and $B$ in $G$ belongs to $G$.
1.1.4 Exercises
1. Show that the product of invertible matrices is an invertible matrix.
2. Show that the product of matrices with positive determinant is a matrix with positive determinant.
3. Show that the inverse of a matrix with positive determinant is a matrix with positive determinant.
4. Show that $GL(n, \mathbb{R})$ forms a group (called the general linear group).
5. Show that $GL^+(n, \mathbb{R})$ is a group (called the proper general linear group).
6. Show that the inverse of a matrix with negative determinant is a matrix with negative determinant.
7. Show that: a) the product of an even number of matrices with negative determinant is a matrix with positive determinant; b) the product of an odd number of matrices with negative determinant is a matrix with negative determinant.
8. Show that the product of matrices with unit determinant is a matrix with unit determinant.
9. Show that the inverse of a matrix with unit determinant is a matrix with unit determinant.
10. Show that $SL(n, \mathbb{R})$ forms a group (called the special linear group or the unimodular group).
11. Show that the product of orthogonal matrices is an orthogonal matrix.
12. Show that the inverse of an orthogonal matrix is an orthogonal matrix.
13. Show that $O(n)$ forms a group (called the orthogonal group).
14. Show that orthogonal matrices have determinant equal to either $+1$ or $-1$.
15. Show that the product of orthogonal matrices with unit determinant is an orthogonal matrix with unit determinant.
16. Show that the inverse of an orthogonal matrix with unit determinant is an orthogonal matrix with unit determinant.
17. Show that $SO(n)$ forms a group (called the proper orthogonal group or the rotation group).
1.2 Vector Spaces
• A real vector space consists of a set $E$, whose elements are called vectors, and the set of real numbers $\mathbb{R}$, whose elements are called scalars. There are two operations on a vector space:
1. Vector addition, $+ : E \times E \to E$, that assigns to two vectors $u, v \in E$ another vector $u + v$, and
2. Multiplication by scalars, $\cdot : \mathbb{R} \times E \to E$, that assigns to a vector $v \in E$ and a scalar $a \in \mathbb{R}$ a new vector $av \in E$.
The vector addition is an associative commutative operation with an additive identity. It satisfies the following conditions:
1. $u + v = v + u$, $\forall u, v \in E$;
2. $(u + v) + w = u + (v + w)$, $\forall u, v, w \in E$;
3. there is a vector $0 \in E$, called the zero vector, such that for any $v \in E$ there holds $v + 0 = v$;
4. for any vector $v \in E$ there is a vector $(-v) \in E$, called the opposite of $v$, such that $v + (-v) = 0$.
The multiplication by scalars satisfies the following conditions:
1. $a(bv) = (ab)v$, $\forall v \in E$, $\forall a, b \in \mathbb{R}$;
2. $(a + b)v = av + bv$, $\forall v \in E$, $\forall a, b \in \mathbb{R}$;
3. $a(u + v) = au + av$, $\forall u, v \in E$, $\forall a \in \mathbb{R}$;
4. $1v = v$, $\forall v \in E$.
•The zero vector is unique.
• For any $u, v \in E$ there is a unique vector, denoted by $w = v - u$ and called the difference of $v$ and $u$, such that $u + w = v$.
• For any $v \in E$, $0v = 0$ and $(-1)v = -v$.
• Let $E$ be a real vector space and $A = \{e_1, \dots, e_k\}$ be a finite collection of vectors from $E$. A linear combination of these vectors is a vector
$$a_1 e_1 + \cdots + a_k e_k,$$
where $a_1, \dots, a_k$ are scalars.
• A finite collection of vectors $A = \{e_1, \dots, e_k\}$ is linearly independent if
$$a_1 e_1 + \cdots + a_k e_k = 0$$
implies $a_1 = \cdots = a_k = 0$.
• A collection $A$ of vectors is linearly dependent if it is not linearly independent.
• Two non-zero vectors $u$ and $v$ which are linearly dependent are also called parallel, denoted by $u \parallel v$.
• A collection $A$ of vectors is linearly independent if no vector of $A$ is a linear combination of a finite number of other vectors from $A$.
• Let $A$ be a subset of a vector space $E$. The span of $A$, denoted by $\mathrm{span}\, A$, is the subset of $E$ consisting of all finite linear combinations of vectors from $A$, i.e.
$$\mathrm{span}\, A = \{ v \in E \mid v = a_1 e_1 + \cdots + a_k e_k, \ e_i \in A, \ a_i \in \mathbb{R} \}.$$
We say that the subset $\mathrm{span}\, A$ is spanned by $A$.
• Theorem 1.2.1 The span of any subset of a vector space is a vector space.
• A vector subspace of a vector space $E$ is a subset $S \subseteq E$ which is itself a vector space.
• Theorem 1.2.2 A subset $S$ of $E$ is a vector subspace of $E$ if and only if $\mathrm{span}\, S = S$.
• The span of $A$ is the smallest subspace of $E$ containing $A$.
• A collection $B$ of vectors of a vector space $E$ is a basis of $E$ if $B$ is linearly independent and $\mathrm{span}\, B = E$.
• A vector space $E$ is finite-dimensional if it has a finite basis.
• Theorem 1.2.3 If the vector space $E$ is finite-dimensional, then the number of vectors in any basis is the same.
• The dimension of a finite-dimensional real vector space $E$, denoted by $\dim E$, is the number of vectors in a basis.
• Theorem 1.2.4 If $\{e_1, \dots, e_n\}$ is a basis in $E$, then for every vector $v \in E$ there is a unique set of real numbers $(v^i) = (v^1, \dots, v^n)$ such that
$$v = \sum_{i=1}^n v^i e_i = v^1 e_1 + \cdots + v^n e_n.$$
• The real numbers $v^i$, $i = 1, \dots, n$, are called the components of the vector $v$ with respect to the basis $\{e_i\}$.
• It is customary to denote the components of vectors by superscripts, which should not be confused with powers of real numbers:
$$v^2, \quad (v)^2 = vv, \quad \dots, \quad v^n, \quad (v)^n.$$
Examples of Vector Subspaces
• Zero subspace $\{0\}$.
• A line with a tangent vector $u$:
$$S_1 = \mathrm{span}\{u\} = \{ v \in E \mid v = tu, \ t \in \mathbb{R} \}.$$
• A plane spanned by two nonparallel vectors $u_1$ and $u_2$:
$$S_2 = \mathrm{span}\{u_1, u_2\} = \{ v \in E \mid v = t u_1 + s u_2, \ t, s \in \mathbb{R} \}.$$
• More generally, a $k$-plane spanned by a linearly independent collection of $k$ vectors $\{u_1, \dots, u_k\}$:
$$S_k = \mathrm{span}\{u_1, \dots, u_k\} = \{ v \in E \mid v = t_1 u_1 + \cdots + t_k u_k, \ t_1, \dots, t_k \in \mathbb{R} \}.$$
• An $(n-1)$-plane in an $n$-dimensional vector space is called a hyperplane.
1.2.1 Exercises
1. Show that if $\lambda v = 0$, then either $v = 0$ or $\lambda = 0$.
2. Prove that the span of a collection of vectors is a vector subspace.
1.3 Inner Product and Norm
• A real vector space $E$ is called an inner product space if there is a function $(\cdot, \cdot) : E \times E \to \mathbb{R}$, called the inner product, that assigns to every two vectors $u$ and $v$ a real number $(u, v)$ and satisfies the conditions: $\forall u, v, w \in E$, $\forall a \in \mathbb{R}$:
1. $(v, v) \geq 0$;
2. $(v, v) = 0$ if and only if $v = 0$;
3. $(u, v) = (v, u)$;
4. $(u + v, w) = (u, w) + (v, w)$;
5. $(au, v) = (u, av) = a(u, v)$.
A finite-dimensional inner product space is called a Euclidean space.
•All spaces considered below are Euclidean spaces. Henceforth, Ewill denote an
n-dimensional Euclidean space if not specified otherwise.
•The Euclidean norm is a function || · || :E →R that assigns to every vector
v∈Ea real number || v|| defined by
||v || =p ( v, v ).
•The norm of a vector is also called the length.
•A vector with unit norm is called a unit vector.
• Theorem 1.3.1 For any $u, v \in E$ there holds
$$\|u + v\|^2 = \|u\|^2 + 2(u, v) + \|v\|^2.$$
• Theorem 1.3.2 Cauchy-Schwarz Inequality. For any $u, v \in E$ there holds
$$|(u, v)| \leq \|u\| \, \|v\|.$$
The equality
$$|(u, v)| = \|u\| \, \|v\|$$
holds if and only if $u$ and $v$ are parallel.
• Corollary 1.3.1 Triangle Inequality. For any $u, v \in E$ there holds
$$\|u + v\| \leq \|u\| + \|v\|.$$
• The angle between two non-zero vectors $u$ and $v$ is defined by
$$\cos\theta = \frac{(u, v)}{\|u\| \, \|v\|}, \qquad 0 \leq \theta \leq \pi.$$
Then the inner product can be written in the form
$$(u, v) = \|u\| \, \|v\| \cos\theta.$$
• Two non-zero vectors $u, v \in E$ are orthogonal, denoted by $u \perp v$, if
$$(u, v) = 0.$$
• A basis $\{e_1, \dots, e_n\}$ is called orthonormal if each vector of the basis is a unit vector and any two distinct vectors are orthogonal to each other, that is,
$$(e_i, e_j) = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \neq j. \end{cases}$$
•Theorem 1.3.3 Every Euclidean space has an orthonormal basis.
• Let $S \subset E$ be a nonempty subset of $E$. We say that $x \in E$ is orthogonal to $S$, denoted by $x \perp S$, if $x$ is orthogonal to every vector of $S$.
• The set
$$S^\perp = \{ x \in E \mid x \perp S \}$$
of all vectors orthogonal to $S$ is called the orthogonal complement of $S$.
• Theorem 1.3.4 The orthogonal complement of any subset of a Euclidean space is a vector subspace.
• Two subsets $A$ and $B$ of $E$ are orthogonal, denoted by $A \perp B$, if every vector of $A$ is orthogonal to every vector of $B$.
• Let $S$ be a subspace of $E$ and $S^\perp$ be its orthogonal complement. If every element of $E$ can be uniquely represented as the sum of an element of $S$ and an element of $S^\perp$, then $E$ is the direct sum of $S$ and $S^\perp$, which is denoted by
$$E = S \oplus S^\perp.$$
• The union of a basis of $S$ and a basis of $S^\perp$ gives a basis of $E$.
1.3.1 Exercises
1. Show that the Euclidean norm has the following properties:
(a) $\|v\| \geq 0$, $\forall v \in E$;
(b) $\|v\| = 0$ if and only if $v = 0$;
(c) $\|av\| = |a| \, \|v\|$, $\forall v \in E$, $\forall a \in \mathbb{R}$.
2. Parallelogram Law. Show that for any $u, v \in E$,
$$\|u + v\|^2 + \|u - v\|^2 = 2\left( \|u\|^2 + \|v\|^2 \right).$$
3. Show that any orthogonal system in $E$ is linearly independent.
4. Gram-Schmidt orthonormalization process. Let $G = \{u_1, \dots, u_k\}$ be a linearly independent collection of vectors. Let $O = \{v_1, \dots, v_k\}$ be a new collection of vectors defined recursively by
$$v_1 = u_1, \qquad v_j = u_j - \sum_{i=1}^{j-1} v_i \frac{(v_i, u_j)}{\|v_i\|^2}, \quad 2 \leq j \leq k,$$
and let the collection $B = \{e_1, \dots, e_k\}$ be defined by
$$e_i = \frac{v_i}{\|v_i\|}.$$
Show that: a) $O$ is an orthogonal system, and b) $B$ is an orthonormal system. (A numerical sketch of this process in code appears after this exercise list.)
5. Pythagorean Theorem. Show that if $u \perp v$, then
$$\|u + v\|^2 = \|u\|^2 + \|v\|^2.$$
6. Let $B = \{e_1, \dots, e_n\}$ be an orthonormal basis in $E$. Show that for any vector $v \in E$,
$$v = \sum_{i=1}^n e_i (e_i, v)$$
and
$$\|v\|^2 = \sum_{i=1}^n (e_i, v)^2.$$
7. Prove that the orthogonal complement of a subset $S$ of $E$ is a vector subspace of $E$.
8. Let $S$ be a subspace in $E$. Prove that:
a) $E^\perp = \{0\}$;  b) $\{0\}^\perp = E$;  c) $(S^\perp)^\perp = S$.
9. Show that the intersection of orthogonal subsets of a Euclidean space is either empty or consists of only the zero vector. That is, for two subsets $A$ and $B$, if $A \perp B$, then $A \cap B = \{0\}$ or $A \cap B = \emptyset$.
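The following sketch (an added illustration, not part of the original notes; the function name gram_schmidt is chosen here) implements the recursion of exercise 4 with NumPy and checks that the result is an orthonormal system.

```python
import numpy as np

def gram_schmidt(U):
    """Orthonormalize the columns of U:
    v_1 = u_1, v_j = u_j - sum_i v_i (v_i, u_j)/||v_i||^2, then e_i = v_i/||v_i||."""
    V = []
    for u in U.T:
        v = u.copy()
        for w in V:
            v -= w * np.dot(w, u) / np.dot(w, w)
        V.append(v)
    return np.column_stack([v / np.linalg.norm(v) for v in V])

U = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # linearly independent columns
E = gram_schmidt(U)
assert np.allclose(E.T @ E, np.eye(3))   # B = {e_i} is orthonormal
```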
1.4 Linear Operators
• A linear operator on a vector space $E$ is a mapping $L : E \to E$ satisfying the condition: $\forall u, v \in E$, $\forall a \in \mathbb{R}$,
$$L(u + v) = L(u) + L(v) \quad \text{and} \quad L(av) = a L(v).$$
• The identity operator $I$ on $E$ is defined by
$$Iv = v, \quad \forall v \in E.$$
• The null operator $0 : E \to E$ is defined by
$$0v = 0, \quad \forall v \in E.$$
• The vector $u = L(v)$ is the image of the vector $v$.
• If $S$ is a subset of $E$, then the set
$$L(S) = \{ u \in E \mid u = L(v) \text{ for some } v \in S \}$$
is the image of the set $S$, and the set
$$L^{-1}(S) = \{ v \in E \mid L(v) \in S \}$$
is the inverse image of the set $S$.
• The image of the whole space $E$ under a linear operator $L$ is the range (or the image) of $L$, denoted by
$$\mathrm{Im}(L) = L(E) = \{ u \in E \mid u = L(v) \text{ for some } v \in E \}.$$
• The kernel $\mathrm{Ker}(L)$ (or the null space) of an operator $L$ is the set of all vectors in $E$ which are mapped to zero, that is,
$$\mathrm{Ker}(L) = L^{-1}(\{0\}) = \{ v \in E \mid L(v) = 0 \}.$$
• Theorem 1.4.1 For any operator $L$ the sets $\mathrm{Im}(L)$ and $\mathrm{Ker}(L)$ are vector subspaces.
• The dimension of the kernel $\mathrm{Ker}(L)$ of an operator $L$,
$$\mathrm{null}(L) = \dim \mathrm{Ker}(L),$$
is called the nullity of the operator $L$.
• The dimension of the range $\mathrm{Im}(L)$ of an operator $L$,
$$\mathrm{rank}(L) = \dim \mathrm{Im}(L),$$
is called the rank of the operator $L$.
• Theorem 1.4.2 For any operator $L$ on an $n$-dimensional Euclidean space $E$,
$$\mathrm{rank}(L) + \mathrm{null}(L) = n.$$
• The set $L(E)$ of all linear operators on a vector space $E$ is a vector space with the addition of operators and multiplication by scalars defined by
$$(L_1 + L_2)(x) = L_1(x) + L_2(x), \quad \text{and} \quad (aL)(x) = a L(x).$$
•The product of the operators A and B is the composition of A and B.
• Since the product of operators is defined as a composition of linear mappings, it is automatically associative, which means that for any operators $A$, $B$ and $C$ there holds
$$(AB)C = A(BC).$$
• The integer powers of an operator are defined as the multiple composition of the operator with itself, i.e.
$$A^0 = I, \quad A^1 = A, \quad A^2 = AA, \quad \dots$$
• The operator $A$ on $E$ is invertible if there exists an operator $A^{-1}$ on $E$, called the inverse of $A$, such that
$$A^{-1} A = A A^{-1} = I.$$
• Theorem 1.4.3 Let $A$ and $B$ be invertible operators. Then:
$$(A^{-1})^{-1} = A, \qquad (AB)^{-1} = B^{-1} A^{-1}.$$
• The operators $A$ and $B$ are commuting if
$$AB = BA$$
and anti-commuting if $AB = -BA$.
• The operators $A$ and $B$ are said to be orthogonal to each other if
$$AB = BA = 0.$$
• An operator $A$ is involutive if $A^2 = I$, idempotent if $A^2 = A$, and nilpotent if for some integer $k$,
$$A^k = 0.$$
Selfadjoint Operators
• The adjoint $A^*$ of an operator $A$ is defined by
$$(Au, v) = (u, A^* v), \quad \forall u, v \in E.$$
• Theorem 1.4.4 For any two operators $A$ and $B$,
$$(A^*)^* = A, \qquad (AB)^* = B^* A^*.$$
• An operator $A$ is self-adjoint if
$$A^* = A$$
and anti-selfadjoint if
$$A^* = -A.$$
• Every operator $A$ can be decomposed as the sum
$$A = A_S + A_A$$
of its selfadjoint part $A_S$ and its anti-selfadjoint part $A_A$:
$$A_S = \frac{1}{2}(A + A^*), \qquad A_A = \frac{1}{2}(A - A^*).$$
• An operator $A$ is called unitary if
$$AA^* = A^* A = I.$$
• An operator $A$ on $E$ is called positive, denoted by $A \geq 0$, if it is selfadjoint and
$$(Av, v) \geq 0, \quad \forall v \in E.$$
Projection Operators
• Let $S$ be a subspace of $E$ and $E = S \oplus S^\perp$. Then for any $u \in E$ there exist unique $v \in S$ and $w \in S^\perp$ such that $u = v + w$. The vector $v$ is called the projection of $u$ onto $S$.
• The operator $P$ on $E$ defined by
$$Pu = v$$
is called the projection operator onto $S$.
• The operator $P^\perp$ defined by
$$P^\perp u = w$$
is the projection operator onto $S^\perp$.
• The operators $P$ and $P^\perp$ are called complementary projections. They have the properties:
$$P^* = P, \quad (P^\perp)^* = P^\perp,$$
$$P + P^\perp = I,$$
$$P^2 = P, \quad (P^\perp)^2 = P^\perp,$$
$$P P^\perp = P^\perp P = 0.$$
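A numerical sketch of these properties (added here for illustration and not part of the original notes; it assumes NumPy, and builds $P$ from an orthonormal basis of a randomly chosen 2-plane in $\mathbb{R}^4$):

```python
import numpy as np

# Orthonormal basis of a 2-plane S in R^4, from the QR factorization
E_S = np.linalg.qr(np.random.rand(4, 2))[0]
P = E_S @ E_S.T              # projection onto S
P_perp = np.eye(4) - P       # complementary projection onto S-perp

assert np.allclose(P @ P, P)               # P^2 = P (idempotent)
assert np.allclose(P, P.T)                 # P* = P (self-adjoint)
assert np.allclose(P @ P_perp, 0)          # P P_perp = P_perp P = 0
assert np.allclose(P + P_perp, np.eye(4))  # P + P_perp = I
```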
•Theorem 1.4.5 An operator P is a projection if and only if P is idempotent and
self-adjoint.
• More generally, a collection of projections $\{P_1, \dots, P_k\}$ is a complete orthogonal system of complementary projections if
$$P_i P_k = 0 \quad \text{if } i \neq k$$
and
$$\sum_{i=1}^k P_i = P_1 + \cdots + P_k = I.$$
• A complete orthogonal system of projections defines the orthogonal decomposition of the vector space
$$E = E_1 \oplus \cdots \oplus E_k,$$
where $E_i$ is the subspace that the projection $P_i$ projects onto.
• Theorem 1.4.6
1. The dimension of each subspace $E_i$ is equal to the rank of the projection $P_i$:
$$\dim E_i = \mathrm{rank}\, P_i.$$
2. The sum of the dimensions of the vector subspaces $E_i$ equals the dimension of the vector space $E$:
$$\sum_{i=1}^k \dim E_i = \dim E_1 + \cdots + \dim E_k = \dim E.$$
Spectral Decomposition Theorem
• A real number $\lambda$ is called an eigenvalue of an operator $A$ if there is a unit vector $u \in E$ such that
$$Au = \lambda u.$$
The vector $u$ is called the eigenvector corresponding to the eigenvalue $\lambda$.
• The span of all eigenvectors corresponding to the eigenvalue $\lambda$ of an operator $A$ is called the eigenspace of $\lambda$.
• The dimension of the eigenspace of the eigenvalue $\lambda$ is called the multiplicity (also called the geometric multiplicity) of $\lambda$.
• An eigenvalue of multiplicity 1 is called simple (or non-degenerate).
• An eigenvalue of multiplicity greater than 1 is called multiple (or degenerate).
•The set of all eigenvalues of an operator is called the spectrum of the operator.
• Theorem 1.4.7 Let $A$ be a selfadjoint operator. Then:
1. The number of eigenvalues counted with multiplicity is equal to the dimension $n = \dim E$ of the vector space $E$.
2. The eigenvectors corresponding to distinct eigenvalues are orthogonal to each other.
• Theorem 1.4.8 Spectral Decomposition of Self-Adjoint Operators. Let $A$ be a selfadjoint operator on $E$. Then there exists an orthonormal basis $B = \{e_1, \dots, e_n\}$ in $E$ consisting of eigenvectors of $A$ corresponding to the eigenvalues $\{\lambda_1, \dots, \lambda_n\}$, and a corresponding system of orthogonal complementary projections $\{P_1, \dots, P_n\}$ onto the one-dimensional eigenspaces $E_i$, such that
$$A = \sum_{i=1}^n \lambda_i P_i.$$
The projections $\{P_i\}$ are defined by
$$P_i v = e_i (e_i, v)$$
and satisfy the equations
$$\sum_{i=1}^n P_i = I, \quad \text{and} \quad P_i P_j = 0 \ \text{if } i \neq j.$$
• In other words, for any
$$v = \sum_{i=1}^n e_i (e_i, v),$$
we have
$$Av = \sum_{i=1}^n \lambda_i e_i (e_i, v).$$
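The spectral decomposition can be checked numerically; the following sketch (an added illustration, not part of the original notes) uses NumPy's eigh, which returns the eigenvalues and an orthonormal set of eigenvectors of a symmetric matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # a self-adjoint (symmetric) operator

lam, E = np.linalg.eigh(A)           # eigenvalues {lambda_i}, orthonormal eigenvectors {e_i}

# Reassemble A = sum_i lambda_i P_i, where P_i v = e_i (e_i, v)
A_rebuilt = sum(lam[i] * np.outer(E[:, i], E[:, i]) for i in range(len(lam)))
assert np.allclose(A, A_rebuilt)

# The projections are complementary: sum_i P_i = I
assert np.allclose(sum(np.outer(E[:, i], E[:, i]) for i in range(len(lam))),
                   np.eye(2))
```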
• Let $f : \mathbb{R} \to \mathbb{R}$ be a real-valued function on $\mathbb{R}$. Let $A$ be a selfadjoint operator on a Euclidean space $E$ given by its spectral decomposition
$$A = \sum_{i=1}^n \lambda_i P_i,$$
where the $P_i$ are the one-dimensional projections. Then one can define a function of the self-adjoint operator, $f(A)$, on $E$ by
$$f(A) = \sum_{i=1}^n f(\lambda_i) P_i.$$
• The exponential of an operator $A$ is defined by
$$\exp A = \sum_{k=0}^\infty \frac{1}{k!} A^k = \sum_{i=1}^n e^{\lambda_i} P_i.$$
• Theorem 1.4.9 Let $U$ be a unitary operator on a real vector space $E$. Then there exists an anti-selfadjoint operator $A$ such that
$$U = \exp A.$$
• Recall that the operators $U$ and $A$ satisfy the equations
$$U^* = U^{-1} \quad \text{and} \quad A^* = -A.$$
• Let $A$ be a self-adjoint operator with the eigenvalues $\{\lambda_1, \dots, \lambda_n\}$. Then the trace and the determinant of the operator $A$ are defined by
$$\mathrm{tr}\, A = \sum_{i=1}^n \lambda_i, \qquad \det A = \lambda_1 \cdots \lambda_n.$$
• Note that $\mathrm{tr}\, I = n$, $\det I = 1$.
• The trace of a projection $P$ onto a vector subspace $S$ is equal to its rank, or the dimension of the vector subspace $S$:
$$\mathrm{tr}\, P = \mathrm{rank}\, P = \dim S.$$
• The trace of a function of a self-adjoint operator $A$ is then
$$\mathrm{tr}\, f(A) = \sum_{i=1}^n f(\lambda_i).$$
If there are multiple eigenvalues, then each eigenvalue should be counted with its multiplicity.
• Theorem 1.4.10 Let $A$ be a self-adjoint operator. Then
$$\det \exp A = e^{\mathrm{tr}\, A}.$$
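This identity is easy to verify numerically; the sketch below (an added illustration, not part of the original notes) builds $\exp A$ from the spectral decomposition as $\sum_i e^{\lambda_i} P_i$.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, -1.0]])                  # self-adjoint, with tr A = 0

lam, E = np.linalg.eigh(A)
exp_A = E @ np.diag(np.exp(lam)) @ E.T       # f(A) = sum_i f(lambda_i) P_i with f = exp

# det(exp A) = e^{tr A} (here e^0 = 1)
assert np.isclose(np.linalg.det(exp_A), np.exp(np.trace(A)))
```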
• Let $A$ be a positive definite operator, $A > 0$. The zeta-function of the operator $A$ is defined by
$$\zeta(s) = \mathrm{tr}\, A^{-s} = \sum_{i=1}^n \frac{1}{\lambda_i^s}.$$
• Theorem 1.4.11 The zeta-function has the properties
$$\zeta(0) = n,$$
and
$$\zeta'(0) = -\log \det A.$$
Examples
• Let $u$ be a unit vector and $P_u$ be the projection onto the one-dimensional subspace (line) $S_u$ spanned by $u$, defined by
$$P_u v = u (u, v).$$
The orthogonal complement $S_u^\perp$ is the hyperplane with the normal $u$. The operator $J_u$ defined by
$$J_u = I - 2 P_u$$
is called the reflection operator with respect to the hyperplane $S_u^\perp$. The reflection operator is a self-adjoint involution, that is, it has the following properties:
$$J_u^* = J_u, \qquad J_u^2 = I.$$
The reflection operator has the eigenvalue $-1$ with multiplicity 1 and eigenspace $S_u$, and the eigenvalue $+1$ with multiplicity $(n-1)$ and eigenspace $S_u^\perp$.
• Let $u_1$ and $u_2$ be an orthonormal system of two vectors and $P_{u_1, u_2}$ be the projection operator onto the two-dimensional space (plane) $S_{u_1, u_2}$ spanned by $u_1$ and $u_2$:
$$P_{u_1, u_2} v = u_1 (u_1, v) + u_2 (u_2, v).$$
Let $N_{u_1, u_2}$ be an operator defined by
$$N_{u_1, u_2} v = u_1 (u_2, v) - u_2 (u_1, v).$$
Then
$$N_{u_1, u_2} P_{u_1, u_2} = P_{u_1, u_2} N_{u_1, u_2} = N_{u_1, u_2}$$
and
$$N_{u_1, u_2}^2 = -P_{u_1, u_2}.$$
A rotation operator $R_{u_1, u_2}(\theta)$ with the angle $\theta$ in the plane $S_{u_1, u_2}$ is defined by
$$R_{u_1, u_2}(\theta) = I - P_{u_1, u_2} + \cos\theta \, P_{u_1, u_2} + \sin\theta \, N_{u_1, u_2}.$$
The rotation operator is unitary, that is, it satisfies the equation
$$R_{u_1, u_2}^* R_{u_1, u_2} = I.$$
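A sketch of this construction in code (added here for illustration, not part of the original notes; the orthonormal pair $u_1, u_2$ below is an arbitrary choice):

```python
import numpy as np

u1 = np.array([1.0, 0.0, 0.0])   # orthonormal pair spanning a plane in R^3
u2 = np.array([0.0, 1.0, 0.0])
theta = 0.7

P = np.outer(u1, u1) + np.outer(u2, u2)   # projection onto the plane
N = np.outer(u1, u2) - np.outer(u2, u1)   # N v = u1 (u2, v) - u2 (u1, v)
I = np.eye(3)

R = I - P + np.cos(theta) * P + np.sin(theta) * N

assert np.allclose(N @ N, -P)             # N^2 = -P
assert np.allclose(R.T @ R, I)            # R* R = I: the rotation is unitary
```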
• Theorem 1.4.12 Spectral Decomposition of Unitary Operators on Real Vector Spaces. Let $U$ be a unitary operator on a real vector space $E$. Then the only eigenvalues of $U$ are $+1$ and $-1$ (possibly multiple) and there exists an orthogonal decomposition
$$E = E_+ \oplus E_- \oplus V_1 \oplus \cdots \oplus V_k,$$
where $E_+$ and $E_-$ are the eigenspaces corresponding to the eigenvalues $1$ and $-1$, and $\{V_1, \dots, V_k\}$ are two-dimensional subspaces such that
$$\dim E = \dim E_+ + \dim E_- + 2k.$$
Let $P_+, P_-, P_1, \dots, P_k$ be the corresponding orthogonal complementary system of projections, that is,
$$P_+ + P_- + \sum_{i=1}^k P_i = I.$$
Then there exists a corresponding system of operators $N_1, \dots, N_k$ satisfying the equations
$$N_i^2 = -P_i, \qquad N_i P_i = P_i N_i = N_i,$$
$$N_i P_j = P_j N_i = 0, \quad \text{if } i \neq j,$$
and angles $\theta_1, \dots, \theta_k$ such that
$$U = P_+ - P_- + \sum_{i=1}^k (\cos\theta_i \, P_i + \sin\theta_i \, N_i).$$
1.4.1 Exercises
1. Prove that the range and the kernel of any operator are vector spaces.
2. Show that
$$(aA + bB)^* = a A^* + b B^*, \quad \forall a, b \in \mathbb{R},$$
$$(A^*)^* = A,$$
$$(AB)^* = B^* A^*.$$
3. Show that for any operator $A$ the operators $AA^*$ and $A + A^*$ are selfadjoint.
4. Show that the product of two selfadjoint operators is selfadjoint if and only if they commute.
5. Show that a polynomial $p(A)$ of a selfadjoint operator $A$ is a selfadjoint operator.
6. Prove that the inverse of an invertible operator is unique.
7. Prove that an operator $A$ is invertible if and only if $\mathrm{Ker}\, A = \{0\}$, that is, $Av = 0$ implies $v = 0$.
8. Prove that for an invertible operator $A$, $\mathrm{Im}(A) = E$, that is, for any vector $v \in E$ there is a vector $u \in E$ such that $v = Au$.
9. Show that if an operator $A$ is invertible, then
$$(A^{-1})^{-1} = A.$$
10. Show that the product $AB$ of two invertible operators $A$ and $B$ is invertible and
$$(AB)^{-1} = B^{-1} A^{-1}.$$
11. Prove that the adjoint $A^*$ of any invertible operator $A$ is invertible and
$$(A^*)^{-1} = (A^{-1})^*.$$
12. Prove that the inverse $A^{-1}$ of a selfadjoint invertible operator is selfadjoint.
13. An operator $A$ on $E$ is called isometric if, $\forall v \in E$,
$$\|Av\| = \|v\|.$$
Prove that an operator is unitary if and only if it is isometric.
14. Prove that unitary operators preserve the inner product. That is, show that if $A$ is a unitary operator, then $\forall u, v \in E$,
$$(Au, Av) = (u, v).$$
15. Show that for every unitary operator $A$ both $A^{-1}$ and $A^*$ are unitary.
16. Show that for any operator $A$ the operators $AA^*$ and $A^* A$ are positive.
17. What subspaces do the null operator $0$ and the identity operator $I$ project onto?
18. Show that for any two projection operators $P$ and $Q$, $PQ = 0$ if and only if $QP = 0$.
19. Prove the following properties of orthogonal projections:
$$P^* = P, \quad (P^\perp)^* = P^\perp, \quad P^\perp + P = I, \quad P P^\perp = P^\perp P = 0.$$
20. Prove that an operator is a projection if and only if it is idempotent and selfadjoint.
21. Give an example of an idempotent operator in $\mathbb{R}^2$ which is not a projection.
22. Show that any projection operator $P$ is positive. Moreover, show that $\forall v \in E$,
$$(Pv, v) = \|Pv\|^2.$$
23. Prove that the sum $P = P_1 + P_2$ of two projections $P_1$ and $P_2$ is a projection operator if and only if $P_1$ and $P_2$ are orthogonal.
24. Prove that the product $P = P_1 P_2$ of two projections $P_1$ and $P_2$ is a projection operator if and only if $P_1$ and $P_2$ commute.
25. Find the eigenvalues of a projection operator.
26. Prove that the span of all eigenvectors corresponding to the eigenvalue $\lambda$ of an operator $A$ is a vector space.
27. Let $E(\lambda) = \mathrm{Ker}(A - \lambda I)$. Show that: a) if $\lambda$ is not an eigenvalue of $A$, then $E(\lambda) = \{0\}$, and b) if $\lambda$ is an eigenvalue of $A$, then $E(\lambda)$ is the eigenspace corresponding to the eigenvalue $\lambda$.
28. Show that the operator $A - \lambda I$ is invertible if and only if $\lambda$ is not an eigenvalue of the operator $A$.
29. Let $T$ be a unitary operator. Then the operators $A$ and
$$\tilde{A} = T A T^{-1}$$
are called similar. Show that the eigenvalues of similar operators are the same.
30. Show that an operator similar to a selfadjoint operator is selfadjoint, and an operator similar to an anti-selfadjoint operator is anti-selfadjoint.
31. Show that all eigenvalues of a positive operator $A$ are non-negative.
32. Show that the eigenvectors corresponding to distinct eigenvalues of a unitary operator are orthogonal to each other.
33. Show that the eigenvectors corresponding to distinct eigenvalues of a selfadjoint operator are orthogonal to each other.
34. Show that all eigenvalues of a unitary operator $A$ have absolute value equal to 1.
35. Show that if $A$ is a projection, then it can only have two eigenvalues: 1 and 0.
Chapter 2
Vector and Tensor Algebra
2.1 Metric Tensor
• Let $E$ be a Euclidean space and $\{e_i\} = \{e_1, \dots, e_n\}$ be a basis (not necessarily orthonormal). Then each vector $v \in E$ can be represented as a linear combination
$$v = \sum_{i=1}^n v^i e_i,$$
where $v^i$, $i = 1, \dots, n$, are the components of the vector $v$ with respect to the basis $\{e_i\}$ (or contravariant components of the vector $v$). We stress once again that contravariant components of vectors are denoted by upper indices (superscripts).
• Let $G = (g_{ij})$ be the matrix whose entries are defined by
$$g_{ij} = (e_i, e_j).$$
These numbers are called the components of the metric tensor with respect to the basis $\{e_i\}$ (also called covariant components of the metric).
• Notice that the matrix $G$ is symmetric, that is,
$$g_{ij} = g_{ji}, \qquad G^T = G.$$
• Theorem 2.1.1 The matrix $G$ is invertible and
$$\det G > 0.$$
• The elements of the inverse matrix $G^{-1} = (g^{ij})$ are called the contravariant components of the metric. They satisfy the equations
$$\sum_{j=1}^n g^{ij} g_{jk} = \delta^i_k,$$
where $\delta^i_j$ is the Kronecker symbol defined by
$$\delta^i_j = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \neq j. \end{cases}$$
• Since the inverse of a symmetric matrix is symmetric, we have
$$g^{ij} = g^{ji}.$$
• In an orthonormal basis,
$$g_{ij} = g^{ij} = \delta_{ij}, \qquad G = G^{-1} = I.$$
• Let $v \in E$ be a vector. The real numbers
$$v_i = (e_i, v)$$
are called the covariant components of the vector $v$. Notice that covariant components of vectors are denoted by lower indices (subscripts).
• Theorem 2.1.2 Let $v \in E$ be a vector. The covariant and the contravariant components of $v$ are related by
$$v_i = \sum_{j=1}^n g_{ij} v^j, \qquad v^i = \sum_{j=1}^n g^{ij} v_j.$$
• Theorem 2.1.3 The metric determines the inner product and the norm by
$$(u, v) = \sum_{i=1}^n \sum_{j=1}^n g_{ij} u^i v^j = \sum_{i=1}^n \sum_{j=1}^n g^{ij} u_i v_j,$$
$$\|v\|^2 = \sum_{i=1}^n \sum_{j=1}^n g_{ij} v^i v^j = \sum_{i=1}^n \sum_{j=1}^n g^{ij} v_i v_j.$$
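The following sketch (an added illustration, not part of the original notes; the non-orthonormal basis is an arbitrary choice) computes the metric of a basis of $\mathbb{R}^2$, lowers and raises an index, and checks that the metric reproduces the standard inner product.

```python
import numpy as np

# A non-orthonormal basis of R^2, stored as columns
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0])
E = np.column_stack([e1, e2])

G = E.T @ E                 # g_ij = (e_i, e_j), covariant metric components
G_inv = np.linalg.inv(G)    # g^ij, contravariant components

v_contra = np.array([2.0, 3.0])   # components v^i of v = 2 e1 + 3 e2
v_cov = G @ v_contra              # v_i = g_ij v^j (lowering the index)

# (v, v) = g_ij v^i v^j agrees with the standard inner product of v itself
v = E @ v_contra
assert np.isclose(v @ v, v_contra @ G @ v_contra)
assert np.allclose(v_contra, G_inv @ v_cov)   # raising the index recovers v^i
```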
2.2 Dual Space and Covectors
• A linear mapping $\omega : E \to \mathbb{R}$ that assigns to a vector $v \in E$ a real number $\langle \omega, v \rangle$ and satisfies the condition: $\forall u, v \in E$, $\forall a \in \mathbb{R}$,
$$\langle \omega, u + v \rangle = \langle \omega, u \rangle + \langle \omega, v \rangle, \quad \text{and} \quad \langle \omega, av \rangle = a \langle \omega, v \rangle,$$
is called a linear functional.
• The space of linear functionals is a vector space, called the dual space of $E$ and denoted by $E^*$, with the addition and multiplication by scalars defined by: $\forall \omega, \sigma \in E^*$, $\forall v \in E$, $\forall a \in \mathbb{R}$,
$$\langle \omega + \sigma, v \rangle = \langle \omega, v \rangle + \langle \sigma, v \rangle, \quad \text{and} \quad \langle a\omega, v \rangle = a \langle \omega, v \rangle.$$
The elements of the dual space $E^*$ are also called covectors or 1-forms. In keeping with tradition we will denote covectors by Greek letters.
•Theorem 2.2.1 The dual space E ∗ of a real vector space E is a real vector space
of the same dimension.
• Let $\{e_i\} = \{e_1, \dots, e_n\}$ be a basis in $E$. A basis $\{\omega^i\} = \{\omega^1, \dots, \omega^n\}$ in $E^*$ such that
$$\langle \omega^i, e_j \rangle = \delta^i_j$$
is called the dual basis.
• The dual $\{\omega^i\}$ of an orthonormal basis is also orthonormal.
• Given a dual basis $\{\omega^i\}$, every covector $\sigma$ in $E^*$ can be represented in a unique way as
$$\sigma = \sum_{i=1}^n \sigma_i \omega^i,$$
where the real numbers $(\sigma_i) = (\sigma_1, \dots, \sigma_n)$ are called the components of the covector $\sigma$ with respect to the basis $\{\omega^i\}$.
• The advantage of using the dual basis is that it allows one to compute the components of a vector $v$ and a covector $\sigma$ by
$$v^i = \langle \omega^i, v \rangle$$
and
$$\sigma_i = \langle \sigma, e_i \rangle.$$
That is,
$$v = \sum_{i=1}^n e_i \langle \omega^i, v \rangle$$
and
$$\sigma = \sum_{i=1}^n \langle \sigma, e_i \rangle \, \omega^i.$$
• More generally, the action of a covector $\sigma$ on a vector $v$ has the form
$$\langle \sigma, v \rangle = \sum_{i=1}^n \langle \sigma, e_i \rangle \langle \omega^i, v \rangle = \sum_{i=1}^n \sigma_i v^i.$$
• The existence of the metric allows us to define the following map
$$g : E \to E^*$$
that assigns to each vector $v$ a covector $g(v)$ such that
$$\langle g(v), u \rangle = (v, u).$$
Then
$$g(v) = \sum_{i=1}^n (v, e_i) \, \omega^i.$$
In particular,
$$g(e_k) = \sum_{i=1}^n g_{ki} \, \omega^i.$$
• Let $v$ be a vector and $\sigma$ be the corresponding covector, so $\sigma = g(v)$ and $v = g^{-1}(\sigma)$. Then their components are related by
$$\sigma_i = \sum_{j=1}^n g_{ij} v^j, \qquad v^i = \sum_{j=1}^n g^{ij} \sigma_j.$$
• The inverse map $g^{-1} : E^* \to E$ that assigns to each covector $\sigma$ a vector $g^{-1}(\sigma)$ such that
$$\langle \sigma, u \rangle = (g^{-1}(\sigma), u)$$
can be defined as follows. First, we define
$$g^{-1}(\omega^k) = \sum_{i=1}^n g^{ki} e_i.$$
Then
$$g^{-1}(\sigma) = \sum_{k=1}^n \sum_{i=1}^n \langle \sigma, e_k \rangle \, g^{ki} e_i.$$
• The inner product on the dual space $E^*$ is defined so that for any two covectors $\alpha$ and $\sigma$,
$$(\alpha, \sigma) = \langle \alpha, g^{-1}(\sigma) \rangle = (g^{-1}(\alpha), g^{-1}(\sigma)).$$
• This definition leads to $g^{ij} = (\omega^i, \omega^j)$.
• Theorem 2.2.2 The inner product on the dual space $E^*$ is determined by
$$(\alpha, \sigma) = \sum_{i=1}^n \sum_{j=1}^n g^{ij} \alpha_i \sigma_j.$$
In particular,
$$(\omega^i, \sigma) = \sum_{j=1}^n g^{ij} \sigma_j.$$
• The inverse map $g^{-1}$ can be defined in terms of the inner product of covectors as
$$g^{-1}(\sigma) = \sum_{i=1}^n e_i \, (\omega^i, \sigma).$$
• Since there is a one-to-one correspondence between vectors and covectors, we can treat a vector $v$ and the corresponding covector $g(v)$ as a single object, and denote the components $v^i$ of the vector $v$ and the components of the covector $g(v)$ by the same letter, that is,
$$v_i = \sum_{j=1}^n g_{ij} v^j, \qquad v^i = \sum_{j=1}^n g^{ij} v_j.$$
• We call $v^i$ the contravariant components and $v_i$ the covariant components. This operation is called raising and lowering an index; we use $g^{ij}$ to raise an index and $g_{ij}$ to lower an index.
2.2.1 Einstein Summation Convention
• In many equations of vector and tensor calculus, summations over the components of vectors, covectors and, more generally, tensors with respect to a given basis appear frequently. Such a summation usually occurs over a pair of equal indices, one lower index and one upper index, and one sums over all values of the indices from 1 to $n$. The number of summation symbols $\sum_{i=1}^n$ is equal to the number of pairs of repeated indices. That is why even simple equations become cumbersome and uncomfortable to work with. This led Einstein to drop all summation signs and to adopt the following summation convention:
1. In any expression there are two types of indices: free indices and repeated indices.
2. Free indices appear only once in an expression; they are assumed to take all possible values from 1 to $n$. For example, in the expression
$$g_{ij} v^j$$
the index $i$ is a free index.
3. The position of all free indices in all terms of an equation must be the same. For example,
$$g_{ij} v^j + \alpha_i = \sigma_i$$
is a correct equation, while the equation
$$g_{ij} v^j + \alpha_i = \sigma^i$$
is a wrong equation.
4. Repeated indices appear twice in an expression. It is assumed that there is a summation over each repeated pair of indices from 1 to $n$. The summation over a pair of repeated indices in an expression is called the contraction. For example, in the expression
$$g_{ij} v^j$$
the index $j$ is a repeated index. It actually means
$$\sum_{j=1}^n g_{ij} v^j.$$
This is the result of the contraction of the indices $k$ and $l$ in the expression $g_{ik} v^l$.
5. Repeated indices are dummy indices: they can be replaced by any other letter (not already used in the expression) without changing the meaning of the expression. For example,
$$g_{ij} v^j = g_{ik} v^k$$
just means
$$\sum_{j=1}^n g_{ij} v^j = g_{i1} v^1 + \cdots + g_{in} v^n,$$
no matter what the repeated index is called.
6. Indices cannot be repeated on the same level. That is, in a pair of repeated indices one index is in the upper position and the other is in the lower position. For example,
$$v^i v^i$$
is a wrong expression.
7. There cannot be indices occurring three or more times in any expression. For example, the expression
$$g_{ii} v^i$$
does not make sense.
•From now on we will use the Einstein summation convention. We will say that
an equation is written in tensor notation.
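NumPy's einsum mirrors this convention directly: repeated index letters are contracted, and the surviving letters are the free indices. The sketch below is an added illustration (not part of the original notes); the metric used is an arbitrary example.

```python
import numpy as np

g = np.array([[1.0, 0.5, 0.0],
              [0.5, 2.0, 0.0],
              [0.0, 0.0, 1.0]])      # covariant metric g_ij
g_inv = np.linalg.inv(g)             # contravariant metric g^ij
v = np.array([1.0, 2.0, 3.0])        # contravariant components v^j
u = np.array([0.0, 1.0, 1.0])

# g_ij v^j: the repeated index j is summed, the free index i survives
v_cov = np.einsum('ij,j->i', g, v)

# (u, v) = g_ij u^i v^j: both index pairs are contracted, leaving a scalar
inner = np.einsum('ij,i,j->', g, u, v)
assert np.isclose(inner, u @ g @ v)

# g^ij g_jk = delta^i_k
assert np.allclose(np.einsum('ij,jk->ik', g_inv, g), np.eye(3))
```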
Examples
• First, we list the equations we have already obtained above:
$$v_i = g_{ij} v^j, \qquad v^j = g^{ji} v_i,$$
$$(u, v) = g_{ij} u^i v^j = u^i v_i = u_i v^i = g^{ij} u_i v_j,$$
$$(\alpha, \beta) = g^{ij} \alpha_i \beta_j = \alpha_i \beta^i = \alpha^i \beta_i = g_{ij} \alpha^i \beta^j,$$
$$g^{ij} g_{jk} = \delta^i_k.$$
• A contraction of indices one of which belongs to the Kronecker symbol just renames the index. For example:
$$\delta^i_j v^j = v^i, \qquad \delta^i_j \delta^j_k = \delta^i_k,$$
etc.
• The contraction of the Kronecker symbol with itself gives
$$\delta^i_i = \sum_{i=1}^n 1 = n.$$
2.3 General Definition of a Tensor
• It should be realized that a vector is an invariant geometric object that does not depend on the basis; it exists by itself, independently of the basis. The basis is just a convenient tool to represent vectors by their components. The components of a vector do depend on the basis. It is the transformation law of the components of a vector (and, more generally, of a tensor, as we will see later) that makes an $n$-tuple of real numbers $(v^1, \dots, v^n)$ a vector. Not every collection of $n$ real numbers is a vector. To represent a vector, a geometric object that does not depend on the basis, these numbers should transform according to a very special rule under a change of basis.
• Let $\{e_i\} = \{e_1, \dots, e_n\}$ and $\{e'_j\} = \{e'_1, \dots, e'_n\}$ be two different bases in $E$. Obviously, the vectors from one basis can be decomposed as linear combinations of vectors from the other basis, that is,
$$e_i = \Lambda^j{}_i \, e'_j,$$
where $\Lambda^j{}_i$, $i, j = 1, \dots, n$, is a set of $n^2$ real numbers forming the transformation matrix $\Lambda = (\Lambda^i{}_j)$. Of course, we also have the inverse transformation
$$e'_j = \tilde{\Lambda}^k{}_j \, e_k,$$
where $\tilde{\Lambda}^k{}_j$, $k, j = 1, \dots, n$, is another set of $n^2$ real numbers.
• The dual bases $\{\omega^i\}$ and $\{\omega'^i\}$ are related by
$$\omega'^i = \Lambda^i{}_j \, \omega^j, \qquad \omega^i = \tilde{\Lambda}^i{}_j \, \omega'^j.$$
• By using the second equation in the first and vice versa, we obtain
$$e_i = \tilde{\Lambda}^k{}_j \Lambda^j{}_i \, e_k, \qquad e'_j = \Lambda^i{}_k \tilde{\Lambda}^k{}_j \, e'_i,$$
which means that
$$\tilde{\Lambda}^k{}_j \Lambda^j{}_i = \delta^k_i, \qquad \Lambda^i{}_k \tilde{\Lambda}^k{}_j = \delta^i_j.$$
Thus, the matrix $\tilde{\Lambda} = (\tilde{\Lambda}^i{}_j)$ is the inverse transformation matrix.
• In matrix notation this becomes
$$\tilde{\Lambda} \Lambda = I, \qquad \Lambda \tilde{\Lambda} = I,$$
which means that the matrix $\Lambda$ is invertible and
$$\tilde{\Lambda} = \Lambda^{-1}.$$
• The components of a vector $v$ with respect to the basis $\{e'_i\}$ are
$$v'^i = \langle \omega'^i, v \rangle = \Lambda^i{}_j \langle \omega^j, v \rangle.$$
This immediately gives
$$v'^i = \Lambda^i{}_j v^j.$$
This is the transformation law of contravariant components. It is easy to recognize this as the action of the transformation matrix on the column-vector of the vector components from the left.
• We can compute the transformation law of the components of a covector $\sigma$ as follows:
$$\sigma'_i = \langle \sigma, e'_i \rangle = \tilde{\Lambda}^j{}_i \langle \sigma, e_j \rangle,$$
which gives
$$\sigma'_i = \tilde{\Lambda}^j{}_i \sigma_j.$$
This is the transformation law of covariant components. It is the action of the inverse transformation matrix on the row-vector from the right. That is, the components of covectors are transformed with the transpose of the inverse transformation matrix!
• Now let us compute the transformation law of the covariant components of the metric tensor $g_{ij}$. By the definition we have
$$g'_{ij} = (e'_i, e'_j) = \tilde{\Lambda}^k{}_i \tilde{\Lambda}^l{}_j (e_k, e_l).$$
This leads to
$$g'_{ij} = \tilde{\Lambda}^k{}_i \tilde{\Lambda}^l{}_j \, g_{kl}.$$
• Similarly, the contravariant components of the metric tensor $g^{ij}$ transform according to
$$g'^{ij} = \Lambda^i{}_k \Lambda^j{}_l \, g^{kl}.$$
• The transformation law of the metric components in matrix notation reads
$$G' = (\Lambda^{-1})^T G \, \Lambda^{-1}$$
and
$$G'^{-1} = \Lambda G^{-1} \Lambda^T.$$
• We denote the determinant of the matrix of covariant metric components $G = (g_{ij})$ by
$$|g| = \det G = \det(g_{ij}).$$
• Taking the determinant of this equation, we obtain the transformation law of the determinant of the metric:
$$|g'| = (\det \Lambda)^{-2} |g|.$$
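These transformation laws can be checked numerically; the sketch below (an added illustration, not part of the original notes; the matrix and metric are arbitrary choices) verifies that the inner product $g_{ij} v^i w^j$ and the relation for $|g'|$ are basis-independent.

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
L = rng.random((n, n)) + n * np.eye(n)   # an invertible transformation matrix Lambda
L_inv = np.linalg.inv(L)                 # the inverse matrix Lambda-tilde

g = np.eye(n) + 0.1 * np.ones((n, n))    # covariant metric in the old basis
v = rng.random(n)                        # contravariant components v^j
w = rng.random(n)

v_new = L @ v                            # v'^i = Lambda^i_j v^j
w_new = L @ w
g_new = L_inv.T @ g @ L_inv              # G' = (Lambda^{-1})^T G Lambda^{-1}

# The inner product g_ij v^i w^j is invariant under the change of basis
assert np.isclose(v @ g @ w, v_new @ g_new @ w_new)

# |g'| = (det Lambda)^{-2} |g|
assert np.isclose(np.linalg.det(g_new),
                  np.linalg.det(L) ** (-2) * np.linalg.det(g))
```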
• More generally, a set of real numbers $T^{i_1 \dots i_p}{}_{j_1 \dots j_q}$ is said to represent the components of a tensor of type $(p, q)$ ($p$ times contravariant and $q$ times covariant) if they transform under a change of the basis according to
$$T'^{i_1 \dots i_p}{}_{j_1 \dots j_q} = \Lambda^{i_1}{}_{l_1} \cdots \Lambda^{i_p}{}_{l_p} \tilde{\Lambda}^{m_1}{}_{j_1} \cdots \tilde{\Lambda}^{m_q}{}_{j_q} \, T^{l_1 \dots l_p}{}_{m_1 \dots m_q}.$$
This is the general transformation law of the components of a tensor of type $(p, q)$.
• The rank of a tensor of type $(p, q)$ is the number $(p + q)$.
• A tensor product of a tensor $A$ of type $(p, q)$ and a tensor $B$ of type $(r, s)$ is a tensor $A \otimes B$ of type $(p + r, q + s)$ with components
$$(A \otimes B)^{i_1 \dots i_p \, l_1 \dots l_r}{}_{j_1 \dots j_q \, k_1 \dots k_s} = A^{i_1 \dots i_p}{}_{j_1 \dots j_q} \, B^{l_1 \dots l_r}{}_{k_1 \dots k_s}.$$
• The symmetrization of a tensor of type $(0, k)$ with components $A_{i_1 \dots i_k}$ is another tensor of the same type with components
$$A_{(i_1 \dots i_k)} = \frac{1}{k!} \sum_{\varphi \in S_k} A_{i_{\varphi(1)} \dots i_{\varphi(k)}},$$
where the summation goes over all permutations of $k$ indices. The symmetrization is denoted by parentheses.
• The antisymmetrization of a tensor of type $(0, k)$ with components $A_{i_1 \dots i_k}$ is another tensor of the same type with components
$$A_{[i_1 \dots i_k]} = \frac{1}{k!} \sum_{\varphi \in S_k} \mathrm{sign}(\varphi) \, A_{i_{\varphi(1)} \dots i_{\varphi(k)}},$$
where the summation goes over all permutations of $k$ indices. The antisymmetrization is denoted by square brackets.
• A tensor $A_{i_1 \dots i_k}$ is symmetric if
$$A_{(i_1 \dots i_k)} = A_{i_1 \dots i_k}$$
and anti-symmetric if $A_{[i_1 \dots i_k]} = A_{i_1 \dots i_k}$.
• Anti-symmetric tensors of type $(0, p)$ are called $p$-forms.
• Anti-symmetric tensors of type $(p, 0)$ are called $p$-vectors.
• A tensor is isotropic if it is a tensor product of $g_{ij}$, $g^{ij}$ and $\delta^i_j$.
• Every isotropic tensor has an even rank.
• For example, the most general isotropic tensor of rank two is
$$A^i{}_j = a \, \delta^i_j,$$
where $a$ is a scalar, and the most general isotropic tensor of rank four is
$$A^{ij}{}_{kl} = a \, g^{ij} g_{kl} + b \, \delta^i_k \delta^j_l + c \, \delta^i_l \delta^j_k,$$
where $a$, $b$, $c$ are scalars.
2.3.1 Orientation, Pseudotensors and Volume
• Since the transformation matrix $\Lambda$ is invertible, the determinant $\det \Lambda$ is either positive or negative. If $\det \Lambda > 0$, we say that the bases $\{e_i\}$ and $\{e'_i\}$ have the same orientation, and if $\det \Lambda < 0$, we say that the bases $\{e_i\}$ and $\{e'_i\}$ have the opposite orientation.
• This defines an equivalence relation on the set of all bases of $E$, called the orientation of the vector space $E$. This equivalence relation divides the set of all bases into two equivalence classes, called the positively oriented and negatively oriented bases.
• A vector space together with a choice of which equivalence class is positively oriented is called an oriented vector space.
• A set of real numbers $A^{i_1 \dots i_p}{}_{j_1 \dots j_q}$ is said to represent the components of a pseudo-tensor of type $(p, q)$ if they transform under a change of the basis according to
$$A'^{i_1 \dots i_p}{}_{j_1 \dots j_q} = \mathrm{sign}(\det \Lambda) \, \Lambda^{i_1}{}_{l_1} \cdots \Lambda^{i_p}{}_{l_p} \tilde{\Lambda}^{m_1}{}_{j_1} \cdots \tilde{\Lambda}^{m_q}{}_{j_q} \, A^{l_1 \dots l_p}{}_{m_1 \dots m_q},$$
where $\mathrm{sign}(x) = +1$ if $x > 0$ and $\mathrm{sign}(x) = -1$ if $x < 0$.
• The Levi-Civita symbol (also called the alternating symbol) is defined by
$$\varepsilon_{i_1 \dots i_n} = \varepsilon^{i_1 \dots i_n} = \begin{cases} +1, & \text{if } (i_1, \dots, i_n) \text{ is an even permutation of } (1, \dots, n), \\ -1, & \text{if } (i_1, \dots, i_n) \text{ is an odd permutation of } (1, \dots, n), \\ 0, & \text{if two or more indices are the same.} \end{cases}$$
• The Levi-Civita symbols $\varepsilon_{i_1 \dots i_n}$ and $\varepsilon^{i_1 \dots i_n}$ do not represent tensors! They have the same values in all bases.
• Theorem 2.3.1 The determinant of a matrix $A = (A_{ij})$ can be written as
$$\det A = \varepsilon^{i_1 \dots i_n} A_{1 i_1} \cdots A_{n i_n} = \varepsilon^{j_1 \dots j_n} A_{j_1 1} \cdots A_{j_n n} = \frac{1}{n!} \varepsilon^{i_1 \dots i_n} \varepsilon^{j_1 \dots j_n} A_{j_1 i_1} \cdots A_{j_n i_n}.$$
Here, as usual, a summation over all repeated indices is assumed from 1 to $n$.
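A numerical check of this formula (an added illustration, not part of the original notes; the helper name levi_civita is chosen here) builds the Levi-Civita symbol as an array and contracts it against the rows of a matrix.

```python
import numpy as np
from math import factorial
from itertools import permutations

def levi_civita(n):
    """The Levi-Civita symbol as an n-dimensional array with values in {+1, -1, 0}."""
    eps = np.zeros((n,) * n)
    for p in permutations(range(n)):
        inversions = sum(1 for i in range(n)
                           for j in range(i + 1, n) if p[i] > p[j])
        eps[p] = -1.0 if inversions % 2 else 1.0
    return eps

n = 3
eps = levi_civita(n)
A = np.random.rand(n, n)

# det A = eps^{i1 i2 i3} A_{1 i1} A_{2 i2} A_{3 i3}
det_A = np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2])
assert np.isclose(det_A, np.linalg.det(A))

# eps^{m1...mn} eps_{m1...mn} = n!
assert np.isclose(np.einsum('ijk,ijk->', eps, eps), factorial(n))
```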
• Theorem 2.3.2 There holds the identity
$$\varepsilon^{i_1 \dots i_n} \varepsilon_{j_1 \dots j_n} = \sum_{\varphi \in S_n} \mathrm{sign}(\varphi) \, \delta^{i_1}_{j_{\varphi(1)}} \cdots \delta^{i_n}_{j_{\varphi(n)}} = n! \, \delta^{i_1}_{[j_1} \cdots \delta^{i_n}_{j_n]}.$$
The contraction of this identity over $k$ indices gives
$$\varepsilon^{i_1 \dots i_{n-k} m_1 \dots m_k} \varepsilon_{j_1 \dots j_{n-k} m_1 \dots m_k} = k! \, (n-k)! \, \delta^{i_1}_{[j_1} \cdots \delta^{i_{n-k}}_{j_{n-k}]}.$$
In particular,
$$\varepsilon^{m_1 \dots m_n} \varepsilon_{m_1 \dots m_n} = n!\,.$$
• Theorem 2.3.3 The sets of real numbers $E_{i_1 \dots i_n}$ and $E^{i_1 \dots i_n}$ defined by
$$E_{i_1 \dots i_n} = \sqrt{|g|} \, \varepsilon_{i_1 \dots i_n}, \qquad E^{i_1 \dots i_n} = \frac{1}{\sqrt{|g|}} \, \varepsilon^{i_1 \dots i_n},$$
where $|g| = \det(g_{ij})$, define (pseudo)-tensors of type $(0, n)$ and $(n, 0)$, respectively.
• Let $\{v_1, \dots, v_n\}$ be an ordered $n$-tuple of vectors. The volume of the parallelepiped spanned by the vectors $\{v_1, \dots, v_n\}$ is a real number defined by
$$|\mathrm{vol}(v_1, \dots, v_n)| = \sqrt{\det((v_i, v_j))}.$$
• Theorem 2.3.4 Let $\{e_i\}$ be a basis in $E$, $\{\omega^i\}$ be the dual basis, and $\{v_1, \dots, v_n\}$ be a set of $n$ vectors. Let $V = (v^i{}_j)$ be the matrix of contravariant components of the vectors $\{v_j\}$,
$$v^i{}_j = \langle \omega^i, v_j \rangle,$$
and $W = (v_{ij})$ be the matrix of covariant components of the vectors $\{v_j\}$,
$$v_{ij} = (e_i, v_j) = g_{ik} v^k{}_j.$$
Then the volume of the parallelepiped spanned by the vectors $\{v_1, \dots, v_n\}$ is
$$|\mathrm{vol}(v_1, \dots, v_n)| = \sqrt{|g|} \, |\det V| = \frac{|\det W|}{\sqrt{|g|}}.$$
•If the vectors {v1 ,..., vn }are linearly dependent, then
vol(v 1 ,..., vn )= 0 .
•If the vectors {v1 ,...,vn }are linearly independent, then the volume is a positive
real number that does not depend on the orientation of the vectors.
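A short sketch of the Gram-determinant formula (assuming numpy; with the Euclidean metric, $|g| = 1$, the volume reduces to $|\det V|$):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(3, 3))        # columns are the vectors v_1, v_2, v_3
gram = V.T @ V                     # Gram matrix of inner products (v_i, v_j)
vol_gram = np.sqrt(np.linalg.det(gram))
vol_det = abs(np.linalg.det(V))
assert np.isclose(vol_gram, vol_det)
```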
•The signed volume of the parallelepiped spanned by an ordered $n$-tuple of vectors $\{v_1,\dots,v_n\}$ is
$$\mathrm{vol}(v_1,\dots,v_n) = \sqrt{|g|}\,\det V = \mathrm{sign}(v_1,\dots,v_n)\,|\mathrm{vol}(v_1,\dots,v_n)|\,.$$
The sign of the signed volume depends on the orientation of the vectors $\{v_1,\dots,v_n\}$:
$$\mathrm{sign}(v_1,\dots,v_n) = \mathrm{sign}(\det V) = \begin{cases} +1, & \text{if } \{v_1,\dots,v_n\} \text{ is positively oriented},\\ -1, & \text{if } \{v_1,\dots,v_n\} \text{ is negatively oriented}. \end{cases}$$
•Theorem 2.3.5 The signed volume is equal to
$$\mathrm{vol}(v_1,\dots,v_n) = E_{i_1\dots i_n}\, v^{i_1}{}_1\cdots v^{i_n}{}_n = E^{i_1\dots i_n}\, v_{i_1 1}\cdots v_{i_n n}\,,$$
where $v^i{}_j = \langle\omega^i, v_j\rangle$ and $v_{ij} = (e_i, v_j)$.
That is why the pseudo-tensor $E_{i_1\dots i_n}$ is also called the volume form.
Exterior Product and Duality
•The volume form allows one to define a duality between $k$-forms and $(n-k)$-vectors as follows. To each $k$-form $A_{i_1\dots i_k}$ one assigns the dual $(n-k)$-vector
$$(*A)^{j_1\dots j_{n-k}} = \frac{1}{k!}\, E^{j_1\dots j_{n-k}\, i_1\dots i_k}\, A_{i_1\dots i_k}\,.$$
Similarly, to each $k$-vector $A^{i_1\dots i_k}$ one assigns the dual $(n-k)$-form
$$(*A)_{j_1\dots j_{n-k}} = \frac{1}{k!}\, E_{j_1\dots j_{n-k}\, i_1\dots i_k}\, A^{i_1\dots i_k}\,.$$
•Theorem 2.3.6 For each $k$-form $\alpha$ there holds
$$**\,\alpha = (-1)^{k(n-k)}\,\alpha\,.$$
That is,
$$** = (-1)^{k(n-k)}\,.$$
•The exterior product of a $k$-form $A$ and an $m$-form $B$ is the $(k+m)$-form $A\wedge B$ defined by
$$(A\wedge B)_{i_1\dots i_k j_1\dots j_m} = \frac{(k+m)!}{k!\,m!}\, A_{[i_1\dots i_k} B_{j_1\dots j_m]}\,.$$
Similarly, one can define the exterior product of $p$-vectors.
•Theorem 2.3.7 The exterior product is associative, that is,
$$(A\wedge B)\wedge C = A\wedge(B\wedge C)\,.$$
•A collection $\{v_1,\dots,v_{n-1}\}$ of $(n-1)$ vectors defines a covector $\alpha$ by
$$\alpha = *(v_1\wedge\cdots\wedge v_{n-1})$$
or, in components,
$$\alpha_j = E_{j i_1\dots i_{n-1}}\, v^{i_1}{}_1\cdots v^{i_{n-1}}{}_{n-1}\,.$$
•Theorem 2.3.8 Let $\{v_1,\dots,v_{n-1}\}$ be a collection of $(n-1)$ vectors and $S = \mathrm{span}\{v_1,\dots,v_{n-1}\}$ be the hyperplane spanned by these vectors. Let $e$ be a unit vector orthogonal to $S$ oriented in such a way that $\{v_1,\dots,v_{n-1}, e\}$ is positively oriented. Then the vector $u = g^{-1}(\alpha)$ corresponding to the 1-form $\alpha = *(v_1\wedge\cdots\wedge v_{n-1})$ is parallel to $e$ (with the same orientation),
$$u = e\,\|u\|\,,$$
and has the norm
$$\|u\| = \mathrm{vol}(v_1,\dots,v_{n-1}, e)\,.$$
•In three dimensions, i.e. when $n = 3$, this defines a binary operation $\times$, called the vector product:
$$u = v\times w = *(v\wedge w)\,,$$
or
$$u_j = E_{jik}\, v^i w^k = \sqrt{|g|}\,\varepsilon_{jik}\, v^i w^k\,.$$
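A hedged sketch of this construction for a constant metric on $\mathbb{R}^3$ (numpy assumed; the diagonal metric and the two vectors are arbitrary choices for illustration):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

g = np.diag([1.0, 4.0, 9.0])          # example metric (an assumption for the demo)
v = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 1.0, 0.0])

# covariant components u_j = sqrt(|g|) eps_{jik} v^i w^k, then raise the index
u_cov = np.sqrt(np.linalg.det(g)) * np.einsum('jik,i,k->j', eps, v, w)
u = np.linalg.inv(g) @ u_cov          # u^j = g^{jk} u_k
# u is g-orthogonal to both v and w, as the hyperplane construction requires
assert np.isclose(v @ g @ u, 0) and np.isclose(w @ g @ u, 0)
```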
2.4 Operators and Tensors
•Let $\mathbf{A}$ be an operator on $E$. Let $\{e_i\} = \{e_1,\dots,e_n\}$ be a basis in a Euclidean space and $\{\omega^i\} = \{\omega^1,\dots,\omega^n\}$ be the dual basis in $E^*$. The real square matrix $A = (A^i{}_j)$, $i,j = 1,\dots,n$, defined by
$$\mathbf{A}\, e_j = A^i{}_j\, e_i\,,$$
is called the matrix of the operator $\mathbf{A}$.
•Therefore, there is a one-to-one correspondence between the operators on $E$ and the real square $n\times n$ matrices $A = (A^i{}_j)$.
•It can be computed by
$$A^i{}_j = \langle\omega^i, \mathbf{A}\, e_j\rangle = g^{ik}\,(e_k, \mathbf{A}\, e_j)\,.$$
•Remark. Notice that the upper index, which is the first one, indicates the row, and the lower index, which is the second one, indicates the column of the matrix. The convenience of this notation comes from the fact that all upper indices (also called contravariant indices) indicate the components of vectors and "belong" to the vector space $E$, while all lower indices (called covariant indices) indicate components of covectors and "belong" to the dual space $E^*$.
•The matrix of the identity operator $\mathbf{I}$ is
$$I^i{}_j = \delta^i_j\,.$$
•For any $v\in E$,
$$v = v^j e_j\,,\qquad v^j = \langle\omega^j, v\rangle\,,$$
we have
$$\mathbf{A}\, v = A^i{}_j\, v^j\, e_i\,.$$
That is, the components $u^i$ of the vector $u = \mathbf{A}\, v$ are given by
$$u^i = A^i{}_j\, v^j\,.$$
Transformation Law of the Matrix of an Operator
•Under a change of the basis $e_i = \Lambda^j{}_i\, e'_j$, the matrix $A^i{}_j$ of an operator $\mathbf{A}$ transforms according to
$$A'^i{}_j = \Lambda^i{}_k\, A^k{}_m\, \tilde\Lambda^m{}_j\,,$$
which in matrix notation reads
$$A' = \Lambda A \Lambda^{-1}\,.$$
•Therefore, the matrix $A = (A^i{}_j)$ of an operator $\mathbf{A}$ represents the components of a tensor of type (1,1). Conversely, such tensors naturally define linear operators on $E$. Thus, linear operators on $E$ and tensors of type (1,1) can be identified.
•The determinant and the trace of the matrix of an operator are invariant under a change of the basis, that is,
$$\det A' = \det A\,,\qquad \mathrm{tr}\, A' = \mathrm{tr}\, A\,.$$
•Therefore, one can define the determinant of the operator $\mathbf{A}$ and the trace of the operator $\mathbf{A}$ by the determinant and the trace of its matrix, that is,
$$\det \mathbf{A} = \det A\,,\qquad \mathrm{tr}\,\mathbf{A} = \mathrm{tr}\, A\,.$$
•For self-adjoint operators these definitions are consistent with the definition in terms of the eigenvalues given before.
•The matrix of the sum $\mathbf{A} + \mathbf{B}$ of two operators $\mathbf{A}$ and $\mathbf{B}$ is the sum of the matrices $A$ and $B$ of the operators $\mathbf{A}$ and $\mathbf{B}$.
•The matrix of a scalar multiple $c\mathbf{A}$ is equal to $cA$, where $A$ is the matrix of the operator $\mathbf{A}$ and $c\in\mathbb{R}$.
Matrix of the Product of Operators
•The matrix of the product $\mathbf{C} = \mathbf{A}\mathbf{B}$ of two operators reads
$$C^i{}_j = \langle\omega^i, \mathbf{A}\mathbf{B}\, e_j\rangle = \langle\omega^i, \mathbf{A}\, e_k\rangle\,\langle\omega^k, \mathbf{B}\, e_j\rangle = A^i{}_k\, B^k{}_j\,,$$
which is exactly the product of the matrices $A$ and $B$.
•Thus, the matrix of the product $\mathbf{A}\mathbf{B}$ of the operators $\mathbf{A}$ and $\mathbf{B}$ is equal to the product $AB$ of the matrices of these operators in the same order.
•The matrix of the inverse $\mathbf{A}^{-1}$ of an invertible operator $\mathbf{A}$ is equal to the inverse $A^{-1}$ of the matrix $A$ of the operator $\mathbf{A}$.
•Theorem 2.4.1 The algebra $L(E)$ of linear operators on $E$ is isomorphic to the algebra $\mathrm{Mat}(n,\mathbb{R})$ of real square $n\times n$ matrices.
Matrix of the Adjoint Operator
•For the adjoint operator $\mathbf{A}^*$ we have
$$(e_i, \mathbf{A}^* e_j) = (\mathbf{A}\, e_i, e_j) = (e_j, \mathbf{A}\, e_i)\,.$$
Therefore, the matrix of the adjoint operator is
$$A^{*\,k}{}_j = g^{ki}\,(e_i, \mathbf{A}^* e_j) = g^{ki}\, A^l{}_i\, g_{lj}\,.$$
In matrix notation this reads
$$A^* = G^{-1} A^T G\,.$$
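As a quick sanity check (numpy assumed; $G$ is an arbitrary example metric), the matrix $G^{-1} A^T G$ indeed satisfies the defining property $(\mathbf{A}u, v) = (u, \mathbf{A}^*v)$ of the adjoint with respect to the inner product $(u,v) = u^T G v$:

```python
import numpy as np

rng = np.random.default_rng(1)
G = np.diag([1.0, 2.0, 5.0])                  # example metric (an assumption)
A = rng.normal(size=(3, 3))
A_star = np.linalg.inv(G) @ A.T @ G           # matrix of the adjoint operator

u, v = rng.normal(size=3), rng.normal(size=3)
# (A u, v) = (u, A* v) in the metric G
assert np.isclose((A @ u) @ G @ v, u @ G @ (A_star @ v))
```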
•Thus, the matrix of a self-adjoint operator $\mathbf{A}$ satisfies the equation
$$A^k{}_j = g^{ki}\, A^l{}_i\, g_{lj}\qquad\text{or}\qquad g_{ik}\, A^k{}_j = A^l{}_i\, g_{lj}\,,$$
which in matrix notation reads
$$A = G^{-1} A^T G\,,\qquad\text{or}\qquad GA = A^T G\,.$$
•The matrix of a unitary operator $\mathbf{A}$ satisfies the equation
$$g^{ki}\, A^l{}_i\, g_{lj}\, A^j{}_m = \delta^k_m\qquad\text{or}\qquad A^l{}_i\, g_{lj}\, A^j{}_m = g_{im}\,,$$
which in matrix notation has the form
$$G^{-1} A^T G A = I\,,\qquad\text{or}\qquad A^T G A = G\,.$$
2.5 Vector Algebra in $\mathbb{R}^3$
•We denote the standard orthonormal basis in $\mathbb{R}^3$ by
$$e_1 = \mathbf{i}\,,\quad e_2 = \mathbf{j}\,,\quad e_3 = \mathbf{k}\,,$$
so that $e_i\cdot e_j = \delta_{ij}$.
•Each vector $v$ is decomposed as
$$v = v_1\,\mathbf{i} + v_2\,\mathbf{j} + v_3\,\mathbf{k}\,.$$
The components are computed by
$$v_1 = v\cdot\mathbf{i}\,,\quad v_2 = v\cdot\mathbf{j}\,,\quad v_3 = v\cdot\mathbf{k}\,.$$
•The norm of a vector is
$$\|v\| = \sqrt{v_1^2 + v_2^2 + v_3^2}\,.$$
•The scalar product is defined by
$$v\cdot u = v_1 u_1 + v_2 u_2 + v_3 u_3\,.$$
•The angle between vectors is determined by
$$\cos\theta = \frac{u\cdot v}{\|u\|\,\|v\|}\,.$$
•The orthogonal decomposition of a vector $v$ with respect to a given unit vector $u$ is
$$v = v_\parallel + v_\perp\,,$$
where
$$v_\parallel = u\,(u\cdot v)\,,\qquad v_\perp = v - u\,(u\cdot v)\,.$$
•We denote the Cartesian coordinates in $\mathbb{R}^3$ by
$$x^1 = x\,,\quad x^2 = y\,,\quad x^3 = z\,.$$
The radius vector (the position vector) is
$$\mathbf{r} = x\,\mathbf{i} + y\,\mathbf{j} + z\,\mathbf{k}\,.$$
•The parametric equation of a line parallel to a vector $u = a\,\mathbf{i} + b\,\mathbf{j} + c\,\mathbf{k}$ is
$$\mathbf{r} = \mathbf{r}_0 + t\,u\,,$$
where $\mathbf{r}_0 = x_0\,\mathbf{i} + y_0\,\mathbf{j} + z_0\,\mathbf{k}$ is a fixed vector and $t$ is a real parameter. In components,
$$x = x_0 + at\,,\quad y = y_0 + bt\,,\quad z = z_0 + ct\,.$$
The non-parametric equation of a line (if $a, b, c$ are non-zero) is
$$\frac{x - x_0}{a} = \frac{y - y_0}{b} = \frac{z - z_0}{c}\,.$$
•The parametric equation of a plane spanned by two non-parallel vectors $u$ and $v$ is
$$\mathbf{r} = \mathbf{r}_0 + t\,u + s\,v\,,$$
where $t$ and $s$ are real parameters.
•A vector $n$ that is perpendicular to both vectors $u$ and $v$ is normal to the plane.
•The non-parametric equation of a plane with the normal $n = a\,\mathbf{i} + b\,\mathbf{j} + c\,\mathbf{k}$ is
$$(\mathbf{r} - \mathbf{r}_0)\cdot n = 0$$
or
$$a(x - x_0) + b(y - y_0) + c(z - z_0) = 0\,,$$
which can also be written as
$$ax + by + cz = d\,,$$
where $d = a x_0 + b y_0 + c z_0$.
•The positive (right-handed) orientation of a plane is defined by the right-hand (or counterclockwise) rule. That is, if $u_1$ and $u_2$ span a plane, then we orient the plane by saying which vector is the first and which is the second. The orientation is positive if the rotation from $u_1$ to $u_2$ is counterclockwise and negative if it is clockwise. A plane has two sides. The positive side of the plane is the side with the positive orientation; the other side has the negative (left-handed) orientation.
•The vector product of two vectors is defined by
$$w = u\times v = \det\begin{pmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k}\\ u_1 & u_2 & u_3\\ v_1 & v_2 & v_3 \end{pmatrix},$$
or, in components,
$$w_i = \varepsilon_{ijk}\, u_j v_k = \frac{1}{2}\,\varepsilon_{ijk}\,(u_j v_k - u_k v_j)\,.$$
•The vector products of the basis vectors are
$$e_i\times e_j = \varepsilon_{ijk}\, e_k\,.$$
•If $u$ and $v$ are two nonzero nonparallel vectors, then the vector $w = u\times v$ is orthogonal to both vectors $u$ and $v$, and, hence, to the plane spanned by these vectors. It defines a normal to this plane.
•The area of the parallelogram spanned by two vectors $u$ and $v$ is
$$\mathrm{area}(u,v) = \|u\times v\| = \|u\|\,\|v\|\,\sin\theta\,.$$
•The signed volume of the parallelepiped spanned by three vectors $u$, $v$ and $w$ is
$$\mathrm{vol}(u,v,w) = u\cdot(v\times w) = \det\begin{pmatrix} u_1 & u_2 & u_3\\ v_1 & v_2 & v_3\\ w_1 & w_2 & w_3 \end{pmatrix} = \varepsilon_{ijk}\, u_i v_j w_k\,.$$
The signed volume is also called the scalar triple product and denoted by
$$[u, v, w] = u\cdot(v\times w)\,.$$
•The signed volume is zero if and only if the vectors are linearly dependent, that is, coplanar.
•For linearly independent vectors its sign depends on the orientation of the triple of vectors $\{u, v, w\}$:
$$\mathrm{vol}(u,v,w) = \mathrm{sign}(u,v,w)\,|\mathrm{vol}(u,v,w)|\,,$$
where
$$\mathrm{sign}(u,v,w) = \begin{cases} +1, & \text{if } \{u,v,w\} \text{ is positively oriented},\\ -1, & \text{if } \{u,v,w\} \text{ is negatively oriented}. \end{cases}$$
•The scalar triple product is linear in each argument, anti-symmetric,
$$[u,v,w] = -[v,u,w] = -[u,w,v] = -[w,v,u]\,,$$
and cyclic,
$$[u,v,w] = [v,w,u] = [w,u,v]\,.$$
It is normalized so that $[\mathbf{i},\mathbf{j},\mathbf{k}] = 1$.
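A small numerical illustration of these properties (numpy assumed; the three vectors are arbitrary example choices):

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])
w = np.array([2.0, 0.0, 1.0])

# the scalar triple product equals the 3x3 determinant of the component rows
triple = u @ np.cross(v, w)
assert np.isclose(triple, np.linalg.det(np.array([u, v, w])))
# antisymmetry and cyclicity
assert np.isclose(triple, -(v @ np.cross(u, w)))
assert np.isclose(triple, v @ np.cross(w, u))
```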
•The orthogonal decomposition of a vector $v$ with respect to a unit vector $u$ can be written in the form
$$v = u\,(u\cdot v) - u\times(u\times v)\,.$$
•The Levi-Civita symbol in three dimensions,
$$\varepsilon_{ijk} = \varepsilon^{ijk} = \begin{cases} +1, & \text{if } (i,j,k) = (1,2,3),\,(2,3,1),\,(3,1,2),\\ -1, & \text{if } (i,j,k) = (2,1,3),\,(3,2,1),\,(1,3,2),\\ 0, & \text{otherwise}, \end{cases}$$
has the following properties:
$$\varepsilon_{ijk} = -\varepsilon_{jik} = -\varepsilon_{ikj} = -\varepsilon_{kji}\,,\qquad \varepsilon_{ijk} = \varepsilon_{jki} = \varepsilon_{kij}\,,$$
$$\varepsilon_{ijk}\,\varepsilon^{mnl} = 6\,\delta^m_{[i}\delta^n_j\delta^l_{k]} = \delta^m_i\delta^n_j\delta^l_k + \delta^m_j\delta^n_k\delta^l_i + \delta^m_k\delta^n_i\delta^l_j - \delta^m_i\delta^n_k\delta^l_j - \delta^m_j\delta^n_i\delta^l_k - \delta^m_k\delta^n_j\delta^l_i\,,$$
$$\varepsilon_{ijk}\,\varepsilon^{mnk} = 2\,\delta^m_{[i}\delta^n_{j]} = \delta^m_i\delta^n_j - \delta^m_j\delta^n_i\,,$$
$$\varepsilon_{ijk}\,\varepsilon^{mjk} = 2\,\delta^m_i\,,\qquad \varepsilon_{ijk}\,\varepsilon^{ijk} = 6\,.$$
•This leads to many vector identities that express the double vector product in terms of the scalar product. For example,
$$u\times(v\times w) = (u\cdot w)\,v - (u\cdot v)\,w\,,$$
$$u\times(v\times w) + v\times(w\times u) + w\times(u\times v) = 0\,,$$
$$(u\times v)\times(w\times n) = v\,[u,w,n] - u\,[v,w,n]\,,$$
$$(u\times v)\cdot(w\times n) = (u\cdot w)(v\cdot n) - (u\cdot n)(v\cdot w)\,.$$
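A numerical spot check of these identities on randomly chosen vectors (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
u, v, w, n = rng.normal(size=(4, 3))

# BAC-CAB rule
assert np.allclose(np.cross(u, np.cross(v, w)), (u @ w) * v - (u @ v) * w)
# Jacobi identity for the cross product
assert np.allclose(np.cross(u, np.cross(v, w)) + np.cross(v, np.cross(w, u))
                   + np.cross(w, np.cross(u, v)), 0)
# Lagrange identity
assert np.isclose(np.cross(u, v) @ np.cross(w, n),
                  (u @ w) * (v @ n) - (u @ n) * (v @ w))
```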
Chapter 3
Geometry
3.1 Geometry of Euclidean Space
•The set $\mathbb{R}^n$ can be viewed geometrically as a set of points, which we will denote by $P$, $Q$, etc. With each point $P$ we associate an ordered $n$-tuple of real numbers $(x^i_P) = (x^1_P,\dots,x^n_P)$, called the coordinates of the point $P$. The assignment of $n$-tuples of real numbers to the points in space should be bijective. That is, different points are assigned different $n$-tuples, and for every $n$-tuple there is a point in space with such coordinates. Such a map is called a coordinate system.
•A space $\mathbb{R}^n$ with a coordinate system is a Euclidean space if the distance between any two points $P$ and $Q$ is determined by
$$d(P,Q) = \sqrt{\sum_{i=1}^n (x^i_P - x^i_Q)^2}\,.$$
Such a coordinate system is called Cartesian.
•The point $O$ with the zero coordinates $(0,\dots,0)$ is called the origin of the Cartesian coordinate system.
•In $\mathbb{R}^n$ it is convenient to associate vectors with points in space. With each point $P$ with Cartesian coordinates $(x^1,\dots,x^n)$ in $\mathbb{R}^n$ we associate the column-vector $\mathbf{r} = (x^i)$ with the components equal to the Cartesian coordinates of the point $P$. We say that this vector points from the origin $O$ to the point $P$; it has its tail at the point $O$ and its tip at the point $P$. This vector is often called the radius vector, or the position vector, of the point $P$ and denoted by $\mathbf{r}$.
•Similarly, with every two points $P$ and $Q$ with the coordinates $(x^i_P)$ and $(x^i_Q)$ we associate the vector
$$u_{PQ} = \mathbf{r}_Q - \mathbf{r}_P = (x^i_Q - x^i_P)$$
that points from the point $P$ to the point $Q$.
•Obviously, the Euclidean distance is given by
$$d(P,Q) = \|\mathbf{r}_P - \mathbf{r}_Q\|\,.$$
•The standard (orthonormal) basis $\{e_1,\dots,e_n\}$ of $\mathbb{R}^n$ consists of the unit vectors that connect the origin $O$ with the points $\{(1,0,\dots,0),\dots,(0,\dots,0,1)\}$ that have only one nonzero coordinate, which is equal to 1.
•The one-dimensional subspaces $L_i$ spanned by a single basis vector $e_i$,
$$L_i = \mathrm{span}\{e_i\} = \{P \mid \mathbf{r}_P = t\,e_i,\ t\in\mathbb{R}\}\,,$$
are the lines called the coordinate axes. There are $n$ coordinate axes; they are mutually orthogonal and intersect at only one point, the origin $O$.
•The two-dimensional subspaces $P_{ij}$ spanned by a pair of basis vectors $e_i$ and $e_j$,
$$P_{ij} = \mathrm{span}\{e_i, e_j\} = \{P \mid \mathbf{r}_P = t\,e_i + s\,e_j,\ t,s\in\mathbb{R}\}\,,$$
are the planes called the coordinate planes. There are $n(n-1)/2$ coordinate planes; the coordinate planes are mutually orthogonal and intersect along the coordinate axes.
•Let $a$ and $b$ be real numbers such that $a < b$. The set $[a,b]$ is a closed interval in $\mathbb{R}$. A parametrized curve $C$ in $\mathbb{R}^n$ is a map $C : [a,b]\to\mathbb{R}^n$ which assigns a point
$$C : \mathbf{r}(t) = (x^i(t))$$
in $\mathbb{R}^n$ to each real number $t\in[a,b]$.
•The positive orientation of the curve $C$ is determined by the standard orientation of $\mathbb{R}$, that is, by the direction of increasing values of the parameter $t$.
•The point $\mathbf{r}(a)$ is the initial point and the point $\mathbf{r}(b)$ is the endpoint of the curve.
•The curve $(-C)$ is the parametrized curve with the opposite orientation. If the curve $C$ is parametrized by $\mathbf{r}(t)$, $a\le t\le b$, then the curve $(-C)$ is parametrized by
$$(-C) : \mathbf{r}(-t + a + b)\,.$$
•The boundary $\partial C$ of the curve $C$ consists of two points $C_0$ and $C_1$ corresponding to $\mathbf{r}(a)$ and $\mathbf{r}(b)$, that is,
$$\partial C = C_1 - C_0\,.$$
•A curve $C$ is continuous if all the functions $x^i(t)$ are continuous for any $t$ in $[a,b]$.
•Let $a_1, a_2$ and $b_1, b_2$ be real numbers such that $a_1 < b_1$ and $a_2 < b_2$. The set $D = [a_1,b_1]\times[a_2,b_2]$ is a closed rectangle in the plane $\mathbb{R}^2$.
•A parametrized surface $S$ in $\mathbb{R}^n$ is a map $S : D\to\mathbb{R}^n$ which assigns a point
$$S : \mathbf{r}(u) = (x^i(u))$$
in $\mathbb{R}^n$ to each point $u = (u^1, u^2)$ in the rectangle $D$.
•The positive orientation of the surface $S$ is determined by the positive orientation of the standard basis in $\mathbb{R}^2$. The surface $(-S)$ is the surface with the opposite orientation.
•The boundary $\partial S$ of the surface $S$ consists of four curves $S_{(1),0}$, $S_{(1),1}$, $S_{(2),0}$ and $S_{(2),1}$ parametrized by $\mathbf{r}(a_1, v)$, $\mathbf{r}(b_1, v)$, $\mathbf{r}(u, a_2)$ and $\mathbf{r}(u, b_2)$ respectively. Taking into account the orientation, the boundary of the surface $S$ is
$$\partial S = S_{(2),0} + S_{(1),1} - S_{(2),1} - S_{(1),0}\,.$$
•Let $a_1,\dots,a_k$ and $b_1,\dots,b_k$ be real numbers such that $a_i < b_i$, $i = 1,\dots,k$. The set $D = [a_1,b_1]\times\cdots\times[a_k,b_k]$ is called a closed $k$-rectangle in $\mathbb{R}^k$. In particular, the set $[0,1]^k = [0,1]\times\cdots\times[0,1]$ is the standard $k$-cube.
•Let $D = [a_1,b_1]\times\cdots\times[a_k,b_k]$ be a closed rectangle in $\mathbb{R}^k$. A parametrized $k$-dimensional surface $S$ in $\mathbb{R}^n$ is a continuous map $S : D\to\mathbb{R}^n$ which assigns a point
$$S : \mathbf{r}(u) = (x^i(u))$$
in $\mathbb{R}^n$ to each point $u = (u^1,\dots,u^k)$ in the rectangle $D$.
•An $(n-1)$-dimensional surface is called a hypersurface. A non-parametrized hypersurface can be described by a single equation
$$F(x) = F(x^1,\dots,x^n) = 0\,,$$
where $F : \mathbb{R}^n\to\mathbb{R}$ is a real-valued function of $n$ coordinates.
•The boundary $\partial S$ of $S$ consists of $(k-1)$-surfaces, $S_{(i),0}$ and $S_{(i),1}$, $i = 1,\dots,k$, called the faces of the $k$-surface $S$. Of course, a $k$-surface $S$ has $2k$ faces. The face $S_{(i),0}$ is parametrized by
$$S_{(i),0} : \mathbf{r}(u^1,\dots,u^{i-1}, a_i, u^{i+1},\dots,u^k)\,,$$
where the $i$-th parameter $u^i$ is fixed at the initial point, i.e. $u^i = a_i$, and the face $S_{(i),1}$ is parametrized by
$$S_{(i),1} : \mathbf{r}(u^1,\dots,u^{i-1}, b_i, u^{i+1},\dots,u^k)\,,$$
where the $i$-th parameter $u^i$ is fixed at the endpoint, i.e. $u^i = b_i$.
•The boundary of the surface $S$ is defined by
$$\partial S = \sum_{i=1}^k (-1)^i\,\big(S_{(i),0} - S_{(i),1}\big)\,.$$
•Let $S_1,\dots,S_m$ be parametrized $k$-surfaces. A formal sum
$$S = \sum_{i=1}^m a_i\, S_i$$
with integer coefficients $a_1,\dots,a_m$ is called a $k$-chain. Usually (but not always) the integers $a_i$ are equal to $1$, $(-1)$ or $0$.
•The product of any $k$-chain $S$ with zero is called the zero chain,
$$0\,S = 0\,.$$
•The addition of $k$-chains and multiplication by integers are defined by
$$\sum_{i=1}^m a_i\, S_i + \sum_{i=1}^m b_i\, S_i = \sum_{i=1}^m (a_i + b_i)\, S_i\,,\qquad b\sum_{i=1}^m a_i\, S_i = \sum_{i=1}^m (b\, a_i)\, S_i\,.$$
•The boundary of a $k$-chain $S$ is a $(k-1)$-chain $\partial S$ defined by
$$\partial\sum_{i=1}^m a_i\, S_i = \sum_{i=1}^m a_i\,\partial S_i\,.$$
•Theorem 3.1.1 For any $k$-chain $S$ there holds
$$\partial(\partial S) = 0\,.$$
3.2 Basic Topology of $\mathbb{R}^n$
•Let $P_0$ be a point in a Euclidean space $\mathbb{R}^n$ and $\varepsilon > 0$ be a positive real number. The open ball $B_\varepsilon(P_0)$ of radius $\varepsilon$ with the center at $P_0$ is the set of all points whose distance from the point $P_0$ is less than $\varepsilon$, that is,
$$B_\varepsilon(P_0) = \{P \mid d(P, P_0) < \varepsilon\}\,.$$
•A neighborhood of a point $P_0$ is any set that contains an open ball centered at $P_0$.
•Let $S$ be a subset of a Euclidean space $\mathbb{R}^n$. A point $P$ is an interior point of $S$ if there is a neighborhood of $P$ that lies completely in $S$.
•A point $P$ is an exterior point of $S$ if there is a neighborhood of $P$ that lies completely outside of $S$.
•A point $P$ is a boundary point of $S$ if it is neither an interior nor an exterior point. If $P$ is a boundary point of $S$, then every neighborhood of $P$ contains points in $S$ and points not in $S$.
•The set of boundary points of $S$ is called the boundary of $S$, denoted by $\partial S$.
•The set of all interior points of $S$ is called the interior of $S$, denoted by $S^o$.
•A set $S$ is called open if every point of $S$ is an interior point of $S$, that is, $S = S^o$.
•A set $S$ is closed if it contains all its boundary points, that is, $S = S^o\cup\partial S$.
•Henceforth, we will consider only open sets and call them regions of space.
•A region $S$ is called connected (or arc-wise connected) if for any two points $P$ and $Q$ in $S$ there is an arc joining $P$ and $Q$ that lies within $S$.
•A connected region, that is, a connected open set, is called a domain.
•A domain $S$ is said to be simply connected if every closed curve lying within $S$ can be continuously deformed to a point in the domain without any part of the curve passing through regions outside the domain.
•Equivalently, a domain is simply connected if for any closed curve lying in the domain there is a surface within the domain that has that curve as its boundary.
•A domain is said to be star-shaped if there is a point $P$ in the domain such that for any other point in the domain the entire line segment joining these two points lies in the domain.
3.3 Curvilinear Coordinate Systems
•We say that a function $f(x) = f(x^1,\dots,x^n)$ is smooth if it has continuous partial derivatives of all orders.
•Let $P$ be a point with Cartesian coordinates $(x^i)$. Suppose that we assign another $n$-tuple of real numbers $(q^i) = (q^1,\dots,q^n)$ to the point $P$, so that
$$x^i = f^i(q)\,,$$
where $f^i(q) = f^i(q^1,\dots,q^n)$ are smooth functions of the variables $q^i$. We will call this a change of coordinates.
•The matrix
$$J = \left(\frac{\partial x^i}{\partial q^j}\right)$$
is called the Jacobian matrix. The determinant of this matrix is called the Jacobian.
•A point $P_0$ at which the Jacobian matrix is invertible, that is, the Jacobian is not zero, $\det J\ne 0$, is called a nonsingular point of the new coordinate system $(q^i)$.
•Theorem 3.3.1 (Inverse Function Theorem) In a neighborhood of any nonsingular point $P_0$ the change of coordinates is invertible. That is, if $x^i = f^i(q)$ and
$$\det\left(\frac{\partial x^i}{\partial q^j}\right)\bigg|_{P_0}\ne 0\,,$$
then for all points sufficiently close to $P_0$ there exist $n$ smooth functions
$$q^i = h^i(x) = h^i(x^1,\dots,x^n)$$
of the variables $(x^i)$ such that
$$f^i(h^1(x),\dots,h^n(x)) = x^i\,,\qquad h^i(f^1(q),\dots,f^n(q)) = q^i\,.$$
The Jacobian matrix of the inverse transformation is the inverse matrix of the Jacobian matrix of the direct transformation, i.e.
$$\frac{\partial x^i}{\partial q^j}\frac{\partial q^j}{\partial x^k} = \delta^i_k\,,\qquad \frac{\partial q^i}{\partial x^j}\frac{\partial x^j}{\partial q^k} = \delta^i_k\,.$$
•The curves $C_i$ along which only one coordinate is varied, while all others are fixed, are called the coordinate curves, that is,
$$x^i = x^i(q^1_0,\dots,q^{i-1}_0,\, q^i,\, q^{i+1}_0,\dots,q^n_0)\,.$$
•The vectors
$$e_i = \frac{\partial\mathbf{r}}{\partial q^i}$$
are tangent to the coordinate curves.
•The surfaces $S_{ij}$ along which only two coordinates are varied, while all others are fixed, are called the coordinate surfaces, that is,
$$x^i = x^i(q^1_0,\dots,q^{i-1}_0,\, q^i,\, q^{i+1}_0,\dots,q^{j-1}_0,\, q^j,\, q^{j+1}_0,\dots,q^n_0)\,.$$
•Theorem 3.3.2 For each point $P$ there are $n$ coordinate curves that pass through $P$. The set of tangent vectors $\{e_i\}$ to these coordinate curves is linearly independent and forms a basis.
•The basis $\{e_i\}$ is not necessarily orthonormal.
•The metric tensor is defined as usual by
$$g_{ij} = e_i\cdot e_j = \sum_{k=1}^n \frac{\partial x^k}{\partial q^i}\frac{\partial x^k}{\partial q^j}\,.$$
•The dual basis of 1-forms is defined by
$$\omega^i = dq^i = \frac{\partial q^i}{\partial x^j}\,\sigma^j\,,$$
where $\sigma^j$ is the standard dual basis.
•The vector
$$d\mathbf{r} = e_i\, dq^i = \frac{\partial\mathbf{r}}{\partial q^i}\, dq^i$$
is called the infinitesimal displacement.
•The arc length, called the interval, is determined by
$$ds^2 = \|d\mathbf{r}\|^2 = d\mathbf{r}\cdot d\mathbf{r} = g_{ij}\, dq^i dq^j\,.$$
•The volume of the parallelepiped spanned by the vectors $\{e_1\, dq^1,\dots,e_n\, dq^n\}$, called the volume element, is
$$dV = \sqrt{|g|}\; dq^1\cdots dq^n\,,$$
where, as usual, $|g| = \det(g_{ij})$.
•A coordinate system is called orthogonal if the vectors $\partial\mathbf{r}/\partial q^i$ are mutually orthogonal. The norms of these vectors,
$$h_i = \left\|\frac{\partial\mathbf{r}}{\partial q^i}\right\|\,,$$
are called the scale factors.
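The following sketch (assuming sympy; spherical coordinates are used only as an example) computes the metric tensor and the scale factors directly from the definition $g_{ij} = \sum_k (\partial x^k/\partial q^i)(\partial x^k/\partial q^j)$:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = sp.Matrix([r*sp.sin(th)*sp.cos(ph),
               r*sp.sin(th)*sp.sin(ph),
               r*sp.cos(th)])

J = x.jacobian([r, th, ph])      # Jacobian matrix (dx^k / dq^i)
g = sp.simplify(J.T * J)         # metric tensor g_ij = (J^T J)_ij
# diagonal metric diag(1, r^2, r^2 sin^2(theta)) confirms orthogonality
assert sp.simplify(g - sp.diag(1, r**2, r**2*sp.sin(th)**2)) == sp.zeros(3, 3)
h = [sp.sqrt(g[i, i]) for i in range(3)]   # scale factors: 1, r, r*sin(theta)
```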
•Then one can introduce the orthonormal basis $\{\hat e_i\}$ by
$$\hat e_i = \left\|\frac{\partial\mathbf{r}}{\partial q^i}\right\|^{-1}\frac{\partial\mathbf{r}}{\partial q^i} = \frac{1}{h_i}\,\frac{\partial\mathbf{r}}{\partial q^i}\,.$$
•For an orthonormal system the vector components are (there is no difference between contravariant and covariant components)
$$v^i = v_i = v\cdot\hat e_i\,.$$
•Then the interval has the form
$$ds^2 = \sum_{i=1}^n h_i^2\,(dq^i)^2\,.$$
•The volume element in an orthogonal coordinate system is
$$dV = h_1\cdots h_n\; dq^1\cdots dq^n\,.$$
3.3.1 Change of Coordinates
•Let $(q^i)$ and $(q'^j)$ be two curvilinear coordinate systems. Then they should be related by a smooth invertible transformation
$$q'^i = f^i(q) = f^i(q^1,\dots,q^n)\,,\qquad q^i = h^i(q') = h^i(q'^1,\dots,q'^n)\,,$$
such that
$$f^i(h(q')) = q'^i\,,\qquad h^i(f(q)) = q^i\,.$$
•The Jacobian matrices are related by
$$\frac{\partial q'^i}{\partial q^j}\frac{\partial q^j}{\partial q'^k} = \delta^i_k\,,\qquad \frac{\partial q^i}{\partial q'^j}\frac{\partial q'^j}{\partial q^k} = \delta^i_k\,,$$
so that the matrix $\left(\dfrac{\partial q'^i}{\partial q^j}\right)$ is inverse to $\left(\dfrac{\partial q^i}{\partial q'^j}\right)$.
•The basis vectors in these coordinate systems are
$$e'_i = \frac{\partial\mathbf{r}}{\partial q'^i}\,,\qquad e_i = \frac{\partial\mathbf{r}}{\partial q^i}\,.$$
Therefore, they are related by a linear transformation
$$e'_i = \frac{\partial q^j}{\partial q'^i}\, e_j\,,\qquad e_j = \frac{\partial q'^i}{\partial q^j}\, e'_i\,.$$
They have the same orientation if the Jacobian of the change of coordinates is positive and opposite orientation if the Jacobian is negative.
•Thus, a set of real numbers $T^{i_1\dots i_p}{}_{j_1\dots j_q}$ is said to represent components of a tensor of type $(p,q)$ ($p$ times contravariant and $q$ times covariant) if they transform under a change of coordinates according to
$$T'^{i_1\dots i_p}{}_{j_1\dots j_q} = \frac{\partial q'^{i_1}}{\partial q^{l_1}}\cdots\frac{\partial q'^{i_p}}{\partial q^{l_p}}\,\frac{\partial q^{m_1}}{\partial q'^{j_1}}\cdots\frac{\partial q^{m_q}}{\partial q'^{j_q}}\, T^{l_1\dots l_p}{}_{m_1\dots m_q}\,.$$
This is the general transformation law of the components of a tensor of type $(p,q)$ with respect to a change of curvilinear coordinates.
•A pseudo-tensor has an additional factor equal to the sign of the Jacobian; that is, the components of a pseudo-tensor of type $(p,q)$ transform as
$$T'^{i_1\dots i_p}{}_{j_1\dots j_q} = \mathrm{sign}\left[\det\left(\frac{\partial q'^i}{\partial q^l}\right)\right]\frac{\partial q'^{i_1}}{\partial q^{l_1}}\cdots\frac{\partial q'^{i_p}}{\partial q^{l_p}}\,\frac{\partial q^{m_1}}{\partial q'^{j_1}}\cdots\frac{\partial q^{m_q}}{\partial q'^{j_q}}\, T^{l_1\dots l_p}{}_{m_1\dots m_q}\,.$$
This is the general transformation law of the components of a pseudo-tensor of type $(p,q)$ with respect to a change of curvilinear coordinates.
3.3.2 Examples
•The polar coordinates in $\mathbb{R}^2$ are introduced by
$$x^1 = \rho\cos\varphi\,,\qquad x^2 = \rho\sin\varphi\,,$$
where $\rho\ge 0$ and $0\le\varphi < 2\pi$. The Jacobian matrix is
$$J = \begin{pmatrix} \cos\varphi & -\rho\sin\varphi\\ \sin\varphi & \rho\cos\varphi \end{pmatrix}.$$
The Jacobian is $\det J = \rho$. Thus, the only singular point of the polar coordinate system is the origin $\rho = 0$. At all nonsingular points the change of variables is invertible and we have
$$\rho = \sqrt{(x^1)^2 + (x^2)^2}\,,\qquad \varphi = \cos^{-1}\!\left(\frac{x^1}{\rho}\right) = \sin^{-1}\!\left(\frac{x^2}{\rho}\right)\,.$$
The coordinate curves of $\rho$ are half-lines (rays) going through the origin with the slope $\tan\varphi$. The coordinate curves of $\varphi$ are circles of radius $\rho$ centered at the origin.
•The cylindrical coordinates in $\mathbb{R}^3$ are introduced by
$$x^1 = \rho\cos\varphi\,,\qquad x^2 = \rho\sin\varphi\,,\qquad x^3 = z\,,$$
where $\rho\ge 0$, $0\le\varphi < 2\pi$ and $z\in\mathbb{R}$. The Jacobian matrix is
$$J = \begin{pmatrix} \cos\varphi & -\rho\sin\varphi & 0\\ \sin\varphi & \rho\cos\varphi & 0\\ 0 & 0 & 1 \end{pmatrix}.$$
The Jacobian is $\det J = \rho$. Thus, the singular points of the cylindrical coordinate system are the points with $\rho = 0$, that is, the whole $z$-axis. At all nonsingular points the change of variables is invertible and we have
$$\rho = \sqrt{(x^1)^2 + (x^2)^2}\,,\qquad \varphi = \cos^{-1}\!\left(\frac{x^1}{\rho}\right) = \sin^{-1}\!\left(\frac{x^2}{\rho}\right)\,,\qquad z = x^3\,.$$
The coordinate curves of $\rho$ are horizontal half-lines in the planes $z = \mathrm{const}$ going through the $z$-axis. The coordinate curves of $\varphi$ are circles in the planes $z = \mathrm{const}$ of radius $\rho$ centered at the $z$-axis. The coordinate curves of $z$ are vertical lines. The coordinate surfaces of $\rho,\varphi$ are horizontal planes. The coordinate surfaces of $\rho, z$ are vertical half-planes going through the $z$-axis. The coordinate surfaces of $\varphi, z$ are vertical cylinders centered at the $z$-axis.
•The spherical coordinates in $\mathbb{R}^3$ are introduced by
$$x^1 = r\sin\theta\cos\varphi\,,\qquad x^2 = r\sin\theta\sin\varphi\,,\qquad x^3 = r\cos\theta\,,$$
where $r\ge 0$, $0\le\varphi < 2\pi$ and $0\le\theta\le\pi$. The Jacobian matrix is
$$J = \begin{pmatrix} \sin\theta\cos\varphi & r\cos\theta\cos\varphi & -r\sin\theta\sin\varphi\\ \sin\theta\sin\varphi & r\cos\theta\sin\varphi & r\sin\theta\cos\varphi\\ \cos\theta & -r\sin\theta & 0 \end{pmatrix}.$$
The Jacobian is $\det J = r^2\sin\theta$. Thus, the singular points of the spherical coordinate system are the points where either $r = 0$, which is the origin, or $\theta = 0$ or $\theta = \pi$, which is the whole $z$-axis. At all nonsingular points the change of variables is invertible and we have
$$r = \sqrt{(x^1)^2 + (x^2)^2 + (x^3)^2}\,,\qquad \varphi = \cos^{-1}\!\left(\frac{x^1}{\rho}\right) = \sin^{-1}\!\left(\frac{x^2}{\rho}\right)\,,\qquad \theta = \cos^{-1}\!\left(\frac{x^3}{r}\right)\,,$$
where $\rho = \sqrt{(x^1)^2 + (x^2)^2}$.
The coordinate curves of $r$ are half-lines going through the origin. The coordinate curves of $\varphi$ are circles of radius $r\sin\theta$ centered at the $z$-axis. The coordinate curves of $\theta$ are vertical half-circles of radius $r$ centered at the origin. The coordinate surfaces of $r,\varphi$ are half-cones around the $z$-axis going through the origin. The coordinate surfaces of $r,\theta$ are vertical half-planes going through the $z$-axis. The coordinate surfaces of $\varphi,\theta$ are spheres of radius $r$ centered at the origin.
3.4 Vector Functions of a Single Variable
•A vector-valued function is a map $v : [a,b]\to E$ from an interval $[a,b]$ of real numbers to a vector space $E$ that assigns a vector $v(t)$ to each real number $t\in[a,b]$.
•We say that a vector-valued function $v(t)$ has a limit $v_0$ as $t\to t_0$, denoted by
$$\lim_{t\to t_0} v(t) = v_0\,,$$
if
$$\lim_{t\to t_0}\|v(t) - v_0\| = 0\,.$$
•A vector-valued function $v(t)$ is continuous at $t_0$ if
$$\lim_{t\to t_0} v(t) = v(t_0)\,.$$
A vector-valued function $v(t)$ is continuous on the interval $[a,b]$ if it is continuous at every point $t$ of this interval.
•A vector-valued function $v(t)$ is differentiable at $t_0$ if there exists the limit
$$\lim_{h\to 0}\frac{v(t_0 + h) - v(t_0)}{h}\,.$$
If this limit exists, it is called the derivative of the function $v(t)$ at $t_0$ and denoted by
$$v'(t_0) = \frac{dv}{dt} = \lim_{h\to 0}\frac{v(t_0 + h) - v(t_0)}{h}\,.$$
If the function $v(t)$ is differentiable at every $t$ in an interval $[a,b]$, then it is called differentiable on that interval.
•Let $\{e_i\}$ be a constant basis in $E$ that does not depend on $t$. Then a vector-valued function $v(t)$ is represented by its components,
$$v(t) = v^i(t)\, e_i\,,$$
and the derivative of $v$ can be computed componentwise:
$$\frac{dv}{dt} = \frac{dv^i}{dt}\, e_i\,.$$
•The derivative is a linear operation, that is,
$$\frac{d}{dt}(u + v) = \frac{du}{dt} + \frac{dv}{dt}\,,\qquad \frac{d}{dt}(c\,v) = c\,\frac{dv}{dt}\,,$$
where $c$ is a scalar constant.
•More generally, the derivative satisfies the product rules
$$\frac{d}{dt}(f\,v) = f\,\frac{dv}{dt} + \frac{df}{dt}\, v\,,$$
$$\frac{d}{dt}(u, v) = \left(\frac{du}{dt},\, v\right) + \left(u,\, \frac{dv}{dt}\right)\,.$$
Similarly, for the exterior product,
$$\frac{d}{dt}(\omega\wedge\sigma) = \frac{d\omega}{dt}\wedge\sigma + \omega\wedge\frac{d\sigma}{dt}\,.$$
By taking the dual of this equation we obtain in $\mathbb{R}^3$ the product rule for the vector product,
$$\frac{d}{dt}(u\times v) = \frac{du}{dt}\times v + u\times\frac{dv}{dt}\,.$$
•Theorem 3.4.1 The derivative $v'(t)$ of a vector-valued function $v(t)$ with constant norm is orthogonal to $v(t)$. That is, if $\|v(t)\| = \mathrm{const}$, then for any $t$,
$$(v'(t), v(t)) = 0\,.$$
3.5 Geometry of Curves
•Let $\mathbf{r} = \mathbf{r}(t)$ be a parametrized curve.
•A curve
$$\mathbf{r} = \mathbf{r}_0 + t\,u$$
is a straight line parallel to the vector $u$ passing through the point $\mathbf{r}_0$.
•Let $u$ and $v$ be two orthonormal vectors. Then the curve
$$\mathbf{r} = \mathbf{r}_0 + a(\cos t\; u + \sin t\; v)$$
is a circle of radius $a$ with the center at $\mathbf{r}_0$ in the plane spanned by the vectors $u$ and $v$.
•Let $\{u, v, w\}$ be an orthonormal triple of vectors. Then the curve
$$\mathbf{r} = \mathbf{r}_0 + b(\cos t\; u + \sin t\; v) + a\,t\, w$$
is the helix of radius $b$ with the axis passing through the point $\mathbf{r}_0$ and parallel to $w$.
•The vertical distance between the coils of the helix, equal to $2\pi|a|$, is called the pitch.
•Let $(q^i)$ be a curvilinear coordinate system. Then a curve can be described by $q^i = q^i(t)$, which in Cartesian coordinates becomes $\mathbf{r} = \mathbf{r}(q(t))$.
•The derivative
$$\frac{d\mathbf{r}}{dt} = \frac{\partial\mathbf{r}}{\partial q^i}\frac{dq^i}{dt}$$
of the vector-valued function $\mathbf{r}(t)$ is called the tangent vector. If $\mathbf{r}(t)$ represents the position of a particle at the time $t$, then $\mathbf{r}'$ is the velocity of the particle.
•The norm
$$\left\|\frac{d\mathbf{r}}{dt}\right\| = \sqrt{g_{ij}(q(t))\,\frac{dq^i}{dt}\frac{dq^j}{dt}}$$
of the velocity is called the speed. Here, as usual,
$$g_{ij} = \frac{\partial\mathbf{r}}{\partial q^i}\cdot\frac{\partial\mathbf{r}}{\partial q^j} = \sum_{k=1}^n\frac{\partial x^k}{\partial q^i}\frac{\partial x^k}{\partial q^j}$$
is the metric tensor in the coordinate system $(q^i)$.
•We will say that a curve $\mathbf{r} = \mathbf{r}(t)$ is smooth if:
a) it has continuous derivatives of all orders,
b) there are no self-intersections, and
c) the speed is non-zero, i.e. $\|\mathbf{r}'(t)\|\ne 0$, at every point on the curve.
•For a curve $\mathbf{r} : [a,b]\to\mathbb{R}^n$, the possibility that $\mathbf{r}(a) = \mathbf{r}(b)$ is allowed. Then it is called a closed curve. A closed curve does not have a boundary.
•A curve consisting of a finite number of smooth arcs joined together without self-intersections is called piece-wise smooth, or just regular.
•For each regular curve there is a natural parametrization, or the unit-speed parametrization, with a natural parameter $s$ such that
$$\left\|\frac{d\mathbf{r}}{ds}\right\| = 1\,.$$
•The orientation of a parametrized curve is determined by the direction of increasing parameter. The point $\mathbf{r}(a)$ is called the initial point and the point $\mathbf{r}(b)$ is called the endpoint.
•Non-parametric curves are not oriented.
•The unit tangent is determined by
$$T = \left\|\frac{d\mathbf{r}}{dt}\right\|^{-1}\frac{d\mathbf{r}}{dt}\,.$$
For the natural parametrization the tangent is the unit tangent, i.e.
$$T = \frac{d\mathbf{r}}{ds} = \frac{\partial\mathbf{r}}{\partial q^i}\frac{dq^i}{ds}\,.$$
•The norm of the displacement vector $d\mathbf{r} = \mathbf{r}'\, dt$,
$$ds = \left\|\frac{d\mathbf{r}}{dt}\right\| dt = \|d\mathbf{r}\| = \sqrt{g_{ij}(q(t))\,\frac{dq^i}{dt}\frac{dq^j}{dt}}\; dt\,,$$
is called the length element.
•The length of a smooth curve $\mathbf{r} : [a,b]\to\mathbb{R}^n$ is defined by
$$L = \int_C ds = \int_a^b\left\|\frac{d\mathbf{r}}{dt}\right\| dt = \int_a^b\sqrt{g_{ij}(q(t))\,\frac{dq^i}{dt}\frac{dq^j}{dt}}\; dt\,.$$
For the natural parametrization the length of the curve is simply
$$L = b - a\,.$$
That is why the parameter $s$ is nothing but the length of the arc of the curve from the initial point $\mathbf{r}(a)$ to the current point $\mathbf{r}(t)$:
$$s(t) = \int_a^t\left\|\frac{d\mathbf{r}}{d\tau}\right\| d\tau\,.$$
This means that
$$\frac{ds}{dt} = \left\|\frac{d\mathbf{r}}{dt}\right\|\,,\qquad \frac{d\mathbf{r}}{dt} = \frac{ds}{dt}\,\frac{d\mathbf{r}}{ds}\,.$$
•The second derivative
$$\mathbf{r}'' = \frac{d^2\mathbf{r}}{dt^2} = \frac{d}{dt}\left(\frac{\partial\mathbf{r}}{\partial q^i}\right)\frac{dq^i}{dt} + \frac{\partial\mathbf{r}}{\partial q^i}\,\frac{d^2 q^i}{dt^2}$$
is called the acceleration.
•In the natural parametrization this gives the natural rate of change of the unit tangent,
$$\frac{dT}{ds} = \frac{d^2\mathbf{r}}{ds^2} = \frac{d}{ds}\left(\frac{\partial\mathbf{r}}{\partial q^i}\right)\frac{dq^i}{ds} + \frac{\partial\mathbf{r}}{\partial q^i}\,\frac{d^2 q^i}{ds^2}\,.$$
•The norm of this vector is called the curvature of the curve:
$$\kappa = \left\|\frac{dT}{ds}\right\| = \left\|\frac{d\mathbf{r}}{dt}\right\|^{-1}\left\|\frac{dT}{dt}\right\|\,.$$
The radius of curvature is defined by
$$\rho = \frac{1}{\kappa}\,.$$
•The normalized rate of change of the unit tangent defines the principal normal
$$N = \rho\,\frac{dT}{ds} = \left\|\frac{dT}{dt}\right\|^{-1}\frac{dT}{dt}\,.$$
•The unit tangent and the principal normal are orthogonal to each other. They form an orthonormal system.
•Theorem 3.5.1 For any smooth curve $\mathbf{r} = \mathbf{r}(t)$, the acceleration $\mathbf{r}''$ lies in the plane spanned by the vectors $T$ and $N$. The orthogonal decomposition of $\mathbf{r}''$ with respect to $T$ and $N$ has the form
$$\mathbf{r}'' = \frac{d\|\mathbf{r}'\|}{dt}\, T + \kappa\,\|\mathbf{r}'\|^2\, N\,.$$
•The vector
$$\frac{dN}{ds} + \kappa\, T$$
is orthogonal to both vectors $T$ and $N$, and hence to the plane spanned by these vectors. In a general space $\mathbb{R}^n$ this vector could be decomposed with respect to a basis in the $(n-2)$-dimensional subspace orthogonal to this plane. We will restrict ourselves below to the case $n = 3$.
•In $\mathbb{R}^3$ one defines the binormal
$$B = T\times N\,.$$
Then the triple $\{T, N, B\}$ is a right-handed orthonormal system called a moving frame.
•By using the orthogonal decomposition of the acceleration one can obtain an alternative formula for the curvature of a curve in $\mathbb{R}^3$ as follows. We compute
$$\mathbf{r}'\times\mathbf{r}'' = \kappa\,\|\mathbf{r}'\|^3\, B\,.$$
Therefore,
$$\kappa = \frac{\|\mathbf{r}'\times\mathbf{r}''\|}{\|\mathbf{r}'\|^3}\,.$$
•The scalar quantity
$$\tau = B\cdot\frac{dN}{ds} = -N\cdot\frac{dB}{ds}$$
is called the torsion of the curve.
•Theorem 3.5.2 (Frenet-Serret Equations) For any smooth curve in $\mathbb{R}^3$ there hold
$$\frac{dT}{ds} = \kappa\, N\,,\qquad \frac{dN}{ds} = -\kappa\, T + \tau\, B\,,\qquad \frac{dB}{ds} = -\tau\, N\,.$$
•Theorem 3.5.3 Any two curves in $\mathbb{R}^3$ with identical curvature and torsion are congruent.
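A numerical illustration for the helix of radius $b$ and pitch $2\pi|a|$ (numpy assumed; the torsion is evaluated with the standard equivalent formula $\tau = [\mathbf{r}',\mathbf{r}'',\mathbf{r}''']/\|\mathbf{r}'\times\mathbf{r}''\|^2$, which is not derived in these notes):

```python
import numpy as np

a, b, t = 0.5, 2.0, 1.3
r1 = np.array([-b*np.sin(t),  b*np.cos(t), a])    # r'(t)
r2 = np.array([-b*np.cos(t), -b*np.sin(t), 0.0])  # r''(t)
r3 = np.array([ b*np.sin(t), -b*np.cos(t), 0.0])  # r'''(t)

c = np.cross(r1, r2)
kappa = np.linalg.norm(c) / np.linalg.norm(r1)**3  # ||r' x r''|| / ||r'||^3
tau = (c @ r3) / (c @ c)                           # [r', r'', r'''] / ||r' x r''||^2
# the helix has constant curvature b/(a^2+b^2) and torsion a/(a^2+b^2)
assert np.isclose(kappa, b / (a**2 + b**2))
assert np.isclose(tau,   a / (a**2 + b**2))
```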
3.6 Geometry of Surfaces
•Let $S$ be a parametrized surface. It can be described in general curvilinear coordinates by $q^i = q^i(u,v)$, where $u\in[a,b]$ and $v\in[c,d]$. Then $\mathbf{r} = \mathbf{r}(q(u,v))$.
•The parameters $u$ and $v$ are called the local coordinates on the surface.
•The curves $\mathbf{r}(u, v_0)$ and $\mathbf{r}(u_0, v)$, with one coordinate being fixed, are called the coordinate curves.
•The tangent vectors to the coordinate curves,
$$\mathbf{r}_u = \frac{\partial\mathbf{r}}{\partial u} = \frac{\partial\mathbf{r}}{\partial q^i}\frac{\partial q^i}{\partial u}\qquad\text{and}\qquad \mathbf{r}_v = \frac{\partial\mathbf{r}}{\partial v} = \frac{\partial\mathbf{r}}{\partial q^i}\frac{\partial q^i}{\partial v}\,,$$
are tangent to the surface.
•A surface is smooth if:
a) $\mathbf{r}(u,v)$ has continuous partial derivatives of all orders,
b) the tangent vectors $\mathbf{r}_u$ and $\mathbf{r}_v$ are non-zero and linearly independent,
c) there are no self-intersections.
•It is allowed that $\mathbf{r}(a,v) = \mathbf{r}(b,v)$ and $\mathbf{r}(u,c) = \mathbf{r}(u,d)$.
•The plane $T_P$ spanned by the tangent vectors $\mathbf{r}_u$ and $\mathbf{r}_v$ at a point $P$ on a smooth surface $S$ is called the tangent plane.
•A surface is smooth if the tangent plane is well defined at every point of the surface, that is, the tangent vectors are linearly independent (nonparallel), so that the plane does not degenerate to a line or a point.
•A surface is piece-wise smooth if it consists of a finite number of smooth pieces joined together.
•The orientation of the surface is achieved by cutting it into small pieces and orienting the small pieces separately. If this can be done consistently for the whole surface, then the surface is called orientable.
•The boundary $\partial S$ of the surface $\mathbf{r} = \mathbf{r}(u,v)$, where $u\in[a,b]$, $v\in[c,d]$, consists of the curves $\mathbf{r}(a,v)$, $\mathbf{r}(b,v)$, $\mathbf{r}(u,c)$ and $\mathbf{r}(u,d)$. A surface without boundary is called closed.
•Remark. There are non-orientable smooth surfaces.
•In $\mathbb{R}^3$ one can define the unit normal vector to the surface by
$$\mathbf{n} = \|\mathbf{r}_u\times\mathbf{r}_v\|^{-1}\,\mathbf{r}_u\times\mathbf{r}_v\,.$$
Notice that
$$\|\mathbf{r}_u\times\mathbf{r}_v\| = \sqrt{\|\mathbf{r}_u\|^2\|\mathbf{r}_v\|^2 - (\mathbf{r}_u\cdot\mathbf{r}_v)^2}\,.$$
In components,
$$(\mathbf{r}_u\times\mathbf{r}_v)_i = \sqrt{g}\,\varepsilon_{ilm}\,\frac{\partial q^l}{\partial u}\frac{\partial q^m}{\partial v}\,,$$
$$\mathbf{r}_u\cdot\mathbf{r}_u = g_{ij}\,\frac{\partial q^i}{\partial u}\frac{\partial q^j}{\partial u}\,,\qquad \mathbf{r}_v\cdot\mathbf{r}_v = g_{ij}\,\frac{\partial q^i}{\partial v}\frac{\partial q^j}{\partial v}\,,\qquad \mathbf{r}_u\cdot\mathbf{r}_v = g_{ij}\,\frac{\partial q^i}{\partial u}\frac{\partial q^j}{\partial v}\,.$$
•The sign of the normal is determined by the orientation of the surface.
•For a smooth surface the unit normal vector field $\mathbf{n}$ varies smoothly over the surface.
•The normal to a closed surface in $\mathbb{R}^3$ is usually oriented in the outward direction.
•In $\mathbb{R}^3$ a surface can also be described by a single equation
$$F(x,y,z) = 0\,.$$
This equation does not prescribe the orientation, though. Since $F$ is constant along the surface,
$$\frac{\partial F}{\partial x^i}\frac{\partial x^i}{\partial u} = 0\,,\qquad \frac{\partial F}{\partial x^i}\frac{\partial x^i}{\partial v} = 0\,.$$
The unit normal vector is then
$$\mathbf{n} = \pm\frac{\mathrm{grad}\, F}{\|\mathrm{grad}\, F\|}\,.$$
The sign here is fixed by the choice of the orientation. In components,
$$(\mathrm{grad}\, F)_i = \frac{\partial F}{\partial x^i}\,.$$
•Let $\mathbf{r}(u,v)$ be a surface, $u = u(t)$, $v = v(t)$ be a curve in the rectangle $D = [a,b]\times[c,d]$, and $\mathbf{r}(u(t), v(t))$ be the image of that curve on the surface $S$. Then the length element of this curve is
$$dl = \left\|\frac{d\mathbf{r}}{dt}\right\| dt\,,$$
or
$$dl^2 = \|d\mathbf{r}\|^2 = h_{ab}\, du^a du^b\,,$$
where $u^1 = u$ and $u^2 = v$, and
$$h_{ab} = g_{ij}\,\frac{\partial q^i}{\partial u^a}\frac{\partial q^j}{\partial u^b}$$
is the induced metric on the surface; the indices $a, b$ take only the values 1, 2. In more detail,
$$dl^2 = h_{11}\, du^2 + 2 h_{12}\, du\, dv + h_{22}\, dv^2\,,$$
and
$$h_{11} = \mathbf{r}_u\cdot\mathbf{r}_u\,,\qquad h_{12} = h_{21} = \mathbf{r}_u\cdot\mathbf{r}_v\,,\qquad h_{22} = \mathbf{r}_v\cdot\mathbf{r}_v\,.$$
•The area of the parallelogram spanned by the vectors $\mathbf{r}_u\, du$ and $\mathbf{r}_v\, dv$ is called the area element:
$$dS = \sqrt{h}\; du\, dv\,,$$
where
$$h = \det h_{ab} = \|\mathbf{r}_u\|^2\|\mathbf{r}_v\|^2 - (\mathbf{r}_u\cdot\mathbf{r}_v)^2\,.$$
•In $\mathbb{R}^3$ the area element can also be written as
$$dS = \|\mathbf{r}_u\times\mathbf{r}_v\|\; du\, dv\,.$$
•The area element of a surface in $\mathbb{R}^3$ parametrized by
$$x = u\,,\quad y = v\,,\quad z = f(u,v)\,,$$
is
$$dS = \sqrt{1 + \left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2}\; dx\, dy\,.$$
•The area element of a surface in $\mathbb{R}^3$ described by one equation $F(x,y,z) = 0$ is
$$dS = \left|\frac{\partial F}{\partial z}\right|^{-1}\|\mathrm{grad}\, F\|\; dx\, dy = \left|\frac{\partial F}{\partial z}\right|^{-1}\sqrt{\left(\frac{\partial F}{\partial x}\right)^2 + \left(\frac{\partial F}{\partial y}\right)^2 + \left(\frac{\partial F}{\partial z}\right)^2}\; dx\, dy$$
if $\partial F/\partial z\ne 0$.
•The area of a surface $S$ described by $\mathbf{r} : D\to\mathbb{R}^n$, where $D = [a,b]\times[c,d]$, is
$$S = \int_S dS = \int_a^b\int_c^d\sqrt{h}\; du\, dv\,.$$
•Let $S$ be a parametrized hypersurface defined by $\mathbf{r} = \mathbf{r}(u) = \mathbf{r}(u^1,\dots,u^{n-1})$. The tangent vectors to the hypersurface are
$$\frac{\partial\mathbf{r}}{\partial u^a}\,,\qquad a = 1, 2,\dots,(n-1)\,.$$
The tangent space at a point $P$ on the hypersurface is the hyperplane equal to the span of these vectors,
$$T = \mathrm{span}\left\{\frac{\partial\mathbf{r}}{\partial u^1},\dots,\frac{\partial\mathbf{r}}{\partial u^{n-1}}\right\}\,.$$
The unit normal to the hypersurface $S$ at the point $P$ is the unit vector $\mathbf{n}$ orthogonal to $T$.
If the hypersurface is described by a single equation
$$F(x) = F(x^1,\dots,x^n) = 0\,,$$
then the normal is
$$\mathbf{n} = \pm\frac{\mathrm{grad}\, F}{\|\mathrm{grad}\, F\|}\,.$$
The sign here is fixed by the choice of the orientation. In components,
$$(dF)_i = \frac{\partial F}{\partial x^i}\,.$$
The arc length of a curve on the hypersurface is
$$dl^2 = h_{ab}\, du^a du^b\,,$$
where
$$h_{ab} = g_{ij}\,\frac{\partial q^i}{\partial u^a}\frac{\partial q^j}{\partial u^b}\,.$$
The area element of the hypersurface is
$$dS = \sqrt{h}\; du^1\cdots du^{n-1}\,.$$
Chapter 4
Vector Analysis
4.1 Vector Functions of Several Variables
•The set $\mathbb{R}^n$ is the set of ordered $n$-tuples of real numbers $x = (x^1,\dots,x^n)$. We call such $n$-tuples points in the space $\mathbb{R}^n$. Note that the points in space (although related to vectors) are not vectors themselves!
•Let $D = [a_1,b_1]\times\cdots\times[a_n,b_n]$ be a closed rectangle in $\mathbb{R}^n$.
•A scalar field is a scalar-valued function of $n$ variables. In other words, it is a map $f : D\to\mathbb{R}$ which assigns a real number $f(x) = f(x^1,\dots,x^n)$ to each point $x = (x^1,\dots,x^n)$ of $D$.
•The hypersurfaces defined by
$$f(x) = c\,,$$
where $c$ is a constant, are called level surfaces of the scalar field $f$.
•The level surfaces do not intersect.
•A vector field is a vector-valued function of $n$ variables; it is a map $v : D\to\mathbb{R}^n$ that assigns a vector $v(x)$ to each point $x = (x^1,\dots,x^n)$ in $D$.
•A tensor field is a tensor-valued function on $D$.
•Let $v$ be a vector field. A point $x_0$ in $\mathbb{R}^n$ such that $v(x_0) = 0$ is called a singular point (or a critical point) of the vector field $v$. A point $x$ is called regular if it is not singular, that is, if $v(x)\ne 0$.
•In a neighborhood of a regular point of a vector field $v$ there is a family of parametrized curves $\mathbf{r}(t)$ such that at each point the vector $v$ is tangent to the curves, that is,
$$\frac{d\mathbf{r}}{dt} = f\,v\,,$$
where $f$ is a scalar field. Such curves are called the flow lines, or stream lines, or characteristic curves, of the vector field $v$.
•Flow lines do not intersect.
•No flow lines pass through a singular point.
•The flow lines of a vector field $v = v^i(x)\, e_i$ can be found from the differential equations
$$\frac{dx^1}{v^1} = \cdots = \frac{dx^n}{v^n}\,.$$
4.2 Directional Derivative and the Gradient
•Let $P_0$ be a point and $u$ be a unit vector. Then $\mathbf{r}(s) = \mathbf{r}_0 + s\,u$ is the equation of the oriented line passing through $P_0$ with the unit tangent $u$.
•Let $f(x)$ be a scalar field. Then the derivative
$$\frac{d}{ds} f(x(s))\Big|_{s=0}$$
at $s = 0$ is called the directional derivative of $f$ at the point $P_0$ in the direction of $u$ and denoted by
$$\nabla_u f = \frac{d}{ds} f(x(s))\Big|_{s=0}\,.$$
•The directional derivatives in the direction of the basis vectors $e_i$ are the partial derivatives,
$$\nabla_{e_i} f = \frac{\partial f}{\partial x^i}\,,$$
which are also denoted by
$$\partial_i f = \frac{\partial f}{\partial x^i}\,.$$
•More generally, let $\mathbf{r}(s)$ be a parametrized curve in the natural parametrization and $u = d\mathbf{r}/ds$ be the unit tangent. Then
$$\nabla_u f = \frac{\partial f}{\partial x^i}\frac{dx^i}{ds}\,.$$
In curvilinear coordinates,
$$\nabla_u f = \frac{\partial f}{\partial q^i}\frac{dq^i}{ds}\,.$$
•The covector (1-form) field with the components $\partial f/\partial q^i$ is denoted by
$$df = \frac{\partial f}{\partial x^i}\, dx^i = \frac{\partial f}{\partial q^i}\, dq^i\,.$$
•Therefore, the 1-forms $dq^i$ form a basis in the dual space of covectors.
•The vector field corresponding to the 1-form $df$ is called the gradient of the scalar field $f$ and denoted by
$$\mathrm{grad}\, f = \nabla f = g^{ij}\,\frac{\partial f}{\partial q^i}\, e_j\,.$$
•The directional derivative is simply the action of the covector $df$ on the vector $u$ (or the inner product of the vectors $\mathrm{grad}\, f$ and $u$):
$$\nabla_u f = \langle df, u\rangle = (\mathrm{grad}\, f, u)\,.$$
•Therefore,
$$\nabla_u f = \|\mathrm{grad}\, f\|\,\cos\theta\,,$$
where $\theta$ is the angle between the gradient and the unit tangent $u$.
•The gradient of a scalar field points in the direction of the maximum rate of increase of the scalar field.
•The maximum value of the directional derivative at a fixed point is equal to the norm of the gradient,
$$\max_u \nabla_u f = \|\mathrm{grad}\, f\|\,.$$
•The minimum value of the directional derivative at a fixed point is equal to the negative norm of the gradient,
$$\min_u \nabla_u f = -\|\mathrm{grad}\, f\|\,.$$
•Let $f$ be a scalar field and $P_0$ be a point where $\mathrm{grad}\, f\ne 0$. Let $\mathbf{r} = \mathbf{r}(s)$ be a curve passing through $P_0$ with the unit tangent $u = d\mathbf{r}/ds$. Suppose that the directional derivative vanishes, $\nabla_u f = 0$. Then the unit tangent $u$ is orthogonal to the gradient $\mathrm{grad}\, f$ at $P_0$. The set of all such curves forms a level surface $f(x) = c$, where $c = f(P_0)$. The gradient $\mathrm{grad}\, f$ is orthogonal to the tangent plane to this surface at $P_0$.
•Theorem 4.2.1 For any smooth scalar field $f$ there is a level surface $f(x) = c$ passing through every point where the gradient of $f$ is non-zero, $\mathrm{grad}\, f\ne 0$. The gradient $\mathrm{grad}\, f$ is orthogonal to this surface at this point.
•A vector field $v$ is called conservative if there is a scalar field $f$ such that
$$v = \mathrm{grad}\, f\,.$$
The scalar field $f$ is called the scalar potential of $v$.
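A quick finite-difference check of $\nabla_u f = (\mathrm{grad}\, f, u)$ for an example scalar field (numpy assumed; the field, the point and the direction are arbitrary choices):

```python
import numpy as np

def f(p):
    x, y, z = p
    return x**2 * y + z

def grad_f(p):
    x, y, z = p
    return np.array([2*x*y, x**2, 1.0])   # gradient computed by hand

p = np.array([1.0, 2.0, 0.5])
u = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # a unit direction

h = 1e-6
fd = (f(p + h*u) - f(p - h*u)) / (2*h)       # central difference along u
assert np.isclose(fd, grad_f(p) @ u, atol=1e-6)
```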
4.3 Exterior Derivative
•Recall that antisymmetric tensors of type $(0,k)$ are called $k$-forms, and antisymmetric tensors of type $(k,0)$ are called $k$-vectors. We denote the space of all $k$-forms by $\Lambda^k$ and the space of all $k$-vectors by $\Lambda_k$.
•The exterior derivative $d$ is an operator
$$d : \Lambda^k\to\Lambda^{k+1}$$
that assigns a $(k+1)$-form to each $k$-form. It is defined as follows.
•A scalar field can also be called a 0-form. The exterior derivative of a 0-form $f$ is the 1-form
$$df = \frac{\partial f}{\partial q^i}\, dq^i$$
with components
$$(df)_i = \frac{\partial f}{\partial q^i}\,.$$
•The exterior derivative of a 1-form is the 2-form $d\sigma$ defined by
$$(d\sigma)_{ij} = \frac{\partial\sigma_j}{\partial q^i} - \frac{\partial\sigma_i}{\partial q^j}\,.$$
•The exterior derivative of a $k$-form $\sigma$ is the $(k+1)$-form $d\sigma$ with components
$$(d\sigma)_{i_1 i_2\dots i_{k+1}} = (k+1)\,\frac{\partial}{\partial q^{[i_1}}\sigma_{i_2\dots i_{k+1}]} = \frac{1}{k!}\sum_{\varphi\in S_{k+1}}\mathrm{sign}(\varphi)\,\frac{\partial}{\partial q^{i_{\varphi(1)}}}\sigma_{i_{\varphi(2)}\dots i_{\varphi(k+1)}}\,.$$
•Theorem 4.3.1 The exterior derivative of a $k$-form is a $(k+1)$-form.
•The exterior derivative plays the role of the gradient for $k$-forms.
•Theorem 4.3.2 The exterior derivative has the property
$$d^2 = 0\,.$$
•Recall that the duality operator $*$ assigns an $(n-k)$-vector to each $k$-form and an $(n-k)$-form to each $k$-vector:
$$* : \Lambda^k\to\Lambda_{n-k}\,,\qquad * : \Lambda_k\to\Lambda^{n-k}\,.$$
•Therefore, one can define the operator
$$*d : \Lambda^k\to\Lambda_{n-k-1}\,,$$
which assigns an $(n-k-1)$-vector to each $k$-form by
$$(*d\sigma)^{i_1\dots i_{n-k-1}} = \frac{1}{k!}\, g^{-1/2}\,\varepsilon^{i_1\dots i_{n-k-1}\, j_1 j_2\dots j_{k+1}}\,\frac{\partial}{\partial q^{j_1}}\sigma_{j_2\dots j_{k+1}}\,.$$
•We can also define the operator
$$*d* : \Lambda_k\to\Lambda_{k-1}$$
acting on $k$-vectors, which assigns a $(k-1)$-vector to each $k$-vector.
•Theorem 4.3.3 For any $k$-vector $A$ with components $A^{i_1\dots i_k}$ there holds
$$(*d*A)^{i_1\dots i_{k-1}} = (-1)^{nk+1}\, g^{-1/2}\,\frac{\partial}{\partial q^j}\left(g^{1/2}\, A^{j i_1\dots i_{k-1}}\right)\,.$$
•The operator $*d*$ plays the role of the divergence of $k$-vectors.
•Theorem 4.3.4 The operator $*d*$ has the property
$$(*d*)^2 = 0\,.$$
•Let $G$ denote the operator that converts $k$-vectors to $k$-forms,
$$G : \Lambda_k\to\Lambda^k\,.$$
That is, if $A^{j_1\dots j_k}$ are the components of a $k$-vector, then the corresponding $k$-form $\sigma = GA$ has components
$$\sigma_{i_1\dots i_k} = g_{i_1 j_1}\cdots g_{i_k j_k}\, A^{j_1\dots j_k}\,.$$
•Then the operator
$$G*d : \Lambda^k\to\Lambda^{n-k-1}$$
assigns an $(n-k-1)$-form to each $k$-form by
$$(G*d\sigma)_{i_1\dots i_{n-k-1}} = \frac{1}{k!}\, g^{-1/2}\, g_{i_1 m_1}\cdots g_{i_{n-k-1} m_{n-k-1}}\,\varepsilon^{m_1\dots m_{n-k-1}\, j_1 j_2\dots j_{k+1}}\,\frac{\partial}{\partial q^{j_1}}\sigma_{j_2\dots j_{k+1}}\,.$$
•The operator $*d$ plays the role of the curl of $k$-forms.
•Further, we can define the operator
$$\delta = G*dG* : \Lambda^k\to\Lambda^{k-1}\,,$$
which assigns a $(k-1)$-form to each $k$-form.
•Theorem 4.3.5 For any $k$-form $\sigma$ with components $\sigma_{i_1\dots i_k}$ there holds
$$(\delta\sigma)_{i_1\dots i_{k-1}} = (-1)^{nk+1}\, g^{-1/2}\, g_{i_1 j_1}\cdots g_{i_{k-1} j_{k-1}}\,\frac{\partial}{\partial q^j}\left(g^{1/2}\, g^{jp}\, g^{j_1 m_1}\cdots g^{j_{k-1} m_{k-1}}\,\sigma_{p m_1\dots m_{k-1}}\right)\,.$$
•The operator $\delta$ plays the role of the divergence of $k$-forms.
•Theorem 4.3.6 The operator $\delta$ has the property
$$\delta^2 = 0\,.$$
•Therefore the operator
$$L = d\delta + \delta d$$
assigns a $k$-form to each $k$-form, that is,
$$L : \Lambda^k\to\Lambda^k\,.$$
•This operator plays the role of the Laplacian of $k$-forms.
•A $k$-form $\sigma$ is called closed if $d\sigma = 0$.
•A $k$-form $\sigma$ is called exact if there is a $(k-1)$-form $\alpha$ such that $\sigma = d\alpha$.
•The 1-form $\sigma$ corresponding to a conservative vector field $v$ is exact, that is, $\sigma = df$.
•Every exact $k$-form is closed.
4.4 Divergence
•The divergence of a vector field $v$ is the scalar field defined by
$$\mathrm{div}\, v = (-1)^{n+1}\, *d*\, v\,,$$
which in local coordinates becomes
$$\mathrm{div}\, v = g^{-1/2}\,\frac{\partial}{\partial q^i}\left(g^{1/2}\, v^i\right)\,,$$
where $g = \det g_{ij}$.
•Theorem 4.4.1 For any vector field $v$ the divergence $\mathrm{div}\, v$ is a scalar field.
•The divergence of a covector field $\sigma$ is
$$\mathrm{div}\,\sigma = g^{-1/2}\,\frac{\partial}{\partial q^i}\left(g^{1/2}\, g^{ij}\,\sigma_j\right)\,.$$
•In Cartesian coordinates this gives simply
$$\mathrm{div}\, v = \partial_i v^i\,.$$
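A sketch of the curvilinear divergence formula in spherical coordinates (sympy assumed; there $\sqrt{g} = r^2\sin\theta$). For the radial field with contravariant components $v^r = 1/r^2$, $v^\theta = v^\varphi = 0$, the divergence vanishes away from the origin:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
sqrt_g = r**2 * sp.sin(th)          # sqrt(det g) for spherical coordinates
v = [1/r**2, 0, 0]                  # contravariant components (v^r, v^theta, v^phi)
q = [r, th, ph]

# div v = g^{-1/2} d_i (g^{1/2} v^i)
div_v = sum(sp.diff(sqrt_g * v[i], q[i]) for i in range(3)) / sqrt_g
assert sp.simplify(div_v) == 0
```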
•A vector field $v$ is called solenoidal if
$$\mathrm{div}\, v = 0\,.$$
•The $(n-1)$-form $*v$ dual to a solenoidal vector field $v$ is closed, that is, $d*v = 0$.
Physical Interpretation of Divergence
•The divergence of a vector field is the net outflux of the vector field per unit volume.
4.5 Curl
•Recall that the operator $*d$ assigns an $(n-k-1)$-vector to a $k$-form. In the case $n = 3$ and $k = 1$ this operator assigns a vector to a 1-form. This enables one to define the curl operator in $\mathbb{R}^3$, which assigns a vector to a covector by
$$\mathrm{curl}\,\sigma = *d\sigma\,,$$
or, in components,
$$(\mathrm{curl}\,\sigma)^i = g^{-1/2}\,\varepsilon^{ijk}\,\frac{\partial}{\partial q^j}\sigma_k = g^{-1/2}\det\begin{pmatrix} e_1 & e_2 & e_3\\ \dfrac{\partial}{\partial q^1} & \dfrac{\partial}{\partial q^2} & \dfrac{\partial}{\partial q^3}\\ \sigma_1 & \sigma_2 & \sigma_3 \end{pmatrix}.$$
•We can also define the curl of a vector field $v$ by
$$(\mathrm{curl}\, v)^i = g^{-1/2}\,\varepsilon^{ijk}\,\frac{\partial}{\partial q^j}\left(g_{km}\, v^m\right)\,.$$
•In Cartesian coordinates we have simply
$$(\mathrm{curl}\,\sigma)^i = \varepsilon^{ijk}\,\partial_j\sigma_k\,.$$
This can also be written in the form
$$\mathrm{curl}\,\sigma = \det\begin{pmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k}\\ \partial_x & \partial_y & \partial_z\\ \sigma_1 & \sigma_2 & \sigma_3 \end{pmatrix}.$$
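A symbolic spot check in Cartesian coordinates (sympy assumed; the fields are arbitrary examples): the curl of $v = (-y, x, 0)$ is the constant field $(0, 0, 2)$, and the curl of a gradient vanishes:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(v):
    # Cartesian curl: (curl v)^i = eps^{ijk} d_j v_k
    return [sp.diff(v[2], y) - sp.diff(v[1], z),
            sp.diff(v[0], z) - sp.diff(v[2], x),
            sp.diff(v[1], x) - sp.diff(v[0], y)]

assert curl([-y, x, sp.Integer(0)]) == [0, 0, 2]

# curl grad f = 0 for a concrete scalar field
f = x**2 * y + sp.sin(z)
grad = [sp.diff(f, s) for s in (x, y, z)]
assert curl(grad) == [0, 0, 0]
```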
•A vector field $v$ in $\mathbb{R}^3$ is called irrotational if
$$\mathrm{curl}\, v = 0\,.$$
•The 1-form $\sigma$ corresponding to an irrotational vector field $v$ is closed, that is, $d\sigma = 0$.
•Each conservative vector field is irrotational.
•Let $v$ be a vector field. If there is a vector field $A$ such that
$$v = \mathrm{curl}\, A\,,$$
then $A$ is called the vector potential of $v$.
•If $v$ has a vector potential, then it is solenoidal.
•If $A$ is a vector potential for $v$, then the 2-form $*v$ dual to $v$ is exact, that is, $*v = d\alpha$, where $\alpha$ is the 1-form corresponding to $A$.
Physical Interpretation of the Curl
•The curl of a vector field measures its tendency to swirl; it is the swirl per unit area.
4.6 Laplacian
•The scalar Laplace operator (or the Laplacian) is the map $\Delta : C^\infty(\mathbb{R}^n)\to C^\infty(\mathbb{R}^n)$ that assigns a scalar field to a scalar field. It is defined by
$$\Delta f = \mathrm{div}\,\mathrm{grad}\, f = g^{-1/2}\,\partial_i\left(g^{1/2}\, g^{ij}\,\partial_j f\right)\,.$$
•In Cartesian coordinates it is simply
$$\Delta f = \partial_i\partial_i f\,.$$
•The Laplacian of a 1-form (covector field) $\sigma$ is defined as follows. First, one obtains a 2-form $d\sigma$ by the exterior derivative. Then one takes the dual of this 2-form to get an $(n-2)$-form $*d\sigma$. Then one acts by the exterior derivative to get an $(n-1)$-form $d*d\sigma$, and, finally, by taking the dual again one gets the 1-form $*d*d\sigma$. Similarly, reversing the order of the operations, one gets the 1-form $d*d*\sigma$. The Laplacian is (minus) the sum of these 1-forms, i.e.
$$\Delta\sigma = -\left(G*dG*d + dG*dG*\right)\sigma\,.$$
The expression of this Laplacian in components is too complicated, in general.
•The component expression for the Laplacian of a vector field is
$$(\Delta v)^i = \left[\, g^{ij}\,\frac{\partial}{\partial q^j}\, g^{-1/2}\,\frac{\partial}{\partial q^k}\, g^{1/2} - g^{-1/2}\,\frac{\partial}{\partial q^j}\, g^{1/2}\, g^{pi} g^{qj}\,\frac{\partial}{\partial q^p}\, g_{qk} + g^{-1/2}\,\frac{\partial}{\partial q^j}\, g^{1/2}\, g^{pj} g^{qi}\,\frac{\partial}{\partial q^p}\, g_{qk}\,\right] v^k\,,$$
where the differential operators act on everything to their right.
•Of course, in Cartesian coordinates this simplifies significantly:
$$(\Delta v)^i = \partial_j\partial_j v^i\,.$$
•In $\mathbb{R}^3$ it can be written as
$$\Delta v = \mathrm{grad}\,\mathrm{div}\, v - \mathrm{curl}\,\mathrm{curl}\, v\,.$$
Interpretation of the Laplacian
•The Laplacian $\Delta$ measures the difference between the value of a scalar field $f(P)$ at a point $P$ and the average of $f$ around this point.
4.7 Differential Vector Identities
•The identities below that involve the vector product and the curl apply only in $\mathbb{R}^3$. The other formulas are valid in arbitrary $\mathbb{R}^n$ in arbitrary coordinate systems:
$$\mathrm{grad}(fh) = (\mathrm{grad}\, f)\, h + f\,\mathrm{grad}\, h$$
$$\mathrm{div}(f v) = (\mathrm{grad}\, f)\cdot v + f\,\mathrm{div}\, v$$
$$\mathrm{grad}\, f(h(x)) = \frac{df}{dh}\,\mathrm{grad}\, h$$
$$\mathrm{curl}(f v) = (\mathrm{grad}\, f)\times v + f\,\mathrm{curl}\, v$$
$$\mathrm{div}(u\times v) = (\mathrm{curl}\, u)\cdot v - u\cdot(\mathrm{curl}\, v)$$
$$\mathrm{curl}\,\mathrm{grad}\, f = 0$$
$$\mathrm{div}\,\mathrm{curl}\, v = 0$$
$$\mathrm{div}(\mathrm{grad}\, f\times\mathrm{grad}\, h) = 0$$
•Let $e_i$ be the standard basis in $\mathbb{R}^n$, $x^i$ be the Cartesian coordinates, $\mathbf{r} = x^i e_i$ be the position (radius) vector field and $r = \|\mathbf{r}\| = \sqrt{x^i x^i}$. Scalar fields that depend only on $r$ and vector fields that depend on $x$ and $r$ are called radial fields. Below $a$ is a constant vector field.
$$\mathrm{div}\,\mathbf{r} = n\,,\qquad \mathrm{curl}\,\mathbf{r} = 0$$
$$\mathrm{grad}(a\cdot\mathbf{r}) = a\,,\qquad \mathrm{curl}(a\times\mathbf{r}) = 2a$$
$$\mathrm{grad}\, r = \frac{\mathbf{r}}{r}\,,\qquad \mathrm{grad}\, f(r) = \frac{df}{dr}\,\frac{\mathbf{r}}{r}$$
$$\mathrm{grad}\,\frac{1}{r} = -\frac{\mathbf{r}}{r^3}\,,\qquad \mathrm{grad}\, r^k = k\, r^{k-2}\,\mathbf{r}$$
$$\Delta f(r) = f'' + \frac{(n-1)}{r}\, f'\,,\qquad \Delta r^k = k(k + n - 2)\, r^{k-2}\,,\qquad \Delta\frac{1}{r^{n-2}} = 0$$
•Some useful formulas when working with radial fields are
$$\partial_i x^k = \delta^k_i\,,\qquad \delta^i_i = n\,.$$
4.8 Orthogonal Curvilinear Coordinate Systems in $\mathbb{R}^3$
•Let $(q^1, q^2, q^3)$ be an orthogonal coordinate system in $\mathbb{R}^3$ and $\{\hat e_1, \hat e_2, \hat e_3\}$ be the corresponding orthonormal basis,
$$\hat e_i = \frac{1}{h_i}\,\frac{\partial\mathbf{r}}{\partial q^i}\,,$$
where
$$h_i = \left\|\frac{\partial\mathbf{r}}{\partial q^i}\right\|$$
are the scale factors. Then for any vector $v = v^i\,\hat e_i$ the contravariant and the covariant components coincide:
$$v^i = v_i = \hat e_i\cdot v\,.$$
•The displacement vector, the interval and the volume element in an orthogonal coordinate system are
$$d\mathbf{r} = h_1\, dq^1\,\hat e_1 + h_2\, dq^2\,\hat e_2 + h_3\, dq^3\,\hat e_3\,,$$
$$ds^2 = h_1^2\,(dq^1)^2 + h_2^2\,(dq^2)^2 + h_3^2\,(dq^3)^2\,,$$
$$dV = h_1 h_2 h_3\; dq^1 dq^2 dq^3\,.$$
•The differential operators introduced above take the following form:
$$\mathrm{grad}\, f = \hat e_1\,\frac{1}{h_1}\frac{\partial f}{\partial q^1} + \hat e_2\,\frac{1}{h_2}\frac{\partial f}{\partial q^2} + \hat e_3\,\frac{1}{h_3}\frac{\partial f}{\partial q^3}$$
$$\mathrm{div}\, v = \frac{1}{h_1 h_2 h_3}\left[\frac{\partial}{\partial q^1}(h_2 h_3\, v_1) + \frac{\partial}{\partial q^2}(h_3 h_1\, v_2) + \frac{\partial}{\partial q^3}(h_1 h_2\, v_3)\right]$$
$$\mathrm{curl}\, v = \hat e_1\,\frac{1}{h_2 h_3}\left[\frac{\partial}{\partial q^2}(h_3 v_3) - \frac{\partial}{\partial q^3}(h_2 v_2)\right] + \hat e_2\,\frac{1}{h_3 h_1}\left[\frac{\partial}{\partial q^3}(h_1 v_1) - \frac{\partial}{\partial q^1}(h_3 v_3)\right] + \hat e_3\,\frac{1}{h_1 h_2}\left[\frac{\partial}{\partial q^1}(h_2 v_2) - \frac{\partial}{\partial q^2}(h_1 v_1)\right]$$
$$\Delta f = \frac{1}{h_1 h_2 h_3}\left[\frac{\partial}{\partial q^1}\!\left(\frac{h_2 h_3}{h_1}\frac{\partial f}{\partial q^1}\right) + \frac{\partial}{\partial q^2}\!\left(\frac{h_3 h_1}{h_2}\frac{\partial f}{\partial q^2}\right) + \frac{\partial}{\partial q^3}\!\left(\frac{h_1 h_2}{h_3}\frac{\partial f}{\partial q^3}\right)\right]$$
•Cylindrical coordinates:
$$d\mathbf{r} = d\rho\,\hat e_\rho + \rho\, d\varphi\,\hat e_\varphi + dz\,\hat e_z$$
$$ds^2 = d\rho^2 + \rho^2\, d\varphi^2 + dz^2$$
$$dV = \rho\; d\rho\, d\varphi\, dz$$
$$\mathrm{grad}\, f = \hat e_\rho\,\partial_\rho f + \hat e_\varphi\,\frac{1}{\rho}\,\partial_\varphi f + \hat e_z\,\partial_z f$$
$$\mathrm{div}\, v = \frac{1}{\rho}\,\partial_\rho(\rho\, v_\rho) + \frac{1}{\rho}\,\partial_\varphi v_\varphi + \partial_z v_z$$
$$\mathrm{curl}\, v = \frac{1}{\rho}\det\begin{pmatrix} \hat e_\rho & \rho\,\hat e_\varphi & \hat e_z\\ \partial_\rho & \partial_\varphi & \partial_z\\ v_\rho & \rho\, v_\varphi & v_z \end{pmatrix}$$
$$\Delta f = \frac{1}{\rho}\,\partial_\rho(\rho\,\partial_\rho f) + \frac{1}{\rho^2}\,\partial^2_\varphi f + \partial^2_z f$$
•Spherical coordinates:
$$d\mathbf{r} = dr\,\hat e_r + r\, d\theta\,\hat e_\theta + r\sin\theta\, d\varphi\,\hat e_\varphi$$
$$ds^2 = dr^2 + r^2\, d\theta^2 + r^2\sin^2\theta\, d\varphi^2$$
$$dV = r^2\sin\theta\; dr\, d\theta\, d\varphi$$
$$\mathrm{grad}\, f = \hat e_r\,\partial_r f + \hat e_\theta\,\frac{1}{r}\,\partial_\theta f + \hat e_\varphi\,\frac{1}{r\sin\theta}\,\partial_\varphi f$$
$$\mathrm{div}\, v = \frac{1}{r^2}\,\partial_r(r^2 v_r) + \frac{1}{r\sin\theta}\,\partial_\theta(\sin\theta\, v_\theta) + \frac{1}{r\sin\theta}\,\partial_\varphi v_\varphi$$
$$\mathrm{curl}\, v = \frac{1}{r^2\sin\theta}\det\begin{pmatrix} \hat e_r & r\,\hat e_\theta & r\sin\theta\,\hat e_\varphi\\ \partial_r & \partial_\theta & \partial_\varphi\\ v_r & r\, v_\theta & r\sin\theta\, v_\varphi \end{pmatrix}$$
$$\Delta f = \frac{1}{r^2}\,\partial_r(r^2\,\partial_r f) + \frac{1}{r^2\sin\theta}\,\partial_\theta(\sin\theta\,\partial_\theta f) + \frac{1}{r^2\sin^2\theta}\,\partial^2_\varphi f$$
Chapter 5
Integration
5.1 Line Integrals
•Let $C$ be a smooth curve described by $\mathbf{r}(t)$, where $t\in[a,b]$. The length of the curve is defined by
$$L = \int_C ds = \int_a^b\left\|\frac{d\mathbf{r}}{dt}\right\| dt\,.$$
•Let $f$ be a scalar field. Then the line integral of the scalar field $f$ is
$$\int_C f\, ds = \int_a^b f(x(t))\left\|\frac{d\mathbf{r}}{dt}\right\| dt\,.$$
•If $v$ is a vector field, then the line integral of the vector field $v$ along the curve $C$ is defined by
$$\int_C v\cdot d\mathbf{r} = \int_a^b v(x(t))\cdot\frac{d\mathbf{r}}{dt}\, dt\,.$$
•In components, the line integral of a vector field takes the form
$$\int_C v\cdot d\mathbf{r} = \int_C v_i\, dq^i = \int_C v_1\, dq^1 + \cdots + v_n\, dq^n\,,$$
where $v_i = g_{ij}\, v^j$ are the covariant components of the vector field.
•The expression
$$\sigma = v_i\, dq^i = v_1\, dq^1 + \cdots + v_n\, dq^n$$
is called a differential 1-form. Each covector naturally defines a differential form. That is why it is also called a 1-form.
•If $C$ is a closed curve, then the line integral of a vector field is denoted by
$$\oint_C v\cdot d\mathbf{r}$$
and is called the circulation of the vector field $v$ about the closed curve $C$.
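A sketch evaluating a circulation by discretizing the parametrized line integral (numpy assumed; the field $v = (-y, x)$ and the unit circle are example choices, with exact answer $2\pi$):

```python
import numpy as np

t = np.linspace(0, 2*np.pi, 20001)
x, y = np.cos(t), np.sin(t)          # unit circle r(t)
vx, vy = -y, x                       # v = (-y, x) evaluated on the curve
dxdt, dydt = -np.sin(t), np.cos(t)   # dr/dt

# circulation = integral of v . dr/dt dt
circulation = np.trapz(vx*dxdt + vy*dydt, t)
assert np.isclose(circulation, 2*np.pi)
```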
5.2 Surface Integrals
•Let $S$ be a smooth parametrized surface described by $\mathbf{r} : D\to\mathbb{R}^n$, where $D = [a,b]\times[c,d]$. The surface integral of a scalar field $f$ is
$$\int_S f\, dS = \int_a^b\int_c^d f(x(u,v))\,\sqrt{h}\; du\, dv\,,$$
where $u^1 = u$, $u^2 = v$, $h = \det h_{ab}$, and $h_{ab}$ is the induced metric on the surface,
$$h_{ab} = g_{ij}\,\frac{\partial q^i}{\partial u^a}\frac{\partial q^j}{\partial u^b}\,.$$
•Let $A$ be an antisymmetric tensor field of type $(0,2)$ with components $A_{ij}$. It naturally defines the differential 2-form
$$\alpha = \sum_{i<j} A_{ij}\, dq^i\wedge dq^j = A_{12}\, dq^1\wedge dq^2 + \cdots + A_{1n}\, dq^1\wedge dq^n + A_{23}\, dq^2\wedge dq^3 + \cdots + A_{n-1,n}\, dq^{n-1}\wedge dq^n\,.$$
•Then the surface integral of a 2-form $\alpha$ is defined by
$$\int_S\alpha = \int_S\sum_{i<j} A_{ij}\, dq^i\wedge dq^j = \int_a^b\int_c^d\sum_{i<j} A_{ij}\, J^{ij}\; du\, dv\,,$$
where
$$J^{ij} = \frac{\partial q^i}{\partial u}\frac{\partial q^j}{\partial v} - \frac{\partial q^j}{\partial u}\frac{\partial q^i}{\partial v}\,.$$
•In $\mathbb{R}^3$ every 2-form defines a dual vector. Therefore, one can integrate vectors over a surface. Let $v$ be a vector field in $\mathbb{R}^3$. Then the dual 2-form is
$$A_{ij} = \sqrt{g}\,\varepsilon_{ijk}\, v^k\,,$$
or
$$A_{12} = \sqrt{g}\, v^3\,,\qquad A_{13} = -\sqrt{g}\, v^2\,,\qquad A_{23} = \sqrt{g}\, v^1\,.$$
Therefore,
$$\alpha = \sqrt{g}\left(v^3\, dq^1\wedge dq^2 - v^2\, dq^1\wedge dq^3 + v^1\, dq^2\wedge dq^3\right)\,.$$
Then the surface integral of the vector field $v$, called the total flux of the vector field through the surface, is
$$\int_S\alpha = \int_S v\cdot\mathbf{n}\; dS = \int_a^b\int_c^d [v, \mathbf{r}_u, \mathbf{r}_v]\; du\, dv\,,$$
where
$$\mathbf{n} = \|\mathbf{r}_u\times\mathbf{r}_v\|^{-1}\,\mathbf{r}_u\times\mathbf{r}_v$$
is the unit normal to the surface and
$$[v, \mathbf{r}_u, \mathbf{r}_v] = \mathrm{vol}(v, \mathbf{r}_u, \mathbf{r}_v) = \sqrt{g}\,\varepsilon_{ijk}\, v^i\,\frac{\partial q^j}{\partial u}\frac{\partial q^k}{\partial v}\,.$$
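A sketch of a total flux computation (numpy assumed; the field and surface are example choices): for $v = \mathbf{r}/\|\mathbf{r}\|^3$ on the unit sphere, $v\cdot\mathbf{n} = 1$, so the flux reduces to the area integral $\int\sin\theta\; d\theta\, d\varphi = 4\pi$:

```python
import numpy as np

n_th, n_ph = 400, 400
th = (np.arange(n_th) + 0.5) * np.pi / n_th        # midpoint grid in theta
ph = (np.arange(n_ph) + 0.5) * 2*np.pi / n_ph      # midpoint grid in phi
TH, PH = np.meshgrid(th, ph, indexing='ij')

# on the unit sphere v . n = 1, so the integrand is just the area element
integrand = np.sin(TH)                             # ||r_u x r_v|| = sin(theta)
flux = integrand.sum() * (np.pi/n_th) * (2*np.pi/n_ph)
assert np.isclose(flux, 4*np.pi, rtol=1e-4)
```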
•Similarly, the integral of a differential $k$-form
$$\alpha = \sum_{i_1<\cdots<i_k} A_{i_1\dots i_k}\, dq^{i_1}\wedge\cdots\wedge dq^{i_k}$$
with components $A_{i_1\dots i_k}$ over a $k$-dimensional surface $q^i = q^i(u^1,\dots,u^k)$, $u^i\in[a_i,b_i]$, is defined by
$$\int_S\alpha = \int_{a_k}^{b_k}\cdots\int_{a_1}^{b_1}\sum_{i_1<\cdots<i_k} A_{i_1\dots i_k}\, J^{i_1\dots i_k}\; du^1\cdots du^k\,,$$
where
$$J^{i_1\dots i_k} = k!\,\frac{\partial q^{[i_1}}{\partial u^1}\cdots\frac{\partial q^{i_k]}}{\partial u^k}\,.$$
•The surface integral over a closed surface $S$ without boundary is denoted by
$$\oint_S\alpha\,.$$
•In the case $k = n-1$ we obtain the integral of an $(n-1)$-form $\alpha$ over a hypersurface,
$$\int_S\alpha = \int_{a_{n-1}}^{b_{n-1}}\cdots\int_{a_1}^{b_1}\sum_{i_1<\cdots<i_{n-1}} A_{i_1\dots i_{n-1}}\, J^{i_1\dots i_{n-1}}\; du^1\cdots du^{n-1}\,.$$
Let $\mathbf{n}$ be the unit vector orthogonal to the hypersurface and $v = *\alpha$ be the vector field dual to the $(n-1)$-form $\alpha$. Then
$$\int_S\alpha = \int_{a_{n-1}}^{b_{n-1}}\cdots\int_{a_1}^{b_1} v\cdot\mathbf{n}\,\sqrt{h}\; du^1\cdots du^{n-1}\,.$$
This defines the total flux of the vector field $v$ through the hypersurface $S$. The normal can be determined by
$$\sqrt{h}\; n_j = \frac{1}{(n-1)!}\,\sqrt{g}\,\varepsilon_{j i_1\dots i_{n-1}}\,\frac{\partial q^{i_1}}{\partial u^1}\cdots\frac{\partial q^{i_{n-1}}}{\partial u^{n-1}}\,.$$
5.3 Volume Integrals
•Let $D = [a_1,b_1]\times\cdots\times[a_n,b_n]$ be a domain in $\mathbb{R}^n$ described in local coordinates $(q^i)$ by $q^i\in[a_i,b_i]$.
•The volume element in general curvilinear coordinates is
$$dV = \sqrt{g}\; dq^1\cdots dq^n\,,$$
where $g = \det(g_{ij})$.
•The volume of the region $D$ is
$$V = \int_D dV = \int_{a_1}^{b_1}\cdots\int_{a_n}^{b_n}\sqrt{g}\; dq^1\cdots dq^n\,.$$
•The volume integral of a scalar field $f(x)$ is
$$\int_D f\, dV = \int_{a_1}^{b_1}\cdots\int_{a_n}^{b_n} f(x(q))\,\sqrt{g}\; dq^1\cdots dq^n\,.$$
5.4 Fundamental Integral Theorems
5.4.1 Fundamental Theorem of Line Integrals
•Theorem 5.4.1 Let $C$ be a smooth curve parametrized by $\mathbf{r}(t)$, $t\in[a,b]$. Then for any scalar field $f$ (a 0-form),
$$\int_C df = \int_C\frac{\partial f}{\partial q^i}\, dq^i = \int_C\mathrm{grad}\, f\cdot d\mathbf{r} = f(x(b)) - f(x(a))\,.$$
•The line integral of a conservative vector field does not depend on the shape of the curve but only on its endpoints.
•Corollary 5.4.1 The circulation of a smooth conservative vector field over a closed smooth curve is zero,
$$\oint_C\mathrm{grad}\, f\cdot d\mathbf{r} = 0\,.$$
5.4.2 Green's Theorem
•Theorem 5.4.2 Let $x$ and $y$ be the Cartesian coordinates in $\mathbb{R}^2$. Let $U$ be a bounded region in $\mathbb{R}^2$ with the boundary $\partial U$, which is a closed curve oriented counterclockwise. Then for any 1-form $\alpha = A_i\, dx^i = A_1\, dx + A_2\, dy$,
$$\int_U\left(\frac{\partial A_2}{\partial x} - \frac{\partial A_1}{\partial y}\right) dx\, dy = \oint_{\partial U}\left(A_1\, dx + A_2\, dy\right)\,.$$
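A numerical check of Green's theorem on the unit square (numpy assumed; $A_1 = -y^2$, $A_2 = x^2$ are example components, so $\partial A_2/\partial x - \partial A_1/\partial y = 2x + 2y$):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)

# area integral of 2x + 2y over [0,1]^2 via a tensor-product trapezoid rule
X, Y = np.meshgrid(t, t, indexing='ij')
lhs = np.trapz(np.trapz(2*X + 2*Y, t, axis=1), t)

# boundary integral, traversed counterclockwise
bottom = 0.0                            # A1 = -y^2 = 0 on y = 0
right  = np.trapz(np.ones_like(t), t)   # A2 = x^2 = 1 on x = 1, dy > 0
top    = np.trapz(np.ones_like(t), t)   # -A1 = 1 on y = 1, traversed with dx < 0
left   = 0.0                            # A2 = x^2 = 0 on x = 0
rhs = bottom + right + top + left

assert np.isclose(lhs, rhs)             # both sides equal 2
```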
5.4.3 Stokes's Theorem
•Theorem 5.4.3 Let $S$ be a bounded surface in $\mathbb{R}^3$ with the boundary $\partial S$ oriented consistently with the surface $S$. Then for any vector field $v$,
$$\int_S\mathrm{curl}\, v\cdot\mathbf{n}\; dS = \oint_{\partial S} v\cdot d\mathbf{r}\,.$$
5.4.4 Gauss's Theorem
• Theorem 5.4.4 Let D be a bounded domain in R³ with the boundary ∂D oriented by an outward normal. Then for any vector field v
$$
\int_D \operatorname{div} v\, dV = \oint_{\partial D} v \cdot n\, dS \,.
$$
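• A symbolic check (our sketch; the cubic field on the unit ball is an arbitrary choice), with both sides equal to 12π/5:

```python
# Sketch: check Gauss's theorem for v = (x^3, y^3, z^3) on the unit ball.
import sympy as sp

u, v, rho = sp.symbols('u v rho', positive=True)
x, y, z = sp.symbols('x y z')
field = sp.Matrix([x**3, y**3, z**3])
div_v = sum(c.diff(s) for c, s in zip(field, (x, y, z)))  # 3(x^2+y^2+z^2)
sph = {x: rho*sp.sin(u)*sp.cos(v), y: rho*sp.sin(u)*sp.sin(v), z: rho*sp.cos(u)}
lhs = sp.integrate(sp.simplify(div_v.subs(sph)) * rho**2*sp.sin(u),
                   (rho, 0, 1), (u, 0, sp.pi), (v, 0, 2*sp.pi))
# flux through the unit sphere via the triple product [v, r_u, r_v]
r = sp.Matrix([sp.sin(u)*sp.cos(v), sp.sin(u)*sp.sin(v), sp.cos(u)])
v_on_S = field.subs(dict(zip((x, y, z), list(r))))
rhs = sp.integrate(sp.Matrix.hstack(v_on_S, r.diff(u), r.diff(v)).det(),
                   (u, 0, sp.pi), (v, 0, 2*sp.pi))
print(sp.simplify(lhs - rhs))  # 0 (both sides equal 12*pi/5)
```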
5.4.5 General Stokes's Theorem
• Theorem 5.4.5 Let S be a bounded smooth k-dimensional surface in Rⁿ with the boundary ∂S, which is a closed (k−1)-dimensional surface oriented consistently with S. Let
$$
\alpha = \sum_{i_1 < \cdots < i_{k-1}} A_{i_1 \dots i_{k-1}}\, dq^{i_1} \wedge \cdots \wedge dq^{i_{k-1}}
$$
be a smooth (k−1)-form. Then
$$
\int_S d\alpha = \oint_{\partial S} \alpha \,.
$$
• In components this formula takes the form
$$
\int_S \frac{\partial A_{i_1 \dots i_{k-1}}}{\partial q^j}\,
\frac{\partial q^{[j}}{\partial u^1}\, \frac{\partial q^{i_1}}{\partial u^2} \cdots
\frac{\partial q^{i_{k-1}]}}{\partial u^k}\, du^1 \cdots du^k
= \int_{\partial S} A_{i_1 \dots i_{k-1}}\,
\frac{\partial q^{[i_1}}{\partial u^1} \cdots \frac{\partial q^{i_{k-1}]}}{\partial u^{k-1}}\,
du^1 \cdots du^{k-1} \,.
$$
Chapter 6
Potential Theory
6.1 Simply Connected Domains
6.2 Conservative Vector Fields
6.2.1 Scalar Potential
6.3 Irrotational Vector Fields
6.4 Solenoidal Vector Fields
6.4.1 Vector Potential
6.5 Laplace Equation
6.5.1 Harmonic Functions
6.6 Poisson Equation
6.6.1 Dirac Delta Function
6.6.2 Point Sources
6.6.3 Dirichlet Problem
6.6.4 Neumann Problem
6.6.5 Green's Functions
6.7 Fundamental Theorem of Vector Analysis
Chapter 7
Basic Concepts of Differential Geometry
7.1 Manifolds
7.2 Differential Forms
7.2.1 Exterior Product
7.2.2 Exterior Derivative
7.3 Integration of Differential Forms
7.4 General Stokes's Theorem
7.5 Tensors in General Curvilinear Coordinate Systems
7.5.1 Covariant Derivative
Chapter 8
Applications
8.1 Mechanics
8.1.1 Inertia Tensor
8.1.2 Angular Momentum Tensor
8.2 Elasticity
8.2.1 Strain Tensor
8.2.2 Stress Tensor
8.3 Fluid Dynamics
8.3.1 Continuity Equation
8.3.2 Tensor of Momentum Flux Density
8.3.3 Euler's Equations
8.3.4 Rate of Deformation Tensor
8.3.5 Navier-Stokes Equations
8.4 Heat and Diffusion Equations
8.5 Electrodynamics
8.5.1 Tensor of Electromagnetic Field
8.5.2 Maxwell Equations
8.5.3 Scalar and Vector Potentials
8.5.4 Wave Equations
8.5.5 D'Alembert Operator
8.5.6 Energy-Momentum Tensor
8.6 Basic Concepts of Special and General Relativity