Linear Algebra

The mathematics of vectors, vector spaces, linear mappings, and systems of linear equations

Advanced Level · 120 min

Introduction to Linear Algebra

Linear algebra is a fundamental branch of mathematics that deals with vectors, vector spaces (also called linear spaces), linear transformations, and systems of linear equations. It has extensive applications in natural sciences, engineering, computer science, economics, and more.

Why Study Linear Algebra?

  • Foundation for advanced mathematics and physics
  • Essential for computer graphics and machine learning
  • Used in solving systems of equations in engineering
  • Basis for quantum mechanics formulation
  • Critical for data analysis and statistics

Linear Algebra in Action

This 3D visualization shows how linear transformations can rotate, scale, and skew objects in space:

[Interactive 3D transformation visualization]

Vectors and Vector Spaces

Definition: Vector

A vector is a mathematical object that has both magnitude and direction. In ℝ² (2D space), a vector can be represented as an ordered pair (x, y). In ℝ³ (3D space), it's (x, y, z), and so on for higher dimensions.

Vector Operations

Vectors support several fundamental operations:

Operation             | Notation | Description
--------------------- | -------- | -----------------------------------------------------------
Vector Addition       | v + w    | Component-wise addition
Scalar Multiplication | a·v      | Multiply each component by the scalar a
Dot Product           | v · w    | Sum of component-wise products
Cross Product         | v × w    | Defined only in ℝ³; produces a vector perpendicular to both

Example: Vector Operations

Let v = (2, 5) and w = (3, -1):

Addition: v + w = (2+3, 5+(-1)) = (5, 4)

Scalar Multiplication: 3·v = (3×2, 3×5) = (6, 15)

Dot Product: v · w = (2×3) + (5×(-1)) = 6 - 5 = 1
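
If you want to experiment, these operations map directly onto NumPy arrays. A minimal sketch (assuming NumPy is installed) reproducing the example above:

```python
import numpy as np

v = np.array([2, 5])
w = np.array([3, -1])

print(v + w)         # vector addition: [5 4]
print(3 * v)         # scalar multiplication: [ 6 15]
print(np.dot(v, w))  # dot product: 1
```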

Definition: Vector Space

A vector space (or linear space) is a collection of vectors that can be added together and multiplied ("scaled") by numbers (called scalars in this context). Scalars are often real numbers, but can also be complex numbers.

Theorem: Vector Space Axioms

A set V with operations of addition and scalar multiplication is a vector space if for all u, v, w ∈ V and all scalars a, b, the following axioms hold:

  1. u + v ∈ V (closure under addition)
  2. u + v = v + u (commutativity of addition)
  3. (u + v) + w = u + (v + w) (associativity of addition)
  4. There exists 0 ∈ V such that v + 0 = v (additive identity)
  5. For every v ∈ V, there exists -v ∈ V such that v + (-v) = 0 (additive inverse)
  6. a·v ∈ V (closure under scalar multiplication)
  7. a·(b·v) = (ab)·v (associativity of scalar multiplication)
  8. 1·v = v (multiplicative identity)
  9. a·(u + v) = a·u + a·v (distributivity of scalar multiplication over vector addition)
  10. (a + b)·v = a·v + b·v (distributivity of scalar multiplication over scalar addition)
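
The axioms are abstract, but several can be spot-checked numerically for ℝ³. A small sketch, assuming NumPy and randomly sampled vectors:

```python
import numpy as np

rng = np.random.default_rng(42)
u, v, w = rng.random(3), rng.random(3), rng.random(3)
a, b = 2.0, -3.5

assert np.allclose(u + v, v + u)                # axiom 2: commutativity
assert np.allclose((u + v) + w, u + (v + w))    # axiom 3: associativity
assert np.allclose(v + np.zeros(3), v)          # axiom 4: additive identity
assert np.allclose(a * (b * v), (a * b) * v)    # axiom 7
assert np.allclose(a * (u + v), a * u + a * v)  # axiom 9
assert np.allclose((a + b) * v, a * v + b * v)  # axiom 10
print("All sampled axioms hold.")
```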

Matrices and Matrix Operations

Definition: Matrix

A matrix is a rectangular array of numbers arranged in rows and columns. An m×n matrix has m rows and n columns. The entry in the i-th row and j-th column is typically denoted aᵢⱼ.

Example: 2×3 Matrix

A = | 1 2 3 |
    | 4 5 6 |

Here, a₁₁ = 1, a₁₂ = 2, a₂₃ = 6, etc.

Matrix Operations

Operation             | Notation | Description                                  | Requirements
--------------------- | -------- | -------------------------------------------- | -------------------------
Matrix Addition       | A + B    | Component-wise addition                      | Same dimensions
Scalar Multiplication | c·A      | Multiply each entry by c                     | —
Matrix Multiplication | A·B      | Dot products of rows of A with columns of B  | Columns of A = rows of B
Transpose             | Aᵀ       | Rows become columns and vice versa           | —
Determinant           | det(A)   | Scalar value computed from a square matrix   | Square matrix
Inverse               | A⁻¹      | Matrix whose product with A is the identity  | Square, non-singular

Example: Matrix Multiplication

Let A be 2×3 and B be 3×2:

A = | 1 2 3 |    B = |  7  8 |
    | 4 5 6 |        |  9 10 |
                     | 11 12 |

Then A·B is:

A·B = | 1×7 + 2×9 + 3×11   1×8 + 2×10 + 3×12 |
      | 4×7 + 5×9 + 6×11   4×8 + 5×10 + 6×12 |

    = |  58  64 |
      | 139 154 |
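
A quick way to verify this product is NumPy's `@` operator; a minimal check of the computation above:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[ 7,  8],
              [ 9, 10],
              [11, 12]])

print(A @ B)  # matrix product
# [[ 58  64]
#  [139 154]]
```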

Theorem: Properties of Matrix Multiplication

For matrices A, B, C of appropriate dimensions:

  1. Associative: (AB)C = A(BC)
  2. Distributive: A(B + C) = AB + AC and (A + B)C = AC + BC
  3. Not commutative: AB ≠ BA in general
  4. Identity: AI = IA = A, where I is the identity matrix
  5. Transpose: (AB)ᵀ = BᵀAᵀ
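
These properties can also be demonstrated numerically. A short sketch, assuming NumPy, checking non-commutativity and the transpose rule:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

print(np.array_equal(A @ B, B @ A))       # False: AB != BA in general
print(np.allclose((A @ B).T, B.T @ A.T))  # True: (AB)^T = B^T A^T
print(np.allclose(A @ np.eye(2), A))      # True: AI = A
```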

Systems of Linear Equations

A system of linear equations is a collection of one or more linear equations involving the same set of variables. For example:

3x + 2y - z = 1
2x - 2y + 4z = -2
-x + ½y - z = 0

Matrix Representation

Systems of linear equations can be represented in matrix form as AX = B, where A is the coefficient matrix, X is the column vector of unknowns, and B is the column vector of constants.

Example: Matrix Representation

The system above can be written as:

|  3  2 -1 | | x |   |  1 |
|  2 -2  4 | | y | = | -2 |
| -1  ½ -1 | | z |   |  0 |

Solving Systems

There are several methods to solve systems of linear equations:

Method               | Description                                   | When to Use
-------------------- | --------------------------------------------- | -------------------------------------
Gaussian Elimination | Row operations to reach row-echelon form      | General method for any system
Gauss-Jordan         | Further reduction to reduced row-echelon form | When the explicit solution is needed
Cramer's Rule        | Uses determinants to find the solution        | Small systems with unique solutions
Matrix Inversion     | X = A⁻¹B                                      | When A is square and invertible

Example: Gaussian Elimination

Solve the system:

x + y + z = 6
2y + 5z = -4
2x + 5y - z = 27

Solution:

  1. Write the augmented matrix:

     | 1 1  1 |  6 |
     | 0 2  5 | -4 |
     | 2 5 -1 | 27 |

  2. R3 ← R3 - 2R1:

     | 1 1  1 |  6 |
     | 0 2  5 | -4 |
     | 0 3 -3 | 15 |

  3. R3 ← R3 - (3/2)R2:

     | 1 1     1 |  6 |
     | 0 2     5 | -4 |
     | 0 0 -10.5 | 21 |

  4. Back substitution: -10.5z = 21 ⇒ z = -2; then 2y + 5(-2) = -4 ⇒ y = 3; then x + 3 - 2 = 6 ⇒ x = 5.
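
The elimination steps above can be automated. Below is a sketch, not official lesson code: a `gaussian_elimination` helper (a name chosen here for illustration) implementing forward elimination with partial pivoting and back substitution in NumPy:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b via forward elimination with partial pivoting,
    then back substitution. Assumes A is square and non-singular."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest entry in column k to the pivot row
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        # Eliminate entries below the pivot
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[1, 1, 1], [0, 2, 5], [2, 5, -1]])
b = np.array([6, -4, 27])
print(gaussian_elimination(A, b))  # [ 5.  3. -2.]
```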

Determinants and Inverses

Definition: Determinant

The determinant is a scalar value that can be computed from the elements of a square matrix. It encodes important properties of the matrix and the linear transformation it represents.

Calculating Determinants

For a 2×2 matrix:

det | a b | = ad - bc
    | c d |

For a 3×3 matrix (Sarrus' rule):

det | a b c |
    | d e f | = aei + bfg + cdh - ceg - bdi - afh
    | g h i |

For larger matrices, use Laplace expansion or row reduction.
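
For a numerical check, NumPy computes determinants of any square matrix; a minimal sketch verifying the 2×2 formula and a 3×3 example:

```python
import numpy as np

M2 = np.array([[2, 3],
               [1, 4]])
print(np.linalg.det(M2))  # 2*4 - 3*1 = 5.0

M3 = np.array([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 10]])
print(np.linalg.det(M3))  # -3.0 (up to floating-point rounding)
```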

Theorem: Properties of Determinants

  1. det(I) = 1 where I is the identity matrix
  2. det(Aᵀ) = det(A)
  3. det(AB) = det(A)det(B)
  4. If A is triangular, det(A) is product of diagonal entries
  5. Swapping two rows changes the sign of the determinant
  6. Multiplying a row by a scalar multiplies the determinant by that scalar
  7. Adding a multiple of one row to another doesn't change the determinant
  8. A is invertible if and only if det(A) ≠ 0

Definition: Matrix Inverse

The inverse of a square matrix A is a matrix A⁻¹ such that AA⁻¹ = A⁻¹A = I, where I is the identity matrix. A matrix is invertible (non-singular) if and only if its determinant is non-zero.

Example: Finding the Inverse of a 2×2 Matrix

For matrix A:

A = | a b |
    | c d |

The inverse is:

A⁻¹ = (1/det(A)) |  d -b |
                 | -c  a |

For A = | 2 3 | with det(A) = 2×4 - 3×1 = 5:
        | 1 4 |

A⁻¹ = (1/5) |  4 -3 | = |  0.8 -0.6 |
            | -1  2 |   | -0.2  0.4 |
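
A quick NumPy verification of this inverse (assuming NumPy is available):

```python
import numpy as np

A = np.array([[2, 3],
              [1, 4]])
A_inv = np.linalg.inv(A)
print(A_inv)                              # [[ 0.8 -0.6] [-0.2  0.4]]
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A @ A^-1 = I
```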

Applications of Determinants

Determinants test invertibility (A is invertible exactly when det(A) ≠ 0), measure how a linear transformation scales area or volume, and underpin Cramer's rule for solving linear systems.

Eigenvalues and Eigenvectors

Definition: Eigenvalue and Eigenvector

For a square matrix A, a non-zero vector v is an eigenvector and λ is its corresponding eigenvalue if:

Av = λv

This means the transformation scales v by the factor λ (reversing its direction when λ < 0) without changing the line it spans.

Finding Eigenvalues and Eigenvectors

The eigenvalues are found by solving the characteristic equation:

det(A - λI) = 0

For each eigenvalue λ, the corresponding eigenvectors are found by solving:

(A - λI)v = 0

Example: Finding Eigenvalues and Eigenvectors

Find the eigenvalues and eigenvectors of A:

A = | 2 1 |
    | 1 2 |

Solution:

  1. Characteristic equation: det(A - λI) = 0
  2. det | 2-λ  1  | = (2-λ)² - 1 = λ² - 4λ + 3 = 0
         |  1  2-λ |
  3. Roots: λ = 1 and λ = 3
  4. For λ=1: (A - I)v = 0 ⇒ v = t(1, -1)
  5. For λ=3: (A - 3I)v = 0 ⇒ v = t(1, 1)
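
NumPy can confirm this result; a minimal sketch using `np.linalg.eig`:

```python
import numpy as np

A = np.array([[2, 1],
              [1, 2]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # 3 and 1 (order not guaranteed)
print(eigenvectors)  # columns are unit eigenvectors,
                     # proportional to (1, 1) and (1, -1)
```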

Theorem: Properties of Eigenvalues

  1. The sum of eigenvalues equals the trace of the matrix (sum of diagonal elements)
  2. The product of eigenvalues equals the determinant of the matrix
  3. A matrix is invertible if and only if all eigenvalues are non-zero
  4. For real symmetric matrices, all eigenvalues are real and eigenvectors corresponding to distinct eigenvalues are orthogonal
  5. Similar matrices have the same eigenvalues

Applications

Eigenvalues and eigenvectors appear throughout applied mathematics: principal component analysis in statistics, stability analysis of dynamical systems, vibration modes in mechanical engineering, and Google's original PageRank algorithm.

Linear Transformations

Definition: Linear Transformation

A linear transformation (or linear map) between two vector spaces V and W is a function T: V → W that preserves the operations of vector addition and scalar multiplication:

  1. T(u + v) = T(u) + T(v) for all u, v ∈ V
  2. T(cv) = cT(v) for all c ∈ F (the scalar field) and v ∈ V

Matrix Representation

Every linear transformation T: ℝⁿ → ℝᵐ can be represented by an m×n matrix A such that T(v) = Av for all v ∈ ℝⁿ.

Example: Rotation Transformation

The linear transformation that rotates vectors in ℝ² by angle θ counterclockwise is represented by:

R(θ) = | cos θ  -sin θ |
       | sin θ   cos θ |

For θ = 90°:

R(90°) = | 0 -1 |
         | 1  0 |

Applying it to the vector (1, 0):

| 0 -1 | | 1 |   | 0 |
| 1  0 | | 0 | = | 1 |

which is (1, 0) rotated 90° counterclockwise to (0, 1).
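
A small sketch of this transformation, assuming NumPy, with a `rotation` helper named here for illustration:

```python
import numpy as np

def rotation(theta):
    """Counterclockwise rotation of the plane by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = rotation(np.pi / 2)                    # 90 degrees
print(np.round(R @ np.array([1, 0]), 10))  # [0. 1.]
```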

Types of Linear Transformations

Transformation | Description                                   | Matrix
-------------- | --------------------------------------------- | ------------------------------------------------
Scaling        | Stretches or compresses along the axes        | Diagonal matrix of scale factors
Rotation       | Rotates vectors by a fixed angle              | Orthogonal matrix built from sines and cosines
Reflection     | Flips vectors across a line or plane          | Symmetric orthogonal matrix with determinant -1
Shear          | Shifts one component in proportion to another | Identity plus one non-zero off-diagonal entry
Projection     | Projects onto a subspace                      | Idempotent matrix (P² = P)

Theorem: Properties of Linear Transformations

  1. The composition of linear transformations is linear
  2. The inverse of a linear transformation (if it exists) is linear
  3. Linear transformations preserve linear combinations
  4. The kernel (null space) and image (range) are subspaces
  5. A linear transformation is completely determined by its action on a basis

Orthogonality and Inner Product Spaces

Definition: Inner Product

An inner product on a vector space V is a function that takes two vectors and returns a scalar, satisfying:

  1. ⟨u, v⟩ = ⟨v, u⟩ (conjugate symmetry for complex spaces)
  2. ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ (linearity in first argument)
  3. ⟨cu, v⟩ = c⟨u, v⟩ for any scalar c
  4. ⟨v, v⟩ ≥ 0 and ⟨v, v⟩ = 0 iff v = 0 (positive-definiteness)

Dot Product in ℝⁿ

The standard inner product in ℝⁿ is the dot product:

u · v = u₁v₁ + u₂v₂ + ... + uₙvₙ

Definition: Orthogonal Vectors

Two vectors u and v are orthogonal if ⟨u, v⟩ = 0. A set of vectors is orthogonal if all pairs are orthogonal, and orthonormal if additionally each vector has norm 1.

Theorem: Pythagorean Theorem

For orthogonal vectors u and v:

‖u + v‖² = ‖u‖² + ‖v‖²

where ‖v‖ = √⟨v, v⟩ is the norm (length) of v.

Gram-Schmidt Process

This algorithm transforms a set of linearly independent vectors into an orthogonal (or orthonormal) set spanning the same subspace.

Example: Gram-Schmidt Process

Orthogonalize the vectors v₁ = (1, 1, 1), v₂ = (1, 2, 1), v₃ = (1, 2, 2):

  1. u₁ = v₁ = (1, 1, 1)
  2. u₂ = v₂ - proj_u₁(v₂) = (1, 2, 1) - (4/3)(1, 1, 1) = (-1/3, 2/3, -1/3)
  3. u₃ = v₃ - proj_u₁(v₃) - proj_u₂(v₃) = (1, 2, 2) - (5/3)(1, 1, 1) - (1/2)(-1/3, 2/3, -1/3) = (-1/2, 0, 1/2)

The orthogonal set is {u₁, u₂, u₃}.
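
The procedure is short enough to implement directly. A sketch, assuming NumPy; `gram_schmidt` is an illustrative name, and the loop uses the "modified" variant, which agrees with the classical formulas in exact arithmetic:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize linearly independent vectors (no normalization)."""
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for b in basis:
            u -= (u @ b) / (b @ b) * b  # subtract the projection onto b
        basis.append(u)
    return basis

for u in gram_schmidt([(1, 1, 1), (1, 2, 1), (1, 2, 2)]):
    print(u)
# [1. 1. 1.]
# [-0.33333333  0.66666667 -0.33333333]
# [-0.5  0.   0.5]
```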

Orthogonal Matrices

Definition: Orthogonal Matrix

A square matrix Q is orthogonal if QᵀQ = QQᵀ = I, or equivalently, Qᵀ = Q⁻¹.

Theorem: Properties of Orthogonal Matrices

  1. Columns (and rows) form an orthonormal set
  2. Preserve inner products: ⟨Qv, Qw⟩ = ⟨v, w⟩
  3. Preserve norms: ‖Qv‖ = ‖v‖
  4. Determinant is ±1
  5. Product of orthogonal matrices is orthogonal
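
A numerical illustration of these properties, assuming NumPy, using the 90° rotation matrix from earlier:

```python
import numpy as np

Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])                # 90-degree rotation
print(np.allclose(Q.T @ Q, np.eye(2)))     # True: Q^T Q = I
v = np.array([3.0, 4.0])
print(np.isclose(np.linalg.norm(Q @ v),
                 np.linalg.norm(v)))       # True: norms preserved (both 5.0)
print(np.linalg.det(Q))                    # 1.0 (+1 for rotations)
```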

Applications of Linear Algebra

Computer Graphics

Linear algebra is fundamental in computer graphics for representing and manipulating 2D and 3D objects:

3D Transformation Example

[Visualization: a 3D cube transformed using linear algebra operations]

Machine Learning

Linear algebra is the foundation of many machine learning algorithms: datasets are stored as matrices, linear regression is solved by least squares, and principal component analysis relies on the eigendecomposition of covariance matrices.

Engineering

Linear algebra applications in engineering include structural analysis, electrical circuit analysis, and control systems, all of which reduce to solving systems of linear equations.

Quantum Mechanics

Quantum mechanics is formulated in terms of linear algebra: states are vectors in a complex inner product space, observables are Hermitian operators, and measurement outcomes correspond to eigenvalues.

Exercises and Problems

Exercise 1: Vector Operations

  1. Given vectors u = (2, -1, 3) and v = (4, 0, -2), compute:
    1. u + v
    2. 3u - 2v
    3. u · v
    4. ‖u‖ (the norm of u)

    Solution:

    1. u + v = (6, -1, 1)
    2. 3u - 2v = (6-8, -3-0, 9+4) = (-2, -3, 13)
    3. u · v = 8 + 0 - 6 = 2
    4. ‖u‖ = √(4 + 1 + 9) = √14
  2. Find a unit vector in the direction of w = (1, -2, 2).

    Solution:

    ‖w‖ = √(1 + 4 + 4) = 3

    Unit vector = w/‖w‖ = (1/3, -2/3, 2/3)

Exercise 2: Matrix Operations

  1. Given matrices:

     A = | 1  2 |    B = | 0 -1 |
         | 3 -1 |        | 2  3 |

     Compute:
     1. A + B
     2. AB
     3. BA
     4. Aᵀ

     Solution:

     1. A + B = | 1  1 |
                | 5  2 |
     2. AB = |  4  5 |
             | -2 -6 |
     3. BA = | -3  1 |
             | 11  1 |
     4. Aᵀ = | 1  3 |
             | 2 -1 |
  2. Find the inverse of C = | 2 5 |
                             | 1 3 |

     Solution:

     det(C) = 2×3 - 5×1 = 1

     C⁻¹ = |  3 -5 |
           | -1  2 |

Exercise 3: Systems of Equations

  1. Solve the system using Gaussian elimination:
    x + 2y = 5
    3x + 4y = 6

    Solution:

    Augmented matrix:

    | 1 2 | 5 |
    | 3 4 | 6 |

    R2 ← R2 - 3R1:

    | 1 2 | 5 |
    | 0 -2 | -9 |

    Back substitution:

    -2y = -9 ⇒ y = 4.5

    x + 2(4.5) = 5 ⇒ x = 5 - 9 = -4

    Solution: x = -4, y = 4.5
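
All of the exercises can be checked numerically; for example, a minimal NumPy check of Exercise 3 (assuming NumPy is available):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
b = np.array([5, 6])
print(np.linalg.solve(A, b))  # [-4.   4.5]
```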

Lesson Summary

This lesson covered vectors and vector spaces, matrices and their operations, systems of linear equations, determinants and inverses, eigenvalues and eigenvectors, linear transformations, and orthogonality with inner product spaces. Together these tools make it possible to state linear problems compactly and solve them systematically.