Vector Spaces: The Abstract Playground of Linear Algebra


Here's the moment linear algebra becomes powerful: when you realize vectors don't have to be arrows.

Polynomials are vectors. Functions are vectors. Matrices are vectors. Solutions to differential equations are vectors.

If you can add things and scale them—and the operations behave sensibly—you have a vector space. And everything you learned about arrows applies.


The Definition

A vector space is a set V with two operations, addition and scalar multiplication (by real numbers, throughout this series), satisfying these properties:

Addition:

  • Commutative: u + v = v + u
  • Associative: (u + v) + w = u + (v + w)
  • Identity: there exists 0 such that v + 0 = v
  • Inverses: for every v, there exists -v such that v + (-v) = 0

Scalar Multiplication:

  • Distributes over vectors: c(u + v) = cu + cv
  • Distributes over scalars: (c + d)v = cv + dv
  • Associative: c(dv) = (cd)v
  • Identity: 1v = v

That's it. If your objects satisfy these rules, they form a vector space, and all of linear algebra applies.
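If you want to see the axioms in action, here is a minimal NumPy sketch that spot-checks each one on random vectors in ℝ⁴. It's a numerical illustration, not a proof; allclose is used because floating-point addition is only approximately associative.

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 4))  # three random vectors in R^4
c, d = 2.0, -3.0                       # two scalars
zero = np.zeros(4)

assert np.allclose(u + v, v + u)                # commutativity
assert np.allclose((u + v) + w, u + (v + w))    # associativity
assert np.allclose(v + zero, v)                 # additive identity
assert np.allclose(v + (-v), zero)              # additive inverses
assert np.allclose(c * (u + v), c * u + c * v)  # distributes over vectors
assert np.allclose((c + d) * v, c * v + d * v)  # distributes over scalars
assert np.allclose(c * (d * v), (c * d) * v)    # scalar associativity
assert np.allclose(1.0 * v, v)                  # scalar identity
print("all eight axioms hold on this sample")
```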


Examples

The Obvious: ℝⁿ

The set of all n-tuples of real numbers with component-wise addition and scaling. This is the vector space you've been working with.

Polynomials

The set of all polynomials of degree ≤ n, with real coefficients.

Add polynomials: (2x² + 3x + 1) + (x² - x + 4) = 3x² + 2x + 5.

Scale polynomials: 3(x² + 2) = 3x² + 6.

This is a vector space. The zero vector is the zero polynomial. Additive inverses are negatives of coefficients.

The dimension? There are n+1 free coefficients (constant through xⁿ), so dimension n+1.
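Because a polynomial of degree ≤ n is determined by its n+1 coefficients, you can compute with polynomials as coefficient arrays. A small sketch reproducing the two calculations above (the ordering convention, constant term first, is my choice for illustration):

```python
import numpy as np

# coefficient arrays, constant term first: [a0, a1, a2] means a0 + a1*x + a2*x^2
p = np.array([1.0, 3.0, 2.0])   # 2x^2 + 3x + 1
q = np.array([4.0, -1.0, 1.0])  # x^2 - x + 4

print(p + q)                          # [5. 2. 3.] -> 3x^2 + 2x + 5
print(3 * np.array([2.0, 0.0, 1.0]))  # [6. 0. 3.] -> 3(x^2 + 2) = 3x^2 + 6
```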

Functions

The set of all continuous functions from [0,1] to ℝ.

Add functions: (f + g)(x) = f(x) + g(x).

Scale functions: (cf)(x) = c · f(x).

This is a vector space, and an infinite-dimensional one: no finite set of functions spans it. Loosely, every point x contributes its own degree of freedom.
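The two operations are easy to express for function "vectors" in code: pointwise addition and scaling, exactly as defined above. A minimal sketch:

```python
import math

def add(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(c, f):
    """Pointwise scaling: (cf)(x) = c * f(x)."""
    return lambda x: c * f(x)

h = add(math.sin, math.cos)
k = scale(3.0, math.exp)
print(h(0.0))  # sin(0) + cos(0) = 1.0
print(k(0.0))  # 3 * exp(0) = 3.0
```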

Matrices

The set of all m×n matrices with real entries.

Add matrices entry-wise. Scale matrices by multiplying every entry.

This is a vector space of dimension mn.
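In NumPy this is literal: addition and scaling are entry-wise, and flattening an m×n matrix into a length-mn vector makes the dimension count visible. A quick sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])  # a 2x3 matrix
B = np.ones((2, 3))

print(A + B)        # entry-wise addition
print(2 * A)        # entry-wise scaling
print(A.flatten())  # the same six numbers as a vector in R^6: dimension mn = 6
```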

Solutions to Differential Equations

Consider the equation y'' + y = 0.

Solutions include sin(x), cos(x), and any linear combination c₁sin(x) + c₂cos(x).

The set of all solutions is a vector space. Add two solutions, get another solution. Scale a solution, get another solution. That closure holds precisely because the equation is linear and homogeneous.
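A quick numerical check of that closure claim: for y = c₁sin(x) + c₂cos(x), a finite-difference estimate of y'' should cancel y almost exactly. The coefficients and step size below are arbitrary illustration choices.

```python
import numpy as np

def y(x, c1=2.0, c2=-1.0):
    """An arbitrary linear combination of the two basic solutions."""
    return c1 * np.sin(x) + c2 * np.cos(x)

x = np.linspace(0.0, 2.0 * np.pi, 50)
h = 1e-4
y_second = (y(x + h) - 2 * y(x) + y(x - h)) / h**2  # finite-difference y''

print(np.max(np.abs(y_second + y(x))))  # ~1e-7: y'' + y = 0 up to discretization error
```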


Why Abstraction Matters

The power is in the generalization.

Once you prove a theorem about vector spaces, it applies to all of them. Theorems about ℝⁿ apply to polynomials. Theorems about polynomials apply to functions.

Instead of reproving everything for each setting, you prove it once in the abstract, and it works everywhere.


Subspaces

A subspace is a vector space inside a vector space.

Formally: W is a subspace of V if:

  • W is nonempty
  • W is closed under addition (u, v in W implies u + v in W)
  • W is closed under scaling (v in W implies cv in W)

These conditions guarantee W inherits the vector space structure from V. (In particular, W contains the zero vector: take any v in W and scale by c = 0.)

Examples in ℝ³:

  • A plane through the origin is a subspace
  • A line through the origin is a subspace
  • The origin alone is a subspace (the trivial subspace)
  • A plane not through the origin is NOT a subspace (no zero vector; see the sketch below)
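The last failure is easy to see in coordinates: two points on the plane z = 1 sum to a point with z = 2, and scaling by 0 lands at the origin, which the plane misses.

```python
import numpy as np

# two points on the plane z = 1 (a plane not through the origin)
u = np.array([1.0, 0.0, 1.0])
v = np.array([0.0, 2.0, 1.0])

print(u + v)  # [1. 2. 2.]: z = 2, so the sum left the plane
print(0 * u)  # [0. 0. 0.]: not on the plane; it has no zero vector
```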

Example with polynomials:

  • Polynomials of degree ≤ 2 form a subspace of all polynomials
  • Polynomials with p(0) = 0 form a subspace
  • Polynomials with p(0) = 1 do NOT form a subspace (not closed under addition; see the sketch below)
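Here is that failure, computed with coefficient arrays as before: adding two polynomials with p(0) = 1 yields one with p(0) = 2, which has left the set.

```python
import numpy as np

# coefficient arrays, constant term first; p(0) is the constant term
p = np.array([1.0, 2.0, 0.0])  # p(x) = 1 + 2x,    p(0) = 1
q = np.array([1.0, 0.0, 3.0])  # q(x) = 1 + 3x^2,  q(0) = 1

s = p + q
print(s[0])  # 2.0: (p + q)(0) = 2, so the sum is outside the set
```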

Span

The span of a set of vectors is all possible linear combinations.

span{v₁, v₂, ..., vₖ} = {c₁v₁ + c₂v₂ + ... + cₖvₖ : cᵢ ∈ ℝ}

In ℝ³:

  • Span of one nonzero vector: a line through the origin
  • Span of two non-parallel vectors: a plane through the origin
  • Span of three non-coplanar vectors: all of ℝ³

The span is always a subspace. It's the smallest subspace containing the given vectors.
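Checking whether a vector b lies in a span is a rank computation: b is in span{v₁, ..., vₖ} exactly when appending b as a column doesn't increase the rank. A sketch (the helper in_span is a hypothetical name, not a library function):

```python
import numpy as np

def in_span(vectors, b):
    """True if b is a linear combination of the given vectors."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])

print(in_span([v1, v2], np.array([2.0, 3.0, 0.0])))  # True: lies in the xy-plane
print(in_span([v1, v2], np.array([0.0, 0.0, 1.0])))  # False: points out of the plane
```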


Linear Independence

Vectors v₁, ..., vₖ are linearly independent if the only way to get zero is the trivial way:

c₁v₁ + c₂v₂ + ... + cₖvₖ = 0 implies c₁ = c₂ = ... = cₖ = 0

If vectors are linearly independent, none is redundant. None can be written as a combination of the others.

If vectors are linearly dependent, at least one is a combination of the others. You have redundancy.

Examples (checked in the sketch after this list):

  • (1,0) and (0,1) in ℝ²: independent
  • (1,0), (0,1), and (1,1) in ℝ²: dependent (the third is sum of first two)
  • 1, x, x² as polynomials: independent
  • 1, x, and 2+x: dependent (2+x = 2·1 + 1·x)
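Numerically, independence is again a rank question: k vectors are independent exactly when the matrix with those vectors as columns has rank k. Here is the promised check (the helper independent is a hypothetical name):

```python
import numpy as np

def independent(vectors):
    """True if the vectors are linearly independent."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(independent([e1, e2]))           # True
print(independent([e1, e2, e1 + e2]))  # False: (1,1) is the sum of the first two

# 1, x, 2+x as coefficient vectors [constant, x] in degree <= 1 polynomials
one, x, two_plus_x = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 1.0])
print(independent([one, x, two_plus_x]))  # False: 2 + x = 2*1 + 1*x
```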

Basis

A basis for a vector space V is a set of vectors that:

  1. Spans V (every vector in V can be expressed as a combination)
  2. Is linearly independent (no redundancy)

A basis is a minimal spanning set and, equivalently, a maximal independent set.

Standard basis for ℝⁿ: e₁ = (1,0,...,0), e₂ = (0,1,...,0), ..., eₙ = (0,...,0,1)

Every vector in ℝⁿ is a unique combination: (a₁, a₂, ..., aₙ) = a₁e₁ + a₂e₂ + ... + aₙeₙ.

Basis for polynomials of degree ≤ 2: {1, x, x²}

Every such polynomial is uniquely a + bx + cx² = a(1) + b(x) + c(x²).
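Uniqueness means that solving for the coefficients has exactly one answer, even in a less convenient basis. As an illustration, take the basis {1, 1+x, 1+x+x²} (my own example, not from the series) and find the coordinates of 3 + 2x + x²; it's a small triangular solve:

```python
import numpy as np

# columns: 1, 1+x, 1+x+x^2 as coefficient vectors [constant, x, x^2]
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
target = np.array([3.0, 2.0, 1.0])  # 3 + 2x + x^2

coords = np.linalg.solve(B, target)
print(coords)  # [1. 1. 1.]: 3 + 2x + x^2 = 1*(1) + 1*(1+x) + 1*(1+x+x^2)
```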


Dimension

The dimension of a vector space is the number of vectors in any basis.

All bases of a given vector space have the same size. This is a theorem, not a definition.

Dimensions:

  • ℝⁿ has dimension n
  • Polynomials of degree ≤ n have dimension n+1
  • m×n matrices have dimension mn
  • Continuous functions on [0,1] are infinite-dimensional

Dimension is the measure of "how many independent directions" exist.


Coordinates

Once you choose a basis, every vector has a unique representation as a combination of basis vectors.

The coefficients in that combination are the coordinates of the vector relative to that basis.

Different basis, different coordinates, same vector.

Example: In ℝ², with standard basis {(1,0), (0,1)}, the vector (3,4) has coordinates [3, 4].

With basis {(1,1), (1,-1)}, the same vector (3,4) has different coordinates. (You can verify: (3,4) = 3.5(1,1) + (-0.5)(1,-1), so coordinates are [3.5, -0.5].)
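That verification is just a 2×2 linear solve: put the basis vectors in the columns of a matrix and solve for the coefficients.

```python
import numpy as np

B = np.column_stack([(1.0, 1.0), (1.0, -1.0)])  # basis vectors as columns
v = np.array([3.0, 4.0])

coords = np.linalg.solve(B, v)
print(coords)  # [ 3.5 -0.5]: (3,4) = 3.5(1,1) - 0.5(1,-1)
```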

Coordinates are not intrinsic—they depend on your choice of basis. The vector is intrinsic.


Why This Matters

Vector spaces let you do geometry on abstract objects.

Once you know something forms a vector space, you can:

  • Talk about dimension
  • Choose coordinates
  • Find bases
  • Study subspaces
  • Apply linear transformations

Polynomials become geometric objects. Functions become points in infinite-dimensional space. Solutions to equations become subspaces.

The same intuition from arrows in ℝ³ extends to any vector space. That's the power of abstraction.


The Foundation

Vector spaces are the playing field of linear algebra.

Matrices are transformations between vector spaces. Eigenvalues describe the behavior of those transformations. Rank measures the dimension of the image. Nullity measures the dimension of the kernel.

Everything connects. And it all starts with the simple observation: if you can add and scale, you have a vector space.


Part 8 of the Linear Algebra series.

Previous: Systems of Linear Equations: Matrices as Equation Solvers
Next: Linear Transformations: Functions That Preserve Structure