Systems of Linear Equations: Matrices as Equation Solvers
Every system of linear equations is secretly asking one question: which vector gets transformed into this target?
You have a matrix A. You have a target vector b. You want to find the input vector x such that Ax = b.
That's it. All the techniques—row reduction, Gaussian elimination, matrix inversion—are just methods for answering this question.
The Matrix Form
Start with a system:
2x + 3y = 7
4x - y = 5
This can be written as Ax = b:
| 2 3 | | x | | 7 |
| 4 -1 | | y | = | 5 |
The matrix A encodes the coefficients. The vector x holds the unknowns. The vector b is the target.
Solving the system means finding x.
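To make the notation concrete, here is the same system in NumPy (the library choice is mine; any linear algebra package would do). Multiplying A by a candidate vector shows whether that vector solves the system.

```python
# A minimal NumPy sketch of the system above: A encodes the coefficients,
# b is the target, and "solving" means finding the x with A @ x == b.
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, -1.0]])      # coefficient matrix
b = np.array([7.0, 5.0])         # target vector

x_guess = np.array([2.0, 1.0])   # a candidate input, chosen arbitrarily
print(A @ x_guess)               # [7. 7.] -- first equation holds, second does not
```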
Geometric Interpretation
Think about what Ax = b means geometrically.
A transforms the vector x into the vector b. You're asking: which x lands on b after the transformation?
If A is invertible, there's exactly one answer: x = A⁻¹b.
If A is not invertible (det = 0), things are more complicated. Maybe no x works. Maybe infinitely many do.
The geometry tells you what to expect:
- One solution: The transformation is invertible. Every target has exactly one source.
- No solution: The target b isn't in the range of A. The transformation can't reach it.
- Infinitely many solutions: The transformation collapses some dimensions. Multiple inputs map to the same output.
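For a square A, one way to see which case you are in is to check the determinant numerically. A hedged sketch; the singular matrix below is made up for illustration.

```python
# Determinant check: nonzero means invertible (one solution for any b),
# zero means the transformation collapses a dimension.
import numpy as np

A_invertible = np.array([[2.0, 3.0],
                         [4.0, -1.0]])
A_singular = np.array([[1.0, 2.0],
                       [2.0, 4.0]])    # second row is 2 x the first row

print(np.linalg.det(A_invertible))     # ~ -14.0 -> exactly one solution for any b
print(np.linalg.det(A_singular))       # ~ 0.0   -> no solution or infinitely many,
                                       #           depending on whether b lies in the column space
```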
Row Reduction
The practical method for solving systems is row reduction (Gaussian elimination).
Write the augmented matrix [A | b]:
| 2 3 | 7 |
| 4 -1 | 5 |
Apply row operations:
- Swap two rows
- Multiply a row by a nonzero scalar
- Add a multiple of one row to another
These operations don't change the solution—they transform the system into an equivalent one.
Goal: get the matrix into row echelon form (or reduced row echelon form), where the solution is obvious.
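Below is a compact sketch of the algorithm: Gaussian elimination with partial pivoting on the augmented matrix, followed by back substitution. It assumes a square, invertible A and skips the error handling a real solver would need.

```python
# Gaussian elimination with partial pivoting on the augmented matrix [A | b].
# Happy-path sketch only: no checks for singular or inconsistent systems.
import numpy as np

def gaussian_elimination(A, b):
    n = len(b)
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])  # [A | b]

    # Forward elimination: zero out the entries below each pivot.
    for k in range(n):
        pivot = np.argmax(np.abs(M[k:, k])) + k       # partial pivoting
        M[[k, pivot]] = M[[pivot, k]]                 # row swap
        for i in range(k + 1, n):
            M[i] -= (M[i, k] / M[k, k]) * M[k]        # subtract a multiple of row k from row i

    # Back substitution on the resulting triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = np.array([[2.0, 3.0], [4.0, -1.0]])
b = np.array([7.0, 5.0])
print(gaussian_elimination(A, b))   # [1.5714... 1.2857...] = (11/7, 9/7)
```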
A Worked Example
Solve:
2x + 3y = 7
4x - y = 5
Augmented matrix:
| 2 3 | 7 |
| 4 -1 | 5 |
Subtract 2 × (row 1) from row 2:
| 2 3 | 7 |
| 0 -7 | -9 |
From row 2: -7y = -9, so y = 9/7.
Substitute back: 2x + 3(9/7) = 7, so 2x = 7 - 27/7 = 22/7, so x = 11/7.
Solution: x = 11/7, y = 9/7.
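A quick sanity check of the arithmetic, using Python's exact rational numbers so no rounding can hide a mistake:

```python
# Verify the hand computation with exact fractions.
from fractions import Fraction

x = Fraction(11, 7)
y = Fraction(9, 7)

print(2 * x + 3 * y == 7)   # True
print(4 * x - y == 5)       # True
```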
When There's No Solution
Some systems are inconsistent.
x + y = 3
x + y = 5
Geometrically: two parallel lines. They never intersect.
Augmented matrix:
| 1 1 | 3 |
| 1 1 | 5 |
Subtract row 1 from row 2:
| 1 1 | 3 |
| 0 0 | 2 |
Row 2 says: 0 = 2. Contradiction. No solution.
This happens when b isn't in the column space of A—you can't reach the target using linear combinations of the transformation's columns.
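You can see this numerically: least squares finds the closest vector to b that A can actually reach, and for this system it still misses. A small NumPy sketch:

```python
# b lies outside the column space of A: the best reachable vector A @ x_ls
# is not b, so the residual is nonzero and the system is inconsistent.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([3.0, 5.0])

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(A @ x_ls)        # [4. 4.]   -- closest vector in the column space
print(A @ x_ls - b)    # [ 1. -1.] -- nonzero residual: no exact solution
```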
When There Are Infinitely Many Solutions
Some systems are underdetermined.
x + y + z = 6
2x + y + z = 8
Two equations, three unknowns. There's a whole line of solutions.
Augmented matrix:
| 1 1 1 | 6 |
| 2 1 1 | 8 |
Subtract 2 × row 1 from row 2:
| 1 1 1 | 6 |
| 0 -1 -1 | -4 |
From row 2: -y - z = -4, which is y + z = 4. From row 1: x + y + z = 6.
Let z = t (free parameter). Then y = 4 - t, and x = 6 - (4 - t) - t = 2.
Solution: x = 2, y = 4 - t, z = t, for any real t.
This is a line in 3D space. Infinitely many solutions parametrized by t.
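Numerically, that line can be described as one particular solution plus the null space of A. A sketch using NumPy and SciPy:

```python
# A particular solution (from least squares) plus a basis for the null space
# of A describes every solution of A x = b.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 1.0]])
b = np.array([6.0, 8.0])

x_particular, *_ = np.linalg.lstsq(A, b, rcond=None)  # one point on the solution line
direction = null_space(A)                             # spans all solutions of A x = 0

print(x_particular)        # [2. 2. 2.] -- the t = 2 point on the line above
print(direction.ravel())   # proportional to (0, -1, 1): moving along it keeps A x fixed
```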
Rank and Solvability
The rank of a matrix is the number of linearly independent rows (or columns).
For a system Ax = b with m equations and n unknowns:
- If rank(A) < rank([A|b]): no solution
- If rank(A) = rank([A|b]) = n: unique solution
- If rank(A) = rank([A|b]) < n: infinitely many solutions
The rank tells you how many independent constraints there are. If constraints conflict, no solution. If there are fewer constraints than unknowns, you have freedom.
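These three cases translate directly into code. The function below is my own small classifier, not a library routine; it just applies the rank criteria above.

```python
# Classify a linear system by comparing rank(A), rank([A | b]), and the
# number of unknowns.
import numpy as np

def classify_system(A, b):
    n = A.shape[1]                                   # number of unknowns
    rank_A = np.linalg.matrix_rank(A)
    rank_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_A < rank_aug:
        return "no solution"
    if rank_A == n:
        return "unique solution"
    return "infinitely many solutions"

print(classify_system(np.array([[2.0, 3.0], [4.0, -1.0]]), np.array([7.0, 5.0])))        # unique solution
print(classify_system(np.array([[1.0, 1.0], [1.0, 1.0]]), np.array([3.0, 5.0])))         # no solution
print(classify_system(np.array([[1.0, 1.0, 1.0], [2.0, 1.0, 1.0]]), np.array([6.0, 8.0])))  # infinitely many solutions
```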
The Inverse Method
If A is square and invertible, there's a shortcut:
x = A⁻¹b
Just multiply both sides by A⁻¹.
This works because A⁻¹A = I, so A⁻¹(Ax) = A⁻¹b gives x = A⁻¹b.
Computing A⁻¹ can be expensive, so for large systems, row reduction is usually faster. But conceptually, the inverse method shows what solving means: reversing the transformation.
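Both routes give the same answer on a well-conditioned system; the difference is cost and numerical stability. A small comparison, assuming NumPy:

```python
# np.linalg.solve factors A (LU decomposition) rather than forming A⁻¹
# explicitly, which is cheaper and more stable for large systems.
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, -1.0]])
b = np.array([7.0, 5.0])

x_inverse = np.linalg.inv(A) @ b     # the conceptual route: x = A⁻¹ b
x_solve = np.linalg.solve(A, b)      # the practical route

print(np.allclose(x_inverse, x_solve))   # True
```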
Applications
Circuit Analysis: Kirchhoff's laws give linear equations for currents and voltages. Solve the system to find the currents.
Economics: Leontief input-output models are linear systems. The production required to meet demand is found by solving Ax = b.
Computer Graphics: Finding intersection points, solving lighting equations, computing physics simulations—all involve linear systems.
Machine Learning: Least squares regression minimizes ||Ax - b||². The solution involves solving a linear system derived from the normal equations (see the sketch after this list).
Balancing Chemical Equations: Coefficients must balance atoms on both sides. This is a linear system.
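As an example of the least-squares application above, here is a line fit done by solving the normal equations AᵀAx = Aᵀb. The data points are invented for illustration.

```python
# Fit y = c0 + c1 * t to a few made-up points via the normal equations.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.9])           # roughly y = 1 + t, with noise

A = np.column_stack([np.ones_like(t), t])    # design matrix: columns [1, t]
coeffs = np.linalg.solve(A.T @ A, A.T @ y)   # normal equations: (AᵀA) x = Aᵀ y
print(coeffs)                                # approximately [1.07, 0.97]
```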
Numerical Considerations
For large systems, numerical methods matter.
Row reduction is O(n³)—manageable but not trivial for large n.
Pivoting strategies improve numerical stability—avoid dividing by small numbers.
Iterative methods (Jacobi, Gauss-Seidel, conjugate gradient) work for very large sparse systems (a conjugate gradient sketch appears below).
Specialized solvers exploit structure (sparse, symmetric, positive definite) for efficiency.
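For a taste of the iterative approach, here is conjugate gradient applied to a sparse, symmetric positive-definite system, assuming SciPy is available. The tridiagonal matrix is a standard 1-D Laplacian, chosen only as an illustration.

```python
# Solve a large sparse SPD system with conjugate gradient.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)                        # info == 0 means it converged
print(info, np.linalg.norm(A @ x - b))    # 0 and a small residual
```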
The Big Picture
Linear systems are the computational core of linear algebra.
Matrices represent transformations. Solving Ax = b means inverting that transformation—finding what input produces a given output.
Row reduction is the universal algorithm. It tells you:
- If solutions exist
- How many there are
- What they are
Every application of linear algebra eventually comes down to solving linear systems. It's the workhorse operation that makes the whole field useful.
This is Part 7 of the Linear Algebra series.
Previous: Eigenvalues and Eigenvectors: The Directions That Do Not Rotate
Next: Vector Spaces: The Abstract Playground of Linear Algebra