Euler's Method: Numerical Approximation Step by Step
Not all differential equations have neat closed-form solutions. Many can't be solved analytically at all—no formula involving elementary functions exists.
That's where numerical methods come in. And the simplest, most intuitive numerical method is Euler's method.
The idea: approximate the solution by following the tangent line step by step. Take small steps in the direction the slope field points, updating your position iteratively.
It's crude but effective. It works for any first-order ODE, regardless of whether an analytical solution exists.
The Core Idea
Given dy/dx = f(x, y) with initial condition y(x₀) = y₀, you want to approximate y(x) for x > x₀.
Euler's insight: At (x₀, y₀), the slope is f(x₀, y₀). If you walk a small distance h in the x-direction, following that slope, you'll approximately reach:
y₁ = y₀ + h·f(x₀, y₀)
Now you're at (x₁, y₁) where x₁ = x₀ + h. Repeat: compute the new slope f(x₁, y₁), take another step:
y₂ = y₁ + h·f(x₁, y₁)
Continue stepping. Each step approximates the solution by linearization.
The Algorithm
Input: ODE dy/dx = f(x, y), initial condition (x₀, y₀), step size h, number of steps n.
Output: Approximations (x₀, y₀), (x₁, y₁), ..., (x_n, y_n).
Procedure:
- Set x₀ and y₀ from the initial condition
- For i = 0 to n-1:
  - Compute slope: k = f(x_i, y_i)
  - Update x: x_{i+1} = x_i + h
  - Update y: y_{i+1} = y_i + h·k
- Return the sequence of points
That's it. Three lines of code.
Example: Exponential Growth
Solve dy/dx = y with y(0) = 1 from x = 0 to x = 1 using step size h = 0.25.
We know the exact solution is y = e^x, so we can check accuracy.
Step 0: x₀ = 0, y₀ = 1
Step 1:
- Slope: k = f(0, 1) = 1
- x₁ = 0 + 0.25 = 0.25
- y₁ = 1 + 0.25·1 = 1.25
Step 2:
- Slope: k = f(0.25, 1.25) = 1.25
- x₂ = 0.25 + 0.25 = 0.5
- y₂ = 1.25 + 0.25·1.25 = 1.5625
Step 3:
- Slope: k = f(0.5, 1.5625) = 1.5625
- x₃ = 0.5 + 0.25 = 0.75
- y₃ = 1.5625 + 0.25·1.5625 ≈ 1.9531
Step 4:
- Slope: k = f(0.75, 1.9531) = 1.9531
- x₄ = 0.75 + 0.25 = 1.0
- y₄ = 1.9531 + 0.25·1.9531 ≈ 2.4414
Exact value at x = 1: e¹ ≈ 2.7183
Euler approximation: 2.4414
Error: ≈ 0.28 (about 10%)
Not perfect, but in the ballpark. With smaller steps, accuracy improves.
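The four steps above are easy to reproduce in code. A minimal sketch (the function name `f` and loop structure are mine, not a fixed convention):

```python
import math

def f(x, y):
    return y  # the ODE dy/dx = y

x, y, h = 0.0, 1.0, 0.25
for i in range(4):
    y = y + h * f(x, y)  # Euler update
    x = x + h
    print(f"step {i+1}: x = {x:.2f}, y = {y:.4f}")

print(f"exact e^1 = {math.e:.4f}, error = {math.e - y:.4f}")
```

Running this reproduces the table of steps: 1.25, 1.5625, 1.9531, and finally 2.4414.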
Geometric Interpretation
Euler's method follows the slope field.
At each point (x_i, y_i), the slope field gives direction f(x_i, y_i). Euler's method walks along that direction for distance h (in x), landing at the next point.
The solution curve is tangent to the slope field everywhere. Euler's method approximates the curve by a piecewise linear path following local tangent directions.
Think of it as navigating with a compass. At each position, check direction, walk a fixed distance, check direction again, repeat.
Step Size and Accuracy
The step size h controls the trade-off between accuracy and computation.
Small h: More steps, better accuracy, more computation.
Large h: Fewer steps, worse accuracy, less computation.
The local truncation error (error per step) is proportional to h². If you halve the step size, error per step drops by a factor of 4.
But over a fixed interval, the number of steps doubles, so total accumulated error is proportional to h (first-order method).
Halving h roughly halves the total error—at the cost of doubling the number of steps.
Example: Step Size Comparison
For dy/dx = y, y(0) = 1, approximate y(1):
| Step size h | Steps | Approximation | Error |
|---|---|---|---|
| 0.5 | 2 | 2.25 | 0.47 |
| 0.25 | 4 | 2.4414 | 0.28 |
| 0.1 | 10 | 2.5937 | 0.12 |
| 0.01 | 100 | 2.7048 | 0.014 |
Exact: e ≈ 2.7183
As h decreases, approximation converges to exact solution.
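The table can be verified with a short loop. This is a sketch; the helper name `euler` is mine:

```python
import math

def euler(f, x0, y0, h, n):
    """Run n Euler steps of size h; return the final y."""
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)
        x = x + h
    return y

f = lambda x, y: y
for h in (0.5, 0.25, 0.1, 0.01):
    n = round(1 / h)  # number of steps to reach x = 1
    approx = euler(f, 0.0, 1.0, h, n)
    print(f"h = {h:<5} approx = {approx:.4f}  error = {math.e - approx:.4f}")
```

Each row of output matches the table above, and the error column visibly shrinks in proportion to h.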
Error Accumulation
Each step introduces error. Over many steps, errors accumulate.
Local truncation error: Error in a single step (proportional to h²).
Global truncation error: Total accumulated error after many steps (proportional to h).
For n steps over interval [x₀, x_f]:
- n = (x_f - x₀)/h
- Global error ≈ n·(h²) = (x_f - x₀)·h
So global error scales linearly with step size.
This is why Euler's method is "first-order"—error scales as h¹.
Instability and Divergence
Euler's method can become unstable for certain equations and step sizes.
Example: dy/dx = -10y, y(0) = 1
Exact solution: y = e^(-10x) (rapid decay).
If h is too large, Euler's method can overshoot and oscillate wildly.
Stability condition: For dy/dx = λy, Euler's method is stable if |1 + hλ| < 1.
For λ = -10, this requires h < 0.2. If h = 0.3, the method diverges.
Stiff equations (where solution decays rapidly) are particularly problematic for Euler's method. Implicit methods (backward Euler, trapezoidal) handle stiffness better.
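The divergence is easy to see numerically. For dy/dx = -10y, each Euler step multiplies y by (1 + hλ) = (1 - 10h), so the behavior depends entirely on that factor (a sketch; the helper name `euler` is mine):

```python
def euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)
        x = x + h
    return y

f = lambda x, y: -10 * y  # exact solution e^(-10x) decays to 0

# h = 0.1: |1 - 10*0.1| = 0 < 1, stable (y collapses to 0 immediately)
print(euler(f, 0.0, 1.0, 0.1, 10))
# h = 0.3: |1 - 10*0.3| = 2 > 1, each step DOUBLES the magnitude
print(euler(f, 0.0, 1.0, 0.3, 10))  # (-2)^10 = 1024: divergence
```

With h = 0.3 the approximation reaches 1024 after ten steps while the true solution is essentially zero—a dramatic failure from a step size only slightly too large.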
When Euler's Method Fails
Problem 1: Stiff equations (rapid decay or growth).
Solution: Use implicit methods.
Problem 2: Chaotic systems (sensitive dependence on initial conditions).
Solution: Use higher-order methods with very small step size, or adaptive step size.
Problem 3: Long-time integration (accumulation of error).
Solution: Use higher-order methods (Runge-Kutta), symplectic integrators (for Hamiltonian systems).
Euler's method is great for quick approximations, not for precision or long-term integration.
Improved Euler (Heun's Method)
A simple improvement: average the slopes at the beginning and end of the step.
Standard Euler: y_{i+1} = y_i + h·f(x_i, y_i)
Improved Euler:
- Predict: ỹ_{i+1} = y_i + h·f(x_i, y_i) (an Euler step)
- Correct: y_{i+1} = y_i + (h/2)[f(x_i, y_i) + f(x_{i+1}, ỹ_{i+1})] (average of the two slopes)
This is second-order—error scales as h², not h. Much more accurate for the same step size.
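The predictor-corrector pair translates directly to code. A sketch (the function name `heun` is mine):

```python
import math

def heun(f, x0, y0, h, n):
    """Improved Euler (Heun's method): predict with Euler, correct with the average slope."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)                  # slope at the start of the step
        y_pred = y + h * k1           # predictor: plain Euler step
        k2 = f(x + h, y_pred)         # slope at the predicted endpoint
        y = y + (h / 2) * (k1 + k2)   # corrector: average the two slopes
        x = x + h
    return y

f = lambda x, y: y
print(heun(f, 0.0, 1.0, 0.25, 4))  # ~2.6949 vs. Euler's 2.4414 (exact: e ≈ 2.7183)
```

With the same four steps of h = 0.25, the error drops from about 0.28 to about 0.02—an order of magnitude better for only one extra slope evaluation per step.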
Runge-Kutta Methods
The most famous improvement: fourth-order Runge-Kutta (RK4).
Instead of one slope evaluation per step, RK4 uses four:
- k₁ = f(x_i, y_i)
- k₂ = f(x_i + h/2, y_i + hk₁/2)
- k₃ = f(x_i + h/2, y_i + hk₂/2)
- k₄ = f(x_i + h, y_i + hk₃)
Then: y_{i+1} = y_i + (h/6)(k₁ + 2k₂ + 2k₃ + k₄)
This is fourth-order: error scales as h⁴. For the same step size, RK4 is vastly more accurate than Euler.
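The four-slope recipe above is mechanical to implement. A sketch (the function name `rk4` is mine):

```python
import math

def rk4(f, x0, y0, h, n):
    """Classic fourth-order Runge-Kutta: four slope evaluations per step."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)  # weighted average
        x = x + h
    return y

f = lambda x, y: y
approx = rk4(f, 0.0, 1.0, 0.25, 4)
print(approx, abs(math.e - approx))  # error on the order of 1e-4 with just 4 steps
```

Compare: the same four steps of h = 0.25 that left Euler with a 10% error leave RK4 within roughly 0.0001 of e.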
RK4 is the workhorse of numerical ODE solving. Most scientific computing uses RK4 or adaptive variants.
Adaptive Step Size
Instead of fixed h, adjust step size based on local error estimate.
Idea: Take two steps of size h/2 and one step of size h. Compare results. If they differ significantly, error is large—reduce h. If they're close, increase h.
Adaptive methods (like Runge-Kutta-Fehlberg) automatically balance accuracy and efficiency.
This is what scipy.integrate.solve_ivp and MATLAB's ode45 do under the hood.
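The step-doubling idea can be sketched with Euler as the base method. This is a toy illustration of the control logic, not production code; the function names and the tolerance-adjustment rules are my own choices:

```python
def euler_step(f, x, y, h):
    # one explicit Euler step of size h
    return y + h * f(x, y)

def adaptive_euler(f, x, y, x_end, h, tol=1e-4):
    """Step-doubling error control: compare one step of size h
    against two steps of size h/2, and adjust h accordingly."""
    while x_end - x > 1e-12:
        h = min(h, x_end - x)
        big = euler_step(f, x, y, h)                  # one full step
        mid = euler_step(f, x, y, h / 2)              # first half step
        small = euler_step(f, x + h / 2, mid, h / 2)  # second half step
        err = abs(big - small)                        # local error estimate
        if err > tol:
            h = h / 2        # too much error: retry with a smaller step
        else:
            x, y = x + h, small                       # accept the better estimate
            if err < tol / 4:
                h = h * 2    # plenty of headroom: try a larger step next
    return y

f = lambda x, y: y
print(adaptive_euler(f, 0.0, 1.0, 1.0, 0.1))  # close to e, with h chosen automatically
```

Real adaptive solvers use embedded Runge-Kutta pairs instead of step doubling, but the control loop—estimate local error, shrink or grow h—is the same idea.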
Systems of ODEs
Euler's method extends trivially to systems.
For a system dy/dx = f(x, y, z) and dz/dx = g(x, y, z):

y_{i+1} = y_i + h·f(x_i, y_i, z_i)
z_{i+1} = z_i + h·g(x_i, y_i, z_i)
Update all variables simultaneously at each step.
Second-order ODEs reduce to systems of first-order, so Euler's method (and its descendants) handle them too.
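As a concrete case, the harmonic oscillator y'' = -y becomes the system y' = v, v' = -y. A sketch (the function name `euler_system` is mine):

```python
# y'' = -y rewritten as the first-order system y' = v, v' = -y.
# Exact solution with y(0) = 1, v(0) = 0: y = cos(t), v = -sin(t).
def euler_system(y0, v0, h, n):
    y, v = y0, v0
    for _ in range(n):
        # Tuple assignment evaluates both right-hand sides with the
        # OLD values, so both variables update simultaneously.
        y, v = y + h * v, v + h * (-y)
    return y, v

# integrate from t = 0 to t ~ pi with h = 0.001
y, v = euler_system(1.0, 0.0, 0.001, 3142)
print(y, v)  # close to cos(pi) = -1 and -sin(pi) = 0, with slight drift
```

Note the simultaneous update: computing v_{i+1} from the already-updated y_{i+1} would be a different (and subtly wrong) scheme. The slight amplitude drift you see is Euler's characteristic failure on oscillatory problems—each step spirals slightly outward.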
Example: Predator-Prey System
Lotka-Volterra equations:
dx/dt = αx - βxy (prey growth, predation)
dy/dt = δxy - γy (predator growth, death)
This is a system of two ODEs with no closed-form solution in elementary functions.
Euler's method (or RK4) gives numerical approximation. With right parameters, you see oscillating populations—prey increases, predators increase, prey crashes, predators crash, repeat.
Numerical methods reveal dynamics that analytical methods can't access.
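A minimal Euler simulation makes the oscillation visible. The parameter values and initial populations below are illustrative choices, not from the text:

```python
# Euler integration of the Lotka-Volterra system.
# Parameters (alpha, beta, delta, gamma) are illustrative assumptions.
alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5

def step(x, y, h):
    dx = alpha * x - beta * x * y   # prey: growth minus predation
    dy = delta * x * y - gamma * y  # predators: growth minus death
    return x + h * dx, y + h * dy

x, y, h = 10.0, 5.0, 0.001
for i in range(20000):              # integrate to t = 20
    x, y = step(x, y, h)
    if i % 5000 == 0:
        print(f"t = {i*h:5.1f}  prey = {x:7.2f}  predators = {y:6.2f}")
```

With a small enough h the printed populations rise and fall out of phase—the boom-and-crash cycle described above. (Plain Euler slowly inflates the cycles; RK4 tracks them much more faithfully.)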
Practical Implementation
Python example:

```python
def euler_method(f, x0, y0, h, n):
    """Approximate y at x0 + n*h for dy/dx = f(x, y), y(x0) = y0."""
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)
        x = x + h
    return x, y

# Example: dy/dx = y, y(0) = 1
f = lambda x, y: y
x, y = euler_method(f, 0, 1, 0.1, 10)
print(f"y(1) ≈ {y}")  # approximation of e ≈ 2.7183 (gives ~2.5937)
```
Simple, modular, extensible.
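If you want the full sequence of points promised in the algorithm's output—say, for plotting—a small variant stores every step (a sketch; the name `euler_trajectory` is mine):

```python
def euler_trajectory(f, x0, y0, h, n):
    """Like the endpoint-only version, but keeps every intermediate point."""
    points = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)
        x = x + h
        points.append((x, y))
    return points

pts = euler_trajectory(lambda x, y: y, 0.0, 1.0, 0.25, 4)
for x, y in pts:
    print(f"x = {x:.2f}, y = {y:.6f}")  # reproduces the worked example's table
```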
Why Learn Euler First
Euler's method is pedagogically perfect:
- Intuitive: Follow the slope field, step by step
- Simple: Three lines of code
- General: Works for any ODE
- Foundation: All other methods are refinements
Once you understand Euler, higher-order methods (RK4, multistep methods, implicit methods) are variations on the theme.
You learn the core concept—discretize time, approximate locally, iterate—without mathematical overhead.
Limitations Summary
Euler's method is:
- First-order (error proportional to h)
- Unstable for stiff equations
- Inaccurate for long-time integration
- Inefficient compared to higher-order methods
But it's:
- Simple to implement
- Intuitive to understand
- Universally applicable
- Foundation for all numerical ODE methods
For production work, use RK4 or adaptive methods. For learning and quick prototyping, Euler's method is perfect.
Next: Bernoulli equations, a nonlinear ODE type solvable by clever substitution.
Part 8 of the Differential Equations series.
Previous: The Characteristic Equation: Turning Calculus into Algebra
Next: The Bernoulli Equation: A Clever Substitution