1.6. Initial Value Problem for ODE: single-step methods#
The solution of an ODE is computed step by step. In a single-step method, the solution at the next step depends only on the current one. In a multi-step method, the solution at the next step is calculated from several previous steps. A multi-step approach is generally more accurate; you can think of it as a construct similar to the higher-order derivatives treated previously.
For first-order ODEs with the general form:
\[ \frac{dy}{dx} = f(x, y) \]
one initial condition is needed to find a unique solution! Without it, there would be an infinite number of possible solutions. The initial condition is:
\[ y(x_0) = y_0 \]
Forward (Explicit) Euler#
The simplest initial-value numerical integration method is Forward Euler, which is an explicit method. Although simple, it contains the same basic characteristics as more advanced and accurate methods. It looks as:
\[ y_{i+1} = y_i + \Delta x \cdot \text{slope} \]
where \(\Delta x\) is the step size and the slope is a constant that approximates the rate of change of \(y\) with respect to \(x\) (a.k.a. the derivative) in the interval \(x_i\) to \(x_{i+1}\). The solution starts at \(i=0\), given by the initial condition; then \(i\) is increased to 1, where the values are calculated using the previous equation. This loop continues until the points cover the desired domain. The computation of the slope is the key difference between single-step methods. For Forward Euler, the slope is computed with the values at the current step:
\[ y_{i+1} = y_i + \Delta x \, f(x_i, y_i) \]

Fig. 1.11 Illustrating the forward Euler method#
Forward Euler Example#
Consider the following equation:
\[ \frac{dy}{dx} = -\alpha y, \qquad \alpha = 1 \]
We want to know the solution in the domain \([a=0,b=30]\). The initial condition is \(y(x_0)=1\) and the step size is \(\Delta x = 0.2\).
The discretization of the differential equation transforms the problem into an algebraic one. Following the formula of explicit Euler, the discretization yields:
\[ y_{i+1} = y_i + \Delta x \left(-\alpha y_i\right) \]
In the following code, Forward Euler is implemented.
import numpy as np
import matplotlib.pyplot as plt

dx = 0.2
x = np.arange(0, 30 + dx, dx)
y = np.zeros(len(x))

##-----------------------------
## Forward Euler Implementation
##-----------------------------
alpha = 1
y[0] = 1
for i in range(len(x) - 1):
    y[i + 1] = y[i] + dx * (-alpha * y[i])
##------------------------------

y_exact = np.exp(-alpha * x)

plt.plot(x, y_exact)
plt.plot(x, y)
plt.legend(['exact solution', 'Forward Euler solution'])

Forward Euler is a finite difference method: it uses a grid-based approach and computes the solution at the nodes.
Can you show how Forward Euler is derived from the finite difference method?
Hint: use the forward difference that we derived earlier in the Taylor Series Expansion chapter.
Solution
In the Taylor expansion chapter we derived the forward difference for a first derivative:
\[ \left.\frac{dy}{dx}\right|_{x_i} \approx \frac{y_{i+1} - y_i}{\Delta x} \]
Substituting this into the ODE \(\frac{dy}{dx} = f(x,y)\) evaluated at \(x_i\) and solving for \(y_{i+1}\) gives exactly the Forward Euler formula:
\[ y_{i+1} = y_i + \Delta x \, f(x_i, y_i) \]
Use the forward Euler to approximate the solution to the initial-value problem.
with \(\Delta t=0.5\)
Solution
The grid points are \(t_0= 0\), \(t_1= 0.5\), \(t_2= 1\).
Use the Forward Euler update \(y_{i+1} = y_i + \Delta t \, f(t_i, y_i)\):
For \(i = 0\): \(y_1 = y_0 + \Delta t \, f(t_0, y_0)\)
For \(i = 1\): \(y_2 = y_1 + \Delta t \, f(t_1, y_1)\)
The approximated solution is \(y_2 = 2.625\).
Error analysis#
There are two types of errors: round-off and truncation. The round-off error is due to the computer's limited precision when representing a floating-point (decimal) number. To illustrate it, consider the following: the difference between 1 and 0.9 is 0.1, so subtracting the computed difference from 0.1 should give exactly 0, even if this operation is repeated ten thousand times. The following code demonstrates this behavior, using floating-point numbers of varying precision.
def accumulated_error(a, b, solution, iterations):
    # round-off error made in a single subtraction a - b
    error = solution - np.abs(a - b)
    error_accumulated = 0
    for i in range(iterations):
        error_accumulated = error_accumulated + error
    return error_accumulated

print('Error using 16 bits of memory = ', accumulated_error(np.float16(1.), np.float16(0.9), np.float16(0.1), 10000))
print('Error using 32 bits of memory = ', accumulated_error(np.float32(1.), np.float32(0.9), np.float32(0.1), 10000))
print('Error using 64 bits of memory = ', accumulated_error(np.float64(1.), np.float64(0.9), np.float64(0.1), 10000))
Error using 16 bits of memory = -1.220703125
Error using 32 bits of memory = -0.00022351741790771484
Error using 64 bits of memory = 2.7755575615628914e-13
As you can see, even this simple operation can give discernible errors. Imagine a computation spanning 100 years with a time step of one second and more complex operations: the round-off error will be present! It can be reduced by increasing the precision, i.e., the number of digits used to represent numbers, but be careful: this is not free, as both the memory the computer uses and the computation time increase.
The truncation error is related to the method chosen to approximate the slope. It can be obtained using our reliable TSE, as this gives the exact solution. Therefore, the truncation error per step is
\[ \mathcal{O}(\Delta x^2) \]
The total truncation error accounts for the number of steps \(n = (b-a)/\Delta x\):
\[ n \cdot \mathcal{O}(\Delta x^2) = \frac{b-a}{\Delta x} \, \mathcal{O}(\Delta x^2) = \mathcal{O}(\Delta x) \]
Owing to this first-order total truncation error, Forward Euler is referred to as a first-order method.
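We can verify this behavior with a quick numerical check on the model problem \(dy/dx = -\alpha y\) used throughout this section (a minimal sketch; the helper name one_step_error is just illustrative): halving \(\Delta x\) reduces the single-step error by roughly a factor of four, consistent with \(\mathcal{O}(\Delta x^2)\) per step.

import numpy as np

def one_step_error(dx, alpha=1.0, y0=1.0):
    # one Forward Euler step vs. the exact solution y0*exp(-alpha*dx)
    y1 = y0 + dx * (-alpha * y0)
    return abs(y0 * np.exp(-alpha * dx) - y1)

for dx in [0.4, 0.2, 0.1, 0.05]:
    # error/dx**2 tends to a constant (about 0.5 here), confirming O(dx^2) per step
    print(f"dx = {dx:5.2f}: error = {one_step_error(dx):.2e}, error/dx^2 = {one_step_error(dx) / dx**2:.3f}")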
Stability#
The error introduced in each step of the numerical solution ideally does not increase as the solution advances. For a well-posed problem and a proper solution method, the error is expected to reduce with smaller steps. In some cases, however, the error increases without bound as the solution advances (even with smaller steps): the solution becomes unstable. Stability depends on the numerical method, the step size and the behavior of the differential equation. Therefore, the stability conditions will differ when applying the same numerical method to different equations.
Let's consider the stability of Forward Euler applied to a more general form of the problem above:
\[ \frac{dy}{dx} = -\alpha y \]
With initial condition \(y(0)=1\) and \(\alpha>0\), the exact solution is:
\[ y = e^{-\alpha x} \]
The Forward Euler equivalent is
\[ y_{i+1} = y_i + \Delta x \left(-\alpha y_i\right) = \left(1 - \alpha \Delta x\right) y_i \]
Following the initial steps of the numerical solution, a pattern arises:
\[ y_n = \left(1 - \alpha \Delta x\right)^n y_0 \]
Comparing this last equation with the exact solution, it can be seen that the term \((1-\alpha \Delta x)^n\) in the numerical solution approximates the term \(e^{-\alpha x_n}\) in the exact solution. The latter decays for larger values of \(x_n\) and positive \(\alpha\). To make sure that the term \((1-\alpha \Delta x)^n\) also decays for larger \(n\), the absolute value of \(1-\alpha \Delta x\) should be less than 1, i.e.,
\[ \left|1 - \alpha \Delta x\right| < 1 \]
Click here for a more detailed derivation
Write the criterion as
\[ -1 < 1 - \alpha \Delta x < 1 \]
Subtract 1 from both sides
\[ -2 < -\alpha \Delta x < 0 \]
Divide by \(-1\) (which flips the inequalities) and then by \(\alpha > 0\)
\[ 0 < \alpha \Delta x < 2 \quad \Rightarrow \quad \Delta x < \frac{2}{\alpha} \]
This is the stability criterion. If it is not satisfied, the solution will be unstable. It is said that Forward Euler is conditionally stable! This is true for every explicit numerical method.
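A small sketch makes the criterion tangible: the factor \(1-\alpha \Delta x\) multiplies the solution at every step, so its absolute value decides between decay and blow-up (the step sizes below are illustrative).

alpha = 1
for dx in [0.5, 1.5, 2.5]:
    factor = 1 - alpha * dx
    # after n steps the numerical solution equals factor**n times y0
    status = 'stable' if abs(factor) < 1 else 'unstable'
    print(f"dx = {dx}: 1 - alpha*dx = {factor:+.1f} -> {status}")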
Let's go back to the last example described in the code above.
We want to know the solution in the domain \([a=0,b=30]\). The initial condition is \(y(x_0)=1\) and the step size is \(\Delta x = 0.2\).
What is the step size at which the problem described using the Forward Euler becomes unstable? Feel free to use the interactive figure below and change dx to test what happens when the step size becomes larger.
Click the rocket icon → Live Code to interact with the plot below.
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact
import ipywidgets as widgets

def forward_euler(dx):
    # Define the x range and initialize y
    x = np.arange(0, 30 + dx, dx)
    y = np.zeros(len(x))

    # Forward Euler Implementation
    alpha = 1
    y[0] = 1
    for i in range(len(x) - 1):
        y[i + 1] = y[i] + dx * (-alpha * y[i])

    # Exact solution on a fine grid
    x_exact = np.arange(0, 30 + dx, 0.2)
    y_exact = np.exp(-alpha * x_exact)

    # Plot both solutions
    plt.figure(figsize=(8, 5))
    plt.plot(x_exact, y_exact, label='Exact solution', linestyle='--')
    plt.plot(x, y, label='Forward Euler solution', marker='o')
    plt.legend()
    plt.title(f"Forward Euler with dx = {dx}")
    plt.xlabel('x')
    plt.ylabel('y')
    plt.grid(True)
    plt.show()

# Create an interactive slider for dx
interact(forward_euler, dx=widgets.FloatSlider(value=0.2, min=0.01, max=3.0, step=0.01, description='dx'));
Solution
You can see in the interactive figure above that when the step size becomes larger, the Forward Euler solution becomes less accurate. You can also see that when the step size becomes larger than 2, the Forward Euler solution starts to blow up. This is because of the stability criterion of Forward Euler:
\[ \Delta x < \frac{2}{\alpha} \]
When \(\alpha=1\), this gives \(\Delta x < 2\). (Note that \(\Delta x\) cannot be negative and will therefore always be \(> 0\).)
Backward (Implicit) Euler#
Just as Forward Euler is the simplest explicit method, Backward Euler is the simplest implicit method. They look very similar, but the slope is computed at the next step! That's where the term implicit comes from: the slope depends on the unknown value.
Let's consider the same problem as before:
\[ \frac{dy}{dx} = -\alpha y \]
We want to know the solution in the domain \([a=0,b=30]\). The initial condition is \(y(x_0)=1\) and the step size is \(\Delta x = 0.2\).
The discretization of the differential equation transforms the problem into an algebraic one. Following the formula of implicit Euler, the discretization yields:
\[ y_{i+1} = y_i + \Delta x \left(-\alpha y_{i+1}\right) \]
then
\[ y_{i+1} \left(1 + \alpha \Delta x\right) = y_i \]
finally
\[ y_{i+1} = \frac{y_i}{1 + \alpha \Delta x} \]
In the following code, Implicit Euler is implemented.
dx = 0.2
x = np.arange(0, 30 + dx, dx)
y = np.zeros(len(x))

##-----------------------------
## Backward Euler Implementation
##-----------------------------
alpha = 1
y[0] = 1
for i in range(len(x) - 1):
    y[i + 1] = y[i] / (1 + dx * alpha)
##------------------------------

y_exact = np.exp(-alpha * x)

plt.plot(x, y_exact)
plt.plot(x, y)
plt.legend(['exact solution', 'Backward Euler solution'])

You can see that the result is similar to Forward Euler, except that the curve for Backward Euler lies slightly above the exact solution. This makes sense: the slope is evaluated at the end of the interval, where the decaying solution is flatter, so the magnitude of the slope is initially underestimated, whereas the explicit solution initially overestimated it.
The round-off and truncation errors are similar: Backward Euler is also a first-order method. But what about the stability?
Modify the step size in the Implicit Euler code and try to make the solution unstable. What do you notice?
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact
import ipywidgets as widgets

def backward_euler(dx):
    # Define the x range and initialize y
    x = np.arange(0, 30 + dx, dx)
    y = np.zeros(len(x))

    ##-----------------------------
    ## Backward Euler Implementation
    ##-----------------------------
    alpha = 1
    y[0] = 1
    for i in range(len(x) - 1):
        y[i + 1] = y[i] / (1 + dx * alpha)
    ##------------------------------

    # Exact solution on a fine grid
    x_exact = np.arange(0, 30 + dx, 0.2)
    y_exact = np.exp(-alpha * x_exact)

    # Plot both solutions
    plt.figure(figsize=(8, 5))
    plt.plot(x_exact, y_exact, label='Exact solution', linestyle='--')
    plt.plot(x, y, label='Backward Euler solution', marker='o')
    plt.legend()
    plt.title(f"Backward Euler with dx = {dx}")
    plt.xlabel('x')
    plt.ylabel('y')
    plt.grid(True)
    plt.show()

# Create an interactive slider for dx
interact(backward_euler, dx=widgets.FloatSlider(value=0.2, min=0.01, max=5.0, step=0.01, description='dx'));
Let's understand what happens by following the same procedure as for the stability analysis of explicit Euler.
The exact solution is the same:
\[ y = e^{-\alpha x} \]
For the numerical solution, the pattern is slightly different:
\[ y_n = \left(\frac{1}{1 + \alpha \Delta x}\right)^n y_0 \]
This last term to the power \(n\) approximates the decaying exponential in the exact solution, just like in the explicit Euler method. Here, the only condition that must be satisfied for stability is
\[ \left|\frac{1}{1 + \alpha \Delta x}\right| < 1 \]
This can be expressed as:
\[ -1 < \frac{1}{1 + \alpha \Delta x} < 1 \]
Let's first look at the right-side condition \(\frac{1}{1+\alpha \Delta x} < 1\).
If we multiply by the term \(1+\alpha \Delta x\) (which is positive), we get:
\[ 1 < 1 + \alpha \Delta x \quad \Rightarrow \quad 0 < \alpha \Delta x \]
As \(\alpha\) is positive and the step size must be larger than 0, the right side of the criterion is always true.
Let's now look at the left-side condition \(-1 <\frac{1}{1+\alpha \Delta x}\):
We again multiply by \(1+\alpha \Delta x\):
\[ -\left(1 + \alpha \Delta x\right) < 1 \]
The left-hand side is always negative, so this left-side criterion always holds as well.
Hence, an implicit Backward Euler scheme is unconditionally stable. This is also the case for more advanced implicit schemes.
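This can also be checked numerically: the amplification factor \(1/(1+\alpha \Delta x)\) of Backward Euler stays below 1 in absolute value no matter how large the step size (a quick illustrative check, reusing \(\alpha=1\) from the example above).

alpha = 1
for dx in [0.2, 2.0, 10.0, 100.0]:
    factor = 1 / (1 + alpha * dx)
    # |factor| < 1 for every positive step size: unconditionally stable
    print(f"dx = {dx}: amplification factor = {factor:.4f}")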
Global errors#
The global error is the sum of the round-off and total truncation errors. The plot below displays the global errors of both the Forward and Backward Euler methods, alongside a reference line with the slope of the truncation error \(\mathcal{O}(\Delta x)\) (slope 1 in the log-log plot). As shown, the global errors of both methods align with the expected slope, indicating that the errors decrease in accordance with the methods' first-order accuracy.
import numpy as np
import matplotlib.pyplot as plt

def global_error(dx_values):
    errors_forward = []
    errors_backward = []
    for dx in dx_values:
        x = np.arange(0, 30 + dx, dx)
        y_forward = np.zeros(len(x))
        y_backward = np.zeros(len(x))
        alpha = 1

        y_forward[0] = 1
        for i in range(len(x) - 1):
            y_forward[i + 1] = y_forward[i] + dx * (-alpha * y_forward[i])

        y_backward[0] = 1
        for i in range(len(x) - 1):
            y_backward[i + 1] = y_backward[i] / (1 + dx * alpha)

        y_exact = np.exp(-alpha * x)
        n = (30 - 0) / dx

        # Compute the global error (L2 norm) for both methods
        error_forward = np.sqrt((1 / (n - 1)) * np.sum((y_forward - y_exact) ** 2))
        error_backward = np.sqrt((1 / (n - 1)) * np.sum((y_backward - y_exact) ** 2))
        errors_forward.append(error_forward)
        errors_backward.append(error_backward)

    # Plot the global error vs. dx in a loglog plot for both methods
    plt.figure(figsize=(8, 5))
    plt.loglog(dx_values, errors_forward, label='Forward Euler Global error', marker='o')
    plt.loglog(dx_values, errors_backward, label='Backward Euler Global error', marker='s')
    plt.loglog(dx_values, [dx**1 for dx in dx_values], label=r'$\mathcal{O} (\Delta x)$', linestyle='--')
    plt.title(r"Global error comparison: Forward vs Backward Euler")
    plt.xlabel(r'$\Delta x$')
    plt.ylabel('Global error')
    plt.grid(True)
    plt.legend()
    plt.show()

dx_values = [0.00125, 0.0025, 0.005, 0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64]

# Call the function to plot the global error for both Forward and Backward Euler
global_error(dx_values)

Nonlinear ODE#
In the previous example, the explicit and implicit implementations seem to differ only in their formulation. However, this changes when treating nonlinear equations. Let's look at an example to identify the "implicitness".
A certain pollutant \(p\) decays over time at a rate proportional to its concentration to the power \(3/2\). There is a second mechanism that produces \(p\) over time and scales with \(p_{cst}=1000\), where \(cst\) indicates a constant:
\[ \frac{dp}{dt} = -p^{3/2} + 5 \, p_{cst} \left(1 - e^{-t}\right) \]
where \(p(t)\) is the pollutant concentration in time. The initial concentration is \(p(t=0)=p_0=1000\). Solve the equation from \(t=0\) until \(t=1\) with a time step \(\Delta t=0.0002\text{ s}\). Use first the explicit Forward Euler and then the implicit Backward Euler.
The expression for the explicit Forward Euler becomes:
\[ p_{i+1} = p_i + \Delta t \left( -p_i^{3/2} + 5 \, p_{cst} \left(1 - e^{-t_i}\right) \right) \]
The algorithm to solve the explicit Forward Euler is exactly the same as before.
import numpy as np
import matplotlib.pyplot as plt

dt = 0.0002
t = np.arange(0, 1 + dt, dt)
p = np.zeros(len(t))

##-----------------------------
## Explicit Euler Implementation
##-----------------------------
p_cst = 1000
p[0] = p_cst
for i in range(len(t) - 1):
    p[i + 1] = p[i] + dt * (-p[i]**(3/2) + 5 * p_cst * (1 - np.exp(-t[i])))
##------------------------------

plt.plot(t, p)
plt.legend(['Forward Euler solution'])

Now, using the implicit Backward Euler, the expression becomes:
\[ p_{i+1} = p_i + \Delta t \left( -p_{i+1}^{3/2} + 5 \, p_{cst} \left(1 - e^{-t_{i+1}}\right) \right) \]
Obtaining the solution requires a different procedure because the unknown \(p_{i+1}\) now appears on both sides of the equation and cannot be isolated directly; instead, an iterative procedure is needed! For example, a widely used method like Newton-Raphson.
Newton-Raphson Method#
In the Observation Theory week, you used Gauss-Newton to solve a nonlinear least-squares problem. That method is actually an extension of the Newton-Raphson method. Here, it is described with an emphasis on the tricky aspects. Also, the algorithm is summarized and its main advantages and disadvantages are mentioned.
Think of an equation \(g(z)=0\). Its solution is graphically represented as the intersection of \(g(z)\) with the \(z\) axis (see the figure below). Numerically, that solution can be found by selecting an initial guess \(z_0\), estimating the tangent (slope) \(g'(z_0)\) and finding its intersection with the \(z\) axis, which gives the value \(z_1\). Now the solution is closer. We repeat this process to find \(z_2\), but now the guess is \(z_1\). The process is repeated until the value \(g(z_*)\) is very small: \(|g(z_*)|<\epsilon\). Here, \(\epsilon\) is the tolerance, e.g., \(\epsilon=10^{-6}\) (see the red square in the figure below).
Fig. 1.12 Illustrating the Newton-Raphson method#
Some remarks:#
Finding the point \(z_{j+1}\) where the tangent crosses the \(z\)-axis is done using a first-order approximation of \(g\) around \(z_j\):
\[ g(z_{j+1}) \approx g(z_j) + g'(z_j)\left(z_{j+1} - z_j\right) = 0 \]
solving for \(z_{j+1}\) gives the expression
\[ z_{j+1} = z_j - \frac{g(z_j)}{g'(z_j)} \]
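As a minimal sketch of this update formula in Python (the function name newton_raphson and the example equation are illustrative, not part of any library):

def newton_raphson(g, dg, z0, eps=1e-6, max_iter=100):
    # iterate z_{j+1} = z_j - g(z_j)/g'(z_j) until |g(z)| < eps
    z = z0
    for _ in range(max_iter):
        if abs(g(z)) < eps:
            return z
        z = z - g(z) / dg(z)
    raise RuntimeError('Newton-Raphson did not converge')

# Example: the root of z**2 - 2 = 0 starting from z0 = 1 is sqrt(2) = 1.41421...
print(newton_raphson(lambda z: z**2 - 2, lambda z: 2 * z, z0=1.0))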
Look at the implicit Backward Euler approximation of the pollutant problem. We can rewrite it as \(g(z)=q(p_{i+1})=0\) as follows:
\[ q(p_{i+1}) = p_{i+1} - p_i - \Delta t \left( -p_{i+1}^{3/2} + 5 \, p_{cst} \left(1 - e^{-t_{i+1}}\right) \right) = 0 \]
and \(g'(z)=q'(p_{i+1})\) is its derivative:
\[ q'(p_{i+1}) = 1 + \frac{3}{2} \Delta t \, p_{i+1}^{1/2} \]
Finally, the expression used to iterate is
\[ p_{i+1}^{j+1} = p_{i+1}^j - \frac{q(p_{i+1}^j)}{q'(p_{i+1}^j)} \]
Notice that the subscript \(i\) relates to the time steps of the Backward Euler method and the superscript \(j\) relates to the Newton-Raphson iteration, where \(p_{i+1}^j\) with \(j=0\) is the initial guess (\(p_{i+1}^0\))! The solution is found and the iteration stops at \(j=m\), once \(|q(p_{i+1}^m)|<\epsilon\).
The Newton-Raphson method:
Converges quickly and can provide highly accurate approximations.
It can be used for both real and complex roots.
But the method also has its limitations:
It may not converge if the initial guess is far from the actual root or if the function is discontinuous near it.
The derivative of the function is needed, which may not always be known.
Pseudo code: The Backward-Euler-Newton-Raphson Algorithm#
Backward Euler Scheme:#
The update equation for \(p_{i+1}\) is given by:
\[ p_{i+1} = p_i + \Delta t \left( -p_{i+1}^{3/2} + 5 \, p_{\text{cst}} \left(1 - e^{-t_{i+1}}\right) \right) \]
We solve for \(p_{i+1}\) at each time step, for \(i = 0, 1, \dots, n-1\), using Newton-Raphson.
Newton-Raphson Method:#
The iteration for \(p_{i+1}\) is as follows:
Initialize: \(p_{i+1}^{(0)} = p_i\)
Set \(j = 0\)
while \(\left| q(p_{i+1}^{(j)}) \right| > \epsilon\) do:
Update \(p_{i+1}^{(j+1)}\) using the Newton-Raphson formula:
\[ p_{i+1}^{(j+1)} = p_{i+1}^{(j)} - \frac{p_{i+1}^{(j)} - p_i - \Delta t \left( - (p_{i+1}^{(j)})^{3/2} + 5 \cdot p_{\text{cst}} \cdot (1 - e^{-t_{i+1}}) \right)} {1 + \frac{3}{2} \Delta t \cdot (p_{i+1}^{(j)})^{1/2}} \]
Increment \(j = j + 1\)
Set \(p_{i+1} = p_{i+1}^{(j)}\); this is the Backward Euler solution at \(t_{i+1}\), so we can advance to the next time step and repeat.
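Putting the pieces together, the sketch below implements the pseudo code for the pollutant problem, reusing \(\Delta t\), \(p_{cst}\) and the initial condition from the explicit code above (the helper names q and dq are illustrative):

import numpy as np
import matplotlib.pyplot as plt

dt = 0.0002
t = np.arange(0, 1 + dt, dt)
p = np.zeros(len(t))
p_cst = 1000
p[0] = p_cst
eps = 1e-6  # Newton-Raphson tolerance

for i in range(len(t) - 1):
    # q(z) = 0 restates the Backward Euler update with z = p_{i+1} unknown
    def q(z):
        return z - p[i] - dt * (-z**(3/2) + 5 * p_cst * (1 - np.exp(-t[i + 1])))

    def dq(z):
        return 1 + (3/2) * dt * z**(1/2)

    # Newton-Raphson iteration, starting from the previous value as initial guess
    p_new = p[i]
    while abs(q(p_new)) > eps:
        p_new = p_new - q(p_new) / dq(p_new)
    p[i + 1] = p_new

plt.plot(t, p)
plt.legend(['Backward Euler solution'])
plt.show()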