
## Taylor method with automatic differentiation – 2

In the previous post, we talked about the Taylor method for solving ordinary differential equations (ODEs) numerically. All the derivatives in the Taylor method are computed using automatic differentiation. The drawback of that approach is that whenever we change our ODE, we have to write new code. In this post, we present a way to integrate a different ODE without changing the majority of our code.

Continuing our analysis in the last post, we are going to need operations on polynomials. We are going to define operations such as:

1. Addition of a constant and a polynomial
2. Multiplication of a constant and a polynomial
3. Addition of two polynomials
4. Multiplication of two polynomials
5. Division of a constant by a polynomial
6. Division of a polynomial by a polynomial
7. Logarithm of a polynomial
8. Exponentiation of a polynomial
9. Power of a polynomial

Operations 1 and 2 are trivial. Suppose $x(h)$ is a polynomial; then $x(h)=\sum x_i h^i$ for some real constants $x_i$. Suppose $\alpha$ is a real constant. The addition of $\alpha$ and $x(h)$ is a polynomial $y(h)$ where $y_0 = x_0 + \alpha$ and $y_i = x_i$ for all $i \geq 1$. The multiplication of $\alpha$ and $x(h)$ is a polynomial $z(h)$ where $z_i = \alpha x_i$ for all $i$.
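These two operations can be sketched in Python, storing coefficients as a list with the lowest order first (the function names here are illustrative, not necessarily those used in mytaylor.py):

```python
def addconst(alpha, x):
    # add a constant to a polynomial [x0, x1, ...]: only the constant term changes
    y = list(x)
    y[0] = y[0] + alpha
    return y

def mulconst(alpha, x):
    # multiply every coefficient by the constant alpha
    return [alpha * xi for xi in x]
```

For example, `addconst(2.0, [1.0, 3.0])` gives `[3.0, 3.0]` and `mulconst(2.0, [1.0, 3.0])` gives `[2.0, 6.0]`.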

Operations 3 and 4 are not difficult. Suppose $x(h)$ and $y(h)$ are polynomials. Then the addition of $x(h)$ and $y(h)$ is a polynomial $z(h)$ where $z_i = x_i + y_i$ for all $i$. The multiplication of $x(h)$ and $y(h)$ is a polynomial $w(h)$ where $w_i = \sum_{j=0}^{i} x_j y_{i-j}$.
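A sketch of these two operations (here `addpoli` is an assumed name in the same style as the module; truncation to a fixed order is omitted for brevity):

```python
def addpoli(x, y):
    # z_i = x_i + y_i; pad the shorter coefficient list with zeros
    n = max(len(x), len(y))
    x = x + [0.0] * (n - len(x))
    y = y + [0.0] * (n - len(y))
    return [xi + yi for xi, yi in zip(x, y)]

def multiplypoli(x, y):
    # Cauchy product: w_i = sum_{j=0}^{i} x_j * y_{i-j}
    w = [0.0] * (len(x) + len(y) - 1)
    for i in range(len(x)):
        for j in range(len(y)):
            w[i + j] += x[i] * y[j]
    return w
```

For example, multiplying $(1+h)$ by itself, `multiplypoli([1.0, 1.0], [1.0, 1.0])`, gives `[1.0, 2.0, 1.0]`, the coefficients of $1 + 2h + h^2$.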

Operations 5 and 6 are a little bit tricky. Suppose $x(h)$ is a polynomial and $\alpha$ is a constant, and we want to compute $y(h) = \alpha / x(h)$. Notice that $x(h)y(h)=\alpha$; we take advantage of this equality to find the coefficients of $y(h)$. Thus we have $y_0 = \alpha / x_0$ and $y_i = - (\sum_{j=1}^{i} x_j y_{i-j})/x_0$. For operation 6, suppose $z(h) = y(h) / x(h)$. As before, notice that $x(h)z(h) = y(h)$, so we can compute the coefficients of $z(h)$ from this equality: $z_0 = y_0 / x_0$ and $z_i = (y_i - \sum_{j=1}^{i} x_j z_{i-j} )/x_0$.
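A direct translation of these recurrences (function names illustrative; the second function assumes both inputs have the same length):

```python
def divconstpoli(alpha, x):
    # y = alpha / x(h): y_0 = alpha/x_0, y_i = -(sum_{j=1}^{i} x_j*y_{i-j}) / x_0
    y = [alpha / x[0]]
    for i in range(1, len(x)):
        s = sum(x[j] * y[i - j] for j in range(1, i + 1))
        y.append(-s / x[0])
    return y

def dividepoli(y, x):
    # z = y(h) / x(h): z_0 = y_0/x_0, z_i = (y_i - sum_{j=1}^{i} x_j*z_{i-j}) / x_0
    z = [y[0] / x[0]]
    for i in range(1, len(y)):
        s = sum(x[j] * z[i - j] for j in range(1, i + 1))
        z.append((y[i] - s) / x[0])
    return z
```

As a check, $1/(1-h) = 1 + h + h^2 + \dots$, and indeed `divconstpoli(1.0, [1.0, -1.0, 0.0, 0.0])` returns `[1.0, 1.0, 1.0, 1.0]`.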

Now, let’s consider the next operation, the logarithm of a polynomial. Suppose $x(h)$ is a polynomial and we want to find $y(h)=\log(x(h))$. Differentiating both sides with respect to $h$, we have $y'(h) = x'(h) / x(h)$. Thus, by exploiting the division operation above, we have $y_0 = \log x_0$ and $y_i = (ix_i - (\sum_{j=1}^{i-1} x_j ((i-j)y_{i-j})))/(ix_0)$.
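The recurrence translates directly (function name illustrative):

```python
import math

def logpoli(x):
    # y = log(x(h)): y_0 = log(x_0),
    # y_i = (i*x_i - sum_{j=1}^{i-1} x_j*(i-j)*y_{i-j}) / (i*x_0)
    y = [math.log(x[0])]
    for i in range(1, len(x)):
        s = sum(x[j] * (i - j) * y[i - j] for j in range(1, i))
        y.append((i * x[i] - s) / (i * x[0]))
    return y
```

As a check, `logpoli([1.0, 1.0, 0.0, 0.0])` recovers the series $\log(1+h) = h - h^2/2 + h^3/3 - \dots$.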

Now, let’s consider the exponential of a polynomial. Suppose $x(h)$ is a polynomial and we want to find $y(h)=\exp(x(h))$. Taking the logarithm of both sides and then differentiating, we have $y'(h)/y(h) = x'(h)$, or $y'(h)=y(h)x'(h)$. Therefore, by exploiting the multiplication operation, we have $y_0 = \exp x_0$ and $y_i = (\sum_{j=0}^{i-1} y_j ((i-j)x_{i-j}))/i$.
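Again, a direct translation of the recurrence (function name illustrative):

```python
import math

def exppoli(x):
    # y = exp(x(h)): y_0 = exp(x_0),
    # y_i = (sum_{j=0}^{i-1} y_j*(i-j)*x_{i-j}) / i
    y = [math.exp(x[0])]
    for i in range(1, len(x)):
        s = sum(y[j] * (i - j) * x[i - j] for j in range(0, i))
        y.append(s / i)
    return y
```

As a check, `exppoli([0.0, 1.0, 0.0, 0.0])` recovers $e^h = 1 + h + h^2/2 + h^3/6 + \dots$.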

For the last operation of this post, we consider the power of a polynomial. Suppose $x(h)$ is a polynomial, and we want to find $y(h)=(x(h))^\alpha$ for any real number $\alpha$. First, take the logarithm of both sides and then differentiate the result with respect to $h$: $y'(h)/y(h) = \alpha x'(h)/x(h)$, or $y'(h) = \alpha x'(h)y(h)/x(h)$. By exploiting multiplication and division, we get the coefficients of $y(h)$.
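Cross-multiplying to $x(h)y'(h) = \alpha x'(h)y(h)$ and equating coefficients combines the multiplication and division steps into a single recurrence, $y_0 = x_0^\alpha$ and $y_i = \big(\sum_{j=1}^{i} (\alpha j - (i-j))\,x_j y_{i-j}\big)/(i x_0)$. A sketch (function name illustrative):

```python
def powpoli(x, alpha):
    # y = x(h)**alpha via x*y' = alpha*x'*y:
    # y_0 = x_0**alpha,
    # y_i = sum_{j=1}^{i} (alpha*j - (i-j))*x_j*y_{i-j} / (i*x_0)
    y = [x[0] ** alpha]
    for i in range(1, len(x)):
        s = sum((alpha * j - (i - j)) * x[j] * y[i - j] for j in range(1, i + 1))
        y.append(s / (i * x[0]))
    return y
```

As a check, `powpoli([1.0, 1.0, 0.0], 2.0)` gives `[1.0, 2.0, 1.0]`, the coefficients of $(1+h)^2$, and `powpoli([1.0, 1.0, 0.0, 0.0], 0.5)` recovers $\sqrt{1+h} = 1 + h/2 - h^2/8 + h^3/16 - \dots$.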

To facilitate these operations, we build a Python module that handles them. We also write a separate routine to integrate an ODE. This integration code works in general, without any dependence on the particular system or its dimension.

import numpy as np

def integrate(F, y_init, t_init=0.0, tStop=1.0, h=0.05):
    dim = len(y_init)
    # y_sol[i] holds the solution history of the i-th component
    y_sol = []
    for i in range(0, dim):
        y_sol.append([])
        y_sol[i].append(y_init[i])
    t_sol = []
    t_sol.append(t_init)
    t = t_init

    while t < tStop:
        # seed each component's Taylor polynomial with the latest value
        y = []
        for i in range(0, dim):
            y.append([])
            y[i].append(y_sol[i][-1])

        # build the Taylor coefficients up to order 20
        for k in range(0, 20):
            dy = F(y)
            for i in range(0, dim):
                y[i].append(dy[i][k] / float(k + 1))

        # evaluate the Taylor polynomial at step size h
        n = len(y[0])
        for i in range(0, dim):
            temp = 0.0
            for j in range(0, n):
                temp = temp + y[i][j] * h**j
            y_sol[i].append(temp)
        t_sol.append(t + h)
        t = t + h

    return np.array(t_sol), np.transpose(np.array(y_sol))

All the operations above, together with the integration code, are written in a Python file so that whenever we need this Taylor method we can simply import it. Let’s take an example and integrate one ODE. We create another script that specifies the ODE, imports all the operations above, and then numerically integrates it.

from mytaylor import *
import matplotlib.pyplot as plt
import numpy as np
import math as m

# the right-hand side of y' = y(1 - y), written with polynomial operations
def fun(y):
    z = subtractpoli(y[0], multiplypoli(y[0], y[0]))
    return [z]

y_init = [0.5]
t_init = 0
h = 0.1
tStop = 5.0

# integrating ODE
T, Y = integrate(fun, y_init, t_init, tStop, h)

# real solution computation
real_sol = []
for t in T:
    temp = m.exp(t) / (m.exp(t) + 1.0)
    real_sol.append(temp)

# plotting the error versus time
real_sol = np.array(real_sol)
error = Y[:, 0] - real_sol
plt.plot(T, error)
plt.show()

In the above code, we numerically integrate $y' = y(1-y)$ with the initial condition $y(0)=0.5$. This is the same system we integrated in the previous post. I hope you can see the difference between this code and the code shown in the previous post: we do not need to think about the Taylor method at all; we just write down our system when we want its numerical solution. Notice that all the operations and the integration code are written in a file named mytaylor.py, which is imported in the first line of the above code. In the function fun we can write any ODE we want to integrate; however, because of the polynomial operations defined in this post, the way we write the ODE is slightly different. For example, $x+y$ cannot be written as just $x+y$ but as addpoly($x$, $y$). Once we have defined our ODE, we integrate it with a single call to integrate.

Another thing we would like to do is extend the polynomial operations; we have not yet included trigonometric functions, among others. We would also like to test this Taylor method on a stiff system. Interested readers who would like to experiment can find the two files (mytaylor.py and test_case.py) below.

Categories: python, Teachings

## Taylor method with automatic differentiation

June 14, 2016

I teach a numerical analysis course, and in one of its chapters I teach my students about the numerical solution of ordinary differential equations (ODEs). One of the numerical methods I teach is the so-called Taylor method. This method works by applying the Taylor series to the solution of the differential equation. Suppose we have a one-dimensional differential equation as follows:

$\frac{dy}{dt} = f(y)$ and $y(t_0) = y_0$.

Let’s assume first that we only consider autonomous ODEs. To find the solution at the next step, $y(t+h)$, we apply the Taylor expansion, as follows:

$y(t+h) = y(t) + y'(t) h + y''(t)\frac{h^2}{2} +y'''(t)\frac{h^3}{6}+\dots$.

Therefore, depending on how accurate we want our solution to be, all we need to do is compute the derivatives of $y(t)$. Unfortunately, computing all the derivatives is not an easy task for a computer. We would have to derive all the derivatives by hand and then enter them into the program.

In this note, I will introduce automatic differentiation as a tool to compute the derivatives of $y(t+h)$, so that we can compute $y(t+h)$ as accurately as possible. Let us consider the series expansion of $y(t+h)$ as above, but I will use a constant $a_i$ in place of the term involving the $i$-th derivative of $y(t)$ (so $a_i = y^{(i)}(t)/i!$),

$y(t+h) = a_0 + a_1 h + a_2 h^2 + \dots$.

Let us differentiate the above series with respect to $h$.

$y'(t+h) = a_1 + 2 a_2 h + 3a_3 h^2 + \dots$.

Notice that the derivative on the left-hand side of the above equation is actually equal to $f(y(t+h))$, since we know that $y(t+h)$ satisfies the ordinary differential equation. Then we have,

$f(y(t+h)) = a_1 + 2 a_2 h + 3a_3 h^2 + \dots$.

Since $y(t+h)$ is a series in $h$, $f(y(t+h))$ is also a series in $h$. Both sides of the above equation are thus series in $h$, so we can equate the coefficients of each $h^j$ to get $a_1$, $a_2$, and so on. The most difficult task in this problem is to find the series of $f(y(t+h))$, since the function $f(y)$ can be anything.

Let’s take an example. Suppose we want to integrate the following differential equation,

$\frac{dy}{dt} = y(1-y)$ and $y(0)=0.5$.

Let us choose $h=0.1$ and compute $y(0.1)$. As before, we assume that $y(0.1)$ is a series of $h$ as follows,

$y(0.1) = a_0 + a_1 h + a_2 h^2 + \dots$.

Differentiating both sides of the above equation, we get

$y'(0.1) = a_1 + 2 a_2 h + 3 a_3 h^2 + \dots$.

By the differential equation, the left-hand side above also satisfies

$y'(0.1) = y(0.1)(1-y(0.1)) = (a_0 + a_1 h + \dots)(1 - (a_0 + a_1 h + \dots)) = b_0 + b_1 h + b_2 h^2 + \dots$.

Therefore, we get,

$a_1 = b_0$, $a_2 = b_1/2$, $a_3 = b_2/3$, and so on.

The real question is how to find $b_0$, $b_1$, and so on. If the function $f(y)$ only involves operations such as addition, subtraction and multiplication, it is easy to find the $b$‘s. But if the function $f(y)$ involves operations such as division, exponentials, logarithms or trigonometric functions, then we have to think a bit harder about how to find the $b$‘s. Let us go back to the previous example. Since the only operations in this example are multiplication and subtraction, we can easily compute $b_0$, $b_1$, and so on. Using algebraic manipulation, we get the following,

$b_0 = a_0 \times (1-a_0)$, thus $a_1 =a_0 \times (1-a_0)$
$b_1 = (a_0 \times -a_1 ) + (a_1 \times (1-a_0))$, thus $a_2 =((a_0 \times -a_1 ) + (a_1 \times (1-a_0)))/2$
$b_2 = (a_0 \times -a_2 ) + (a_1 \times (-a_1))+(a_2 \times (1-a_0))$, thus $a_3 =((a_0 \times -a_2 ) + (a_1 \times (-a_1))+(a_2 \times (1-a_0)))/3$

and so on.
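As a quick numerical check of the formulas above for $y(0)=0.5$: the exact solution $y=e^t/(e^t+1)$ gives $y'(0)=0.25$, $y''(0)=0$ and $y'''(0)=-0.125$, so we expect $a_1=0.25$, $a_2=0$ and $a_3=-0.125/6$.

```python
a0 = 0.5
a1 = a0 * (1 - a0)                              # b0
a2 = (a0 * -a1 + a1 * (1 - a0)) / 2             # b1 / 2
a3 = (a0 * -a2 + a1 * -a1 + a2 * (1 - a0)) / 3  # b2 / 3
print(a1, a2, a3)  # 0.25, 0.0, and approximately -0.0208333
```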

The more $a$‘s we compute, the more accurate our solution is. Written this way, the Taylor method is easy to implement in a program. It should also be clear why this method is called automatic differentiation: we never have to derive any of the derivatives manually.

Next, I numerically integrated the previous example. Below is a graph of the error of the Taylor method with automatic differentiation versus time. As we can see from the figure, the error of our numerical integrator is very small; the solution is almost exact. The number of $a$‘s I used for this graph is 10.

I used Python to numerically integrate this example. You can find the code that produces the above figure at the end of this note. One of the drawbacks of this method is that when you change your ODE, you have to change the code as well. With other numerical integrators such as Runge–Kutta, we don’t need to change the whole code; we just change the ODE and we get the solution. Together with my student, I am trying to tackle this problem, and I hope the result will be posted here.

import math as m
import numpy as np
import matplotlib.pyplot as plt

# preliminary constants
order = 10
h = 0.1
t = 0.0
t_end = 5
y0 = 0.5

# defining solution
sol = []
sol.append(y0)
time = []
time.append(t)

# start of the program
while t < t_end:
    a = []
    a.append(sol[-1])
    # build the Taylor coefficients a_k for y' = y(1 - y)
    for k in range(1, order):
        if k == 1:
            a.append(a[k-1] * (1 - a[k-1]))
        else:
            acc = 0.0
            for j in range(0, k-1):
                acc = acc + a[j] * (-a[k-1-j])
            a.append((acc + a[k-1] * (1.0 - a[0])) / k)
    # evaluate the Taylor polynomial at step size h
    acc = 0.0
    for i in range(0, order):
        acc = acc + a[i] * h**i
    sol.append(acc)
    t = t + h
    time.append(t)
# end of the program

# real solution computation
real_sol = []
for t in time:
    temp = m.exp(t) / (m.exp(t) + 1.0)
    real_sol.append(temp)

# plotting the error versus time
sol = np.array(sol)
real_sol = np.array(real_sol)
time = np.array(time)
error = sol - real_sol
plt.plot(time, error)
plt.show()
Categories: Justincase, python

## Mixture problem in Calculus

I keep forgetting how to solve this type of problem, so when I have no clue in the future, I will know where to look.

The problem
A 200-gallon tank is half full of distilled water. At time $t=0$, a solution containing 0.5 pound/gallon of salt enters the tank at a rate of 5 gal/minute, and the well-stirred mixture is withdrawn at a rate of 3 gal/minute.
a. Give a mathematical model of the above problem.
b. At what time will the tank be full?
c. At the time the tank is full, how many pounds of salt will it contain?

The mathematical model
The model describing the above process is based on the following formula:

rate of change of the amount of salt  = (rate of salt arrived) – (rate of salt departed)

Suppose $V(t)$ is the volume of liquid in the tank at time $t$. The volume of liquid inside the tank increases because the inflow (5 gal/min) is larger than the outflow (3 gal/min). Thus, we have

$V(t)=$ 100 gal + (5 gal/min $-$ 3 gal/min) $\times t$ min,
$V(t)=(100 + 2t)$ gal,

since we know that initially the tank is half full (100 gal).

Suppose $y(t)$ is the amount of salt inside the tank at time $t$. The rate at which salt enters is

rate in (pound/min) $=$ (0.5 pound/gal) $\times$ (5 gal/min) $=$ 2.5 pound/min,

while the rate out is
rate out (pound/min) $= \frac{y(t)}{V(t)}$ (pound/gal) $\times$ outflow (gal/min)
$= \frac{y}{100+2t} \times 3$ (pound/min)
$= \frac{3y}{100+2t}$ (pound/min).

Therefore, the mathematical model is given by,

$\frac{dy}{dt}=2.5 - \frac{3y}{100+2t}$.          (1)

The solution
First, we determine the time $t$ at which the tank will be full. Using

$V(t)=100 + 2t$,

we solve for $t$ when the volume $V(t)$ is 200 gallons, obtaining $t=50$.

To solve the differential equation (1), we need the initial condition

$y(0)=0$.

We are going to use Maple to solve the problem:

> model := diff(y(t),t) = 2.5 - (3*y(t))/(100+2*t);
> init := y(0)=0;
> dsolve({model,init});

We obtain the solution that is given by

$y(t) = 50+t-\frac{2500 \sqrt{50}}{(50+t)^{3/2}}$

When $t=50$, the amount of salt inside the tank will be

$y(50) = 82.32233047$ pounds.
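The closed-form answer can be double-checked with a short Python sketch (the function name y here is just for illustration):

```python
import math

def y(t):
    # closed-form solution obtained from Maple above
    return 50 + t - 2500 * math.sqrt(50) / (50 + t) ** 1.5

print(y(0))    # ~0, recovering the initial condition y(0) = 0
print(y(50))   # ~82.3223, the amount of salt when the tank is full
```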

Categories: Justincase, Teachings