1. What an amazing tour, at a wonderful place.
I had an amazing experience, meeting a lot of laureates and a lot of young and energetic researchers. We discussed a lot of things with one thing in common, which is mathematics. The meeting place could not have been better: Heidelberg is just perfect.
2. Mathematics meets its own little brother, computer science.
In this event, we did not only talk about math; we also talked about computer science, which is so much younger than mathematics.
3. The frontier between mathematics and computer science is not a fixed boundary; it is actually an intersection.
Sir Michael Atiyah said this during the workshop. Read many Atiyah quotes here.
4. Fermat's Last Theorem
We saw the boy who was fascinated by Fermat's Last Theorem. He solved it in 1994 and later won the Abel Prize.
5. Euler's proof of the infinitude of primes
I like this proof of the infinitude of primes, which he gave in the eighteenth century.
6. Deep learning perhaps shouldn't be named deep learning
Noel Sharkey mentioned that machine learning is nothing like learning; instead, it is statistical parameter optimization.
7. Sometimes when you want to make things tidy, you find new knowledge. Vladimir Voevodsky had an idea to help verify mathematical proofs by computers while preserving the intimate connection between maths and human intuition. He called this UniMath.
8. Something is wrong in computer science education. Leslie Lamport criticized computer science education, as some computer scientists did not understand the abstraction he made during the talk. He also said, "Not every programming language is as bad as C, but all prevent you from getting to a superior level of abstraction." He proposed PlusCal.
9. I think Barbara Liskov was the only woman participating as a laureate in this forum. I wonder how many women have won the Turing Award, the Abel Prize or the Fields Medal.
10. The last talk I remember was from Heisuke Hironaka, Fields Medalist in 1970. He gave many pieces of advice, among them: "Time is expensive, use it well", "You learn from your mates more than from your teacher", and "I want to write a book dedicated to my wife", which was sweet.
I think that is all I can write. I really enjoyed my experience in this forum. And now let's end our journey at Heidelberg castle.
Continuing our analysis in the last post, we are going to need operations on polynomials (truncated power series $p(x) = p_0 + p_1 x + p_2 x^2 + \cdots$). We are going to define operations such as:
1. addition of a polynomial and a constant,
2. multiplication of a polynomial by a constant,
3. addition of two polynomials,
4. multiplication of two polynomials,
5. division of a constant by a polynomial,
6. division of two polynomials,
7. logarithm of a polynomial,
8. exponential of a polynomial, and
9. power of a polynomial.
Operations 1 and 2 are trivial. Suppose $p(x) = \sum_{k \ge 0} p_k x^k$ is a polynomial and $c$ is a real constant. The addition of $p$ and $c$ is a polynomial $r = p + c$ where $r_0 = p_0 + c$ and $r_k = p_k$ for all $k \ge 1$. The multiplication of $p$ and $c$ is a polynomial $r = c\,p$ where $r_k = c\,p_k$ for all $k \ge 0$.
Operations 3 and 4 are not difficult. Suppose $p$ and $q$ are polynomials. The addition of $p$ and $q$ is a polynomial $r = p + q$ where $r_k = p_k + q_k$ for all $k$. The multiplication of $p$ and $q$ is a polynomial $r = pq$ where $r_k = \sum_{j=0}^{k} p_j q_{k-j}$.
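The coefficient formulas above translate directly into Python. This is a minimal illustrative sketch, not the actual module: I am assuming names like `addpoli` and `multiplypoli` (the latter appears in the example code later in this post), and a polynomial is represented as a plain list of coefficients.

```python
def addpoli(p, q):
    # (p + q)_k = p_k + q_k, padding the shorter list with zeros
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [p[k] + q[k] for k in range(n)]

def multiplypoli(p, q):
    # (p * q)_k = sum_{j=0}^{k} p_j * q_{k-j}  (a Cauchy product)
    n = len(p) + len(q) - 1
    r = [0.0] * n
    for j in range(len(p)):
        for k in range(len(q)):
            r[j + k] += p[j] * q[k]
    return r
```

For example, `multiplypoli([1.0, 1.0], [1.0, 1.0])` returns the coefficients of $(1+x)^2 = 1 + 2x + x^2$.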
Operations 5 and 6 are a little bit tricky. Suppose $p$ is a polynomial and $c$ is a constant. We want to compute $r = c/p$, where $p_0 \neq 0$. Notice that $r\,p = c$. Therefore, we are going to take advantage of this equality to find the coefficients of $r$. Thus we shall have $r_0 = c/p_0$ and $r_k = -\frac{1}{p_0}\sum_{j=0}^{k-1} r_j p_{k-j}$ for $k \ge 1$. For operation 6, let's suppose $r = q/p$. As before, notice that $r\,p = q$, thus we can compute the coefficients of $r$ using this equality. Thus, $r_0 = q_0/p_0$ and $r_k = \frac{1}{p_0}\left(q_k - \sum_{j=0}^{k-1} r_j p_{k-j}\right)$ for $k \ge 1$.
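Division can be sketched the same way; `dividepoli` is a hypothetical name used for this note, and the recurrence is the one just derived (assuming $p_0 \neq 0$).

```python
def dividepoli(q, p, nterms):
    # r = q / p as a power series, from r * p = q:
    #   r_0 = q_0 / p_0
    #   r_k = (q_k - sum_{j=0}^{k-1} r_j * p_{k-j}) / p_0
    q = q + [0.0] * (nterms - len(q))
    p = p + [0.0] * (nterms - len(p))
    r = [q[0] / p[0]]
    for k in range(1, nterms):
        s = sum(r[j] * p[k - j] for j in range(k))
        r.append((q[k] - s) / p[0])
    return r
```

A quick check: $1/(1-x) = 1 + x + x^2 + \cdots$, so `dividepoli([1.0], [1.0, -1.0], 5)` should give five ones.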
Now, let's consider the next operation, which is the logarithm of a polynomial. Suppose $p$ is a polynomial (with $p_0 > 0$) and we are going to find $r = \ln p$. Let's differentiate both sides with respect to $x$; then we have $r' = p'/p$, i.e. $r'\,p = p'$. Thus, by exploiting the division operation above, we have $r_0 = \ln p_0$ and $r_k = \frac{1}{k\,p_0}\left(k\,p_k - \sum_{j=1}^{k-1} j\,r_j\,p_{k-j}\right)$ for $k \ge 1$.
Now, let's consider the exponential operator, which is the exponentiation of a polynomial. Suppose $p$ is a polynomial and we are going to find $r = e^{p}$. Let's take the logarithm of both sides and then differentiate; we then have $r'/r = p'$, or $r' = r\,p'$. Therefore, by exploiting the multiplication operation, we shall have $r_0 = e^{p_0}$ and $r_k = \frac{1}{k}\sum_{j=0}^{k-1}(k-j)\,p_{k-j}\,r_j$ for $k \ge 1$.
For the last operation of this post, we shall consider the power of a polynomial. Suppose $p$ is a polynomial; we are going to find $r = p^{\alpha}$ for any real number $\alpha$. First, we take the logarithm of both sides and then differentiate the result with respect to $x$: $r'/r = \alpha\,p'/p$, or $r'\,p = \alpha\,p'\,r$. By exploiting multiplication and division, we get $r_0 = p_0^{\alpha}$ and $r_k = \frac{1}{k\,p_0}\left(\alpha\sum_{j=1}^{k} j\,p_j\,r_{k-j} - \sum_{j=1}^{k-1} j\,r_j\,p_{k-j}\right)$ for $k \ge 1$.
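The logarithm and exponential recurrences translate to code just as directly (the power recurrence is analogous). Again a hedged sketch: `logpoli` and `exppoli` are names invented for this note, and only the recurrences derived above are used.

```python
import math

def logpoli(p, nterms):
    # r = log(p), from r' * p = p':
    #   r_0 = log(p_0)
    #   r_k = (k*p_k - sum_{j=1}^{k-1} j*r_j*p_{k-j}) / (k*p_0)
    p = p + [0.0] * (nterms - len(p))
    r = [math.log(p[0])]
    for k in range(1, nterms):
        s = sum(j * r[j] * p[k - j] for j in range(1, k))
        r.append((k * p[k] - s) / (k * p[0]))
    return r

def exppoli(p, nterms):
    # r = exp(p), from r' = r * p':
    #   r_0 = exp(p_0)
    #   r_k = (1/k) * sum_{j=0}^{k-1} (k-j)*p_{k-j}*r_j
    p = p + [0.0] * (nterms - len(p))
    r = [math.exp(p[0])]
    for k in range(1, nterms):
        s = sum((k - j) * p[k - j] * r[j] for j in range(k))
        r.append(s / k)
    return r
```

As sanity checks, `exppoli([0.0, 1.0], 4)` should reproduce $e^x = 1 + x + x^2/2 + x^3/6 + \cdots$ and `logpoli([1.0, 1.0], 4)` should reproduce $\ln(1+x) = x - x^2/2 + x^3/3 - \cdots$.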
To facilitate these operations, we build a Python module that will handle them. We will also write another function to integrate an ODE. This integration code will work in general, without any dependence on the particular system or on its order.
```python
import numpy as np

def integrate(F, y_init, t_init=0.0, tStop=1.0, h=0.05):
    dim = len(y_init)
    y_sol = []
    for i in range(0, dim):
        y_sol.append([])
        y_sol[i].append(y_init[i])
    t_sol = []
    t_sol.append(t_init)
    t = t_init
    while t < tStop:
        # coefficient lists for the current step, seeded with the last value
        y = []
        for i in range(0, dim):
            y.append([])
            y[i].append(y_sol[i][-1])
        # build 20 Taylor coefficients via the polynomial operations
        for k in range(0, 20):
            dy = F(y)
            for i in range(0, dim):
                y[i].append(dy[i][k] / float(k + 1))
        # evaluate the truncated Taylor series at step size h
        n = len(y[0])
        for i in range(0, dim):
            temp = 0.0
            for j in range(0, n):
                temp = temp + y[i][j] * h**j
            y_sol[i].append(temp)
        t_sol.append(t + h)
        t = t + h
    return np.array(t_sol), np.transpose(np.array(y_sol))
```
All the operations above and the last code will be written in a Python file, so that whenever we need this Taylor method we can just import it. Let's take an example and integrate one ODE. We will create another script that specifies the ODE, imports all the operations above, and then numerically integrates it.
```python
from mytaylor import *
import matplotlib.pyplot as plt
import numpy as np
import math as m

# define your ODE here
def fun(y):
    z = subtractpoli(y[0], multiplypoli(y[0], y[0]))
    return [z]

# define your computation parameters here
y_init = [0.5]
t_init = 0
h = 0.1
tStop = 5.0

# integrating ODE
T, Y = integrate(fun, y_init, t_init, tStop, h)

# real solution computation
real_sol = []
for t in T:
    temp = m.exp(t) / (m.exp(t) + 1.0)
    real_sol.append(temp)

# plotting the error versus time
real_sol = np.array(real_sol)
error = Y[:, 0] - real_sol
plt.plot(T[:], error[:])
plt.show()
```
In the above code, we try to numerically integrate $\dot{y} = y - y^2$ with initial condition $y(0) = 0.5$. This is the same system we tried to integrate in the previous post. I hope you can see the difference between this code and the code shown in the previous post: we really don't need to think about the Taylor method at all; we just need to write our system if we want to compute its numerical solution. Notice that we have written all the operations and the integrating code in a file named mytaylor.py, which is imported in the first line of the above code. In the function `fun`, we can write any ODE we want to integrate; however, the way we write the ODE is slightly different. Because of the polynomial operations defined in this post, a sum cannot be written with the usual `+` but with `addpoly()`, and similarly for the other operations. Once we define our ODE, we can just integrate it with the `integrate` call.
Another thing we would like to do is to extend the polynomial operations; we have not included trigonometric operations on polynomials, et cetera. We would also like to test this Taylor method on a stiff system. Interested readers can find the two files (mytaylor.py and test_case.py) below if you would like to test it yourselves.
Let's assume that we only consider an autonomous ODE, $\dot{y} = f(y)$, first. To find the solution at the next step, $y(t+h)$, we apply the Taylor expansion, as follows:

$y(t+h) = y(t) + h\,\dot{y}(t) + \frac{h^2}{2!}\,\ddot{y}(t) + \frac{h^3}{3!}\,\dddot{y}(t) + \cdots$.
Therefore, depending on how accurate we want our solution to be, all we need is to compute the derivatives of $y$. Unfortunately, computing all the derivatives is not an easy task for a computer: we would have to derive them all by hand and then input them to the computer.
In this note, I will introduce automatic differentiation as a tool to compute the derivatives of $y$, so that we can compute $y(t+h)$ as accurately as we like. Let us consider the series expansion of $y(t+h)$ as above, but with a constant $a_k$ replacing the scaled $k$-th derivative of $y$, that is $a_k = y^{(k)}(t)/k!$:

$y(t+h) = a_0 + a_1 h + a_2 h^2 + a_3 h^3 + \cdots$.
Let us differentiate the above series with respect to $h$:

$\frac{d}{dh}\,y(t+h) = a_1 + 2a_2 h + 3a_3 h^2 + \cdots$.
Notice that the derivative on the left hand side of the above equation is actually equal to $f(y(t+h))$, since we know that $y$ satisfies the ordinary differential equation. Then we have

$a_1 + 2a_2 h + 3a_3 h^2 + \cdots = f(a_0 + a_1 h + a_2 h^2 + \cdots)$.
Since $y(t+h)$ is a series in $h$, then $f(y(t+h))$ is also a series in $h$, say $b_0 + b_1 h + b_2 h^2 + \cdots$. Both sides of the above equation are series in $h$, thus we can equate the coefficients of each power of $h$ to get $a_1 = b_0$, $2a_2 = b_1$, $3a_3 = b_2$, and so on. The most difficult task in this problem is to find the series of $f(y(t+h))$, since the function $f$ can be anything.
Let's take an example. Suppose we want to integrate the following differential equation,

$\dot{y} = y\,(1 - y)$, with $y(0) = 0.5$.
Let us choose $t = 0$ and compute $y(h)$. As before, we assume that $y(h)$ is a series in $h$ as follows,

$y(h) = a_0 + a_1 h + a_2 h^2 + a_3 h^3 + \cdots$, with $a_0 = y(0) = 0.5$.
Differentiating both sides of the above equation with respect to $h$, we get

$\frac{dy}{dh} = a_1 + 2a_2 h + 3a_3 h^2 + \cdots$.
The right hand side of the differential equation, $y(1-y)$, is equal to

$(a_0 + a_1 h + a_2 h^2 + \cdots)\,(1 - a_0 - a_1 h - a_2 h^2 - \cdots)$.
Therefore, by equating the coefficients of each power of $h$ on both sides, we can get $a_1$, $a_2$, $a_3$, and so on.
The real question is how to find $a_2$, $a_3$, and so on. If the function $f$ only involves operations such as addition, subtraction and multiplication, it is easy to find the $a_k$'s. But if the function has operations such as division, exponential, logarithm and trigonometric functions, then we have to think a bit harder about how to find the $a_k$'s. Let us go back to the previous example. Since the operations in this example are only multiplication and subtraction, we can easily compute $a_2$, $a_3$, and so on. Using algebraic manipulation, we get the following:
$a_1 = a_0 - a_0^2$, thus $a_1 = a_0(1 - a_0)$,

$2a_2 = a_1 - (a_0 a_1 + a_1 a_0)$, thus $a_2 = \frac{a_1(1 - 2a_0)}{2}$,

$3a_3 = a_2 - (a_0 a_2 + a_1^2 + a_2 a_0)$, thus $a_3 = \frac{a_2(1 - 2a_0) - a_1^2}{3}$,

and so on.
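The relations above define a recurrence that fits in a few lines of Python. This is just an illustrative sketch for this particular example ($\dot{y} = y(1-y)$); `taylor_coeffs` is a name made up for this note.

```python
def taylor_coeffs(a0, order):
    # Taylor coefficients a_k of y(t + h) for y' = y(1 - y), using
    #   k * a_k = a_{k-1} - sum_{j=0}^{k-1} a_j * a_{k-1-j}
    a = [a0]
    for k in range(1, order):
        conv = sum(a[j] * a[k - 1 - j] for j in range(k))
        a.append((a[k - 1] - conv) / k)
    return a

# With a0 = 0.5 the hand computation gives a1 = 0.25, a2 = 0, a3 = -1/48,
# which the recurrence reproduces.
```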
The more $a_k$'s we compute, the more accurate our solution is. Using the Taylor method this way, we can easily apply it in a program. It should be clear why this method is called automatic differentiation: we don't need to manually derive all the derivatives.
Next, I tried to numerically integrate the previous example. Below is a graph of the error of the Taylor method with automatic differentiation versus time. As we can see from the figure, the error of our numerical integrator is so small that the solution is almost exact. The number of $a_k$'s (the order) I used in this graph is 10.
I use Python to numerically integrate this example. You can find the code that produces the above figure at the end of this note. One of the drawbacks of this method is that when you change your ODE, you have to change your code as well. In other numerical integrators, such as Runge-Kutta, we don't need to change the whole code; we just need to change the ODE and we get the solution. Together with my student, I am trying to tackle this problem and I hope it will be posted here.
```python
import math as m
import numpy as np
import pylab as plt

# preliminary constants
order = 10
h = 0.1
t = 0.0
t_end = 5
y0 = 0.5

# defining solution
sol = []
sol.append(y0)
time = []
time.append(t)

# start of the program
while t < t_end:
    a = []
    a.append(sol[-1])
    for k in range(1, order):
        if k == 1:
            a.append(a[k-1] * (1 - a[k-1]))
        else:
            sum = 0.0
            for j in range(0, k-1):
                sum = sum + a[j] * (-a[k-1-j])
            a.append((sum + a[k-1] * (1.0 - a[0])) / k)
    sum = 0.0
    for i in range(0, order):
        sum = sum + a[i] * h**i
    sol.append(sum)
    t = t + h
    time.append(t)
# end of the program

# real solution computation
real_sol = []
for t in time:
    temp = m.exp(t) / (m.exp(t) + 1.0)
    real_sol.append(temp)

# plotting the error versus time
sol = np.array(sol)
real_sol = np.array(real_sol)
time = np.array(time)
error = sol - real_sol
plt.plot(time[:], error[:])
plt.show()
```
The procedure
Please have a look at the following figure.
The above figure shows the first step of our method. Take a sheet of paper and let's call the left side, side A, and the right side, side B. First, you fold the paper on side A as in the second image of the figure above. It doesn't have to divide the paper equally; you can fold it anywhere. Once you do that, you undo the fold and you will have a fold mark, which we call $x_1$. Next, you fold again, but now you fold the right side (side B) onto the fold mark $x_1$, as in the second row of the above figure. Now you will have two fold marks; the latter is called $y_1$. Now take a look at the following figure.
Now that we have two fold marks ($x_1$ and $y_1$), we fold side A onto the fold mark $y_1$, as in the first row of the above figure. Undo the fold, and you will have a third fold mark (we call it $x_2$). Next, we fold side B until it touches the fold mark $x_2$, and we will have four fold marks: $x_1$, $y_1$, $x_2$ and $y_2$. Continue these steps over and over again; at the $n$-th step, $x_n$ will be close to one third of the paper and $y_n$ will be very close to two thirds of the paper.
Simulation
Let's take an example. Suppose in the beginning we fold side A such that we divide the paper equally. Assume the length of the paper is 1. This means $x_1 = 0.5$. If we simulate this case, we will get the following table.
n | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
$x_n$ | 0.5 | 0.375 | 0.34375 | 0.3359375 | 0.333984375 | 0.333496094 | 0.333374023 | 0.333343506
$y_n$ | 0.75 | 0.6875 | 0.671875 | 0.66796875 | 0.666992188 | 0.666748047 | 0.666687012 | 0.666671753
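The table above can be reproduced with a few lines of Python; `fold_marks` is just an illustrative name, and the paper length is normalized to 1.

```python
def fold_marks(x1, steps):
    # y_n = (1 + x_n) / 2  : fold side B onto the mark x_n
    # x_{n+1} = y_n / 2    : fold side A onto the mark y_n
    xs, ys = [x1], []
    x = x1
    for _ in range(steps):
        y = (1 + x) / 2.0
        ys.append(y)
        x = y / 2.0
        xs.append(x)
    return xs, ys
```

Starting from $x_1 = 0.5$, this reproduces the values in the table: $x_2 = 0.375$, $y_1 = 0.75$, $x_5 = 0.333984375$, and so on.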
As we can see, it takes approximately 4 to 8 steps until $x_n$ is close to one third of the paper and $y_n$ is close enough to two thirds of the paper. However, what happens if we choose a different initial condition? We simulated this using a number of initial conditions. Suppose we use an A4 paper, which has a length of 29.7 cm. The results are shown in the following figures. The x-axis in the figure is the initial condition and the y-axis is the number of iterations needed until the mark is close enough to one third of the paper. Since we use A4 paper, the x-axis ranges from 0 to 29.7 cm. The tolerance used in the figure is 1 mm. In conclusion, we need at most four steps until the last fold mark is only 0.1 cm away from one third of the A4 paper.
If we relax the tolerance to 0.5 cm, the number of iterations improves slightly. It can be seen that if our first trial is close enough to one third of the A4 paper, we only need one step.
Mathematical proof
Finally, it comes time to prove that this method works, mathematically, not just numerically. First of all, we need to mathematically model the above procedure. Let's assume that the length of the paper is 1, and let's use the same notation as in the procedure above. Given $x_1$, we get $y_1 = \frac{1 + x_1}{2}$. Recursively, we can compute $y_n$ and $x_{n+1}$ as follows:

$y_n = \frac{1 + x_n}{2}$ and $x_{n+1} = \frac{y_n}{2}$, for $n = 1, 2, \ldots$.
Even though there are two sequences, $x_n$ and $y_n$, we only need to consider $x_n$. We can simplify the recursion as follows:

$x_1$ is given and $x_{n+1} = \frac{1 + x_n}{4}$ for $n \ge 1$.
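For readers who want a hint at where these results come from (the full proofs are still left as exercises), subtracting $1/3$ from both sides of the recursion shows that the distance to $1/3$ contracts by a factor of four at every step:

$$x_{n+1} - \frac{1}{3} = \frac{1 + x_n}{4} - \frac{1}{3} = \frac{1}{4}\left(x_n - \frac{1}{3}\right), \qquad \text{so} \qquad x_n - \frac{1}{3} = \frac{x_1 - \frac{1}{3}}{4^{\,n-1}} \longrightarrow 0.$$

In particular, the sign of $x_n - \frac{1}{3}$ never changes and its magnitude decreases monotonically.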
Lemma 1
When $x_1 < 1/3$ (resp. $x_1 > 1/3$), the sequence $x_n$ is bounded above (resp. bounded below) by $1/3$.
Lemma 2
When $x_1 < 1/3$ (resp. $x_1 > 1/3$), the sequence $x_n$ is monotonically increasing (resp. decreasing).
Theorem 3
Given any $0 \le x_1 \le 1$, the sequence $x_n$ converges to $1/3$.
Corollary 4
Given any $0 \le x_1 \le 1$, the sequence $y_n$ converges to $2/3$.
The proofs of the above results are given to my students as exercises.
This is the end of this note, and the following is the code that produces the above figures.
```python
import math
import numpy as np
import pylab as plt

def f(x):
    return (29.7 + x) / 2.0

def g(y):
    return y / 2.0

def number_of_iter(x, tol, max_iter):
    iter = 0
    er = abs(x - 29.7/3)
    if er <= tol:
        return 1
    while er > tol and iter < max_iter:
        iter = iter + 1
        y = f(x)
        x = g(y)
        er = min(abs(29.7/3 - x), abs(2*29.7/3 - y))
    return iter

# main code
x = np.arange(0, 29.7, 0.1)
y = []
tol = 0.5
max_iter = 100
for t in x:
    result = number_of_iter(t, tol, max_iter)
    y.append(result)
y = np.array(y)
ax = plt.gca()
ax.set_ylim([0, 5])
plt.plot(x, y)
plt.show()
```
That is just the background; my main point here is about bash scripting. After a few years of using Ubuntu, I had not created any bash script. Today, I finally learned to create one. I wrote a script to automate a boring routine. When I write a paper, I need some illustrations. I mostly use xfig to create mathematical images, and to be able to use a figure I need to convert it to an eps file. The produced eps file is then converted to a pdf file, as that is perfectly compatible with my pdflatex command. But before that, I need to crop the resulting pdf file in order to remove the white space around the image. Suppose the name of my xfig file is spam.fig. I then run a series of commands:
```shell
figtex2eps spam.fig
ps2pdf spam.eps
pdfcrop spam.pdf
```
I wanted to write a script that does all of the above automatically, so I created the following script.
```shell
#!/bin/bash
# Converts a .fig file (xfig file) to a .eps file by using figtex2eps,
# then converts it to a .pdf file by using ps2pdf,
# and finally converts it to a cropped pdf file by using pdfcrop.
#
# ivanky saputra https://ivanky.wordpress.com
#
# credit to:
#   figtex2eps: /home/abel/c/berland/etc/cvsrepository/figtex2eps/prog/figtex2eps
#   ps2pdf in ubuntu
#   pdfcrop in ubuntu

echo "We are going to change the following $1.fig to a cropped pdf file"

function quit {
    exit
}

if [ "$1" == "" ]; then
    echo "no files given"
    quit
else
    echo "Processing $1.fig............"
    figtex2eps "$1.fig"
    ps2pdf "$1.eps"
    pdfcrop "$1.pdf"
    echo "Done"
fi
```
As someone once said, it is better to share your code and data openly, because we humans are idiots and will make mistakes; so please give me any suggestions to improve mine.
Okay, enough of the background, I am sure you don't want to hear any more of that. Let's get back to the main point of this note. Long story short, the straight line became one of the topics in my course, and I was interested in how the formula for the distance between a point and a line is derived. I only consider the two dimensional case, and by distance I mean the shortest distance between the point and the line, which is also the perpendicular distance. If you don't remember the formula, let me remind you.
Consider a straight line given by the following equation:

$ax + by + c = 0$.

Suppose we have a point $(x_0, y_0)$; then the distance between the point and the straight line is

$d = \dfrac{|a x_0 + b y_0 + c|}{\sqrt{a^2 + b^2}}$.
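Before deriving it, the formula is easy to state in code; `point_line_distance` is a name chosen for this note.

```python
import math

def point_line_distance(a, b, c, x0, y0):
    # distance from the point (x0, y0) to the line a*x + b*y + c = 0
    return abs(a * x0 + b * y0 + c) / math.hypot(a, b)
```

For example, the distance from the origin to the line $3x + 4y - 10 = 0$ is $10/5 = 2$, while the point $(2, 1)$ lies on that line, giving distance zero.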
There are other proofs of the above formula, but I really, really like the proof I am going to show you below. I will divide this note into two sections. The first section will talk about straight lines and how to obtain the equation of a straight line. The second section will derive the formula.
1. A straight line equation
In high school, you must have learned what conditions we need in order to obtain the equation of a straight line. If we have two points $(x_1, y_1)$ and $(x_2, y_2)$, we can compute the line equation using the following:

$\dfrac{y - y_1}{y_2 - y_1} = \dfrac{x - x_1}{x_2 - x_1}$.
If we have a gradient $m$ and a point $(x_1, y_1)$, we can also compute the line equation using the following:

$y - y_1 = m\,(x - x_1)$.
Of course, in a problem, we won't get this information so easily. We have to work a bit more to get either two points, or one point and a gradient, and then we are able to obtain the equation of the straight line.
Before I taught this course, I could only conclude that if you want to know the equation of a straight line you need to know either:
(i) two points, or
(ii) a point and the gradient of the line.
But actually there is a third condition, and if we know it, we can also compute the equation of a straight line. In the third condition, the line is assumed to be a tangent line of a certain circle centred at the origin. Thus, the characteristics we need to know to form the line are the radius $r$ of the circle and the angle $\theta$ between the radius and the positive $x$-axis. See the figure below.
In the figure, we can see that a line can be determined uniquely if the radius $r$ and the angle $\theta$ are known. In the first section of this note, we shall derive the line equation given these two quantities.
Suppose we have such a straight line, where $r$ and $\theta$ are given. See the figure below. Consider a point $(x, y)$ on the line. We are going to find the relationship between $x$ and $y$.
The radius $r$ can be computed by adding two segments. Consider the first triangle in the figure, which is a right triangle; it gives the first segment as

$x \cos\theta$.

Consider the other triangle, which is also a right triangle; as it can be computed that its angle is also $\theta$, it gives the second segment as

$y \sin\theta$.

Therefore, we have $x \cos\theta + y \sin\theta = r$, which is the equation of the straight line given that $r$ and $\theta$ are known.
Before we discuss how to derive the formula for the distance between a point and a line, I would like to discuss how to find $r$ and $\theta$ if we know the general equation of a straight line, $ax + by + c = 0$. Move $c$ to the right hand side and multiply both sides by a non-zero constant $k$, giving $kax + kby = -kc$. We shall choose $k$ such that

$\cos\theta = ka$ and $\sin\theta = kb$,

so that our equation of the line becomes:

$x \cos\theta + y \sin\theta = -kc$.

Here, squaring and adding the two conditions on $k$ gives $k^2(a^2 + b^2) = 1$, so $k = \pm\frac{1}{\sqrt{a^2 + b^2}}$, and we choose the sign of $k$ so that the right hand side, $r = -kc$, is non-negative. The angle $\theta$ can also be computed once we determine $k$.
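This conversion from $(a, b, c)$ to $(r, \theta)$ can be sketched in a few lines; `normal_form` is an illustrative name, and the sign of $k$ is chosen exactly as described above.

```python
import math

def normal_form(a, b, c):
    # Rewrite a*x + b*y + c = 0 as x*cos(theta) + y*sin(theta) = r, r >= 0:
    # k = ±1/sqrt(a^2 + b^2), with the sign chosen so that r = -k*c >= 0.
    k = 1.0 / math.hypot(a, b)
    if -k * c < 0:
        k = -k
    return -k * c, math.atan2(k * b, k * a)
```

For the line $3x + 4y - 10 = 0$, this gives $r = 2$ with $\cos\theta = 3/5$ and $\sin\theta = 4/5$; the point $(2, 1)$ on the line indeed satisfies $2\cdot\frac35 + 1\cdot\frac45 = 2$.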
2. Distance between a point and a line
To compute the perpendicular distance between a point and a line, we shall use the above result. Consider a straight line $x\cos\theta + y\sin\theta = r$ and a point $(x_0, y_0)$ in the following figure.

We want to compute the distance $d$. In order to do that, we make another line that is parallel to the given line and passes through the point $(x_0, y_0)$. This line, because it is parallel to the given line, has the following equation:

$x \cos\theta + y \sin\theta = r'$, where $r' = x_0 \cos\theta + y_0 \sin\theta$.

Thus the distance between the point and the line can be easily computed as the absolute difference between $r'$ and $r$:

$d = |r' - r| = |x_0 \cos\theta + y_0 \sin\theta - r|$.

As we know from the first section, we can substitute $\cos\theta = ka$, $\sin\theta = kb$ and $r = -kc$ into the latter expression:

$d = |k|\,|a x_0 + b y_0 + c|$,

or

$d = \dfrac{|a x_0 + b y_0 + c|}{\sqrt{a^2 + b^2}}$,

which is the same as the formula we mentioned at the beginning of this note.
My first Coursera course was nothing but really, really good. The class could definitely not have been better. The instructors put so much effort into preparing the class, and at the same time they were really enjoyable and fun. I really learned a lot from this class and I recommend everyone to take it. After nine weeks, I finished the course and can proudly say that I got 96.9% with a distinction mark.
But what I want to tell you about here is the content of the course. Knowing nothing of Python before, I now know that the name 'Python' originally comes from Monty Python's Flying Circus, not the reptile. I also know how useful this programming language is. Python can be used to build an interactive game, to conduct scientific computations, to analyse data and much more. There are already many communities that use Python and they are (still) growing. One more good thing is the fact that it is free. Python works on Windows, Linux or Mac, and the installation on each is not that difficult. I use both Windows and Ubuntu. For Windows, I mainly use Python(x,y), and for Ubuntu I don't have to do anything, as it is already included. However, in this course, I did not have to use any of these, as one of the instructors built a nice Python web application called CodeSkulptor. It allows us to write Python scripts in the web browser, so you don't actually have to install Python on your computer.
From the course, I learned to make interactive games. Even though the games I made were not perfect, I think they are playable and fun enough. I learned to make games like rock-paper-scissors-lizard-Spock, a number guessing game, a stopwatch game, Pong, Memory, and Asteroids. The following links are the Python scripts for all of the games I made during the course. Each link will bring you to the CodeSkulptor website, where I have already written a Python script. You just need to press the play button at the top left of your screen.
1. RPSLS This game is called rock-paper-scissors-lizard-Spock. It is an extension of rock-paper-scissors and first appeared in The Big Bang Theory series.
2. Number guessing game This game, as you might guess, is a number guessing game. The computer hides a number between a small number and a big number, and you have a finite number of tries to guess the number the computer is hiding.
3. Stopwatch This game will train your reflexes. It shows a stopwatch and you have to press the pause button at every whole second.
4. Pong Oh, please tell me you know this game.
5. Memory There are 16 cards facing down, and they come in pairs. Figure out the pairs with the smallest number of tries.
6. Blackjack The blackjack game, yes, it is the blackjack game that you always know.
7. Asteroids Asteroids is an arcade space shooter game; I think I played it on an Atari device when I was a kid.
Other than those games, I made two other Python scripts just for fun. The first is a fan and the second is an illustration of Hilbert's curve. Hope you like them.
Lemma 1
Suppose we have a cubic equation $\lambda^3 + a\lambda^2 + b\lambda + c = 0$. The roots of this equation are a pair of purely imaginary roots and a single real root if and only if the following conditions are satisfied: $b > 0$ and $c = ab$.
The proof is very easy and I am not going to give it here (when the conditions hold, the cubic factors as $(\lambda + a)(\lambda^2 + b)$). So the conditions $b > 0$ and $c = ab$ are necessary and sufficient for the cubic equation to have a single real root and a pair of purely imaginary roots.
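Lemma 1 is easy to check numerically on a concrete cubic; this sketch uses NumPy's root finder on a cubic built to satisfy the two conditions.

```python
import numpy as np

# With b > 0 and c = a*b, the cubic lambda^3 + a*lambda^2 + b*lambda + c
# factors as (lambda + a)(lambda^2 + b), so the roots should be
# the single real root -a and the purely imaginary pair ±i*sqrt(b).
a, b = 2.0, 9.0
c = a * b  # c = 18 satisfies c = a*b
roots = np.roots([1.0, a, b, c])  # expected: -2, 3i, -3i
```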
Okay, now let us move on to why I bothered with cubic equations in the first place.
At the moment I am studying a mathematical model of HIV-1 infection with cell-mediated immunity. I have just started, so I still don't know what everything in this model represents in real life. But here is the model.
Yes, you are right, it is a five dimensional model. The state variables represent the density of uninfected cells, the infected cells, the density of virus, the CTL response cells and the CTL effector cells, respectively. Why it is five dimensional, what the parameters represent, and other modelling questions, I still cannot answer. At the moment I am interested in finding an equilibrium of this system that can undergo a Hopf bifurcation. An equilibrium can have a Hopf bifurcation if the Jacobian of the system evaluated at this equilibrium has a pair of purely imaginary eigenvalues. This is actually the first time I have handled a five dimensional system, so it will be interesting.
Now we know why I need Lemma 1.
In my case, the situation is greatly simplified, as all parameters are numbers except two. From the paper that I read, I found that there are three equilibria in the system. At the moment I am interested in one particular equilibrium, because I suspect that it can have a Hopf bifurcation and because of a condition satisfied by its coordinates. Let us have a look at the Jacobian matrix evaluated at this equilibrium.
As I said before, all the parameters are numbers except two, and the coordinates of the equilibrium are expressions depending on them. As a result, the matrix above also depends on these two parameters.
We are interested in the parameter values for which the matrix above has a pair of purely imaginary eigenvalues. Even though the above matrix is not a block matrix, when we compute the eigenvalues we can really separate it. We can obtain the eigenvalues of the above matrix by computing the eigenvalues of the following two matrices:
and
.
The eigenvalues of the first matrix are easy to find, and they are real. So the problem really lies in computing the eigenvalues of the second matrix. Consider that matrix in a simpler notation.
.
The characteristic polynomial of the above matrix is a cubic of the form $\lambda^3 + a\lambda^2 + b\lambda + c$. Therefore, to find the parameter value at which we have a Hopf bifurcation, by Lemma 1 we only need to solve the following conditions:
1. make sure that $b$ is positive, and
2. solve $c = ab$.
I created this note (which is part of our paper) so that I won't forget what I have done. We don't usually show this kind of computation, but for me this little computation is useful. Even though I use software to compute and investigate the existence of the Hopf bifurcation, it does not show the exact value of the parameter. Using an algebraic approach, I found the exact value.
Reference of the model
Yu, Huang, and Jiang, Dynamics of an HIV-1 infection model with cell mediated immunity, Communications in Nonlinear Science and Numerical Simulation, 2014, vol. 19.
Additional note (two dimensional case): suppose the Jacobian matrix evaluated at the equilibrium is given below,

$J = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$.

Then the equilibrium has a pair of purely imaginary eigenvalues (the eigenvalue condition for a Hopf bifurcation) if the following conditions are satisfied:
1. $\operatorname{tr} J = a + d = 0$, and
2. $\det J = ad - bc > 0$.
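The two dimensional conditions can also be checked numerically; the matrix below is an arbitrary example chosen to have zero trace and positive determinant.

```python
import numpy as np

# Zero trace and positive determinant should give a pair of purely
# imaginary eigenvalues ±i*sqrt(det J).
J = np.array([[1.0, 2.0], [-2.5, -1.0]])  # trace = 0, det = -1 + 5 = 4
eig = np.linalg.eigvals(J)                # expected: ±2i
```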
1. @MrHonner: Evolutionary game theory in the classroom http://drvinceknight.blogspot.com/2014/03/a-very-brief-interactive-look-at.html by @drvinceknight #mathchat #math
@MrHonner is a twitter account owned by the mathematician Patrick Honner. I follow him, and at some stage his tweet opened my mind to learn something new in the teaching of mathematics.
2. @MrHonner: “It’s not how big your class is, it’s what you do with it” Interesting take on class size http://anglesofreflection.blogspot.com/2014/03/its-not-how-big-your-class-is-its-what.html #edchat
Again, this was a tweet from @MrHonner that brought me to an article about class size versus the quality of teaching. Since we are all teachers, I am sure this article will be of interest.
3. @ISACalculus: Daylight Savings is mathematically illogical. Dilation not translation. Thanks @MrHonner for clearing this up. http://mrhonner.com/archives/1805
This is a light yet interesting article about Daylight Saving. I now live in Indonesia, a very nice tropical region which does not have Daylight Saving. I once lived in Australia, where Daylight Saving occurs every November to March (the other way around from Europe). I find Daylight Saving very interesting and I like this article very much.
4. @standupmaths: You need some maths inequalities? This guy has ≥ you’ll ever need: http://www.lkozma.net/inequalities_cheat_sheet/
A very useful article for mathematics students who want to compete in mathematics olympiads. But let me give credit to @standupmaths first. This account belongs to Matt Parker, who is (of course) a mathematician, and he is also a stand-up comedian. If I get the chance and the universe allows me, I would invite him to my university. Anyway, the title of the article explains a lot: it gives a summary of the inequalities you have ever (or never) heard of. So grab it before the link is dead.
5. http://gizmo.do/2xZuhZm I am so sorry that I do not credit whoever tweeted this link, because I have forgotten. However, the link will bring you to an article about fractals. It is a very nice introduction to fractals and you'll love the way the author presents the article. Not only does he provide figures and illustrations, but he also provides animations and simulations of how to make a fractal shape.
6. http://blogs.wsj.com/atwork/2014/04/15/best-jobs-of-2014-congratulations-mathematicians/?mod=e2fb Again, I have forgotten who tweeted/retweeted this link. This is my last article about mathematics on the web. I think it is quite fitting to end this post with this article, because it is about how math can give you the best job (in terms of salary). The article says at the beginning: "Another day, another reason to get better at math," which, for me, is true.
In conclusion, I'll let you read the articles. I am pretty sure I am going to write Math on the Web 2 next month or soon after.