These are not intended to be complete lecture notes.
Complicated figures or tables or formulas are included here
in case they were not clear or not copied correctly in class.
Computer commands, directory names and file names are included.
Specific help may be included here yet not presented in class.
Source code may be included in line or by a link.
Lecture numbers correspond to the syllabus numbering.
Introduction:
Hello, my name is Jon Squire and I have been using computers
to solve numerical problems since 1959. I have about 1 million
lines of source code, in more than 15 languages,
written over more than 50 years.
How can that be? Check the numerical computation:
1,000,000/50 years is 20,000 lines per year.
20,000/200 working days per year is 100 lines per working day.
Link to file type, file count, line count.
With a lot of reuse, cut-and-paste, same programs and
data files including scripts for many languages on many
operating systems, easy.
On a job, 20,000/(50 weeks*5 days per week) is 80 lines per day.
80/8 hours is 10 lines per hour. You can do that.
You may not save every line you type. Sad.
Overview:
You will be writing 6 small programs and a project in a language
of your choice.
Full details and sample code in a few languages will be provided.
You will always have a weekend between a homework assignment
and the due date.
You may use whatever language or languages you like and you
may experiment with other languages. Examples will be provided
in C, Java, Python, Ruby, Matlab and others. You will see many
languages including Fortran, Ada, Lisp, Pascal, Delphi, Scheme...
The point is that the "syntactic sugar" of any language does
not mean much. You may sometime convert code from some language
into a language you like better.
I have found some programs that I wrote can run faster in
Java, Python, Ada, Fortran, Matlab than they run in optimized C.
Well, the Python and Matlab use efficient library routines
that are written in Fortran. We will cover threads and
multiprocessor HPC concepts. There is a full course, CMSC 483
Parallel and Distributed processing where you get to program
a multiprocessor.
You will be exposed to toolkits, which are very valuable for producing
more and better software with less effort. Over time
you may develop a toolkit to help others. Toolkits are
available for numerical computation, graphics, AI, robotics,
and many other areas.
Read the syllabus.
In this course there may be no exactly correct answer.
This lecture will provide the definitions and the intuition for
absolute error
relative error
round off error
truncation error
Then how to call intrinsic and elementary functions.
Some terms will be used without comment in the rest of the course.
Learn them now. You should run the sample code to increase your
understanding and belief.
"Absolute error" A positive number, the difference between the
exact (analytic) answer and the computed, approximate, answer.
"Relative error" A positive number, the Absolute Error divided
by the exact answer. (Not defined if the exact answer is zero.)
Most numerical software is first tested with one or more
known solutions. Then, the software may be used on
problems where the solution is not known.
Given the exact answer is 1000 and the computed answer is 1001:
The absolute error is 1
The relative error is 1/1000 = 0.001
For a set of computed numbers there are three common absolute errors:
"Maximum Error" the largest error in the set.
"Average Error" the sum of the absolute errors divided by the number
in the set.
"RMS Error" the root mean square of the absolute errors.
sqrt(sum_over_set(absolute_error^2)/number_in_set)
Given 100 exact answers of 1.0, where we
computed 99 answers of 1.0 and one answer of 101.0:
The maximum error is 100.0  (101.0 - 1.0)
The average error is 1.0    (100.0/100)
The RMS error is 10.0       (sqrt(100.0*100.0/100))
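The three error measures can be checked directly; a minimal sketch in Python using the example data above:

```python
import math

exact = [1.0] * 100                  # 100 exact answers of 1.0
computed = [1.0] * 99 + [101.0]      # 99 correct, one bad answer of 101.0

abs_err = [abs(c - e) for c, e in zip(computed, exact)]
max_err = max(abs_err)                              # largest error in the set
avg_err = sum(abs_err) / len(abs_err)               # sum of errors / count
rms_err = math.sqrt(sum(e * e for e in abs_err) / len(abs_err))  # root mean square

print(max_err, avg_err, rms_err)     # 100.0 1.0 10.0
```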
In some problems the main concern is the maximum error, yet
the RMS error is often the best intuitive measure of error.
Generally much better intuitive measure than average error.
Almost all Numerical Computation arithmetic is performed using
IEEE 754-1985 Standard for Binary Floating-Point Arithmetic.
The two formats that we deal with in practice are the 32 bit and
64 bit formats. You need to know how to get the format you desire
in the language you are programming. Complex numbers use two values.
                                         older
          C       Java    Fortran 95        Fortran     Ada 95        MATLAB     Python/R
          ------  ------  ----------------  ----------  ------------  ---------  ---------
  32 bit  float   float   real              real        float         N/A        float
  64 bit  double  double  double precision  real*8      long_float    'default'  'default'
 complex
  32 bit  'none'  'none'  complex           complex     complex       N/A        N/A
  64 bit  'none'  'none'  double complex    complex*16  long_complex  'default'  'default'
'none' means not provided by the language (may be available as a library)
N/A means not available, you get the default.
IEEE Floating-Point numbers are stored as follows:
The single format 32 bit has
1 bit for sign, 8 bits for exponent, 23 bits for fraction
about 6 to 7 decimal digits
The double format 64 bit has
1 bit for sign, 11 bits for exponent, 52 bits for fraction
about 15 to 16 decimal digits
Some example numbers and their bit patterns:
decimal
stored hexadecimal     sign exponent fraction                 significand
                                                              the leading "1" is not stored
                       31   30....23 22....................0
1.0
3F 80 00 00 0 01111111 00000000000000000000000 1.0 * 2^(127-127)
0.5
3F 00 00 00 0 01111110 00000000000000000000000 1.0 * 2^(126-127)
0.75
3F 40 00 00 0 01111110 10000000000000000000000 1.1 * 2^(126-127)
0.9999995
3F 7F FF FF 0 01111110 11111111111111111111111 1.1111* 2^(126-127)
0.1
3D CC CC CD 0 01111011 10011001100110011001101 1.1001* 2^(123-127)
63 62...... 52 51 ..... 0
1.0
3F F0 00 00 00 00 00 00 0 01111111111 000 ... 000 1.0 * 2^(1023-1023)
0.5
3F E0 00 00 00 00 00 00 0 01111111110 000 ... 000 1.0 * 2^(1022-1023)
0.75
3F E8 00 00 00 00 00 00 0 01111111110 100 ... 000 1.1 * 2^(1022-1023)
0.9999999999999995
3F EF FF FF FF FF FF FF 0 01111111110 111 ... 1.11111* 2^(1022-1023)
0.1
3F B9 99 99 99 99 99 9A 0 01111111011 10011..1010 1.10011* 2^(1019-1023)
                       sign exponent fraction
                       the exponent is stored with a bias added;
                       subtract the bias to get the true exponent
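You can reproduce the stored hexadecimal column above by packing a value into IEEE format; a sketch in Python using the standard struct module:

```python
import struct

def hex32(x):
    # pack as big-endian IEEE 32-bit float, show the stored bytes
    return struct.pack('>f', x).hex()

def hex64(x):
    # pack as big-endian IEEE 64-bit float
    return struct.pack('>d', x).hex()

print(hex32(1.0))    # 3f800000
print(hex32(0.75))   # 3f400000
print(hex32(0.1))    # 3dcccccd  note the rounded last digit
print(hex64(0.1))    # 3fb999999999999a
```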
Note that any integer with magnitude up to 2^24 may be represented exactly
in single precision (24 significand bits, counting the hidden leading 1).
Any power of two in the range -126 to +127 times such an integer may also
be represented exactly. Numbers such as 0.1, 0.3, 1.0/5.0, 1.0/9.0 are
represented approximately. 0.75 is 3/4, which is exact.
Some languages are careful to represent approximated numbers
accurate to plus or minus the least significant bit.
Other languages may be less accurate.
Now for some experiments for you to run on the computer of your choice
in the language of your choice.
In single precision floating point print:
10^10 * sum( 1.0, 10^-7, -1.0) answer should be 1,000
10^10 * sum( 1.0, 0.5*10^-7, -1.0) answer should be 500
expression 10^10 * ( 1.0 + 10^-7 -1.0) answer should be 1,000
10000000000.0*(1.0+1.0E-7-1.0)
The order of addition is important.
Adding a small number to 1.0 may not change the value.
This small number is less than what we call "epsilon".
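A sketch of this experiment in Python. Python floats are 64 bit, so single precision is simulated here by round-tripping each intermediate result through the 32-bit format with the struct module (an assumption of this sketch; the linked programs use real float variables):

```python
import struct

def f32(x):
    # round x to the nearest IEEE 32-bit float
    return struct.unpack('>f', struct.pack('>f', x))[0]

# 10^10 * (1.0 + 10^-7 - 1.0) evaluated in single precision:
r = 1.0e10 * f32(f32(1.0 + f32(1.0e-7)) - 1.0)
print(r)    # about 1192, not 1000: 1.0e-7 rounds up to 2^-23 when added to 1.0

# adding the small number to zero instead of to 1.0 keeps it:
r2 = 1.0e10 * f32(f32(1.0 - 1.0) + f32(1.0e-7))
print(r2)   # about 1000
```

The order of addition is what makes the difference, as stated above.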
error_demo1.adb Ada source code
error_demo1_ada.out output
error_demo1.c C source code
error_demo1_c.out output
epsilon.c C double and float source code
epsilon_c.out double and float output
error_demo1.f90 Fortran source code
error_demo1_f90.out output
error_demo1.java Java source code
error_demo1_java.out output
error_demo1.py3 Python source code
error_demo1_py3.out output
error_demo1.m Matlab source code
error_demo1_m.out output
error_demo1.r R source code
error_demo1_r.out output
Remember 10^-7 is 0.0000001, not a power of 2.
Thus, it cannot be stored exactly.
Similarly, 1.0e-16 = 10^-16 is not exact because it is not a power of 2.
Also, floating point arithmetic is performed in registers with
more bits than can be stored. Thus, as shown below, you may
get more precision than you expect. Do not count on it.
epsilon.c showing more precision source code
epsilon.out showing forced store output
We will cover, in the course, methods of reducing error in areas:
statistics sum x^2 - (sum x)^2 vs sum(x-mean)
polynomial definition, evaluation Horner's rule vs x^N
approximation, e.g. sin(x) truncation error vs roundoff error
derivatives
indefinite integral, definite integral, area
partial differential equations
and show sample code in many languages:
Ada 95, C, Fortran 95, Java, Python, MatLab and R
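As a preview of the polynomial topic above, Horner's rule evaluates a polynomial with n multiplies and n adds instead of computing each x^N power separately; a minimal sketch:

```python
def horner(coeffs, x):
    # coeffs are highest power first: [2, 3, 4, 5] is 2x^3 + 3x^2 + 4x + 5
    result = 0.0
    for a in coeffs:
        result = result * x + a   # fold in one coefficient per step
    return result

print(horner([2.0, 3.0, 4.0, 5.0], 2.0))   # 41.0
```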
Error accumulation when computing standard deviation.
Subtracting large numbers loses significant digits.
123456 - 123455 = 1.00000 only 1 significant digit
With only 6 digits representable
123456.00 - 123455.99 = 1.00000 yet should be 0.010000
Two ways of computing the standard deviation are shown,
with the errors indicated for various sets of data:
Cases:
stddev(1, 2,..., 100)
stddev(10, 20,..., 1,000) should just scale
stddev(100, 200,..., 10,000)
stddev(1,000, 2,000,..., 100,000)
stddev(10,000, 20,000,..., 1,000,000)
stddev(10,001, 10,002,..., 10,100) should be same as first
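The two ways of computing the standard deviation can be sketched in Python. The data below is the spread of 1..100 shifted by 10^9, a case I chose (not one of the linked cases) to force the cancellation in double precision:

```python
import math

# same spread as stddev(1..100) but shifted by 10^9
data = [1.0e9 + i for i in range(1, 101)]
n = len(data)

# one-pass formula: (sum x^2 - (sum x)^2 / n) / n  -- subtracts huge numbers
sx = sum(data)
sxx = sum(x * x for x in data)
one_pass = math.sqrt(abs(sxx - sx * sx / n) / n)

# two-pass formula: sum (x - mean)^2 / n  -- subtracts before squaring
mean = sx / n
two_pass = math.sqrt(sum((x - mean) ** 2 for x in data) / n)

print(one_pass, two_pass)   # two_pass stays near the true value 28.866...
```

The two-pass form keeps its accuracy because the subtraction happens before the squaring, so no large numbers are subtracted.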
See computed values in .out files:
sigma_error.c             sigma_error_c.out
sigma_error.f90           sigma_error_f90.out
sigma_error.adb           sigma_error_ada.out
sigma_error.java float    sigma_error_java.out
sigma_errord.java double  sigma_errord_java.out
sigma_error.py            sigma_error_py.out
sigma_error.m             sigma_error_m.out
Using different algorithms, expect slightly different results.
Using different types, float or double, may get very different results.
Iteration needing uniform step size should use multiplication,
not addition as shown in:
bad_roundoff.c
bad_roundoff_c.out
The elementary functions are sin, cos, tan, exp, sqrt
and inverse forms asin, acos, atan, log, power
and reciprocal forms cosecant, secant, cotangent,
and hyperbolic forms sinh, cosh, tanh,
and inverse hyperbolic forms asinh, acosh, atanh, ...
The intrinsic functions are built into the language, e.g. abs (sometimes)
Note that Fortran 95 and Java need no extra information to get the
real valued elementary functions. "C" needs #include <math.h>
while Ada 95 needs 'with' and 'use' Ada.Numerics.Elementary_functions .
Note that Ada 95 overloads the function names and provides
single precision as 'float' and double precision as 'long_float'.
ef_ada.adb Ada 95, float and long float
Note that Fortran 95, Java, and Python overload the function names and provide
the same function name for single and double precision.
Fortran 95 names single precision as 'real' and
double precision as 'double precision'.
ef_f95.f90 Fortran 95, real and double
Note that Java provides double precision as 'double' for variables and constants.
ef_java.java Java, only double
Hyper.java create your own hyperbolic
Note that C provides the elementary functions in double precision only, as 'double', for variables and constants.
ef_c.c C, only double is available
ef_c.out nan means not a number, bad input
Note that MATLAB has the most functions and all functions are
automatically double or double complex as needed.
ef_matlab.m MATLAB everything
ef_matlab.out automatic complex
Note that Python needs import math and can list available functions
test_math.py Python many, automatic conversion
test_math_py.out Python output
test_math.py3 Python many, automatic conversion
test_math_py3.out Python output
Just for fun: Power of 2 "tree"
A small program that prints epsilon, 'eps' and the largest and
smallest floating point numbers, run on one machine, shows:
float_sig.c running, may die
type float has about six significant digits
eps is the smallest positive number added to 1.0f that changes 1.0f
type float eps= 5.960464478E-08
type float small= 1.401298464E-45
this could be about 1.0E-38; the value above shows a denormalized (subnormal) IEEE floating point number
type float large= 1.701411835E+38
type double has about fifteen significant digits
type double eps= 1.11022302462515654E-16
type double small=4.940656458E-324
this could be about 1.0E-308; the value above shows a denormalized (subnormal) IEEE floating point number
type double large=8.988465674E+307
The program is float_sig.c
Using k = n/3.32 digits for n bits (since 2^n = 10^(n*log10(2))):
n bits has a largest number 2^n - 1 and k digits has a largest
number 10^k - 1. 24 bits would be 7 digits.
53 bits would be 16 digits, the more optimistic count of significant
digits for IEEE floating point.
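The halving search that programs like float_sig.c perform can be sketched in a few lines of Python (double precision only, since Python floats are 64 bit). Note this loop stops at 2^-52, the gap between 1.0 and the next double; the float_sig.c output above reports the last value that no longer changed 1.0, which is half of this, 2^-53 = 1.11E-16:

```python
# halve until adding half of eps no longer changes 1.0
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps = eps / 2.0
print(eps)    # 2.220446049250313e-16, which is 2^-52
```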
Notes: I have chosen to keep most WEB pages plain text so that
they may be read by all browsers. Some referenced pages may
be .pdf, .jpg, .gif, .ps or other file types.
Many math books use many Greek letters. You may want to refer
to various alphabets.
If you want to stretch your concept of numbers,
check out the Continuum Hypothesis
Beware of easy methods of printing, related to epsilon
eps.py Python2 "print" Python3 "print( )"
eps.f90 Fortran "print"
You may use any language you want. I do not run your code.
Turn in paper in class or submit your source code and output for grading.
It is OK to ask for an example in a language I have not
provided. For various "C", I just provide .h and .c.
You may add to .h files for C++
#ifdef __cplusplus
extern "C" {
#endif
// header file function prototypes
#ifdef __cplusplus
}
#endif
You may use Fortran and C object files in many
languages, including Java, Python, Matlab, etc.
Last updated 12/13/2021
Some physical problems are easy to solve numerically using just
the basic equations of physics. Other problems may be very difficult.
Consider a specific model rocket with a specific engine.
Given all the data we can find, compute the maximum altitude
the rocket can obtain. Yes, this is rocket science.
Most physics computation is performed with metric units.
units and equations
Estes Alpha III
Length 12.25 inches = 0.311 meters
Diameter 0.95 inches = 0.0241 meters
Body area 0.785 square inches = 0.506E-3 square meters cross section
Cd of body 0.45 dimensionless
Fins area 7.69 square inches = 0.00496 square meters total for 3 fins
Cd of fins 0.01 dimensionless
Weight/mass 1.2 ounce = 0.0340 kilogram without engine
Engine 0.85 ounce = 0.0242 kilogram initial engine mass
Engine 0.33 ounce = 0.0094 kilogram final engine mass
Thrust curve
Total impulse 8.82 newton seconds (area under curve)
Peak thrust 14.09 newton
Average thrust 4.74 newton
Burn time 1.86 second
Initial conditions:
t = 0 time
s = 0 height
v = 0 velocity
a = 0 acceleration
F = 0 total force not including gravity
m = 0.0340 + 0.0242 mass
i = 1 start with some thrust
Basic physics: (the order is needed for hw2)
(in while loop, start with time=0.1 thrust=6)
Fd = Cd*Rho*A*v^2 /2 two equations, body and fins
Fd is force of drag in newtons in opposite direction of velocity
Cd is coefficient of drag, dimensionless (depends on shape)
Rho is density of air, use 1.293 kilograms per meter cubed
A is total surface area in square meters
v is velocity in meters per second (v^2 is velocity squared)
Fg = m*g Fg is force of gravity toward center of Earth
m is mass in kilograms
g is acceleration due to gravity, 9.80665 meters per second squared
Ft = value from thrust curve array at this time, you enter this data.
index i, test i>18 and set Ft = 0.0 (time > 1.8 seconds)
Do not copy! This is part of modeling and simulation.
start with first non zero thrust.
F = Ft - (Fd body + Fd fins + Fg) resolve forces
a = F/m a is acceleration we will compute from knowing
F, total force in newtons and
m is mass in kilograms of body plus engine mass that changes
dv = a*dt dv is velocity change in meters per second in time dt
a is acceleration in meters per second squared
dt is delta time in seconds
v = v+dv      v is new velocity after the dt time step
              (v is positive upward, stop when v goes negative)
              v on the right side is the previous velocity prior to the dt time step
              dv is velocity change in meters per second in time dt
ds = v*dt ds is distance in meters moved in time dt
v is velocity in meters per second
dt is delta time in seconds
s = s+ds      s is new position after the dt time step
              s on the right side is the previous position prior to the dt time step
              ds is distance in meters moved in time dt
m = m -0.0001644*Ft apply each time step
print t, s, v, a, m
t = t + dt time advances
i = i + 1
if v < 0 quit, else loop
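The step-by-step loop above can be sketched in Python. The thrust table below is a made-up placeholder, NOT the published Alpha III curve; you must enter your own values read from the thrust curve (remember: do not copy):

```python
# Euler time-step loop for the model rocket, dt = 0.1 second
RHO = 1.293                               # air density, kg/m^3
G = 9.80665                               # gravity, m/s^2
CD_A = 0.45 * 0.506e-3 + 0.01 * 0.00496   # Cd*A, body plus fins
DT = 0.1                                  # time step, seconds

# PLACEHOLDER thrust samples in newtons at 0.1 s intervals -- replace
# with values read from the real thrust curve
thrust = [0.0, 6.0, 12.0, 8.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0,
          5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 4.0, 2.0]

t = 0.1                       # start at the first nonzero thrust
s = 0.0                       # height, meters
v = 0.0                       # velocity, meters per second
m = 0.0340 + 0.0242           # body mass plus initial engine mass, kg
i = 1
while True:
    Ft = thrust[i] if i < len(thrust) else 0.0   # zero at and after burnout
    Fd = CD_A * RHO * v * v / 2.0                # drag opposes the climb
    F = Ft - Fd - m * G                          # resolve forces
    a = F / m
    v = v + a * DT
    if v < 0.0:               # apogee passed: stop without printing
        break
    s = s + v * DT
    m = m - 0.0001644 * Ft    # engine expels mass in proportion to thrust
    print(t, s, v, a, m)
    t = t + DT
    i = i + 1
print("approximate maximum height:", s, "meters")
```

This is only a skeleton showing the order of operations; your homework program must use the real thrust curve data and print the required columns.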
Ft is zero at and beyond 1.9 seconds, rocket speed decreases
Homework Problem 1:
Write a small program to compute the maximum height when
the rocket is fired straight up. Assume no wind.
In order to get reasonable consistency of answers, use dt = 0.1 second
Every student will have a different answer.
Somewhere near the 350 meters printed on the box, +/- 30%.
Any two answers that are the same, get a zero.
Suggestion: Check the values you get from the thrust curve by
simple summation. Using zero thrust at t=0 and t=1.9 seconds, sampling
at 0.1 second intervals, you should get a sum of about 90 . Adjust
values to make it this value in order to get reasonable consistency of
answers.
The mass changes as the engine burns fuel and expels mass
at high velocity. Assume the engine mass decreases from 0.0242 kilograms
to 0.0094 kilograms in proportion to thrust. Thus the engine mass is
decreased each 0.1 second by the thrust value at that time times
(0.0242-0.0094)/90.0 = 0.0001644 . mass=mass-0.0001644*thrust at this time.
"thrust" = 0.0 at time t=0.0 seconds
"thrust" = 6.0 at time t=0.1 seconds.
"thrust" = 0.0 at and after 1.9 seconds. Important, rocket is still climbing.
Check that the mass is correct at the end of the flight. 0.0340+0.0094
Published data estimates a height of 1100 feet, 335 meters to
1150 feet, 350 meters.
Your height will vary.
Your homework is to write a program that prints every 0.1 seconds:
the time in seconds
height in meters
velocity in meters per second
acceleration in meters per second squared
force in newtons
mass in kilograms (just numbers, all on one line)
and stop when the maximum height is reached.
Think about what you know. It should become clear that at each
time step you compute the body mass + engine mass, the three
forces combined into Ft-Fd_body-Fd_fins-Fg, the acceleration, the velocity
and finally the height. Obviously stop without printing if
the velocity goes negative (the rocket is coming down).
The program has performed numerical double integration. You might
ask "How accurate is the computation?"
Well, the data in the problem statement is plus or minus 5%.
We will see later that the computation contributed less error.
A small breeze would deflect the rocket from vertical and
easily cause a 30% error. We should say that:
"the program computed the approximate maximum height."
Additional cases you may wish to explore.
What is the approximate maximum height without any drag,
set Rho to 0.0 for a vacuum.
What is the approximate maximum height using dt = 0.05 seconds.
What is the approximate maximum height if launched at 45 degrees
rather than vertical, resolve forces in horizontal and vertical.
For this type of rocket to have stable flight the center of gravity
of the rocket must be ahead of the center of pressure. The center
of pressure is the centroid of the area looking at the side of
the rocket. This is why the fins extend out the back and the
engine mass is up in the rocket.
The shape of a moving object determines its drag coefficient, Cd.
The drag coefficient combined with velocity and area of the
moving object determine the drag force.
Fd = 1/2 * Cd * density * area * velocity^2
A few shapes and their respective drag coefficients are:
We will cover numeric computation of flow.
For more physics equations, units and conversion click here.For more physics equations, center of mass, click here.Homework 1 assignment
What you are doing is called "Modeling and Simulation", possibly
valuable buzz words on a job interview. Also "rocket scientist".
Observing what you are modeling may help.
Simultaneous equations are multiple equations involving the same
variables. In general, to get a unique solution we need the
same number of equations as the number of unknown variables,
and the equations must be mutually linearly independent.
A sample set of three equations in three unknowns is:
eq1: 2*x + 3*y + 2*z = 13
eq2: 3*x + 2*y + 3*z = 17
eq3: 4*x - 2*y + 2*z = 12
One systematic solution method is called the Gauss-Jordan reduction.
We will reduce the three equations such that eq1: has only x,
eq2: has only y and eq3: has only z making the constant yield
the solution. Operations are always based on the latest version
of each equation. The numeric solution will perform the same
operations on a matrix.
Reduce the coefficient of x in the first equation to 1, dividing by 2
eq1: 1*x + 1.5*y + 1*z = 6.5
Eliminate the variable x from eq2: by subtracting eq2 coefficient of
x times eq1:
eq2: becomes eq2: - 3* eq1:
eq2: (3-3)*x + (2-4.5)*y + (3-3)*z = (17-19.5) then simplifying
eq2: 0*x - 2.5*y + 0*z = -2.5
Eliminate the variable x from eq3: by subtracting eq3 coefficient of
x times eq1:
eq3: becomes eq3: - 4* eq1:
eq3: (4-4)*x + (-2-6)*y + (2-4)*z = (12-26) then becomes
eq3: 0*x - 8*y - 2*z = -14
The three equations are now
eq1: 1*x + 1.5*y + 1*z = 6.5
eq2: 0*x - 2.5*y + 0*z = - 2.5
eq3: 0*x - 8.0*y - 2*z = -14.0
Reduce the coefficient of y in eq2: to 1, dividing by -2.5
eq2: 0*x + 1*y + 0*z = 1
Eliminate the variable y from eq1: by subtracting eq1 coefficient of y times eq2:
eq1: becomes eq1: -1.5* eq2:
eq1: 1*x + 0*y + 1*z = (6.5-1.5) then becomes
eq1: 1*x + 0*y + 1*z = 5
Eliminate the variable y from eq3: by subtracting eq3 coefficient of y times eq2:
eq3: becomes eq3: - (-8)* eq2:
eq3: 0*x + 0*y - 2*z = (-14 + 8) then becomes
eq3: 0*x + 0*y - 2*z = -6
The three equations are now
eq1: 1*x + 0*y + 1*z = 5
eq2: 0*x + 1*y + 0*z = 1
eq3: 0*x + 0*y - 2*z = -6
Reduce the coefficient of z in eq3: to 1, dividing by -2
eq3: 0*x + 0*y + 1*z = 3
Eliminate the variable z from eq1: by subtracting eq1 coefficient of z times eq3:
eq1: becomes eq1: - 1* eq3:
eq1: 1*x + 0*y + 0*z = (5-3) then becomes
eq1: 1*x + 0*y + 0*z = 2
Eliminate the variable z from eq2: by subtracting eq2 coefficient of z times eq3:
eq2: becomes eq2: - 0* eq3: (unchanged)
eq2: 0*x + 1*y + 0*z = 1
The three equations are now
eq1: 1*x + 0*y + 0*z = 2 or x = 2
eq2: 0*x + 1*y + 0*z = 1 or y = 1
eq3: 0*x + 0*y + 1*z = 3 or z = 3 the desired solution.
The numerical solution simply places the values in a matrix and uses the same
reductions shown above.
Given the equations:   |A|*|X| = |Y|      B = |A Y| (the augmented matrix)
eq1: 2*x + 3*y + 2*z = 13 | 2 3 2 | |x| |13|
eq2: 3*x + 2*y + 3*z = 17 | 3 2 3 |*|y|=|17|
eq3: 4*x - 2*y + 2*z = 12 | 4 -2 2 | |z| |12|
Create the matrix | 2 3 2 13 | having 3 rows and 4 columns
B = | 3 2 3 17 |
| 4 -2 2 12 |
The following code, using n=3 for these three equations, computes the
same desired solution as the manual method above.
for k=1:n
  for j=k+1:n+1
    B(k,j) = B(k,j)/B(k,k)
  end
  B(k,k) = 1
  for i=1:n
    if i ~= k
      for j=k+1:n+1
        B(i,j) = B(i,j) - B(i,k)*B(k,j)
      end
      B(i,k) = 0
    end
  end
end
The solution is then in column n+1:  x = B(1,n+1), y = B(2,n+1), z = B(3,n+1)
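The same reduction written as a short runnable sketch in Python, with no pivoting (the example matrix happens to have a nonzero diagonal at every step):

```python
# Gauss-Jordan reduction of the augmented matrix B = [A | Y]
B = [[2.0,  3.0, 2.0, 13.0],
     [3.0,  2.0, 3.0, 17.0],
     [4.0, -2.0, 2.0, 12.0]]
n = 3
for k in range(n):
    piv = B[k][k]                 # no pivoting: assumes B[k][k] != 0
    for j in range(k, n + 1):
        B[k][j] /= piv            # reduce coefficient of variable k to 1
    for i in range(n):
        if i != k:
            f = B[i][k]
            for j in range(k, n + 1):
                B[i][j] -= f * B[k][j]   # eliminate variable k from row i
x = [B[i][n] for i in range(n)]
print(x)    # [2.0, 1.0, 3.0]
```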
Pivoting to avoid zero on diagonal and improve accuracy
Now we must consider the possible problem of B(k,k) being zero
for some value of k. It is rather obvious that the order of the
equations does not matter. The equations can be given in any
order and we get the same solution. Thus, simply interchange
any equation where we are about to divide by a zero B(k,k)
with some equation below that would not result in a zero B(k,k).
It turns out that we get better accuracy if we always pick
the equation that has the largest absolute value for B(k,k).
If the largest value turns out to be zero then there is no
unique solution for the set of equations.
We generally want numerical code to run efficiently and thus we
will not physically interchange the equations but rather keep
a row index array that tells us where the k th row is now.
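The row index array bookkeeping can be sketched as follows; the solve() helper here is illustrative, not the linked final code:

```python
# Gauss-Jordan with partial pivoting via a row index array: rows are
# never physically interchanged, only the index array `row` is permuted.
def solve(B):
    n = len(B)
    row = list(range(n))                 # row[k] tells where the k-th pivot row lives
    for k in range(n):
        # pick the remaining row with the largest |coefficient| in column k
        p = max(range(k, n), key=lambda r: abs(B[row[r]][k]))
        row[k], row[p] = row[p], row[k]  # swap indices, not rows
        piv = B[row[k]][k]
        if piv == 0.0:
            raise ValueError("no unique solution")
        for j in range(k, n + 1):
            B[row[k]][j] /= piv
        for i in range(n):
            if i != k:
                f = B[row[i]][k]
                for j in range(k, n + 1):
                    B[row[i]][j] -= f * B[row[k]][j]
    return [B[row[k]][n] for k in range(n)]

# first coefficient is zero, so elimination without pivoting would divide by zero
xs = solve([[0.0, 2.0, 1.0, 5.0],
            [1.0, 1.0, 1.0, 4.0],
            [2.0, 1.0, 3.0, 7.0]])
print(xs)    # [1.0, 2.0, 1.0]
```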
The code for the final algorithm is given in the links below.
Note types of errors
test_simeq_small.c source code
test_simeq_small_c.out output
test_simeq_small.c results should be exactly X[0]=1.0, X[1]=-1.0
err is the residual from multiplying the computed solution X by the matrix A
and comparing with Y in A*X=Y.
A small change in the data gives a big change in the solution.
The big change is not indicated by the computed error.
decimal .835*X[0] + .667*X[1] = .168 Y[0]
decimal .333*X[0] + .266*X[1] = .067 Y[1]
flt-pt 0.835000*X[0] + 0.667000*X[1] = 0.168000
flt-pt 0.333000*X[0] + 0.266000*X[1] = 0.067000
initialization complete, solving
solution X[0]=1.0, err=0
solution X[1]=-1.0, err=0 -- expected
now perturb Y[1] from .067 to .066, .001 change, and run again
flt-pt 0.835000*X[0] + 0.667000*X[1] = 0.168000 Y[0]
flt-pt 0.333000*X[0] + 0.266000*X[1] = 0.066000 Y[1] -- only change
solution X[0]=-666.0, err=0
solution X[1]=834.0, err=2.84217e-14 -- X[1] WOW!
notice computed errors are still small,
yet, the values of X[0] and X[1] are very different.
Typical problem with people and computers:
Often called garbage in, garbage out!
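The perturbation experiment above is easy to reproduce; a sketch using Cramer's rule for this 2 by 2 system (the linked C code uses elimination instead):

```python
# solve a11*x0 + a12*x1 = y0, a21*x0 + a22*x1 = y1 by Cramer's rule
def solve2(a11, a12, a21, a22, y0, y1):
    det = a11 * a22 - a12 * a21          # about -1.0e-6: nearly singular
    return ((y0 * a22 - a12 * y1) / det,
            (a11 * y1 - y0 * a21) / det)

xa0, xa1 = solve2(0.835, 0.667, 0.333, 0.266, 0.168, 0.067)
print(xa0, xa1)     # close to 1, -1

# perturb Y[1] from .067 to .066 and the solution jumps wildly
xb0, xb1 = solve2(0.835, 0.667, 0.333, 0.266, 0.168, 0.066)
print(xb0, xb1)     # close to -666, 834
```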
test_inverse_small.c source code
test_inverse_small_c.out output
We will see later that when the coefficient matrix of a set of
simultaneous equations is inverted and the norm of the inverse
differs from the norm of the matrix by many orders of magnitude,
the system is called ill-conditioned.
Throughout this course you will see variations of this source code,
tailored for specific applications. The packaging will change with
"C" files having code inside with 'static void', Fortran 95 code using
modules and, Java and Ada code using packages. Python etc. code.
It should be noted that the algorithm is exactly the same for sets
of equations with complex values. The code change is simply
changing the type in Fortran 95, Java, and Ada 95. The Java class
'Complex' is on my WEB page. The "C" code requires a lot of changes.
I wrote the first version of this program for the IBM 650 in assembly
language as an electrical engineering student. The program was for
complex values and solved for node voltages in alternating current
circuits. A quick and dirty version is ac_circuit.java
that needs a number of java packages:
Matrix.java
Complex.java
ComplexMatrix.java
ac_analysis.java an improved version
Then an even more complete version that plots up to eight node voltages.
ac_plot.java simple Java plot added
ac_plot.dat capacitor, inductor and tuned circuits
Output of java myjava.ac_plot.java < ac_plot.dat
There are systems of equations with no solutions:
eq1: 1*x + 0*y + 0*z = 2
eq2: 2*x + 0*y + 0*z = 2
eq3: 4*x - 2*y + 3*z = 5
Some may ask: What about solving |A| * |X| = |Y| for X, given A and Y,
using |X| = |A|^-1 * |Y| (inverse of matrix A) ?
The reason this is not a good numerical solution is that slightly
more total error will be in the inverse |A|^-1 and then a little
more error will come from the vector times matrix multiplication.
Matrix Inverse
The code for matrix inverse is very similar to the code for solving
simultaneous equations. Added effort is needed to find the
maximum pivot element and there must be both row and column
interchanges. An example that shows the increasing error with the
increasing size of the matrix, on a difficult matrix, is shown below.
Note that results of a 16 by 16 matrix using 64-bit IEEE Floating
point arithmetic that is ill conditioned may become useless.
inverse.f90
test_inverse.f90
test_inverse_f90.out
Extracted from test_inverse_f90.out is
initializing big matrix, n= 2 , n*n= 4
sum of error= 1.84748050191530E-16 , avg error= 4.61870125478825E-17
initializing big matrix, n= 4 , n*n= 16
sum of error= 2.19971263426544E-12 , avg error= 1.37482039641590E-13
initializing big matrix, n= 8 , n*n= 64
sum of error= 0.00000604139304982709 , avg error= 9.43967664035483E-8
initializing big matrix, n= 16 , n*n= 256
sum of error= 83.9630735209012 , avg error= 0.327980755941020
initializing big matrix, n= 32 , n*n= 1024
sum of error= 4079.56590417946 , avg error= 3.98395107830025
initializing big matrix, n= 64 , n*n= 4096
sum of error= 53735.8765782488 , avg error= 13.1191104927365
initializing big matrix, n= 128 , n*n= 16384
sum of error= 85784.2643647822 , avg error= 5.23585597929579
initializing big matrix, n= 256 , n*n= 65536
sum of error= 1097119.16168229 , avg error= 16.7407098645368
initializing big matrix, n= 512 , n*n= 262144
sum of error= 1.36281435213093E+7 , avg error= 51.9872418262837
initializing big matrix, n= 1024 , n*n= 1048576
sum of error= 1.24247404738082E+9 , avg error= 1184.91558778841
Very similar results from the C version:
inverse.c
test_inverse.c
test_inverse.out
inverse.py
test_inverse.py
test_inverse_py.out
test_inv.py numpy version
test_inv_py.out
Extracted from test_inverse.out is
initializing big matrix, n=1024, n*n=1048576
sum of error=1.24247e+09, avg error=1184.92
Inverse.rb Ruby class Inverse
test_inverse.rb Ruby test
test_inverse_rb.out Ruby output
Multiple Precision
A case study using 32-bit IEEE floating point and 50, 100, and 200
digit multiple precision are shown in Lecture 3a
Reducing the number of equations when some values are known
Later, when we study partial differential equations, we will need
a process (cs455_l3c.shtml) for reducing the number
of equations when we know the value of one or more elements
of the unknown vector.
Nonlinear equations and systems of nonlinear equations
are covered in Lecture 16
Given a matrix A, computing the inverse AI, then checking that
|A| * |AI| = |II| is approximately the identity matrix |I|
is useful and possibly very important.
The check used for this case study was to sum the absolute values
of |II| - |I| and print the sum and also print the sum divided
by n*n, the number of elements in the matrix.
This case study uses the classic, difficult to invert,
variation of the Hilbert Matrix, in floating point format,
shown here as rational numbers:
| 1/2 1/3 1/4 1/5 | using i for the column index and
| 1/3 1/4 1/5 1/6 | j for the row index,
| 1/4 1/5 1/6 1/7 | the (i,j) element is 1/(i+j)
| 1/5 1/6 1/7 1/8 | as a floating point number
A few solutions are (A followed by AI):
| 1/2 | n=1 | 2 |
| 1/2 1/3 | n=2 | 18 -24 |
| 1/3 1/4 | | -24 36 |
| 1/2 1/3 1/4 1/5 | n=4 | 200 -1200 2100 -1120 |
| 1/3 1/4 1/5 1/6 | | -1200 8100 -15120 8400 |
| 1/4 1/5 1/6 1/7 | | 2100 -15120 29400 -16800 |
| 1/5 1/6 1/7 1/8 | | -1120 8400 -16800 9800 |
| 1/2 1/3 1/4 1/5 1/6 1/7 1/8 1/9 |
| 1/3 1/4 1/5 1/6 1/7 1/8 1/9 1/10 |
| 1/4 1/5 1/6 1/7 1/8 1/9 1/10 1/11 |
| 1/5 1/6 1/7 1/8 1/9 1/10 1/11 1/12 |
| 1/6 1/7 1/8 1/9 1/10 1/11 1/12 1/13 |
| 1/7 1/8 1/9 1/10 1/11 1/12 1/13 1/14 |
| 1/8 1/9 1/10 1/11 1/12 1/13 1/14 1/15 |
| 1/9 1/10 1/11 1/12 1/13 1/14 1/15 1/16 |
| 2592 -60480 498960 -1995840 4324320 -5189184 3243240 -823680 |
| -60480 1587600 -13970880 58212000 -129729600 158918760 -100900800 25945920 |
| 498960 -13970880 128066400 -548856000 1248647400 -1553872320 998917920 -259459200 |
| -1995840    58212000   -548856000   2401245000  -5549544000   6992425440  -4540536000  1189188000 |
|  4324320  -129729600   1248647400 -5549544000  12985932960 -16527551040  10821610800 -2854051200 |
| -5189184 158918760 -1553872320 6992425440 -16527551040 21210357168 -13984850880 3710266560 |
| 3243240 -100900800 998917920 -4540536000 10821610800 -13984850880 9275666400 -2473511040 |
|  -823680    25945920   -259459200  1189188000  -2854051200   3710266560  -2473511040   662547600 |
Note the exponential growth of the size of numbers in the inverse.
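A sketch of the |A|*|AI| check for the 1/(i+j) matrix, using Gauss-Jordan inversion with partial pivoting (illustrative only; the linked Fortran and C codes differ in detail, so exact error values will differ):

```python
def inverse(A):
    n = len(A)
    # augment with the identity, then Gauss-Jordan with partial pivoting
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]              # partial pivoting
        piv = M[k][k]
        for j in range(k, 2 * n):
            M[k][j] /= piv
        for i in range(n):
            if i != k:
                f = M[i][k]
                for j in range(k, 2 * n):
                    M[i][j] -= f * M[k][j]
    return [row[n:] for row in M]

def avg_error(n):
    # sum |A*AI - I| over all elements, divided by n*n
    A = [[1.0 / (i + j) for j in range(1, n + 1)] for i in range(1, n + 1)]
    AI = inverse(A)
    II = [[sum(A[i][k] * AI[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return sum(abs(II[i][j] - (1.0 if i == j else 0.0))
               for i in range(n) for j in range(n)) / (n * n)

# n=2 inverse should match the | 18 -24 | shown above
AI = inverse([[0.5, 1.0 / 3.0], [1.0 / 3.0, 0.25]])
print(AI)
for n in (2, 4, 8, 16):
    print(n, avg_error(n))    # average error grows rapidly with n
```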
Now see increasing errors with size of simultaneous equations:
simeq2.c
simeq2.h
simeq2p.h
test_simeq_hilbert.c
test_simeq_hilbert_c.out
Much better accuracy with random matrices
test_simeq_random.c
test_simeq_random_c.out
One measure of the difficulty of inverting a matrix is the size
of the largest diagonal during each step of the inversion process.
The magnitude of the largest element of the inverse will be
approximately the order of the reciprocal of the smallest
of the largest diagonal.
The smallest of the largest diagonal for a few cases are:
n = 2 .277E-1
n = 4 .340E-4
n = 8 .770E-10
n = 16 .560E-22
n = 32 .358E-46
n = 64 .645E-95
n = 128 .178E-192
n = 256 failed with 200 digit multiple precision
The average error, as computed for various precisions, is
7-digit 15-digit 166-bit 332-bit 664-bit MatLab
IEEE IEEE mpf mpf mpf IEEE
n 32-bit 64-bit 50-digit 100-digit 200-digit 64-bit
2 1.29E-7 4.61E-17 1.99E-59 1.36E-107 6.37E-204 0.00
4 7.84E-5 1.37E-13 1.91E-58 3.05E-106 6.12E-203 2.39E-13
8 3.83E0 9.43E-8 4.44E-50 1.81E-98 3.24E-194 7.31E-7
16 1.23E1 3.27E-1 8.13E-39 2.77E-87 8.56E-183 2.37E2
32 5.20E0 3.98E0 4.72E-14 4.45E-62 1.19E-158 1.46E6
64 6.13E0 1.31E1 1.01E8 2.20E-13 1.20E-109 2.55E8
128 8.39E0 5.23E0 7.79E7 3.46E8 3.83E-12 2.27E10
256 1.14E3 1.67E1 1.76E8 8.51E7 1.05E8 8.38E11
The errors bigger than 1E-5 in the first two columns are
deceptive: they indicate failure to invert.
MatLab does indicate failure for n=16 and larger; the other codes
had the singular-matrix error suppressed.
A reasonable conclusion, for this matrix, is that an n by n matrix
needs more than n bits of floating point precision in order to
get a reliable inverse; about twice n bits are needed to get
good results.
Before you panic, notice the results for the same test in MatLab
for this hard-to-invert matrix versus a pseudo-random matrix.
n=2, avgerr=0 , rnderr=4.16334e-017
n=4, avgerr=2.39808e-013 , rnderr=1.04246e-016
n=8, avgerr=7.31534e-007 , rnderr=7.86263e-016
n=16, avgerr=237.479 , rnderr=3.57e-016
n=32, avgerr=1.46377e+006 , rnderr=7.5899e-016
n=64, avgerr=2.55034e+008 , rnderr=2.34115e-015
n=128, avgerr=2.2773e+010 , rnderr=6.70795e-015
n=256, avgerr=8.38903e+011 , rnderr=1.9137e-014
Well-conditioned matrices may be inverted for n in the range
10,000 to 20,000 with IEEE 64-bit floating point.
Many large matrices are sparse, having many zero elements,
and may have only bands of non-zero elements. Unfortunately
the inverses of sparse matrices are generally not sparse, thus
sparse matrix storage techniques may actually be slower.
The MatLab code is, of course, the shortest (stripped here):
n=1;
for r=1:8
  n=2*n;
  A=zeros(n,n);
  B=rand(n,n);
  for i=1:n
    for j=1:n
      A(i,j)=1.0/(i+j);
    end
  end
  avgerr=sum(sum(abs(A*inv(A)-eye(n,n))))/(n*n)
  rnderr=sum(sum(abs(B*inv(B)-eye(n,n))))/(n*n)
end
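The same experiment can be sketched in plain Python using only the standard library. The Gauss-Jordan helper below and its names are my own illustration, not the course's inverse.py:

```python
# Sketch: reproduce the A(i,j) = 1/(i+j) inversion-error experiment.
# (Helper names are illustrative, not from the course files.)

def inverse(A):
    """Invert matrix A by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    # augment A with the identity matrix
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # partial pivoting: move the largest magnitude element to the diagonal
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        piv = M[col][col]
        M[col] = [v / piv for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

def avg_identity_error(A):
    """Average absolute deviation of A*inv(A) from the identity matrix."""
    n = len(A)
    Ainv = inverse(A)
    err = 0.0
    for i in range(n):
        for j in range(n):
            s = sum(A[i][k] * Ainv[k][j] for k in range(n))
            err += abs(s - (1.0 if i == j else 0.0))
    return err / (n * n)

for n in (2, 4, 8, 16):
    # 0-based loops, so A[i][j] = 1/(i+j) with 1-based i,j becomes 1/(i+j+2)
    A = [[1.0 / (i + j + 2) for j in range(n)] for i in range(n)]
    print(n, avg_identity_error(A))
```

The printed average errors grow rapidly with n, matching the behavior in the tables above.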
The detailed code and results are:
invrnd.m actual MatLab code
invrnd_m.out file output
invrnd_mm.out partial screen output
inverse.f90 basic inverse
test_inverse.f90 test program
test_inverse_f90.out output
inverse.py basic inverse
test_inverse.py test program
test_inverse_py.out output
Simeq.scala has inverse
TestSimeq.scala test has inverse
TestSimeq_scala.out output has inverse
Extracted from test_inverse_f90.out is
initializing big matrix, n= 2 , n*n= 4
sum of error= 1.84748050191530E-16 , avg error= 4.61870125478825E-17
initializing big matrix, n= 4 , n*n= 16
sum of error= 2.19971263426544E-12 , avg error= 1.37482039641590E-13
initializing big matrix, n= 8 , n*n= 64
sum of error= 0.00000604139304982709 , avg error= 9.43967664035483E-8
initializing big matrix, n= 16 , n*n= 256
sum of error= 83.9630735209012 , avg error= 0.327980755941020
initializing big matrix, n= 32 , n*n= 1024
sum of error= 4079.56590417946 , avg error= 3.98395107830025
initializing big matrix, n= 64 , n*n= 4096
sum of error= 53735.8765782488 , avg error= 13.1191104927365
initializing big matrix, n= 128 , n*n= 16384
sum of error= 85784.2643647822 , avg error= 5.23585597929579
initializing big matrix, n= 256 , n*n= 65536
sum of error= 1097119.16168229 , avg error= 16.7407098645368
initializing big matrix, n= 512 , n*n= 262144
sum of error= 1.36281435213093E+7 , avg error= 51.9872418262837
initializing big matrix, n= 1024 , n*n= 1048576
sum of error= 1.24247404738082E+9 , avg error= 1184.91558778841
Very similar results from the C version:
inverse.c inverse.h test_inverse.c test_inverse.out
Similar results from float rather than double in the C version:
inversef.c inversef.h test_inversef.c test_inversef.out
Exploring results from 50 digit multiple precision arithmetic version:
mpf_inverse.c mpf_inverse.h test_mpf_inverse.c test_mpf50_inverse.out
Changing 'digits' to 100 digit multiple precision arithmetic version:
test_mpf100_inverse.out
Changing 'digits' to 200 digit multiple precision arithmetic version:
test_mpf200_inverse.out
The 200 digit run with more output:
test_mpf_inverse.out
We have a number of clusters at UMBC; I happen to have used
our Bluegrit, Bluewave, Tara, Maya and Taki clusters, and the
MPI examples are from these multiprocessor machines.
For multi core machines, there are Java Threads and
"C" pthreads and Ada tasks. I have a 12 core desktop
and Intel has a many core computer.
Examples are presented below.
At the end are a few multi core benchmarks for you to run.
We can use our biggest super computer on campus, Taki.
NOTE:
MPI is running on a distributed memory system.
Each process may be considered to have local memory and
in general, there is no common shared memory.
Multicore machines are described here as shared memory systems.
All memory is available to all threads and tasks.
Also known technically as a single address space.
Some parallel programming techniques apply to both
distributed and shared memory; some techniques do not
apply to both memory systems.
MPI
MPI stands for Message Passing Interface and is
available on many multiprocessors. MPI may be installed
as the open source version MPICH. There are other
software libraries and languages for multiprocessors,
yet, this lecture only covers MPI.
The WEB page here at UMBC is
www.csee.umbc.edu/help/MPI
Programming in MPI is the SPMD Single Program Multiple Data style of
programming. One program runs on all CPU's in the multiprocessor.
Each CPU has a number, called a rank in MPI, called myid in my code and
called node or node number in comments.
"if-then-else" code based on the node number is used to perform
unique computation on specific nodes. There is a master node,
typically the node with rank zero in MPI. The node number may
also be used in index expressions and other computation. Many
MPI programs use the master as a number cruncher along with the
other nodes in addition to the master serving as overall control
and synchronization.
Examples below are given first in "C" and then a few in Fortran.
Other languages may interface with the MPI library.
These just show a simple MPI use, these are combined later for
solving simultaneous equations on a multiprocessor.
Just check that a message can be sent and received from each
node, processor, CPU, etc. numbered as "rank".
roll_call.c roll_call.out
Just scatter unique data from the "master" to all nodes.
Then gather the unique results from all nodes.
scat.c scat.out
Here is the Makefile I used.
Makefile for C on Bluegrit cluster
Repeating the "roll_call" just changing the language to Fortran.
roll_call.F roll_call_F.out
Repeating scatter/gather just changing the language to Fortran.
scat.F scat_F.out
The Fortran version of the Makefile with additional files I used.
Makefile for Fortran on Bluegrit cluster
my_mpif.h only needed if not on cluster
nodes only needed if default machinefile not used
MPI Simultaneous Equations
Now, the purpose of this lecture, solve huge simultaneous
equations on a highly parallel multiprocessor.
Well, start small when programming a multiprocessor and print out
every step to be sure the indexing and communication are exactly
correct.
This is hard to read, yet it was a necessary step.
psimeq_debug.c psimeq_debug.out
Then, some clean up and removing or commenting out most debug print:
psimeq1.c psimeq1.out
The input data was created so that the exact answers were 1, 2, 3 ...
It is interesting to note: because the double precision floating
point data came from the set of integers, the answers are exact for
8192 equations in 8192 unknowns.
psimeq1.out8192
|A| * |X| = |Y| given matrix |A| and vector |Y| find vector |X|
| 1 2 3 4 5 | |5| | 35| for 5 equations in 5 unknowns
| 2 2 3 4 5 | |4| | 40| the solved problem is this
| 3 3 3 4 5 |*|3|=| 49|
| 4 4 4 4 5 | |2| | 61|
| 5 5 5 5 5 | |1| | 75|
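The small 5 by 5 system above can be checked with a plain serial Gaussian elimination sketch (my own helper, not the MPI psimeq code):

```python
def simeq(A, Y):
    """Solve A*X = Y by Gaussian elimination with partial pivoting,
       then back substitution. A and Y are copied, not modified."""
    n = len(A)
    A = [row[:] for row in A]
    Y = Y[:]
    for col in range(n):
        # partial pivoting: swap in the row with the largest pivot
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        Y[col], Y[p] = Y[p], Y[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for j in range(col, n):
                A[r][j] -= f * A[col][j]
            Y[r] -= f * Y[col]
    X = [0.0] * n
    for i in range(n - 1, -1, -1):
        X[i] = (Y[i] - sum(A[i][j] * X[j] for j in range(i + 1, n))) / A[i][i]
    return X

A = [[1.0, 2.0, 3.0, 4.0, 5.0],
     [2.0, 2.0, 3.0, 4.0, 5.0],
     [3.0, 3.0, 3.0, 4.0, 5.0],
     [4.0, 4.0, 4.0, 4.0, 5.0],
     [5.0, 5.0, 5.0, 5.0, 5.0]]
Y = [35.0, 40.0, 49.0, 61.0, 75.0]
print(simeq(A, Y))  # approximately [5, 4, 3, 2, 1]
```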
A series of timing runs were made, changing the number of equations.
The results were expected to increase in time as order n^3 over the
number of processors being used. Reasonable agreement was measured.
Using 16 processors:
Number of    Time computing    Cube root of 16 times Time
equations    solution (sec)    (should approximately double as
                               the number of equations doubles)
  1024            3.7               3.9
  2048           17.2               6.5
  4096           83.5              11.0
  8192          471.9              19.6
More work may be performed to minimize the amount of
data sent and received in "rbuf".
Given numeric data points, find an equation that approximates the data
with a least square fit. This is one of many techniques for getting
an analytical approximation to numeric data.
The problem is stated as follows :
Given measured data for values of Y based on values of X1,X2 and X3. e.g.
Y_actual X1 X2 X3 observation, i
-------- ----- ----- -----
32.5 1.0 2.5 3.7 1
7.2 2.0 2.5 3.6 2
6.9 3.0 2.7 3.5 3
22.4 2.2 2.1 3.1 4
10.4 1.5 2.0 2.6 5
11.3 1.6 2.0 3.1 6
Find a, b and c such that Y_approximate = a * X1 + b * X2 + c * X3
and such that the sum of (Y_actual - Y_approximate) squared is minimized.
(We are minimizing RMS error.)
The method for determining the coefficients a, b and c follows directly
from the problem definition and mathematical analysis given below.
Set up and solve the system of linear equations:
(Each SUM is for i=1 thru 6 per table above, note symmetry)
| SUM(X1*X1) SUM(X1*X2) SUM(X1*X3) | | a | | SUM(X1*Y) |
| SUM(X2*X1) SUM(X2*X2) SUM(X2*X3) | x | b | = | SUM(X2*Y) |
| SUM(X3*X1) SUM(X3*X2) SUM(X3*X3) | | c | | SUM(X3*Y) |
Easy to program, not good data:
lsfit_lect.c lsfit_lect_c.out
lsfit_lect.java lsfit_lect_java.out
lsfit_lect.py lsfit_lect_py.out
lsfit_lect.f90 lsfit_lect_f90.out
lsfit_lect.m similar to c
lsfit_lect_m.out
test_lsfit.py3 Python3 two methods
test_lsfit_py3.out output
test_lsfit2.py3 Python3 two methods
test_lsfit2_py3.out output better
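As a sketch of the normal-equations approach in Python (names and the test model are my own, not from the lsfit files above); the fit recovers the coefficients exactly when Y is generated from an exact linear model:

```python
def lsfit(X, Y):
    """Least square fit: X is a list of observations (each a list of
       independent variable values), Y the measured values.
       Forms A[i][j] = SUM(Xi*Xj), b[i] = SUM(Xi*Y) and solves A*c = b."""
    m = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(m)]
         for i in range(m)]
    b = [sum(row[i] * y for row, y in zip(X, Y)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        p = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for j in range(col, m):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    # back substitution
    c = [0.0] * m
    for i in range(m - 1, -1, -1):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, m))) / A[i][i]
    return c

# X1, X2, X3 columns from the table above; Y generated here from an
# assumed exact model y = 2*X1 - X2 + 0.5*X3 so the fit can be verified
obs = [[1.0, 2.5, 3.7], [2.0, 2.5, 3.6], [3.0, 2.7, 3.5],
       [2.2, 2.1, 3.1], [1.5, 2.0, 2.6], [1.6, 2.0, 3.1]]
ys = [2.0 * x1 - x2 + 0.5 * x3 for x1, x2, x3 in obs]
print(lsfit(obs, ys))  # approximately [2.0, -1.0, 0.5]
```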
Now, suppose you wanted a constant term to make the fit:
Y_approximate = Y0 + a * X1 + b * X2 + c * X3
Then the linear equations would be:
| SUM( 1* 1) SUM( 1*X1) SUM( 1*X2) SUM( 1*X3) | | Y0 | | SUM( 1*Y) |
| SUM(X1* 1) SUM(X1*X1) SUM(X1*X2) SUM(X1*X3) | | a | | SUM(X1*Y) |
| SUM(X2* 1) SUM(X2*X1) SUM(X2*X2) SUM(X2*X3) | x | b | = | SUM(X2*Y) |
| SUM(X3* 1) SUM(X3*X1) SUM(X3*X2) SUM(X3*X3) | | c | | SUM(X3*Y) |
Note the symmetry! Easy to program.
Note the simultaneous equations, from Lecture 3: |A| x |X| = |Y|
|A| and |Y| easily computable, solve for |X| to get Y0, a, b and c
We now have a simple equation to compute Y approximately from a reasonable
range of X1, X2, and X3.
Y is called the dependent variable and X1 .. Xn the independent variables.
The procedures below implement a few special cases and the general case.
The number of independent variables can vary, e.g. 2D, 3D, etc.
The approximation equation may use powers of the independent variables
The user may create additional independent variables e.g. X2 = SIN(X1)
with the restriction that the independent variables are linearly
independent. e.g. Xi not equal p Xj + q for all i,j,p,q
Mathematical derivation
The mathematical derivation of the least square fit is as follows :
Given data for the dependent variable Y in terms of the independent
variables S,T,U and V consider that there exists a function F
such that Y = F(S,T,U,V)
The problem is to find coefficients a,b,c and d such that
Y_approximate = a * S + b * T + c * U + d * V
and such that the sum of ( Y - Y_approximate ) squared is minimized.
Note: a, b, c, d are scalars. S, T, U, V, Y, Y_approximate are vectors.
To find the minimum of SUM((Y - Y_approximate)^2)
the derivatives must be taken with respect to a,b,c and d and
all must equal zero simultaneously. The steps follow :
SUM((Y - Y_approximate)^2) = SUM((Y - a*S - b*T - c*U - d*V)^2)
d/da = -2 * S * SUM( Y - a*S - b*T - c*U - d*V )
d/db = -2 * T * SUM( Y - a*S - b*T - c*U - d*V )
d/dc = -2 * U * SUM( Y - a*S - b*T - c*U - d*V )
d/dd = -2 * V * SUM( Y - a*S - b*T - c*U - d*V )
Setting each of the above equal to zero (the derivative is zero at
the minimum), the -2 is factored out, the known term is kept on the
right, and the independent variable is moved inside the summation
SUM( a*S*S + b*S*T + c*S*U + d*S*V = S*Y )
SUM( a*T*S + b*T*T + c*T*U + d*T*V = T*Y )
SUM( a*U*S + b*U*T + c*U*U + d*U*V = U*Y )
SUM( a*V*S + b*V*T + c*V*U + d*V*V = V*Y )
Distributing the SUM inside yields
a * SUM(S*S) + b * SUM(S*T) + c * SUM(S*U) + d * SUM(S*V) = SUM(S*Y)
a * SUM(T*S) + b * SUM(T*T) + c * SUM(T*U) + d * SUM(T*V) = SUM(T*Y)
a * SUM(U*S) + b * SUM(U*T) + c * SUM(U*U) + d * SUM(U*V) = SUM(U*Y)
a * SUM(V*S) + b * SUM(V*T) + c * SUM(V*U) + d * SUM(V*V) = SUM(V*Y)
To find the coefficients a,b,c and d solve the linear system of equations
| SUM(S*S) SUM(S*T) SUM(S*U) SUM(S*V) | | a | | SUM(S*Y) |
| SUM(T*S) SUM(T*T) SUM(T*U) SUM(T*V) | x | b | = | SUM(T*Y) |
| SUM(U*S) SUM(U*T) SUM(U*U) SUM(U*V) | | c | | SUM(U*Y) |
| SUM(V*S) SUM(V*T) SUM(V*U) SUM(V*V) | | d | | SUM(V*Y) |
Some observations :
S,T,U and V must be linearly independent.
There must be more data sets (Y, S, T, U, V) than variables.
The analysis did not depend on the number of independent variables
A polynomial fit results from the substitutions S=1, T=x, U=x^2, V=x^3
The general case for any order polynomial in any number of variables
may be handled with substitutions, for example, S=1, T=x, U=y, V=x^2,
W=x*y, X=y^2, etc., and even terms such as exp(x), log(x), sin(x), cos(x).
Any number of terms may be used. The "1" is for the constant term.
Using the S,T,U,V notation above, fitting Y_approx to find a, b, c, d
Y_approx = a*S + b*T + c*U + d*V + ...
Choose S = 1.0 thus a is the constant term
Choose T = log(x) thus b is the coefficient of log(x)
Choose U = log(x*x) thus c is the coefficient of log(x*x)
Choose V = sin(x) thus d is the coefficient of sin(x)
see 4 additional terms in lsfit_log.c
Thus our x data, n = 21 samples in code, is fit to
Y_approx = a + b*log(x) + c*log(x*x) + d*sin(x) + ...
By putting the terms in a vector, simple indexing builds the matrix:
A[i][j] = A[i][j] + term_i * term_j summing over n terms using k
lsfit_log.c fitting log and other terms
lsfit_log_c.out
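A Python sketch of this term-vector indexing (the basis terms and the test model below are my own choices, not those in lsfit_log.c):

```python
import math

def terms_of(x):
    """Put the fitting terms in a vector; here 1, log(x) and sin(x)."""
    return [1.0, math.log(x), math.sin(x)]

def fit_terms(xs, ys):
    """Build A[i][j] += term_i * term_j and b[i] += term_i * y,
       then solve A*c = b for the coefficients."""
    m = len(terms_of(xs[0]))
    A = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for x, y in zip(xs, ys):
        t = terms_of(x)
        for i in range(m):
            for j in range(m):
                A[i][j] += t[i] * t[j]   # simple indexing builds the matrix
            b[i] += t[i] * y
    # Gaussian elimination with partial pivoting, then back substitution
    for col in range(m):
        p = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for j in range(col, m):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    c = [0.0] * m
    for i in range(m - 1, -1, -1):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, m))) / A[i][i]
    return c

xs = [1.0 + k for k in range(21)]        # n = 21 samples
ys = [3.0 + 2.0 * math.log(x) - math.sin(x) for x in xs]
print(fit_terms(xs, ys))  # approximately [3.0, 2.0, -1.0]
```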
truncated series vs lsfit for sin(x+y) and sin(x+y+z)
using code from above, another test case:
fit_sin.adb demonstrates that a least square fit to a specified power
does not give the same coefficients as a truncated approximation
to the same power, and is a more accurate fit.
fit_sin.adb fit_sin_ada.out
Terms for fitting two and three variables, 2D and 3D
And to see how polynomials in two and three variables may be fit:
Given a value of Y for each value of X, get polynomial with X^2
Y_approximate = C0 + C1 * X + C2 * X^2
Then the linear equations would be:
| SUM(1 *1) SUM(1 *X) SUM(1 *X^2) | | C0 | | SUM(1 *Y) |
| SUM(X *1) SUM(X *X) SUM(X *X^2) | | C1 | | SUM(X *Y) |
| SUM(X^2*1) SUM(X^2*X) SUM(X^2*X^2) |x| C2 | = | SUM(X^2*Y) |
Given a value of Y for each value of X, get polynomial with X^3
Y_approximate = C0 + C1 * X + C2 * X^2 + C3 * X^3
Then the linear equations would be:
| SUM(1 *1) SUM(1 *X) SUM(1 *X^2) SUM(1 *X^3) | | C0 | | SUM(1 *Y) |
| SUM(X *1) SUM(X *X) SUM(X *X^2) SUM(X *X^3) | | C1 | | SUM(X *Y) |
| SUM(X^2*1) SUM(X^2*X) SUM(X^2*X^2) SUM(X^2*X^3) |x| C2 | = | SUM(X^2*Y) |
| SUM(X^3*1) SUM(X^3*X) SUM(X^3*X^2) SUM(X^3*X^3) | | C3 | | SUM(X^3*Y) |
Note that index [i][j] of the matrix has SUM(X^i * X^j) when subscripts run 0..3
Note that the terms for two and three variables in a polynomial are:
   two           three           can optimize
a^0 b^0       a^0 b^0 c^0       constant, "1"
a^1 b^0       a^1 b^0 c^0       just a
a^0 b^1       a^0 b^1 c^0       just b
              a^0 b^0 c^1
a^2 b^0       a^2 b^0 c^0       just a^2
a^1 b^1       a^1 b^1 c^0       a*b
              a^1 b^0 c^1
a^0 b^2       a^0 b^2 c^0       just b^2
              a^0 b^1 c^1
              a^0 b^0 c^2
Then terms with a sum of the powers equal to 3, 4, 5, 6 are available
Note that the matrix has the first row as the sum of each term
multiplied by the first term. The second row is the sum of each
term multiplied by the second term, etc.
The data for the terms is from the raw data sets of
y_actual a b or y_actual a b c
being used to determine a fit
y_approx=F(a,b) or y_approx=F(a,b,c)
Terms for the data point y1 a1 b1 are:
1 a1 b1 a1^2 a1*b1 b1^2 the constant term y1
These terms are multiplied by the first term, "1" and added to row 1.
These terms are multiplied by the second term, "a1" and added to row 2, etc.
Then the terms for data point y2 a2 b2 are:
1 a2 b2 a2^2 a2*b2 b2^2 the constant term y2
These terms are multiplied by the first term, "1" and added to row 1.
These terms are multiplied by the second term, "a2" and added to row 2, etc.
Then the simultaneous equations are solved for the coefficients C1, C2, ...
to get the approximating function
y_approx = F(a,b) = C1 + C2*a + C3*b + C4*a^2 + C5*a*b + C6*b^2
The following sample programs compute the least square fit of
internally generated data for low polynomial powers and compute
the accuracy of the fit in terms of root-mean-square error,
average error and maximum error. Note that the fit becomes exact
when the data is from a low order polynomial and the fit uses at
least that order polynomial.
least_square_fit_2d.c least_square_fit_2d.out
least_square_fit_3d.c least_square_fit_3d.out
least_square_fit_4d.c least_square_fit_4d.out
least_square_fit2d.adb least_square_fit2d_ada.out
real_arrays.ads real_arrays.adb
Development of Python 2d and 3d least square fit
polyfit2.py3 polyfit2_py3.out polyfit3.py3 polyfit3_py3.out
You can translate the above to your favorite language.
Now, if everything works, a live interactive demonstration of
the least square fit follows.
There are many other ways to fit a set of points.
The Spline fit smooths the fit by controlling derivatives.
SplineFrame.java
(Replace "LeastSquareFit" with "Spline" then get same files as above.)
Demonstration to see a different type of fit.
Right click on points, left to right increasing X.
Left click to see plot.
Optional lecture on Taylor fit to implement math functions
that may not be available in the programming language you are using.
optional lecture 4.5
updated 10/30/2021
A polynomial of degree n has the largest exponent value n.
There are n+1 terms and n+1 coefficients.
A normalized polynomial has the coefficient of the
largest exponent equal to 1.0.
An nth order polynomial has coefficients c_0 through c_n:
y = c_0 + c_1 x + c_2 x^2 +...+ c_n-1 x^n-1 + c_n x^n
Horner's method for evaluating a polynomial
"y" is computed numerically using Horner's method, which needs
only n multiplications and n additions.
It is thus faster and more accurate than evaluating each power separately.
Starting at the highest power, set y = c_n * x
Then combine the next coefficient: y = (c_n-1 + y) * x
Continue for c_n-2 down through c_1: y = (c_1 + y) * x
Then finish with y = c_0 + y
The code in C, Fortran 90, Java and Ada 95 is:
/* peval.c Horner's method for evaluating a polynomial */
double peval(int n, double x, double c[])
{ /* an nth order polynomial has n+1 coefficients */
  /* stored in c[0] through c[n] */
  /* y = c[0] + c[1]*x + c[2]*x^2 +...+ c[n]*x^n */
  int i;
  double y;
  if(n<=0) return c[0];
  y = c[n]*x;
  for(i=n-1; i>0; i--) y = (c[i]+y)*x;
  return y+c[0];
} /* end peval */
! peval.f90 Horner's method for evaluating a polynomial
function peval(n, x, c) result (y)
  ! an nth order polynomial has n+1 coefficients
  ! stored in c(0) through c(n)
  ! y = c(0) + c(1)*x + c(2)*x^2 +...+ c(n)*x^n
  implicit none
  integer, intent(in) :: n
  double precision, intent(in) :: x
  double precision, dimension(0:n), intent(in) :: c
  double precision :: y
  integer i
  if(n<=0) then
    y = c(0)
    return
  end if
  y = c(n)*x
  do i=n-1, 1, -1
    y = (c(i)+y)*x
  end do
  y = y+c(0)
end function peval
// peval.java Horner's method for evaluating a polynomial
double peval(int n, double x, double c[])
{ // an nth order polynomial has n+1 coefficients
  // stored in c[0] through c[n]
  // y = c[0] + c[1]*x + c[2]*x^2 +...+ c[n]*x^n
  if(n<=0) return c[0];
  double y = c[n]*x;
  for(int i=n-1; i>0; i--) y = (c[i]+y)*x;
  return y+c[0];
} // end peval
-- peval.adb Horner's method for evaluating a polynomial
function peval(n : integer; x : long_float; c : vector) return long_float is
  -- an nth order polynomial has n+1 coefficients
  -- stored in c(0) through c(n)
  -- y := c(0) + c(1)*x + c(2)*x^2 +...+ c(n)*x^n
  y : long_float;
begin
  if n<=0 then return c(0); end if;
  y := c(n)*x;
  for i in reverse 1..n-1 loop  -- do i:=n-1, 1, -1
    y := (c(i)+y)*x;
  end loop;
  y := y+c(0);
  return y;
end peval;
MatLab (note polyval expects the highest power coefficient first)
y = polyval(fliplr(c), x)
Python (numpy's polyval also expects the highest power first)
from numpy import polyval
y = polyval(c[::-1], x)
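For completeness, a direct stdlib Python translation of the peval routines above (my own sketch):

```python
# Horner's method, matching the C/Fortran/Java/Ada peval above.
def peval(n, x, c):
    """c[0..n] holds coefficients: c[0] + c[1]*x + ... + c[n]*x^n."""
    if n <= 0:
        return c[0]
    y = c[n] * x
    for i in range(n - 1, 0, -1):
        y = (c[i] + y) * x
    return y + c[0]

# 1 + 2x + 3x^2 + 4x^3 at x = 2 is 1 + 4 + 12 + 32 = 49
print(peval(3, 2.0, [1.0, 2.0, 3.0, 4.0]))  # 49.0
```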
Testing high degree polynomials
Although Horner's method is fast and accurate on most polynomials,
the following test programs in C, Fortran 90, Java and Ada95
show that evaluating polynomials of order 9 and 10 can have
significant absolute error.
The test programs generate a set of roots r_0, r_1, ...
and compute the coefficients of a set of polynomials
y = (x-r_0)*(x-r_1)*...*(x-r_n-1) becomes
y = c_0 + c_1*x + c_2*x^2 +...+ c_n-1*x^n-1 + 1.0*x^n
When x is any of the roots, y should be zero.
Study one of the .out files and see how the absolute
error increases with polynomial degree and increases with
the magnitude of the root.
(The source code was run on various types of computers
and various operating systems. Results vary.)
peval.c peval_c.out
peval.f90 peval_f90.out
peval.java peval_java.out
peval.adb peval_ada.out
test_peval.rb test_peval_rb.out
Having the coefficients of a polynomial allows creating
the derivative and integral of the polynomial.
Derivative of Polynomial
Given:
y = c_0 + c_1 x + c_2 x^2 +...+ c_n-1 x^n-1 + c_n x^n
dy/dx = c_1 + 2 c_2 x + 3 c_3 x^2 +...+ (n-1) c_n-1 x^n-2 + n c_n x^n-1
The coefficients of dy/dx may be called cd and computed:
for(i=0; i<n; i++) cd[i] = (double)(i+1)*c[i+1];
nd = n-1;
The derivative may be computed at any point, example x=a, using
dy_dx_at_a = peval(nd, a, cd);
Integral of Polynomial
Similarly, the integral y(x) dx has coefficients
Given:
y = c_0 + c_1 x + c_2 x^2 +...+ c_n-1 x^n-1 + c_n x^n
int_y_dx = 0 + c_0 x + c_1/2 x^2 +...+ c_n-1/n x^n + c_n/(n+1) x^n+1
The coefficients of int_y_dx may be called ci and computed:
ci[0] = 0.0; /* a reasonable choice */
ni = n+1;
for(i=1; i<=ni; i++) ci[i] = c[i-1]/(double)i;
The integral of the original polynomial y from a to b is computed:
int_y_a_b = peval(ni, b, ci) - peval(ni, a, ci);
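Both coefficient formulas can be checked with a short sketch (using a Python translation of peval; y = x^2 here is my own test case):

```python
def peval(n, x, c):
    """Horner's method: c[0] + c[1]*x + ... + c[n]*x^n."""
    if n <= 0:
        return c[0]
    y = c[n] * x
    for i in range(n - 1, 0, -1):
        y = (c[i] + y) * x
    return y + c[0]

n = 2
c = [0.0, 0.0, 1.0]                            # y = x^2
cd = [(i + 1) * c[i + 1] for i in range(n)]    # derivative coefficients: 2x
nd = n - 1
ci = [0.0] + [c[i - 1] / i for i in range(1, n + 2)]  # integral: x^3/3
ni = n + 1

print(peval(nd, 3.0, cd))                       # d(x^2)/dx at x=3 is 6
print(peval(ni, 2.0, ci) - peval(ni, 1.0, ci))  # integral 1..2 of x^2 is 7/3
```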
Taylor series, for any differentiable function, f(x)
(x-a) f'(a) (x-a)^2 f''(a) (x-a)^3 f'''(a)
f(x) = f(a) + ----------- + -------------- + --------------- + ...
1! 2! 3!
MacLaurin series, Taylor series with a=0
x f'(0) x^2 f''(0) x^3 f'''(0)
f(x) = f(0) + ------- + ---------- + ----------- + ...
1! 2! 3!
Taylor series, offset
h f'(x) h^2 f''(x) h^3 f'''(x)
f(x+h) = f(x) + ------- + ---------- + ----------- + ...
1! 2! 3!
Example f(x) = e^x, thus f'(x) = e^x f''(x) = e^x f'''(x) = e^x
f(0) = 1 f'(0) = 1 f''(0) = 1 f'''(0) = 1
substituting in the MacLaurin series
x x^2 x^3 x^4
f(x) = 1 + -- + --- + --- + --- + ...
1! 2! 3! 4!
Example f(x) = sin(x), f'(x) = cos(x) f''(x) = -sin(x) f'''(x) = -cos(x)
f(0) = 0 f'(0) = 1 f''(0) = 0 f'''(0) = -1
substituting in the MacLaurin series
x x^3 x^5 x^7
f(x) = -- - --- + --- - --- + ...
1! 3! 5! 7!
Example f(x) = cos(x), f'(x) = -sin(x) f''(x) = -cos(x) f'''(x) = sin(x)
f(0) = 1 f'(0) = 0 f''(0) = -1 f'''(0) = 0
substituting in the MacLaurin series
x^2 x^4 x^6
f(x) = 1 - --- + --- - --- + ...
2! 4! 6!
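These partial sums are easy to check numerically; a stdlib sketch (the `terms=10` cutoff is an arbitrary choice of mine):

```python
import math

def maclaurin_sin(x, terms=10):
    """Partial sum of the MacLaurin series x - x^3/3! + x^5/5! - ..."""
    s = 0.0
    for k in range(terms):
        s += (-1.0) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return s

print(maclaurin_sin(0.5), math.sin(0.5))  # the two values agree closely
```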
Many MacLaurin series expansions are shown on Mathworld
Taylor and MacLaurin series can be expanded for two and more variables.
Two dimensional Taylor series expansion is here
Orthogonal Polynomials
For use in integration and fitting data by a polynomial,
orthogonal polynomials are used. The definition of orthogonal
is based on having a set of polynomials from degree 0 to
degree n that are denoted by p_0(x), p_1(x),...,p_n-1(x), p_n(x).
A weighting function is allowed, w(x), that depends on x but
not on the degree of the polynomial. The set of polynomials
is defined as orthogonal over the interval x=a to x=b when
the following two conditions are satisfied:
/ 0 if i != j
integral from x=a to x=b of w(x) p_i(x) p_j(x) dx = |
\ not 0 if i=j
sum from i=0 to i=n of c_i p_i(x) = 0 for all x only when all c_i are zero.
Legendre Polynomials, Chebyshev Polynomials, Laguerre Polynomials and
Lagrange Polynomials are covered below.
Legendre Polynomials
P_n(x) are defined as
P_0(x) = 1
P_1(x) = x
P_2(x) = 1/2 (3 x^2 - 1)
P_3(x) = 1/2 (5 x^3 - 3 x)
P_4(x) = 1/8 (35 x^4 - 30 x^2 + 3)
the general recursion formula for P_n(x) is
P_n(x) = ((2n-1)/n) x P_n-1(x) - ((n-1)/n) P_n-2(x)
The weight function is w(x) = 1
The interval is a = -1 to b = 1
Explicit expression
P_n(x) = 1/2^n sum m=0 to n/2
(-1)^m (2n-2m)! / [(n-m)! m! (n-2m)!] x^(n-2m)
P_n(x) = (-1)^n/(2^n n!) d^n/dx^n ((1-x^2)^n)
Legendre polynomials are best known for use in Gaussian
integration. These approximate a least square fit.
integrate f(x) for x= -1 to 1
For n point integration, determine w_i and x_i, i=1..n,
where x_i is the i th root of P_n(x) and
w_i = 2/((1 - x_i^2) [P_n'(x_i)]^2)
integral x=-1 to 1 of f(x) = sum i=1 to n of w_i f(x_i)
Change interval from t in [a, b] to x in [-1, 1]
x = (2t-(b+a))/(b-a), i.e. t = (b+a+(b-a)x)/2
see example test program gauleg.c
and results gauleg_c.out
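A tiny fixed-order illustration (my own minimal sketch, not gauleg.c): the two-point Gauss-Legendre rule has nodes at plus and minus 1/sqrt(3), both with weight 1, and is exact for polynomials through degree 3:

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss-Legendre: nodes +-1/sqrt(3), weights 1 on [-1, 1],
       mapped to [a, b] with t = (b + a + (b - a)*x)/2 and scaled by (b-a)/2."""
    x1 = 1.0 / math.sqrt(3.0)
    total = 0.0
    for x in (-x1, x1):
        t = (b + a + (b - a) * x) / 2.0
        total += f(t)                  # both weights are 1
    return total * (b - a) / 2.0

# exact for polynomials up to degree 3 (apart from rounding)
print(gauss2(lambda t: t**3 + t**2, -1.0, 1.0))  # integral is 2/3
print(gauss2(lambda t: t**3, 0.0, 2.0))          # integral is 4
```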
Chebyshev polynomials
T_n(x) are defined as
T_0(x) = 1
T_1(x) = x
T_2(x) = 2 x^2 - 1
T_3(x) = 4 x^3 - 3 x
T_4(x) = 8 x^4 - 8 x^2 + 1
the general recursion formula for T_n(x) is
T_n(x) = 2 x T_n-1(x) - T_n-2(x)
The weight function, w(x) = 1/sqrt(1-x^2)
The interval is a = -1 to b = 1
Explicit expression
T_n(x) = n/2 sum m=0 to n/2 (-1)^m (n-m-1)!/(m! (n-2m)!) (2x)^(n-2m)
T_n(x) = cos(n arccos(x))
Chebyshev polynomials are best known for approximating a function
while minimizing the maximum error, covered in a later lecture.
Fit f(x) with approximate function F(x)
c_i = 2/pi integral x=-1 to 1 f(x) T_i(x) /sqrt(1-x^2) dx
(difficult near x=-1 and x=1)
F(x) = c_0 /2 + sum i=1 to n c_i T_i(x)
Change interval from t in [a, b] to x in [-1, 1]
x = (2t-(b+a))/(b-a), i.e. t = (b+a+(b-a)x)/2
see example test program chebyshev.c
and results chebyshev_c.out
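The recurrence and the cos identity above are easy to cross-check (a stdlib sketch with my own function name):

```python
import math

def cheb_T(n, x):
    """Evaluate T_n(x) by the recurrence T_n = 2 x T_n-1 - T_n-2."""
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1

# for -1 < x < 1 the recurrence matches cos(n arccos(x))
print(cheb_T(5, 0.3), math.cos(5.0 * math.acos(0.3)))
```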
Laguerre polynomials
L_n(x) are defined as
L_0(x) = 1
L_1(x) = -x + 1
L_2(x) = x^2 - 4 x + 2
L_3(x) = -x^3 +9 x^2 - 18 x + 6
the general recursion formula for L_n(x) is
L_n(x) = (2n-x-1) L_n-1(x) - (n-1)^2 L_n-2(x)
The weight function, w(x) = e^(-x)
The interval is a = 0 to b = infinity
Explicit expression
L_n(x) = sum m=0 to n (-1)^m (n!)^2/((m!)^2 (n-m)!) x^m
L_n(x) = e^x d^n/dx^n (x^n e^(-x))
Laguerre polynomials are best known for use in integrating
functions where the upper limit is infinity.
Lagrange Polynomials
These will be used later in Galerkin Finite Element Method
for solving partial differential equations.
Lagrange polynomials L_n(x) are defined as
L_1,0(x) = 1 - x
L_1,1(x) = x
L_2,0(x) = 1 -3 x +2 x^2
L_2,1(x) = 4 x -4 x^2
L_2,2(x) = - x +2 x^2
L_3,0(x) = 1 -5.5 x +9 x^2 -4.5 x^3
L_3,1(x) = 9 x -22.5 x^2 +13.5 x^3
L_3,2(x) = -4.5 x +18 x^2 -13.5 x^3
L_3,3(x) = x -4.5 x^2 +4.5 x^3
L_4,1(x) = 16 x -69.33 x^2 + 96 x^3 - 42.67 x^4
L_5,2(x) = -25 x +222.92 x^2 -614.58 x^3 +677.08 x^4 -260.42 x^5
For 0 ≤ x ≤ 1 with equally spaced points in between.
There is no simple general recursion formula for L_n+1,j(x).
Explicit expression
L_n,k(x) = product i=0 to n, i/=k of (x-x_i)/(x_k-x_i)
Lagrange polynomials are best known for solving differential
equations with equal and unequal grid spacing.
An nth order Lagrange polynomial will exactly fit n data points.
Fit f(x) with approximate function F(x)
F(x) = sum k=0 to n of f(x_k) L_n,k(x)
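A direct sketch of this formula (my own names, not the course's lagrange.c):

```python
def lagrange_eval(xs, ys, x):
    """F(x) = sum over k of ys[k] * L_n,k(x), where
       L_n,k(x) = product over i != k of (x - xs[i]) / (xs[k] - xs[i])."""
    total = 0.0
    for k in range(len(xs)):
        term = ys[k]
        for i in range(len(xs)):
            if i != k:
                term *= (x - xs[i]) / (xs[k] - xs[i])
        total += term
    return total

# three points from y = x^2 + x + 1; the quadratic is reproduced exactly
xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 7.0]
print(lagrange_eval(xs, ys, 1.5))  # 1.5^2 + 1.5 + 1 = 4.75
```

Note that the polynomial passes exactly through every data point, as stated above.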
Change of coordinates for integration
Change interval from t in [a, b] to x in [0, 1]
x = (t-a)/(b-a)
Change interval from t in [a, b] to x in [-1, 1]
x = 2(t-a)/(b-a)-1
Example plots
see example test program lagrange.c
and results lagrange_c.out
A family of Lagrange Polynomials can be constructed to be
1.0 at x_i and zero at every x_j where i ≠ j
Using the notation above, np=11 and each color is a different k.
Each of the 11 points has one color at 1.0 and all other
colors at 0.0 .
Lagrange Phi shape functions
First derivative of Lagrange Phi shape functions
A function may be given as an analytic expression
such as sqrt(exp(x)-1.0) or may be given as a set
of points (x_i, y_i).
There are occasions when an efficient and convenient computer
implementation is needed. One of the efficient and convenient
implementations is a polynomial.
Thanks to Mr. Taylor and Mr. Maclaurin we can convert any
continuously differentiable function to a polynomial:
Taylor series, given differentiable function, f(x)
pick a convenient value for a
(x-a) f'(a) (x-a)^2 f''(a) (x-a)^3 f'''(a)
f(x) = f(a) + ----------- + -------------- + --------------- + ...
1! 2! 3!
Maclaurin series, same as Taylor series with a=0
x f'(0) x^2 f''(0) x^3 f'''(0)
f(x) = f(0) + ------- + ---------- + ----------- + ...
1! 2! 3!
Taylor series, function offset by value of h
h f'(x) h^2 f''(x) h^3 f'''(x)
f(x+h) = f(x) + ------- + ---------- + ----------- + ...
1! 2! 3!
Please use analytic differentiation rather than numerical differentiation.
Programs such as Maple have Taylor Series generation as a primitive.
An example Taylor series is: e^x = 1 + x + x^2/2! + x^3/3! + x^4/4!
Using a fixed number of terms, fourth power in this example,
will result in truncation error. The series has been truncated.
It should be obvious, that for large x, the error will become
very large. Also, this type of series will fail or be very
inaccurate if there are discontinuities in the function being fit.
We often estimate "truncation" error as the next order term
that is not used.
Note the relation of "estimated truncation error" to maximum error
and rms error as more terms are used in the approximation.
TaylorFit.java TaylorFit_java.out
It is interesting to note that the truncation error is usually
slightly less than the maximum error, and is thus a reasonable
estimate of the accuracy of the fit.
Unequally spaced points often use least square fit
For functions given as unequally spaced points, use
the least square fit technique in Lecture 4
Lagrange Fit minimizes maximum error (at fit points)
The Lagrange Fit minimizes the error at the chosen points to fit.
The Lagrange Fit is good for fitting data given at uniform spacing.
The Lagrange fit requires the fewest evaluations of the function
to be fit, convenient if the function to be fit requires
significant computation time.
The Lagrange series approximation f(t) to f(x) is defined as:
L_n(x) = sum j=0..N f(x_j) L_n,j(x)
L_n,j(x) = product i=0..N i /= j (x - x_i)/(x_j - x_i)
Collect coefficients, a_n, of L_n(x) to get
f(t) = sum n=0..N a_n t^n
LagrangeFit.java LagrangeFit.out
Legendre Fit minimizes RMS error
The Legendre Fit, similar to the Least Square Fit, minimizes
the RMS error of the fit.
The Legendre series approximation f(t) to f(x) is defined as:
f(t) = a_0 g_0 + sum n=1..N a_n g_n P_n(t) then combining coefficients can be
f(t) = sum n=0..N b_n t^n a simple polynomial
a_n = integral -1 to 1 f(x) P_n(x) dx
g_n = (2 n + 1)/2
P_0(x) = 1
P_1(x) = x
P_n(x) = (2n-1)/n x P_n-1(x) - (n-1)/n P_n-2(x)
Suppose f(x) is defined over the interval a to b, rather than -1 to 1, then
a_n = (b-a)/2 integral -1 to 1 f((a+b+x(b-a))/2) P_n(x) dx
LegendreFit.java LegendreFit.out
Chebyshev Fit minimizes maximum error (at fit points)
The Chebyshev Fit minimizes the maximum error of the fit for
a given order polynomial.
The Chebyshev series approximation f(t) to f(x) is defined as:
f(t) = a_0/2 + sum n=1..N a_n T_n(t) then combining coefficients can be
f(t) = sum n=0..N b_n t^n a simple polynomial
a_n = 2/Pi integral -1 to 1 f(x) T_n(x)/sqrt(1-x^2) dx
T_0(x) = 1
T_1(x) = x
T_n+1(x) = 2 x T_n(x) - T_n-1(x)
for -1 < x < 1 T_n(x) = cos(n acos(x))
When given an analytic function it may be best to use analytic
evaluation of the integrals. When given just points it may be best
to not use Chebyshev fit, use Lagrange fit. When given a
computer implementation of the function, f(x), to be fit,
use a very good adaptive integration.
ChebyshevFit.java ChebyshevFit.out
Note good intermediate points on smooth curve.
test_chebyshev_fit.py3 test_chebyshev_fit_py3.out
Note bad intermediate points on square wave.
chebyshev_sqw.py3 Chebyshev_sqw_py3.out
spline fit exact at data points with approximate slope
Numerical Integration is usually called Numerical Quadrature.
Watch for "int" in this page; it is short for "integrate".
Numerical integration could be simple summation, but
in order to get reasonable step size and good accuracy
we need better techniques.
integral f(x) dx from x=a to x=b with step size h=(b-a)/n
using simple summation would be computed:
area = (b-a)/n * ( sum i=0..n-1 f(a+i*h) ) not using f(b)
or
area = (b-a)/(n+1) * ( sum i=0..n f(a+i*h) )
"h" has become the approximation to "dx" using "n" steps.
A larger value of "n" gives a smaller value of "h" and,
until round off error begins to grow, a better approximation.
Shown is n=9 indicated by red dots.
from QuadratureSum.java
Exact solution integral from 0 to 1 of 4(x-x^2) = 2/3 = 0.6666666
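The simple summation can be sketched in a few lines of Python (illustrative, not the actual QuadratureSum.java code):

```python
def sum_quad(f, a, b, n):
    """Left-endpoint simple summation:
    (b-a)/n * sum of f(a + i*h), i = 0..n-1, not using f(b)."""
    h = (b - a) / n
    return h * sum(f(a + i*h) for i in range(n))

# integral from 0 to 1 of 4(x - x^2); exact value is 2/3
area = sum_quad(lambda x: 4.0*(x - x*x), 0.0, 1.0, 1000)
print(area)   # approaches 2/3 as n grows
```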
Trapezoidal method
The Trapezoidal rule approximates the area between x and x+h
as h * (f(x)+f(x+h))/2 , base times average height.
Summing the areas we note that f(a) and f(b) are used once
while the other intermediate f(x) are used twice, thus:
area = (b-a)/n * ( (f(a)+f(b))/2 + sum i=1..n-1 f(a+i*h) )
note: i=0 is f(a) i=n is f(b) thus sum index range 1..n-1
h = (b-a)/n
cutting h in half generally cuts the error to one fourth for large n (the error is proportional to h^2)
Shown is n=8 regions based on 7 sums plus (f(a)+f(b))/2
from QuadratureTrap.java
trapezoide.py Trapezoidal Integration
output is:
trapezoide.py running
trap_int area under x*x from 1.0 to 2.0 = 2.335
trapezoide.py finished
or trapezoide.py3 Trapezoidal Integration
trapezoide.java Trapezoidal Integration
output is:
trapezoide.java running
trap_int area under x*x from 1.0 to 2.0 = 2.3350000000000004
trapezoide.java finished
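The trapezoidal sum is a few lines in Python; with n=10 it reproduces the 2.335 value shown above (the step count of the original run is an assumption inferred from that output):

```python
def trap_int(f, a, b, n):
    """Trapezoidal rule: h*((f(a)+f(b))/2 + sum of interior f values)."""
    h = (b - a) / n
    s = (f(a) + f(b)) / 2.0
    for i in range(1, n):
        s += f(a + i*h)
    return h * s

# x*x from 1.0 to 2.0 with n=10; exact answer is 7/3
print(trap_int(lambda x: x*x, 1.0, 2.0, 10))  # 2.335
```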
You do not need to use equal step sizes.
For better accuracy with trapezoidal integration,
use smaller step size where slope is largest.
Generally this requires more steps.
Better accuracy
In order to get better accuracy with fewer evaluations of
the function, f(x), we have found a way to choose the values
of x and to apply a weight, w(x) to each evaluation of f(x).
The integral is evaluated using:
area = sum i=1..n w(i)*f(x(i))
The w(i) and x(i) are computed using the Legendre polynomials
covered in the previous lecture. The numerical analysis shows
that using n evaluations we obtain accuracy about equal to
fitting the f(x(i)) with an nth order polynomial and accurately
computing the integral using that nth order polynomial.
Some values of weights w(i) and ordinates x(i), -1 < x < 1, are:
n=1:
x[1]= 0.0000000000000E+00, w[1]= 2.0000000000000E+00
n=2:
x[1]= -5.7735026918963E-01, w[1]= 1.0000000000000E+00
x[2]= 5.7735026918963E-01, w[2]= 1.0000000000000E+00
n=3:
x[1]= -7.7459666924148E-01, w[1]= 5.5555555555555E-01
x[2]= 0.0000000000000E+00, w[2]= 8.8888888888889E-01
x[3]= 7.7459666924148E-01, w[3]= 5.5555555555555E-01
n=4:
x[1]= -8.6113631159405E-01, w[1]= 3.4785484513745E-01
x[2]= -3.3998104358486E-01, w[2]= 6.5214515486255E-01
x[3]= 3.3998104358486E-01, w[3]= 6.5214515486255E-01
x[4]= 8.6113631159405E-01, w[4]= 3.4785484513745E-01
n=5:
x[1]= -9.0617984593866E-01, w[1]= 2.3692688505618E-01
x[2]= -5.3846931010568E-01, w[2]= 4.7862867049937E-01
x[3]= 0.0000000000000E+00, w[3]= 5.6888888888889E-01
x[4]= 5.3846931010568E-01, w[4]= 4.7862867049937E-01
x[5]= 9.0617984593866E-01, w[5]= 2.3692688505618E-01
n=6:
x[1]= -9.3246951420315E-01, w[1]= 1.7132449237916E-01
x[2]= -6.6120938646626E-01, w[2]= 3.6076157304814E-01
x[3]= -2.3861918608320E-01, w[3]= 4.6791393457269E-01
x[4]= 2.3861918608320E-01, w[4]= 4.6791393457269E-01
x[5]= 6.6120938646626E-01, w[5]= 3.6076157304814E-01
x[6]= 9.3246951420315E-01, w[6]= 1.7132449237916E-01
For a range from a to b, new_x[i] = a+(x[i]+1)*(b-a)/2
from QuadratureGau.java
and gauleg.java
Using the function gaulegf a typical integration could be:
double x[9], w[9];          /* for 8 points, 1-based indexing */
a = 0.5;                    /* integrate sin(x) from a to b */
b = 1.0;
n = 8;
gaulegf(a, b, x, w, n);     /* x's adjusted for a and b */
area = 0.0;
for(j=1; j<=n; j++)
  area = area + w[j]*sin(x[j]);
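The same pattern in Python, with NumPy's leggauss supplying the nodes and weights instead of gaulegf. Note that when shifting from [-1,1] to [a,b] the weights are also scaled by (b-a)/2:

```python
import math
import numpy as np

# nodes and weights on [-1,1]; shift to [a,b] as in the table above,
# new_x = a + (x+1)*(b-a)/2, and scale the weights by (b-a)/2
a, b, n = 0.5, 1.0, 8
x, w = np.polynomial.legendre.leggauss(n)
xs = a + (x + 1.0)*(b - a)/2.0
ws = w*(b - a)/2.0

area = sum(wi*math.sin(xi) for wi, xi in zip(ws, xs))
exact = math.cos(a) - math.cos(b)   # integral of sin from a to b
print(area, exact)
```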
The following programs, gauleg for Gauss Legendre integration,
computes the x(i) and w(i) for various values of n. The
integration is tested on a constant f(x)=1.0 and then on
integral sin(x) from x=0.5 to x=1.0
integral exp(x) from x=0.5 to x=5.0
integral ((x^x)^x)*(x*(log(x)+1)+x*log(x)) from x=0.5 to x=5.0
This is labeled "mess" in output files.
Note how the accuracy increases with increasing values of n,
then, no further accuracy is accomplished with larger n.
Also note, the n where best numerical accuracy is achieved
is a far smaller n than where round off error would be
significant.
Downloadable code in C, Python, Fortran, Java, Ada and Matlab
Multidimensional integration extends easily by using
gaulegf(ax, bx, xx, wx, nx);  /* x's adjusted for ax and bx */
gaulegf(ay, by, yy, wy, ny);  /* y's adjusted for ay and by */
volume = 0.0;
for(i=1; i<=nx; i++)
  for(j=1; j<=ny; j++)
    volume = volume + wx[i]*wy[j]*f(xx[i],yy[j]);
and for three dimensions:
gaulegf(ax, bx, xx, wx, nx);  /* x's adjusted for ax and bx */
gaulegf(ay, by, yy, wy, ny);  /* y's adjusted for ay and by */
gaulegf(az, bz, zz, wz, nz);  /* z's adjusted for az and bz */
volume4d = 0.0;
for(i=1; i<=nx; i++)
  for(j=1; j<=ny; j++)
    for(k=1; k<=nz; k++)
      volume4d = volume4d + wx[i]*wy[j]*wz[k]*f(xx[i],yy[j],zz[k]);
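A two dimensional sketch of the same idea in Python, with NumPy's leggauss standing in for gaulegf:

```python
import numpy as np

def gauss_nodes(a, b, n):
    """Gauss-Legendre nodes and weights shifted from [-1,1] to [a,b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return a + (x + 1.0)*(b - a)/2.0, w*(b - a)/2.0

# double integral of f(x,y) = x*y over [0,1] x [0,2];
# exact value is (1/2)*(2) = 1
xx, wx = gauss_nodes(0.0, 1.0, 4)
yy, wy = gauss_nodes(0.0, 2.0, 4)
volume = sum(wx[i]*wy[j]*xx[i]*yy[j]
             for i in range(4) for j in range(4))
print(volume)
```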
Volume code as shown in:
int2d.c, int2d_c.out
Additional reference books include:
"Handbook of Mathematical Functions" by Abramowitz and Stegun
A classic reference book on numerical integration and differentiation.
"Numerical Recipes in Fortran" by Press, Teukolsky, Vetterling and Flannery
There is also a "Numerical Recipes in C", yet in my copy there are
a number of errors in the C code due to poor translation from Fortran.
Many very good numerical codes for general purpose and special purposes.
Integration of functions over triangles
An impressive example of quadrature over triangles is shown
by using Gauss Legendre integration coordinates rather than
splitting a triangle into smaller congruent triangles.
The routine "tri_split" converts a single triangle into four
congruent triangles that exactly cover the initial triangle.
Linear quadrature then uses just the center point of the
split, four equal area, triangles.
triquad.java triangle quadrature
test_triquad.java test
test_triquad.out results, see last two sets
Reducing the step size improves integration accuracy.
In two or more dimensions, reducing area, volume, etc.
improves integration accuracy. Two sets of triangle
splitting are shown below. Using higher order integration
provides more integration accuracy than smaller step size,
smaller area, volume, etc.
From tri_split.java
and test_tri_split.java
These will appear later in the solution of
partial differential equations using the Finite Element Method.
Scan the curve you want to digitize, sample curves listed below.
Compile Digitize.java
Run java Digitize your.jpg > your.out
Edit your.out to suit your needs. Output is scaled to your first 3 points.
(0,0) (1,0) (0,1)
Additional binary graphics files are:
pi.gif, curve.jpg, c6thrust.jpg, chess2.png
Digitize.out some output from curve.jpg
When working with calculus, you may encounter the "divergence theorem"
For a vector function F in a volume V with surface S
integrate over volume (∇·F) dV =
integrate over surface (F·normal) dS
Just for fun, I programmed a numerical test to check the theorem
for one specific case: program and results of run
divergence_theorem.c, divergence_theorem_c.out
The general form of the trapezoidal integration.
a = integral y(x) dx over a set of increasing values of x, x_1 to x_n,
a = sum i=1 to i=n-1 of (x_i+1 - x_i) * (y(x_i) + y(x_i+1))/2
The (x_i+1 - x_i) is a variable version of h.
For best accuracy, pick x values where slope is changing,
in general use dx smaller for larger slope.
HW3 is assigned
When all else fails, there is adaptive numerical quadrature.
This is clever because the integration adjusts to the function
being integrated. And, yes, it can have problems.
The basic principle is to divide up the integration into
intervals. Use two methods of integration in each interval.
If the two methods do not agree in some interval, divide that
interval in half, and repeat. The total integral is the sum
of the integrals of all the intervals.
Consider integrating the function y = 1/x from 0.01 to 2.0,
with a high point at y=100 with a very steep slope and
a low relatively flat area from y=1.0 to y=0.5.
Note that the analytic solution is ln(2.0)-ln(0.01).
Do not try integration from 0 to 2, the integral is infinite.
The integral from 0.1 to 1 of 1/x = 2.302585093
The integral from 0.01 to 0.1 of 1/x = 2.302585093
The integral from 0.001 to 0.01 of 1/x = 2.302585093
Of course, ln(1)-ln(1/10) = 0 + ln(10) = 2.302585093
Thus, as we integrate closer and closer to zero, each factor
of 10 only adds 2.302585093 to the integral.
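A one-line check of the decade claim:

```python
import math

# the integral of 1/x over any decade, e.g. 0.1 to 1, is ln(10)
decade = math.log(1.0) - math.log(0.1)
print(decade)  # 2.302585...
```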
Here is a simple implementation of adaptive quadrature, in "C"
/* aquad.c adaptive quadrature numerical integration */
/* for a desired accuracy on irregular functions. */
/* define the function to be integrated as: */
/* double f(double x) */
/* { */
/* // compute value */
/* return value; */
/* } */
/* */
/* integrate from xmin to xmax */
/* approximate desired absolute error in all intervals, eps */
/* accuracy absolutely not guaranteed */
#undef abs
#define abs(x) ((x)<0.0?(-(x)):(x))
double aquad(double f(double x), double xmin, double xmax, double eps)
{
double area, temp, part, h;
double err;
int nmax = 2000;
double sbin[2000]; /* start of bin */
double ebin[2000]; /* end of bin */
double abin[2000]; /* area of bin , sum of these is area */
int fbin[2000]; /* flag, 1 for this bin finished */
int i, j, k, n, nn, done;
int kmax = 20; /* maximum number of times to divide a bin */
n=32; /* initial number of bins */
h = (xmax-xmin)/(double)n;
for(i=0; i<n; i++)
{
sbin[i] = xmin+i*h;
ebin[i] = xmin+(i+1)*h;
fbin[i] = 0;
}
k = 0;
done = 0;
nn = n; /* next available bin */
while(!done)
{
done = 1;
k++;
if(k>=kmax) break; /* quit if more than kmax subdivisions */
area = 0.0;
for(i=0; i<n; i++)
{
if(fbin[i]==1) /* this interval finished */
{
area = area + abin[i]; /* accumulate total area each pass */
continue;
}
temp = f((sbin[i]+ebin[i])/2.0); /* two integration methods */
part = f((3.0*sbin[i]+ebin[i])/4.0);
part = part + f((sbin[i]+3.0*ebin[i])/4.0);
abin[i] = (part+2.0*temp)*(ebin[i]-sbin[i])/4.0;
area = area + abin[i];
err = abs(temp-part/2.0);
if(err*1.414 < eps) /* heuristic */
{
fbin[i] = 1; /* this interval finished */
}
else
{
done = 0; /* must keep dividing */
if(nn>=nmax) /* out of space, quit */
{
done = 1;
for(j=i+1; j<n; j++) area = area + abin[j];
break; /* result not correct */
}
else /* divide interval into two halves */
{
sbin[nn] = (sbin[i]+ebin[i])/2.0;
ebin[nn] = ebin[i];
fbin[nn] = 0;
ebin[i] = sbin[nn];
nn++;
}
}
} /* end for i */
n = nn;
} /* end while */
return area;
} /* end aquad.c */
The results of integrating 1/x for various xmin to xmax are:
(This output linked with aquadt.c that has extra printout.)
test_aquad.c testing aquad.c 1/x eps=0.001
75 intervals, 354 funeval, 6 divides, small=0.101855, maxerr=0.000209298
xmin=0.1, xmax=2, area=2.99538, exact=2.99573, err=-0.000347413
261 intervals, 1470 funeval, 11 divides, small=0.0100607, maxerr=0.000228422
xmin=0.01, xmax=2, area=5.29793, exact=5.29832, err=-0.000390239
847 intervals, 4986 funeval, 16 divides, small=0.00100191, maxerr=0.000226498
xmin=0.001, xmax=2, area=7.6005, exact=7.6009, err=-0.000399734
2000 intervals, 9810 funeval, 18 divides, small=0.000100238, maxerr=0.0141083
xmin=0.0001, xmax=2, area=9.78679, exact=9.90349, err=-0.116702
2000 intervals, 9444 funeval, 17 divides, small=1.04768e-05, maxerr=49.455
xmin=1e-05, xmax=2, area=11.2703, exact=12.2061, err=-0.935768
1.0/((x-0.3)*(x-0.3)+0.01) + 1.0/((x-0.9)*(x-0.9)+0.04) -6
387 intervals, 2226 funeval, 6 divides, small=0.00390625, maxerr=0.000595042
xmin=0, xmax=1, area=29.8582, exact=29.8583, err=-0.000113173
test_aquad.c finished
Notice how the method failed with xmin=0.0001, maxed out on storage.
Yes, a better data structure would be a tree or linked list.
It can be made recursive yet that may not be a good idea.
(Programs die from stack overflow!)
On one case the adaptive quadrature used the bins shown in the figure:
On some bigger problems, I had run starting with n = 8;
This missed the area where the slope was large.
Just like using the steepest descent method for finding roots,
you may hit a local flat area and miss the part that should
be adaptive.
The above was to demonstrate the "bins" and used way too much
memory. The textbook has pseudo code on Page 301 that uses
much less storage. A modified version of that code is:
/* aquad3.c from book page 301, with modifications */
#undef abs
#define abs(a) ((a)<0.0?(-(a)):(a))
static double stack[100][7]; /* a place to store/retrieve */
static int top, maxtop; /* top points to where stored */
void store(double s0, double s1, double s2, double s3, double s4,
double s5, double s6)
{
stack[top][0] = s0;
stack[top][1] = s1;
stack[top][2] = s2;
stack[top][3] = s3;
stack[top][4] = s4;
stack[top][5] = s5;
stack[top][6] = s6;
}
void retrieve(double* s0, double* s1, double* s2, double* s3, double* s4,
double* s5, double* s6)
{
*s0 = stack[top][0];
*s1 = stack[top][1];
*s2 = stack[top][2];
*s3 = stack[top][3];
*s4 = stack[top][4];
*s5 = stack[top][5];
*s6 = stack[top][6];
} /* end retrieve */
double Sn(double F0, double F1, double F2, double h)
{
  return h*(F0 + 4.0*F1 + F2)/3.0; /* error term (1/90) h^5 f^(4)(c) */
} /* end Sn */
double RS(double F0, double F1, double F2, double F3,
double F4, double h)
{
return h*(14.0*F0 +64.0*F1 + 24.0*F2 + 64.0*F3 + 14.0*F4)/45.0;
/* error term (8/945) h^7 f^(6)(c) */
} /* end RS */
double aquad3(double f(double x), double xmin, double xmax, double eps)
{
double a, b, c, d, e;
double Fa, Fb, Fc, Fd, Fe;
double h1, h2, hmin;
double Sab, Sac, Scb, S2ab;
double tol; /* eps */
double val, value;
maxtop = 99;
top = 0;
value = 0.0;
tol = eps;
a = xmin;
b = xmax;
h1 = (b-a)/2.0;
c = a + h1;
Fa = f(a);
Fc = f(c);
Fb = f(b);
Sab = Sn(Fa, Fc, Fb, h1);
store(a, Fa, Fc, Fb, h1, tol, Sab);
top = 1;
while(top > 0)
{
top--;
retrieve(&a, &Fa, &Fc, &Fb, &h1, &tol, &Sab);
c = a + h1;
b = a + 2.0*h1;
h2 = h1/2;
d = a + h2;
e = a + 3.0*h2;
Fd = f(d);
Fe = f(e);
Sac = Sn(Fa, Fd, Fc, h2);
Scb = Sn(Fc, Fe, Fb, h2);
S2ab = Sac + Scb;
if(abs(S2ab-Sab) < tol)
{
val = RS(Fa, Fd, Fc, Fe, Fb, h2);
value = value + val;
}
else
{
h1 = h2;
tol = tol/2.0;
store(a, Fa, Fd, Fc, h1, tol, Sac);
top++;
store(c, Fc, Fe, Fb, h1, tol, Scb);
top++;
}
if(top>=maxtop) break;
} /* end while */
return value;
} /* end aquad3 */
The same test cases, using aquad3t.c, gave the following result:
test_aquad3.c testing aquad3.c 1/x eps=0.001
aquad3 hitop=3, funeval=45, hmin=0.0148437
xmin=0.1, xmax=2, area=2.99573, exact=2.99573, err=9.25606e-07
aquad3 hitop=3, funeval=109, hmin=0.00048584
xmin=0.01, xmax=2, area=5.29832, exact=5.29832, err=1.50725e-06
aquad3 hitop=4, funeval=221, hmin=3.05023e-05
xmin=0.001, xmax=2, area=7.6009, exact=7.6009, err=1.65548e-06
aquad3 hitop=5, funeval=425, hmin=1.90725e-06
xmin=0.0001, xmax=2, area=9.90349, exact=9.90349, err=1.66753e-06
aquad3 hitop=6, funeval=777, hmin=1.19209e-07
xmin=1e-05, xmax=2, area=12.2061, exact=12.2061, err=1.66972e-06
1.0/((x-0.3)*(x-0.3)+0.01) + 1.0/((x-0.9)*(x-0.9)+0.04) -6
aquad3 hitop=5, funeval=121, hmin=0.0078125
xmin=0, xmax=1, area=29.8583, exact=29.8583, err=-1.6204e-06
test_aquad3.c finished
More accurate integration by yet another method:
This uses equally spaced points and gets more accuracy
by using more values of the function being integrated,
F0, F1, F2, ... (the first aquad above used h and 2h,
the aquad3 used 2h and 4h)
For case h, area = h(F0 + F1)/2
error = 1/12 h^3 f''(c)
For case 2h, area = 2h(F0 + 4F1 + F2)/6
error = 1/90 h^5 f''''(c)
For case 3h, area = 3h(F0 + 3F1 + 3F2 + F3)/8
error = 3/80 h^5 f''''(c)
For case 4h, area = 4h(14F0 + 64F1 + 24F2 + 64F3 + 14F4)/180
error = 8/945 h^7 f^(6)(c) sixth derivative, c in the interval
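A small Python check of the cases above on one panel (sin on [0,1]); the error shrinks sharply as the order of the rule increases:

```python
import math

def trap(f, a, b):      # case h
    h = b - a
    return h*(f(a) + f(b))/2.0

def simpson(f, a, b):   # case 2h
    h = (b - a)/2.0
    return h*(f(a) + 4.0*f(a + h) + f(b))/3.0

def boole(f, a, b):     # case 4h
    h = (b - a)/4.0
    F = [f(a + i*h) for i in range(5)]
    return 4.0*h*(14.0*F[0] + 64.0*F[1] + 24.0*F[2]
                  + 64.0*F[3] + 14.0*F[4])/180.0

exact = math.cos(0.0) - math.cos(1.0)   # integral of sin on [0,1]
for rule in (trap, simpson, boole):
    err = abs(rule(math.sin, 0.0, 1.0) - exact)
    print(rule.__name__, err)
```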
Then, the MatLab version:
% test_aquad.m 1/x eps = 0.001
function test_aquad
fid = fopen('test_aquad_m.out', 'w');
eps=0.001;
fprintf(fid, 'test_aquad.m running eps=%g\n', eps);
sprintf('generating test_aquad_m.out \n')
xmin = 0.1;
xmax = 2.0;
q = quad(@f,xmin,xmax,eps);
e = log(xmax)-log(xmin);
fprintf(fid,'xmin=%g, mmax=%g, area=%g, exact=%g, err=%g \n', xmin, xmax, e, q, e-q);
xmin = 0.01;
xmax = 2.0;
q = quad(@f,xmin,xmax,eps);
e = log(xmax)-log(xmin);
fprintf(fid,'xmin=%g, mmax=%g, area=%g, exact=%g, err=%g \n', xmin, xmax, e, q, e-q);
xmin = 0.001;
xmax = 2.0;
q = quad(@f,xmin,xmax,eps);
e = log(xmax)-log(xmin);
fprintf(fid,'xmin=%g, mmax=%g, area=%g, exact=%g, err=%g \n', xmin, xmax, e, q, e-q);
xmin = 0.0001;
xmax = 2.0;
q = quad(@f,xmin,xmax,eps);
e = log(xmax)-log(xmin);
fprintf(fid,'xmin=%g, mmax=%g, area=%g, exact=%g, err=%g \n', xmin, xmax, e, q, e-q);
xmin = 0.000001;
xmax = 2.0;
q = quad(@f,xmin,xmax,eps);
e = log(xmax)-log(xmin);
fprintf(fid,'xmin=%g, mmax=%g, area=%g, exact=%g, err=%g \n', xmin, xmax, e, q, e-q);
fprintf(fid,'1.0/((x-0.3)*(x-0.3)+0.01) + 1.0/((x-0.9).*(x-0.9)+0.04) -6\n');
xmin = 0.0;
xmax = 1.0;
q = quad(@f1,xmin,xmax,eps);
e = 29.8583;
fprintf(fid,'xmin=%g, mmax=%g, area=%g, exact=%g, err=%g \n', xmin, xmax, e, q, e-q);
fprintf(fid,'test_aquad.m finished\n');
return;
function y=f1(x)
y=1.0 ./ ((x-0.3) .*(x-0.3)+0.01) + 1.0 ./ ((x-0.9).*(x-0.9)+0.04) -6.0;
return
end
function y=f(x)
y = 1.0 ./ x;
return
end
end
with results:
test_aquad.m running eps=0.001
xmin=0.1, mmax=2, area=2.99573, exact=2.99597, err=-0.000241595
xmin=0.01, mmax=2, area=5.29832, exact=5.29899, err=-0.000668946
xmin=0.001, mmax=2, area=7.6009, exact=7.60218, err=-0.0012822
xmin=0.0001, mmax=2, area=9.90349, exact=9.90415, err=-0.000660689
xmin=1e-06, mmax=2, area=14.5087, exact=14.5101, err=-0.00148357
1.0/((x-0.3)*(x-0.3)+0.01) + 1.0/((x-0.9).*(x-0.9)+0.04) -6
xmin=0, mmax=1, area=29.8583, exact=29.8583, err=-8.17859e-06
test_aquad.m finished
The last case used the curve
y = 1.0/((x-0.3)*(x-0.3)+0.01) + 1.0/((x-0.9)*(x-0.9)+0.04) -6
that looks like:
Passing a function as an argument in Python is easy:
# test_aquad3.py
from aquad3 import aquad3
xmin = 1.0
xmax = 2.0
eps = 0.001
def f(x):
return x*x
print "test_aquad3.py running"
area=aquad3(f, xmin, xmax, eps) # aquad3 compiled alone
print "aquad3(f, xmin, xmax, eps) =", area
With Java, some extra is needed to pass a user function to
a pre written integration class.
You need an interface to the function:
lib_funct.java
The user then provides an implementation to the function
to pass to the integration method and may provide values:
pass_funct.java
The pre-written integration, a trapezoidal method example,
has the numeric code, yet does not know the function (yet):
main_funct.java, main_funct.out
Then, a hack of test_aquad.c to test_aquad.java using just x^2
test_aquad.java, test_aquad_java.out
Another example with a two parameter function is:
(The Runge-Kutta method of integration is covered in Lecture 26.
This just shows that the interface can be more general.)
You need an interface to the function:
RKlib.java
The user then provides an implementation to the function
to pass to the integration method and may provide values:
RKuser.java
The pre-written integration method example
has the numeric code, yet does not know the function (yet):
RKmain.java, RKmain.out
Go over WEB pages Lecture 1 through 8.
Work the homework 1 and homework 2.
Open book, open notes, exam.
Multiple choice and short answer.
Read the instructions.
Follow the instructions, else lose points.
Read the question carefully.
Answer the question that is asked, not the question you want to answer.
No programming required.
Some code may appear as part of a question.
Example questions and answers will be presented in class.
Some things you should know for the exam:
IEEE float has 6 or 7 decimal digits precision
IEEE double has 15 or 16 decimal digits precision
Adding or subtracting numbers of large differences in magnitude
causes precision loss, known as roundoff error.
RMS error is root mean square error, a reasonable intuitive measure
Maximum error may be the most important measure for some applications
Average error is not very useful; it is typically the smallest of these error measures.
The machine "epsilon" for a specific floating point arithmetic is
the smallest positive machine number added to exactly 1.0 that
results in a sum that is not exactly 1.0, actually greater than 1.0 .
Be careful reading programs that use "epsilon" or "eps". Sometimes
it may be the "machine epsilon" and other times it may be a
tolerance on how close the something should be.
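The machine epsilon definition above can be computed directly:

```python
import sys

# halve eps until adding eps/2 to 1.0 no longer changes it;
# the remaining eps is the machine epsilon
eps = 1.0
while 1.0 + eps/2.0 != 1.0:
    eps = eps/2.0
print(eps)                     # 2.220446049250313e-16
print(sys.float_info.epsilon)  # same value for IEEE double
```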
Most languages include the elementary functions sin and cos, yet many
do not include a complete set of reciprocal functions such as cosecant
or inverse hyperbolic functions such as inverse hyperbolic cotangent.
If given only a natural logarithm function, log2(x) can be computed
as log(x)/log(2.0) and log10(x) as log(x)/log(10.0)
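For example, in Python with only math.log:

```python
import math

x = 1000.0
log2x = math.log(x)/math.log(2.0)    # log base 2
log10x = math.log(x)/math.log(10.0)  # log base 10
print(log2x, log10x)                 # about 9.9658 and 3.0
```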
Homework 1 used a simple approximation for integration:
position s = integral for time=0 to time=t of velocity(t) dt
as s = s_n where s_i = s_i-1 + v_i-1 * dt, i=1..n, with n*dt=t
In order to guarantee a mathematical unique solution to a set of
linear simultaneous equations, two requirements are needed:
There must be the same number of equations as unknown variables and
the equations must be linearly independent.
For two equations to be linearly independent, there must not be
a constant p such that equation1 * p = equation2
The numerical solution of linear simultaneous equations can fail even
though the mathematical condition for a unique solution exists.
Surprisingly, a good numerical solution usually results from
linear simultaneous equations based on random numbers.
The Gauss Jordan method of solving simultaneous equations A x = y produces
the solution x by performing operations on A y, reducing A to the
identity matrix such that y is replaced by x at the end of the computation
Improved numerical accuracy is obtained in the solution of linear
systems of equations by interchanging rows such that the largest
magnitude element in the column becomes the diagonal pivot element.
Some very large systems of equations can be solved accurately.
The system of linear equations is solved by the same method when
the equations have complex values.
It is better to directly solve simultaneous equations rather than
invert the "A" matrix and multiply by the "y" vector to find "x".
There are small matrices that are very hard to invert accurately.
Matrix inversion uses very similar computation to direct solving
of simultaneous equations. The matrix times vector, A * y, introduces
more error.
Given data, a least square fit of the data minimizes the RMS error
for a given degree polynomial at the data points. Between the
data points there may be large variations.
When trying to fit large degree polynomials, there may be numerical
errors such that the approximation becomes worse.
A polynomial of degree n, will exactly fit n+1 points.
It may have extreme humps and valleys between the points.
Mathematically, a least square fit of n data points with an
(n-1) degree polynomial should fit the points exactly. The numerical
computation may not give this result.
A least square fit may use terms: powers, sine, cosine or any other
smooth function of the data. The basic requirement is that
all functions must be linearly independent of each other.
A least square fit uses a solution of simultaneous equations
Least square fit is the easiest method of fitting data that
is given with unequal spacing.
A polynomial of degree n has n+1 coefficients. Thus n+1 data
points can be exactly fit by an n degree polynomial.
A polynomial of degree n has exactly n roots (possibly with multiplicity)
Given roots r1, r2, ... rn a polynomial with these roots is created by
(x-r1)*(x-r2)* ... *(x-rn)
Finding a root, r, divide the polynomial by (x-r) to reduce one degree.
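Both directions can be sketched in a few lines of Python (helper names are illustrative):

```python
def poly_from_roots(roots):
    """Coefficients c0..cn (lowest power first) of (x-r1)*...*(x-rn)."""
    coef = [1.0]
    for r in roots:
        shifted = [0.0] + coef                  # multiply by x
        scaled = [-r*c for c in coef] + [0.0]   # multiply by -r
        coef = [s + t for s, t in zip(shifted, scaled)]
    return coef

def deflate(coef, r):
    """Synthetic division by (x - r), reducing the degree by one."""
    out = [0.0]*(len(coef) - 1)
    carry = 0.0
    for i in range(len(coef) - 1, 0, -1):
        carry = coef[i] + carry*r
        out[i - 1] = carry
    return out

print(poly_from_roots([1.0, 2.0]))     # [2.0, -3.0, 1.0] = x^2 - 3x + 2
print(deflate([2.0, -3.0, 1.0], 2.0))  # [-1.0, 1.0]      = x - 1
```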
Newton's method x_next = x - P(x)/P'(x) will have quadratic
convergence except near multiple roots or derivative P'(x)
small. An error of 1/16 will reduce to an error of 1/64
on the next iteration.
Horner's method of evaluating polynomials provides accuracy and
efficiency by never directly computing high powers of the variable.
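Horner evaluation also delivers the derivative needed by Newton's method; a sketch:

```python
def horner(coef, x):
    """Evaluate p(x) and p'(x) together with Horner's method.
    coef holds c0, c1, ..., cn for c0 + c1*x + ... + cn*x^n."""
    p = dp = 0.0
    for c in reversed(coef):
        dp = dp*x + p      # derivative accumulates one step behind
        p = p*x + c
    return p, dp

def newton_root(coef, x, steps=50):
    """Newton's method x <- x - p(x)/p'(x) on a polynomial."""
    for _ in range(steps):
        p, dp = horner(coef, x)
        if dp == 0.0:
            break          # derivative too small, give up
        x -= p/dp
    return x

# p(x) = x^2 - 2 has roots +-sqrt(2); start near 1
print(newton_root([-2.0, 0.0, 1.0], 1.0))  # 1.41421356...
```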
Given the numerical coefficients of a polynomial, the numerical
coefficients of the integral, derivative are easily computed.
Given the numerical coefficients of two polynomials, the
sum, difference, product and ratio are easily computed.
Any functions that can be continuously differentiated can be
approximated by a Taylor series expansion. It is called
"truncation error" when evaluating a Taylor series with a
specific number of terms.
Orthogonal polynomials are used to fit data and perform numerical
integration. Examples of orthogonal polynomials include:
Legendre, Chebyshev, Laguerre; related basis functions include the Fourier (trigonometric) and Lagrange (interpolating) families.
Chebyshev polynomials are used to approximate smooth functions
while minimizing the maximum error at the given data points.
Legendre polynomials are used to approximate smooth functions
while minimizing RMS error at the given data points.
Lagrange polynomials are used to approximate smooth functions
while exactly fitting the given data points.
For non smooth functions, including square waves and pulses,
a Fourier series approximation may be used.
The Fejer approximation can be used to eliminate many oscillations
in the Fourier approximation at the expense of a less accurate fit.
Numerical integration is typically called numerical quadrature.
The Trapezoidal integration method requires a small step size
and many function evaluations to get accuracy.
I used uniform step size and the method is easy to code:
area = (b-a)/n * ( (f(a)+f(b))/2 + sum i=1..n-1 f(a+i*h) )
In general a smaller the step size, usually called "h", will
result in a more accurate value for the integral. This
implies that more evaluations of the function to be
integrated are needed.
The Gauss Legendre integration of smooth functions provides
good accuracy with a minimum number of function evaluations.
The weights and ordinates are needed for the summation:
area = sum i=1..n w[i]*f(x[i]);
An adaptive quadrature integration method is needed to
get reasonable accuracy of functions with large derivatives
or large variation in derivatives.
Adaptive quadrature uses a variable step size and at least
two methods of integration to determine when the desired
accuracy is achieved. This method, as with all integration
methods, can fail to give accurate results.
Two dimensional and higher dimensional integrals can use simple
extensions of one dimensional numerical quadrature methods, except
adaptive quadrature.
The difference between a Taylor Series and a Maclaurin Series
is that the Maclaurin Series always expands a function about
zero.
The two equations for Standard Deviation are mathematically
equivalent, yet may give different numerical results.
sigma = sqrt((sum(x^2)- sum(x)^2/n)/n) = sqrt(sum((x-mean)^2)/n)
For n samples of x with a mean of mean.
RMS error is computed from a set of n error measurements using
rms_error = sqrt(sum(err^2)/n)
Note that this is the same value as Standard Deviation when
the mean value of the errors is zero. Thus, considered a
reasonable intuitive measure of error.
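A numeric check of the two forms and of the RMS relation, on a small illustrative data set:

```python
import math

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(xs)
mean = sum(xs)/n

# two mathematically equivalent forms of the standard deviation
sigma1 = math.sqrt((sum(x*x for x in xs) - sum(xs)**2/n)/n)
sigma2 = math.sqrt(sum((x - mean)**2 for x in xs)/n)
print(sigma1, sigma2)   # both 2.0 for this data

# RMS of a set of errors; equals sigma when the mean error is zero
errs = [x - mean for x in xs]
rms = math.sqrt(sum(e*e for e in errs)/n)
print(rms)              # also 2.0
```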
Multiple choice and short answer.
No help from other students, that is cheating!
The exam covers:
lectures 1 through 9
homework 1 and 2
sample quiz with sample answers for submit
Then on linux.gl.umbc.edu
based on last name a-j a, m-r b, s-z c
cp /afs/umbc.edu/users/s/q/squire/pub/download/q1_455a.doc
cp /afs/umbc.edu/users/s/q/squire/pub/download/q1_455b.doc
cp /afs/umbc.edu/users/s/q/squire/pub/download/q1_455c.doc
Use Microsoft Word or libreoffice to edit.
submit cs455 quiz1 q1_455?.doc ? is a, b, or c
Download other files using the Linux cp command:
Get any of my files you need on linux.gl.umbc.edu
cp /afs/umbc.edu/users/s/q/squire/pub/download/some_file .
Be careful saving a file from a web page. You may have
to fix broken lines.
quiz1 instructions for download:
quiz1 on homework page
There may be times when you have to do numerical computation
on complex values (scalars, vectors, or matrices).
If you are programming in Fortran, no problem, the types
complex and complex*16 or double complex are built in.
In Ada 95, the full set of complex arithmetic and functions
come with the compiler as packages. MatLab and Python use
complex as needed, automatically (e.g. output of FFT).
In other programming languages you need to know how to do
complex computation and how to choose the appropriate
numerical method.
                                           older
         C       Java    Fortran 95        Fortran     Ada 95        MATLAB     Python
         ------  ------  ----------------  ----------  ------------  ---------  ---------
complex
 32 bit  'none'  'none'  complex           complex     complex       N/A        N/A
 64 bit  'none'  'none'  double complex    complex*16  long_complex  'default'  'default'
'none' means not provided by the language (may be available as a library)
N/A means not available, you get the default.
Background:
A complex number is stored in the computer as an ordered pair
of floating point numbers, the real part and the imaginary part.
The floating point numbers may be single or double precision as
needed by the application. It is suggested that you use double
precision as the default.
These are called Cartesian coordinates. Polar coordinates are
seldom used for computation, yet, are usually made available by
a conversion function to magnitude and angle. Note that
numerical analysts call the angle by the name "argument".
The naming convention depends on the programming language and
personal (or customer) choice.
Basic complex arithmetic is covered first and then complex
functions: sin, cos, tan, sinh, cosh, tanh, exp, log, sqrt, pow
are covered in the next lecture.
The simplest need for complex numbers is solving for the roots
of the polynomial equation x^2 + 1 = 0 .
There must be exactly two roots and they are sqrt(-1) and
-sqrt(-1) that are named "i" and "-i".
The quadratic equation for finding roots of a second order
polynomial should use the complex sqrt, even for real coefficients
a, b, and c, because the roots may be complex.
given: a x^2 + b x + c = 0 find the roots
             -b +/- sqrt(b^2 - 4 a c)
solution x = ------------------------
                      2 a
that computes complex roots if 4 a c > b^2
Of course, the equation correctly computes the roots when
a, b, and c are complex numbers.
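In Python the cmath version of sqrt handles both cases (a sketch):

```python
import cmath

def quad_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 using the complex sqrt,
    so complex roots appear when b^2 < 4*a*c."""
    d = cmath.sqrt(b*b - 4.0*a*c)
    return (-b + d)/(2.0*a), (-b - d)/(2.0*a)

print(quad_roots(1.0, 0.0, 1.0))   # roots of x^2 + 1 are i and -i
print(quad_roots(1.0, -3.0, 2.0))  # real roots 2 and 1
```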
Complex Arithmetic
i=sqrt(-1) indicates imaginary part. a and b are real numbers.
(a+ib) is just two real numbers with b interpreted as imaginary.
Complex numbers are added by adding the real and imaginary parts:
(a+ib) + (c+id) = (a+c) + i(b+d)
Similarly, subtraction:
(a+ib) - (c+id) = (a-c) + i(b-d)
The multiplication of two complex numbers is:
(a+ib)*(c+id) = (a*c - b*d) + i(b*c + a*d)
The division of two complex numbers is:
r = c*c + d*d
(a+ib)/(c+id) = (a*c + b*d)/r + i(b*c - a*d)/r
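These rules carry into any language that has only real arithmetic; a Python sketch, checked against the built-in complex type:

```python
def cmul(a, b, c, d):
    """(a+ib)*(c+id) as a pair of reals."""
    return (a*c - b*d, b*c + a*d)

def cdiv(a, b, c, d):
    """(a+ib)/(c+id) as a pair of reals."""
    r = c*c + d*d
    return ((a*c + b*d)/r, (b*c - a*d)/r)

# check against Python's built-in complex type
z, w = complex(1.0, 2.0), complex(3.0, -4.0)
assert complex(*cmul(1.0, 2.0, 3.0, -4.0)) == z*w
assert abs(complex(*cdiv(1.0, 2.0, 3.0, -4.0)) - z/w) < 1e-15
print("pair arithmetic matches built-in complex")
```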
Complex numbers define a plane and are typically Cartesian coordinates.
Polar coordinates also define a plane in terms of radius, r and angle θ.
x = r * cos(θ) r = sqrt(x*x+y*y)
y = r * sin(θ) θ = arctan(y/x) or atan2
Other coordinate systems are:
Cylindrical Coordinates
Cylindrical coordinates in terms of radius r, angle θ and height z.
x = r * cos(θ) r = sqrt(x*x+y*y)
y = r * sin(θ) θ = arctan(y/x) or atan2
z = z z = z
Spherical Coordinates
Spherical coordinates in terms of radius r, angles θ and φ
x = r * sin(φ) * cos(θ) r = sqrt(x*x+y*y+z*z)
y = r * sin(φ) * sin(θ) θ = arctan(y/x) or atan2
z = r * cos(φ) φ = arctan(sqrt(x*x+y*y)/z)
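A round-trip check of the spherical formulas, using atan2, which avoids the quadrant problems of arctan(y/x):

```python
import math

def to_cartesian(r, theta, phi):
    """Spherical (r, theta, phi) to Cartesian, as defined above."""
    return (r*math.sin(phi)*math.cos(theta),
            r*math.sin(phi)*math.sin(theta),
            r*math.cos(phi))

def to_spherical(x, y, z):
    r = math.sqrt(x*x + y*y + z*z)
    theta = math.atan2(y, x)                     # angle in x-y plane
    phi = math.atan2(math.sqrt(x*x + y*y), z)   # angle from z axis
    return (r, theta, phi)

r, theta, phi = 2.0, 0.5, 1.0
x, y, z = to_cartesian(r, theta, phi)
print(to_spherical(x, y, z))  # recovers (2.0, 0.5, 1.0)
```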
Toroidal Coordinates 1
The five independent variables are a, σ, θ, φ and z0
denom = cosh(θ)-cos(σ)
x = a * sinh(θ) * cos(φ) / denom
y = a * sinh(θ) * sin(φ) / denom
z = a * sin(σ) / denom optional + z0
-π < σ < π θ > 0 0 < φ < 2π a > 0
φ = arctan(y/x)
temporaries r1 = sqrt(x^2 + y^2)
d1 = sqrt((r1+a)^2 + z^2)
d2 = sqrt((r1-a)^2 + z^2)
θ = ln(d1/d2)
σ = arccos((d1^2+d2^2-4*a^2)/(2*d1*d2))
If you have code that works for double, extending to complex
may be easy.
Python evaluating a polynomial on "real" coefficients
test_polyfit.py polyfit in Python, needs numpy
from numpy import array
from numpy import polyfit
from numpy import polyval
import pylab
def f(x):
return 5.0 + 4.0*x + 3.0*x*x + 2.0*x*x*x + 1.0*x*x*x*x
print "test_polyfit.py a x,y,5) "
print "fit 5.0 + 4.0*x + 3.0*x*x + 2.0*x*x*x + 1.0*x*x*x*x"
xx = [0.0 for i in range(5)]
yy = [0.0 for i in range(5)]
for i in range(5):
xx[i] = 0.1*(i+1)
yy[i] = f(xx[i]) # function we will fit
print "i=",
print i,
print " ,xx=",
print xx[i],
print " ,yy=",
print yy[i]
print "p=polyfit(xx,yy,4) for 5 points"
p=polyfit(xx,yy,4) # 5 input values, 4th order
print "polyfit coefficients"
print p
print "backwards? largest power first, expected 5, 4, 3, 2, 1"
print " "
print "polyval values of fit"
yy1=polyval(p,xx)
print yy1
print "should be values above"
output
test_polyfit.py a x,y,5)
fit 5.0 + 4.0*x + 3.0*x*x + 2.0*x*x*x + 1.0*x*x*x*x
i= 0 ,xx= 0.1 ,yy= 5.4321
i= 1 ,xx= 0.2 ,yy= 5.9376
i= 2 ,xx= 0.3 ,yy= 6.5321
i= 3 ,xx= 0.4 ,yy= 7.2336
i= 4 ,xx= 0.5 ,yy= 8.0625
p=polyfit(xx,yy,4) for 5 points
polyfit coefficients
[ 1. 2. 3. 4. 5.]
backwards? largest power first, expected 5, 4, 3, 2, 1
polyval values of fit
[ 5.4321 5.9376 6.5321 7.2336 8.0625]
should be values above
Python evaluating a polynomial on "complex" coefficients
test_cxpolyfit.py complex polyfit in Python, needs numpy
from numpy import array
from numpy import polyfit
from numpy import polyval
import pylab
def f(x): # changed to some complex coefficients
return (5.0+1.0j) + (4.0+1.1j)*x + (3.0+1.2j)*x*x + 2.0*x*x*x + 1.0*x*x*x*x
print "test_cxpolyfit.py a x,y,5) using complex numbers "
print "fit (5.0+1.0j) + (4.0+1.1j)*x + (3.0+1.2j)*x*x + 2.0*x*x*x + 1.0*x*x*x*x"
xx = [0.0 for i in range(5)]
yy = [0.0 for i in range(5)]
for i in range(5):
xx[i] = 0.1*(i+1)
yy[i] = f(xx[i]) # now has complex coefficients
print "i=",
print i,
print " ,xx=",
print xx[i],
print " ,yy=",
print yy[i]
print "p=polyfit(xx,yy,4) for 5 points"
p=polyfit(xx,yy,4) # 5 input values, 4th order
print "polyfit coefficients"
print p
print "backwards? largest power first, expected about 5, 4, 3, 2, 1"
print " "
print "polyval values of fit"
yy1=polyval(p,xx)
print yy1
print "should be values above"
output
test_cxpolyfit.py a x,y,5) using complex numbers
fit (5.0+1.0j) + (4.0+1.1j)*x + (3.0+1.2j)*x*x + 2.0*x*x*x + 1.0*x*x*x*x
i= 0 ,xx= 0.1 ,yy= (5.4321+1.122j)
i= 1 ,xx= 0.2 ,yy= (5.9376+1.268j)
i= 2 ,xx= 0.3 ,yy= (6.5321+1.438j)
i= 3 ,xx= 0.4 ,yy= (7.2336+1.632j)
i= 4 ,xx= 0.5 ,yy= (8.0625+1.85j)
p=polyfit(xx,yy,4) for 5 points
polyfit coefficients
[ 1. +1.45680467e-12j 2. -1.77472663e-12j 3. +1.20000000e+00j
4. +1.10000000e+00j 5. +1.00000000e+00j]
backwards? largest power first, expected about 5, 4, 3, 2, 1
polyval values of fit
[ 5.4321+1.122j 5.9376+1.268j 6.5321+1.438j 7.2336+1.632j 8.0625+1.85j ]
should be values above
No difference in polyfit or polyval, they accept real or complex.
double polyval(int order, double p[], double x) // return y = p(x)
{ // using Horner's rule
double y;
int i;
y = p[order]*x; // p one larger than order
for(i=order-1; i>0; i--)
y = (p[i]+y)*x;
y = p[0]+y;
return y;
} // end polyval
"C" language polyval on complex
easy editing a*b becomes cxmul(a,b)
a+b becomes cxadd(a,b)
double becomes complex
// #include "complex.h" compile with complex.c
complex cxpolyval(int order, complex p[], complex x) // return y = p(x)
{ // using Horner's rule
complex y;
int i;
y = cxmul(p[order],x); // p one larger than order
for(i=order-1; i>0; i--)
y = cxmul(cxadd(p[i],y),x);
y = cxadd(p[0],y);
return y;
} // end cxpolyval
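In Python the same Horner loop serves the double and the complex case unchanged, because + and * are overloaded; a sketch of the C routine above (it should reproduce the cxpolyval.java result shown later):

```python
# Horner's rule as in the C code above; p[0] is the constant term.
def polyval(order, p, x):
    y = p[order] * x
    for i in range(order - 1, 0, -1):
        y = (p[i] + y) * x
    return y + p[0]

print(polyval(2, [1.0, 2.0, 3.0], 2.0))              # 3*4 + 2*2 + 1 = 17.0
print(polyval(2, [1 + 0.01j, 2 + 0.02j, 3 + 0j], 2 + 0.1j))
```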
Java polyval on real
// polyval.java with simple test
class polyval
{
polyval(){}
double polyval(int order, double p[], double x) // return y = p(x)
{ // using Horner's rule
double y;
y = p[order]*x; // p one larger than order
for(int i=order-1; i>0; i--)
y = (p[i]+y)*x;
y = p[0]+y;
return y;
} // end polyval
public static void main (String[] args)
{
double y;
double x = 2.0;
double p[] = {1.0, 2.0, 3.0};
polyval c = new polyval();
y = c.polyval(2, p, x);
System.out.println("polyval.java on real");
System.out.println("p = {"+p[0]+","+p[1]+","+p[2]+"}");
System.out.println("y=p(x) x="+x+", y="+y);
} // end main
} // end polyval.java
polyval.java real output
polyval.java on real
p = {1.0,2.0,3.0}
y=p(x) x=2.0, y=17.0
Java polyval on complex
easy editing a*b becomes a.multiply(b)
a+b becomes a.add(b)
double becomes Complex
// cxpolyval.java with simple test on Complex
// needs class Complex.class that defines type and functions
class cxpolyval
{
cxpolyval(){}
Complex cxpolyval(int order, Complex p[], Complex x) // return y = p(x)
{ // using Horner's rule
Complex y;
y = p[order].multiply(x); // p one larger than order
for(int i=order-1; i>0; i--)
y = (p[i].add(y)).multiply(x);
y = p[0].add(y);
return y;
} // end cxpolyval
public static void main (String[] args)
{
Complex y;
Complex x = new Complex(2.0, 0.1);
Complex p[] = {new Complex(1.0,0.01), new Complex(2.0,0.02),
new Complex(3.0)};
cxpolyval c = new cxpolyval();
y = c.cxpolyval(2, p, x);
System.out.println("cxpolyval.java on Complex");
System.out.println("p = {"+p[0]+","+p[1]+","+p[2]+"}");
System.out.println("y=p(x) x="+x+", y="+y);
} // end main
} // end cxpolyval.java
cxpolyval.java complex output
cxpolyval.java on Complex
p = {(1.0,0.01),(2.0,0.02),(3.0,0.0)}
y=p(x) x=(2.0,0.1), y=(16.968,1.4500000000000002)
The notation (1.0,2.0) comes from Fortran where the
syntax was 1.0 real, 2.0 imaginary.
Even least square fit can be performed on complex values
and the polynomial may have powers of all variables
* Y_actual X1 X2
* -------- ----- -----
* 32.5+i21.2 1.0+i1.0 2.5+i2.0
* 7.2+i5.3 2.0+i1.1 2.5
* 6.9+i1.1 3.0+i1.2 2.7
* 22.4+i0.4 2.2+i2.2 2.1+i2.1
* 10.4+i2.3 1.5+i3.4 2.0
* 11.3+i4.0 1.6+i1.0 2.0
* ... ... ... need at least as many as c's
*
* Find coefficients c0, c1, c2, ... such that
* Y_approximate = c0 + c1*X1 + c2*X2 + c3*X1^2 + c4*X1*X2 + c5*X2^2 +
* c6*X1^3 + c7*X1^2*X2 + c8*X1*X2^2 + c9*X2^3 + ...
* and such that the sum of (Y_actual - Y_approximate) squared is minimized.
* The same simultaneous equations are built; be sure to use a simultaneous
* equation solver in which all arithmetic has been changed to complex.
some examples with multiple variables to higher powers
(Sorry, modifying to complex is left as an exercise to the student.)
least_square_fit_2d.c
least_square_fit_3d.c
least_square_fit_4d.c
least_square_fit_4d.java
least_square_fit_4d_java.out
Some complex functions may be computed accurately from the basic
definition. But, preserving relative accuracy over the complete
domain of a few complex functions requires special techniques.
Note that the domain and range of complex functions may not be
obvious to the average user.
First, look at the mappings from domain z1 to range z2 for some
complex functions:
With Java applets enabled, an interactive demonstration is available.
If this does not work, see the Complex Function Screen Shots.
Various identities for elementary functions in the complex plane,
including implementing the complex function using only real functions.
Let z = x + i y
then arg z = arctan y/x real function using signs of x and y
modulus z = sqrt(x*x+y*y) = |z| length of complex vector
conjugate(z) = x - iy
re z = x
im z = y
SQRT
i (arg z)/2
sqrt z = sqrt(|z|) e
thus yielding half the angle with magnitude sqrt(|z|)
modulus z + re z modulus z - re z
sqrt z = sqrt ---------------- +/- i sqrt ----------------
2 2
where the sign of the imaginary part of the result is
the sign of the imaginary part of z
2 3
z z z
sqrt 1+z = 1 + - - -- + -- - ...
2 8 16
sqrt( 0 + i0) = 0 + i0
sqrt(-0 + i0) = 0 + i0
sqrt( 0 - i0) = 0 - i0
sqrt(-0 - i0) = 0 - i0
sqrt(z)**2 = z
sqrt(z*z) = z re z > 0
sqrt(1/z) = 1/sqrt(z) not for -0 or negative real axis
conjugate(sqrt(z)) = sqrt(conjugate(z)) not for -0 or negative real axis
Branch cut:
The branch cut lies along the negative real axis.
Domain:
Mathematically unbounded
Range:
Right half plane including the imaginary axis. The algebraic sign
of the real part is positive.
The sign of the imaginary part of the result is the same as the
sign of im z.
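A few of the sqrt identities above can be spot-checked with Python's cmath module (the sample point -3+4i is an arbitrary choice):

```python
# Spot checks of the complex sqrt identities with cmath.
import cmath

z = -3.0 + 4.0j
w = cmath.sqrt(z)                     # should be 1+2i
assert abs(w*w - z) < 1e-12           # sqrt(z)**2 = z
assert w.real >= 0.0                  # range: right half plane
assert (w.imag >= 0.0) == (z.imag >= 0.0)   # im sign follows im z

z2 = 2.0 + 1.0j                       # re z > 0
assert abs(cmath.sqrt(z2*z2) - z2) < 1e-12  # sqrt(z*z) = z for re z > 0
print(w)
```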
LOG
log z = ln( modulus z) + i argument z
2 3
z z
log 1+z = z - -- + -- - ... for |z| < 1
2 3
log(exp(z)) = z im z in [-Pi,Pi]
log(1/z) = -log(z) not on negative real axis
conjugate(log(z)) = log(conjugate(z)) not on negative real axis
Branch cut:
The branch cut lies along the negative real axis.
Domain:
Modulus z /= 0.0
Range:
Real part mathematically unbounded, imaginary part in range - Pi to Pi
The sign of the imaginary part of the result is the same as the
sign of im z.
EXP
z re z re z
e = e cos (im z) + i e sin (im z)
i x
e = cos(x) + i sin(x)
2 3
z z z
e = 1 + z + -- + -- + ...
2! 3!
z z+i 2Pi
e = e periodic with period i 2Pi
exp(-z) = 1/exp(z)
exp(log(z)) = z |z| non zero
conjugate(exp(z)) = exp(conjugate(z))
Branch cut:
None
Domain:
re z < ln 'LARGE
Range:
Mathematically unbounded
For modulus z = 0.0, the result is 1.0 + z
"**"
w (w * ln z)
z = e
2 3
w (w ln z) (w ln z)
z = 1 + w ln z + --------- + --------- + ... for |z| non zero
2! 3!
w
log(z ) = w * log(z)
SIN
sin z = sin(re z) cosh(im z) + i cos(re z) sinh(im z)
iz -iz
e - e
sin z = -----------
2i
3 5
z z
sin z = z - -- + -- - ...
3! 5!
sin z = cos(z - Pi/2) periodic with period 2 Pi
sin(z) = sin(z+2Pi)
sin z = -i sinh iz
sin(arcsin(z)) = z
sin(z) = -sin(-z)
conjugate(sin(z)) = sin(conjugate(z))
Domain:
|im z| < ln 'LARGE
COS
cos z = cos(re z) cosh(im z) - i sin(re z) sinh(im z)
iz -iz
e + e
cos z = -----------
2
2 4
z z
cos z = 1 - -- + -- - ...
2! 4!
cos z = sin(z + Pi/2) periodic with period 2 Pi
cos(z) = cos(z+2Pi)
cos(z) = cos(-z)
cos z = cosh iz
cos(arccos(z)) = z
conjugate(cos(z)) = cos(conjugate(z))
Domain:
|im z| < ln 'LARGE
TAN
tan z = sin z / cos z
iz -iz
e - e
tan z = -i ---------- limit = -i when im z > ln 'LARGE
iz -iz limit = i when im z < - ln 'LARGE
e + e
3 5 7
z 2z 17z
tan z = z + -- + --- + ---- + ... for |z| < 1
3 15 315
sin x cos x sinh y cosh y
tan z = ---------------- +i ----------------
2 2 2 2
cos x + sinh y cos x + sinh y
tan z = cot(Pi/2 - z) periodic with period Pi
tan z = tan(z+2Pi)
tan z = 1/cot z
tan(z) = -tan(-z)
tan z = -i tanh iz
tan(arctan(z))=z
conjugate(tan(z)) = tan(conjugate(z))
Branch cut:
None
Domain:
Mathematically unbounded
Range:
Mathematically unbounded
For modulus z = 0.0, the result is z
COT
cot z = cos z / sin z
iz -iz
e + e
cot z = i ---------- limit = i when im z > ln 'LARGE
iz -iz limit = -i when im z < -ln 'LARGE
e - e
sin x cos x sinh y cosh y
cot z = ---------------- -i ----------------
2 2 2 2
sin x + sinh y sin x + sinh y
3 5
1 z z 2z
cot z = - - - - -- - --- - ...
z 3 45 945
cot z = tan(Pi/2 - z) periodic with period Pi
cot(z) = cot(z+2Pi)
cot(z) = -cot(-z)
cot z = 1/tan z
conjugate(cot(z)) = cot(conjugate(z))
Branch cut:
None
Domain:
Mathematically unbounded
Range:
Mathematically unbounded
For modulus z = 0.0, cot z has a pole (the series begins with 1/z)
ARCSIN
2
arcsin z = - i ln(i z + sqrt(1-z ))
arcsin z = - i ln(i z + sqrt(1-z)*sqrt(1+z))
3 5 7
z 3z 5z
arcsin z = z + -- + --- + ---- + ... for |z| < 1
6 40 112
1 3 15
arcsin z = -i( ln(2iz) - --- - ---- - ----- - ... ) for |z| > 1
2 4 6
4z 32z 288z
2
arcsin z = arctan(z/sqrt(1-z )) fix re of result
arcsin z =
2 2 1/2 2 2 1/2
arcsin(1/2 (x + 2 x + 1 + y ) - 1/2 (x - 2 x + 1 + y ) )
+ i
2 2 1/2 2 2 1/2
csgn(-i x + y) ln(1/2 (x + 2 x + 1 + y ) + 1/2 (x - 2 x + 1 + y )
2 2 1/2 2 2 1/2 2 1/2
+ ((1/2 (x + 2 x + 1 + y ) + 1/2 (x - 2 x + 1 + y ) ) - 1) )
note: The csgn function is used to determine in which half-plane
(`left' or `right') the complex-valued expression
or number x lies. It is defined by
/ 1 if Re(x) > 0 or Re(x) = 0 and Im(x) > 0
csgn(x) = < -1 if Re(x) < 0 or Re(x) = 0 and Im(x) < 0
\ 0 if x = 0
arcsin z = pi/2 - arccos z
arcsin z = -i arcsinh iz
arcsin(sin(z)) = z
Branch cut:
The real axis not in [ -1.0, 1.0 ]
Domain:
Mathematically unbounded
Range:
Imaginary part mathematically unbounded, real part in [ -Pi/2, Pi/2 ]
For modulus z = 0.0, the result is z
ARCCOS
1+z 1-z
arccos z = -i 2 ln( sqrt --- +i sqrt --- )
2 2
2
arccos z = -i ln(z + i sqrt(1-z ) )
3 5
Pi z 3z
arccos z = -- - z - -- - -- - ... for |z| < 1
2 6 40
arccos z ≈ -i ln(2z) for |z| > 1/sqrt(epsilon)
arccos z =
2 2 1/2 2 2 1/2
arccos(1/2 (x + 2 x + 1 + y ) - 1/2 (x - 2 x + 1 + y ) )
+ i
2 2 1/2 2 2 1/2
csgn(I x - y) ln(1/2 (x + 2 x + 1 + y ) + 1/2 (x - 2 x + 1 + y )
2 2 1/2 2 2 1/2 2 1/2
+ ((1/2 (x + 2 x + 1 + y ) + 1/2 (x - 2 x + 1 + y ) ) - 1) )
2
arccos z = arctan(sqrt(1-z )/z) fix re of result
arccos z = pi/2 - arcsin z
arccos(cos(z)) = z
Branch cut:
The real axis not in [ -1.0, 1.0 ]
Domain:
Mathematically unbounded
Range:
Imaginary part mathematically unbounded, real part in [ 0.0, Pi ]
ARCTAN
arctan z = -i ( ln(1 + i z) - ln(1 - i z) )/2
i i+z
arctan z = - ln --- must be fixed on slit for iz < -1
2 i-z
3 5 7
z z z
arctan z = z - -- + -- - -- + ... for |z| < 1
3 5 7
arctan z = Pi/2 - arccot z
arctan z = arccot(1/z)
arctan z = -i arctanh iz
arctan(tan(z)) = z
arctan z = 1/2 arctan(x, 1 - y) - 1/2 arctan(- x, y + 1)
2 2
x + (y + 1)
+i 1/4 ln(-------------)
2 2
x + (y - 1)
ARCCOT
i z-i
arccot z = - ln ---
2 z+i
3 5
Pi z z
arccot z = -- - z + -- - -- + ... for |z| < 1
2 3 5
arccot z = Pi/2 - arctan z
arccot z = arctan(1/z)
arccot(cot(z)) = z
arccot z = 1/2 Pi - 1/2 arctan(x, 1 - y) + 1/2 arctan(- x, y + 1)
2 2
x + (y + 1)
-i 1/4 ln(-------------)
2 2
x + (y - 1)
Range:
Imaginary part mathematically unbounded, real part in [0.0 , Pi]
SINH
sinh z = sinh(re z) cos(im z) + i cosh(re z) sin(im z)
z -z
e - e
sinh z = --------
2
3 5
z z
sinh z = z + -- + -- + ...
3! 5!
sinh z = -i cosh(z +i Pi/2) periodic with period i 2 Pi
sinh z = -i sin iz
COSH
cosh z = cosh(re z) cos(im z) + i sinh(re z) sin(im z)
z -z
e + e
cosh z = --------
2
2 4
z z
cosh z = 1 + -- + -- + ...
2! 4!
cosh z = -i sinh(z +i Pi/2) periodic with period i 2 Pi
cosh z = cos iz
TANH
tanh z = sinh z / cosh z
z -z
e - e
tanh z = --------
z -z
e + e
3 5 7
z 2z 17z
tanh z = z - -- + -- - ---- + ... for |z| < 1
3 15 315
tanh z = coth(z +i Pi/2) periodic with period i Pi
tanh z = 1/coth z
tanh z = -i tan iz
sinh x cosh x sin y cos y
tanh z = ---------------- +i ----------------
2 2 2 2
sinh x + cos y sinh x + cos y
COTH
coth z = cosh z / sinh z
z -z
e + e
coth z = --------
z -z
e - e
3 5
1 z z 2z
coth z = - + - - -- + --- - ... for |z| < 1
z 3 45 945
coth z = tanh(z +i Pi/2)
coth z = 1/tanh z
sinh x cosh x sin y cos y
coth z = ---------------- -i ----------------
2 2 2 2
sinh x + sin y sinh x + sin y
ARCSINH
2
arcsinh z = ln(z + sqrt(1 + z ) )
3 5 7
z 3z 5z
arcsinh z = z - -- + --- - --- + ... for |z| < 1
6 40 112
1 3 15
arcsinh z = ln(2z) + --- - ---- + ----- - ... for |z| > 1
2 4 6
4z 32z 288z
arcsinh z = -i arcsin iz
arcsinh(sinh(z)) = z
arcsinh z =
2 2 1/2 2 2 1/2
csgn(x + I y) ln(1/2 (x + y + 2 y + 1) + 1/2 (x + y - 2 y + 1)
2 2 1/2 2 2 1/2 2 1/2
+ ((1/2 (x + y + 2 y + 1) + 1/2 (x + y - 2 y + 1) ) - 1) )
+ i
2 2 1/2 2 2 1/2
arcsin(1/2 (x + y + 2 y + 1) - 1/2 (x + y - 2 y + 1) )
ARCCOSH
                       z+1        z-1
arccosh z = 2 ln( sqrt --- + sqrt --- )
                        2          2
arccosh z = ln(z + sqrt(z-1) sqrt(z+1) ) not sqrt(z**2-1)
1 3 15
arccosh z = ln(2z) - --- - ---- - ----- - ... for |z| > 1
2 4 6
4z 32z 288z
arccosh(cosh(z)) = z
arccosh z =
2 2 1/2
- csgn(I - I x + y) csgn(I x - y) ln(1/2 (x + 2 x + 1 + y )
2 2 1/2
+ 1/2 (x - 2 x + 1 + y )
2 2 1/2 2 2 1/2 2 1/2
+ ((1/2 (x + 2 x + 1 + y ) + 1/2 (x - 2 x + 1 + y ) ) - 1) )
+ i
csgn(I - I x + y)
2 2 1/2 2 2 1/2
arccos(1/2 (x + 2 x + 1 + y ) - 1/2 (x - 2 x + 1 + y ) )
ARCTANH
arctanh z = ( ln(1+z) - ln(1-z) )/2
1 1+z
arctanh z = - ln --- must fix up on slit for z > 1
2 1-z
3 5 7
z z z
arctanh z = z + -- + -- + -- + ... for |z| < 1
3 5 7
arctanh z = -i arctan iz
arctanh(tanh(z)) = z
arctanh z = arccoth(z) + i Pi/2
2 2
(x + 1) + y
arctanh z = 1/4 ln(-------------)
2 2
(x - 1) + y
+i 1/2 arctan(y, x + 1) - 1/2 arctan(- y, 1 - x)
ARCCOTH
1 z+1
arccoth z = - ln --- must fix up on slit
2 z-1
3 5
i Pi z z
arccoth z = ---- + z + -- + -- + ... for |z| < 1
2 3 5
arccoth z = arctanh(z) +i Pi/2
arccoth z = arctanh(1/z)
arccoth(coth(z)) = z
2 2
(x + 1) + y
arccoth z = 1/4 ln(-------------)
2 2
(x - 1) + y
+i 1/2 Pi + 1/2 arctan(y, x + 1) - 1/2 arctan(- y, 1 - x)
Range:
Real part mathematically unbounded, imaginary part in [0.0 , Pi]
There are many complex functions provided by Ada
generic_complex_elementary_functions.ads
Fortran 90 has a few complex functions
complex_func.f90 source code
complex_func_f90.out output
Python has many complex functions
test_complex.py3 source code
test_complex_py3.out output
Haskell has many complex functions
test_complex.hs source code
test_complex_hs.out output
Complex.hs Library module
I programmed some complex functions and utility routines in Java.
Complex.java source code
TestComplex.java source code test
TestComplex_java.out output
Csimeq.java source code
test_Csimeq.java source code test
test_Csimeq_java.out test output
Cinvert.java source code
test_Cinvert.java source code test
test_Cinvert_java.out test output
Cnuderiv.java source code
test2_Cnuderiv.java source code test
test2_Cnuderiv_java.out test output
I programmed some complex functions and utility routines in Ballerina.
complex_math.bal source code
complex_math_bal.out test output
Solving PDE with Complex functions, Complex solutions and Complex boundaries:
First the real, double, version second order, two dimensions
pdenu22_eq.java real source code test
pdenu22_eq_java.out test output
Then, converted to complex, Complex, version (using utilities above)
pdeCnu22_eq.java Complex source code test
pdeCnu22_eq_java.out test output
First the real, double, version fourth order, four dimensions
pde44h_eq.java real source code test
pde44h_eq_java.out test output
Then, converted to complex, Complex, version (using utilities above)
pdeC44h_eq.java Complex source code test
pdeC44h_eq_java.out test output
A much easier conversion to complex in Fortran
simeqC.f90 complex source code
inverseC.f90 complex source code
nuderivC.f90 complex source code
nuderivC_test.f90 complex source code
nuderivC_test_f90.out test output
pdeC44h_eq.f90 Complex source code test
pdeC44h_eq_f90.out test output
Similar maximum errors in real and complex code
MatLab overloads all functions that take floating point
to also work with complex numbers.
Python needs the cmath module.
Eigenvalue and eigenvector computation may have spawned more
special-case numerical code than any other problem. Considering the
size and speed of modern computers, I use a numerical solution for a
general complex matrix. Thus, I do not have to worry about constraints
on the matrix (is it numerically positive definite?)
The eigenvalues of a matrix with only real elements may be complex.
| 2.0 -2.0 |
| 1.0 0.0 | has eigenvalues 1+i and 1-i
Thus, computing eigenvalues needs to use complex arithmetic.
This type of numerical algorithm, you do not want to develop yourself.
The technique is to find a working numerical code, test it yourself,
then adapt the code to your needs, possibly converting the code to
a different language.
Eigenvalues have application in the solution of some physics problems.
They are also used in some solutions of differential equations.
Some statistical analysis uses eigenvalues. Sometimes only the
largest or second largest eigenvalues are important.
My interest comes from the vast types of testing that can, and should,
be performed on any code that claims to compute eigenvalues and
eigenvectors. Thorough testing of any numeric code you are going
to use is essential.
Information about eigenvalues, e (no lambda in plain ASCII)
and eigenvectors, v, for arbitrary n by n complex matrix A.
There are exactly n eigenvalues (some may have multiplicity greater than 1).
For every eigenvalue there is at least one corresponding eigenvector.
For an eigenvalue with multiplicity greater than 1, there may be fewer
independent eigenvectors than its multiplicity (a defective matrix).
When the matrix is normal (for example real symmetric or Hermitian),
the n eigenvectors form a basis and are mutually orthogonal.
(The dot product of any pair of distinct eigenvectors is then zero.)
View this page in a fixed width font, else the matrices are shambles.
Vertical bars, | |, in this lecture are not absolute value;
they mark a vector or matrix.
| 1 0 0 |
det|A-eI| = 0 defines e, where I is an n by n identity matrix | 0 1 0 |
zero except 1 on the diagonal | 0 0 1 |
(n e's for n by n A)
For a 3 by 3 matrix A:
| a11-e a12 a13 |
det| a21 a22-e a23 | = (a11-e)*(a22-e)*(a33-e)+a12*a23*a31+a13*a32*a21-
| a31 a32 a33-e | a31*(a22-e)*a13-a21*a12*(a33-e)-a32*a23*(a11-e)
Writing out the above determinant gives the "Characteristic Equation"
of the matrix A.
Combining terms gives e^n + c_n-1 * e^n-1 + ... + c_2 * e^2 + c_1 * e + c_0 = 0
(Divide through by c_n, to have exactly n unknown coefficients 0..n-1).
(Special case because of " = 0 " )
There are exactly n roots for an nth order polynomial and the
n roots of the characteristic equation are the n eigenvalues.
The relation between each eigenvalue and its corresponding eigenvector is
Av = ev where |v| is non zero.
Typically, we require the length of v to be 1.
Given a matrix A and a non singular matrix P and P inverse p^-1
B = P A P^-1 the matrix B has the same eigenvalues as matrix A.
The eigenvectors may be different.
B is a similarity transform of A.
The diagonal elements of a diagonal matrix are the eigenvalues of the matrix.
| a11 0 0 0 |
| 0 a22 0 0 |
| 0 0 a33 0 | has eigenvalues a11, a22, a33, a44 and
| 0 0 0 a44 |
corresponding eigenvectors
v1=|1 0 0 0| v2=|0 1 0 0| v3=|0 0 1 0| v4=|0 0 0 1|
Notice that the eigenvectors are not necessarily unique and may be
scaled by an arbitrary, non zero, constant. Normalizing the length
of each eigenvector to 1.0 is common.
The eigenvalues of a 2 by 2 matrix are easily computed as the roots
of a second order equation.
det| a11-e a12 |=0 or (a11-e)*(a22-e) - a12*a21 = 0 or
| a21 a22-e |
e^2 - (a11+a22) e + (a11*a22-a12*a21) = 0
Let a=1, b=-(a11+a22), c= (a11*a22-a12*a21)
then e = (-b +/- sqrt(b^2-4*a*c))/(2*a) computes the two eigenvalues.
The roots and thus the eigenvalues may be complex.
Note that a matrix with all real coefficients may have complex eigenvalues
and or complex eigenvectors:
A= | 1 -1 |     Lambda= 1 + i    vec x= | 1  -i |
   | 1  1 |     Lambda= 1 - i    vec x= | 1   i |
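The 2 by 2 case above fits in a few lines of Python 3; cmath.sqrt keeps the complex roots when the discriminant goes negative (the function name is mine):

```python
# Eigenvalues of a 2x2 matrix via the quadratic formula.
import cmath

def eig2x2(a11, a12, a21, a22):
    b = -(a11 + a22)
    c = a11*a22 - a12*a21
    d = cmath.sqrt(b*b - 4*c)        # a = 1 after dividing through
    return (-b + d) / 2, (-b - d) / 2

e1, e2 = eig2x2(1.0, -1.0, 1.0, 1.0)
print(e1, e2)                        # 1+i and 1-i, as in the example above
assert abs((e1 + e2) - 2.0) < 1e-12  # sum of eigenvalues equals the trace
```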
Computing the characteristic equation is usually not a good way
to compute eigenvalues for n greater than 4 or 5.
It becomes difficult to compute the coefficients of the characteristic
equation accurately and it is also difficult to compute the roots accurately.
Note that given a high order polynomial, a matrix can be set up from
the coefficients such that the eigenvalues of the matrix are the roots
of the polynomial. x^4 + c_3 x^3 + c_2 x^2 + c_1 x + c_0 = 0
| -c_3 -c_2 -c_1 -c_0 | where c_4 of the polynomial is 1.0
| 1 0 0 0 |
| 0 1 0 0 |
| 0 0 1 0 |
Thus, if you have code that can find eigenvalues, you have code
that can find roots of a polynomial. Not the most efficient method.
matrixfromroots.py3
matrixfromroots_py3.out
The maximum absolute column sum of a matrix (its L1 norm) is an upper
bound on the absolute value of the largest eigenvalue; so is the
maximum absolute row sum (the infinity norm).
Test, get biggest absolute value of every row and column, then
compare to absolute value of every eigenvalue.
Scaling a matrix by multiplying every element by a constant causes
every eigenvalue to be multiplied by that constant. The constant may
be less than one, thus dividing works the same.
Usually test times 2.
The eigenvalues of the inverse of a matrix are the reciprocals
of the eigenvalues of the original matrix. They may be printed
in a different order. Harder test.
The eigenvectors of the inverse of a matrix are the same as
the eigenvectors of the matrix. They may differ in order and sign.
Sample test on 3 by 3 matrix, real and complex:
eigen3q.m
eigen3q_m.out
eigen3r.m
eigen3r_m.out
A matrix with all elements equal has one eigenvalue equal to
the sum of a row and all other eigenvalues equal to zero.
The eigenvectors are typically chosen as the unit vectors.
| 0 0 0 |
A = | 0 0 0 | has three eigenvalues, all equal to zero
| 0 0 0 |
| 1 1 1 |      | 2 2 2 2 |
| 1 1 1 |      | 2 2 2 2 |    the 3 by 3 of 1's has eigenvalues 0, 0, 3
| 1 1 1 |      | 2 2 2 2 |    the 4 by 4 of 2's has eigenvalues 0, 0, 0, 8
               | 2 2 2 2 |    in general n-1 zeros and the row sum
Each row that is linearly dependent on the others reduces the rank of
the matrix and increases the multiplicity of the zero eigenvalue.
The trace of a matrix is the sum of the diagonal elements.
The trace of a matrix is equal to the sum of the eigenvalues.
Another test, easy one.
In order to keep the same eigenvalues, interchanging two rows
of a matrix, then requires interchanging the corresponding two columns.
The eigenvectors will probably be different.
Sample test in Matlab showing the above
eigen10.m source code
eigen10_m.out results
Testing code that claims to compute eigenvalues
Testing a program that claims to compute eigenvalues and eigenvectors
is interesting because there are many possible tests. All should be
used.
Given A is an n by n complex matrix (that may have all real elements),
using IEEE 64-bit floating point and good algorithms:
1) Evaluate the determinant det|A-eI| for each eigenvalue, e.
The result should be near zero. 10^-9 or smaller can be expected
when A is small and eigenvalues are about the same magnitude.
2) Evaluate each eigenvalue with its corresponding eigenvector.
Av-ev should be a vector with all elements near zero.
Typically check the magnitude of the largest element.
10^-9 or smaller can be expected when A is small.
3) When A is normal (e.g. Hermitian), compute the dot product of every
pair of distinct eigenvectors and check for near zero.
Eigenvectors should have length 1.0
4) Compute the trace of A and subtract the sum of the eigenvalues.
The result should be near zero. The trace of a matrix is the
sum of the diagonal elements of the matrix.
5) Compute the maximum of the sum of the absolute values of each
row and column of A. Check that the absolute value of every
eigenvalue is less than or equal to this maximum.
6) Create a matrix from a polynomial and check that eigenvalues
are the roots of the polynomial.
7) Create a similar matrix B = P * A * P^-1 and check that
eigenvalues of B are same as eigenvalues of A.
Test cases may use random numbers or selected
real or complex numbers.
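A minimal sketch of checks 2, 4 and 5 above, applied to numpy.linalg.eig (assuming numpy is available; the 2 by 2 test matrix is the one from earlier with eigenvalues 1+i and 1-i):

```python
# Apply eigenvalue checks 2, 4 and 5 to numpy.linalg.eig output.
import numpy as np

A = np.array([[2.0, -2.0], [1.0, 0.0]])       # eigenvalues 1+i, 1-i
e, v = np.linalg.eig(A)

# 2) A v - e v should be a vector with all elements near zero
for k in range(A.shape[0]):
    assert np.max(np.abs(A @ v[:, k] - e[k] * v[:, k])) < 1e-9

# 4) trace of A minus sum of eigenvalues should be near zero
assert abs(np.trace(A) - np.sum(e)) < 1e-9

# 5) every |eigenvalue| bounded by the max absolute row/column sum
bound = max(np.abs(A).sum(axis=0).max(), np.abs(A).sum(axis=1).max())
assert np.all(np.abs(e) <= bound + 1e-12)
print(e)
```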
Generating test matrices to be used for testing.
1) Try matrices with n=1,2,3 first.
All zero matrix, all eigenvalues zero and eigenvectors should
be the unit basis vectors. If the length of the eigenvectors
is not 1.0, then you have to normalize them.
2) Try diagonal matrices with n=1,2,3,4
Typically put 1, 2, 3, 4 on the diagonal to make it easy to
check the values of the computed eigenvalues.
3) Generate a random n by n matrix, P, with real and imaginary values.
Compute P inverse, P^-1.
Compute matrix B = P A P^-1 for the A matrices in 2)
The eigenvalues of B should be the same as the eigenvalues of A,
yet the eigenvectors may be different.
4) Randomly interchange some rows and corresponding columns of B.
The eigenvalues should be the same yet the eigenvectors
may be different.
5) Choose a set of values, typically complex values e1, e2, ..., en.
Compute the polynomial that has those roots
(x-e1)*(x-e2)*...*(x-en) and convert to the form
x^n + c_n-1 x^n-1 + ... c_2 x^2 + c_1 x + c_0
Create the matrix n by n with the first row being negative c's
and the subdiagonal being 1's.
| -c_n-1  -c_n-2  ...  -c_1  -c_0 |
|    1       0    ...     0    0  |
|    0       1    ...     0    0  |
|   ...     ...   ...    ...  ... |
|    0       0    ...     1    0  |
The eigenvalues should be e1, e2, ..., en
Then do a similarity transform with a random matrix
and check that the same eigenvalues are computed.
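Step 5 above can be sketched in Python 3: expand (x-e1)*(x-e2)*...*(x-en) into coefficients, highest power first, using complex arithmetic (the function name is mine):

```python
# Expand a monic polynomial from its roots, coefficients highest first.
def poly_from_roots(roots):
    coeffs = [1.0 + 0j]                 # the polynomial "1"
    for r in roots:
        new = coeffs + [0j]             # multiply by x
        for i in range(len(coeffs)):
            new[i + 1] -= r * coeffs[i] # subtract r times old polynomial
        coeffs = new
    return coeffs

print(poly_from_roots([1 + 1j, 1 - 1j]))   # x^2 - 2x + 2
```

With roots 1+i and 1-i this gives x^2 - 2x + 2, so the first row of the companion matrix would be | 2 -2 | over a subdiagonal 1: the same 2 by 2 matrix shown earlier with eigenvalues 1+i and 1-i.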
Yes, this is a lot of testing, yet once coded, it can be used
for that "newer and better" version the "used software salesman"
is trying to sell you.
When 64-bit floating point is not accurate enough
When 64-bit integers are way too small
You will need this lecture for Homework 4 and
cs455 Project
"C" has gmp, gnu multiple precision.
Java has BigInteger and BigDecimal in java.math.
Java has apfloat.jar
Ada has arbitrary decimal digit precision floating point.
Fortran has a multiple precision library.
Python has arbitrary precision integer arithmetic
Python has mpmath and other packages
Hmmm? Multiple precision must be important and have a use.
Computing Pi to a million places is a demonstration, but there
are more reasonable uses.
Types of multiple precision:
Unlimited length integers
Unlimited length fractions 1/integer
Unlimited length rationals integer/integer
Unlimited length floating point
Arbitrary yet fixed number of digits floating point
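Two of the flavors above come with Python's standard library: unlimited-length integers and rationals, and an arbitrary yet fixed number of decimal digits via the decimal module; a quick sketch:

```python
# Multiple precision flavors available in the Python standard library.
from fractions import Fraction
from decimal import Decimal, getcontext

third = Fraction(1, 3)
print(third + third + third)        # exactly 1, no rounding

getcontext().prec = 50              # 50 significant decimal digits
print(Decimal(2).sqrt())            # sqrt(2) to 50 digits

print(2**200)                       # integers are unlimited length
```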
for "C" get gmp, GNU Multiple Precision!
download gmp from https://gmplib.org
gmp.guide
Here are a few simple gmp samples
test_mpf.c
test_mpf.out
test_mpq.c
test_mpq.out
test_mpz.c
test_mpz.out
gmp fact.c
fact_gmp.out
Java BigDecimal that is BigInteger with a scale factor
Big_pi.java test program
Big_pi_java.out test results
Big_pi_check.java test program
Big_pi_check_java.out test results
test_apfloat.java test program
test_apfloat.out test results
mytest_apfloat.java test program
mytest_apfloat.out test results
Big_math.java various functions exp, sin
Big_simeq.java simultaneous equations
Big_inverse.java invert matrix
Fortran 95 module that implements big integers
big_integers_module.f90
test_big.f90 test program
test_big_f90.out test results
Ada 95 precise, rational and digits arithmetic
directory of Ada 95 files
Python simple factorial(52) is
factorial.py program
factorial_py.out output
factbig.py3 really big 1,000,000!
Python simple 2^200 integer, yet floating point is IEEE 64 bit
power2.py program
power2_py.out output
test_mpmath.py program
test_mpmath_py.out output
test_mpmath.py3 program
test_mpmath_py3.out output
mpmath_example.py program
mpmath_example_py.out output
mpmath_example.py3 program
mpmath_example_py3.out output
A quick conversion of simeq.c to mpf_simeq.c solves simultaneous
equations with 50 digits, could be many more digits using gmp mpf_t.
Using the very difficult to get accurate answers matrix:
test_mpf_simeq.c
mpf_simeq.c
mpf_simeq.h
test_mpf_simeq.out
test_mpf_simeq_300.out
Can irrational numbers be combined to produce an integer?
It appears that e^(Pi*sqrt(163)) is the integer 262,537,412,640,768,744
and that is rather amazing to me.
How might this be validated? It has been tried using higher and
higher precision and the value computes to about
262,537,412,640,768,743.999,999,999,999,999,999,999
We know 1.5 base 10 can be represented as 1.1 base 2 exactly.
We know 1/3 can not be represented exactly in a finite number of
decimal digits or a finite number of binary digits
1/3 = 0.3333333333333333333333333333333 decimal approximately
1/3 = 0.0101010101010101010101010101010 binary approximately
And, for what it is worth
We know 1/3 can be represented exactly as 0.1 base 3.
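The repeating binary expansion of 1/3 claimed above can be generated by long division in base 2; a short Python 3 sketch (function name is mine):

```python
# Binary expansion of a fraction num/den by repeated doubling.
def binary_fraction(num, den, digits):
    bits = []
    for _ in range(digits):
        num *= 2
        bits.append(str(num // den))  # next binary digit
        num %= den                    # remainder carries forward
    return "0." + "".join(bits)

print(binary_fraction(1, 3, 30))      # 0.0101... repeating, never exact
print(binary_fraction(1, 2, 4))       # 0.1000, exact
```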
A quick "C" program gives the first 14 digits, using IEEE 64-bit floating point
/* irrational.c e^(Pi*sqrt(163)) is an integer? */
#include <math.h>
#include <stdio.h>
#define e 2.7182818284590452353602874713526624977572
#define Pi 3.1415926535897932384626433832795028841971
int main(int argc, char * argv[])
{
printf("e^(Pi*sqrt(163)) is an integer? \n");
printf(" 262537412640768744.0000000 check \n");
printf("%28.7f using 15 digit arithmetic \n",pow(e,Pi*sqrt(163.0)));
return 0;
}
With output:
e^(Pi*sqrt(163)) is an integer?
262537412640768744.0000000 check
262537412640767712.0000000 using 15 digit arithmetic
262537412640768743.99999999999925007259719818568887935385633733699 using 300 digit
What is needed is to show convergence from above and below,
and it would be nice if the convergence was uniform. This
could use 50, 100, 150, 200 digit precision for the computation.
Pick a number of digits. Increment the bottom digit of e, Pi and 163
then do the computation. Decrement the bottom digit of e, Pi and 163
then do the computation. Check if the upper and lower values of
the fraction are converging toward zero. Check if the convergence
is uniform, balanced, for the upper and lower values.
This is left as an exercise to the student.
Computing Pi to arbitrary precision
One method of computing Pi is to compute 4*atan(1) or
24*atan(b2) b2=(2*sqrt(b1)-1)/b1 b1=2-sqrt(3)
get_pi.c
get_pi.out
get_pi.c using double and math.h atan
pi=24*atan(b2) b2=0.131652, b1=0.267949
Pi= 3.1415926535897932384626433832795028841971
get_pi= 3.14159265358980333
pi-Pi= 0.00000000000001021
get_mpf_pi.c
get_mpf_pi.out
get_mpf_pi.c using digits=50
b1= 0.26794919243112270647255365849412763305719474618962
b2= 0.13165249758739585347152645740971710359281410222324
mpf_pi= 3.1415926535897932384626433832795028841971693999457
Pi= 3.1415926535897932384626433832795028841971
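The atan identity above is easy to check in double precision. A small Python sketch (an illustration, not one of the course files):

```python
import math

b1 = 2.0 - math.sqrt(3.0)              # 0.26794919...
b2 = (2.0 * math.sqrt(b1) - 1.0) / b1  # 0.13165249... = tan(Pi/24)
pi_est = 24.0 * math.atan(b2)
# pi_est agrees with math.pi to machine precision
```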
Now you can do Homework 4
First, a special case, the "roots of unity"
x +1=0 root is x=-1
x^2+1=0 roots are x=i x=-i i=sqrt(-1)
x^3+1=0 roots are x=-1 x=1/2 +i sqrt(3)/2 x=1/2 -i sqrt(3)/2
x^4+1=0 roots are x=+sqrt(2)/2 +i sqrt(2)/2
(all four combinations of +/-)
Note that all roots are on the unit circle in the complex plane.
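A short Python sketch (illustrative; the function name is mine) computes the n roots of x^n + 1 = 0 directly from the exponential form and confirms they sit on the unit circle:

```python
import cmath
import math

def roots_of_minus_one(n):
    """Roots of x^n + 1 = 0 are e^(i*Pi*(2k+1)/n) for k = 0..n-1."""
    return [cmath.exp(1j * math.pi * (2 * k + 1) / n) for k in range(n)]

roots4 = roots_of_minus_one(4)   # the four roots of x^4 + 1 = 0
```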
Consider the Maple root finding for x^12+1=0 :
root12g.html
In general the roots of a polynomial are complex numbers.
A general purpose root finder for polynomials is difficult to
develop. Fortunately, some very smart people have written the
function, cpoly, that can take a general polynomial with complex
coefficients and find the complex roots. It also works with
real coefficients and real roots.
MatLab root finding for polynomials is just r = roots(p)
Where p is the vector of polynomial coefficients and
r is the vector of roots. Both r and p may be complex.
find_roots.m source code
find_roots_m.out output
Maple root finding for polynomials, not great:
find_roots.mws source code
find_roots_mws.out output
Real and complex work sometimes.
Finding roots of a polynomial seems easy:
given c_n x^n + c_(n-1) x^(n-1) + ... + c_1 x + c_0 = 0
find a value, x=r, that satisfies the equation,
divide the polynomial by (x-r) and thus have a
polynomial of one degree lower. Repeat. Done.
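The divide-and-repeat step is synthetic division. A minimal Python sketch (names are mine, not from the codes below):

```python
def deflate(coeffs, r):
    """Divide c[0]*x^n + ... + c[n] by (x - r) using synthetic division.
    Returns the quotient coefficients; the remainder is dropped and
    should be near zero when r is a root."""
    q = [coeffs[0]]
    for c in coeffs[1:-1]:
        q.append(c + r * q[-1])
    return q

# x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3); deflating by the root r = 1
quotient = deflate([1.0, -6.0, 11.0, -6.0], 1.0)   # x^2 - 5x + 6
```

The same routine works with complex coefficients and complex roots, which is the general case for polynomials.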
Unfortunately there are a lot of pitfalls, solved by the codes below.
Standard techniques include dividing through by c_n,
if c_0 is zero, there is a root equal zero, divide through by x,
if highest power is 2, directly solve the quadratic equation.
"cpoly" is a very old routine to compute roots of a polynomial
with complex coefficients c_0 ... c_n.
The Fortran code, including the test program (driver) for cpoly is
419.for in Fortran IV, compiles with g95
419_f.out is the output of many test cases
Another version uses an automatic conversion of Fortran to C.
The program f2c converts .f to .c; the result may not be very pretty.
It also needs the library that comes with f2c.
419.f driver and cpoly
419.c driver and cpoly
The Java code, including the test program (driver) is
c419.java in standard Java
c419.out is the output of many test cases
The Ada code, including the test program (driver) is
long_complex_polynomial_roots.ada in Ada
To understand how some iterative methods can converge quickly,
consider how to compute sqrt when you have no math library:
Basically, to get y = sqrt(x): guess y, then iterate y_next = (x/y+y)/2.
Do not guess zero for y (x/y would divide by zero); use y = 1 as the guess.
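The iteration above as a small Python sketch (illustrative, not sqrt.c itself):

```python
def my_sqrt(x, tol=1e-15):
    """Heron / Newton iteration for sqrt(x); never guess zero."""
    y = 1.0
    while True:
        y_next = (x / y + y) / 2.0
        if abs(y_next - y) < tol * max(1.0, y_next):
            return y_next
        y = y_next
```

Even from the fixed guess y = 1 this converges in a few dozen iterations for ordinary magnitudes, because the error roughly squares each step once it is small.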
sqrt.c   sqrt_c.out
A professional implementation of sqrt would reduce the input, x,
to the range 0.25 to 1.0 by dividing by an even power of 2.
The sqrt of x^(2n) is just x^n, and the square root of a product is
the product of the square roots of the factors. This is easy
with IEEE floating point numbers because they are stored with
a base 2 exponent.
So, since the sqrt iteration was basically Newton's method,
why not just use Newton's method to find roots?
Here is how it works, even for complex roots.
Given: f(x) = 0
guess x
x_next = x - f(x)/f'(x) f'(x) is the derivative of f(x)
avoid region where slope is zero
repeat until |x_next-x| < epsilon
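As a Python sketch of the loop above (a toy compared with newton.c; the cubic test function is my own):

```python
def newton(f, fprime, x, tol=1e-12, max_iter=100):
    """Newton iteration x_next = x - f(x)/f'(x); stop on a small step.
    A bad guess, or landing where f'(x) is near zero, can oscillate
    or diverge, so the iteration count is capped."""
    for _ in range(max_iter):
        fp = fprime(x)
        if fp == 0.0:                # slope zero: cannot step
            break
        x_next = x - f(x) / fp
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# the real root of x^3 - x - 2, starting near x = 1.5
root = newton(lambda x: x**3 - x - 2.0, lambda x: 3.0*x**2 - 1.0, 1.5)
```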
newton.c   newton_c.out
Notice the oscillation in the last few steps.
The problem can come from a bad guess that causes a big
oscillation. The problem can come from hitting an x
where f'(x) is zero. Thus, there have been many workarounds.
"cpoly" is one of the most general solutions.
Nonlinear Equations
Other examples:
given f(x) = x + 2 x^2 + 3/x = 6
thus f'(x) = 1 + 4 x - 3/x^2
then guess a value of x, Newton iteration is
next x = x - f(x)/f'(x)
First, expecting x=1; we happen to have at least two possible solutions:
simple_nl.c   simple_nl_c.out
Second, expect x=2, have convergence to expected solution:
simple2_nl.c   simple2_nl_c.out
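A compact Python version of this one-variable example (illustrative; note the derivative of 3/x is -3/x^2). Two different starting guesses land on two different roots:

```python
def f(x):
    return x + 2.0*x*x + 3.0/x - 6.0

def fp(x):
    return 1.0 + 4.0*x - 3.0/(x*x)   # derivative of 3/x is -3/x^2

def newton_from(x, steps=60):
    for _ in range(steps):
        x = x - f(x) / fp(x)
    return x

r1 = newton_from(1.4)   # converges to the root x = 1
r2 = newton_from(0.5)   # converges to a second root, about 0.6861
```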
Systems of Nonlinear Equations
in matrix form, the equations are: A * X = Y
| 1 2 3 | | x1 | = | 10.000 |
| 2 1 1 | * | x2^2 | = | 6.333 |
| 3 1 4 | | 1/x3 | = | 8.333 |
A * X = Y
from equations that have these derivatives
f1(x1, x2, x3) = x1 + 2 x2^2 + 3/x3 = 10
f1'(x1) = 1
f1'(x2) = 4 x2
f1'(x3) = -3/x3^2
f2(x1, x2, x3) = 2 x1 + x2^2 + 1/x3 = 6.333
f2'(x1) = 2
f2'(x2) = 2 x2
f2'(x3) = -1/x3^2
f3(x1, x2, x3) = 3 x1 + x2^2 + 4/x3 = 8.333
f3'(x1) = 3
f3'(x2) = 2 x2
f3'(x3) = -4/x3^2
create the Jacobian
| 1 | | 1 4*x2 -3/x3^2 |
A * | 2*x2 | = | 2 2*x2 -1/x3^2 | = Ja
|-1/x3^2 | | 3 2*x2 -4/x3^2 |
A * D = Ja
deriv of X wrt x1,x2,x3
The Newton iteration becomes
next X = X - (A*X-Y)/Ja, where /Ja means multiplying by the inverse of Ja
Ja is, in general, dependent on all variables.
With three equations in three unknowns, convergence is not immediate.
Experiments show great differences with widely different
initial conditions. Two sets of equations with a choice of
initial condition follow; x1, x2, x3 are the variables.
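A Python sketch of the full Newton iteration for this 3 by 3 system (helper names are mine; the real codes below use a separate inverse routine instead of this little Gaussian elimination):

```python
def solve3(A, b):
    """Solve a 3 by 3 linear system by Gaussian elimination with
    partial pivoting (a small stand-in for a real linear solver)."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, 3):
            factor = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= factor * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def F(x1, x2, x3):
    """Residuals of the three nonlinear equations above."""
    return [x1 + 2.0*x2*x2 + 3.0/x3 - 10.0,
            2.0*x1 + x2*x2 + 1.0/x3 - 6.333,
            3.0*x1 + x2*x2 + 4.0/x3 - 8.333]

def J(x1, x2, x3):
    """Jacobian of F, matching the derivatives listed above."""
    return [[1.0, 4.0*x2, -3.0/x3**2],
            [2.0, 2.0*x2, -1.0/x3**2],
            [3.0, 2.0*x2, -4.0/x3**2]]

x = [1.0, 1.0, 1.0]                      # initial guess
for _ in range(50):
    dx = solve3(J(*x), F(*x))
    x = [xi - di for xi, di in zip(x, dx)]
# x converges near (1, 2, 3)
```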
equation_nl.c   equation_nl_c.out   inverse.c used by code above
In other languages:
equation_nl.adb equation_nl_ada.out inverse.adb   equation_nl.f90 equation_nl_f90.out inverse.f90   equation_nl.java equation_nl_java.out inverse.java
A system of nonlinear equations with powers 1, 2, 3, -1 and -2.
This example has a lot of debug print and several checks.
Also added is variable control of fctr, a multiplier on
how much correction is made each iteration.
The problem and method are described as comments in the code.
equation2_nl.adb   equation2_nl_ada.out
Same example as above, in various languages, with debug removed:
equation2_nl.f90 equation2_nl_f90.out   equation2_nl.c equation2_nl_c.out   equation2_nl.java equation2_nl_java.out
A polynomial nonlinear example that converges with no fctr control.
equation4_nl.c   equation4_nl_c.out
A more general routine for solving a system of nonlinear
simultaneous equations A x = y, where there are n unknowns and
any unknown may appear to the first, second or third power,
or to the negative one or negative two power:
A is n by n+nonlinear terms.
y is the n right hand sides of the equations.
x is n+nonlinear, initially the first guess at the solution
of the linear terms, with space for the nonlinear terms.
The caller sets up var1 with indices of first power.
The caller sets up var2 with indices of second power.
The caller sets up var3 with indices of third power.
The caller sets up vari1 with indices of negative first power.
The caller sets up vari2 with indices of negative second power.
Must have each term, then non linear, note: -1 means no term
a*x[0] + b*x[0]*x[1]*x[2] + c*x[1]*x[1]*x[1]/(x[0]*x[2]) = d
n=3 variables, nonlinear=2 in this example
x[0] x[1] x[2] b c
int var1[] = { 0, 1, 2, 0, 1 }; // zero based subscript
int var2[] = {-1, -1, -1, 1, 1 }; // e.g. last is x3*x4
int var3[] = {-1, -1, -1, 2, 1 }; // possible cube or 3 term
int vari1[] = {-1, -1, -1, -1, 0 }; // zero based subscript
int vari2[] = {-1, -1, -1, -1, 2 }; // e.g. last is x3*x4
simeq_newton5.java test_simeq_newton5.java test_simeq_newton5_java.out inverse.java   simeq_newton5.py3 test_simeq_newton5.py3 test_simeq_newton5_py3.out
A difficult numerical problem is finding the global minimum
of an unknown function. This type of problem is called an
optimization problem because the global minimum is typically
the optimum solution.
In general we are given a numerically computable function of
some number of parameters, v = f(x_1, x_2, ... , x_n),
and must find the values of x_1, x_2, ... , x_n that give
the smallest value of v. Or, by taking the absolute value,
find the values of x_1, x_2, ... , x_n that give the value
of v closest to zero.
Generally the problem is bounded, and there are given maximum and
minimum values for each parameter. There are typically many
places where local minima exist. Thus, the general solution
must include a global search, then a local search to find the
local minimum. There is no general guaranteed optimal solution.
First consider the case of only one variable on a non-differentiable
function, y = f(x), where x has bounds xmin and xmax. There may be
many local minima, valleys that are not the deepest.
* * * * *
* * * * * * * * * *
* * * * * * * * * * *
* * * ** * * *
* *
*
Global search:
Do a y=f(x) for x = xmin, x = x+dx, until x>xmax,
Save the smallest y and the x0=x for that y.
Then find the local minimum:
Consider evaluating f(x) at some initial point x0 and x0+dx.
If f(x0) < f(x0+dx) you might move to x0-dx.
if f(x0) > f(x0+dx) you might move to x0+dx+dx.
The above may be very bad choices!
Here are the cases you should consider:
Compute yl=f(x-dx) y=f(x) yh=f(x+dx) for some dx
yh y yh y yl yl
y yh yl yl yh y
yl yl y yh y yh
case 1 case 2 case 3 case 4 case 5 case6
For your next three points, always keep best x:
case 1 x=x-dx possibly dx=2*dx
case 2 x=x-dx dx=dx/2
case 3 dx=dx/2
case 4 x=x+dx dx=dx/2
case 5 dx=dx/2
case 6 x=x+dx possibly dx=2*dx
Then loop.
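The global scan plus the three-point case logic above can be sketched in Python like this (parameter choices are mine; optm.py and min_search.py below are the real examples):

```python
def minimize_1d(f, xmin, xmax, dx=None, tol=1e-6, max_iter=10000):
    """Global coarse scan, then the three-point local search above.
    f need not be differentiable, but it is evaluated up to dx
    outside the bounds, so it must be defined slightly beyond them."""
    if dx is None:
        dx = (xmax - xmin) / 100.0
    x = xmin
    best_x, best_y = xmin, f(xmin)
    while x <= xmax:                  # global search for a starting x
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
        x += dx
    x = best_x
    for _ in range(max_iter):         # local three-point search
        if dx < tol:
            break
        yl, y, yh = f(x - dx), f(x), f(x + dx)
        if yl < y and yl <= yh:       # downhill to the left
            x -= dx
            dx = dx * 2.0 if yh > y else dx / 2.0
        elif yh < y and yh < yl:      # downhill to the right
            x += dx
            dx = dx * 2.0 if yl > y else dx / 2.0
        else:                         # center is lowest: tighten dx
            dx /= 2.0
        x = min(max(x, xmin), xmax)   # always keep best x in bounds
    return x

xbest = minimize_1d(lambda x: (x - 5.0)**4, -10.0, 10.0)
```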
A simple version in Python is
optm.py   optm_py.out
optm.py running
minimum (x-5)**4 in -10 to +10 initial 2, 0.001, 0.001, 200
xbest= 5.007 , fval= 2.401e-09 ,cnt= 21
minimum (x-5)**4 in -10 to +10 initial 2 using default
xbest= 4.998271 , fval= 8.9367574925e-12 ,cnt= 35
Another version, found on the web, slightly modified
min_search.py   min_search_py.out
min of (x-5)**4, h= 0.1
xmin= 4.9984375
fmin= 5.96046447754e-12
n= 86 , sx= 4.996875 , xx= 5.003125
min of (x-5)**4, h= 0.01 # smaller initial h, not as good
xmin= 4.9975
fmin= 3.90625000037e-11
n= 704 , sx= 4.995 , xx= 5.005
min of (x-5)**4, h= 2.0 # smaller tolerance, better
xmin= 4.99993896484
fmin= 1.38777878078e-17
n= 47 , sx= 4.99987792969 , xx= 5.00012207031
Sound, voice, music, etc. can be in many compressed formats in
many file types, such as .wav .mp3 .aac .avi .mka .mpeg .vox ...
If your computer has a speaker and software, you can hear sound files.
On almost all browsers, click and hear sound files.
On Ubuntu, the programs aplay and Rhythmbox, and some languages, provide
for listening to sound files. There are a few programs that can be
downloaded that convert one type of sound file to another type
of sound file. 30,000 volume samples per second is common.
sudo apt install alsa-utils   # provides aplay
We will read a sound file, volume data at very small
time intervals, and use the FFT to get the frequency domain.
Then modify the frequency domain, IFFT to get volume, and convert
back to a .wav file.
A basic Fourier transform can convert a function in the time domain
to a function in the frequency domain. For example, consider a sound
wave where the amplitude is varying with time. We can use a discrete
Fourier transform on the sound wave and get the frequency spectrum.
This is short_count.wav one, two, three
short_count.wav
The basic FFT plotting the magnitude of about 31,000 bins
has zero frequency on left, highest frequency in middle,
and back to lowest frequency on right:
Using fftshift to center the zero frequency:
Then taking the log of the amplitude is typical
when plotting a spectrum:
Above made using:
plot_fft.m reads .dat amplitudes   wav_to_dat.c makes .dat from .wav
cat.wav, the sound file, was converted to cat.dat, an amplitude data file:
cat.wav sound   cat.dat amplitude in 18,814 time steps
Of course there is an inverse Fourier transform that converts
the frequency spectrum back to the time domain.
The spectrum of F(t)=sin(2 Pi f t) t=0..1 is a single frequency f.
The discrete transform assumes the function is repeated for all t.
The amplitude values must be sampled at equal time increments for
the spectrum to be valid. The maximum frequency that can be
computed, called the Nyquist Frequency, is one half the sampling rate.
Sampling at 10,000 samples per second has maximum frequency of 5kHz.
It turns out that a signal at some frequency has an amplitude and
a phase, or an in-phase and quadrature value. The convenient
implementation is to consider these values as complex numbers.
It turns out that some input can be collected as complex values and
thus most implementations of the FFT take an array of complex numbers
as input and produce the same size array of complex numbers as output.
For technical reasons, a simple FFT requires that the size of the input
and output be a power of 2. Special versions of the FFT, as found in
FFTW, the fastest Fourier transform in the west,
handle arbitrary sizes. We just pad our input data with zeros to make
the size of the array a power of 2.
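A minimal recursive radix-2 FFT in Python (an illustration of the idea; the codes below are the practical versions), including the zero padding just mentioned:

```python
import cmath
import math

def fft(a):
    """Cooley-Tukey radix-2 FFT; len(a) must be a power of 2.
    Complex input, complex output of the same length."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2])
    odd = fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * math.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def pad_pow2(a):
    """Pad with zeros up to the next power of 2."""
    n = 1
    while n < len(a):
        n *= 2
    return list(a) + [0j] * (n - len(a))

# a pure sine hitting bin 3 of a 16 point transform
n = 16
signal = [math.sin(2.0 * math.pi * 3.0 * i / n) for i in range(n)]
spectrum = fft(pad_pow2(signal))
# magnitude peaks at bins 3 and n-3 = 13, each about n/2 = 8
```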
Fourier transforms have many other uses such as image filtering and
pattern matching, convolution and understanding modulation.
A few (not fast) programs that compute Fourier Series coefficients for
unequally spaced samples, read from a file, are:
fourier.c   fourier.java   sin(2x) data sin2.dat   test on sine2.dat fourier_java.out
More detail of the basic Fourier transforms and series,
both continuous and discrete, is in fourier.pdf   square_wave_gen.c using series   square_wave_gen_c.out cos and sin
There are many many implementations of the FFT, the n log_2 n
complexity method of computing the discrete Fourier transform.
Two of the many methods are shown below.
One method has the coefficients precomputed in the source code.
A second method computes the coefficients as they are needed.
Precomputed constants for each size FFT
fft16.c fft16.adb   fft32.c fft32.adb   fft64.c fft64.adb   fft128.c fft128.adb   fft256.c fft256.adb   fft512.c fft512.adb   fft1024.c fft1024.adb   fft2048.c fft2048.adb   fft4096.c fft4096.adb
Precomputed constants of each size inverse FFT
ifft16.c ifft16.adb   ifft32.c ifft32.adb   ifft64.c ifft64.adb   ifft128.c ifft128.adb   ifft256.c ifft256.adb   ifft512.c ifft512.adb   ifft1024.c ifft1024.adb   ifft2048.c ifft2048.adb   ifft4096.c ifft4096.adb
Header file and timing program
fftc.h   fft_time.c   fft_time.adb   fft_time_c.out   fft_time_ada.out
A more general program that can compute the fft and inverse fft for
an n that is a power of two is:
fft.h   fft.c   fftest.c   fftin.dat binary data
fftin.out binary data
A more general program for the FFT of large data in Java is:
Cxfft.java   read_wav.java reads and writes .wav
read_wave_java.out output
these may be combined for use in HW5.
fft_wav.java transform and inverse
fft_frame.java with a very crude frequency plot
very crude frequency plot
For Matlab, fft_wav.m may be useful for HW5
fft_wav.m transform and inverse, frequency plot
Another version in "C" and Java, the Java demonstrates convolution
fftd.h   fftd.c   test_fft.c   test_fft_c.out   Cxfft.java   TestCxfft.java   TestCxfft.out
Python using downloaded numpy has fft
testfft.py testfft.out   fftwav.py3 fftwav.py3_out   write_ran_wav.py3   plotwav.py3   print_afile.py3
MatLab has fft as well as almost every other function
test_fft.m test_fft_m.out   test_fft2.m test_fft2_m.out
In order to get better accuracy on the predicted coefficients
for the square wave, triangle wave and sawtooth wave, 1024 points are used.
The series are shown as comments in the code.
Also shown is the anti-aliasing and normalization needed to
get the coefficients for the Fourier series.
test_fft_big.c   test_fft_big_c.out
Reading and Writing .wav files for HW5
Here is a program that just reads a .wav file and writes a copy,
and does a little printing. This, with extensions, can be the basis
of Homework 5: fftwav.py3 , fft1_wav.c . You may modify them
to suit your needs. These only work for single channel, 8 bit
per sample PCM .wav files (not all of the .wav files listed below).
Note: sound does not work from a remote computer, e.g. ssh
Some programs require special libraries or operating systems.
read_wav.c   read_wav.out   train1.wav   short_count.wav   rocky4.wav   cat.wav
Splitting out read and write wav:
rw_wav.h   rw_wav.c   test_rw_wav.c   test_rw_wav.out
And here are three .wav files from very small to larger for testing:
ok.wav   train.wav   roll.wav   fft1t.wav   short_count.wav   fft1c.wav
Suppose you wanted to compute a sound. Here is a generator for
a simple sine wave. It sounds cruddy.
sine_wav.c   sine_wav.out   sine.wav
Computing FFT of a .wav file for HW5
For the homework, one example using a 64 point FFT and just doing
the transform and inverse transform, essentially no change, is
fft1_wav.c   fft1_wav.out   trainf.wav   rockey2.wav   count_out.wav   trainf2.wav experiment
Hopefully you can find some more interesting .wav files.
P.S. When using MediaPlayer or QuickTime be sure to close the
file before trying to rewrite it.
Your web browser can usually play .wav files.
Use file:/// path to your directory /xxx.wav
In Java, ClipPlayer.java reads and
plays .wav and other files. The driver program is ClipPlayerTest.java
In Python, M$ only, with the WX graphics available, play .wav files with
sound.py
or on linux
linux_sound.py   rocky4.wav
In MatLab play .wav, old wave replaced by audio,
must be on local computer with speaker.
waveplay.m   short_count.wav
In MatLab, play .dat files that have a value for each
sample.
soundplay.m   short_count.dat
For students running Ubuntu, this sequence of "C" programs
demonstrates direct use of playing sampled amplitude sound.
These do not require generating a .wav or other type sound file.
pcm_min.c   pcm_sin.c   pcm_dat.c   Makefile_sound   short_count.dat   long_count.dat
You may modify any of these to suit your needs.
An interactive WEB site with a few functions and their spectra is
heliso.tripod.com/java_hls/gccs/spectrum1.htm
Here is Rocky speaking and the dynamic spectrum
rocky4.wav
This needs modifications for 2 channel and 16 bits per sample.
Now you can do homework 5
You may not have your own web page activated to play your .wav files:
do these commands:
cp my.wav ~/pub/www/
cd ~/pub/www/
chmod go+rx my.wav
cd ..
chmod go+rx www
cd ~
you are back in your login directory, and from home you can browse
http://www.gl.umbc.edu/~your-user-id
then click on my.wav
optional plots of FFT's using Python and C with gnuplot
plot_fft_py.html   plot_fft_c.html
Now sound in db:
cs455_l18a.html
Then FFT to determine material, molecules.
Summing a spectrum to get a wave shape: the Gibbs phenomenon for a square wave
wave = sum (1.0/g)*sin(g*x) g = 1, 3, 5, 7, 9, ... 75
plot_gibb.py3 source code   plot_gibb_py3.out output
Plotted results:
plot_gibb_1.png
plot_gibb_3.png
plot_gibb_5.png
plot_gibb_25.png
plot_gibb_75.png
Digital Filtering uses numerical computation rather than analog
components such as resistors, capacitors and inductors to
filter out frequency bands.
A low-pass filter will have a "cutoff frequency" f such that
frequencies above f will be attenuated and frequencies below f
will be passed. The filter does not produce a sharp dividing
line and all frequencies are changed in both amplitude and
phase angle.
A high-pass filter will have a "cutoff frequency" f such that
frequencies below f will be attenuated and frequencies above f
will be passed. The filter does not produce a sharp dividing
line and all frequencies are changed in both amplitude and
phase angle.
A band-pass filter will pass frequencies between f1 and f2,
attenuating frequencies below f1 and attenuating frequencies
above f2. f1 <= f2.
The signal v(t)=sin(2 Pi f t) t=t0, t1, ..tn is a single frequency f.
For digital filtering we assume the digital value of v(ti) is
sampled at uniformly spaced times t0, t1,..., tn.
For an order m filter, with one based subscripts:
y(i) = ( a(1)*x(i) + a(2)*x(i-1) + a(3)*x(i-2) + ... + a(m+1)*x(i-m)
- b(2)*y(i-1) - b(3)*y(i-2) - ... - b(m+1)*y(i-m) )/b(1)
With array x and y in memory, the MatLab code for computing y(1:n) could be:
for i=1:n
y(i)=a(1)*x(i);
for j=1:m
if j>=i
break
end
y(i)=y(i) + a(j+1)*x(i-j) - b(j+1)*y(i-j);
end
y(i)=y(i)/b(1); % not needed if b(1) equals 1.0
end
or, use MatLab y = filter(a, b, x); % less typing
For an order m filter, with zero based subscripts:
y[i] = ( a[0]*x[i] + a[1]*x[i-1] + a[2]*x[i-2] + ... + a[m]*x[i-m]
- b[1]*y[i-1] - b[2]*y[i-2] - ... - b[m]*y[i-m] )/b[0]
With array x and y in memory, the C code for computing y[0] to y[n-1] could be:
for(i=0; i<n; i++) {
y[i]=a[0]*x[i];
for(j=1; j<=m; j++)
{
if(j>i) break;
y[i]=y[i] + a[j]*x[i-j] - b[j]*y[i-j];
}
y[i]=y[i]/b[0]; /* not needed if b[0]==1 */
}
For reading x samples and writing filtered y values:
read next x sample and compute y and output y
(the oldest x and y in RAM are deleted and the new x and y saved.
Typically use a circular buffer [ring buffer] so that the saved x and y
values do not have to be moved [many uses of modulo in this code].)
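That streaming loop, with ring buffers and the modulo indexing just mentioned, can be sketched in Python (names and the tiny moving-average test are mine):

```python
def stream_filter(a, b, samples):
    """Order-m filter applied one sample at a time, keeping only the
    last m inputs and outputs in ring buffers (modulo indexing),
    matching the zero-based difference equation above."""
    m = len(a) - 1
    xbuf = [0.0] * (m + 1)   # recent inputs,  slot i % (m+1)
    ybuf = [0.0] * (m + 1)   # recent outputs, slot i % (m+1)
    out = []
    for i, x in enumerate(samples):
        xbuf[i % (m + 1)] = x
        acc = a[0] * x
        for j in range(1, m + 1):
            if j > i:                  # no history yet at the start
                break
            acc += a[j] * xbuf[(i - j) % (m + 1)]
            acc -= b[j] * ybuf[(i - j) % (m + 1)]
        y = acc / b[0]
        ybuf[i % (m + 1)] = y
        out.append(y)
    return out

# two-point moving average (an FIR case: no feedback terms)
ys = stream_filter([0.5, 0.5], [1.0, 0.0], [1.0, 1.0, 1.0, 3.0])
```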
Given a periodic input, filters require a number of samples to build up
to a steady state value. There will be amplitude change and phase change.
Typical symbol definitions related to digital filters include:
ω = 2 π f
z = e^(jωt)   digital frequency
s = jω   analog frequency
z^-1 is a unit delay (previous sample)
IIR, Infinite Impulse Response filter, or just recursive filter.
In transfer function form
 y     a0 + a1 z^-1 + a2 z^-2 + a3 z^-3 + ...
--- = ----------------------------------------
 x     b0 + b1 z^-1 + b2 z^-2 + b3 z^-3 + ...
s = c (1-z^-1)/(1+z^-1)   bilinear transform
db = 10 log10 (Power_out/Power_in) decibel for power
db = 20 log10 (Voltage_out/Voltage_in) decibel for amplitude
(note that db is based on a ratio, thus amplitude ratio may
be either voltage or current. Power may be voltage squared,
current squared or voltage times current. More on db at end.)
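The two db formulas in a small Python sketch (illustrative):

```python
import math

def db_power(p_out, p_in):
    """Decibels from a power ratio."""
    return 10.0 * math.log10(p_out / p_in)

def db_amplitude(v_out, v_in):
    """Decibels from an amplitude (voltage or current) ratio."""
    return 20.0 * math.log10(v_out / v_in)

gain_p = db_power(2.0, 1.0)       # doubling power: about +3.01 db
gain_v = db_amplitude(2.0, 1.0)   # doubling voltage: about +6.02 db
```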
A Java program is shown that computes the a and b coefficients for
Butterworth and Chebyshev digital filters. Options include Low-Pass,
Band-Pass and High-Pass for order 1 through 16. Click on the sequence:
Design, Poles/Zeros, Frequency response, coefficients to get output
similar to the screen shots shown below.
The Pole/Zero plot shows the complex z-plane with a unit circle.
The poles (x) and zeros (o) may be multiple.
The frequency response is for the range 0 to 4000 Hz based on
the 8000 Hz sampling. 0 db is at the top, the bottom is the
user selected range, -100 db is the default.
The coefficients are given with zero based subscripts.
(These subscripts may be used directly in C, C++, Java, etc.
add one to the subscript for Fortran, MatLab and diagrams above.)
Note that MatLab has 'wavread' and 'wavwrite' functions as well
as a 'sound' function to play .wav files. 'wavwrite' writes
samples at 8000 Hz using the default.
The files to compile and run the Digital Filter coefficients are:
IIRFilterDesign.java   IIRFilter.java   PoleZeroPlot.java   GraphPlot.java   make_filter.bat   Makefile_filter   mydesign.out
This is enough for you to be able to program and use simple
digital filters. This lecture just scratched the surface.
There are many types of digital filters with many variations.
There are complete textbooks on just the subject of digital filters.
The output from the low pass filter shown above "builds up and settles"
quickly:
A fourth order band pass filter with band 2000 to 2100 requires
more samples for the center frequency 2050 to build up and settle:
Some notes on db
+3 db is twice the power
-3 db is half the power
+10 db is ten times the power
-10 db is one tenth the power
+30 db is 1000 times the power, 10^3
-30 db is 1/1000 of the power, 10^-3
+60 db is one million times the power, 10^6
-60 db is very small
Sound is measured to average human hearing threshold,
technically 20 micropascals of pressure by definition 0 db.
Some approximate examples for sound from a close source:
0 db threshold of hearing
20 db whisper
60 db normal conversation
80 db vacuum cleaner
90 db lawn mower
110 db front row at rock concert
130 db threshold of pain
140 db military jet takeoff, gun shot, fire cracker
180 db damage to structures
194 db shock waves begin, not normal sound
The sound drops off with distance by the inverse square law.
Each time you move twice as far away from the source,
the sound is 1/4 as strong.
The effective loudness decreases for very short bursts of sound.
160 db for 1/100 second is about as loud as 140 db for 1 second.
Sample db vs power code
dbtopower.c source   dbtopower_c.out results   dbtopower.py source   dbtopower_py.out results
Many examples of spectra of molecules are presented on the
following pages. The spectra can be used to identify the
presence of specific molecules. Note that the same chemical
designation may result in more than one molecular structure,
and the resulting spectra may be different.
Basic physics:
c = velocity of light = about 300,000,000 meters per second
f = frequency in Hz; kilo, KHz = 1000 Hz; mega, MHz = 10^6 Hz;
giga, GHz = 10^9 Hz; tera, THz = 10^12 Hz
λ = wavelength in meters; μ = 10^-6 meters = 1000 nanometers
c = λ * f
λ = c / f
f = c / λ
Example: wavelength 1 μ has frequency, f, computed by
f = 300*10^6 / 10^-6 = 300*10^12 Hz = 300 THz
The "visible" spectrum, that which can be seen by average people,
is roughly given by wavelength in nm = nanometers, or 10^-9 meters.
The following pages show the spectrum of many molecules.
The Infrared spectrum has wavelength μ in micrometers, 10^-6 meters.
The notation Frequency, cm^-1 (wavenumber), must be multiplied by 30*10^9
to get Hz.
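These conversions in a small Python sketch (the constant and function names are mine):

```python
C = 2.998e8   # speed of light, meters per second (approximately)

def wavelength_to_hz(lam_meters):
    """f = c / lambda."""
    return C / lam_meters

def wavenumber_cm_to_hz(k_per_cm):
    # 1 cm^-1 corresponds to c in cm per second, about 3*10^10 Hz
    return k_per_cm * C * 100.0

f_1micron = wavelength_to_hz(1.0e-6)   # about 3*10^14 Hz = 300 THz
```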
The Nmr, Nuclear magnetic resonance, data is in Nmr units and Hz.
molecular.pdf Molecular frequency response
Go over WEB pages Lecture 11 through 18.
Open book, open notes, no study guide or copy.
Read the instructions.
Follow the instructions, else lose points.
Read the question carefully.
Answer the question that is asked, not the question you want to answer.
Some things you should know:
The roots of a polynomial with real coefficients can be complex.
Some languages have "complex" as a built in type (e.g. Fortran).
Many languages have complex arithmetic available: Ada, Fortran,
MatLab, Mathematica, Python.
Some languages provide standard packages for "complex" (e.g. Ada and C++)
Some languages require you find or make your own "complex" (e.g. C)
All elementary functions are defined and computable for complex arguments.
All complex elementary functions may be computed with only real functions.
Example: sin(z)=sin(re z)*cosh(im z) + i cos(re z)*sinh(im z)
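That identity is easy to verify against a library complex sine; a Python sketch:

```python
import cmath
import math

def sin_from_reals(z):
    """sin(x+iy) = sin(x)*cosh(y) + i*cos(x)*sinh(y), real functions only."""
    x, y = z.real, z.imag
    return complex(math.sin(x) * math.cosh(y), math.cos(x) * math.sinh(y))

z = complex(1.2, -0.7)
# sin_from_reals(z) matches cmath.sin(z) to machine precision
```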
Many languages have either built in or available complex elementary
functions, e.g. Ada, C++, Fortran, Matlab, Mathematica, Java, Python
and some others.
Some complex elementary functions are difficult to compute accurately.
Some elementary functions have a branch cut in the complex plane.
The branch cut is often along the negative real axis.
The derivative is not defined across the branch cut.
Eigenvalues must satisfy several definitions.
There are exactly n eigenvalues (counting multiplicity) for every n by n matrix.
The eigenvalues of a matrix of real elements may be complex.
The determinant of a matrix with a variable subtracted from each
diagonal element yields the characteristic equation of the matrix.
e.g. det|A - eI| = c_n*e^n + ... + c_1*e + c_0
The roots of the characteristic equation are eigenvalues.
det|A - eI|=0 is one definition of eigenvalues.
Given a polynomial equation, the coefficients of the polynomial may
be placed in a matrix such that the eigenvalues of the matrix are
the roots of the polynomial.
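As a tiny illustration of the companion matrix idea (using power iteration, which only finds the dominant eigenvalue; real eigensolvers such as LAPACK's do much more):

```python
def companion(c):
    """Companion matrix of the monic polynomial
    x^n + c[0]*x^(n-1) + ... + c[n-1]; its eigenvalues are the roots."""
    n = len(c)
    m = [[0.0] * n for _ in range(n)]
    for j in range(n):
        m[0][j] = -c[j]
    for i in range(1, n):
        m[i][i - 1] = 1.0
    return m

def dominant_eigenvalue(m, steps=200):
    """Power iteration: repeated multiply and rescale."""
    n = len(m)
    v = [1.0] + [0.0] * (n - 1)
    lam = 1.0
    for _ in range(steps):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w, key=abs)
        v = [wi / lam for wi in w]
    return lam

# x^2 - 3x + 2 = (x-1)(x-2): the dominant root is 2
lam = dominant_eigenvalue(companion([-3.0, 2.0]))
```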
An eigenvector exists for every eigenvalue. Av = ev
For a symmetric (more generally, normal) n by n matrix it is always
possible to find a set of orthogonal eigenvectors that form the basis
of an n dimensional space.
Eigenvectors may be complex when the matrix elements and the
eigenvalues are real.
The eigenvectors of the identity matrix are usually chosen as the
unit vectors [1 0 ... 0] [0 1 0 ... 0] ... [0 ... 0 1]
The equation A v = e v must be satisfied for every eigenvalue, e,
and its corresponding eigenvector, v. |v| non zero.
There are many possible tests for a program that is supposed to compute
eigenvalues and eigenvectors.
MatLab may not produce unique eigenvectors using [v e]=eig(a)
MatLab may not produce eigenvectors that are mutually orthogonal.
LAPACK is a source of many numerical algorithms implemented in Fortran.
LAPACK code may be converted to any other language.
LAPACK object files may be used with most languages via some interface.
LAPACK source code needs the BLAS, Basic Linear Algebra Subroutines.
The BLAS may be available, optimized, for your computer.
LAPACK is available with files to compile and install on
many computers from www.netlib.org/lapack
TOMS is numerical source code from the ACM Transactions on Mathematical
software and may be found on www.netlib.org/toms
Some problems require more than double precision floating point.
"gmp" is the gnu multiple precision library of "C" functions.
gmp is available for free download from the Internet.
gmp provides multiple precision integer, rational and floating point.
Any language can implement multi-precision, Java has it available,
other languages use libraries.
52! has too many significant digits to exactly represent in double
precision floating point.
Solving large systems of equations may require multi-precision in order
to get reasonable results.
A simple example of the need for complex roots is x^2+1 = 0
with roots i and -i .
The roots of high order polynomials may be difficult to compute
accurately, e.g. x^16 + 5 = 0 .
There is no algorithm that can guarantee finding all the
roots of every polynomial to a specified accuracy.
Newton's method works most of the time for finding complex roots.
z_next = z - P(z)/P'(z)
(Using a different initial guess usually fixes the problem.)
Optimization is finding the values of independent variables such
that a function is minimized.
Optimizing the absolute value of a function finds the values of
the independent variables that make the function closest to zero.
There is no algorithm that can guarantee finding the optimum
value of every function to a required accuracy.
There are many methods of implementing code to find local minima.
The code varies depending on the assumption of a differentiable
function and on the assumption of no local minima.
The spiral trough is one test case that can be used to test
optimization code.
A Fourier transform of a finite set of data assumes that the
data is repeated infinitely many times before and after the data.
An FFT, Fast Fourier Transform, implements a discrete Fourier transform
using order n log n computations for an n point transform. Typically
n must be a power of 2.
The FFT of real data may be complex.
The inverse FFT of the result of an FFT will be reasonably close
to the original data.
The Discrete Fourier transform and FFT of data sampled at time
steps dt will be the frequency spectrum with highest unaliased
frequency 1/(2*dt), the Nyquist frequency.
Digital Filters can be low-pass, high-pass, or band-pass.
A band-pass filter requires two frequencies.
The cutoff frequency is typically 3db down (half power) and
the amplitude decreases faster for higher order filters.
Digital Filters can be described by a set of "a" coefficients
for the numerator and "b" coefficients for the denominator.
Sound power is measured in db, decibel. 0 db for minimum
threshold of hearing. db increases as 10 log10(new_power/old_power)
60 db is 100 times the power of 40 db.
Sound can hurt, 130 db, and greater values may damage buildings.
New applications are happening. Not on the exam,
yet may be of interest:
FFT used on light
Multiple choice and short answer.
No help from other students, that is cheating!
Read the quiz carefully. You may think you saw a question
before and know the answer. But the question may have changed.
Dependent and independent are opposite.
"No" or "not" may have been added or deleted.
"some" is different from "all". Numbers can change.
Have all homework 1, 2, 3, 4 finished.
hw4 requires two numbers, both integers, one big 52!
You must download quiz2
last name a-j a, last name k-r b, last name s-z c ?
cp /afs/umbc.edu/users/s/q/squire/pub/download/q2_455a.doc
cp /afs/umbc.edu/users/s/q/squire/pub/download/q2_455b.doc
cp /afs/umbc.edu/users/s/q/squire/pub/download/q2_455c.doc
Use Microsoft Word or libreoffice to edit.
Then on linux.gl.umbc.edu
submit cs455 quiz2 q2_455?.doc or .docx
A computer benchmark will typically be some code that is executed
and the running time measured.
A few simple rules about benchmarks:
1) Do not believe or trust any person, any company, any data.
2) Expect the same code to give different times on:
different operating systems,
different compilers,
different computers from various manufacturers
(IBM, Sun, Intel, AMD) even at same clock speed,
(IBM Power fastest, AMD next fastest with same memory, cache)
different languages, even for line by line translation.
3) If you want to measure optimization, turn it on,
otherwise prevent all optimization.
(Most compilers provide optimization choices)
(Add code to prevent inlining of functions, force store)
4) You will probably be using your computer's clock to measure time.
Test that the clock is giving valid results for the language
you are using. The constant CLOCKS_PER_SEC in the "C" header
file time.h has been found to be wrong.
One manufacturer put a capacitor across the clock circuitry
on a motherboard and all time measurements were half the
correct time. See sample test below.
5) For measuring short times you will need to use the
"double difference method". This method can be used to
measure the time of a single instruction. This method
should be used for any benchmark where one iteration of
the code runs in less than a second. See sample test below.
6) Some methods of measuring time on a computer are only
accurate to one second. Generally run enough iterations of
your code in loops to get a ten second measurement.
Some computers provide a real time clock as accurate as
one microsecond, others one millisecond and some poorer than
a fiftieth of a second.
7) Turn off all networking and stop all software that might run
periodically. If possible, run in single user mode. You want to
measure your code, not a virus checker or operating system.
I once did measurement on a Sun computer running Solaris. It
seemed to slow down periodically. I found that the operating
system periodically checked to see if any disk files needed
to be written.
8) If you are interested in how fast your application might run
on a new computer, find reputable benchmarks that are for
similar applications. I do a lot of numerical computation, thus
all my benchmarks are heavily floating point. You may be
more interested in disk performance or network performance.
9) Do not run on all zero data. Some compilers are very smart and
may precompute your result without running your code.
Be sure to use every result. Compilers do "dead code elimination"
that checks for code where the results are not used and just
does not produce instructions for that "dead code." An "if" test
or printing out the result is typically sufficient. For vectors
and arrays, usually printing out one element is sufficient.
10) It helps to be paranoid. Check that you get the same results
by running n iterations, then 2n iterations. If the time did
not double, you do not have a stable measurement. Run 4n and 8n
and check again. It may not be your benchmark code, it may be
an operating system activity.
11) Do not run a benchmark across midnight. Most computers reset
the seconds to zero at midnight.
12) Keep values of time as double precision numbers.
13) Given an algorithm where you can predict the time increase
as the size of data increases: e.g. FFT is order n log2 n,
multiplying a matrix by a matrix is order n^3, expect
non uniform results for some values of n.
Consider the case where all your code and all your data fit
in the level one caches. This will be the fastest.
Consider when you data is much larger than the level one cache
yet fits in the level two cache. You are now measuring the
performance of the level two cache.
Consider when your data fits in RAM but is much larger than
your level two (or three) cache. You are now measuring the speed
of your code running in RAM.
Consider when your data is much larger than your RAM, you are
now running in virtual memory from your disk drive. This will
be very slow and you are measuring disk performance.
In order to check the hardware and software time from your computer's
clock, run the following two programs. Translate to the language
of your choice. The concept is to have the program display the number
of seconds every 5 seconds for a minute and a half. You check
the display against a watch with a second hand.
The first code uses "clock()" for process time and the second code
uses "time()" for wall clock time.
/* time_cpu.c see if C time.h clock() is OK */
/* in theory, this should be only your processes time */
#include <stdio.h>
#include <time.h>
int main()
{
    int i;
    double now;
    double next;

    printf("Should be wall time, 5 second intervals \n");
    now = (double)clock()/(double)CLOCKS_PER_SEC;
    next = now+5.0;
    for(i=0; i<90;)
    {
        now = (double)clock()/(double)CLOCKS_PER_SEC;
        if(now >= next)
        {
            printf("%d\n",i);
            i=i+5;
            next=next+5.0;
        }
    }
    return 0;
}
(If time_cpu runs slow, your process is being interrupted.)
time_cpu_c.out
/* time_test.c see if C time.h time(NULL) is OK */
#include <stdio.h>
#include <time.h>
int main()
{
    int i;
    double now;
    double next;

    printf("Should be wall time, 5 second intervals \n");
    now = (double)time(NULL);
    next = now+5.0;
    for(i=0; i<90;)
    {
        now = (double)time(NULL);
        if(now >= next)
        {
            printf("%d\n",i);
            i=i+5;
            next=next+5.0;
        }
    }
    return 0;
}
time_test_c.out
The "Double Difference Method" tries to get accurate measurement
for very small times. The code to time a single floating point
add instruction is shown below. The principle is:
measure time, t1
run a test harness with loops that has everything except the code
that you want to time. Count the number of executions as a check.
measure time, t2
measure time, t3
run exactly the same code from the test harness with only the
feature you want to measure added. Count number of executions.
measure time, t4
check that the number of executions is the same.
check that t2-t1 was more than 10 seconds
the time for the feature you wanted to measure is
t5 = ((t4 - t3) - (t2 - t1))/ number of executions
basically measured time minus test harness time divided by the
number of executions.
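In outline, the double difference looks like the following Python sketch
(function names are illustrative; it uses time.process_time, and the rep
count here is tiny — a real run loops until t2-t1 exceeds ten seconds):

```python
import time

def harness(reps):
    """Everything except the code being timed."""
    count = 0
    for _ in range(reps):
        count += 1
    return count

def harness_plus_add(reps):
    """The same harness with only the floating point add included."""
    count = 0
    a, b = 0.0, 0.1
    for _ in range(reps):
        count += 1
        a = a + b        # the feature being measured
    return count, a

reps = 100000            # a real run uses enough reps for 10+ seconds
t1 = time.process_time()
c1 = harness(reps)
t2 = time.process_time()
t3 = time.process_time()
c2, a = harness_plus_add(reps)
t4 = time.process_time()
assert c1 == c2 == reps  # check that the number of executions matches
t5 = ((t4 - t3) - (t2 - t1)) / reps   # time per add, harness removed
print(a)                 # use the result to block dead code elimination
```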
/* time_fadd.c try to measure time of double A = A + B; */
/* roughly time of one floating point add */
/* using double difference and minimum and stability */
#include <time.h>
#include <stdio.h>
#include <math.h>
#define dabs(a) ((a)<0.0?(-(a)):(a))
void do_count(int * count_check, int rep, double * B);
int main(int argc, char * argv[])
{
    double t1, t2, t3, t4, tmeas, t_min, t_prev, ts, tavg;
    double A, B, Q;
    int stable;
    int i, j;
    int count_check, outer;
    int rep, min_rep;

    t_min = 10.0;   /* 10.0 seconds typical minimum measurement time */
    Q = 5.0;        /* 5.0 typical approximate percentage stability */
    min_rep = 32;   /* minimum of 32 typical */
    outer = 100000; /* some big number */
    printf("time_fadd.c \n");
    printf("min time %g seconds, min stability %g percent, outer loop=%d\n",
           t_min, Q, outer);
    stable = 5; /* max tries */
    t_prev = 0.0;
    for(rep=min_rep; rep<100000; rep=rep+rep) /* increase until good measure */
    {
        A = 0.0;
        B = 0.1;
        t1 = (double)clock()/(double)CLOCKS_PER_SEC;
        for(i=0; i<outer; i++) /* outer control loop */
        {
            count_check = 0;
            for(j=0; j<rep; j++) /* inner control loop */
            {
                do_count(&count_check, rep, &B);
            }
        }
        t2 = (double)clock()/(double)CLOCKS_PER_SEC;
        if(count_check != rep) printf("bad count_check_1 %d \n", count_check);
        A = 0.0;
        t3 = (double)clock()/(double)CLOCKS_PER_SEC;
        for(i=0; i<outer; i++) /* outer measurement loop */
        {
            count_check = 0;
            for(j=0; j<rep; j++) /* inner measurement loop */
            {
                do_count(&count_check, rep, &B);
                A = A + B; /* item being measured, approximately FADD time */
            }
        }
        t4 = (double)clock()/(double)CLOCKS_PER_SEC;
        if(count_check != rep) printf("bad count_check_2 %d \n", count_check);
        tmeas = (t4-t3) - (t2-t1); /* the double difference */
        printf("rep=%d, t measured=%g \n", rep, tmeas);
        if((t4-t3)<t_min) continue; /* need more rep */
        if(t_prev==0.0)
        {
            printf("tmeas=%g, t_prev=%g, rep=%d \n", tmeas, t_prev, rep);
            t_prev = tmeas;
        }
        else /* compare to previous */
        {
            printf("tmeas=%g, t_prev=%g, rep=%d \n", tmeas, t_prev, rep);
            ts = 2.0*(dabs(tmeas-t_prev)/(tmeas+t_prev));
            tavg = 0.5*(tmeas+t_prev);
            if(100.0*ts < Q) break; /* above minimum and stable */
            t_prev = tmeas;
        }
        stable--;
        if(stable==0) break;
        rep = rep/2; /* hold rep constant */
    } /* end loop increasing rep */
    /* stable? and over minimum */
    if(stable==0) printf("rep=%d unstable \n", rep);
    if(tmeas<t_min) printf("time measured=%g, under minimum \n", tmeas);
    printf("raw time=%g, fadd time=%g, rep=%d, stable=%g%% \n\n", tmeas,
           (tavg/(double)outer)/(double)rep, rep, ts);
    return 0;
} /* end time_fadd.c */

/* do_count to prevent dead code elimination */
void do_count(int * count_check, int rep, double * B)
{
    (*count_check)++;
    /* could change B but probably don't have to. */
}
time_fadd_sgi.out time_fadd_c_sun.out time_fadd_c_mac.out
If you have a 64-bit computer and more than 4GB of RAM, here is a
test you may want to run in order to check that your operating
system and compiler are both 64-bit capable:
big_malloc.c
big_malloc.c running, 10 malloc and free 1GB
about to malloc and free 1GB the 1 time
about to malloc and free 1GB the 2 time
about to malloc and free 1GB the 3 time
about to malloc and free 1GB the 4 time
about to malloc and free 1GB the 5 time
about to malloc and free 1GB the 6 time
about to malloc and free 1GB the 7 time
about to malloc and free 1GB the 8 time
about to malloc and free 1GB the 9 time
about to malloc and free 1GB the 10 time
about to malloc and free 2000000000
about to malloc and free 3000000000
about to malloc and free 4000000000
about to malloc and free 5000000000
about to malloc and free 6000000000
about to malloc and free 7000000000
try calloc on 800000000 items of size 8 6.4GB
successful end big_malloc
A 28,000 by 28,000 matrix of double requires about 6.4GB of RAM.
In general, 64-bit is currently supported for the "C" type long
rather than int. Hopefully this will change as most desktop and
laptop computers become 64-bit capable. A small test of various
types and the size in bytes of the types and objects of various
types is:
big.c and its output (older OS)
big.c compiled gcc -m64 -o big big.c (64 bit long)
sizeof(int)=4, sizeof(int1)=4
sizeof(long)=8, sizeof(long1)=8
sizeof(long long)=8, sizeof(llong1)=8
sizeof(float)=4, sizeof(fl1)=4
sizeof(double)=8, sizeof(d1)=8
sizeof(size_t)=8, sizeof(sz1)=8
sizeof(int *)=8, sizeof(p1)=8
n factorial with n of type long
0 ! = 1
1 ! = 1
2 ! = 2
3 ! = 6
4 ! = 24
5 ! = 120
6 ! = 720
7 ! = 5040
8 ! = 40320
9 ! = 362880
10 ! = 3628800
11 ! = 39916800
12 ! = 479001600
13 ! = 6227020800
14 ! = 87178291200
15 ! = 1307674368000
16 ! = 20922789888000
17 ! = 355687428096000
18 ! = 6402373705728000
19 ! = 121645100408832000
20 ! = 2432902008176640000
21 ! = -4249290049419214848 BAD!
22 ! = -1250660718674968576
23 ! = 8128291617894825984
24 ! = -7835185981329244160
big12.c and its output (newer OS)
Note: sizeof now yields a long (size_t), so printing it needs %ld rather than %d
big12.c compiled gcc -o big12 big12.c (64 bit long)
-m64 needed on some older OS
sizeof(sizeof(int))=8, sizeof needs pct ld
sizeof(int)=4, sizeof(int1)=4
sizeof(long)=8, sizeof(long1)=8
sizeof(long long)=8, sizeof(llong1)=8
sizeof(float)=4, sizeof(fl1)=4
sizeof(double)=8, sizeof(d1)=8
sizeof(size_t)=8, sizeof(sz1)=8
sizeof(int *)=8, sizeof(p1)=8
n factorial with n of type long (same)
...
trying to malloc 10GB
malloc returned
stored 10GB of 'a'
Some information on the long history of 64-bit computers:
Other code to find out how big an array can be in your
favorite language on your computer:
big_mem.f90 and its output
big_mem.java and its output
big_mem.c and its output
big_mem.adb and its output
Then, more like big_malloc.c, Ada "new"
source code big_new.adb, output from a PC big_new_ada.out, output from a MAC big_new_mac.out, output from an AMD big_new_amd.out
Yes, 50,000 by 50,000 8-byte floating point, not sparse.
20GB of RAM in one matrix.
Now do HW6
Project writeup
Use a plotting utility to look at the shape of the function. sample shown
Do a global search to find a candidate starting point, first xbest, ybest.
-1.0 < x < 1.0, -1.0 < y < 1.0 for this project
You need dx, dy of 0.001 to get into the well of the global minimum.
The global minimum value must be between -3 and -4 by analysis of the equation.
You saved best as xbest, ybest, zbest. OK to print.
Top view f(x-dx,y+dy) f(x,y+dy) f(x+dx,y+dy) x is xbest, y is ybest
f(x-dx,y) f(x,y) f(x+dx,y) f(x,y) smallest yet
f(x-dx,y-dy) f(x,y-dy) f(x+dx,y-dy)
Refine from your search result: get the set of eight surrounding
values and compare them with the zbest value found, then
refine dx and dy: dx = dx/2, dy = dy/2
example:
*
* |
| | * *
| * | | * |
* | | | | | |
| | | 0 | | | (x+dx,y+dy)
| | | | | | * |
| | | (x,y) | | | (x+dx,y-dy)
| | | | | |
(x-dx,y-dy) | (x,y-dy) (x,y+dy) | (x-dx,y+dy)
| |
(x-dx,y) (x+dx,y)
If none of the 8 z's are smaller, xbest and ybest do not change.
dx = dx/2, dy= dy/2, quit if dx and dy are smaller than desired accuracy.
Use smallest of 8 and adjust xbest,ybest to midpoint between old x,y and
best (x+/-dx, y+/-dy), 8 possible cases.
e.g. (x-dx,y+dy) was smallest:
xbest = xbest-dx
dx = dx/2
ybest = ybest+dy
dy = dy/2
It is OK to print and see progress, typically not at every step.
Or, given a good candidate starting point, use an available minimization
code, typically with "optm" in the file name.
Or, use the previously described expanding and contracting search
that used pseudo derivatives, Lecture 17.
To get high accuracy, 100 digits, requires that the function be
evaluated using multiple precision, covered in an earlier lecture.
Typically, if 100 digit accuracy is desired, then all computation
would be performed at 110 to 200 digit accuracy. This is slow yet
not difficult for sin(x) and exp(x) for small values of x.
exp(x)=1 + x + x^2/2! + x^3/3! + x^4/4! + ...
For |x|<1 you need the nth term where n! > 10^110 or n=75
Or, you can use range reduction and a much shorter series
normalize and use series
e^x = e^j * e^xc integer j, xc = x-j
if xc<1/4 series(xc), if xc<1/2 series(xc/2)^2, else series(xc/4)^4
series 1 + x + x^2/2! + x^3/3! + ... be sure to use multiple precision
Similarly for sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...
Similarly for cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ...
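As a sketch, the exp series can be summed with Python's decimal module,
working at guard precision above the target digits (here 60 digits of
working precision for 50 good digits; 80 terms is more than enough for |x| < 1):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60   # guard precision above the 50 digits wanted

def exp_series(x, terms=80):
    """exp(x) = 1 + x + x^2/2! + x^3/3! + ..., summed in multiple precision."""
    x = Decimal(x)
    term = Decimal(1)
    total = Decimal(1)
    for n in range(1, terms):
        term = term * x / n          # x^n / n! built incrementally
        total += term
    return total

approx = exp_series("0.5")
exact = Decimal("0.5").exp()         # decimal's built-in, for comparison
assert abs(approx - exact) < Decimal(10) ** -50
```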
For sqrt(x) make a guess y=1, then iterate y=(y+x/y)/2
until |y-y_previous| < 10^-110
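The sqrt iteration, sketched with Python's decimal module at 60 digits
(an iteration cap is added as a guard against non-termination):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60

def mp_sqrt(x):
    """Newton iteration y = (y + x/y)/2 in multiple precision."""
    x = Decimal(x)
    y = Decimal(1)                    # initial guess
    for _ in range(200):              # guard: bounded number of iterations
        y_prev = y
        y = (y + x / y) / 2
        if abs(y - y_prev) < Decimal(10) ** -55:
            break
    return y

r = mp_sqrt(2)
assert abs(r * r - 2) < Decimal(10) ** -50
```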
Actually, not quite that easy. You must use range reduction
to get |x|<1 and better if |x| < 1/2.
e.g. for sin(x): if abs(x)>2Π then x=x rem 2Π
if x>Π then x=x-2Π
if x<-Π then x=x+2Π
if x>Π/2 then x=Π-x
if x<-Π/2 then x=-Π-x now abs(x)<Π/2 1.57
use sin(x)=2 sin(x/2) cos(x/2) on x/2<Π/4 0.78
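A sketch of this range reduction in double precision (the same steps
apply in multiple precision), checked against math.sin:

```python
import math

def reduce_for_sin(x):
    """Range-reduce x so |x| <= pi/2 with sin(x) unchanged."""
    x = math.fmod(x, 2*math.pi)      # x rem 2 pi
    if x > math.pi:
        x -= 2*math.pi
    if x < -math.pi:
        x += 2*math.pi
    if x > math.pi/2:                # sin(pi - x) = sin(x)
        x = math.pi - x
    if x < -math.pi/2:               # sin(-pi - x) = sin(x)
        x = -math.pi - x
    return x

for v in (0.3, 2.0, 4.0, -5.0, 123.456):
    r = reduce_for_sin(v)
    assert abs(r) <= math.pi/2 + 1e-12
    assert abs(math.sin(r) - math.sin(v)) < 1e-12
```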
Several of the implementations to 200 digits, that I have programmed:
generic_digits_arithmetic.adb mpf_sin_cos.c mpf_exp.c mpf_exp.h mpf_base.c mpf_math.h Big_math.java test_Big_math.java Makefile_proj
Obviously, you find a good starting point with global search
using only double precision and
then use a reasonable optimization method to get a good
starting x,y before going to multiple precision computation.
Print, at least the first few, multiple precision values of
x, y, and function and compare to your code that used
double precision floating point. C, Java, Python, Matlab, etc.
Here is why you must do a global search:
z = exp(sin(50.0*x)) + sin(60.0*exp(y)) +
sin(70.0*sin(x)) + sin(sin(80.0*y)) -
sin(10.0*(x+y)) + (x*x+y*y)/4.0
check your equation when doing the project:
for x = -0.0245 y = 0.2105 z should be close to = -3.3067
z = f(x,y) typically set x = xbest, y= ybest, then (x+dx,y) etc.
save z, then zbest = z
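A sketch of the shrinking 8-neighbor refinement in Python, on the sample
function above, starting from the given check point (the coarse global
search pass is omitted here):

```python
import math

def f(x, y):
    # the sample function given above
    return (math.exp(math.sin(50*x)) + math.sin(60*math.exp(y))
            + math.sin(70*math.sin(x)) + math.sin(math.sin(80*y))
            - math.sin(10*(x + y)) + (x*x + y*y)/4)

x, y = -0.0245, 0.2105                   # candidate from the global search
z = f(x, y)
assert abs(z - (-3.3067)) < 0.001        # matches the check value above

dx = dy = 0.001
while dx > 1e-12:
    best = None
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            zn = f(x + i*dx, y + j*dy)
            if zn < z:                   # keep the smallest of the 8
                z, best = zn, (i, j)
    if best:
        x, y = x + best[0]*dx, y + best[1]*dy
    dx, dy = dx/2, dy/2                  # halve the step each pass

print(x, y, z)   # close to the global minimum, near -3.3069
```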
When using java, you may need many compilations:
optmn_run.make
Project writeup
Another problem that requires minimization needs physics:
space satellite problem
One possible simple solution:
satellite.java source code, satellite_java.out solution, satellite.py3 source code, satellite_py3.out solution
Could get closer, not needed for launch of satellite.
The function to find the minimum changes every semester.
See project page for this semester's f(x,y)
Some volumes and areas you should already know:
x1,y1
/\
/ \
x3,y3 /___ \ x2,y2
area of a triangle is 1/2 base times height
| 1 1 1 | | 1 x1 y1 |
area of a triangle is 1/2 det| x1 x2 x3 | = 1/2 det| 1 x2 y2 |
| y1 y2 y3 | | 1 x3 y3 |
area of a triangle is 1/2 |(V2 - V1) x (V3 - V1)|, half the length of the cross product
area of a rectangle is base times height
area of a parallelogram is base times height
area of a circle is Pi r^2
area.txt more area and volume formulas
Some variations that you may need some day:
You may not know the area of a five point star inscribed in
a unit circle or the area of an arbitrary closed polygon.
Easy to calculate, using the
Sailors Algorithm:
Given the n points (x,y) of a closed polygon where no edges cross,
compute the sum i=1..n (x_i+1 - x_i) * (y_i+1 + y_i)/2
(The first point is repeated as the n+1 point,
add enough to the y's to make them all positive)
The intuition for the Sailors Algorithm, on a shape with only
vertical and horizontal edges is:
a vertical line adds no area
a horizontal line to the right adds area (length times average height)
a horizontal line to the left subtracts area.
The computed area will be positive if an upper outside edge is
listed in counter clockwise order, else negative, take absolute value.
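The formula above can be sketched in a few lines of Python (signed
trapezoid sum, absolute value taken at the end):

```python
def polygon_area(points):
    """Sailors Algorithm: area of a closed polygon given as (x, y)
    vertices in order, no edges crossing."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]   # first point repeated as point n+1
        area += (x2 - x1) * (y2 + y1) / 2.0
    return abs(area)   # sign depends on vertex order; take absolute value

print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))   # unit square -> 1.0
```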
A sample program is:
area2_dat.py3 source code, star.dat input, area2_dat_py3.out output, poly_area.c, poly_area_maze.out, maze.path, poly_area_star.out, star.path, sailors_area.java, sailors_area_java.out
The maze (8 units wide, 7 units high, area is 35) is:
The star (inscribed in a circle with radius 5, area about 28.06) is:
Volume of a sphere 4/3 Pi r^3
Area of a sphere 4 Pi r^2
Volume of a cone, tetrahedron, any solid with a planar base that goes
to a point 1/3 base area times height
| 1 x1 y1 z1 |
Volume of a tetrahedron 1/6 det | 1 x2 y2 z2 |
| 1 x3 y3 z3 |
| 1 x4 y4 z4 |
using just the x,y,z of the four vertices t1, t2, t3, t4
t4
/\
/ \ missing line from t2 to t4
/ \
/______\
t1\ /t3
\ /
\ /
\/
t2
four surface triangles t1 t2 t3, t1 t2 t4, t1 t3 t4, t2 t3 t4
The general volume computation:
of a closed surface is a little more complicated.
The sailors algorithm is still the basic idea.
Consider a closed surface covered by triangles. Each triangle is
three points and are coded counter clockwise such that the normal
vector to the triangle points out of the volume.
Make all z coordinates positive.
Compute the average z for a triangle, "height".
Compute the area for a triangle, "base".
Compute the z component of the normal vector, znorm.
The volume of this piece of the surface is
height times base times znorm
If znorm is positive, this triangle is on top and contributes
positive volume.
If znorm is negative, this triangle is on the bottom and
contributes negative volume.
The area in the x-y plane is the area of the triangle times znorm.
A vertical triangle has znorm = 0.
A horizontal triangle has znorm = 1 on top and -1 on bottom.
A triangle tipped 45 degrees has a znorm = cos(45 degrees) = 0.7071
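These rules can be checked on a unit tetrahedron, whose volume is 1/6 by
the determinant formula (a Python sketch; base area times znorm is the z
component of the cross product divided by 2):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def surface_volume(triangles):
    """Signed volume of a closed triangulated surface: for each triangle,
    average z (height) times projected x-y area (base times znorm)."""
    vol = 0.0
    for p, q, r in triangles:
        nz = cross([q[i] - p[i] for i in range(3)],
                   [r[i] - p[i] for i in range(3)])[2]
        avg_z = (p[2] + q[2] + r[2]) / 3.0
        vol += avg_z * nz / 2.0        # nz/2 = triangle area * znorm, signed
    return vol

A, B, C, D = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
tris = [(A, C, B), (A, B, D), (A, D, C), (B, C, D)]   # outward orientation
print(surface_volume(tris))   # 1/6, matching the determinant formula
```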
znorm is computed using 3 or more points that lie in a plane,
x[0],y[0],z[0] x[1],y[1],z[1] x[2],y[2],z[2] forming
two vectors a and b, then computing the vector cross product:
float znormal(int n, float x[], float y[], float z[], float *area)
{
    float ax, ay, az, bx, by, bz, nx, ny, nz, ss;
    float cx, cy, cz, sss, sstot, ssstot;
    int i;
    sstot = 0.0;
    ssstot = 0.0;
    for(i=2; i<n; i++)
    ...
Numerical approximation
Using Utah Graphics .dat file for input
volume_dat.py3 source code, volume_dat_py3_cubet.out, volume_dat_py3_tetra.out, tetra.dat input data, cubet.dat input data
A program that uses graphics data files to compute the volume and
area of a closed volume is:
volume_dat2.c
from data:
sphere_div.dat
output for a crude, 32 triangle, sphere is:
volume_dat2.c reading sphere_div.dat
status=0, zmax=1, points=18, polys=32
xmin=-1.000000, xmax=1.000000, ymin=-1.000000, ymax=1.000000
zmin=-1.000000, zmax=1.000000
enclosing area= 24, enclosing volume= 8
final total area = 10.4178, total volume = 2.94281
should be 12.56 and 4.189
from data:
sphere_div2.dat
output for a better, 128 triangle, sphere is:
volume_dat2.c reading sphere_div2.dat
status=0, zmax=1, points=66, polys=128
xmin=-1.000000, xmax=1.000000, ymin=-1.000000, ymax=1.000000
zmin=-1.000000, zmax=1.000000
enclosing area= 24, enclosing volume= 8
final total area = 11.9549, total volume = 3.81773
should be 12.56 and 4.189
from data:
sphere_div3.dat
output for a better, 512 triangle, sphere is:
volume_dat2.c reading sphere_div3.dat
status=0, zmax=1, points=258, polys=512
xmin=-1.000000, xmax=1.000000, ymin=-1.000000, ymax=1.000000
zmin=-1.000000, zmax=1.000000
enclosing area= 24, enclosing volume= 8
final total area = 12.4082, total volume = 4.0916
should be 12.56 and 4.189
from data:
sphere_div4.dat
output for a good, 2048 triangle, sphere is:
volume_dat2.c reading sphere_div4.dat
status=0, zmax=1, points=1026, polys=2048
xmin=-1.000000, xmax=1.000000, ymin=-1.000000, ymax=1.000000
zmin=-1.000000, zmax=1.000000
enclosing area= 24, enclosing volume= 8
final total area = 12.5264, total volume = 4.16419
should be 12.56 and 4.189
The area of a perfect sphere of radius 1 is about 12.56
The volume of a perfect sphere of radius 1 is about 4.189
from data:
bball.dat Buckminster Fuller Geodesic
output for small, 90 triangle, sphere is:
volume_dat2.c reading bball.dat
status=0, zmax=1, points=42, polys=90
xmin=-0.951060, xmax=0.951060, ymin=-1.000000, ymax=1.000000
zmin=-1.000000, zmax=1.000000
enclosing area= 23.217, enclosing volume= 7.60848
final total area = 13.1894, total volume = 3.94159
should be 12.56 and 4.189
No bull? let us compute the volume of this bull
datread.java needed for volume_dat2, volume_dat2.java program, bull.dat data
volume_dat2.c reading bull.dat
status=0, zmax=3177.82, points=6202, polys=12398
xmin=-2060.574463, xmax=1978.578857, ymin=-1580.072998, ymax=1429.878052
zmin=356.500702, zmax=3177.816406
enclosing area= 6.40908e+07, enclosing volume= 3.43006e+10
final total area = 2.07025e+07, total volume = 3.64616e+09
That's a lot of bull!
Seems scaled up by 500 relative to feet, in all three dimensions.
Thus, about 29.16 cubic feet of bull.
read_stl.java needed for volume_stl, volume_stl.java program, bull.stl data
volume_stl.c reading bull.stl
volume_stl reading file bull.stl
num_tri=12398
xmin=-2060.574463, xmax=1974.23877
ymin=-1580.072998, ymax=1429.878052
zmin=356.500702, zmax=3177.816406
compute volume and area
final total area =2.0702515385253366E7, total volume =3.646510695181979E9
Numerical differentiation is typically considered poor at best.
In general, high order techniques are needed to get reasonable
values.
Similar to numerical integration, the more values of the function
that are used, then the more accuracy can be expected.
Similar to numerical integration, given a function that can be
fit accurately by an n th order polynomial, an equation for
the derivative exists to give very good accuracy.
Consider a function f(x) that you can compute but do not know
a symbolic representation. To find the derivatives at a few points,
say 4 for example, and computing only 4 evaluations of f(x):
f(x2)
* f(x3)
| *
f(x1) | |
* | |
| | |
f(x0) | | |
* | | |
| | | |
| | | |
x0 x1 x2 x3 h = x3-x2 = x2-x1 = x1-x0
We use f'(x0) to represent the derivative of f(x) at x0.
We looked up the formulas for computing the first derivative
using four equally spaced points, and write the equations:
f'(x0) = 1/6h (-11 f(x0) +18 f(x1)  -9 f(x2)  +2 f(x3) )
f'(x1) = 1/6h ( -2 f(x0)  -3 f(x1)  +6 f(x2)  -1 f(x3) )
f'(x2) = 1/6h ( +1 f(x0)  -6 f(x1)  +3 f(x2)  +2 f(x3) )
f'(x3) = 1/6h ( -2 f(x0)  +9 f(x1) -18 f(x2) +11 f(x3) )
The error estimate is proportional to h^3 f''''(z) for the worst z in x0..x3
Thus, if the four points are fit by a third degree polynomial,
f''''(z), the fourth derivative is always zero and the numerical
derivatives are exact within roundoff error.
The values will be in an array cx[0]..[3] 1/6h*(-11, 18, -9, 2) .
Then f'(x0) = sum i=0..3 cx[i]*f(x[i])
To check that the coefficients such as -11 18 -9 2 are correct,
we symbolically fit the 4 points with a third order polynomial,
then symbolically take the derivatives. Too much work to do by
hand, thus I used Maple to check and found the coefficients correct.
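The same check can be done numerically on f(x) = x^3, which a third
degree fit must differentiate exactly (within roundoff):

```python
# four equally spaced points on f(x) = x^3; the cubic fit is exact
h = 0.5
x = [1.0 + i * h for i in range(4)]
f = [xi**3 for xi in x]
c = [-11, 18, -9, 2]                      # coefficients for f'(x0)
d = sum(ci * fi for ci, fi in zip(c, f)) / (6 * h)
assert abs(d - 3 * x[0]**2) < 1e-9        # f'(x) = 3 x^2 exactly
```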
Using 5 evaluations of f(x) provides the five derivatives:
f'(x0) = 1/12h (-25 f(x0) +48 f(x1) -36 f(x2) +16 f(x3)  -3 f(x4) )
f'(x1) = 1/12h ( -3 f(x0) -10 f(x1) +18 f(x2)  -6 f(x3)  +1 f(x4) )
f'(x2) = 1/12h ( +1 f(x0)  -8 f(x1)  +0 f(x2)  +8 f(x3)  -1 f(x4) )
f'(x3) = 1/12h ( -1 f(x0)  +6 f(x1) -18 f(x2) +10 f(x3)  +3 f(x4) )
f'(x4) = 1/12h ( +3 f(x0) -16 f(x1) +36 f(x2) -48 f(x3) +25 f(x4) )
The above were typed by hand and could have an error.
Some quick checking: If all function evaluations are the same,
then the function is flat and the derivative must be zero at
all points. Thus, the sum of the coefficients must be zero for
every derivative. Note that the first and last coefficients have
the same values, in reverse order with reverse sign.
We will call the "12" in 1/12h the variable b or bb.
We will call the coefficients "-25", "48", "-36" the 'a' array
Always try to cut-and-paste from the computer printout:
Tabulation to compute derivatives at various points
First through sixth order, various number of points
nderiv.out
The above was generated by any of the computer programs listed below:
nderiv.c computes these cases.
nderiv.f90 in Fortran 95
nderiv.adb in Ada 95
nderiv.java in Java
deriv.py in Python 2
rderiv.py in Python 2
nderiv.py in Python 2
deriv.py3 in Python 3
Deriv.scala in Scala
deriv.go in Go language
deriv.jl in Julia
Note that the coefficients can be generated in your program by
copying the function deriv and calling it for a specific
derivative order, number of evaluations and point where derivative
is to be computed. For the second derivative using 4 evaluations
and computing the derivative at point zero,
deriv( 2, 4, 0, a, &bb); returns a={2, -5, 4, -1} and bb=1
thus f''(x0) = 1/bb h^2 ( a[0] f(x0) + a[1] f(x1) + a[2] f(x2) + a[3] f(x3) )
or f''(x0) = 1/h^2 (2 f(x0) -5 f(x0+h) +4 f(x0+2h) -1 f(x0+3h) )
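A quick numerical check of these second derivative coefficients on
f(x) = x^3, again exact for a cubic:

```python
# f''(x0) = 1/h^2 (2 f(x0) -5 f(x0+h) +4 f(x0+2h) -1 f(x0+3h)) on f(x)=x^3
h = 0.25
x0 = 2.0
f = [(x0 + i * h)**3 for i in range(4)]
d2 = (2*f[0] - 5*f[1] + 4*f[2] - f[3]) / h**2
assert abs(d2 - 6 * x0) < 1e-9            # f''(x) = 6 x exactly
```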
deriv.h header for three versions.
deriv.c computes values.
Note that integers are returned and should be cast to appropriate
floating point type. Various languages require minor changes in the call.
This computation of derivatives will be used extensively in the
lectures that follow on ordinary and partial differential equations.
For non uniformly spaced x values, the program nuderiv.c
demonstrates how to compute the coefficients of f(x0), f(x1), ...
Note that the function nuderiv must be called in the application because
the coefficients depend on the specific problem's x values.
The development for non uniformly spaced derivative coefficients is:
given x0, x1, x2, x3 non uniformly spaced
find the coefficients of y0, y1, y2, y3 y0 = f(x0) ...
in order to compute the derivatives:
y'(x0) = c00*y0 + c01*y1 + c02*y2 + c03*y3
y'(x1) = c10*y0 + c11*y1 + c12*y2 + c13*y3
y'(x2) = c20*y0 + c21*y1 + c22*y2 + c23*y3
y'(x3) = c30*y0 + c31*y1 + c32*y2 + c33*y3
Method: fit data to y(x) = a + b*x + c*x^2 + d*x^3
y'(x) = b + 2*c*x + 3*d*x^2
y''(x) = 2*c + 6*d*x
y'''(x) = 6*d
with y0, y1, y2 and y3 variables:
y0 y1 y2 y3
form: | 1 x0 x0^2 x0^3 1 0 0 0 | = a
| 1 x1 x1^2 x1^3 0 1 0 0 | = b
| 1 x2 x2^2 x2^3 0 0 1 0 | = c
| 1 x3 x3^2 x3^3 0 0 0 1 | = d
reduce | 1 0 0 0 a0 a1 a2 a3 | = a
| 0 1 0 0 b0 b1 b2 b3 | = b
| 0 0 1 0 c0 c1 c2 c3 | = c
| 0 0 0 1 d0 d1 d2 d3 | = d
this is just the inverse
y'(x0) = b0*y0 + b1*y1 + b2*y2 + b3*y3 +
2*c0*y0*x0 + 2*c1*y1*x0 + 2*c2*y2*x0 + 2*c3*y3*x0 +
3*d0*y0*x0^2 + 3*d1*y1*x0^2 + 3*d2*y2*x0^2 + 3*d3*y3*x0^2
or c00 = b0 + 2*c0*x0 + 3*d0*x0^2
c01 = b1 + 2*c1*x0 + 3*d1*x0^2
c02 = b2 + 2*c2*x0 + 3*d2*x0^2
c03 = b3 + 2*c3*x0 + 3*d3*x0^2
y'(x0) = c00*y0 + c01*y1 + c02*y2 +c03*y3 where y0 is f(x0) etc.
y'(x1) = b0*y0 + b1*y1 + b2*y2 + b3*y3 +
2*c0*y0*x1 + 2*c1*y1*x1 + 2*c2*y2*x1 + 2*c3*y3*x1 +
3*d0*y0*x1^2 + 3*d1*y1*x1^2 + 3*d2*y2*x1^2 + 3*d3*y3*x1^2
or c10 = b0 + 2*c0*x1 + 3*d0*x1^2
c11 = b1 + 2*c1*x1 + 3*d1*x1^2
c12 = b2 + 2*c2*x1 + 3*d2*x1^2
c13 = b3 + 2*c3*x1 + 3*d3*x1^2
cij = bj + 2*cj*xi + 3*dj*xi^2
y'(x1) = c10*y0 + c11*y1 + c12*y2 +c13*y3
y''(x0) = 2*c0*y0 + 2*c1*y1 + 2*c2*y2 + 2*c3*y3 +
6*d0*y0*x0 + 6*d1*y1*x0 + 6*d2*y2*x0 + 6*d3*y3*x0
or c00 = 2*c0 + 6*d0*x0
c01 = 2*c1 + 6*d1*x0
c02 = 2*c2 + 6*d2*x0
c03 = 2*c3 + 6*d3*x0
y'''(x0) = 6*d0*y0 + 6*d1*y1 + 6*d2*y2 + 6*d3*y3
or c00 = 6*d0 independent of x
c01 = 6*d1
c02 = 6*d2
c03 = 6*d3
The function prototype is
nuderiv.h
void nuderiv(int order, int npoint, int point, double x[], double c[])
where order is the order of the desired derivative
where npoint is the desired number of points (length of x[] and c[] arrays)
where point is desired x index for the derivative
where x[] is input x0, x1, ...
where c[] is output c_point_0, c_point_1, ...
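A minimal Python sketch of the construction above for the first
derivative (Gauss-Jordan inversion of the Vandermonde system; the real
nuderiv handles any derivative order and point count):

```python
def nuderiv1(xs, point):
    """First derivative coefficients at xs[point] for nonuniform points:
    fit y = a + b*x + c*x^2 + ..., then differentiate at xs[point]."""
    n = len(xs)
    # Vandermonde system augmented with the identity: V | I
    aug = [[xs[i]**j for j in range(n)] +
           [1.0 if k == i else 0.0 for k in range(n)] for i in range(n)]
    for col in range(n):                  # Gauss-Jordan reduction
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                m = aug[r][col]
                aug[r] = [v - m * w for v, w in zip(aug[r], aug[col])]
    inv = [row[n:] for row in aug]        # right half is V^-1
    xp = xs[point]
    # poly coefficient k = sum_j inv[k][j]*y_j; y'(x) = sum_k k*x^(k-1)*coef_k
    return [sum(k * xp**(k - 1) * inv[k][j] for k in range(1, n))
            for j in range(n)]

# uniform spacing reproduces the -11, 18, -9, 2 over 6h row at the left end
c = nuderiv1([0.0, 1.0, 2.0, 3.0], 0)
print([round(6 * ci) for ci in c])   # [-11, 18, -9, 2]
```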
For reasonably smooth functions, using larger numbers of points
allows greater spacing between points and thus fewer equations in
the system of equations to be solved.
Intermediate points may then be found by multidimensional interpolation.
See Lecture 4 for method.
nuderiv.c nuderiv.h nuderiv_test.c nuderiv_test.out nuderiv.adb nuderiv_test.adb nuderiv_test_ada.out inverse.adb nuderiv_test_exp.adb nuderiv_test_exp_ada.out nuderiv.f90 nuderiv_test.f90 nuderiv_test_f90.out inverse.f90 nuderiv.java nuderiv_test.java nuderiv_test_java.out test_nuderiv.java test_nuderiv_java.out nuderiv.py nuderiv_test.py nuderiv_test_py.out Deriv.scala TestDeriv.scala TestDeriv_scala.out some definitions and explanation of formulas
A test application of nuderiv.c on a 2D PDE that should be exact is
pdenu22_eq.c pdenu22_eq.out
A test application of nuderiv.adb on a 2D PDE that should be exact is
pdenu22_eq.adb pdenu22_eq_ada.out
A test application of nuderiv.java on a 2D PDE that should be exact is
pdenu22_eq.java pdenu22_eq_java.out
A test application of nuderiv.f90 on a 2D PDE that should be exact is
pdenu22_eq.f90 pdenu22_eq_f90.out
A non perfect application to a smooth function exp(x*y)*sin(x+y)
pdenu_eq.c pdenu_eq.out pdenu_eq.adb pdenu_eq_ada.out pdenu_eq.java pdenu_eq_java.out pdenu_eq.f90 pdenu_eq_f90.out
A test application of nuderiv.c on a 3D PDE that should be close is
Note how each term of the differential equation is processed.
This case used 1.0, 2.0, ... for coefficients yet any expression
in x,y,z could be used. A term such as u(x,y,z) * du(x,y,z)/dx
would create a non linear system of equations that would be very
difficult to solve.
pdenu23_eq.c pdenu23_eq.out
A test application of nuderiv.adb on a 3D PDE that should be close is
pdenu23_eq.adb pdenu23_eq_ada.out
A test application of nuderiv.f90 on a 3D PDE that should be close is
pdenu23_eq.f90 pdenu23_eq_f90.out
The test code was generated with Maple:
Start by choosing a solution, u(x,y,z) any function of x,y,z.
Symbolically compute the derivatives.
c(x,y,z) is computed as the RHS (right hand side) of the
chosen differential equation, dx(x,y,z) = du(x,y,z)/dx etc.
For analytic derivatives see:
derivative.shtml
Extending derivative computation to partial derivatives uses
the previous work on computing derivatives. Here we consider
the Cartesian coordinate system in 2, 3, and 4 dimensions,
computing second, third and fourth partial derivatives
The algorithms and some code shown below will be used when the
function is unknown, when we are solving partial differential
equations.
Consider a function f(x,y) that you can compute but do not know
a symbolic representation. To find the derivatives at a point
(x,y) we will use our previously discussed "nuderiv" nonuniform
derivative function.
To compute the numerical derivatives of f(x,y) at a set of points
we choose the number of points in the x and y directions, nx, ny.
Thus we have a grid of cells typically programmed as a
two dimensional array.
The partial derivative with respect to x at any (x1,y1) point
just uses points (x0,y1) (x1,y1) (x2,y1) ... (xn,y1)
The partial derivative with respect to y at any (x1,y1) point
just uses points (x1,y0) (x1,y1) (x1,y2) ... (x1,yn)
Similarly for second, third, fourth derivatives.
The partial derivative with respect to x and y is more complicated.
We first take the derivative with respect to x, same as calculus.
But, numerically, we need the value for every y.
Then, as in calculus, we take the derivative with respect to y
of the previously computed derivative with respect to x.
(Typically we save the x derivatives as a matrix because we
typically need all the partial derivatives with respect to y.)
A code fragment could look like:
static int nx = 11; /* includes boundary */
static int ny = 10; /* includes boundary */
static double xg[11] = {-1.0, -0.9, -0.8, -0.6, -0.3, 0.0,
0.35, 0.65, 0.85, 1.0, 1.2};
static double yg[10] = {-1.1, -1.0, -0.9, -0.6, -0.2,
0.3, 0.8, 1.0, 1.2, 1.3};
/* nx-2 by ny-2 internal, non boundary cells */
static double cx[11];
static double cy[10];
static double cxy[11][10]; // d^2 u(x,y)/dx dy or Uxy
for(i=1; i<nx-1; i++)
{
for(j=1; j<ny-1; j++) // computing inside boundary
{
/* d^2 u(x,y)/dxdy */
nuderiv(1,nx,i,xg,cx);
nuderiv(1,ny,j,yg,cy);
for(kx=0; kx<nx; kx++)
{
for(ky=0; ky<ny; ky++)
{
cxy[kx][ky] = cx[kx] * cy[ky]; /* coefficient for f(xg[kx],yg[ky]) */
}
}
}
} // have cxy inside the boundary
∂2f(x,y)/∂x∂y at x=xg[i], y=yg[j] is then
computed using the coefficients cxy[kx][ky] = cx[kx] * cy[ky], that is
∂2f/∂x∂y ≈ sum over all kx,ky of cxy[kx][ky] * f(xg[kx],yg[ky])
Numerical differentiation can be applied to all coordinate systems.
It gets more complicated, yet a reasonable extension of Cartesian
coordinate systems. These are all partial derivatives.
The algorithms and some code shown below will be used when the
function is unknown, when we are solving partial differential
equations in non Cartesian coordinate systems.
Polar coordinates
Consider a function f(r,theta) that you can compute but for which you
do not know a symbolic representation. To find the derivatives at a
point (r,theta) in a polar coordinate system we will use our previously
discussed "nuderiv" nonuniform Cartesian derivative function.
x = r * cos(Θ) r = sqrt(x^2 + y^2) 0 < r
y = r * sin(Θ) Θ = atan2(y,x) 0 ≤ θ < 2Π
^ ^
∇f(r,Θ) = r ∂f/∂r + Θ 1/r ∂f/∂Θ
∇2f(r,Θ) = 1/r ∂/∂r(r*∂f/∂r) + 1/r2 ∂2f/∂Θ2
         = 1/r ∂f/∂r + ∂2f/∂r2 + 1/r2 ∂2f/∂Θ2
Neither radius grid "rg[]" nor theta grid "tg[]" need to be uniformly
spaced. To compute the derivatives at the green dot, (rg[3],tg[3])
Use the red circle grid points for ∂/∂r and ∂2/∂r2
Use the blue box grid points for ∂/∂Θ and ∂2/∂Θ2
The computation, in "C" language, at rg[3],tg[3] would be:
nuderiv(1, nr, 3, rg, cr); /* nr is 5 in this example */
Ur = 0.0;
for(k=0; k<nr; k++) Ur += cr[k]*f(rg[k],tg[3]); /* ∂/∂r */
nuderiv(1, nt, 3, tg, ct); /* nt is 8 in this sample */
Ut = 0.0;
for(k=0; k<nt; k++) Ut += ct[k]*f(rg[3],tg[k]); /* ∂/∂Θ */
^
nab1r = Ur /* r ∇f(r,Θ) */
^
nab1Θ = (1.0/rg[3])*Ut; /* Θ ∇f(r,Θ) */
nuderiv(2, nr, 3, rg, crr); /* nr is 5 in this example */
Urr = 0.0;
for(k=0; k<nr; k++) Urr += crr[k]*f(rg[k],tg[3]); /* ∂2/∂r2 */
nuderiv(2, nt, 3, tg, ctt); /* nt is 8 in this example */
Utt = 0.0;
for(k=0; k<nt; k++) Utt += ctt[k]*f(rg[3],tg[k]); /* ∂2/∂Θ2 */
nab2 = (1.0/rg[3])*Ur + Urr + (1.0/(rg[3]*rg[3]))*Utt; /* ∇2f(r,Θ) */
The actual code to check these equations at all grid points is:
check_polar_deriv.adb source code
check_polar_deriv_ada.out verification output
Cylindrical coordinates
Consider a function f(r,theta,z) that you can compute but for which you
do not know a symbolic representation. To find the derivatives at a
point (r,theta,z) in a cylindrical coordinate system we will use our
previously discussed "nuderiv" nonuniform Cartesian derivative function.
x = r * cos(Θ) r = sqrt(x^2 + y^2) 0 < r
y = r * sin(Θ) Θ = atan2(y,x) 0 ≤ θ < 2Π
z = z z = z
^ ^ ^
∇f(r,Θ,z) = r ∂f/∂r + Θ 1/r ∂f/∂Θ + z ∂f/∂z
∇2f(r,Θ,z) = 1/r ∂/∂r(r*∂f/∂r) + 1/r2 ∂2f/∂Θ2 + ∂2f/∂z2
           = 1/r ∂f/∂r + ∂2f/∂r2 + 1/r2 ∂2f/∂Θ2 + ∂2f/∂z2
Neither radius grid "rg[]" nor theta grid "tg[]" nor the z grid "zg[]"
need to be uniformly spaced. To compute the derivatives at (rg[0],tg[4],zg[2])
Use the green box grid points for ∂/∂r and ∂2/∂r2
Use the blue box grid points for ∂2/∂z2
Use the red circle grid points for ∂2/∂Θ2
The computation, in "C" language, would be:
nuderiv(1, nr, 0, rg, cr); /* nr is 3 in this example */
Ur = 0.0;
for(k=0; k<nr; k++) Ur += cr[k]*f(rg[k],tg[4],zg[2]); /* ∂/∂r */
nuderiv(1, nt, 4, tg, ct); /* nt is 7 in this sample */
Ut = 0.0;
for(k=0; k<nt; k++) Ut += ct[k]*f(rg[0],tg[k],zg[2]); /* ∂/∂Θ */
nuderiv(1, nz, 2, zg, cz); /* nz is 5 in this sample */
Uz = 0.0;
for(k=0; k<nz; k++) Uz += cz[k]*f(rg[0],tg[4],zg[k]); /* ∂/∂z */
^
nab1r = Ur /* r ∇f(r,Θ,z) */
^
nab1Θ = (1.0/rg[0])*Ut; /* Θ ∇f(r,Θ,z) */
^
nab1z = Uz /* z ∇f(r,Θ,z) */
nuderiv(2, nr, 0, rg, crr); /* nr is 3 in this example */
Urr = 0.0;
for(k=0; k<nr; k++) Urr += crr[k]*f(rg[k],tg[4],zg[2]); /* ∂2/∂r2 */
nuderiv(2, nt, 4, tg, ctt); /* nt is 7 in this example */
Utt = 0.0;
for(k=0; k<nt; k++) Utt += ctt[k]*f(rg[0],tg[k],zg[2]); /* ∂2/∂Θ2 */
nuderiv(2, nz, 2, zg, czz); /* nz is 5 in this example */
Uzz = 0.0;
for(k=0; k<nz; k++) Uzz += czz[k]*f(rg[0],tg[4],zg[k]); /* ∂2/∂z2 */
nab2 = (1.0/rg[0])*Ur + Urr + (1.0/(rg[0]*rg[0]))*Utt + Uzz; /* ∇2f(r,Θ,z) */
The actual code to check these equations at all grid points is:
check_cylinder_deriv.adb source code
check_cylinder_deriv_ada.out verification output
Spherical coordinates
interchange Φ and Θ on diagram to agree with equations
x = r * cos(Θ) * sin(Φ) r = sqrt(x^2 + y^2 + z^2) 0 < r
y = r * sin(Θ) * sin(Φ) Θ = atan2(y,x) 0 ≤ Θ < 2Π
z = r * cos(Φ) Φ = atan2(sqrt(x^2+y^2),z) 0 < Φ < Pi
^ ^ ^
∇f(r,Θ,Φ) = r ∂f/∂r + Θ 1/(r sin(Φ)) ∂f/∂Θ + Φ 1/r ∂f/∂Φ
∇2f(r,Θ,Φ) = 1/r2 ∂/∂r(r2*∂f/∂r) + 1/(r2 sin(Φ)2) ∂2f/∂Θ2 + 1/(r2 sin(Φ)) ∂/∂Φ(sin(Φ) ∂f/∂Φ)
           = 2/r ∂f/∂r + ∂2f/∂r2 + 1/(r2 sin(Φ)2) ∂2f/∂Θ2 + cos(Φ)/(r2 sin(Φ)) ∂f/∂Φ + 1/r2 ∂2f/∂Φ2
Partial derivatives in R, Theta, Phi follow their color.
All possible are shown.
Some "C" code to compute the derivatives at (rg[2],tg[3],pg[4])
nuderiv(1, nr, 2, rg, cr); /* nr is 3 in this example */
Ur = 0.0;
for(k=0; k<nr; k++) Ur += cr[k]*f(rg[k],tg[3],pg[4]); /* ∂/∂r */
nuderiv(1, np, 4, pg, cp); /* np is 5 in this example */
Up = 0.0;
for(k=0; k<np; k++) Up += cp[k]*f(rg[2],tg[3],pg[k]); /* ∂/∂Φ */
nuderiv(2, nr, 2, rg, crr); /* nr is 3 in this example */
Urr = 0.0;
for(k=0; k<nr; k++) Urr += crr[k]*f(rg[k],tg[3],pg[4]); /* ∂2/∂r2 */
nuderiv(2, nt, 3, tg, ctt); /* nt is 7 in this example */
Utt = 0.0;
for(k=0; k<nt; k++) Utt += ctt[k]*f(rg[2],tg[k],pg[4]); /* ∂2/∂Θ2 */
nuderiv(2, np, 4, pg, cpp); /* np is 5 in this example */
Upp = 0.0;
for(k=0; k<np; k++) Upp += cpp[k]*f(rg[2],tg[3],pg[k]); /* ∂2/∂Φ2 */
s = sin(pg[4]); c = cos(pg[4]); r2 = rg[2]*rg[2];
nab2 = (2.0/rg[2])*Ur + Urr + (1.0/(r2*s*s))*Utt + (c/(r2*s))*Up + (1.0/r2)*Upp; /* ∇2f(r,Θ,Φ) */
The actual code to check these equations at all grid points is:
check_sphere_deriv.adb source code
check_sphere_deriv_ada.out verification output
A rather thorough test:
check_spherical_gradient.c source code
check_spherical_gradient_c.out verification output
Other checking of code and one figure came from
draw_sphere_deriv.java
draw_sphere_deriv_java.out
Many functions that do everything with spherical coordinates.
Overboard testing including Laplacian in spherical coordinates.
pde_sphere_mws.out analytic
pde_sphere.c source code
pde_sphere_c.out output
An "ordinary" differential equation has exactly one independent
variable and at least one derivative with respect to that variable.
For notation we use y=f(x) to say y is a function of x.
For the derivative of y with respect to x, we may write
d f(x)/dx or we can use dy/dx or y' when the independent
variable x is understood.
One of the simplest ordinary differential equations is
y' = y
which has the analytic solution y(x) = exp(x) the exponential function
also written as e^x .
For a second derivative we use notation d^2 y/dx^2 or y''.
Many ordinary differential equations have analytic solutions. Yet,
many do not. We are interested in ordinary differential equations
that do not have analytic solutions and must be approximated by
numerical solutions. For testing purposes, we will start with an
equation that has an analytic solution so that we can check our
numeric solution method. The test case is the classic beam problem.
given a beam of length L, spanning 0 <= x <= L
attached rigidly at both ends
with Young's Modulus of E
with moment of inertia I
with p(x) being the load density e.g. force per unit length
with both ends fixed, meaning:
y=0 at x=0, y=0 at x=L, dy/dx=0 at x=0, dy/dx=0 at x=L
then the differential equation that defines the y position of
the beam at every x coordinate is
d^4 y
EI ----- = p(x) with the convention for downward is negative
dx^4
for uniformly distributed force (weight) p(x) becomes - W/L
This simple case can be integrated and solved analytically:
d^4 y
EI ----- = -W/L
dx^4
d^3 y
EI ----- = -W/L x + A ( A is constant of integration, value found later)
dx^3
d^2 y
EI ----- = -W/L x^2/2 + A x + B
dx^2
dy
EI -- = -W/L x^3/6 + A x^2/2 + B x + C we see C=0 because dy/dx=0 at x=0
dx
EI y = -W/L x^4/24 + A x^3/6 + B x^2/2 + D we see D=0 because y=0 at x=0
substituting y=0 at x=L in the equation above gives
0 = -W/L L^4/24 + A L^3/6 + B L^2/2
substituting dy/dx=0 at x=L in the dy/dx equation above gives
0 = -W/L L^3/6 + A L^2/2 + B L
solving two equations in two unknowns A = W/2 B = - WL/12
then substituting for A and B in EI y = ... and combining terms
          W
y = - --------- x^2 (x-L)^2
       24 L EI
The known solution for a specific case is valuable to check your
programming of a numerical solution before computing a more
general case of p(x) where no closed form solution may exist.
An aside: Much information on physics equations and units and
units conversion may be found in
units.shtml, physics equations near end
In order to find a numerical solution, set up a system of linear
equations such that the solution of the equations are values
of y(x) at some points. The codes below just use seven points, n=7,
yet could use a much larger number. Because we have a fourth order
differential equation, we will use fourth order difference equations.
The solution is fourth order (or less, actually second order) thus
we will get results with no truncation error and only small
roundoff error. The seven points will be equally spaced by h.
h = L/(n+1). The value of y(0)=0 is given and the value y(L)=0
is given. The unknown points are y(h), y(2h), ... , y(7h).
We need the difference equations for fourth order derivatives from
nderiv.out
From nderiv.out the difference equation for d^4 y/dx^4
using 5 y values is:
d^4 y/dx^4(x) = (1/h^4)( y(x)-4y(x+h)+6y(x+2h)-4y(x+3h)+y(x+4h) )
We will start x at h rather than zero and solve at 7 points.
Since d^4 y/dx^4 = -W/L for uniform load, we can get equations
at x=2h ... x=6h. We will need special equations at x=h
and x=7h that take into account the boundary conditions.
The partial system of equations we want looks like:
| | | y( h) | | 0 |
| -4 6 -4 1 0 0 0 | | y(2h) | | -W/L |
| 1 -4 6 -4 1 0 0 | | y(3h) | | -W/L |
| 0 1 -4 6 -4 1 0 | * | y(4h) | = | -W/L |
| 0 0 1 -4 6 -4 1 | | y(5h) | | -W/L |
| 0 0 0 1 -4 6 -4 | | y(6h) | | -W/L |
| | | y(7h) | | 0 |
The boundary conditions dy/dx(0)=0, dy/dx(L)=0
must now be applied. Looking again at nderiv.out for a
first derivative that uses 5 points we find:
dy/dx(x) = 1/12h ( -25y(x) +48y(x+h) -36y(x+2h) +16y(x+3h) -3y(x+4h) )
For x=0, and knowing y(0)=0 we get the first row of the matrix
| 48 -36 16 -3 0 0 0 | | y( h) | | 0 |
At x=L we find
dy/dx(x) = 1/12h ( 3y(x-4h) -16y(x-3h) +36y(x-2h) -48y(x-h) +25y(x) )
For x=L, working backward, and knowing y(L)=0 we get
the last row of the system of equations.
| 0 0 0 -3 16 -36 48 | | y(7h) | | 0 |
Solve the equations and we have y(h), y(2h), ..., y(7h) as
shown in the output files below in beam2_c.out.
In the code h=1 so that y(h) = y(x=1), y(7h) = y(x=7)
Note that the rather large step size works because we are
using a high enough order method. The analytic solution
is shown with the slopes at each point. The numeric
solution at the end checks very close to the analytic
solution.
beam2.c beam2_c.out
beam2.f90 beam2_f90.out
beam2.adb beam2_ada.out
beam2.java beam2_java.out
beam2.py beam2_py.out
For more versions of static and dynamic beam equations
Lecture 28d near the end
Using the notation y for y(x), y' for d y(x)/dx, y'' for d^2 y(x)/dx^2 etc.
To solve a(x) y''' + b(x) y'' + c(x) y' + d(x) y + e(x) = 0
for y at one or more values of x,
with initial condition constants c1, c2, c3 at x0:
y(x0) = c1 y'(x0) = c2 y''(x0) = c3
Create a function that computes y''' =
f(ypp, yp, y, x) = -(b(x)*ypp + c(x)*yp + d(x)*y + e(x))/a(x)
(moving everything except y''' to the right-hand side of the
equation, then dividing by a(x))
a(x), b(x), c(x), d(x) and e(x) may be any functions of x and
that includes constants and zero. a(x) must not equal zero in
the range of x0 .. desired largest x.
Diagram for y = y + h*yp moving from x to x+h
similarly yp = yp + h*ypp
similarly ypp = ypp + h*yppp
Initialize x = x0
y = c1
yp = c2
ypp = c3
loop: use some method with the function, f, to compute the next ypp
yppp=f(ypp, yp, y, x)
ypp = ypp + h*yppp first order or use some higher order
yp = yp + h*ypp or some higher order method
y = y + h*yp or some higher order method
x = x + h increment x and loop
optional, print y at this x
Quit when you have solved for y at the largest desired x
You may save previous values of x and y, interpolate at desired
value of x to get y.
You may run again with h half as big, keep halving h until you get
approximately the same result.
Quick and dirty finished! This actually works sometimes.
"x" is called the independent variable and
"y" is called the dependent variable.
This is known as an "initial value problem"
The conditions are known at the start and the definition of the
problem is used to numerically compute each successive value of
the solution. This was the method used to compute the flight
of the rocket in homework 1.
Later, we will cover a "boundary value problem" where at least
some information is given at the beginning and end of the
independent variable(s).
Better numerical solutions, Runge Kutta
To see some more accurate solutions to very simple ODE's
Solve y' = y with y(0)=1, whose solution is y(x) = exp(x)
Initialize x = 0
y = 1
loop: use yp = y for exp low order method
y = y + h*y yp is just y
x = x + h increment x and loop
print y at this x
Initialize x = 0 Runge Kutta 4th order method
y = 1 this would be exact, within roundoff error,
if solution was 4th order or less polynomial
loop: fourth order Runge Kutta method for y' = y, y=exp(x)
k1 = h*y in general k1 = h*f(x,y)
k2 = h*(y+k1/2.0) k2 = h*f(x+h/2.0,y+k1/2.0)
k3 = h*(y+k2/2.0) k3 = h*f(x+h/2.0,y+k2/2.0)
k4 = h*(y+k3) k4 = h*f(x+h,y+k3)
y = y+(k1+2.0*k2+2.0*k3+k4)/6.0
x = x+h print y at this x
rk1_xy_acc.java using RK 4th order
rk1_xy_acc_java.out using RK 4th order
rk1_xy_acc.py3 using RK 4th order
rk1_xy_acc_py3.out using RK 4th order
ode_exp.c using RK 4th order method
ode_exp_c.out note very small h is worse
To see some more accurate solutions to simple second order ODE's
Solve y'' = -y with y(0)=1, y'(0)=0, whose solution is y(x) = cos(x)
ode_cos.c
ode_cos_c.out many cycles
An example of an unstable ODE is Hopf:
p is a bifurcation parameter, constant for each solution.
Given: dx/dt = gx(x,y,p) = p*x - y - x*(x^2+y^2)
dy/dt = gy(x,y,p) = x + p*y - y*(x^2+y^2)
Integrate the pair of equations from t=0 to t=tfinal.
The plot below uses initial conditions p=-0.1, x=0.4, y=0.4, t=0.0, dt=0.01
and runs to t=10.0
hopf_ode.c
A three body problem, using simple integration, shows the
sling-shot effect of gravity when bodies get close.
Remember force of gravity F = G * mass1 * mass2 / distance^2
This is numerically very unstable.
body3.c needs OpenGL to compile and link.
Hopefully, it can be demonstrated running in class.
body3.java plain Java, different version.
Hopefully, it can be demonstrated running in class.
Center of Mass, Barycenter, general motion
For more typical ordinary differential equations there are some
classic methods. A common higher order method is Runge-Kutta fourth
order.
Given y' = f(x,y) and the initial values of x and y
L: k1 = h * f(x,y)
k2 = h * f(x+h/2.0,y+k1/2.0)
k3 = h * f(x+h/2.0,y+k2/2.0)
k4 = h * f(x+h,y+k3)
y = y + (k1+2.0*k2+2.0*k3+k4)/6.0
x = x + h
quit when x>your_biggest_x
loop to L:
When the solution y(x) is a fourth order polynomial, or lower
order polynomial, the solution will be computed with no truncation
error, yet may have some roundoff error. Very large step sizes, h,
may be used for this very special case.
The test solution f(x)=(x-1)*(x-2)*(x-3)*(x-4) has 4 roots and equals 24 at x=0 and x=5.
RK4th.java Java example
RK4th_java.out
RK4th.py Python example
RK4th_py.out
RK4th.rb Ruby example
RK4th_ruby.out
RK4th.java Scala example
RK4th_scala.out
In general, the solution will not be accurately approximated by
a low order polynomial. Thus, even the Runge-Kutta method may
require a very small step size in order to compute an accurate
solution. Because very small step sizes result in long computations
and may have large accumulations of roundoff error, variable
step size methods are often used.
Typically a maximum step size and a minimum step size are set.
Starting with some intermediate step size, h, the computation
proceeds as shown below.
Given y' = f(x,y) and the initial values of x and y
L: y1 = y + dy1 using some method with step size h compute dy1
the value of y1 is at x = x + h
y2a = y + dy2a using same method with step size h/2 compute dy2a
the value of y2a is at x = x + h/2
y2 = y2a + dy2 using same method, from y2a at x = x + h/2
with step size h/2 compute dy2
the value of y2 is at x = x + h
if abs(y1-y2) > tolerance then h = h/2 and loop to L:
if abs(y1-y2) < tolerance/4 then h = 2 * h
y = y2
x = x + h
loop to L: until x > final x
There are many variations on the variable step size method.
The above is one simple version, partially demonstrated by:
FitzHugh-Nagumo equations system of two ODE's
v' = v - v^3/3.0 + r initial v = -1.0
r' = -(v - 0.2 - 0.2*r) initial r = 1.0
initial h = 0.001
t in 0 .. 25
fitz.c just decreasing step size
fitz.out about equally spaced output
fitz.m easier to understand, MatLab
Of course, we can let MatLab solve the same ODE system
fitz2.m using MatLab ode45
The ode45 in MatLab uses a fourth order Runge-Kutta and then
a fifth order Runge-Kutta and compares the result. A neat formulation
is used to minimize the number of required function evaluations.
The fifth order method reuses some of the fourth order evaluations.
Written in "C" from p355 of textbook
fitz2.c similar to fitz.c
fitz2.out close to same results
A second order differential equation solved using Runge-Kutta is demonstrated in:
rk4th_second.c
rk4th_second_c.out
A fourth order differential equation solved using Runge-Kutta is demonstrated in:
rk4th_fourth.c
rk4th_fourth_c.out
rk4th_fourth.java
rk4th_fourth_java.out
Another coding of fourth order Runge-Kutta known as Gill's method
is the (subroutine, method, function) Runge given in a few languages.
The user provided code comes first, then Runge.
This code solves a system of first order differential equations and
thus can solve higher order differential equations also, as shown above.
Systems of ordinary differential equations
The system of differential equations is N equations in N dependent
variables, Y, with one independent variable X.
Y'_i = F_i(X, Y_1, Y_2, ... , Y_N) for i=1,N
The user provides the code for the functions Y'_i and places
the results in the F array. X goes from initial value, set by user,
to XLIM final value set by user. The user initializes the Y_i.
sim_difeq.f90 Fortran version
sim_difeq_f90.out
sim_difeq.java Java version
sim_difeq_java.out close to same results
sim_difeq.c C version
sim_difeq_c.out close to same results
sim_difeq.m MatLab version (main)
ode_test1.m funct computes derivatives
ode_test4.m funct computes derivatives
more ordinary differential equations for Matlab solution
Some partial differential equations are difficult to solve symbolically
and some are difficult to solve numerically.
This lecture covers the numerical solution of simple partial
differential equations where the boundary values are known.
See simple definitions.
We are told there is an unknown function u(x,y) that satisfies
the partial differential equation:
(read 'd' as the partial derivative symbol, '^' as exponent)
d^2u(x,y)/dx^2 + d^2u(x,y)/dy^2 = c(x,y)
and we are given the function c(x,y)
and we are given the values of u(x,y) on the boundary
of the domain of x,y in xmin,ymin to xmax,ymax
To compute the numerical solution of u(x,y) at a set of points
we choose the number of points in the x and y directions, nx, ny.
Thus we have a grid of cells typically programmed as a
two dimensional array.
The number of solution points, unknowns, is typically called
the degrees of freedom, DOF.
From the above we get a step size hx = (xmax-xmin)/(nx-1) and
hy = (ymax-ymin)/(ny-1). For our first solution we choose
hx = hy = h and just use a step size of h.
If we need u(x,y) at some point not on our solution grid,
we can use two dimensional interpolation to find the desired value.
Now, write the differential equation as a second order
numerical difference equation.
(Numerical derivatives from nderiv.out order 2, points 3, term 1)
d^2u(x,y)/dx^2 + d^2u(x,y)/dy^2 =
1/h^2(u(x-h,y)-2u(x,y)+u(x+h,y)) +
1/h^2(u(x,y-h)-2u(x,y)+u(x,y+h)) +O(h^2)
Setting the above equal to c(x,y) and collecting terms
d^2u(x,y)/dx^2 + d^2u(x,y)/dy^2 = c(x,y) becomes
1/h^2(u(x-h,y)+u(x+h,y)+u(x,y-h)+u(x,y+h)-4u(x,y))=c(x,y)
now solve for a new u(x,y), based on neighboring grid points
u(x,y) = (u(x-h,y)+u(x+h,y)+u(x,y-h)+u(x,y+h)-c(x,y)*h^2)/4
The formula for u(x,y) is programmed based on indexes i and j
such that x=xmin+i*h, y=ymin+j*h for i=1,nx-2 j=1,ny-2
Note that the boundary values are set and unchanged for the four
sides i=0 for j=0,ny-1 i=nx-1 for j=0,ny-1
j=0 for i=0,nx-1 j=ny-1 for i=0,nx-1
and the interior cells are set to some value, typically zero.
Then iterate, computing new interior cells based on the previous
set of cells including the boundary.
Compute the absolute value of the difference between each new
interior cell and the previous interior cell. A stable problem
will result in this value getting monotonically smaller.
The stopping condition may be based on the above difference
value or on the number of iterations.
To demonstrate the above development, a known u(x,y) is used.
The boundary values are set and the iterations computed.
Since the solution is known, the error can be computed at
each cell and reported for each iteration.
At several iterations, the computed solution and the errors
from the known exact solution are printed.
The same code has been programmed in "C", Java, Fortran 95, Ada 95 and
Matlab as shown below with file types .c, .java, .f90, .adb and .m:
Choose the language you like best and get an understanding of
how the method above was programmed.
"C" source code pde3.c
output of pde3.c
Fortran 95 source code pde3.f90
output of pde3.f90
Java source code pde3.java
output of pde3.java
Ada 95 source code pde3.adb
output of pde3.adb
Matlab 7 source code pde3.m
output of pde3.m
You may tailor any of the above code to solve second order
partial differential equations in two independent variables.
1) You need a function c(x,y) that is the forcing function, the right
hand side of the differential equation.
2) You need a function u(x,y) that provides at least the boundary
conditions. If your u(x,y) is not a test solution, remove the "check"
code. And a choice of xmin, xmax, ymin, ymax.
3) If your differential equation is different, then use nderiv.out
to find the coefficients and create the function u00(i,j) that
computes the next value of a solution point from the surrounding
previous solution points.
The following examples will be covered for a more general case
in the next lecture. The same problem is solved as given above,
using a method of solving simultaneous linear equations, rather
than an iterative method. The files corresponding to the above
examples, with minimal modifications, are:
derivation of pde3_eq.c
"C" source code pde3_eq.c
output of pde3_eq.c
Matlab 7 source code pde3_eq.m
output of pde3_eq.m
Method using solution of simultaneous equations,
rather than an iterative method
Some simplified definitions and terminology related to differential equations:
Given a function y = f(x)
In plain text, without mathematical symbols, the derivative of
f(x) with respect to x may be written many ways:
df(x)/dx = df/dx = f'(x) = f'
or for y=f(x) dy/dx = y'(x) = y'
Plain "d" will be used also for the partial derivative symbol.
The fundamental theorem of calculus is:
Given f(x) is continuous on the interval a <= x <= b and
F(x) is the indefinite integral of f(x) then
integral from x=a to x=b of f(x)dx = F(b) - F(a)
Derivative f(x) = dF(x)/dx = F'(x) = F'
Note that any letters, upper or lower case, may be used for any
function or variable.
Second Derivative is simply the derivative of the first derivative
d^2 f(x)/dx^2 = d (df(x)/dx)/dx = f''(x) = f''
Ordinary Differential Equation (ODE)
A differential equation with only one independent variable.
Example: d^2 f(x)/dx^2 = -f(x) f(0)=0, f'(0)=1 thus f(x)=sin(x)
Partial Differential Equation (PDE)
A differential equation with more than one independent variable.
Example: dU/dx + dU/dy = f(x,y) given f(x,y)=x+y then U(x,y)=xy
Dimension of a differential equation is the number of independent
variables. Typically the independent variables are:
x for one dimension
x,y for two dimensions
x,y,z for three dimensions
x,y,z,t for four dimensions, t usually being time.
x,y,z,u,v,w,t seven independent variables for fluid problems
With many combinations, we use a shorthand, for example for
a function U(x,y,z):
Uxxx is notation for third derivative with respect to x
Uxy is notation for partial derivative with respect to x and
with respect to y
Uyzz is notation for partial derivative with respect to y and
second partial derivative with respect to z
Order of a differential equation is the highest derivative appearing.
Example: First order: dF/dx + dF/dy + F(x,y)^3 + x^4 + y^5 = 0
Example: Second order: d^2 F/dx^2 = F''(x) Uxx
Example: Third order: d^3 F/dx^3 = F'''(x) Uxxx
Example: Fourth order: d^4 F/dx^4 = F''''(x) Uxxxx
Example: Fourth order: d^4 F/dx^2dydz = F''''(x,y,z) = Uxxyz
Degree of a differential equation is the highest power of any derivatives.
Example: First degree(linear): d^2 F/dx^2 + (dF/dx) = 0
Example: Second degree(quadratic): (d^3 F/dx^3) + (dF/dx)^2 = 0
Example: Third degree(cubic): (d^2 F/dx^2)^3 + (d^4 F/dx^4) = 0
Example: Third degree: (d^2 F/dxdy)^3 + d^4 F/dxdydzdt = f(x,y,z,t)
Example: Fourth degree (dF/dx)*(dF/dy)*(dF/dz)*(dF/dt) = 0
Note: A "linear" differential equation has highest degree one
for all orders. No U'*U, U^2, U'*U'' etc.
The solution methods that use solving a linear system of
equations only work with linear differential equations.
Higher degree differential equations require solving
non-linear systems of equations, see Newton's method.
Note: The Right Hand Side, RHS, is the part to the right of
the equal sign. This may be zero or a computable function,
yet must not contain any derivatives or the solution function.
All derivatives and the solution must be to left of equal sign.
Example 7*U'(x,y,z)+x*U''(x,y,z)-z*U(x,y,z) = c(x,y,z)^3+x
Initial value differential equation problems have values given at one end
of the domain.
Boundary value differential equation problems have values given at all ends
of the domain.
One dimensional has two values.
Two dimensional has values on an enclosing path. e.g. square, circle
Three dimensional has values on an enclosing surface. e.g. cube, sphere
Four dimensional has values on an enclosing volume. e.g. hyper cube
Dirichlet boundary values are the values at a point on the boundary.
Neumann boundary values are the derivative at a point on the boundary,
typically the first derivative in the direction of the outward normal.
Mixed value partial differential equation problems may have some variables
initial value and some variables boundary value. Often the time
variable is given only as an initial value.
Types of second order, first degree, partial differential equations
in two variables, very common, thus named:
Given A d^2 U/dx^2 + 2B d^2 U/dxdy + C d^2 U/dy^2 + other terms = f(x,y)
Parabolic when B^2 = A C e.g. Diffusion equation
one unique real characteristic
system has one zero eigenvalue, others
all positive or all negative
Elliptic when B^2 < A C e.g. Laplace's equation
two unique complex characteristics
system has eigenvalues all positive or all negative
Hyperbolic when B^2 > A C e.g. Wave equation
two unique real characteristics
system has no zero eigenvalues and at least one
positive and one negative
Parabolic Diffusion equation: B=0, C=0
k d^2 V/dx^2 - dV/dt = 0
Elliptic Laplace's equation: B=0, A>0, C>0
d^2 V/dx^2 + d^2 V/dy^2 = 0
Hyperbolic Wave equation: B=0, A>0, C<0
d^2 V/dx^2 - 1/c^2 d^2 V/dt^2 = 0
The above definitions are motivated by the equation of a cone
cut by a plane at various angles, giving the conic sections:
parabola, ellipse and hyperbola.
The definitions are extended by some authors to first order
equations such as dV/dt + a dV/dx = 0 to be called a hyperbolic
one-way wave equation.
The three example equations above are called "homogeneous" because
every term contains the unknown function; there is no forcing term.
More general equations have a forcing function that would replace
the "0" with f(x,y) or f(x,t) and thus have an inhomogeneous equation.
Using web math symbols:
Laplace Equation is ΔU = 0 or ∇2U = 0 or
∂2U/∂x2 + ∂2U/∂y2 + ∂2U/∂z2 = 0
Poisson Equation is ΔU = f or ∇2U = f or
∂2U/∂x2 + ∂2U/∂y2 + ∂2U/∂z2 = f(x,y,z)
Some terminology is used for various methods of numerical solution
of differential equations.
The Finite Difference Method, FDM, replaces the continuous differential
operators with finite difference approximations. The order of
the approximation may be checked by substitution of the Taylor series.
The FDM is explicit if the solution at the next cell can be expressed
entirely in terms of previously computed cells. Our "nderiv".
The FDM is implicit if the solution at the next group of cells must
be represented as a set of simultaneous equations based on a
previous group of cells.
The Finite Element Method, FEM, develops a system of simultaneous
equations for the solution at every cell.
The FEM is explicit if one solution of the simultaneous equations
yields the solution of the differential equation.
The FEM is iterative if each solution of the simultaneous equations
yields the next approximation to the solution of the differential equation.
more definitions on ODE and PDE
ODE Ordinary Differential Equation - one independent variable
second order - written many ways:
a(x)*ΔU + b(x)*∇U + c(x)*U = f(x)
a(x)*∇2U + b(x)*∇U + c(x)*U = f(x)
a(x)*d2U/dx2 + b(x)*dU/dx + c(x)*U = f(x)
a(x)*U''(x) + b(x)*U'(x) + c(x)*U(x) = f(x)
a(x)*Uxx(x) + b(x)*Ux(x) + c(x)*U(x) = f(x)
x is independent variable
the solution is U(x)
U'(x) is derivative of U(x) with respect to x, written Ux, etc.
functions a(x), b(x), c(x) and f(x) must be known
enough initial conditions must be known for a unique solution
Further definition is needed for choice of coordinate system:
Cartesian, Cylindrical, Polar, Toroidal, etc.
PDE Partial Differential Equation - more than one independent variable
a(x,y)*ΔU + b(x,y)*∇U + c(x,y)*ΔU +
d(x,y)*∇U + e(x,y)*∇U + g(x,y)*U = f(x,y) ambiguous
p(x,y)*ΔU + q(x,y)*∇U + r(x,y)*U = f(x,y) typical
a(x,y)*∇2U + b(x,y)*∇2U + c(x,y)*∇2U +
d(x,y)*∇U + e(x,y)*∇U + g(x,y)*U = f(x,y) ambiguous
p(x,y)*∇2U + q(x,y)*∇U + r(x,y)*U = f(x,y) typical
a(x,y)*∂2U/∂x2 + b(x,y)*∂2U/∂x∂y +
c(x,y)*∂2U/∂y2 + d(x,y)*∂U/∂x +
e(x,y)*∂U/∂y + g(x,y)*U = f(x,y)
a(x,y)*Uxx(x,y) + b(x,y)*Uxy(x,y) + c(x,y)*Uyy(x,y) +
d(x,y)*Ux(x,y) + e(x,y)*Uy(x,y) + g(x,y)*U(x,y) = f(x,y)
x and y are independent variables
the solution is U(x,y)
Ux(x,y) is partial derivative of U(x,y) with respect to x,
written Ux, Note: U' is ambiguous because it could be Ux or Uy
Uxy(x,y) is partial derivative of U(x,y) with respect to x and
partial derivative with respect to y,
written Uxy,
functions a(x,y), b(x,y), c(x,y), d(x,y), e(x,y),
f(x,y) and g(x,y) must be known and computable.
Enough boundary conditions must be known for a unique solution.
in three dimensions x,y becomes x,y,z
in four dimensions x,y becomes x,y,z,t
in n dimensions the variables are in Rn
and boundary conditions are given on the boundary ∂Ω.
Further definition is needed for choice of coordinate system:
Cartesian, Cylindrical, Polar, Toroidal, etc.
Methods for computing numerical solutions replace the continuous
variable x with discrete values x1, x2, x3, ... , x_nx
Uniformly spaced values may be used, given xmin, xmax and h,
xmin, xmin+h, xmin+2h, ... , xmax nx=1+(xmax-xmin)/h
Discrete values are also used for y, z, and t
The numerical solution is computed at the discrete values of
the independent variables U(x1), U(x2), U(x3), ... , U(x_nx)
or U(x1,y1), U(x1,y2), U(x2,y2), ... , U(x_nx,y_ny), etc.
The solution is a set of numbers, often written as
U1, U2, ... U_nx or U[1,1], U[1,2], U[2,2], ... U[nx,ny]
A numerical solution may use a uniform grid or a
nonuniform grid.
A numerical solution method may be iterative, computing
a closer approximation at each step.
A numerical solution may set up a system of linear equations
to solve for the solution values.
A numerical solution may use a combination of iterative and
linear equations to solve for the solution values.
Note that a nonlinear differential equation will create a
nonlinear system of equations to solve. Very difficult!
All of the above statements apply to a set of methods,
generally called discretization.
FEM Finite Element Method is a set of methods for finding the
numerical solution of a differential equation. Within FEM
there are various sub methods including the Galerkin method.
FEM may use equally spaced or variable spaced values for
the independent variables.
FEM may also use triangles, quads, or other polygons for two
independent variables.
FEM may also use tetrahedrons or other solids for three
independent variables. And four dimensional objects for four
independent variables.
see FEM lecture for the rest of the explanation.
cs455_l32.html Finite Element Method
We now extend the analysis and solution to three dimensions and
unequal step sizes.
We are told there is an unknown function u(x,y,z) that satisfies
the partial differential equation:
(read 'd' as the partial derivative symbol, '^' as exponent)
d^2u(x,y,z)/dx^2 + d^2u(x,y,z)/dy^2 + d^2u(x,y,z)/dz^2 = c(x,y,z)
and we are given the function c(x,y,z)
and we are given the values of u(x,y,z) on the boundary
of the domain of x,y,z in xmin,ymin,zmin to xmax,ymax,zmax
To compute the numerical solution of u(x,y,z) at a set of points
we choose the number of points in the x,y and z directions, nx,ny,nz.
The number of non boundary points is called the degrees of freedom, DOF.
Thus we have a grid of cells typically programmed as a
three dimensional array.
From the above we get a step size hx = (xmax-xmin)/(nx-1) ,
hy = (ymax-ymin)/(ny-1) and hz = (zmax-zmin)/(nz-1).
If we need u(x,y,z) at some point not on our solution grid,
we can use three dimensional interpolation to find the desired value.
Now, write the differential equation as a second order
numerical difference equation. The analytic derivatives are
made discrete by numerical approximation. "Discretization."
(Numerical derivatives from nderiv.out order 2, points 3, term 1)
d^2u(x,y,z)/dx^2 + d^2u(x,y,z)/dy^2 + d^2u(x,y,z)/dz^2 =
1/h^2(u(x-h,y,z)-2u(x,y,z)+u(x+h,y,z)) +
1/h^2(u(x,y-h,z)-2u(x,y,z)+u(x,y+h,z)) +
1/h^2(u(x,y,z-h)-2u(x,y,z)+u(x,y,z+h)) + O(h^2)
Setting the above equal to c(x,y,z)
d^2u(x,y,z)/dx^2 + d^2u(x,y,z)/dy^2 + d^2u(x,y,z)/dz^2 = c(x,y,z) becomes
1/hx^2(u(x-hx,y,z)+u(x+hx,y,z)-2u(x,y,z)) +
1/hy^2(u(x,y-hy,z)+u(x,y+hy,z)-2u(x,y,z)) +
1/hz^2(u(x,y,z-hz)+u(x,y,z+hz)-2u(x,y,z)) = c(x,y,z)
now solve for new u(x,y,z), based on neighboring grid points
u(x,y,z) = ((u(x-hx,y,z)+u(x+hx,y,z))/hx^2 +
(u(x,y-hy,z)+u(x,y+hy,z))/hy^2 +
(u(x,y,z-hz)+u(x,y,z+hz))/hz^2 - c(x,y,z)) /
(2/hx^2 + 2/hy^2 + 2/hz^2)
The formula for u(x,y,z) is programmed based on indexes i,j,k
such that x=xmin+i*hx, y=ymin+j*hy, z=zmin+k*hz
for i=1,nx-2 j=1,ny-2 k=1,nz-2
Note that the boundary values are set and unchanged for the six sides:
i=0 for j=0,ny-1 k=0,nz-1    i=nx-1 for j=0,ny-1 k=0,nz-1
j=0 for i=0,nx-1 k=0,nz-1    j=ny-1 for i=0,nx-1 k=0,nz-1
k=0 for i=0,nx-1 j=0,ny-1    k=nz-1 for i=0,nx-1 j=0,ny-1
and the interior cells are set to some value, typically zero.
Then iterate, computing new interior cells based on the previous
set of cells including the boundary.
Compute the absolute value of the difference between each new
interior cell and the previous interior cell. A stable problem
will result in this value getting monotonically smaller.
The stopping condition may be based on the above difference
value or on the number of iterations.
To demonstrate the above development, a known u(x,y,z) is used.
The boundary values are set and the iterations computed.
Since the solution is known, the error can be computed at
each cell and reported for each iteration.
At several iterations, the computed solution and the errors
from the known exact solution are printed.
The same code has been programmed in "C", Java, Fortran 95, Ada 95 and
Matlab 7 as shown below with file types .c, .java, .f90, .adb and .m:
"C" source code pde3b.c, output of pde3b.c
Fortran 95 source code pde3b.f90, output of pde3b.f90
Java source code pde3b.java, output of pde3b.java
Ada 95 source code pde3b.adb, output of pde3b.adb
Matlab 7 source code pde3b.m, output of pde3b.m
It should be noted that a direct solution is possible.
The iterative solution approximates a large set of simultaneous
equations that may instead be solved directly and accurately. For the
same three dimensional problem above, observe the development
of the set of simultaneous equations:
We are told there is an unknown function u(x,y,z) that satisfies
the partial differential equation:
(read 'd' as the partial derivative symbol, '^' as exponent)
d^2u(x,y,z)/dx^2 + d^2u(x,y,z)/dy^2 + d^2u(x,y,z)/dz^2 = c(x,y,z)
and we are given the function c(x,y,z)
and we are given the values of u(x,y,z) on the boundary ub(x,y,z)
of the domain of x,y,z in xmin,ymin,zmin to xmax,ymax,zmax
To compute the numerical solution of u(x,y,z) at a set of points
we choose the number of points in the x,y and z directions, nx,ny,nz.
Thus we have a grid of cells typically programmed as a
three dimensional array.
From the above we get a step size hx = (xmax-xmin)/(nx-1) ,
hy = (ymax-ymin)/(ny-1) and hz = (zmax-zmin)/(nz-1).
If we need u(x,y,z) at some point not on our solution grid,
we can use three dimensional interpolation to find the desired value.
Now, write the differential equation as a second order
numerical difference equation.
(Numerical derivatives from nderiv.out order 2, points 3, term 1)
d^2u(x,y,z)/dx^2 + d^2u(x,y,z)/dy^2 + d^2u(x,y,z)/dz^2 =
1/h^2(u(x-h,y,z)-2u(x,y,z)+u(x+h,y,z)) +
1/h^2(u(x,y-h,z)-2u(x,y,z)+u(x,y+h,z)) +
1/h^2(u(x,y,z-h)-2u(x,y,z)+u(x,y,z+h)) + O(h^2)
Setting the above equal to c(x,y,z)
d^2u(x,y,z)/dx^2 + d^2u(x,y,z)/dy^2 + d^2u(x,y,z)/dz^2 = c(x,y,z) becomes
1/hx^2(u(x-hx,y,z)+u(x+hx,y,z)-2u(x,y,z)) +
1/hy^2(u(x,y-hy,z)+u(x,y+hy,z)-2u(x,y,z)) +
1/hz^2(u(x,y,z-hz)+u(x,y,z+hz)-2u(x,y,z)) = c(x,y,z)
Now, rearrange terms to collect the u(x,y,z) coefficients:
u(x-hx,y,z)/hx^2 + u(x+hx,y,z)/hx^2 +
u(x,y-hy,z)/hy^2 + u(x,y+hy,z)/hy^2 +
u(x,y,z-hz)/hz^2 + u(x,y,z+hz)/hz^2 -
u(x,y,z)*(2/hx^2 + 2/hy^2 + 2/hz^2) = c(x,y,z)
We build a linear system of simultaneous equations M U = V
by computing the values of the matrix M and right hand side vector V.
(The system of equations is linear if the differential equation is linear,
otherwise a system of nonlinear equations must be solved.)
The linear subscript of U corresponding to u(i,j,k) is
m = (i-1)*(ny-2)*(nz-2) + (j-1)*(nz-2) + k
and the rows and columns and V use the same linear subscripts.
Now, using indexes rather than x, x+hx, x-hx,
solve system of linear equations for u(i,j,k).
u(i,j,k)=u(xmin+i*hx, ymin+j*hy, zmin+k*hz)
u(i-1,j,k)/hx^2 + u(i+1,j,k)/hx^2 +
u(i,j-1,k)/hy^2 + u(i,j+1,k)/hy^2 +
u(i,j,k-1)/hz^2 + u(i,j,k+1)/hz^2 -
u(i,j,k)*(2/hx^2 + 2/hy^2 + 2/hz^2) = c(x,y,z)
for i=1,nx-2 j=1,ny-2 k=1,nz-2 while substituting
u(i,j,k) boundary values when i=0 or nx-1, j=0 or ny-1,
k=0 or nz-1 using x=xmin+i*hx, y=ymin+j*hy, z=zmin+k*hz
Remember: hx, hy, hz are known constants, c(x,y,z) is computable.
The matrix has m = (nx-2)*(ny-2)*(nz-2) rows and columns,
solving m equations in m unknowns. The right hand side vector
is the c(x,y,z) values with possibly boundary terms subtracted.
The unknowns are u(i,j,k) for i=1,nx-2, j=1,ny-2, k=1,nz-2 .
The first row of the matrix is for the unknown u(1,1,1) i=1, j=1, k=1
ub(xmin+0*hx,ymin+1*hy,zmin+1*hz)/hx^2 + u(2,1,1)/hx^2 +
ub(xmin+1*hx,ymin+0*hy,zmin+1*hz)/hy^2 + u(1,2,1)/hy^2 +
ub(xmin+1*hx,ymin+1*hy,zmin+0*hz)/hz^2 + u(1,1,2)/hz^2 -
u(1,1,1)*(2/hx^2 + 2/hy^2 + 2/hz^2) = c(xmin+1*hx,ymin+1*hy,zmin+1*hz)
Matrix cells in first row are zero except for:
M(1,1,1)(1,1,1) = -(2/hx^2 + 2/hy^2 + 2/hz^2)
M(1,1,1)(1,1,2) = 1/hz^2
M(1,1,1)(1,2,1) = 1/hy^2
M(1,1,1)(2,1,1) = 1/hx^2
V(1,1,1) = c(xmin+1*hx,ymin+1*hy,zmin+1*hz)
-ub(xmin+0*hx,ymin+1*hy,zmin+1*hz)/hx^2
-ub(xmin+1*hx,ymin+0*hy,zmin+1*hz)/hy^2
-ub(xmin+1*hx,ymin+1*hy,zmin+0*hz)/hz^2
Note that three of the terms are boundary terms and thus have a
computed value that is subtracted from the right hand side vector V.
The row of the matrix for the u(2,2,2) unknown has no boundary terms
u(1,2,2)/hx^2 + u(3,2,2)/hx^2 +
u(2,1,2)/hy^2 + u(2,3,2)/hy^2 +
u(2,2,1)/hz^2 + u(2,2,3)/hz^2 -
u(2,2,2)*(2/hx^2 + 2/hy^2 + 2/hz^2) = c(xmin+2*hx,ymin+2*hy,zmin+2*hz)
Matrix cells in row (2,2,2) are zero except for:
M(2,2,2)(1,2,2) = 1/hx^2
M(2,2,2)(3,2,2) = 1/hx^2
M(2,2,2)(2,1,2) = 1/hy^2
M(2,2,2)(2,3,2) = 1/hy^2
M(2,2,2)(2,2,1) = 1/hz^2
M(2,2,2)(2,2,3) = 1/hz^2
M(2,2,2)(2,2,2) = -(2/hx^2 + 2/hy^2 + 2/hz^2)
V(2,2,2) = c(xmin+2*hx,ymin+2*hy,zmin+2*hz)
The number of rows in the matrix is equal to the number of
equations to solve simultaneously and is equal to the
number of unknowns, the degrees of freedom.
The matrix is quite sparse and may be considered "band diagonal".
Typically, special purpose simultaneous equation solvers are
used, often with preconditioners.
For a PDE with all constant coefficients, as in this case,
there are only four possible values in the matrix. These are
computed only once at the start of building the matrix.
A PDE with functions of x,y,z as coefficients is covered later.
Solving the system of equations for our test problem gave
an error on the order of 10^-12, compared with
the iterative solution error of 10^-4 after 150 iterations.
The same code has been programmed in "C", Java, Fortran 95, Ada 95 and
Matlab 7 as shown below with file types .c, .java, .f90, .adb and .m:
"C" source code pde3b_eq.c, output of pde3b_eq.c
Fortran 95 source code pde3b_eq.f90, output of pde3b_eq.f90
Java source code pde3b_eq.java, output of pde3b_eq.java
Ada 95 source code pde3b_eq.adb, output of pde3b_eq.adb
Matlab 7 source code pde3b_eq.m, output of pde3b_eq.m
Please note that the solution of partial differential equations
gets significantly more complicated if the solution is not
continuous or is not continuously differentiable.
You can tailor the code presented above for many second order partial
differential equations in three independent variables.
1) You must provide a function u(x,y,z) that computes the solution
on the boundaries. It does not have to be a complete solution, and
in that case the "check" code should be removed.
2) You must provide a function c(x,y,z) that computes the
forcing function, the right hand side of the differential equation.
3a) If the differential equation is not d^2/dx^2 + d^2/dy^2 +d^2/dz^2
then use coefficients from nderiv.out to create a new function
u000(i,j,k) that computes a new value of the solution at i,j,k
from the neighboring coordinates for the next iteration.
3b) If the differential equation is not d^2/dx^2 + d^2/dy^2 +d^2/dz^2
then use coefficients from nderiv.out to create a new matrix
initialization function and then solve the simultaneous equations.
3c) There may be constant or function multipliers in the given PDE.
For example exp(x+y/2) * d^2/dx^2 + 7 * d^2/dy^2 etc.
Constant multipliers are just multiplied when creating the
discrete equations. Functions of the independent variables, x,y
must be evaluated at the point where the discrete derivative is
being applied.
For an extended example of discretization of a fourth order PDE
in two dimensions see discrete2d.txt
Source code is available to compute coefficients for derivatives
of many orders. The source code is available in many languages,
deriv.h gives a good overview:
/* deriv.h compute formulas for numerical derivatives */
/* returns npoints values in a or c */
/* 0 <= point < npoints */
void deriv(int order, int npoints, int point, int *a, int *bb);
/* c[point][] = (1.0/((double)bb*h^order))*a[] */
void rderiv(int order, int npoints, int point, double h, double c[]);
/* c includes bb and h^order */
/* derivative of f(x) = c[0]*f(x+(0-point)*h) + */
/* c[1]*f(x+(1-point)*h) + */
/* c[2]*f(x+(2-point)*h) + ... + */
/* c[npoints-1]*f(x+(npoints-1-point)*h */
/* */
/* nuderiv give x values, not uniformly spaced, rather than h */
void nuderiv(int order, int npoints, int point, double x[], double c[]);
deriv.h
deriv.c
deriv.py
deriv.f90
Deriv.scala
deriv.adb
nuderiv.java non uniform, most general
Then the building of the simultaneous equations becomes:
double xg[] = {-1.0, -0.9, -0.8, -0.6, -0.3, 0.0,
                0.35, 0.65, 0.85, 1.0, 1.2};
double cxx[] = new double[11];    // nuderiv coefficients
// coefdxx * d^2 u(x,y)/dx^2 for row ii in A matrix
coefdxx = 1.0;                    // could be a function of xg[i]
new nuderiv(2, nx, i, xg, cxx);   // derivative coefficients
for(j=0; j<nx; j++)               // cs is index of RHS
{
  if(j==0 || j==nx-1)
    A[ii][cs] = A[ii][cs] - coefdxx*cxx[j]*ubound(xg[j]);
  else
    A[ii][j] = A[ii][j] + coefdxx*cxx[j];
}
extracted and modified from
pdenu22_eq.java
We can extend solutions to four dimensions, typically the physical
dimensions x,y,z and time t. (And use many languages.)
We can extend to solutions that have fourth derivatives in all
four dimensions.
A "well posed PDE problem" is defined as a specific PDE with
specific boundary conditions such that:
1) The solution is unique.
2) The solution is continuous inside and on the boundaries.
3) The solution is continuously differentiable.
A well posed problem is ideal for high order methods because
the solution can be approximated by a polynomial. The polynomial
may have to be a high order polynomial.
With a1, a2, ... a5 being computable functions a1(x) etc.,
the notation Ux means the first derivative of U(x) with respect to x,
and Uxxxx means the fourth derivative of U(x) with respect to x.
The maximal linear fourth order PDE in one dimension is
a1*Uxxxx + a2*Uxxx + a3*Uxx + a4*Ux + a5*U = f(x)
With a1, a2, ... a15 being computable functions a1(x,y) etc.,
the notation Uxy means the derivative of U(x,y)
with respect to x and with respect to y.
The maximal linear fourth order PDE in two dimensions is
a1*Uxxxx + a2*Uxxxy + a3*Uxxyy + a4*Uxyyy + a5*Uyyyy +
a6*Uxxx + a7*Uxxy + a8*Uxyy + a9*Uyyy +
a10*Uxx + a11*Uxy + a12*Uyy +
a13*Ux + a14*Uy + a15*U = f(x,y)
With a1, a2, ... a70 being computable functions a1(x,y,z,t) etc.
The notation Uxyzt means the derivative of U(x,y,z,t)
with respect to x and with respect to y and with respect to z
and with respect to t.
The maximal linear PDE in four dimensions is
a fourth order PDE in four dimensions:
a1*Uxxxx + a2*Uxxxy + a3*Uxxxz + a4*Uxxxt + a5*Uxxyy +
a6*Uxxyz + a7*Uxxyt + a8*Uxxzz + a9*Uxxzt + a10*Uxxtt +
a11*Uxyyy + a12*Uxyyz + a13*Uxyyt + a14*Uxyzz + a15*Uxyzt +
a16*Uxytt + a17*Uxzzz + a18*Uxzzt + a19*Uxztt + a20*Uxttt +
a21*Uyyyy + a22*Uyyyz + a23*Uyyyt + a24*Uyyzz + a25*Uyyzt +
a26*Uyytt + a27*Uyzzz + a28*Uyzzt + a29*Uyztt + a30*Uyttt +
a31*Uzzzz + a32*Uzzzt + a33*Uzztt + a34*Uzttt + a35*Utttt +
a36*Uxxx + a37*Uxxy + a38*Uxxz + a39*Uxxt + a40*Uxyy +
a41*Uxyz + a42*Uxyt + a43*Uxzz + a44*Uxzt + a45*Uxtt +
a46*Uyyy + a47*Uyyz + a48*Uyyt + a49*Uyzz + a50*Uyzt +
a51*Uytt + a52*Uzzz + a53*Uzzt + a54*Uztt + a55*Uttt +
a56*Uxx + a57*Uxy + a58*Uxz + a59*Uxt + a60*Uyy +
a61*Uyz + a62*Uyt + a63*Uzz + a64*Uzt + a65*Utt +
a66*Ux + a67*Uy + a68*Uz + a69*Ut + a70*U = f(x,y,z,t)
Sample code for testing with constant coefficients is:
"C" source code test_4d.c
"C" header file test_4d.h
"C" header file deriv.h
rderiv being tested deriv.c
"C" test code check_test_4d.c
"C" test result check_test_4d.out
This includes u(x,y,z,t) and all derivatives through fourth order.
Just changing number of points and limits for more checking
"C" test code check2_test_4d.c
"C" test result check2_test_4d.out
Checking non uniform grid
"C" test code checknu_test_4d.c
"C" test result checknu_test_4d.out
"C" header file nuderiv.h
rderiv being tested nuderiv.c
Ada package test_4d.ads
Ada package body test_4d.adb
rderiv being tested deriv.adb
Ada test code check_test_4d.adb
test results check_test_4d_ada.out
This includes u(x,y,z,t) and all derivatives through fourth order.
Fortran module test_4d.f90
rderiv1 being tested deriv.f90
Fortran test code check_test_4d.f90
Fortran test result check_test_4d_f90.out
This includes u(x,y,z,t) and all derivatives through fourth order.
We may need to extend the computation to use higher order methods
to obtain the needed accuracy.
A few samples include:
Four dimensions, fourth order
"C" source code pde44_eq.c, output is pde44_eq_c.out
"C" source code simeq.h
"C" source code simeq.c
"C" source code deriv.h
"C" source code deriv.c
Java source code pde44_eq.java, output is pde44_eq_java.out
Java source code simeq.java
Java source code rderiv.java
Fortran source code pde44_eq.f90, output is pde44_eq_f90.out
Fortran source code simeq.f90
Fortran source code deriv.f90
Ada source code pde44_eq.adb, output is pde44_eq_ada.out
Ada source code simeq.adb
Ada source code rderiv.adb
Ada source code real_arrays.ads
Ada source code real_arrays.adb
From Lecture 32, using Finite Element Method, FEM
fem_check44_la.c fourth order, four dimension
fem_check44_la_c.out output with debug print
fem_check44_la.f90 fourth order, four dimension
fem_check44_la_f90.out output with debug print
fem_check44_la.adb fourth order, four dimension
fem_check44_la_ada.out output with debug print
fem_check44_la.java fourth order, four dimension
fem_check44_la_java.out output with debug print
Other files, that are needed by some examples above:
laphi.h "C" header file
laphi.c code through 4th derivative
laphi.ads Ada package specification
laphi.adb code through 4th derivative
laphi.f90 module through 4th derivative
laphi.java class through 4th derivative
This example computes the error of the numeric solution
against the definition of the PDE, and against the known solution.
The numeric solution is written to a file and gnuplot is used.
For the simple Laplace equation, the maximum error versus the
domain size, degrees of freedom and quadrature order
(xmax, ymax, nx, ny, npx, npy) is shown in:
fem_laplace_la.java various cases
fem_laplace_la_java.out output
Solution U(x,y) = exp(x)*sin(y)
xmin=0.0, ymin=0.0 DOF quadrature maximum error against known
xmax=1.0, ymax=1.0, nx= 4, ny= 4, npx=3, npy=3, maxerr=3.88E-4
xmax=1.0, ymax=1.0, nx= 4, ny= 4, npx=4, npy=4, maxerr=2.62E-4
xmax=1.0, ymax=1.0, nx= 4, ny= 4, npx=6, npy=6, maxerr=2.62E-4
xmax=1.0, ymax=1.0, nx= 6, ny= 6, npx=4, npy=4, maxerr=3.94E-6
xmax=1.0, ymax=1.0, nx= 6, ny= 6, npx=6, npy=6, maxerr=1.33E-6
xmax=1.0, ymax=1.0, nx= 6, ny= 6, npx=8, npy=8, maxerr=1.33E-6
xmax=1.0, ymax=1.0, nx= 8, ny= 8, npx=6, npy=6, maxerr=4.64E-9
xmax=1.0, ymax=1.0, nx= 8, ny= 8, npx=8, npy=8, maxerr=2.27E-9
xmax=1.5, ymax=1.5, nx= 8, ny= 8, npx=8, npy=8, maxerr=1.09E-7
xmax=3.0, ymax=3.0, nx= 8, ny= 8, npx=8, npy=8, maxerr=8.71E-5
xmax=3.0, ymax=3.0, nx=10, ny=10, npx=8, npy=8, maxerr=1.38E-6
xmax=3.0, ymax=6.3, nx=10, ny=10, npx=8, npy=8, maxerr=2.06E-4
Non uniformly distributed vertices
A PDE of fourth order in four dimensions with non uniformly
distributed vertices of arbitrary geometry is difficult to
solve accurately. The building blocks are non uniform
four dimensional discretization, nuderiv4d.java.
The test program to check all 70 possible derivatives is
test_nuderiv4d.java.
A simple PDE
Uxxxx(x,y,z,t) + Uyyyy(x,y,z,t) + Uzzzz(x,y,z,t) + Utttt(x,y,z,t) =
2*(exp(x)+exp(y)+exp(z)+exp(t))
with solution
U(x,y,z,t) = exp(x)+exp(y)+exp(z)+exp(t)
is shown below.
It runs very fast, yet accuracy problems appear as the number of
unknown vertices increases.
This is very sensitive to the independence of the underlying
grid points.
nuderiv4d.java basic discretization
test_nuderiv4d.java test discretization
pde44e_nuderiv4d.java PDE source code
pde44e_nuderiv4d_1_java.out PDE solution
A third order PDE over the more linearly independent 4D grid points
shows the error degradation with more free vertices.
The output has three accuracy tests, for 1, 5 and 10 free nodes.
pde34_nuderiv4d.java PDE source code
pde34_nuderiv4d_1_java.out PDE solution
Error increases with higher derivatives and fewer points.
The minimum number of points to compute a derivative of order n is n+1.
Using more points decreases the error, until numeric failure.
e.g. without multiple precision, third and fourth order derivatives
fail with 11 points. An example of the variation of error from
computing the derivative on a sine wave is shown by
test_deriv.c source to test
deriv.c basic discretization, uniform spacing
test_deriv_c.out output showing test
Just general tests, Python 2 (.py) and Python 3 (.py3)
deriv.py basic discretization, uniform spacing
test_deriv.py source to test
test_deriv_py.out output showing test
test_deriv.py3 source to test
test_deriv_py3.out output showing test
rderiv.py just deriv uniform spacing
test_rderiv.py source to test
test_rderiv_py.out output showing test
test_rderiv.py3 source to test
test_rderiv_py3.out output showing test
nuderiv.py just deriv, non uniform spacing
test_nuderiv.py source to test
test_nuderiv_py.out output showing test
More testing in Ruby
Inverse.rb Inverse class
Deriv.rb Deriv class
test_deriv.rb source to test
test_deriv_rb.out output showing test
Non uniform discretization follows the same error pattern and the maximum
error goes approximately as the largest spacing between points.
Another Laplace, fourth order, four dimensions,
with parallel (thread) solution
This PDE needed 12th order discretization to obtain an
accuracy near 0.0001. Almost all of the execution time
was used solving 10,000 equations in 10,000 unknowns, DOF.
10 cores were used to keep the execution time to below
20 minutes rather than about 3 hours for a single core
execution.
∂⁴U(x,y,z,t)/∂x⁴ + ∂⁴U(x,y,z,t)/∂y⁴ + ∂⁴U(x,y,z,t)/∂z⁴ + ∂⁴U(x,y,z,t)/∂t⁴ +
2 ∂⁴U(x,y,z,t)/∂x²∂y² + 2 ∂⁴U(x,y,z,t)/∂x²∂z² +
2 ∂⁴U(x,y,z,t)/∂x²∂t² + 2 ∂⁴U(x,y,z,t)/∂y²∂z² +
2 ∂⁴U(x,y,z,t)/∂y²∂t² + 2 ∂⁴U(x,y,z,t)/∂z²∂t² -
16 U(x,y,z,t) = f(x,y,z,t) = 0
pde_bihar44tl_eq.c, pde_bihar44tl_eq_c.out, tsimeq.c
pde_bihar44tl_eq.adb, pde_bihar44tl_eq_ada.out, psimeq.adb
Plotted with plot4d_gl, one static view of the 4D solution is:
One version of the Navier-Stokes equation
Each term above has units of length over time squared, acceleration,
in meters per second squared. u is the three component velocity
vector, each component in meters per second, ρ is the fluid density
in kilograms per cubic meter, p is the pressure in newtons per square meter,
μ is the dynamic viscosity newton seconds per square meter.
X is externally applied acceleration, F/m, newtons per kilogram.
Another representation of the Navier-Stokes equation, multiplying the
above equation by ρ, yields terms that are force per unit volume,
newtons per cubic meter. del, ∇, is the differential operator
gradient. v is the velocity vector. b is externally applied acceleration.
navier_gl.txt notes
navier_gl.c work in progress
navier_gl.out output attempt
Some notes for your consideration:
1) You are expected to be intelligent enough to know the practical
limitations of equations that are given to you. When you develop
the equations, you must take into account practical limitations.
Many scholarly textbooks on differential equations make the
statement that a differential equation describing the physical
world is an approximation of what will actually happen. If you
keep stretching a spring using F = k x, the equation only
applies until the spring breaks.
2) We use the statement that "mass is conserved" in many physical
equations. Understand that the limits, usually unstated, are
"in a closed system and with no E = m c^2". Typically fluid equations
do not account for evaporation or fluid loss due to leaks or spillage.
The equations assume an absolutely closed system.
3) We use, as a fact "energy is conserved" in many physical equations.
Energy may take many forms and energy is easily converted from one
form to another and may not be taken into account by a specific equation.
For example, potential energy, a given mass at a given height, may
be released and potential energy is converted to kinetic energy.
Then, splat, the mass hits the ground in an inelastic collision and
has zero potential and zero kinetic energy. Most of the energy is
converted to heat energy with the possible conversion to some
chemical energy and some electrical energy.
Understand that heat and electrical energy can take the form
of radiation and leave what you are considering a "closed system."
4) Kinetic energy is not conserved during an inelastic collision!
Fortunately, molecules of most fluids do not stick very often.
Surface tension must be taken into account when the fluid has
a surface open to the atmosphere. Capillary action effects must
be taken into account when a fluid "wets" a vertical surface.
5) When considering an infinitesimal volume, for example a cube in
a Cartesian coordinate system, we generally take z to be the
vertical axis, meaning along a gravitational force line. We
consider a non compressible fluid in this cube, with side s, to have
pressure applied to all six faces. Several cases are possible:
a) The fluid is stationary; what is the pressure on each face?
We define pressure as positive force divided by area, directed into the cube.
aa) The cube of fluid has the same fluid on all sides.
Because of gravity, not all faces have the same pressure.
Given pressure P on the more positive X face, there must
be the same pressure P on the more negative X face, else
the cube would accelerate and move. The pressure on both
Y faces must be the same P, else fluid would move into or
out of the cube in the Y direction.
But we are making an assumption that the pressure is constant
over the X and Y faces; this is an approximation. The
fluid has density and thus the mass is density times volume.
Mass is attracted by gravity. We can reasonably assume the force
of gravity does not change enough to be considered for the height
s, of our infinitesimal cube. Yet the pressure on the more positive
Z face must be considered different from the pressure on
the more negative Z face. Gravity puts a force of mass times
the gravitational constant g in the negative Z direction
on the fluid in the cube. With g taken as a constant, the
force is linear in the Z direction, thus the pressure P
on the X and Y faces is actually the pressure at the center
of each face. The pressure on the top, most positive Z face
is P - 1/2 rho * s * g and the pressure on the bottom,
most negative Z face is P + 1/2 rho * s * g. Pressure must
increase as depth increases.
ab) One or more faces are against an immovable wall.
Same as aa)
b) The fluid is moving at a constant velocity.
ba) The cube of fluid has the same velocity as fluid on all sides.
Same as aa)
bb) The fluid against some face is moving at a different velocity.
Now we can not ignore viscosity. Viscosity is essentially a
measure of sliding resistance between the fluids at the faces
of two adjacent infinitesimal cubes.
c) The fluid is accelerating.
All of the above must be taken into account with the addition
of F = m a applied to the center of mass of the cube.
Consider the X direction with pressure P1 on the most negative
X face and a smaller P2 on the most positive X face. We can
apply the force (P1 - P2)*s^2 in the positive X direction to
the center of mass. Differences in the pressure on the Y faces are
independently applied in the Y direction. Use caution in
the Z direction, subtracting the difference in pressure due
to gravity because this does not impart an acceleration on
the center of mass when the infinitesimal cube has fluid or
an immovable surface under the cube.
It may be too late, but, don't panic, this was a small part of
Computational Fluid Dynamics, CFD. CFD is a large area and modeling
includes supersonic flight in complex scenarios.
More references:
navier_stokes.equation
navier_stokes_2d.equation
navier.equations brief
navier_fem.pdf technical paper
metric_units.txt metric definitions
also used in lecture 22 space satellite problem
air_prop.txt air properties
physics1.txt equations
convert_metric.pdf conversion
units.txt units definitions
nuderiv.java basic non uniform grid derivative
rderiv.java basic uniform grid derivative
simeq.java basic simultaneous equation
deriv.h basic derivatives
deriv.c basic derivatives
real_arrays.ads 2D arrays and operations
real_arrays.adb 2D arrays and operations
integer_arrays.ads 2D arrays and operations
integer_arrays.adb 2D arrays and operations
Plotted output from pde49hn_eq.c execution
plot9d.java plotter source code
User can select any two variables for 3D view.
User can select values for other variables, option to run all cases.
You won't find many open source or commercial 9D PDE packages;
many lesser problems have many open source and commercial packages.
Go over WEB pages Lecture 1 through 29, including:
3a, 3b, 18a, 24a, 24b, 27a and 28a.
Open book, open notes
Read the instructions.
Follow the instructions, else lose points.
Read the question carefully.
Answer the question that is asked, not the question you want to answer.
Go over Quiz1 and Quiz2. Some of those questions may be reused.
Go over Lecture 9 Review, Lecture 19 Review.
Things you should know:
It is best to find working code to do the numerical computation that
you need, rather than to develop the code yourself.
Thoroughly test any numerical code you are planning to use.
Convert existing, tested, numerical code to the language of your choice.
It is usually better to convert working numerical code to the language
of your choice, rather than creating a multi language interface.
(The exception to this suggestion is LAPACK.)
Modify the interfaces as needed for your application.
Do not put trust in benchmarks that others have run. Run your own benchmarks.
It is possible for operating system code or library code to have an
error that causes incorrect time for a benchmark.
A benchmark must run long enough to avoid error due to the hardware
or software quantization of time. Ten seconds has been found acceptable
on most computers. Less than one second has been found to be bad on
a few computers.
Dead code elimination can cause a benchmark to appear to run very fast.
Using an "if" statement or a "print" statement on a computed variable
can prevent dead code elimination.
If you are unable to run a benchmark yourself, try to find benchmarks
that resemble what you are interested in.
It is possible to compute derivatives very accurately with high
order equations. A function is available in a few languages to
compute the required coefficients.
Derivatives can be computed for any function that can be evaluated.
By using more function evaluations, better accuracy can be obtained.
Making the step size extremely small may hurt accuracy.
A second derivative can be computed by numerically computing two
successive first derivatives. Yet, accuracy will be better when
using the formulas for second order derivatives.
More function evaluations are required in order to maintain the
same order of accuracy for each higher derivative.
Ordinary differential equations have only one independent variable.
Partial differential equations have more than one independent variable.
The "order" of a differential equation is the highest derivative in
the equation.
The "degree" of a differential equation is the highest power to which
the unknown function or its derivatives appear.
The "dimension" of a differential equation is the number of
independent variables.
"initial value" problems have enough information to start
computing the solution from the initial point in every dimension.
"boundary value" problems have the solution, Dirichlet, values
given on the boundary of every dimension. The slope, Neumann,
values may also be provided.
Many specific names are given to differential equations:
"parabolic" "diffusion equation" "hyperbolic" "wave equation"
"elliptic" "Laplace's equation" "biharmonic equation" etc.
Given y=f(x) the first derivatives of f(x) may be written as
y' f' f'(x) df/dx .
For partial derivatives, given z=f(x,y) dz/dx may be written as fx,
dz/dy may be written as fy, d^3z/dx^2 dy may be written fxxy .
(and may use partial derivative symbol in place of d )
The Runge-Kutta method is a common way to compute an iterative solution,
initial value problem, to an ordinary differential equation.
Solving a system of linear equations is a common way to compute a
solution to both ordinary and partial differential equations.
Generally, boundary value problems.
The unknowns in the system of linear equations being used
to solve a differential equation are the values of the unknown
function at specific points, e.g. y(h), y(2h), y(3h), etc.
Given a fourth order ordinary differential equation with only
initial conditions, y'''' = f(x), in order to find a unique
numerical solution, values for c must be given for:
c1 = y(x0) c2 = y'(x0) c3 = y''(x0) c4 = y'''(x0)
and all the values must be at the same x0.
A method that might give reasonable results for the above equation can be:
x = x0
y = c1
yp = c2 yp is for y'
ypp = c3
yppp = c4
L: ypppp = f(x)
yppp = yppp + h * ypppp
ypp = ypp + h * yppp
yp = yp + h * ypp
y = y + h * yp this is the solution at x0+h
x = x + h loop to L: for more values.
could print or save x,y
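The loop above translates almost line for line into code. A Python sketch (march is my own helper name); with f(x) = 24 and all four initial values zero the exact solution is y = x^4, so y(1) should come out near 1 with O(h) error, since each stage is a first-order forward-Euler update:

```python
def march(f, c1, c2, c3, c4, x0, h, steps):
    """Crude forward march for y'''' = f(x) with initial values
    y(x0)=c1, y'(x0)=c2, y''(x0)=c3, y'''(x0)=c4 (the L: loop above)."""
    x, y, yp, ypp, yppp = x0, c1, c2, c3, c4
    for _ in range(steps):
        ypppp = f(x)
        yppp += h * ypppp
        ypp  += h * yppp
        yp   += h * ypp
        y    += h * yp        # solution estimate at x + h
        x    += h             # could print or save x, y here
    return x, y

# y'''' = 24 with zero initial conditions; exact solution is y = x^4
x, y = march(lambda x: 24.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 1000)
print("y(", x, ") =", y, " exact 1.0")
```

Note the deliberate update order: each derivative uses the freshly updated higher derivative, exactly as in the pseudocode.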
Much of Lectures 27 and 28 was from the source code:
pde3.c A second order partial differential equation with
boundary values, two independent variables, with equal
step size, using an iterative method.
pde3_eq.c same as pde3.c with the iteration replaced by a call
to simeq and the setup of the matrix in init_matrix.
pde3b.c A similar second order partial differential equation
with boundary values, three independent variables and
each independent variable can have a unique step size
and a unique number of points to be evaluated.
pde3b_eq.c same as pde3b.c with the iteration replaced by a call
to simeq and the setup of the matrix in init_matrix.
Notice that these programs were also provided in various other
languages and looked very similar.
The problem was provided in the comments and in the code.
These "solvers" had the solution programmed in so that the method
and the code could be tested. In general the solution is not
known. The solution is computed at some specific points. If the
solution is needed at other points, then interpolation is used.
These "solvers" were based on solving a partial differential
equation that was continuous and continuously differentiable.
There are many specialized solvers for specific partial differential
equations. These lectures just covered two of the many methods.
You have learned about modeling and simulation.
You have worked with physical world modeling and simulation
by numerical computation. This is a valuable skill in helping
your future employer reach good decisions, as shown in the
diagram from SIAM Review.
Multiple choice and short answer.
No help from other students, that is cheating!
Read exam carefully! You may think you saw that question
before. Dependent and independent are opposite.
"not" or "no" may have been added or deleted.
"All" and "some" are different.
Do not guess at an answer; you can look it up.
I want you to learn to read carefully so you will
do well at your first employment.
You must download quiz3.
Which one? last name a-j: a, last name k-r: b, last name s-z: c
cp /afs/umbc.edu/users/s/q/squire/pub/download/q3_455a.doc
cp /afs/umbc.edu/users/s/q/squire/pub/download/q3_455b.doc
cp /afs/umbc.edu/users/s/q/squire/pub/download/q3_455c.doc
Use Microsoft Word or libreoffice to edit.
Then on linux.gl.umbc.edu
submit cs455 quiz3 q3_455?.doc or .docx
Due December 20, 2021
A modified version of fem_50, a Matlab program to use
the Finite Element Method, FEM, to solve a specific
partial differential equation is applied to three very
small test cases with full printout to show the details
of one software implementation.
The differential equation is:
- d^2u(x,y)/dx^2 - d^2u(x,y)/dy^2 = f(x,y) or -Uxx(x,y) - Uyy(x,y) = F(x,y)
For testing, the solution is u(x,y)=1 + 2/11 x^2 + 3/11 y^2
and then f(x,y)= - 10/11
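That pairing of test solution and forcing function is easy to verify: uxx = 4/11 and uyy = 6/11, so -uxx - uyy = -10/11 everywhere. A quick numerical confirmation (laplace_rhs is my own helper name):

```python
# Test solution for fem_50d:  u(x,y) = 1 + (2/11)x^2 + (3/11)y^2
def u(x, y):
    return 1.0 + (2.0/11.0)*x*x + (3.0/11.0)*y*y

def laplace_rhs(x, y, h=1e-3):
    """-uxx - uyy via centered second differences (exact for quadratics
    up to roundoff)."""
    uxx = (u(x - h, y) - 2*u(x, y) + u(x + h, y)) / (h*h)
    uyy = (u(x, y - h) - 2*u(x, y) + u(x, y + h)) / (h*h)
    return -uxx - uyy

print(laplace_rhs(0.3, 0.7), "vs", -10.0/11.0)
```

This is the usual "manufactured solution" trick: pick u, derive f, then check the solver reproduces u.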
The modified fem_50d.m is also shown below
Case 1 used a triangularization with 6 triangles
7 nodes
6 boundary nodes
1 degree of freedom
The blue edges and dots are boundary.
The red edges and dots are internal, free.
The one free node is in the center, node number 7.
The full output is A7_fem_50d.out
The solution for the 7 nodes is at the end.
Input files are:
A7_coord.dat A7_elem3.dat A7_dirich.dat
The files A7_elem4.dat and A7_neum.dat exist and are empty.
The solution plot is:
Case 2 used a triangularization with 16 triangles
15 nodes
12 boundary nodes
3 degrees of freedom
The blue edges and dots are boundary.
The red edges and dots are internal, free.
The three free nodes are in the center, number 13, 14, 15
The full output is A15_fem_50d.out
The solution for the 15 nodes is at the end.
Input files are:
A15_coord.dat A15_elem3.dat A15_dirich.dat
The files A15_elem4.dat and A15_neum.dat exist and are empty.
The solution plot is:
Case 3 used a triangularization with 18 triangles
16 nodes
12 boundary nodes
4 degrees of freedom
The blue edges and dots are boundary.
The red edges and dots are internal, free.
The four free nodes are in the center, numbered 1, 2, 3, 4
The full output is A16_fem_50d.out
The solution for the 16 nodes is at the end.
Input files are:
A16_coord.dat A16_elem3.dat A16_dirich.dat
The files A16_elem4.dat and A16_neum.dat exist and are empty.
The solution plot is:
The original fem_50.m all in one file with lots of output added
%% fem_50d applies the finite element method to Laplace's equation.
function fem_50d
% input a prefix, e.g. A7_
% files read A7_coord.dat one x y pair per line, for nodes in order 1, 2, ...
% A7_elem3.dat three node numbers per line, triangles, any order
% function stima3 is applied to these coordinates
% A7_elem4.dat four node numbers per line, quadrilaterals
% function stima4 is applied to these coordinates
% A7_dirich.dat two node numbers per line, dirichlet boundary
% function u_d is applied to these coordinates
% A7_neum.dat two node numbers per line, neumann boundary
% function g is applied to these coordinates
%
% Discussion:
% FEM_50d is a set of MATLAB routines to apply the finite
% element method to solving Laplace's equation in an arbitrary
% region, using about 50 lines of MATLAB code.
%
% FEM_50d is partly a demonstration, to show how little it
% takes to implement the finite element method (at least using
% every possible MATLAB shortcut.) The user supplies datafiles
% that specify the geometry of the region and its arrangement
% into triangular and quadrilateral elements, and the location
% and type of the boundary conditions, which can be any mixture
% of Neumann and dirichlet.
%
% The unknown state variable U(x,y) is assumed to satisfy
% Laplace's equation:
% -Uxx(x,y) - Uyy(x,y) = F(x,y) in Omega
% with dirichlet boundary conditions
% U(x,y) = U_D(x,y) on Gamma_D
% and Neumann boundary conditions on the outward normal derivative:
% Un(x,y) = G(x,y) on Gamma_N
% If Gamma designates the boundary of the region Omega,
% then we presume that
% Gamma = Gamma_D + Gamma_N
% but the user is free to determine which boundary conditions to
% apply. Note, however, that the problem will generally be singular
% unless at least one dirichlet boundary condition is specified.
%
% The code uses piecewise linear basis functions for triangular elements,
% and piecewise isoparametric bilinear basis functions for quadrilateral
% elements.
%
% The user is required to supply a number of data files and MATLAB
% functions that specify the location of nodes, the grouping of nodes
% into elements, the location and value of boundary conditions, and
% the right hand side function in Laplace's equation. Note that the
% fact that the geometry is completely up to the user means that
% just about any two dimensional region can be handled, with arbitrary
% shape, including holes and islands.
%
% Modified:
% 29 March 2004
% 23 February 2008 JSS
% 3 March 2008 JSS
% Reference:
% Jochen Alberty, Carsten Carstensen, Stefan Funken,
% Remarks Around 50 Lines of MATLAB:
% Short Finite Element Implementation,
% Numerical Algorithms,
% Volume 20, pages 117-137, 1999.
%
clear
format compact
prename = input('enter prefix to file names: ', 's')
diary 'fem_50d.out'
disp([prename 'fem_50d.out'])
debug = input('enter debug level:0=none, 1=input and matrices, 2=assembly ')
%
% Read the nodal coordinate data file.
% ordered list of x y pairs, coordinates for nodes
% nodes must be sequentially numbered 1, 2, 3, ...
coordinates = load(strcat(prename,'coord.dat'));
if debug>0
disp([prename 'coord.dat ='])
disp(coordinates)
end % debug
%
% Read the triangular element data file.
% three integer node numbers per line
elements3 = load(strcat(prename,'elem3.dat'));
if debug>0
disp([prename 'elem3.dat ='])
disp(elements3)
end % debug
%
% Read the quadrilateral element data file.
% four integer node numbers per line
elements4 = load(strcat(prename,'elem4.dat'));
if debug>0
disp([prename 'elem4.dat ='])
disp(elements4)
end % debug
%
% Read the dirichlet boundary edge data file.
% two integer node numbers per line, function u_d sets values
% there must be at least one pair
% other boundary edges are set in neumann
dirichlet = load(strcat(prename,'dirich.dat'));
if debug>0
disp([prename 'dirich.dat ='])
disp(dirichlet)
end % debug
%
% Read the Neumann boundary edge data file.
% two integer node numbers per line, function g sets values
neumann = load(strcat(prename,'neum.dat'));
if debug>0
disp([prename 'neum.dat ='])
disp(neumann)
end % debug
A = sparse ( size(coordinates,1), size(coordinates,1) );
b = sparse ( size(coordinates,1), 1 );
%
% Assembly from triangles.
%
for j = 1 : size(elements3,1)
VV = coordinates(elements3(j,:),:);
MM = stima3(coordinates(elements3(j,:),:));
if debug>1
disp(['MM = stima3(VV) at j=' num2str(j) ' nodes(' ...
num2str(elements3(j,1)) ',' num2str(elements3(j,2)) ',' ...
num2str(elements3(j,3)) ')'])
disp('VV=')
disp(VV)
disp('MM=')
disp(MM)
end % debug
A(elements3(j,:),elements3(j,:)) = A(elements3(j,:),elements3(j,:)) + MM;
end
if debug>0
disp('assembly from triangles A=')
disp(A)
end % debug
%
% Assembly from quadrilaterals.
%
for j = 1 : size(elements4,1)
VV = coordinates(elements4(j,:),:);
MM = stima4(coordinates(elements4(j,:),:));
if debug>1
disp(['MM = stima4(VV) at j=' num2str(j) ' nodes(' ...
num2str(elements4(j,1)) ',' num2str(elements4(j,2)) ',' ...
num2str(elements4(j,3)) ',' num2str(elements4(j,4)) ')'])
disp('VV=')
disp(VV)
disp('MM=')
disp(MM)
end % debug
A(elements4(j,:),elements4(j,:)) = A(elements4(j,:),elements4(j,:)) + MM;
end
if debug>0
disp('assembly from triangles and rectangles A=')
disp(A)
end % debug
%
% Volume Forces from triangles.
%
for j = 1 : size(elements3,1)
b(elements3(j,:)) = b(elements3(j,:)) ...
+ det( [1,1,1; coordinates(elements3(j,:),:)'] ) * ...
f(sum(coordinates(elements3(j,:),:))/3)/6;
end
if debug>0
disp('forces from triangles b=')
disp(b)
end % debug
%
% Volume Forces from quadrilaterals.
%
for j = 1 : size(elements4,1)
b(elements4(j,:)) = b(elements4(j,:)) ...
+ det([1,1,1; coordinates(elements4(j,1:3),:)'] ) * ...
f(sum(coordinates(elements4(j,:),:))/4)/4;
end
if debug>0
disp('forces from triangles and rectangles b=')
disp(b)
end % debug
%
% Neumann conditions.
%
for j = 1 : size(neumann,1)
GG = norm(coordinates(neumann(j,1),:) - coordinates(neumann(j,2),:)) * ...
g(sum(coordinates(neumann(j,:),:))/2)/2;
if debug>2
disp(['Neumann at j=' num2str(j)])
disp(GG)
end % debug
b(neumann(j,:)) = b(neumann(j,:)) + GG;
end
if debug>0
disp('using g() add neumann conditions b=')
disp(b)
end % debug
%
% Determine which nodes are associated with dirichlet conditions.
% Assign the corresponding entries of U, and adjust the right hand side.
%
u = sparse ( size(coordinates,1), 1 );
BoundNodes = unique ( dirichlet );
if debug>1
disp('BoundNodes=')
disp(BoundNodes)
end % debug
u(BoundNodes) = u_d ( coordinates(BoundNodes,:) );
if debug>0
disp('using u_d() add dirichlet conditions u=')
disp(u)
end % debug
b = b - A * u;
if debug>0
disp('apply u to b, dirichlet conditions b=')
disp(b)
end % debug
%
% Compute the solution by solving A * u = b
% for the remaining unbound (free) values of u.
%
FreeNodes = setdiff ( 1:size(coordinates,1), BoundNodes );
u(FreeNodes) = A(FreeNodes,FreeNodes) \ b(FreeNodes);
if debug>0
disp('solve for A * u = b u=')
disp(u)
end % debug
%
% Graphic representation.
%
show ( elements3, elements4, coordinates, full(u), dirichlet );
%
% check solution
%
for i=1:size(coordinates,1)
disp([num2str(i) ' x= ' num2str(coordinates(i,1)) ' y= ' num2str(coordinates(i,2)) ...
' analytic solution= ' num2str(uana(coordinates(i,1),coordinates(i,2)))])
end
diary off
return % logical end of function fem_50d
%% uana computes boundary values and analytic solution for checking
% Discussion:
% This is generally unknown yet is used here for testing
% Parameters:
% Input, real pair xx,yy are x,y coordinates
% Output, value of solution at x,y
%
function ana = uana(xx, yy)
ana= 1.0+(2.0/11.0)*xx*xx+(3.0/11.0)*yy*yy;
end % uana
%% f evaluates the right hand side of Laplace's equation.
% Discussion:
% This routine must be changed by the user to reflect a particular problem.
% Parameters:
% Input, real U(N,M), contains the M-dimensional coordinates of N points.
% Output, VALUE(N), contains the value of the right hand side of Laplace's
% equation at each of the points.
function valuef = f ( uf )
valuef = ones ( size ( uf, 1 ), 1 );
valuef = valuef.*(-10/11.0);
end % f
%% g evaluates the outward normal values assigned at Neumann boundary conditions.
% Discussion:
% This routine must be changed by the user to reflect a particular problem.
% Parameters:
% Input, real U(N,M), contains the M-dimensional coordinates of N points.
% Output, VALUE(N), contains the value of outward normal at each point
% where a Neumann boundary condition is applied.
function valueg = g ( ug )
valueg = zeros ( size ( ug, 1 ), 1 );
end % g
%% u_d evaluates the dirichlet boundary conditions.
% Discussion:
% The user must supply the appropriate routine for a given problem
% Parameters:
% Input, real U(N,M), contains the M-dimensional coordinates of N points.
% Output, VALUE(N), contains the value of the dirichlet boundary
% condition at each point.
function valued = u_d ( ud )
valued = zeros ( size ( ud, 1 ), 1 );
for kk=1:size(ud,1)
% U(x,y) = 1 + 2/11 x^2 + 3/11 y^2 solution values on boundary
valued(kk)=uana(ud(kk,1),ud(kk,2));
end
end % u_d
%% STIMA3 determines the local stiffness matrix for a triangular element.
% Discussion:
%
% Although this routine is intended for 2D usage, the same formulas
% work for tetrahedral elements in 3D. The spatial dimension intended
% is determined implicitly, from the spatial dimension of the vertices.
% Parameters:
%
% Input, real VERTICES(1:(D+1),1:D), contains the D-dimensional
% coordinates of the vertices.
%
% Output, real M(1:(D+1),1:(D+1)), the local stiffness matrix
% for the element.
function M = stima3 ( vertices )
d = size ( vertices, 2 );
D_eta = [ ones(1,d+1); vertices' ] \ [ zeros(1,d); eye(d) ];
M = det ( [ ones(1,d+1); vertices' ] ) * D_eta * D_eta' / prod ( 1:d );
end % stima3
%% STIMA4 determines the local stiffness matrix for a quadrilateral element.
% Parameters:
% Input, real VERTICES(1:4,1:2), contains the coordinates of the vertices.
% Output, real M(1:4,1:4), the local stiffness matrix for the element.
function M = stima4 ( vertices )
D_Phi = [ vertices(2,:) - vertices(1,:); vertices(4,:) - vertices(1,:) ]';
B = inv ( D_Phi' * D_Phi );
C1 = [ 2, -2; -2, 2 ] * B(1,1) ...
+ [ 3, 0; 0, -3 ] * B(1,2) ...
+ [ 2, 1; 1, 2 ] * B(2,2);
C2 = [ -1, 1; 1, -1 ] * B(1,1) ...
+ [ -3, 0; 0, 3 ] * B(1,2) ...
+ [ -1, -2; -2, -1 ] * B(2,2);
M = det ( D_Phi ) * [ C1 C2; C2 C1 ] / 6;
end % stima4
%% SHOW displays the solution of the finite element computation.
% Parameters:
% Input, integer elements3(N3,3), the nodes that make up each triangle.
% Input, integer elements4(N4,4), the nodes that make up each quadrilateral.
% Input, real coordinates(N,1:2), the coordinates of each node.
% Input, real U(N), the finite element coefficients which represent the solution.
% There is one coefficient associated with each node.
function show ( elements3, elements4, coordinates, us, dirichlet )
figure(1)
hold off
%
% Display the information associated with triangular elements.
trisurf ( elements3, coordinates(:,1), coordinates(:,2), us' );
%
% Retain the previous image, and overlay the information associated
% with quadrilateral elements.
hold on
trisurf ( elements4, coordinates(:,1), coordinates(:,2), us' );
%
% Define the initial viewing angle.
%
view ( -67.5, 30 );
title ( 'Solution to the Problem' )
hold off
figure(2)
for kk=1:size(elements3,1)
plot([coordinates(elements3(kk,1),1) ...
coordinates(elements3(kk,2),1) ...
coordinates(elements3(kk,3),1) ...
coordinates(elements3(kk,1),1)], ...
[coordinates(elements3(kk,1),2) ...
coordinates(elements3(kk,2),2) ...
coordinates(elements3(kk,3),2) ...
coordinates(elements3(kk,1),2)],':r')
hold on
end
for kk=1:size(dirichlet,1)
plot([coordinates(dirichlet(kk,1),1) ...
coordinates(dirichlet(kk,2),1)], ...
[coordinates(dirichlet(kk,1),2) ...
coordinates(dirichlet(kk,2),2)],'-b')
hold on
end
title('triangularization')
xlabel('X')
ylabel('Y')
axis tight
axis square
grid off
hold on
xb=coordinates(BoundNodes,1);
yb=coordinates(BoundNodes,2);
plot(xb,yb,'.b')
hold on
xb=coordinates(FreeNodes,1);
yb=coordinates(FreeNodes,2);
plot(xb,yb,'.r')
hold off
end % show
end % fem_50d
The message here:
Thoroughly test any software you write or are about to use.
Use tools to generate tests when possible.
Generate a test case for a second order PDE with two independent variables.
Make the PDE general by having it be elliptic in some regions,
hyperbolic in some regions and passing through parabolic.
Note: This is for the set of solutions that are continuous and
continuously differentiable. There are many special solutions and
corresponding special solvers that should be used for special cases.
This lecture is about a general case and a general solver.
The solution will be u(x,y).
The notation for derivatives will be
uxx(x,y) second derivative of u(x,y) with respect to x
uxy(x,y) derivative of u(x,y) with respect to x and with respect to y
uyy(x,y) second derivative of u(x,y) with respect to y
ux(x,y) derivative of u(x,y) with respect to x
uy(x,y) derivative of u(x,y) with respect to y
We will create functions a1(x,y), b1(x,y), c1(x,y), d1(x,y), e1(x,y) and
f1(x,y).
The PDE we intend to solve, compactly, is:
a1 uxx + b1 uxy + c1 uyy + d1 ux + e1 uy + f1 u = c
The PDE written for computer processing is:
a1(x,y)*uxx(x,y) + b1(x,y)*uxy(x,y) + c1(x,y)*uyy(x,y) +
d1(x,y)*ux(x,y) + e1(x,y)*uy(x,y) + f1(x,y)*u(x,y) = c(x,y)
Yes, we could use algebra to reorganize the PDE, yet we will not.
We choose the functions a1, b1 and c1 such that the sign of
b1^2 - a1*c1 makes the PDE elliptic, hyperbolic and parabolic
in various regions.
a1(x,y) = exp(x/2)*exp(y)/2
b1(x,y) = 0.7/(x*x*y*y + 0.5)
c1(x,y) = (4 - exp(x) - exp(y/2))*2
then we plot b1^2 - a1 * c1 over the x-y plane.
abc.m MatLab to plot a1, b1, c1 and b1^2-a1*c1
Below the yellow plane is elliptic, above the yellow plane is
hyperbolic and on the yellow plane is parabolic. This has little
to do with the numerical solution of our PDE. For some classic
PDE's that stay within one region, there are specialized solutions.
Equations of mixed type:
If a PDE has coefficients which are not constant, as shown above,
it is possible that it will not belong to any of the three categories.
A simple but important example is the Euler-Tricomi equation
uxx = x uyy
which is called elliptic-hyperbolic because it is elliptic in the region
x < 0, hyperbolic in the region x > 0, and degenerate parabolic
on the line x = 0.
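The classification by the sign of the discriminant is mechanical. A Python sketch (pde_type is my own helper name; it uses the b1^2 - a1*c1 convention from above), applied to the Euler-Tricomi equation rewritten as uxx - x*uyy = 0, i.e. a1 = 1, b1 = 0, c1 = -x:

```python
def pde_type(a1, b1, c1, x, y):
    """Classify a1*uxx + b1*uxy + c1*uyy + ... at a point by the sign
    of the discriminant b1^2 - a1*c1 (the convention used above)."""
    d = b1(x, y)**2 - a1(x, y)*c1(x, y)
    return "hyperbolic" if d > 0 else ("elliptic" if d < 0 else "parabolic")

# Euler-Tricomi:  uxx = x uyy,  i.e.  uxx - x*uyy = 0
a1 = lambda x, y: 1.0
b1 = lambda x, y: 0.0
c1 = lambda x, y: -x
print(pde_type(a1, b1, c1, -1.0, 0.0))   # x < 0
print(pde_type(a1, b1, c1,  1.0, 0.0))   # x > 0
print(pde_type(a1, b1, c1,  0.0, 0.0))   # x = 0
```

Here the discriminant works out to x itself, reproducing the elliptic/hyperbolic/parabolic split stated above.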
The other functions for this test case were chosen as:
d1(x,y) = x^2 y
e1(x,y) = x y^2
f1(x,y) = 3x + 2y
Now we have to pick our solution function u(x,y).
Since we will be taking second derivatives, we need at least a
third degree solution in order to be interesting.
Out of the air, I chose:
u(x,y) = x^3 + 2y^3 + 3x^2 y + 4xy^2 + 5xy + 6x + 7y + 8
We do not want the solution symmetric in x and y.
We want a smattering of terms for testing.
I use 1, 2, 3, 4, ... to be sure each term is tested.
Now we have to compute the forcing function c(x,y)
Doing the computation by hand is too error prone.
I use Maple.
Maple computation of c(x,y) Look at this!
Maple work sheet, Maple output
c(x,y) = 0.5*exp(x/2.0)*exp(y)*(6.0*x+6.0*y) +
0.7*(6.0*x + 8.0*y + 5.0)/(x*x*y*y+0.5) +
(8.0 - 2.0*exp(x) - 2.0*exp(y/2.0))*(12.0*y + 8.0*x) +
x*x*y*(3.0*x*x + 6.0*x*y + 4.0*y*y + 5.0*y + 6.0) +
x*y*y*(6.0*y*y + 3.0*x*x + 8.0*x*y + 5.0*x +7.0) +
(3.0*x + 2.0*y)*(x^3 + 2y^3 + 3x^2 y + 4xy^2 + 5xy + 6x + 7y + 8)
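A symbolic result like this deserves its own test before it goes into a solver. A Python spot-check (the helper names lhs and c_maple are mine; note the d1*ux term uses the factor x*x*y, since d1(x,y) = x^2 y): evaluate the left side of the PDE with centered differences of u and compare with the formula above at an arbitrary point.

```python
import math

# Manufactured solution and coefficient functions from the text.
u  = lambda x, y: x**3 + 2*y**3 + 3*x**2*y + 4*x*y**2 + 5*x*y + 6*x + 7*y + 8
a1 = lambda x, y: math.exp(x/2)*math.exp(y)/2
b1 = lambda x, y: 0.7/(x*x*y*y + 0.5)
c1 = lambda x, y: (4 - math.exp(x) - math.exp(y/2))*2
d1 = lambda x, y: x*x*y
e1 = lambda x, y: x*y*y
f1 = lambda x, y: 3*x + 2*y

def lhs(x, y, h=1e-3):
    """Evaluate the PDE's left side using centered differences of u."""
    ux  = (u(x+h, y) - u(x-h, y)) / (2*h)
    uy  = (u(x, y+h) - u(x, y-h)) / (2*h)
    uxx = (u(x-h, y) - 2*u(x, y) + u(x+h, y)) / (h*h)
    uyy = (u(x, y-h) - 2*u(x, y) + u(x, y+h)) / (h*h)
    uxy = (u(x+h, y+h) - u(x+h, y-h) - u(x-h, y+h) + u(x-h, y-h)) / (4*h*h)
    return (a1(x,y)*uxx + b1(x,y)*uxy + c1(x,y)*uyy
            + d1(x,y)*ux + e1(x,y)*uy + f1(x,y)*u(x,y))

def c_maple(x, y):
    """The Maple-derived forcing function."""
    return (0.5*math.exp(x/2)*math.exp(y)*(6*x + 6*y)
            + 0.7*(6*x + 8*y + 5)/(x*x*y*y + 0.5)
            + (8 - 2*math.exp(x) - 2*math.exp(y/2))*(12*y + 8*x)
            + x*x*y*(3*x*x + 6*x*y + 4*y*y + 5*y + 6)
            + x*y*y*(6*y*y + 3*x*x + 8*x*y + 5*x + 7)
            + (3*x + 2*y)*u(x, y))

print(lhs(0.3, -0.4), "vs", c_maple(0.3, -0.4))
```

Agreement to several digits at a few random points gives good confidence that no term was dropped or mistyped in transcribing the Maple output.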
We have to code the function c(x,y) for our solver.
We have to code the function u(x,y) for our solver to compute
boundary values and we will use the function to check our solver.
The solution will be u(x,y) and the solver must be able to evaluate
this function on the boundary. The region will be
xmin <= x <= xmax
ymin <= y <= ymax
We will solve for nx points in the x dimension, ny points in y dimension with
hx=(xmax-xmin)/(nx-1) step size in x
hy=(ymax-ymin)/(ny-1) step size in y
We will compute the approximate solution at u(i,j) for the point
u(xmin+i*hx,ymin+j*hy). With subscripts running i=0..nx-1 and j=0..ny-1.
Known boundary values are at i=0, i=nx-1, j=0, and j=ny-1.
These subscripts are for "C" and Java, add one for Ada, Fortran and
MatLab subscripts.
The actual numeric values can be set at the last minute, yet we will
use xmin = ymin = -1.0 xmax = ymax = 1.0 nx = 7 ny = 7
Now, we have to generate the matrix that represents the system of
linear simultaneous equations for the unknown values of u(x,y)
at xmin+i*hx for i=1..nx-2 ymin+j*hy for j=1..ny-2
I am using solution points 1, 2, ... , nx-2 by
1, 2, ... , ny-2 stored in a matrix starting at ut[0][0] for coding in "C".
The solution points will be the same as used in pde3:
The PDE will be made discrete by using unknown values in difference
equations as given in nderiv.out or
computed by deriv.c, deriv.adb, deriv.java, deriv.py, deriv.scala, etc.
For this test case, I choose to use five point derivatives for
both first and second order. Note that the uxy term has the
first derivative with respect to x at five y values then
the first derivative of these values with respect to y.
Taking a term at a time from the PDE and writing the discrete version:
a1(x,y)*uxx(x,y) + b1(x,y)*uxy(x,y) + c1(x,y)*uyy(x,y) +
d1(x,y)*ux(x,y) + e1(x,y)*uy(x,y) + f1(x,y)*u(x,y) = c(x,y)
At (x,y) the discrete approximation of a1(x,y)*uxx(x,y) =
a1(x,y) * (1/(12*hx*hx)) *
(-1*u(x-2hx,y) +16*u(x-hx,y) -30*u(x,y) +16*u(x+hx,y) -1*u(x+2hx,y))
yet, we want to solve for u(i,j) in a matrix,
thus using x=xmin+i*hx and y=ymin+j*hy rewrite the above approximation as
a1(xmin+i*hx,ymin+j*hy) * (1/(12*hx*hx)) *
(-1*u(i-2,j) +16*u(i-1,j) -30*u(i,j) +16*u(i+1,j) -1*u(i+2,j))
At (x,y) c1(x,y)*uyy(x,y) = the discrete approximation
c1(x,y) * (1/(12*hy*hy)) *
(-1*u(x,y-2hy) +16*u(x,y-hy) -30*u(x,y) +16*u(x,y+hy) -1*u(x,y+2hy))
again, we want to solve for u(i,j) in a matrix,
thus using x=xmin+i*hx and y=ymin+j*hy rewrite the above approximation as
c1(xmin+i*hx,ymin+j*hy) * (1/(12*hy*hy)) *
(-1*u(i,j-2) +16*u(i,j-1) -30*u(i,j) +16*u(i,j+1) -1*u(i,j+2))
What happened to b1(x,y)*uxy(x,y) ?
Well that one is more difficult, thus we now use the combination of
the first derivative with respect to x, then the first derivative
with respect to y, 25 terms.
Writing all the terms using i and j
b1(xmin+i*hx,ymin+j*hy) * (1/(144*hx*hy)) *
( 1*u(i-2,j-2) -8*u(i-1,j-2) +0*u(i,j-2) +8*u(i+1,j-2) -1*u(i+2,j-2)
-8*u(i-2,j-1) +64*u(i-1,j-1) +0*u(i,j-1) -64*u(i+1,j-1) +8*u(i+2,j-1)
+0*u(i-2,j ) +0*u(i-1,j ) +0*u(i,j ) +0*u(i+1,j ) +0*u(i+2,j )
+8*u(i-2,j+1) -64*u(i-1,j+1) +0*u(i,j+1) +64*u(i+1,j+1) -8*u(i+2,j+1)
-1*u(i-2,j+2) +8*u(i-1,j+2) +0*u(i,j+2) -8*u(i+1,j+2) +1*u(i+2,j+2))
Just writing the remaining terms using i and j
using the discrete coefficients.
d1(xmin+i*hx,ymin+j*hy) * (1/(12*hx)) *
(1*u(i-2,j) -8*u(i-1,j) +0*u(i,j) +8*u(i+1,j) -1*u(i+2,j))
e1(xmin+i*hx,ymin+j*hy) * (1/(12*hy)) *
(1*u(i,j-2) -8*u(i,j-1) +0*u(i,j) +8*u(i,j+1) -1*u(i,j+2))
The u(x,y) term is just
f1(xmin+i*hx,ymin+j*hy)*u(i,j)
The right hand side of every approximation above is just
c(xmin+i*hx,ymin+j*hy)
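The 25-term uxy stencil is where transcription mistakes are easiest, so it is worth checking on a function with a known cross derivative. A Python sketch (uxy_25pt is my own helper name): the 25 weights are just the tensor product of the 5-point first-derivative weights (1, -8, 0, 8, -1) in x and in y, divided by 144*hx*hy.

```python
import math

w = [1.0, -8.0, 0.0, 8.0, -1.0]   # 5-point first-derivative weights (over 12h)

def uxy_25pt(u, x, y, hx, hy):
    """Cross derivative uxy via the 25-point tensor-product stencil."""
    s = 0.0
    for a in range(-2, 3):          # offset in x
        for b in range(-2, 3):      # offset in y
            s += w[a+2] * w[b+2] * u(x + a*hx, y + b*hy)
    return s / (144.0 * hx * hy)

u = lambda x, y: math.sin(x) * math.cos(y)   # exact uxy = -cos(x)*sin(y)
approx = uxy_25pt(u, 0.5, 0.3, 1e-2, 1e-2)
exact  = -math.cos(0.5) * math.sin(0.3)
print(approx, "vs", exact)
```

Reading the weights back out of this loop reproduces exactly the 5-by-5 table above, including the all-zero middle row and column.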
If any of the u(i+a,j+b) are boundary elements, we plug in the
numeric value of the boundary element, multiply by the
coefficient, and subtract the product from the right hand side.
Now we have many equations for u(i,j) using values for i and j,
and we must compute the coefficients for the set of equations
for i=2..nx-3 j=2..ny-3 the central case.
Remember, u(i=0,j), u(i=nx-1,j), u(i,j=0) and u(i,j=ny-1) are
known boundary values.
What about u(1,j), u(nx-2,j) for j=1..ny-2 and
u(i,1), u(i,ny-2) for i=2..nx-3 (do not use u(1,1) twice !!!)
(do not use u(nx-2,ny-2) twice)
Fortunately, the coefficients for the discrete derivatives can be
computed by deriv.c, deriv.adb, etc.
At (1,j) the discrete approximation of
a1(xmin+1*hx,ymin+j*hy)*uxx(xmin+1*hx,ymin+j*hy) =
a1(xmin+1*hx,ymin+j*hy) * (1/(12*hx*hx)) *
(11*u(i-1,j) -20*u(i,j) +6*u(i+1,j) +4*u(i+2,j) -1*u(i+3,j))
We could not use i-2 because it is outside our region, thus
note "deriv" is called with 'point' set to 1.
At (nx-2,j) the discrete approximation of
a1(xmin+(nx-2)*hx,ymin+j*hy)*uxx(xmin+(nx-2)*hx,ymin+j*hy) =
a1(xmin+(nx-2)*hx,ymin+j*hy) * (1/(12*hx*hx)) *
(-1*u(i-3,j) +4*u(i-2,j) +6*u(i-1,j) -20*u(i,j) +11*u(i+1,j))
We could not use i+2, that is (nx-2)+2, because it is outside our region, thus
note "deriv" is called with 'point' set to 3.
There is nothing special about uxx(i,1) or uxx(i,ny-2), use the general case.
At (i,1) the discrete approximation of c1(x,y)*uyy(x,y) =
c1(xmin+i*hx,ymin+1*hy) * (1/(12*hy*hy)) *
(11*u(i,j-1) -20*u(i,j) +6*u(i,j+1) +4*u(i,j+2) -1*u(i,j+3))
We could not use j-2 because it is outside our region, thus
note "deriv" is called with 'point' set to 1.
At (i,ny-2) the discrete approximation of c1(x,y)*uyy(x,y) =
c1(xmin+i*hx,ymin+(ny-2)*hy) * (1/(12*hy*hy)) *
(-1*u(i,j-3) +4*u(i,j-2) +6*u(i,j-1) -20*u(i,j) +11*u(i,j+1))
We could not use j+2, that is (ny-2)+2, because it is outside our region, thus
note "deriv" is called with 'point' set to 3.
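Tables of off-center coefficients like these are easy to mistype, so it pays to regenerate them mechanically, which is what deriv.c, deriv.adb, etc. do. A stand-in in Python (my own fd_weights, using exact rational arithmetic; not the course's deriv code) solves the little moment system sum_k w_k * o_k^m = m! for m equal to the derivative order, 0 otherwise:

```python
import math
from fractions import Fraction

def fd_weights(offsets, order):
    """Weights w so that sum(w[k]*f(x + offsets[k]*h)) ~= h**order * f^(order)(x).
    Solves sum_k w[k]*offsets[k]**m = order! * (m == order), m = 0..n-1,
    by Gauss-Jordan elimination in exact rational arithmetic."""
    n = len(offsets)
    A = [[Fraction(o)**m for o in offsets] +
         [Fraction(math.factorial(order) if m == order else 0)]
         for m in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                fac = A[r][col] / A[col][col]
                A[r] = [a - fac*b for a, b in zip(A[r], A[col])]
    return [A[k][n] / A[k][k] for k in range(n)]

# Second derivative one point in from the left edge ('point' set to 1):
print([w * 12 for w in fd_weights([-1, 0, 1, 2, 3], 2)])
# Centered second derivative for comparison:
print([w * 12 for w in fd_weights([-2, -1, 0, 1, 2], 2)])
```

Scaling by 12 exposes the familiar integer numerators, so any entry of a table like nderiv.out can be double-checked in seconds.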
There is nothing special about uyy(1,j) or uyy(nx-2,j), use the general case.
Oh, and yes, ux(x,y), uy(x,y) and uxy(x,y) also have to be shifted for the
"just inside the boundary" case. Think of how much easier it is using five
points rather than seven points. With seven points, the two rows and
columns inside the boundary are special cases.
Now we have enough equations to exactly compute the approximate solution.
We build a system of linear equations of the form:
| ut00 ut01 ut02 ut03 | | u(1,1) | | k0 |
| ut10 ut11 ut12 ut13 | * | u(2,1) | = | k1 |
| ut20 ut21 ut22 ut23 | | u(1,2) | | k2 |
| ut30 ut31 ut32 ut33 | | u(2,2) | | k3 |
We know the values at the boundary u(0,0), u(0,1), u(0,2), u(0,3)
u(1,0), u(1,3)
u(2,0), u(2,3)
u(3,0), u(3,1), u(3,2), u(3,3)
For this specific system of equations, nx=4, ny=4 and there are four
internal, non boundary, values to be found. The number of equations
will always be (nx-2)*(ny-2).
The value found for u(1,1) from solving the system of linear equations
is the desired solution at u(xmin+1*hx,ymin+1*hy). u(2,2) is the value of
u(xmin+2*hx,ymin+2*hy). Additional values, not on hx or hy steps,
of u(x,y) may be found by two dimensional interpolation.
The matrix "ut", used in the source code pde_abc_eq.c
is cleared to zero. Then each equation is used and the
coefficient of u(i,j) is added to the appropriate ut entry.
There will be (nx-2)*(ny-2) equations that must be used.
The i and j for these equations are for i=1..nx-2 for j=1..ny-2.
The PDE approximation above must be used for these i,j pairs.
Note that u(i,j), u(i-2,j) etc are variables and it is the
coefficients of these variables that get added into the "ut" matrix.
The first equation will add to entries in the first row of the matrix.
The second equation will add to entries in the second row of the matrix.
The boundary values are not stored in the ut matrix and using algebra,
a boundary value coefficient is multiplied by the boundary value and
subtracted from the constant term. This can cause many special cases
in the solver.
In general many entries in a row will not be changed from zero.
The final matrix will be in the class of a band matrix.
Now the gruesome work of evaluating the u(i,j) coefficients.
We cheat here. Now assume nx>4 and ny>5 and thus
i=2, j=3 is a general center case. From above, we have
a1(x,y)*uxx(x,y) + b1(x,y)*uxy(x,y) + c1(x,y)*uyy(x,y) +
d1(x,y)*ux(x,y) + e1(x,y)*uy(x,y) + f1(x,y)*u(x,y) = c(x,y)
converted to a discrete approximation for the central elements is
a1(xmin+i*hx,ymin+j*hy) * (1/(12*hx*hx)) *
(-1*u(i-2,j) +16*u(i-1,j) -30*u(i,j) +16*u(i+1,j) -1*u(i+2,j)) +
b1(xmin+i*hx,ymin+j*hy) * (1/(144*hx*hy)) *
( 1*u(i-2,j-2) -8*u(i-1,j-2) +0*u(i,j-2) +8*u(i+1,j-2) -1*u(i+2,j-2)
-8*u(i-2,j-1) +64*u(i-1,j-1) +0*u(i,j-1) -64*u(i+1,j-1) +8*u(i+2,j-1)
+0*u(i-2,j ) +0*u(i-1,j ) +0*u(i,j ) +0*u(i+1,j ) +0*u(i+2,j )
+8*u(i-2,j+1) -64*u(i-1,j+1) +0*u(i,j+1) +64*u(i+1,j+1) -8*u(i+2,j+1)
-1*u(i-2,j+2) +8*u(i-1,j+2) +0*u(i,j+2) -8*u(i+1,j+2) +1*u(i+2,j+2)) +
c1(xmin+i*hx,ymin+j*hy) * (1/(12*hy*hy)) *
(-1*u(i,j-2) +16*u(i,j-1) -30*u(i,j) +16*u(i,j+1) -1*u(i,j+2)) +
d1(xmin+i*hx,ymin+j*hy) * (1/(12*hx)) *
(1*u(i-2,j) -8*u(i-1,j) +0*u(i,j) +8*u(i+1,j) -1*u(i+2,j)) +
e1(xmin+i*hx,ymin+j*hy) * (1/(12*hy)) *
(1*u(i,j-2) -8*u(i,j-1) +0*u(i,j) +8*u(i,j+1) -1*u(i,j+2)) +
f1(xmin+i*hx,ymin+j*hy)*u(i,j) =
c(xmin+i*hx,ymin+j*hy)
Collecting terms for u(i,j) we compute the coefficient ct
a1(xmin+i*hx,ymin+j*hy) * (1/(12*hx*hx)) *(-30) +
b1(xmin+i*hx,ymin+j*hy) * (1/(144*hx*hy)) *(0) +
c1(xmin+i*hx,ymin+j*hy) * (1/(12*hy*hy)) *(-30) +
d1(xmin+i*hx,ymin+j*hy) * (1/(12*hx)) *(0) +
e1(xmin+i*hx,ymin+j*hy) * (1/(12*hy)) *(0) +
f1(xmin+i*hx,ymin+j*hy)
and the right hand side is c(xmin+i*hx,ymin+j*hy)
A bit of bizarre subscripting: assuming we are at i=2, j=3, this
is equation number ii=(i-1)+(nx-2)*(j-1), right?
Check: i=1,j=1 is equation 0 (if i or j equals 0, we have a boundary value)
i=2,j=1 is equation 1
...
i=nx-2,j=1 is equation nx-3
i=1,j=2 is equation nx-2
i=2,j=2 is equation nx-1
...
i=2,j=3 is equation (2-1)+(nx-2)*(3-1), the row in the ut matrix.
cs, the subscript for the right hand side, is cs=(nx-2)*(ny-2), stored in ut.
Thus we add ct to ut(ii,ii). The second subscript is where u(i,j) is.
Well, to tell the truth, we add each component of the sum in a loop,
directly into ut(ii,ii). If we had ct we would just store it in ut(ii,ii).
ut(ii,cs) is set to c(xmin+i*hx,ymin+j*hy)
We must, of course, compute all the coefficients, e.g. u(i-1,j), ctm1
a1(xmin+i*hx,ymin+j*hy) * (1/(12*hx*hx)) *(16) +
b1(xmin+i*hx,ymin+j*hy) * (1/(144*hx*hy)) *(0) +
c1(xmin+i*hx,ymin+j*hy) * (1/(12*hy*hy)) *(0) + term does not exist
d1(xmin+i*hx,ymin+j*hy) * (1/(12*hx)) *(-8) +
e1(xmin+i*hx,ymin+j*hy) * (1/(12*hy)) *(0) term does not exist
ij=((i-1)-1)+(nx-2)*(j-1) where u(i-1,j) is in the solution vector.
ut(ii,ij) = ctm1
We must, of course, compute all the coefficients, e.g. u(i-2,j), ctmm1
a1(xmin+i*hx,ymin+j*hy) * (1/(12*hx*hx)) *(-1) +
b1(xmin+i*hx,ymin+j*hy) * (1/(144*hx*hy)) *(0) +
c1(xmin+i*hx,ymin+j*hy) * (1/(12*hy*hy)) *(0) + term does not exist
d1(xmin+i*hx,ymin+j*hy) * (1/(12*hx)) *(1) +
e1(xmin+i*hx,ymin+j*hy) * (1/(12*hy)) *(0) term does not exist
ij=((i-2)-1)+(nx-2)*(j-1) where u(i-2,j) is in the solution vector.
ut(ii,ij) = ctmm1 Wrong if i is 2! u(0,j) is a boundary value, thus
ut(ii,cs) = ut(ii,cs) - ctmm1 * u(0,j)
And, one more coefficient, e.g. for u(i,j+1), ctp1
a1(xmin+i*hx,ymin+j*hy) * (1/(12*hx*hx)) *(0) + does not exist
b1(xmin+i*hx,ymin+j*hy) * (1/(144*hx*hy)) *(0) +
c1(xmin+i*hx,ymin+j*hy) * (1/(12*hy*hy)) *(16) +
d1(xmin+i*hx,ymin+j*hy) * (1/(12*hx)) *(0) + does not exist
e1(xmin+i*hx,ymin+j*hy) * (1/(12*hy)) *(8)
"C" and Java subscripting
ij=(i-1)+(nx-2)*((j+1)-1) when u(i,j+1) is in the solution vector.
ut(ii,ij) = ctp1
Ada, Fortran and MatLab subscripting
ij=(i-1)+(nx-2)*((j+1)-2) when u(i,j+1) is in the solution vector.
ut(ii,ij) = ctp1
where uc(ii) is the right hand side.
Yes, the code is nested four levels deep in iterations.
Now do u(1,j), u(nx-2,j) for j=1..ny-2 and
u(i,1), u(i,ny-2) for i=2..nx-3 (do not use u(1,1) twice !!!)
(do not use u(nx-2,ny-2) twice)
Once the "ut" matrix is initialized, solve the simultaneous equations
and print the answers. Now, wasn't that easy.
A crude layout for nx=9, ny=9 is shown below with subscripts
The i,j notation is for a known boundary value,
the number is the ii subscript of the unknown value inside the I====I
I ut I
I====I.
j
0 1 2 3 4 5 6 7 8 ny=9
+-----------------------------------------------------+
0 | 0,0 | 0,1 | 0,2 | 0,3 | 0,4 | 0,5 | 0,6 | 0,7 | 0,8 |
| I=========================================I |
1 | 1,0 I 0 | 7 | 14 | 21 | 28 | 35 | 42 I 1,8 |
| I-----------------------------------------I |
2 | 2,0 I 1 | 8 | 15 | 22 | 29 | 36 | 43 I 2,8 |
| I-----------------------------------------I |
3 | 3,0 I 2 | 9 | 16 | 23 | 30 | 37 | 44 I 3,8 |
| I-----------------------------------------I |
i 4 | 4,0 I 3 | 10 | 17 | 24 | 31 | 38 | 45 I 4,8 |
| I-----------------------------------------I |
5 | 5,0 I 4 | 11 | 18 | 25 | 32 | 39 | 46 I 5,8 |
| I-----------------------------------------I |
6 | 6,0 I 5 | 12 | 19 | 26 | 33 | 40 | 47 I 6,8 |
| I-----------------------------------------I |
7 | 7,0 I 6 | 13 | 20 | 27 | 34 | 41 | 48 I 7,8 |
| I=========================================I |
8 | 8,0 | 8,1 | 8,2 | 8,3 | 8,4 | 8,5 | 8,6 | 8,7 | 8,8 |
+-----------------------------------------------------+
nx=9
ut(ii) where ii = (i-1)+(nx-2)*(j-1)
These are "C" and Java subscripts.
j
1 2 3 4 5 6 7 8 9 ny=9
+-----------------------------------------------------+
1 | 1,1 | 1,2 | 1,3 | 1,4 | 1,5 | 1,6 | 1,7 | 1,8 | 1,9 |
| I=========================================I |
2 | 2,1 I 1 | 8 | 15 | 22 | 29 | 36 | 43 I 2,9 |
| I-----------------------------------------I |
3 | 3,1 I 2 | 9 | 16 | 23 | 30 | 37 | 44 I 3,9 |
| I-----------------------------------------I |
4 | 4,1 I 3 | 10 | 17 | 24 | 31 | 38 | 45 I 4,9 |
| I-----------------------------------------I |
i 5 | 5,1 I 4 | 11 | 18 | 25 | 32 | 39 | 46 I 5,9 |
| I-----------------------------------------I |
6 | 6,1 I 5 | 12 | 19 | 26 | 33 | 40 | 47 I 6,9 |
| I-----------------------------------------I |
7 | 7,1 I 6 | 13 | 20 | 27 | 34 | 41 | 48 I 7,9 |
| I-----------------------------------------I |
8 | 8,1 I 7 | 14 | 21 | 28 | 35 | 42 | 49 I 8,9 |
| I=========================================I |
9 | 9,1 | 9,2 | 9,3 | 9,4 | 9,5 | 9,6 | 9,7 | 9,8 | 9,9 |
+-----------------------------------------------------+
nx=9
ut(ii) where ii = (i-1)+(nx-2)*(j-2)
These are Ada, Fortran and Matlab subscripts.
A differential equation that is hyperbolic, elliptic and parabolic
in different regions, defined in the various languages' "abc" files.
The basic functions coded in "C" for this test case are:
abc.txt instructions for user
abc.h sample test case described above
abc.c implementation of test case
deriv.h discretization header file
deriv.c discretization
pde_abc_eq.c solver program
pde_abc_eq_c.out sample output
The basic functions coded in Fortran 90 for this test case are:
abc.f90 implementation of test case
deriv.f90 discretization
simeq.f90 solve simultaneous equations
pde_abc_eq.f90 solver program
pde_abc_eq_f90.out sample output
The basic functions coded in Java for this test case are:
abc.java sample test case described above
nderiv.java discretization
simeq.java solve simultaneous equations
pde_abc_eq.java solver program
pde_abc_eq_java.out sample output
The basic functions coded in Ada 95 for this test case are:
abc.ads sample test case described above
abc.adb implementation of test case
deriv.adb discretization
rderiv.adb discretization
simeq.adb solve simultaneous equations
pde_abc_eq.adb solver program
pde_abc_eq_ada.out sample output
Sparse matrix storage is needed for the system of linear equations
when nx or ny is significantly greater than the discretization
order, nd.
The same PDE's as above, using the definition in "abc" are
easily modified to use sparse matrix storage and a direct
sparse matrix solution to the linear equations.
The basic functions coded in "C" for this test case are:
abc.txt instructions for user
sparse.h sparse matrix header file
sparse.c sparse matrix for PDE's
deriv.h discretization header file
deriv.c discretization
abc.h sample test case described above
abc.c implementation of test case
sparse_abc.c sparse solver program
sparse_abc_c.out sample output
The basic functions coded in Java for this test case are:
abc.java sample test case described above
sparse.java sparse matrix for PDE's
nderiv.java discretization
sparse_abc.java solver program
sparse_abc_java.out sample output
The basic functions coded in Ada 95 for this test case are:
abc.ads sample test case described above
abc.adb implementation of test case
sparse.ads sparse matrix specification
sparse.adb sparse matrix for PDE's
deriv.adb discretization
sparse_abc.adb solver program
sparse_abc_ada.out sample output
Some challenging fourth order partial differential equations
including biharmonic equations
are covered in lecture 28d
Solving nonlinear PDE is covered in Lecture 31b.
The specific case for the nonlinear Navier Stokes Equations is
covered in Lecture 28b
For nonlinear PDE to be solved by solving a system of equations,
the simultaneous equations are nonlinear.
To solve a system of nonlinear equations reasonably efficiently,
use a Jacobian matrix and an iterative solution.
An example that can handle up to third order nonlinear systems of
equations is shown by using a Newton iteration based on the inverse
of the Jacobian:
x_next = x_prev - J^-1 * ( A * x_prev - Y)
simeq_newton3.java basic solver
test_simeq_newton3.java test program
test_simeq_newton3_java.out test output
on three test cases (minor heuristic used to build J)
Plotting Solutions
Plotting solutions can be easy with Linux and gnuplot.
To demonstrate, I have a program that generates a 2D solution
that needs a 3D plot. make_gnuplotdata.c
The data file, for this case, is gnuplot.dat
The input file to gnuplot is my2.plot
The shell script to make the plot is test2_gnuplot.sh
The plot is run by typing sh test2_gnuplot.sh
or from inside a program with test2_gnuplot.c
or from command line gnuplot -persist my2.plot
The result is the plot
A special case is tests for Laplace Equation
ΔU = 0 or ∇²U = 0 or Uxx(x,y) + Uyy(x,y) = 0 or
∂²U/∂x² + ∂²U/∂y² = 0
Many solutions are known, and thus Dirichlet boundary
conditions for test cases are available:
k>0 is any constant
U(x,y)=k
U(x,y)=k*x
U(x,y)=k*y
U(x,y)=k*x*y
U(x,y)=k*x^2 - k*y^2
U(x,y)=sin(k*x)*sinh(k*y)
U(x,y)=sin(k*x)*cosh(k*y)
U(x,y)=cos(k*x)*sinh(k*y)
U(x,y)=cos(k*x)*cosh(k*y)
U(x,y)=sin(k*x)*exp( k*y)
U(x,y)=sin(k*x)*exp(-k*y)
U(x,y)=cos(k*x)*exp( k*y)
U(x,y)=cos(k*x)*exp(-k*y)
r=sqrt(x^2+y^2)
th=atan2(y,x), principal value in -Pi..Pi
U(x,y)=r^k * sin(k*th) is discontinuous along the negative X axis
Using atan2 shifted into 0..2Pi is wrong!
It causes a discontinuous derivative along the positive X axis.
When the number of degrees of freedom gets large, the size of
the matrix for the system of simultaneous equations may get
larger than acceptable for your computer.
For example a four dimensional PDE with 10 DOF in each dimension
has 10^4 = 10,000 DOF. The non-sparse simultaneous equations
would require at least 10,000 by 10,000 entries, 10^8 entries times
8 bytes = 0.8GB.
Much larger PDEs need to be solved.
A sparse matrix stores only the nonzero entries.
A system of simultaneous equations does not have a unique
solution if any row is all zeros, thus a sparse structure
has at least one entry in each row. Then in each row,
the column index and value are stored for the nonzero
values. Typical sparse data structures use a singly linked
list for each row, keeping the column indices in increasing
numerical order.
Optimized code is typically written for inserting and
adding entries, solving simultaneous equations, vector
times matrix multiplication and a few other operations.
For various languages, the sparse packages are shown with
the initial test programs and several PDE solutions.
Some of the most difficult to solve PDE's have a
"degree" greater than one. These are nonlinear PDE's.
A specific problem, challenge if you prefer, is the
possibility of "non physical solutions" and
"bifurcating solutions" in addition to no possible solution.
A small change in the boundary conditions or a different
starting vector for the nonlinear solver can produce
different results with no way of knowing the result is
ambiguous.
A PDE having any product or ratio of U, Ux, Uxx, etc.
is nonlinear. In two dimensions Uxx*Uxx, Uxx*Uyy, etc.
Several nonlinear PDE of interest are:
Second order, third degree:
U^2 U'' + 2 U U' + 3U = f(x)
Reciprocal nonlinear:
D U' - E(x) U'/U^2 - F U'' = f(x)
An example that can handle up to third degree nonlinear systems of
equations is shown by using a Newton iteration based on the inverse
of the Jacobian:
x_next = x_prev - J^-1 * ( A * x_prev - Y)
The nonlinear PDE cannot be solved by simple application
of the Finite Element Method, covered in lecture 32.
For example, the above nonlinear PDE is not solved by:
source code fem_checknl_la.c, output fem_checknl_la_c.out
Using discretization with a nonlinear solver, such as simeq_newton5,
works accurately and efficiently.
There seem to be many variations of the Kuramoto-Sivashinsky
nonlinear, fourth order PDE, depending on the physical problem.
I have seen in various reference papers (mainly variations in the last term):
Ut + Uxxxx + Uxx + 1/2 (Ux)^2 = 0 and
∂U(x,t)/∂t + ∂^4U(x,t)/∂x^4 + ∂^2U(x,t)/∂x^2 - (∂U(x,t)/∂x)^2 = 0
In the figure below, read h as U, read y as t.
For nonlinear PDE to be solved by solving a system of equations,
the simultaneous equations are nonlinear.
To solve a system of nonlinear equations reasonably efficiently,
use a Jacobian matrix and an iterative solution and adaptive
solver such as simeq_newton5, available in a few languages.
Two dimensional nonlinear PDE
Solve (Uxx(x,y) + Uyy(x,y))^2 = F(x,y)
expand to
Uxx(x,y)*Uxx(x,y) + 2*Uxx(x,y)*Uyy(x,y) + Uyy(x,y)*Uyy(x,y) = F(x,y)
For pde_nl22 (Uxx(x,y)+Uyy(x,y))^2 = F(x,y)
2D grid for partial differential equation grid nx by ny
Unknown U, given boundary B, for this case nx=7, ny=6.
Zero based indexing, xmin=0.0, xmax=0.6, ymin=0.0, ymax=0.4
Does not need to be uniform, this grid has hx=0.1, hy=0.08
Uk=S(i,ii)=(i-1)*(ny-2)+(ii-1)
0.0 0.1 0.2 0.3 0.4 0.5 0.6 X
ii +------+------+------+------+------+------+------+ Y
5 | B0,5 : B1,5 : B2,5 : B3,5 : B4,5 : B5,5 | B6,5 |
+------+======+======+======+======+======+------+ 0.40
4 | B0,4 | U3 | U7 | U11 | U15 | U19 | B6,4 |
+------+------+------+------+------+------+------+ 0.32
3 | B0,3 | U2 | U6 | U10 | U14 | U18 | B6,3 |
+------+------+------+------+------+------+------+ 0.24
2 | B0,2 | U1 | U5 | U9 | U13 | U17 | B6,2 |
+------+------+------+------+------+------+------+ 0.16
1 | B0,1 | U0 | U4 | U8 | U12 | U16 | B6,1 |
+------+======+======+======+======+======+------+ 0.08
0 | B0,0 : B1,0 : B2,0 : B3,0 : B4,0 : B5,0 : B6,0 |
+------+------+------+------+------+------+------+ 0.0
0 1 2 3 4 5 6 i
Source code for solution using C, with output and debugging output
source code pde_nl22.c, output pde_nl22_c.out, output with debugging pde_nl22_c.ckout
files needed to compile and link
source code simeq_newton3.h, simeq_newton3.c, deriv.h, deriv.c, invert.h, invert.c
Source code for solution using Java, with output and debugging output
source code pde_nl22c.java, output pde_nl22c_java.out, output with debugging pde_nl22c_java.ckout
files needed to build
source code simeq_newton3.java, nuderiv.java, invert.java
Maple used to generate analytic solution for building test case.
Maple analytic solution pde_nl22_mws.out
Note the steady geometric convergence to the solution, from the output files:
solve non linear equations A * U = F
simeq_newton3 running
simeq_newton3 itr=0, prev=1e+12, residual=71032.2
simeq_newton3 itr=1, prev=71032.2, residual=17757.5
simeq_newton3 itr=2, prev=17757.5, residual=4438.85
simeq_newton3 itr=3, prev=4438.85, residual=1109.29
simeq_newton3 itr=4, prev=1109.29, residual=276.986
simeq_newton3 itr=5, prev=276.986, residual=68.9598
simeq_newton3 itr=6, prev=68.9598, residual=17.0073
simeq_newton3 itr=7, prev=17.0073, residual=4.08366
simeq_newton3 itr=8, prev=4.08366, residual=0.906544
simeq_newton3 itr=9, prev=0.906544, residual=0.164953
simeq_newton3 itr=10, prev=0.164953, residual=0.0220222
simeq_newton3 itr=11, prev=0.0220222, residual=0.00342033
simeq_newton3 itr=12, prev=0.00342033, residual=0.000446783
simeq_newton3 finished
Now we attack solving the PDE on multicore and multiprocessor
computer architectures.
The basic concept is to use worker tasks to do the parallel
computation and a master control task, the main procedure,
to keep the workers synchronized. These examples, in several
languages, use the barrier method of synchronization.
The first example is converting fem_check44_la.adb to
a parallel version. We developed psimeq, a parallel
simultaneous equations solver, in lecture 3b.
Thus we need to choose the number of processors, NP, to pass
to psimeq.
Then we need to parallelize the longest computation, creating
the stiffness matrix, as covered in lecture 32.
Care must be taken to ensure no worker task sets any variable that
might be modified by any other worker task or the master task.
Parallelizing Ada, FEM example
fem_check44_pla.adb, fem_check44_pla_ada.out
The other files needed to compile and execute include:
psimeq.ada
laphi.ads
laphi.adb
gaulegf.adb
real_arrays.ads
real_arrays.adb
Ada barrier software
Parallelizing Java, FEM example
As with the Ada example, we convert fem_check44_la to a parallel
version: choose the number of processors, NP, to pass to psimeq,
parallelize the stiffness matrix computation as covered in
lecture 32, and take care that no worker task sets any variable
that might be modified by any other worker task or the master task.
fem_check44_pla.java, fem_check44_pla_java.out
Now we attack solving the PDE on multicore and multiprocessor
computer architectures using multiple precision arithmetic.
Many examples use GMP, Gnu Multiple Precision.
The basic concept is to use worker tasks to do the parallel
computation and a master control task, the main procedure,
to keep the workers synchronized. These examples, in several
languages, use the barrier method of synchronization.
Using pthreads and gmp to solve large systems of
simultaneous equations.
mpf_tsimeq.h, mpf_tsimeq.c, time_mpf_tsimeqb.c, time_mpf_tsimeqb.out
A sample of using top -H
A brief introduction to the "Finite Element Method" FEM,
using triangles or tetrahedrons rather than a uniform grid.
The previous Galerkin Method galerkin.pdf
applies with some changes.
There are entire books on FEM and this just covers one small view.
Examples are shown below. The examples are for Degree 1 (linear),
with Order 1 through 4,
Dimension 2 (triangle) and Dimension 3 (tetrahedron)
Some specialty stuff that may be used later:
affine_map_tri.txt triangle mapping
test_affine_map.c test program
test_affine_map.out output
test_affine_map2.c test program
test_affine_map2.out output
test_affine_map.html Maple solution
test_affine_map.mw Maple worksheet
Given a set of coordinates and a boundary path, generate a set of
triangles that cover the interior. Four sets are shown below:
The programs are triangle.c and showme.c run using commands:
triangle A.poly generates file A.1.poly A.1.node A.1.ele
showme -p A.1.poly plots boundary, click on ele to plot triangles
A.poly initial input (simpler ones follow)
A.1.poly generated by program triangle
A.1.ele generated by program triangle
A.1.node generated by program triangle
triangle -D -a0.01 -q30 -p B.poly generates B.1.*
showme -p B.1.poly wait for menus at bottom, click on ele
B.poly initial input (square)
B.1.poly generated by program triangle
B.1.ele generated by program triangle
B.1.node generated by program triangle
triangle -D -a0.1 -q30 -p C.poly generates C.1.*
showme -p C.1.poly wait for menus at bottom, click on ele
C.poly initial input (triangle)
C.1.poly generated by program triangle
C.1.ele generated by program triangle
C.1.node generated by program triangle
triangle -D -a2.0 -q45 -p D.poly generates D.1.*
showme -p D.1.poly wait for menus at bottom, click on ele
D.poly initial input (channels)
D.1.poly generated by program triangle
D.1.ele generated by program triangle
D.1.node generated by program triangle
triangle.c source code
triangle.help documentation
triangle.readme sample commands
triangle.README README
showme.c source code
note: the C.1.node, C.1.ele, C.1.poly files can be used for fem_check below
as C.coord, C.tri, C.bound by deleting sequence numbers
and extra trailing data
C.coord from C.1.node
C.tri from C.1.ele
C.bound from C.1.poly
plot_fem_tri.c plots .coord,.tri,.bound
D.coord from D.1.node
D.tri from D.1.ele
D.bound from D.1.poly
The most basic geometry and several triangle splits:
22_t.coord 3 triangles, 4 vertices
22_t.tri 3 boundary, 1 free
22_t.bound
Any FEM group can be split, every triangle becomes 4 triangles that
are one fourth the area and congruent to the original triangles.
This cuts "h" the longest edge in half and should improve accuracy,
up to some limit, at the cost of longer execution time.
do_tri_split.c
22_ts.coord 12 triangles, 10 vertices
22_ts.tri 6 boundary, 4 free
22_ts.bound
22_tss.coord 48 triangles, 31 vertices
22_tss.tri 12 boundary, 19 free
22_tss.bound
The D.poly from above as original, split once, split again
Circles can be split into triangles:
circle_tri.coord, circle_tri.tri, circle_tri.bound, circle_tri.dat
Circles can be grided:
circle_grid.dat, circle_grid.java
Program files that produces the images above:
tri_input.java, tri_split.java, test_tri_split.java, circle_tri.java
The first example of FEM on triangles is first order in two dimensions.
fem_check21_tri.c source code
fem_check21_tri.out output 22_tri
fem_check21_tria.out output 22_t
fem_check21_tri_C.out output C
additional files needed for execution of the above:
phi_tri.h source code
phi_tri.c source code
phi_tri_cm.h source code
phi_tri_cm.c source code
phi_tri_pts.h source code
phi_tri_pts.c source code
A similar example in Java of FEM on triangles, first order in two dimensions.
This is an improved version of fem_check21.
fem_check21a_tri.java source code
fem_check21a_tri22_t.out output
fem_check21a_tri22_ts.out output
fem_check21a_tri22_tss.out output
additional files needed for execution of the above:
triquad.java source code
simeq.java source code
the tests for accuracy of triquad.java are:
test_triquad.java source code
test_triquad.out results, see last two sets
the tests for accuracy of triquad_int.c are:
triquad_int.c source code
test_triquad_int.c test code
In order to understand the complexity of the code,
the following figures show one vertex, V1, that is a
part of three triangles, T1, T2 and T3.
For vertex V1 there are three test functions, phi1, phi2 and phi3,
represented by blue for the phi1, yellow for phi2 and red for phi3.
These test functions are shown as linear "hat" functions, yet could
be higher order test functions.
The triangles are in the plane Z=0.0.
All three test functions are at Z=1.0 above the vertex of interest,
V1. All three test functions are at Z=0.0 for all other vertices
in the triangles that include vertex V1.
Looking down on the triangles, the three test functions can be seen
to cover the area of the three triangles T1, T2 and T3 with functions
phi1, phi2 and phi3.
The equation
is, in the general case, computed by numerical quadrature
of the linear operator, L, with the dependent variable and
its derivatives replaced by a test function and corresponding
derivatives of the test function.
For the triangulation shown above and for the first equation,
for vertex V1, there are three numerical quadratures that
are summed to get the integral as shown.
The first region, omega is T1 and the test function is phi1.
The second region, omega is T2 and the test function is phi2.
The third region, omega is T3 and the test function is phi3.
In the integral equation, the omega is the sum of T1, T2 and T3.
In the integral equation, the symbol φ1 is phi1 for T1,
phi2 for T2 and phi3 for T3.
In general, the global stiffness matrix is constructed by
processing one triangle at a time. The integral shown
above accumulates the sum as the three triangles containing
the vertex V1 are processed in some order. In general there
will be more than three triangles that contain a vertex.
A general outline of FEM for triangles is presented in
femtri.pdf
Second Order Basis Functions
There is a large variety of basis functions, test functions,
or "phi" functions using FEM terminology. For triangles,
a family of second order, 6 point, functions is:
tri_basis.h source code
tri_basis.c source code
test_tri_basis.c test code
test_tri_basis.out test output
plot_tri_basis.c plotting code
plot_tri_basis2.c plotting code
plot is dynamic with motion, run it yourself, type "n" for next function
Scaled Lobatto shape functions may be used in conjunction with
other basis functions.
lobatto_shape.h source code
lobatto_shape.c source code
plot_lobatto.c source code
Lobatto k-2 shape functions
basis functions
Solving Maxwell's equations for electric fields can be
very challenging. I have not been able to numerically
compute the discharge pattern shown below.
Maxwell's Equations
The numerical solution of Maxwell's Equations for electro-magnetic
fields may use a large four dimensional array with dimensions
X, Y, Z, T. Three spatial dimensions and time.
Phi functions for triangles.
Using basic Lagrange polynomials to get a value 1.0 at
the vertex of interest and a value of 0.0 at the other
two vertices.
A "C" implementation is shown by:
phi_tril.c, phi_tril.h
A Matlab implementation with resulting plots is:
phi_tril.m
Recent update.
There are many file formats that may be of interest.
Collected here are a tiny fraction of what is available.
Search the web for code to read and convert formats.
Adapt existing code to the language of your choice.
Without changing the basic UCD format,
the 4th coordinate can be provided along with
Dirichlet and Neumann boundary conditions.
pde_ucd4.inp 4D geometry
Basically, this is a study to see if the 1933 wind tunnel
measurements of airfoils for coefficient of lift CL and
coefficient of drag CD can be reproduced by computer
simulation.
2. Solutions of the form
alpha = Pi/omega
r = sqrt(x^2+y^2)
theta = atan2(y,x) 0..2Pi
U(x,y)=r^alpha * sin(alpha*theta)
PDE Uxx(x,y) + Uyy(x,y) = 0
boundary x -1..1, y -1..1
The red dot is the positive direction of the normal to the surface.
Plotted with light_normal.c to be sure all surfaces have normal
pointing out. This is needed to be able to determine if a
point is inside the volume and process Neumann boundary conditions.
poly43.txt
All 4th order derivatives in 3 dimensions.
pde_star3t.java
Used in pde_star3t.java.
We have arbitrarily scattered boundary points and DOF.
No structure is provided. The problem is to discretize
and build a system of equations, solve the equations,
and determine the values at the DOF.
You can prepare high quality math and engineering
papers using LaTeX, pronounced lah tech.
This is not intended to cover all or even most of LaTeX.
You will prepare a LaTeX file foo.tex
running LaTeX
On UMBC GL machines you have several options to get output:
latex foo.tex
latex foo.tex # this is needed for TOC, bibliography, etc
dvips -o foo.ps foo.dvi # compiling produces a .dvi and other files
lpr foo.ps # print high quality PostScript
# your browser can view foo.ps, lower quality print
ps2pdfwr foo.ps foo.pdf # optional to also produce a .pdf
pdflatex foo.tex # generate a .pdf
acroread foo.pdf # use acroread to view and print
evince foo.pdf # use evince to view and print
# your browser can view and print foo.pdf
and, many other auxiliary programs are available.
Use your favorite editor to create foo.tex
Best on-line reference is:
http://en.wikibooks.org/wiki/LaTeX
Getting LaTeX on your computer
Linux: sudo apt-get install texlive
MacOSX: get mactex
Windows: get protext or miktex
First example
Here is a simple, complete, .tex file
hello.tex, hello.ps, hello.pdf
% hello.tex absolute minimum to get output
\documentclass[12pt]{letter} % could be paper, article, report, book, slides ...
\begin{document} % start of document
Hello world from LaTeX.
\end{document} % end of document, nothing after this line
The Makefile commands to generate hello.ps and hello.pdf are:
latex hello.tex
latex hello.tex
dvips -o hello.ps hello.dvi
ps2pdfwr hello.ps hello.pdf
Note some syntax "\" and "{ }"
"\" starts a command, there are many.
You can define new commands to save on typing.
More commands can be brought in from packages
"{ }" are for grouping or enclosing. Usually REQUIRED!
e.g. \frac {numerator equation} {denominator equation}
Generally, arbitrary nesting is allowed, just use
lots of { { {} {} } { } }
one or more spaces count as a single space.
carriage return and tab are treated as a space.
lines are automatically formatted unless you interfere
No semicolons at end of lines or statements
Printing equations
equation.tex, equation.ps
% equation.tex just a few samples
\documentclass[12pt]{article}
\begin{document} % start of document
For subscript use underscore
$A_1 \; \; A_i \; \; A_{i,j} $ % \; for more spaces
For power, superscript use hat % $ your equation $
$x^2 \; \; x^{p+q} \; \; x^{y^z} $
${n \choose k} = \frac{n!}{k!(n-k)!}$
$z = \sqrt {x^2 + y^2}$
\[ \int _{0} ^{\infty} \frac {1}{{(x+1)}^2} dx \]
\( \sum _{j=1} ^{10} A_j \)
\end{document} % end of document, nothing after this line
printing matrices and vectors
Using \matrix
pdemat.tex, pdemat.ps, pdemat.pdf
% pdemat.tex
\documentclass[11pt,fleqn]{article} % preamble follows
\setlength{\textheight}{8.5in}
\setlength{\topmargin}{0.5in}
\setlength{\textwidth}{6.5in}
\setlength{\headheight}{0.0in}
\setlength{\headsep}{0.0in}
\setlength{\oddsidemargin}{0in}
\newcommand{\LL}{\mit L}
\begin{document} % start of document
System of simultaneous equations that are too wide for paper
$$\left| \matrix{ \int_{\Omega} \LL \left( \phi_1(x,y) \right) \phi_1(x,y) dx \, dy &
\int_{\Omega} \LL \left( \phi_2(x,y) \right) \phi_1(x,y) dx \, dy &
\ldots &
\int_{\Omega} \LL \left( \phi_{nxy}(x,y) \right) \phi_1(x,y) dx \, dy \cr
\int_{\Omega} \LL \left( \phi_1(x,y) \right) \phi_2(x,y) dx \, dy &
\int_{\Omega} \LL \left( \phi_2(x,y) \right) \phi_2(x,y) dx \, dy &
\ldots &
\int_{\Omega} \LL \left( \phi_{nxy}(x,y) \right) \phi_2(x,y) dx \, dy \cr
\ldots \cr
\int_{\Omega} \LL \left( \phi_1(x,y) \right) \phi_{nxy}(x,y) dx \, dy &
\int_{\Omega} \LL \left( \phi_2(x,y) \right) \phi_{nxy}(x,y) dx \, dy &
\ldots &
\int_{\Omega} \LL \left( \phi_{nxy}(x,y) \right) \phi_{nxy}(x,y) dx \, dy \cr
} \right|
\times
$$
$$ \left| \matrix{ U_1 \cr U_2 \cr \ldots \cr U_{nxy} \cr } \right|
=
\left| \matrix{ \int_{\Omega} f(x,y) \phi_1(x,y) dx \, dy \cr
\int_{\Omega} f(x,y) \phi_2(x,y) dx \, dy \cr
\ldots \cr
\int_{\Omega} f(x,y) \phi_{nxy}(x,y) dx \, dy \cr } \right|
$$
\end{document} % end of document, nothing after this line