Matlab Tutorial 6: Analysis of Functions, Interpolation, Curve Fitting, Integrals and Differential Equations


In the previous example the zeros of a polynomial were determined with the command roots. Here, on the other hand, we have no polynomial but only a function, and we can instead use the command fzero. The fzero command uses an iterative algorithm. We must always supply an initial guess, and fzero then tries to locate a zero close to that guess.
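A minimal sketch of such a call, assuming the function file func.m shown further down on this page and an arbitrarily chosen initial guess of 0.5:

>> x0 = fzero(@func, 0.5)   % the guess 0.5 is an assumption; fzero searches for a zero near it

With a guess inside the interval of interest, this call should converge to the zero near 0.3423 discussed below.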

Assume we would like to find the zero in the interval [0, 1] in the example above. The function has an infinite number of zeros outside this interval. Our guess becomes:

>> x=-10:0.1:10;
>> plot(x,3*x.^3+2*x.^2-2*x+4), title('p(x)=3*x^3+2*x^2-2*x+4')
>> xlabel('x'), grid

They all produce the same zero!

>> p=[3 2 -2 4] % Represents the coefficients in p(x)

Our three guesses seem to share the property that the derivative of the function has the same sign at each of them, which is why the algorithm converges toward 0.3423 in every case. If we change the initial guess to the other side of the peak, fzero will give us a new answer (a different zero).

>> polyval(p,6), polyval(p,7), polyval(p, -5)
ans = 712, ans = 1117, ans = -311

Locating minima and maxima of functions

Let us try to find the local minima and maxima of the function func(x). The interval of interest is [-6, 0]. The algorithms are iterative. There are two methods to use: the first one determines x within a given interval, and the second one searches for x around an initial guess. To find a maximum we look for an x that minimizes the negated function -func(x).

fminbnd(func,x1,x2): Gives the x-value of a local minimum in the interval [x1, x2].
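A minimal sketch of this call for func(x), where the interval endpoints simply repeat the [-6, 0] mentioned above:

>> xmin = fminbnd(@func, -6, 0)   % x-value of a local minimum inside [-6, 0]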

>> C=1 % C is an integration constant.
>> Pder=polyder(p), Pint=polyint(p,C)
Pder =
9 4 -2
Pint =
0.7500 0.6667 -1.0000 4.0000 1.0000

As above, but we also get the y-value. The flag is a positive number if a minimum was located and a negative number otherwise. We store the information about the iterations in the variable info.
fminsearch(func, x0): Gives the x-value of a local minimum somewhere close to the initial guess x0.
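A sketch of these two forms (the output variable names and the initial guess -2 are assumptions):

>> [xmin, ymin, flag, info] = fminbnd(@func, -6, 0)   % also returns the y-value, an exit flag and iteration info
>> xmin2 = fminsearch(@func, -2)                      % searches for a local minimum near the guess -2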
Determine the global minimum and maximum that exist on the interval -8 to 0 for the function func(x). This gives:

>> q=[1 0] % Represents the coefficients in q(x) = x

This seems to be true for our function. If we want to find the maximum values, the same command can be used. The only difference is a minus sign in front of the function whenever we call the minimization command. The command looks for a minimum but will in fact locate the maximum because of the minus sign. See the sketch below!
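A minimal sketch of this negation trick, reusing fminbnd on the interval [-6, 0] (the anonymous-function wrapper and the variable names are assumptions):

>> [xmax, ymax_neg] = fminbnd(@(x) -func(x), -6, 0);   % minimize -func(x) to locate a maximum of func(x)
>> xmax, ymax = -ymax_neg                              % negate the returned value to recover the true maximum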

>> conv(p,q) % Performs a polynomial multiplication of p and q.
ans = 3 2 -2 4 0

Look below for the answer. Note that the y-value is negated. Compare the plot of func(x) with our result below.

>> roots(p) % Gives a vector with the zeros of the polynomial p(x).
ans =
-1.6022
0.4678 + 0.7832i
0.4678 - 0.7832i

Our maximum occurs at x = -1.8691, and the y-value becomes y = 5.0046. Do you consider this an accurate result?

Interpolation of data sets

Assume that we have made a number of measurements and from them guessed a function relating the input to the output. We then also have some knowledge of the values between the measured points. This is very common in sampled applications. Think of a continuous sine wave sampled with a very low sampling frequency. This can be simulated very simply if we use a vector x with low resolution. Let us create one, use the x-vector values to calculate the corresponding sine values, and plot the result in Figure 3.
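A minimal sketch of such a low-resolution sampling and plot (the range 0 to 10 and the step size 1 are assumptions):

>> x = 0:1:10;            % coarse sample points
>> y = sin(x);            % corresponding sine values
>> plot(x, y, 'o-'), title('Low resolution sine plot')
>> xlabel('x'), grid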

>> [t,n,the_rest]=residue(q,p) % The 3 output arguments from residue are the residues, the poles and the direct term.
t =
-0.1090
0.0545 - 0.0687i
0.0545 + 0.0687i
n =
-1.6022
0.4678 + 0.7832i
0.4678 - 0.7832i
the_rest =
[]
Low resolution sine plot

We could think of it as a rather badly shaped sine curve, probably due to a too long sampling period. In interpolation one determines a function P(x) that passes through certain known data points. The interpolated function can then be used to calculate an approximate function value for any x. Normally one uses polynomials, or functions built from polynomials (spline functions), for interpolation, but sine and cosine functions can also be useful.
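As a sketch of the idea, assuming the coarse vectors x and y from the sine plot above, interp1 can evaluate a spline interpolant on a finer grid (the step size 0.1 is an assumption):

>> xi = 0:0.1:10;                      % finer grid between the sample points
>> yi = interp1(x, y, xi, 'spline');   % spline interpolation through the known data points
>> plot(x, y, 'o', xi, yi, '-'), title('Spline interpolation of the sampled sine')
>> xlabel('x'), grid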
Now, let us see what happens when we try to fit a curve to 10 random numbers.

% The function func.m created by MatlabCorner.Com
function f=func(x)
f=3*sin(x)-2+cos(x.^2);
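One way to generate and plot such a data set at the command line (a sketch; the use of rand and plotting against the index 1 to 10 are assumptions, not taken from the original figure):

>> n = 1:10;              % data point indices
>> r = rand(1,10);        % 10 uniformly distributed random numbers
>> plot(n, r, 'o'), grid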

See the figure below. Investigate whether it is possible to find a polynomial that can be adjusted to the random data.
In the figure window, open Tools -> Basic Fitting. Mark the ninth-degree polynomial. This gives a polynomial that passes through all of the random data points. Basic Fitting uses the least-squares method. Even if we use a lower degree, the result is still the best solution of that degree in the least-squares sense.

Mark the small box 'Show equation'. We can then see the polynomial explicitly in the figure window. Remove the ninth-degree polynomial and choose a linear equation instead. Matlab will now try to find the best linear equation. Mark the small box 'Plot residuals'. The residual error between each data point and the linear equation is then calculated and shown. Despite the large discrepancies, this is the best first-order polynomial.
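The same least-squares fits can also be computed at the command line with polyfit and polyval (a sketch reusing the assumed vectors n and r from above; polyfit may warn that the ninth-degree fit is badly conditioned):

>> p9 = polyfit(n, r, 9);        % ninth-degree polynomial that passes through all 10 points
>> p1 = polyfit(n, r, 1);        % best straight line in the least-squares sense
>> res = r - polyval(p1, n);     % residuals between the data and the linear fit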

The error residual between each data point and the linear equation
