Statistical Modelling Functions In R – Part 2

OK, let’s pick up right where Part 1 left off (please read Part 1 first if you haven’t already).

Polynomial functions: These are functions in which x appears several times, each time raised to a different power. They are useful for describing curves with humps, inflections or local maxima.

The code below draws four examples.


x <- seq(0, 10, 0.1)
y1 <- 2 + 5*x - 0.2*x^2
y2 <- 2 + 5*x - 0.4*x^2
y3 <- 2 + 4*x - 0.6*x^2 + 0.04*x^3
y4 <- 2 + 4*x + 2*x^2 - 0.6*x^3 + 0.04*x^4

par(mfrow = c(2, 2))
plot(x, y1, type = "l", ylab = "y", main = "decelerating")
plot(x, y2, type = "l", ylab = "y", main = "humped")
plot(x, y3, type = "l", ylab = "y", main = "inflection")
plot(x, y4, type = "l", ylab = "y", main = "local maximum")

Inverse polynomials are an important class of functions which are suitable for setting up
generalized linear models with gamma errors and inverse link functions:

1/y = a + b*x + c*x^2 + d*x^3 + … + z*x^n
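As a rough sketch of what such a model might look like in R (the data below are simulated purely for illustration, and the coefficient values are made up), you could fit an inverse polynomial with glm() using the Gamma family and its inverse link:

# simulate data whose reciprocal mean is quadratic in x, then fit a
# gamma-error GLM with the inverse link (the default for family = Gamma)
set.seed(1)
x <- seq(0, 10, 0.1)
mu <- 1/(0.05 + 0.02*x + 0.003*x^2)                # inverse-polynomial mean
y <- rgamma(length(x), shape = 10, rate = 10/mu)   # gamma noise around mu
model <- glm(y ~ x + I(x^2), family = Gamma(link = "inverse"))
summary(model)   # the coefficients estimate a, b and c on the 1/y scale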

Various shapes of function are produced, depending on the order of the polynomial (the
maximum power) and the signs of the parameters:

par(mfrow = c(2, 2))
y1 <- x/(2 + 5*x)
y2 <- 1/(x - 2 + 4/x)
y3 <- 1/(x^2 - 2 + 4/x)
plot(x, y1, type = "l", ylab = "y", main = "Michaelis-Menten")
plot(x, y2, type = "l", ylab = "y", main = "shallow hump")
plot(x, y3, type = "l", ylab = "y", main = "steep hump")


There are two ways of parameterizing the Michaelis–Menten equation:

y = a*x/(1 + b*x)      and      y = x/(c + d*x)

In the first case, the asymptotic value of y is a/b and in the second it is 1/d.
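A quick R sketch with made-up parameter values (a = 2, b = 0.5, c = 0.1, d = 0.25, chosen so that a/b = 1/d = 4) shows both forms levelling off at the same asymptote:

# illustrative values only: a/b = 2/0.5 = 4 and 1/d = 1/0.25 = 4
a <- 2; b <- 0.5; c <- 0.1; d <- 0.25
x <- seq(0, 50, 0.1)
plot(x, a*x/(1 + b*x), type = "l", ylab = "y")   # first parameterization
lines(x, x/(c + d*x), lty = 2)                   # second parameterization
abline(h = a/b, lty = 3)                         # common asymptote at y = 4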

Gamma function: The gamma function Γ(t) is an extension of the factorial function, t!, to positive real numbers:

Γ(t) = ∫₀^∞ x^(t−1) * e^(−x) dx

It looks like this:
t <- seq(0.2, 4, 0.01)
plot(t, gamma(t), type = "l")
abline(h = 1, lty = 2)

Note that Γ(t) is equal to 1 at both t = 1 and t = 2. For integer values of t, Γ(t + 1) = t!.
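A one-line check in R with the built-in gamma() and factorial() functions confirms this for small integers:

gamma(1:5 + 1)   # 1   2   6  24 120, i.e. gamma(t + 1) = t!
factorial(1:5)   # 1   2   6  24 120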

Asymptotic functions: By far the most commonly used asymptotic function is

y = a*x/(1 + b*x)

which has a different name in almost every scientific discipline. For example, in biochemistry it is called Michaelis–Menten, and shows reaction rate as a function of substrate concentration; in ecology it is called Holling’s disc equation and shows predator feeding rate as a function of prey density.
The other common function is the asymptotic exponential

y=a*(1−e^(−b*x))

This, too, is a two-parameter model, and in many cases the two functions would describe
data equally well.
Let’s look at the behaviour at the limits of our two asymptotic functions, starting with the asymptotic exponential.

For x=0 we have

y = a*(1 − e^(−b×0)) = a*(1 − e^0) = a*(1 − 1) = a×0 = 0

so the graph goes through the origin. At the other extreme, for x=∞, we have

y = a*(1 − e^(−b×∞)) = a*(1 − e^(−∞)) = a*(1 − 0) = a*1 = a

so the curve rises from the origin towards a horizontal asymptote.
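Here is a minimal R sketch of that curve, using illustrative values a = 5 and b = 0.5 (these are not taken from any dataset):

a <- 5; b <- 0.5                 # made-up parameter values
x <- seq(0, 10, 0.1)
plot(x, a*(1 - exp(-b*x)), type = "l", ylab = "y", main = "asymptotic exponential")
abline(h = a, lty = 2)           # the horizontal asymptote at y = a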

This demonstrates that the relationship is asymptotic, and that the asymptotic value of y is a. For the Michaelis–Menten equation, determining the behaviour at the limits is somewhat more difficult, because for x = ∞ we end up with y = ∞/∞, which you might imagine is always going to be 1, no matter what the values of a and b. In fact, there is a special mathematical rule for this case, called l’Hospital’s rule: when you get a ratio of infinity to infinity, you work out the ratio of the derivatives to obtain the behaviour at the limit.

For x = 0 the limit is easy:

y = a×0/(1 + b×0) = 0/(1 + 0) = 0/1 = 0

For x = ∞ we get

y = ∞/(1 + ∞) = ∞/∞.

The numerator is a*x so its derivative with respect to x is a. The denominator is 1+b*x so its derivative with respect to x is 0 +b = b.

So the ratio of the derivatives is a/b, and this is the asymptotic value of the Michaelis–Menten equation.
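You can check this numerically in R; with made-up values a = 2 and b = 0.5 the function creeps up towards a/b = 4 as x grows:

a <- 2; b <- 0.5
x <- c(10, 100, 1000, 10000)
a*x/(1 + b*x)   # 3.333333 3.921569 3.992016 3.999200, approaching a/b = 4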

Parameter estimation in asymptotic functions
There is no way of linearizing the asymptotic exponential model, so we must resort to nonlinear least squares (the nls() function) to estimate its parameter values. One of the advantages of the Michaelis–Menten function is that it is easy to linearize. We use the reciprocal transformation

1/y = (1 + b*x)/(a*x)

which, at first glance, isn’t a big help. But we can separate the terms on the right because
they have a common denominator. Then we can cancel the xs, like this:

1/y = 1/(a*x) + (b*x)/(a*x) = 1/(a*x) + b/a

If we simplify the above equation by putting Y = 1/y, X = 1/x, A = 1/a and C = b/a, we see that

Y = A*X + C

which is linear: C is the intercept and A is the slope. So to estimate the values of a and b
from data, we would transform both x and y to reciprocals, plot a graph of 1/y against 1/x,
carry out a linear regression, then back-transform, to get:

a= 1/A
b=a*C
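With a set of x and y values, this recipe is just a linear regression on the reciprocals. A minimal sketch (the data are simulated here, with made-up true values a = 400 and b = 4) might look like this:

# simulate Michaelis-Menten data with (made-up) true values a = 400, b = 4
set.seed(42)
x <- seq(0.1, 2, 0.1)
y <- 400*x/(1 + 4*x) * exp(rnorm(length(x), sd = 0.05))   # multiplicative noise

linfit <- lm(I(1/y) ~ I(1/x))   # regress 1/y on 1/x
A <- coef(linfit)[[2]]          # slope
C <- coef(linfit)[[1]]          # intercept
a <- 1/A                        # back-transform: a = 1/A
b <- a*C                        # back-transform: b = a*C
c(a = a, b = b)                 # should come out close to 400 and 4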

Suppose the graph passes through the two points (x1, y1) = (0.2, 44.44) and (x2, y2) = (0.6, 70.59). How do we work out the values of the parameters a and b?

First, we calculate the four reciprocals: 1/x1 = 5, 1/x2 = 1.667, 1/y1 = 0.0225 and 1/y2 = 0.01417. The slope of the linearized function, A, is the slope of the line joining the two reciprocal points:

A = (1/y2 − 1/y1)/(1/x2 − 1/x1) = (0.01417 − 0.0225)/(1.667 − 5) = 0.002500781

so a = 1/A = 1/0.0025 = 400. Now we rearrange the equation and use one of the points (say x = 0.2, y = 44.44) to get the value of b:

b = (1/x)*[(a*x/y) − 1] = (1/0.2)*[(400 × 0.2/44.44) − 1] ≈ 4
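The same arithmetic is easy to check directly in R:

x1 <- 0.2; y1 <- 44.44
x2 <- 0.6; y2 <- 70.59
A <- (1/y2 - 1/y1)/(1/x2 - 1/x1)   # slope of the linearized plot, 0.002500781
a <- 1/A                           # about 400
b <- (1/x1)*((a*x1/y1) - 1)        # about 4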

The remaining functions will be discussed in a later part.

If you have any doubts, please let me know by email at irrfankhann29@gmail.com or in the comments section.
