
Sunday, May 15, 2011

Differentiation with Big O notation

This is a follow-up to the post regarding Big O Notation for Calculus. You will need MathML enabled in order to see this post properly...

Contents

1 Weird Numbers
1.1 Slope
1.2 ε and √-1
1.3 Division by Zero?
1.4 Big O for the Bonus Parts
2 Derivative
2.1 Divide by Zero, and You Go To Hell!
2.2 Product Rule
2.3 Chain Rule

Consider some linear function

g(x) = mx + b (1.1)

for some nonzero real number m, and an arbitrary real number b. We can calculate the slope by considering

Δx ≠ 0 (1.2)

some constant "shift" in x, and using this to figure out the change in g

Δg(x) = g(x + Δx) - g(x). (1.3)

What is this? Well, we plug in the definition of g to find

Δg(x) = (m(x + Δx) + b) - (mx + b) (1.4)

which reduces to

Δg(x) = mΔx. (1.5)

Thus we may write the slope of g as

m = Δg(x)/Δx (1.6)

which is independent of both x and the choice of Δx.
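
As a quick numerical sanity check, here is a short Python sketch (the function name g and the sample values of m and b are my own choices, not from the post) confirming that the difference quotient of a line is the constant m, whatever x and Δx we pick:

    # Difference quotient of a linear function: always equals the slope m,
    # independent of the point x and of the (nonzero) shift dx.
    def g(x, m=3.0, b=-2.0):
        return m * x + b

    for x in (0.0, 1.5, -4.0):
        for dx in (1.0, 0.1, 0.001):
            print(x, dx, (g(x + dx) - g(x)) / dx)  # 3.0 every time, up to rounding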

Can we do this in general for a polynomial

f(x) = x^n (1.7)

for some n ∈ ℕ? Let us try! We consider some nonzero Δx term, and we write (for n > 1)

f(x + Δx) = (x + Δx)^n = x^n + n x^(n-1) Δx + (Δx)^2 (bonus parts) (1.8)

where the "bonus parts" are other stuff. Actually by the binomial theorem, it would have to be a polynomial in Δx with a nonzero constant term. This information is really encoded in

(Δx)^2 (bonus parts) = O((Δx)^2) (1.9)

where O() is a more rigorous way of saying "bonus parts at least quadratic in Δx." This gives us a more precise way to specify the error when writing out terms at most linear in Δx.
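
To make the meaning of O((Δx)^2) concrete, here is a small Python experiment (a sketch of my own, not from the original post): for f(x) = x^n, the "bonus parts" divided by (Δx)^2 should stay bounded as Δx shrinks, approaching the binomial coefficient n(n-1)/2 · x^(n-2).

    # The remainder of (x + dx)**n after the constant and linear terms
    # should be O((dx)**2): dividing by dx**2 stays bounded as dx -> 0.
    n, x = 5, 2.0

    def remainder(dx):
        return (x + dx)**n - x**n - n * x**(n - 1) * dx

    for dx in (1e-1, 1e-2, 1e-3):
        # Tends to n*(n-1)/2 * x**(n-2) = 80.0 for these sample values.
        print(dx, remainder(dx) / dx**2)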

Problem 1.1. What is Δx · O(Δx)? What is (Δx)^(-1) O((Δx)^2)?

We see that we are abusing notation and writing

h(x)Δx + (Δx)^2 (bonus parts) = h(x)Δx + O((Δx)^2) (1.10)

for some h(x). So by dividing through by Δx we obtain

h(x) + (Δx)(bonus parts) = h(x) + O(Δx). (1.11)

This implies

(Δx)^(-1) O((Δx)^2) = O(Δx) (1.12)

and similar reasoning suggests

(Δx) · O((Δx)^2) = O((Δx)^3). (1.13)

So let us go on with our considerations.

We then have

f(x + Δx) = f(x) + n x^(n-1) Δx + O((Δx)^2), (1.14)

where we were slick and noted the definition of f in order to plug it in. So, we can rewrite this as

f(x + Δx) - f(x) = n x^(n-1) Δx + O((Δx)^2) (1.15)

and we want to divide both sides by Δx. But we know how to do this now! First we will write

Δf(x) = f(x + Δx) - f(x) (1.16)

as shorthand, and rewrite our equation as

Δf(x) = n x^(n-1) Δx + O((Δx)^2). (1.17)

We divide both sides by Δx

Δf(x)/Δx = n x^(n-1) + O(Δx). (1.18)

But we have a problem that we didn't have before: the slope depends on Δx and x.

Historically, people noted that we were working with a term O((Δx)^2). If we could make that term equal to 0, then everything would work out nicely. How do we do this? Well, we formally invent a number ε and use it instead of a finite nonzero number Δx.

1.2 ε and √-1

We know that we have a "number" i satisfying

i^2 = -1. (1.19)

There is no real number which satisfies this, but we can "adjoin" i to ℝ. That is, we pretend that i is a variable satisfying equation (1.19), then we have polynomials of the form

p(x,y) = x + i y. (1.20)

Of course, we can formally multiply these polynomials together, and we end up with the number system ("ring") of complex numbers ℂ (we would have to prove that 1/i exists to make it a field).

Problem 1.2. Why do we not have higher order terms in i? That is, a general polynomial x + i y + i^2 z + ⋯?

Let's consider it. Suppose we did have

p(x,y) = x + i y + i^2 z. (1.21)

Then we plug in (1.19) to find

p(x,y) = x + i y + (-1)z (1.22)

which simplifies to merely

p(x,y) = (x - z) + i y. (1.23)

But this is precisely of the form we described: there is some term which is a multiple of i (the imaginary term) and another independent of i (the real term).
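
Python's built-in complex type makes this collapse easy to see (a tiny sketch of my own; the sample values are arbitrary):

    # Python's complex numbers obey 1j**2 == -1, so any "higher order"
    # term in i collapses, exactly as in (1.21)-(1.23).
    x, y, z = 5.0, 2.0, 3.0
    p = x + 1j * y + 1j**2 * z
    print(p)      # (2+2j): the real part is x - z, the imaginary part is y
    print(1j**2)  # (-1+0j)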

Let's consider a similar problem. We want a nonzero "number" ε which is the "smallest" number possible. What would this mean? Suppose we have a "small" finite number

0 < x < 1. (1.24)

Then we see the property specifying that x is small would be

0 < x^2 < x. (1.25)

But if we had the smallest such number, then, the general argument goes, we would expect

ε^2 = 0. (1.26)

We call such an ε an "infinitesimal" number. If we formally consider such an ε (i.e., pretend it exists and obeys this relationship), then we can run into some problems. For example: what is ε^(-1)?

1.3 Division by Zero?

The problem is: what is ε^(-1)? The answer is: we don't know.

However, why would ε ever be useful? We can consider

f(x) = x^n (1.27)

for some n ∈ ℕ. Then

f(x + ε) = (x + ε)^n (1.28)

can be simplified to what? Let's consider the n = 2 case:

(x + ε)^2 = x^2 + 2εx + ε^2. (1.29)

But the ε^2 term vanishes, so

(x + ε)^2 = x^2 + 2εx. (1.30)

We see that

(x + ε)^3 = (x^2 + 2εx)(x + ε) (1.31)

can be carried out as if it were polynomial multiplication. We then obtain

x^2(x + ε) + 2εx(x + ε) = x^3 + x^2 ε + 2εx^2 + 2ε^2 x (1.32)

and again, the ε^2 term vanishes. We thus obtain

(x + ε)^3 = x^3 + 3εx^2. (1.33)

Indeed the general pattern appears to be

(x + ε)^n = x^n + nε x^(n-1). (1.34)

We would like to write

(x + ε)^n - x^n = nε x^(n-1). (1.35)

Notice the difference this time: we don't have any O(ε^2) terms. The only price we paid is we cannot get rid of the factor ε.
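
Such an ε can even be modeled on a computer: "dual numbers" a + bε implement exactly the rule ε^2 = 0, and multiplying them out reproduces equation (1.34). Here is a minimal Python sketch (the class name Dual and its methods are my own, not anything from the post):

    # Dual numbers a + b*eps with the formal rule eps**2 = 0.
    class Dual:
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b  # represents a + b*eps

        def __add__(self, other):
            return Dual(self.a + other.a, self.b + other.b)

        def __mul__(self, other):
            # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps,
            # since the b1*b2*eps**2 cross term vanishes.
            return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

        def __repr__(self):
            return f"{self.a} + {self.b}*eps"

    x = Dual(2.0, 1.0)  # the "number" x + eps at x = 2
    print(x * x * x)    # 8.0 + 12.0*eps, i.e. x**3 + 3*x**2*eps, as in (1.33)

The coefficient of ε comes out to exactly n x^(n-1); this trick, under the name "dual numbers," is the basis of forward-mode automatic differentiation.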

1.4 Big O for the Bonus Parts

The take-home moral is that O() enables us to rigorously consider infinitesimals. How? Well, the most significant terms are written out explicitly, and the rest are swept under the rug with O(). For our example of

f(x) = x^n (1.36)

we saw we could write

f(x + Δx) - f(x) = n x^(n-1) Δx + O((Δx)^2) (1.37)

which tells us the error of "truncating," or cutting off the polynomial to be explicitly first order plus some change. This change we consider to be in effect "infinitesimal" in comparison to the Δx term.

2 Derivative

We still have these bonus parts when considering the slope. That is, for some nonzero Δx and arbitrary f(x), we have

f(x + Δx) - f(x) = h(x)Δx + O((Δx)^2) (2.1)

which gives us

Δf(x)/Δx = h(x) + O(Δx). (2.2)

We want to get rid of that Δx on the right hand side. How to do this?

Let's be absolutely clear before moving on. We want to consider the slope of our function f. To do this we considered a nonzero Δx, and then constructed

Δf(x) = f(x + Δx) - f(x). (2.3)

This function described the difference between the values of f at x + Δx and at x. So, to describe the rate of change we take

Δf(x)/Δx = h(x) + O(Δx). (2.4)

But we want to describe the instantaneous rate of change. Although this sounds scary, it really means we don't want to work with some extra parameter Δx. We want to consider the rate of change and describe it in such a way that it doesn't depend on Δx.

So what do we do? Well, the first answer is to set Δx to be 0. This is tempting, but wrong, because we end up with

(f(x + Δx) - f(x))/Δx → (f(x + 0) - f(x))/0 (2.5)

which is not well-defined. The second answer is to consider the limit Δx → 0, so we can avoid division-by-zero errors. This is better, and we write

lim_{Δx→0} Δf(x)/Δx = df(x)/dx (2.6)

following Leibniz's notation. This is the definition of the derivative of f.
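
Numerically, the limit is visible by shrinking Δx. A quick Python sketch (my own illustration, not from the post) for f(x) = x^3 at x = 2, where the derivative should be 3x^2 = 12:

    # The difference quotient of f(x) = x**3 at x = 2 approaches 12 as dx -> 0.
    f = lambda x: x**3
    x = 2.0
    for dx in (1e-1, 1e-3, 1e-6):
        print(dx, (f(x + dx) - f(x)) / dx)  # 12.61, 12.006..., 12.000006...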

2.1 Divide by Zero, and You Go To Hell!

Well, formally, we need to take the limit Δx → 0. What does that mean for the left hand side? Could we accidentally be dividing by Δx and get infinities? This is a problem we have to seriously consider.

The first claim is that

f(x + Δx) = f(x) + O(Δx). (2.7)

This would imply that

Δf(x)/Δx = h(x) + O(Δx) (2.8)

for some function h(x). There would be no division by zero errors, but still we have to prove that equation (2.7) is true in general, i.e. for every function f(x). We have seen it is true only for polynomials.

So, let us consider a function

F(x) = 1/x^n (2.9)

for some n ∈ ℕ. What to do? Well, let's consider what happens when x ↦ x + Δx, i.e., we change x to be x + Δx. We have

F(x + Δx) = 1/(x + Δx)^n (2.10)

by definition of F. We would expect then

ΔF(x) = F(x + Δx) - F(x) = 1/(x + Δx)^n - 1/x^n. (2.11)

What to do? Well, let's gather the terms together

ΔF(x) = x^n/(x^n (x + Δx)^n) - (x + Δx)^n/(x^n (x + Δx)^n) (2.12)

which we can do, since we multiply both terms by 1 (the first term by x^n/x^n, the second term by (x + Δx)^n/(x + Δx)^n). We can then add the fractions together

ΔF(x) = (x^n - (x + Δx)^n)/(x^n (x + Δx)^n) (2.13)

and consider expanding the numerator and denominator out. We see that to first order, we have

x^n - (x + Δx)^n = -n x^(n-1) Δx + O((Δx)^2) (2.14)

which shouldn't be surprising (we've proven this many times so far!). The denominator expands out to be

x^n (x + Δx)^n = x^n (x^n + O(Δx)) (2.15)

which, for nonzero x, cannot be made 0.

We combine these results to write

ΔF(x) = (-n x^(n-1) Δx + O((Δx)^2)) / (x^n (x^n + O(Δx))). (2.16)

We observe that we can factor out a Δx in the numerator (the upstairs part of the fraction) and then we can divide both sides by it:

ΔF(x)/Δx = (-n x^(n-1) + O(Δx)) / (x^n (x^n + O(Δx))). (2.17)

So what happens if we set Δx = 0 on the right hand side? Do we run into problems? Well, we run into problems on the left hand side, but not on the right hand side.

So what to do? Well, the formal mathematical procedure is to take the limit Δx → 0, which then lets us write

lim_{Δx→0} ΔF(x)/Δx = dF(x)/dx (2.18)

for the left hand side. For the right hand side, we can symbolically just set Δx = 0. This is sloppy, because it's not quite true. But this is what's done in practice. We get

lim_{Δx→0} (-n x^(n-1) + O(Δx)) / (x^n (x^n + O(Δx))) = -n x^(n-1) / (x^n (x^n + 0)). (2.19)

Observe that we can combine these results to write

dF(x)/dx = -n x^(n-1)/x^(2n) = -n x^(-n-1). (2.20)

There was no risk of dividing by zero anywhere.
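
A quick numerical check of equation (2.20) in Python (a sketch of my own; the sample values n = 3 and x = 1.5 are arbitrary):

    # dF/dx for F(x) = x**(-n) should be -n * x**(-n-1); compare a small
    # difference quotient against the closed-form answer from (2.20).
    n, x, dx = 3, 1.5, 1e-6
    F = lambda t: t**(-n)
    print((F(x + dx) - F(x)) / dx)  # approximately -0.59259
    print(-n * x**(-n - 1))         # -0.59259... exactly, by the formula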

2.2 Product Rule

Suppose we have two arbitrary functions f(x) and g(x). Let's define a new function

h(x) = f(x)g(x), (2.21)

then what's

Δh(x) =? (2.22)

I don't know, let us look. We see that we first pick some nonzero Δx and then consider

Δh(x) = h(x + Δx) - h(x). (2.23)

Now we plug this expression into equation (2.21), the equation where we defined h, and we find

h(x + Δx) - h(x) = f(x + Δx)g(x + Δx) - f(x)g(x). (2.24)

We do the following trick: add to both sides

0 = f(x)g(x + Δx) - f(x)g(x + Δx) (2.25)

and we obtain

Δh(x) = f(x + Δx)g(x + Δx) - f(x)g(x) + f(x)g(x + Δx) - f(x)g(x + Δx). (2.26)

We can gather terms together

Δh(x) = (f(x + Δx) - f(x))g(x + Δx) + f(x)(g(x + Δx) - g(x)) (2.27)

which simplifies to

Δh(x) = Δf(x) g(x + Δx) + f(x) Δg(x). (2.28)

As usual, we divide both sides by Δx

Δh(x)/Δx = (Δf(x)/Δx) g(x + Δx) + f(x) (Δg(x)/Δx). (2.29)

By taking the limit Δx → 0 we end up with

dh(x)/dx = (df(x)/dx) g(x) + f(x) (dg(x)/dx). (2.30)

Notice that we implicitly noted

lim_{Δx→0} g(x + Δx) = g(x). (2.31)

Of course, we assume that g is continuous at x, which turns out to be correct since differentiability implies continuity (we will prove this at some other time).

Theorem 2.1 (Product Rule). Let f, g be differentiable and

h(x) = f(x)g(x), (2.32)

then

dh(x)/dx = (df(x)/dx) g(x) + f(x) (dg(x)/dx) (2.33)

is the derivative.

We've already proven this. So let's consider an example.

f(x) = x^(n-1) (2.34)

where (n - 1) ∈ ℕ, and

g(x) = x. (2.35)

Thus

h(x) = x^n. (2.36)

The claim is that

dh(x)/dx = n x^(n-1). (2.37)

Is this surprising? No, but the surprising part is that it is a consequence of the product rule. How to prove this? Well, we need to do it by induction on n.

Base case (n = 2): we see that

f(x) = x (2.38)

and we can see immediately that

dh(x)/dx = (df(x)/dx) g(x) + f(x) (dg(x)/dx) = 1·x + x·1 = 2x. (2.39)

So this proves the base case.

Inductive hypothesis: suppose this works for n, i.e., d(x^n)/dx = n x^(n-1).

Inductive case: for n + 1, we have

dh(x)/dx = (d(x^(n-1)·x)/dx) g(x) + x^n (dg(x)/dx) (2.40)

Observe we can consider the first term and apply the product rule again, together with the inductive hypothesis:

(d(x^(n-1)·x)/dx) g(x) = (d(x^(n-1))/dx) x g(x) + x^(n-1) (d(x)/dx) g(x) (2.41)

which is then

(d(x^(n-1)·x)/dx) g(x) = (n - 1) x^(n-2) · x · g(x) + x^(n-1) g(x) = n x^(n-1) g(x). (2.42)

The second term (recall g(x) = x) is simpler:

x^n (dg(x)/dx) = x^n. (2.43)

We add both of these together to find

dh(x)/dx = n x^n + x^n = (n + 1) x^n. (2.44)

But this is precisely what we wanted! And that concludes the inductive proof.
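
The product rule is also easy to spot-check numerically; here is a short Python sketch (the sample functions f(x) = x^2 and g(x) = sin(x) are my own choices, not from the post):

    import math

    # Compare a difference quotient of h = f*g against f'*g + f*g'.
    f, df = lambda x: x**2, lambda x: 2 * x
    g, dg = math.sin, math.cos
    x, dx = 0.7, 1e-6

    h = lambda t: f(t) * g(t)
    print((h(x + dx) - h(x)) / dx)      # difference quotient of h
    print(df(x) * g(x) + f(x) * dg(x))  # product rule; agrees to ~6 decimals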

2.3 Chain Rule

We can combine functions together through composition. This looks like

h(x) = g(f(x)). (2.45)

The question is: what's the derivative (rate of change) of h in terms of the derivatives of g and f?

Here we really take advantage of big-O notation. Observe for some nonzero Δx we have

h(x + Δx) = g(f(x + Δx)) (2.46)

but we argued that

f(x + Δx) = f(x) + F(x)Δx + O((Δx)^2). (2.47)

Let's plug this in (here F(x) is just a name for the coefficient of the term linear in Δx):

h(x + Δx) = g(f(x) + F(x)Δx + O((Δx)^2)). (2.48)

So we conclude that

Δh(x) = g(f(x) + F(x)Δx + O((Δx)^2)) - g(f(x)). (2.49)

We can divide both sides by Δx simply

Δh(x)/Δx = (g(f(x) + F(x)Δx + O((Δx)^2)) - g(f(x)))/Δx. (2.50)

Now what to do?

Well, we can do the following trick: multiply both sides by

1 = Δf(x)/Δf(x). (2.51)

This would give us

Δh(x)/Δx = [(g(f(x) + F(x)Δx + O((Δx)^2)) - g(f(x)))/Δf(x)] · [Δf(x)/Δx]. (2.52)

But what is Δf(x)? We recall equation (2.47) and write

Δf(x) = f(x + Δx) - f(x) = F(x)Δx + O((Δx)^2). (2.53)

Using this, we can simplify our equation

Δh(x)/Δx = [(g(f(x) + Δf(x)) - g(f(x)))/Δf(x)] · [Δf(x)/Δx]. (2.54)

Observe that we may take the limit as Δx → 0, which gives us

dh(x)/dx = (dg(f(x))/df(x)) · (df(x)/dx) (2.55)

which intuitively looks like fractions cancelling out to give the right answer. Although this is the intuitive idea, DO NOT cancel terms!

Moreover, we should really clarify what is meant by

dg(f(x))/df(x). (2.56)

Let us first consider

y = f(x). (2.57)

Then really

dg(f(x))/df(x) = dg(y)/dy (2.58)

describes what we should do. Namely, first take the derivative of g and then evaluate it at y = f(x).

Theorem 2.2. Let f be differentiable at x, let g be differentiable at f(x), and let

h(x) = g(f(x)). (2.59)

Then

dh(x)/dx = (dg(f(x))/df(x)) · (df(x)/dx) (2.60)

describes the derivative of h at x.
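
As with the product rule, a numerical spot-check in Python (again a sketch of my own, with the sample choices f(x) = x^2 and g(y) = sin(y)):

    import math

    # Compare a difference quotient of h(x) = g(f(x)) against g'(f(x)) * f'(x).
    f, df = lambda x: x**2, lambda x: 2 * x
    g, dg = math.sin, math.cos
    x, dx = 0.7, 1e-6

    h = lambda t: g(f(t))
    print((h(x + dx) - h(x)) / dx)  # difference quotient of h
    print(dg(f(x)) * df(x))         # chain rule; agrees to ~6 decimals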

Again, we also proved this, which concludes this post.
