All right. Welcome back. In previous videos we looked at finite differences, including a method of constructing arbitrarily high-order schemes and arbitrarily high-order finite differences using the method of undetermined coefficients. In this video, I'd like to look at the same problem from a slightly different angle, and as before, I'm going to use numerical differentiation just as a simple illustration of general methods which have a much broader range of applicability.

Let me start in general terms. Suppose we have some quantity, call it Z, which depends on some parameter h, and we know that Z approaches its limit as a power of h, that is, Z(h) = Z* + k h^alpha, and we even know the exponent alpha. Now, suppose we can do computations at arbitrary non-zero values of h, but what we're actually interested in is the limiting value, call it Z*, at h equals zero. Then the question is, can we do the extrapolation? Can we compute the limiting value using only evaluations at finite values of the parameter?

That is fairly easy if we use the functional form. Let's see. I pick some fraction q, which is arbitrary, take some value of h, again arbitrary, and do two evaluations: one at h, and the other one at the step size which is q times smaller, so Z of h divided by q. So we have two computed values and two unknowns, Z* and k, and it's a simple matter of just expressing Z* in terms of what we computed. In the system of equations you multiply the second one by q to the power alpha, combine the terms, and what you get is Z* as a linear combination of the two things you've just computed. Remember, q you've picked yourself, and alpha you know from the functional form.

This is nice, but how is it useful? Let's apply this to finite differences, and consider the second-order finite-difference scheme for the first derivative. We'll look at the central scheme.
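As a quick sketch in code (the function names here are my own, not from the lecture): solving the two-equation system Z(h) = Z* + k h^alpha, Z(h/q) = Z* + k (h/q)^alpha for Z* gives Z* = (q^alpha Z(h/q) - Z(h)) / (q^alpha - 1).

```python
def richardson(z, h, q, alpha):
    """Estimate Z* = lim_{h->0} Z(h), assuming Z(h) = Z* + k*h**alpha.

    Uses two evaluations, at h and at h/q, to eliminate the unknown k:
        Z* = (q**alpha * Z(h/q) - Z(h)) / (q**alpha - 1)
    """
    qa = q ** alpha
    return (qa * z(h / q) - z(h)) / (qa - 1.0)

# Toy example: Z(h) = 1 + 3*h**2 has Z* = 1 and alpha = 2,
# so the extrapolation recovers 1 exactly (up to round-off).
z = lambda h: 1.0 + 3.0 * h**2
print(richardson(z, 0.1, 2.0, 2))
```

For this toy Z the model Z* + k h^alpha is exact, so any h and q give the exact limit; in practice there are higher-order corrections, which is what the rest of the lecture deals with.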
The limiting value at zero step is the target value of the derivative, that's Z* from the previous slide, and the leading correction is quadratic. In fact, if we keep more terms in the Taylor expansions, we'll see that odd-power terms vanish by symmetry, and so the corrections are even powers of h: the subleading correction is quartic, the next one is h to the power six, and so on.

All right. So let's use this idea of Richardson extrapolation. Notice that on these slides I'm using the subscript notation, Z_h instead of Z(h), just to make the formulas a little shorter. So we use the Richardson extrapolation idea to get rid of this leading correction. We do evaluations at the step sizes h and h divided by q, combine them together, and we have an estimate for the first derivative whose accuracy is of order h to the four: we've just cancelled out the quadratic correction. We can take this further and construct the next step of the extrapolation. We combine the Z_2 extrapolants, again at steps h and h divided by q, which involves evaluations of the function at steps h, h divided by q, and h divided by q squared, and that extrapolant will have the order h to the six, because we've cancelled out the quartic correction. In fact, we can carry on doing these sorts of computations up to any desired order. This will involve computing the finite differences at progressively smaller steps, so at some stage we'll need to stop, and that's a separate question.

Notice that if you just do the computations naively, order by order, then you are bound to recompute certain things over and over. So there is a question of how you organize your computations. The standard organization of computations for this two-term recursion is called Neville's algorithm, which is based on constructing, at least in your head, a triangular table of elements. Let's see. You start in the first row: you compute your second-order estimate. Then you go along the row.
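As a sketch in code, with the refinement factor q = 2 (function names are mine, not from the lecture): one Richardson step with alpha = 2 applied to the central difference cancels the h-squared term and leaves an estimate accurate to order h to the four.

```python
import math

def d1_central(f, x, h):
    """Second-order central difference: f'(x) + c2*h**2 + c4*h**4 + ..."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d1_extrapolated(f, x, h, q=2.0):
    """One Richardson step (alpha = 2): cancels the h**2 term, leaving O(h**4)."""
    return (q**2 * d1_central(f, x, h / q) - d1_central(f, x, h)) / (q**2 - 1.0)

# Compare errors for f = sin at x = 1, where f'(1) = cos(1).
h = 0.1
err_plain = abs(d1_central(math.sin, 1.0, h) - math.cos(1.0))
err_rich = abs(d1_extrapolated(math.sin, 1.0, h) - math.cos(1.0))
print(err_plain, err_rich)  # the extrapolated error is several orders of magnitude smaller
```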
You compute a finite-difference estimate for a twice smaller step, and once you've done that, you can move downwards along the column and compute the next-order Richardson extrapolant as a linear combination of the two things you've just computed. That concludes the second column. Moving on, you compute Z with a step of a quarter of the step size; that's again your second-order estimate. Once you've computed it, you can do one more fourth-order Richardson extrapolant, again as a linear combination. Then the next-order extrapolant you can get as a linear combination of the two things you've already computed. And you keep going along this table all the way until you stop.

But when do you stop? Let's see. Each row in this table contains successive estimates of the derivative at a given order, with progressively smaller steps. You know that when the step gets much too small, round-off error increases and spoils everything. So what you do is move along the row, keep track of the values, and call the difference between successive values the error of the estimate. When this error stops decreasing, that's when you stop: you call that value your best estimate of the result, and the difference between the best and the next best your error.