[MUSIC] All right, let's continue looking at this regime of very small step sizes. So what's going on here? Let's look at something which looks trivial at first sight. If I take some value of x, add some number h to it, and then subtract x again, I should get h back. So if I do these arithmetic manipulations in exact arithmetic and divide by h, I should get exactly 1. Now, if I do it on a computer in floating point arithmetic, things are not as rosy. Not all numbers have an exact floating point representation. So what happens is, when you evaluate x plus h, the result gets rounded to the nearest representable number. If h is large, then this roundoff error is negligible. But if the step size is small, it certainly is not negligible.

Again, here is an example in Python syntax; think of it as pseudocode if you will. Here I'm doing exactly what's written in mathematical notation above: I take (x + h) - x, divide by h, and I should get 1. But instead I get something which is certainly different from 1 if h is small enough. For larger h, say a step size of 10 to the -10, the error is of the order of 10 to the -8, which is already reasonably small. So we can expect that as the step size gets larger, this roundoff error becomes negligible.

Okay, so let's see how we can avoid this roundoff error. One way, I would say the way, is to make sure that all the numbers you use are exactly representable in floating point. So again, in Python pseudocode: you take your step of size h, it gets rounded, but then you recompute the actual step size as the difference (x + h) - x. Here x is exactly representable, so both the numerator and the denominator of your finite difference scheme are consistent with each other. The step size is not exactly h, but at least it's consistent, right? Let's see if this changes anything. In fact it does.
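To make the pseudocode concrete, here is a minimal runnable sketch in Python of both the naive computation and the consistent-step fix; the particular values x = 1/3 and h = 1e-15 are my own choices for illustration, not from the lecture.

```python
x = 1.0 / 3.0          # not exactly representable in binary floating point
h = 1e-15              # a very small step

# Naive: in exact arithmetic ((x + h) - x) / h == 1, but in floating
# point x + h is rounded, so the step we actually took differs from h.
naive = ((x + h) - x) / h
print(naive)           # noticeably different from 1.0

# The fix: recompute the step that was actually taken, so that the
# numerator and denominator of a difference scheme stay consistent.
temp = x + h
h_actual = temp - x    # the exactly representable step
consistent = (temp - x) / h_actual
print(consistent)      # exactly 1.0 by construction
```

The same recomputed step h_actual is what you would then feed into the denominator of a finite difference formula.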
So here on this plot, the squares are exactly the same squares from the previous plot, and the green line is the result of the routine with the recomputed, consistent step. You see that this actually changes the result quite significantly. Just by doing this manipulation, by ensuring that your numbers are exactly representable, you get an order of magnitude reduction in your error, just by being a little bit more careful with floating point numbers. That's lesson number three, I think, and it's the most important one. This is something you need to watch out for in pretty much any computing context: once you're subtracting nearly equal numbers or taking small steps, you need to watch out.

Okay, all right. For this particular problem, for these numerical derivatives, we do get an improvement. But there is still something going on: there is still this increase in the total error for smaller values of h. Let's see what can cause it. Consider the two-point forward difference scheme, and remember that we cannot evaluate the function f without roundoff errors. Let's denote the relative roundoff error in the evaluation of f by epsilon. This certainly cannot be better than the machine epsilon, which is 10 to the -16 in double precision. If we have this relative accuracy, then the absolute error in the numerator of the finite difference scheme cannot be smaller than epsilon times the absolute value of f. That's a very crude estimate of course, but it means the roundoff contribution to the total absolute error of the finite difference scales as epsilon times |f| over h, that is, as 1 over h. And that's just the roundoff error; on top of it we have the truncation error, the linearization error which we discussed previously, which scales as h. So these two sources of error combine into two terms, one scaling as 1 over h and one as h. One grows linearly and one decreases with increasing h, so their sum certainly has a minimum somewhere.
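To see the two competing error terms in practice, here is a small numerical experiment in Python; the test function sin, the sample point x = 1, and the grid of step sizes are assumptions of mine, not taken from the lecture.

```python
import math

def fd_forward(f, x, h):
    # two-point forward difference with the consistent-step trick:
    # the denominator is the step that was actually taken
    temp = x + h
    return (f(temp) - f(x)) / (temp - x)

x = 1.0
exact = math.cos(x)          # true derivative of sin at x = 1

errors = {}
for k in range(1, 13):
    h = 10.0 ** (-k)
    errors[h] = abs(fd_forward(math.sin, x, h) - exact)

# Shrinking h first reduces the truncation error (~ h), but below
# h ~ sqrt(eps) ~ 1e-8 the roundoff term (~ eps/h) takes over and
# the total error grows again.
for h, err in errors.items():
    print(f"h = {h:.0e}   error = {err:.2e}")
```

Printing the table shows the characteristic V shape of the total error as a function of h.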
The value of h where this minimum is achieved scales as the square root of the relative error of the function evaluations, and the value of the error at the minimum also scales like the square root of that relative accuracy. So if we compute our function in double precision, so that the relative accuracy is the machine epsilon, 10 to the -16, we predict that the forward difference scheme cannot achieve accuracy better than roughly 10 to the -8.

Now, what we've seen is that the central finite difference was capable of achieving better precision. So here is the slide we've seen. For the forward scheme, those are the blue points, we indeed have something which has a minimum around 10 to the -6. For the central scheme, we almost get down to 10 to the -10. And again, this is reasonably easy to see by similar arguments. If the truncation error scales as h squared while the roundoff error still scales as 1 over h, their combination has a minimum, and the value at the minimum is roughly epsilon to the power two-thirds, which is close to 10 to the -10. Also, we can afford larger steps in the central scheme than in the forward difference scheme.

So in general, if you do finite differences: first, you need to keep the numerator and the denominator consistent; this improves the estimates at small h. And then, if your scheme has order d, the optimal step scales as epsilon to the power 1 over (d + 1). So the higher the order of the scheme, the larger the steps it can take. And the best error you can achieve scales as epsilon to the power d over (d + 1), so it gets progressively better with increasing order of the scheme. Now, schemes of higher order: how can we construct them, and what are they? This is coming up in the next video. [MUSIC]
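The forward-versus-central comparison can be sketched numerically as well. Again this is my own illustration in Python, with sin at x = 1 as an assumed test case, checking the best achievable error of each scheme against the epsilon scalings above.

```python
import math

def fd_forward(f, x, h):
    # first-order scheme: truncation ~ h, optimal error ~ eps**(1/2)
    xp = x + h
    return (f(xp) - f(x)) / (xp - x)

def fd_central(f, x, h):
    # second-order scheme: truncation ~ h**2, optimal error ~ eps**(2/3)
    xp, xm = x + h, x - h
    return (f(xp) - f(xm)) / (xp - xm)

x, exact = 1.0, math.cos(1.0)
hs = [10.0 ** (-k) for k in range(1, 13)]

best_fwd = min(abs(fd_forward(math.sin, x, h) - exact) for h in hs)
best_ctr = min(abs(fd_central(math.sin, x, h) - exact) for h in hs)

eps = 2.0 ** -52   # double precision machine epsilon
print("forward: best error %.1e  (predicted ~ %.1e)" % (best_fwd, eps ** 0.5))
print("central: best error %.1e  (predicted ~ %.1e)" % (best_ctr, eps ** (2 / 3)))
```

For a scheme of order d, the same balancing argument gives an optimal step scaling as eps**(1 / (d + 1)) and a best error scaling as eps**(d / (d + 1)), which the two cases above (d = 1 and d = 2) instantiate.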