Now, I lied when I said we were going to go over a coding example next, and then I forgot. Let's do the F distribution first and then we'll go over a coding example, because it's very closely related to what we just did in deriving the t distribution.

So I'm going to take a simple example, and I think you should be able to extend this to slightly more complex settings. Consider testing the hypothesis H0: K beta = 0, where K is a full row rank matrix: it has some number of rows, say r, and p columns, and its rank is r. Because we want to test a vector of restrictions, we can't use a univariate t distribution, so we need something like an F distribution.

Our natural estimator of K beta is K beta hat, which under our standard assumptions from this section is normally distributed with mean K beta and variance K (X^T X)^{-1} K^T times sigma squared.

Now I can construct a chi squared out of K beta hat very easily. Take K beta hat minus K beta, that is, the estimator minus its mean, transposed, times the inverse of its variance matrix, sigma squared K (X^T X)^{-1} K^T, all inverted, times K beta hat minus K beta again. That is clearly chi squared with rank(K) degrees of freedom. The reason is that, by virtue of K being a full row rank matrix, K (X^T X)^{-1} K^T is an invertible r by r matrix, where r is the number of rows of K, and so by the results from the first section of this chapter this quadratic form is exactly chi squared.

Furthermore, the only thing random in this chi squared is beta hat, and we know beta hat is independent of S squared. So what I can do is take this quantity, divide it by its degrees of freedom, rank(K), and then divide that by (n minus p) S squared over (n minus p) sigma squared. And I realize I made a mistake when I first wrote this: I forgot the sigma squared right there in the denominator.
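Since the coding example is coming later anyway, here is a quick numerical sketch of the first step, checking by simulation that the quadratic form built from K beta hat really behaves like a chi squared with rank(K) degrees of freedom. The design matrix, coefficients, seed, and choice of K are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n observations, p = 3 coefficients;
# K grabs the two non-intercept coefficients, so rank(K) = 2.
n, p = 50, 3
beta = np.array([1.0, 2.0, -0.5])
sigma = 1.5
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
K = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
r = np.linalg.matrix_rank(K)  # r = 2

XtX_inv = np.linalg.inv(X.T @ X)
# Inverse of the variance of K beta_hat: [sigma^2 K (X'X)^{-1} K']^{-1}
inv_var = np.linalg.inv(K @ XtX_inv @ K.T) / sigma**2

# Simulate the quadratic form (K beta_hat - K beta)' inv_var (K beta_hat - K beta)
reps = 20000
qf = np.empty(reps)
for i in range(reps):
    y = X @ beta + sigma * rng.normal(size=n)
    beta_hat = XtX_inv @ X.T @ y
    d = K @ beta_hat - K @ beta
    qf[i] = d @ inv_var @ d

# A chi squared with r degrees of freedom has mean r,
# so the simulated mean should be close to 2.
print(qf.mean())
```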
The reason I remembered the sigma squared is that when I divide these two things out, I have a chi squared with rank(K) degrees of freedom in the numerator and a chi squared with n minus p degrees of freedom in the denominator, and they're independent: in the numerator the only random thing is beta hat, and in the denominator S squared is just a function of the residuals e. So the numerator is independent of the denominator, and what I've done is take a chi squared divided by its degrees of freedom, then divide it by another, independent chi squared divided by its degrees of freedom. The sigma squareds of course cancel out, which is exactly why the sigma squared I forgot above had to be there, and the n minus p's cancel out as well. So this statistic right here, where I can just move the S squared into the denominator, is the F test for a general linear hypothesis. The clearest example would be a matrix K that grabs all of the coefficients except the intercept; that gives the standard F test that our linear regression software puts out. But I hope you can see that this is a pretty easy result to prove now that we have all this machinery under our belt.