Okay, now let's go ahead and do our example. The first thing I need to do is data(swiss). Now I'm going to fit Fertility as the outcome with all the other variables as predictors, and then let's look at the summary table. There are a couple of things I'd like you to look at here: the column of estimates, the column of standard errors, the column of t values, and the p values, which give the probability of getting a t statistic as large or larger in absolute value than the one observed.

Okay, so now let's recreate all of these things manually, every step. First, I'm going to create my design matrix, which has a column of ones as its first column, the intercept, and then all the columns of the swiss data set minus the first column, which is the Fertility outcome. The y is just the Fertility variable. The beta comes from solving the normal equations, beta = (X'X)^{-1} X'y. My residuals are just y - X beta, where X beta are my fitted values. Then I define n as the number of rows of x and p as the number of columns of x. Now I can estimate my residual variance, which is the sum of my squared residuals divided by n minus p, and because I'm defining s as the residual standard deviation, I take the square root of that quantity. Okay, so let me compare the s that I get versus the output from lm, and you can see that they're identical.

Now let's get the variance of the betas. Remember, the variance of beta is (X'X)^{-1} sigma squared, so the estimated variance is just (X'X)^{-1} times s squared. So here I have solve of X transpose X, times s squared. That's my beta variance. And then let's show that the standard errors agree with the output from lm. So what I'm going to do is get my summary table.
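The steps so far can be collected into a short R sketch. The variable names here (x, y, beta, e, s) are my own choices for illustration, not necessarily the exact ones typed in the video:

```r
# Load the swiss data and fit Fertility on all the other variables
data(swiss)
fit <- lm(Fertility ~ ., data = swiss)

# Design matrix: a column of ones (the intercept) plus the predictors
x <- cbind(1, as.matrix(swiss[, -1]))
y <- swiss$Fertility
n <- nrow(x)
p <- ncol(x)

# Solve the normal equations: beta = (X'X)^{-1} X'y
beta <- solve(t(x) %*% x, t(x) %*% y)

# Residuals, and the residual standard deviation on n - p degrees of freedom
e <- y - x %*% beta
s <- sqrt(sum(e^2) / (n - p))

# Compare with lm's residual standard error
c(s, summary(fit)$sigma)
```

The comparison in the last line should print the same number twice, matching what the lecture shows on screen.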
So if I take my summary table and do coefficients, it shows the full coefficient table, and if I grab just the second column, that's the standard errors. So I'm going to compare that column with the square root of the diagonal of the beta variance that I just calculated, and you can see that the two sets of numbers are identical. So the way I calculated my beta variance up here agrees exactly with what lm is doing; the standard error column of my summary table matches exactly.

Now the t statistic is just my estimated betas divided by their standard errors, which are the square roots of the diagonal of that variance-covariance matrix. Okay, now I'm just going to show you that the third column of the summary table is exactly equal to my t statistics calculated this way. So there you see them, and the numbers agree perfectly.

Now let's show that the p values agree. Here I don't have any new line of code to generate, because the p value is just, let me get this thing out of the way a little bit, twice, remember it's two times because it's a two-sided hypothesis test, twice the pt value, where I take the negative of the absolute value of the t statistic and give it n - p degrees of freedom. That just says I want the upper tail. Oops, I'm missing a parenthesis there. And so you see that these agree identically as well.

Now, let me show you that the F statistic that R puts out is identical to the F statistic that we derived in class. So here, let me define K; it's probably easier to just show you what K is. It's this matrix: a diagonal of ones with a column of zeros in front. So notice that if I were to multiply it by my beta vector, it would grab every element of beta except the intercept, and it's a full row rank matrix. Okay, and so the variance associated with K beta is just going to be this.
Well, I don't have the s squared there yet, but remember the formula; I'm just going to plug directly into it, and you can go back and check it against the final equation in the lecture notes, okay? So I'm going to define this K variance as K times the inverse of X transpose X times K transpose, and then my F statistic is going to be this quantity here. I want you to check this formula against the formula that I wrote down in the notes; I don't want to say it out loud because there are just too many variables to say out loud. So there's my F statistic. I've got some accidental commented code in there or something like that, okay, but there's my F statistic.

Now, let me show that the F statistic that I derived is the same as the summary F statistic. And you see the two agree. There's some funny little error message there because of a matrix conversion, but you can see that the two numbers are identical, so I'm not going to worry too much about that, because that's the main point. And then let me get the p value. There's my p value, 5.593 times 10 to the minus 10th. Just to show you, if I do summary, here's the p value for my F statistic, down at the end of the summary output.

So what we've done in this lecture is simply show that all of the elements that lm outputs are simple functions of a few basic distributional results. Basically, we showed how you can get chi-squared results from quadratic forms; we showed that the residuals are independent of the estimated coefficients; and then we used that to derive t distributions and F distributions. And from here on, if there was any contrast of interest that you wanted, for example a t test or an F test for beta 1 minus beta 2, you should be able to derive that directly, okay? So this is where all this stuff is coming from. It's not magical or even that hard. It's just a small set of basic distributional results.
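The F-statistic calculation can be sketched in R as follows; it is self-contained, and the names K, kVar, and fstat are mine. This uses the standard general-linear-hypothesis form F = (Kb)' [K(X'X)^{-1}K']^{-1} (Kb) / (q s^2), which for this K reproduces lm's overall F test:

```r
data(swiss)
fit <- lm(Fertility ~ ., data = swiss)
x <- cbind(1, as.matrix(swiss[, -1]))
y <- swiss$Fertility
n <- nrow(x)
p <- ncol(x)
beta <- solve(t(x) %*% x, t(x) %*% y)
s2 <- sum((y - x %*% beta)^2) / (n - p)

# K drops the intercept: a (p - 1) x p matrix, zeros in the first
# column and the identity elsewhere; full row rank
K <- cbind(0, diag(p - 1))

# Variance term for K beta, without the s^2 factor
kVar <- K %*% solve(t(x) %*% x) %*% t(K)

# F statistic for H0: K beta = 0, on (p - 1, n - p) degrees of freedom
fstat <- t(K %*% beta) %*% solve(kVar) %*% (K %*% beta) / ((p - 1) * s2)

# Upper-tail p value
pF <- pf(fstat, p - 1, n - p, lower.tail = FALSE)

# Compare with summary(fit)$fstatistic
c(fstat, summary(fit)$fstatistic["value"])
```

The small annoyance mentioned in the lecture is that fstat comes out as a 1 x 1 matrix rather than a plain number; wrapping it in as.numeric avoids the conversion complaint.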