When we generate samples from some random variable in Python, these samples are independent of each other. But it is also possible to generate dependent random variables, for example, correlated normal random variables. This can be done using a special function, numpy.random.multivariate_normal. Let us use this function to see how correlation affects the joint distribution of random variables. Again, I need to draw pictures, so I do the usual plotting imports and import numpy as np.

Now we will use multivariate_normal to generate correlated, normally distributed random variables. First of all, let us define several variables: variance 1 equals 1, variance 2 equals 1, and the covariance equals 0.5. Then we have to create the covariance matrix. It is the following matrix: we have variance 1 and the covariance in the first row, and the covariance and variance 2 in the second row. So you see that we have the variances of our random variables on the diagonal of this matrix and the covariance in the off-diagonal elements. Now we can call multivariate_normal. We have to specify the expected value of each random variable, for example 0 and 0, and then the covariance matrix. I will also specify the size of the sample; let us begin with 10.

You see that this function generated a two-dimensional array. It is basically a table: the first column of this table holds the values of the first random variable, and the second column the values of the second random variable. Now, to visualize these random variables and the relation between them, we want to use a scatter plot. So I say that this is our data, and then use plt.scatter to visualize it. To get the first column of this matrix, I have to use the following slice. The second index here selects the column, and the first index is a slice from start to end, which corresponds to the rows. So this indexing basically says that we want the elements from all rows, but only from the zeroth column of our two-dimensional NumPy array. These are the values of the first variable that we created.
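The steps above can be sketched as follows. The variances, the covariance of 0.5, the zero means, and the sample size of 10 are the values used in the lecture; the slice data[:, 0] takes all rows of the zeroth column:

```python
import numpy as np
import matplotlib.pyplot as plt

variance1 = 1.0
variance2 = 1.0
covariance = 0.5

# variances on the diagonal, the covariance in the off-diagonal elements
cov_matrix = np.array([[variance1, covariance],
                       [covariance, variance2]])

# expected value 0 for both variables; the result is a (10, 2) array:
# a table whose columns are the two random variables
data = np.random.multivariate_normal([0, 0], cov_matrix, size=10)

# all rows of the zeroth column vs. all rows of the first column
plt.scatter(data[:, 0], data[:, 1])
```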
In the same way, the column with index one gives the values of the second variable. This figure is not very informative, so let us increase the sample size. It is much better. Let me now increase the covariance. In fact, we use variables whose variances are equal to 1, and in this case the covariance equals the correlation. So let us increase it, for example to 0.9, and redraw the picture. Now we see that there is a clear correlation between our random variables: if the first random variable is large, somewhere here, then at the same time the value of the second random variable is also large. At least in our sample we mostly see high values of the second random variable there. And if our first random variable is small, like here near negative three, then the value of the second random variable is also small. This means that we have a correlation.

Let us play a little bit with this covariance (or correlation) parameter. For example, if I decrease it to zero, we get a picture like this. Just by looking at this picture, we see almost no correlation: the cloud of points is more or less symmetric. With covariance zero there is no correlation here; if we know that the value of one variable is large, like near three, we have no new information about the value of the other variable. We can also make this covariance negative, for example negative 0.5. You see a slight negative correlation in this case: negative means that higher values of the first variable correspond to lower values of the second variable. We can decrease it further and get this picture. We can put negative one here to get this picture. In this case, we have a direct linear relationship between these two random variables: if I know that the first random variable has value two, then I know for sure that the second random variable has value negative two. We can also put one here and get this line. Of course, we can play a little bit with the variances as well.
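These experiments can be sketched in one loop. The fixed seed and the sample size of 2000 are my own choices for reproducibility; since both variances are 1, the covariance we put into the matrix should match the sample correlation reported by np.corrcoef:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed (my choice) for reproducibility

results = {}
for cov in [0.9, 0.5, 0.0, -0.5, -1.0]:
    cov_matrix = [[1.0, cov], [cov, 1.0]]
    sample = rng.multivariate_normal([0, 0], cov_matrix, size=2000)
    # with unit variances, the covariance is also the correlation
    results[cov] = np.corrcoef(sample[:, 0], sample[:, 1])[0, 1]
    print(f"covariance {cov:+.1f} -> sample correlation {results[cov]:+.2f}")
```

At covariance negative one the points fall exactly on a line with slope negative one, so the sample correlation is exactly negative one as well.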
For example, I can set the variance of the second variable to four, and you see that the range of the second variable has increased. But the covariance that is equal to one no longer gives us this perfect correlation. To get perfect correlation, we have to increase the covariance to two, and in this case the correlation again equals one.

So we used this multivariate_normal function to get correlated normal random variables. But we can achieve a similar result without using this function. Let us consider another example. How can we generate some correlated random variables? We can do it in the following way. First, we generate values of the first random variable x; we will just use the standard normal distribution for simplicity. Then we say that our second random variable y is generated in the following way: it equals x plus some random shift, which is also a random variable. I will denote this random variable by epsilon. I have to generate the corresponding values of epsilon here; let us assume that it is also a normal random variable. Now, let us draw the corresponding scatter plot. You see in this picture that these random variables are correlated.

In fact, we can change the corresponding correlation coefficient by introducing a coefficient, for example in front of epsilon. If I put a small number here, the relation between y and x becomes stronger, because these random shifts will be of smaller magnitude. So we see that these random variables are better correlated than before. We can change this coefficient and get a small correlation or a large correlation. In this case, the correlation is very small. In fact it is still positive, but it is almost impossible to see this positivity from the data. This model is important because when we study linear regression, we will use exactly this kind of relation between two random variables to introduce the linear regression model.
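A minimal sketch of this second construction; the sample size, the seed, and the coefficient 0.3 in front of epsilon are my own illustrative choices. For y = x + a * epsilon with independent standard normal x and epsilon, the correlation of x and y works out to 1 / sqrt(1 + a**2), so a smaller coefficient indeed gives a stronger correlation:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)  # fixed seed (my choice) for reproducibility
n = 1000
a = 0.3  # coefficient on the noise: smaller a -> stronger correlation

x = rng.standard_normal(n)        # first variable: standard normal
epsilon = rng.standard_normal(n)  # independent standard normal shifts
y = x + a * epsilon               # second variable built from the first

plt.scatter(x, y)
print(np.corrcoef(x, y)[0, 1])    # close to 1 / sqrt(1 + a**2)
```

With a = 0.3 the theoretical correlation is about 0.96; setting a large, say a = 3, drops it below 0.32, which is already hard to see in a scatter plot.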