When we discussed discrete random variables, we considered some of their properties, like expected value and variance. How are these properties defined for continuous random variables? Let me recall that if we have a discrete random variable X whose distribution is given by its values x1, ..., xn and their probabilities p1, ..., pn, then we can find the expected value as the sum E[X] = x1 p1 + ... + xn pn. The idea of this summation is that we find, in a sense, the average value of X, but take into account the probability with which each value can be taken. So, for example, if some value has low probability, then its contribution to this average will be small, according to its probability, and if some other value has large probability, then its contribution to this average will also be large.

When we speak about a continuous random variable, we can use very similar logic, but instead of summation, we have to use integration. Now, let us assume that X is a continuous random variable, and instead of this distribution, X has a probability density function p. How do we define the expected value? Theoretically, we have to consider all possible values that X can take and multiply each by the corresponding probability, but there are infinitely many values, and it is not even a countable number of values, so we cannot write a sum like the one above. Instead, we have to write the integral from minus infinity to plus infinity of x times the probability density: E[X] = ∫ x p(x) dx. We see that this formula and the sum for the discrete case are quite similar. Here, we multiply the value that can be the result of our random variable by the probability density at this value. So, for example, if our probability density looks like this, the values that correspond to regions where the probability density is large enter this integral with larger weights, given by this coefficient, while these other values will be included with very small weight, according to the value of the probability density function.
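As a small sketch of this idea (not from the lecture itself), we can approximate the integral E[X] = ∫ x p(x) dx numerically with a Riemann sum, truncating the infinite limits to a range where the density is negligible. The normal density here is just an assumed example; any density could be plugged in.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Density of a normal distribution, used as an example density p(x).
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def expected_value(pdf, lo=-10.0, hi=10.0, n=200_000):
    # Approximate E[X] = integral of x * p(x) dx by a midpoint Riemann sum.
    # The limits (-inf, +inf) are truncated to [lo, hi], where pdf is negligible.
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx   # midpoint of the i-th subinterval
        total += x * pdf(x) * dx  # contribution of x, weighted by the density
    return total

print(expected_value(normal_pdf))                     # close to 0
print(expected_value(lambda x: normal_pdf(x, mu=2)))  # close to 2
```

The weighting by `pdf(x)` is exactly the mechanism described above: values where the density is large contribute more to the average.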
Let me show an example of calculating the expected value for the uniform distribution on a segment. Let X be uniformly distributed on the segment from zero to one. This corresponds to the statement that X has the probability density function that we discussed before: the uniform distribution on the segment. So we know how this probability density looks. Here is zero, here is one, and our probability density is this. Now, to find the expected value of X, we have to integrate from negative infinity to infinity x times p(x) dx. However, p(x) is equal to zero outside of the segment from zero to one. So, as previously, we can replace this integral with the integral from zero to one, and on this interval, the value of p(x) is a constant equal to one. So we can replace p(x) by one and integrate the function x. This integral can be calculated explicitly or geometrically. We can say that if this is x, and we have zero here, one here, one here, then the graph of the function y = x is this line, and the integral is the area under it, like this one. So this is the area of this triangle, and its area is equal to 1/2. So the expected value of this random variable is 1/2, which is quite natural, because it is the middle of the segment.

Actually, we can use the idea that we discussed before: if a probability density function is symmetric, then the axis of symmetry has to be equal to the expected value, if it exists. That is, if the probability density function is symmetric with respect to the vertical line x = x-naught, and the expected value of X exists, then the expected value of X is equal to x-naught. Note that this integral can fail to exist. We will consider some examples of random variables for which the expected value does not exist. It is possible. But in general, we will work mostly with random variables for which the expected value exists. Now, we will define variance for a continuous random variable.
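To make the uniform example concrete (this sketch is mine, not from the lecture), we can also check E[X] = 1/2 empirically: draw many samples from the uniform distribution on [0, 1] and average them.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

n = 1_000_000
# random.random() returns samples uniformly distributed on [0, 1)
samples = [random.random() for _ in range(n)]

sample_mean = sum(samples) / n
print(sample_mean)  # close to 0.5, the midpoint of the segment
```

By the law of large numbers, the sample mean converges to the expected value, so for a million samples it lands very close to 1/2.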
Now that we have defined the expected value, the variance will be defined in exactly the same way as for discrete random variables.
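Anticipating that definition (carried over from the discrete case, so this is a sketch under that assumption), the variance is Var(X) = E[(X − E[X])²], which for a continuous random variable becomes an integral against the density. For the uniform distribution on [0, 1], where p(x) = 1 on the segment, a numerical version looks like this:

```python
def variance_uniform01(n=100_000):
    # Var(X) = integral over [0, 1] of (x - E[X])^2 * p(x) dx, with p(x) = 1,
    # approximated by a midpoint Riemann sum.
    dx = 1.0 / n
    # First approximate E[X] = integral of x dx over [0, 1], which is 1/2.
    ex = sum((i + 0.5) * dx * dx for i in range(n))
    # Then integrate the squared deviation from E[X].
    return sum(((i + 0.5) * dx - ex) ** 2 * dx for i in range(n))

print(variance_uniform01())  # close to 1/12, the known variance of Uniform(0, 1)
```

The exact value 1/12 comes from computing ∫ (x − 1/2)² dx over [0, 1] explicitly.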