0:50

The first condition involves a quadratic function of the vector u; here I'm speaking about multivariate characteristic functions. The characteristic function is the mathematical expectation of the exponent in the power i multiplied by the scalar product of the deterministic vector u and the random vector X. So this mathematical expectation is of the following form: the exponent in the power i multiplied by the scalar product of u and mu,

2:54

mu is exactly the same vector as in the previous item. As for X0, X0 is a standard normal vector in the sense that all of its components are independent and all of them have a standard normal distribution, that is, a normal distribution with parameters 0 and 1.

Before I prove the theorem, let me just remark on the objects which you see in the formulation of the theorem. I mean the vector mu and the matrices C and A. In fact, in the proof we'll show that the vector mu is the vector of mathematical expectations. That is, it is equal to the mathematical expectation of X1, and so on, up to the mathematical expectation of Xn.

As for the matrix C, which appears also in the first item, this is a covariance matrix. If I denote the elements of this matrix by C_jk, where the indices j and k run from 1 to n, then these C_jk are the covariances between X_j and X_k.

4:25

Well, this matrix is of course symmetric, because the covariance between X_j and X_k is equal to the covariance between X_k and X_j. And it is also positive semi-definite; just recall that this property means that the sum of u_k C_kj u_j, taken over k and j from 1 to n, shall be non-negative for any u1, and so on, un from R^n. In other words, we can write this in a more compact way: if you multiply the matrix C by the vector u transposed on the left and by u on the right, this object should be non-negative for any u.

There is some confusion in the notation here: the terms positive definite and positive semi-definite are sometimes mixed. For instance, you can see in the literature that positive definite means exactly a matrix with this property. Or sometimes there is a distinction between positive definite and positive semi-definite: definite means that the inequality here is fulfilled with a strict sign, that is, the quadratic form is strictly positive, while positive semi-definite is exactly as in our definition. So to avoid any confusion, during this lecture I will mean by positive semi-definite exactly this property.

It's easy to prove that the covariance matrix C satisfies this assumption. In fact, what is written on the left-hand side is the sum over k and j from 1 to n of u_k multiplied by the covariance between X_j and X_k, multiplied by u_j, and this is equal to the covariance between the sum over j from 1 to n of u_j X_j and the sum over k from 1 to n of u_k X_k. And you see that these two random variables are actually completely the same; only the index of summation differs, but they are the same. So this is nothing more than the variance of the sum over j from 1 to n of u_j X_j, and you know that the variance is a non-negative function. Therefore, the matrix C is positive semi-definite, and this is exactly the matrix which is used in item 1.
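This argument can be checked numerically. Here is a minimal sketch (using numpy, with a made-up dataset) confirming that the quadratic form u transposed C u built on a covariance matrix coincides with the variance of the corresponding linear combination, and is therefore non-negative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up dataset: 3 correlated components, 10,000 observations.
X = rng.normal(size=(10_000, 3)) @ np.array([[1.0, 0.5, 0.0],
                                             [0.0, 1.0, 0.3],
                                             [0.0, 0.0, 1.0]])

# Empirical covariance matrix: C[j, k] = cov(X_j, X_k).
C = np.cov(X, rowvar=False)

# Symmetry: cov(X_j, X_k) = cov(X_k, X_j).
assert np.allclose(C, C.T)

# u^T C u equals the variance of the linear combination sum_j u_j X_j.
u = np.array([0.7, -1.2, 0.4])
quad = u @ C @ u
lin_comb_var = np.var(X @ u, ddof=1)  # same ddof as np.cov's default

print(np.isclose(quad, lin_comb_var))  # the quadratic form is a variance
print(quad >= 0)                       # and hence non-negative
```

The equality is an algebraic identity (not just approximate), so it holds for any dataset and any choice of u.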

Let me now continue. Now, what about the matrix A? This matrix appears in the second item of the theorem. A is actually the matrix C in the power ½. What does this notation mean? It means that A is a matrix such that if you multiply A by itself, you will get C.

7:56

Well, you know that C is a positive semi-definite matrix. Therefore, there exists an orthogonal matrix U, the matrix of a change of basis. This matrix has the property that the inverse of U is equal to U transposed. And the matrix C is equal to U transposed, multiplied by the diagonal matrix with entries d1, and so on, dn, multiplied by U.

8:59

If we take A to be U transposed, multiplied by the diagonal matrix with entries the square roots of d1, and so on, dn, multiplied by U, then this matrix has exactly the required property: if you multiply A by A, you will get exactly the matrix C. And this matrix A is also symmetric; therefore C, in this case, is also equal to A A transposed.
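This construction of A = C^½ via the orthogonal diagonalization can be sketched in numpy, with a made-up matrix C (note that numpy's eigh returns the decomposition in the form C = V diag(d) V transposed):

```python
import numpy as np

# Made-up positive semi-definite matrix C (eigenvalues 1 and 3).
C = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Orthogonal diagonalization: eigh returns C = V @ diag(d) @ V.T,
# with V orthogonal (its inverse is V.T) and d the eigenvalues.
d, V = np.linalg.eigh(C)

# A = C^(1/2): keep the eigenvectors, take square roots of the eigenvalues.
A = V @ np.diag(np.sqrt(d)) @ V.T

print(np.allclose(A @ A, C))     # A multiplied by itself gives C
print(np.allclose(A, A.T))       # A is symmetric,
print(np.allclose(A @ A.T, C))   # so C = A A^T as well
```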

So now we know what these objects mean: we have the exact meaning of the vector mu, of the matrix C, and of the matrix A. Let me now prove these facts, I mean the fact that our definition is equivalent to the first and to the second items of the theorem. The scheme of the proof is the following: I will first show that the definition is equivalent to item 1, the first item of the theorem, and then I will show that items 1 and 2 are equivalent. Let me start with the first part. So let me first show that it follows from the definition that the characteristic function is exactly of this form.

10:17

Well, this statement is in fact not so difficult, and to show it, let me first mention that since we assumed that X is Gaussian by definition, the scalar product of u and X has a normal distribution. In fact, this scalar product is nothing more than a linear combination of the components of the vector X, and therefore, according to the definition, it has a normal distribution. Therefore, what we have here, the characteristic function of the vector X at u, can be considered as the characteristic function of the random variable xi, equal to the scalar product of u and X, at the point 1. So this is nothing more than the characteristic function of xi at the point 1.

11:24

As I mentioned in the beginning of this lecture, the characteristic function of a normal random variable is of the form: the exponent in the power i, multiplied by the parameter mu of this random variable xi, multiplied by the argument u, minus ½ sigma xi squared multiplied by u squared; and here u = 1 in our situation. So to prove this item, it is sufficient to find the parameters mu and sigma for the random variable xi, and let us do this now. So what do we know about mu xi?

12:22

This is exactly the mathematical expectation of the sum of u_k X_k, where k runs from 1 to n. You know that the mathematical expectation is a linear function; therefore it is equal to the sum over k from 1 to n of u_k multiplied by the mathematical expectation of X_k. As you know, mu is exactly equal to the vector of mathematical expectations. Therefore, I can simply write mu_k here and conclude that this is the scalar product of u and mu.

13:07

Now, what about sigma xi squared? This is actually the variance of the random variable xi, and let me write this variance as the covariance between xi and xi. That is, the covariance between the sum of u_k X_k and the sum of u_j X_j, where k runs from 1 to n and j runs from 1 to n.

14:04

According to our notation, the covariances appearing here are exactly the elements of the covariance matrix C, and therefore this sum is equal to the product u transposed C u. If you now substitute these expressions for sigma and mu into this formula, namely this expression instead of mu and this expression instead of sigma, you will get exactly the statement of item 1: the characteristic function of X is equal exactly to this formula.
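Item 1 can also be verified by simulation: averaging exp(i⟨u, X⟩) over many draws of a Gaussian vector should approach exp(i⟨u, mu⟩ − ½ u transposed C u). A sketch with made-up parameters mu, C, and u:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up parameters of the Gaussian vector and the argument u.
mu = np.array([1.0, -0.5])
C = np.array([[1.0, 0.6],
              [0.6, 2.0]])
u = np.array([0.3, -0.2])

# Monte Carlo estimate of E[exp(i <u, X>)].
X = rng.multivariate_normal(mu, C, size=200_000)
empirical = np.mean(np.exp(1j * (X @ u)))

# Item 1: exp(i <u, mu> - 1/2 u^T C u).
theoretical = np.exp(1j * (u @ mu) - 0.5 * (u @ C @ u))

print(abs(empirical - theoretical) < 0.02)
```

With 200,000 draws the Monte Carlo error is of order 1/sqrt(200,000), well within the tolerance used here.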

14:46

As for the converse, why does the definition follow from item 1? Actually, there is nothing to prove, because there is a one-to-one correspondence between distributions and their characteristic functions. Therefore, since we know that for Gaussian vectors the characteristic function is of this form, then, with no doubt, if you know that the characteristic function is of this form, the vector is Gaussian. So there is nothing to prove.

15:33

Actually, X0 is a standard normal vector and therefore this vector is Gaussian, because any linear combination of independent normally distributed random variables also has a normal distribution. So the vector X0 is Gaussian, and therefore we can use what we have already proven: its characteristic function is equal to this expression. Here mu stands for the vector of mathematical expectations of the components of X0. All components are standard normal, and therefore this vector is equal to 0. As for the matrix C, it's the covariance matrix: all components are independent, and therefore outside the main diagonal all elements are equal to 0. And on the main diagonal there are units, because the variances of all components are equal to 1.

16:39

So what we have here: the characteristic function of X0 is actually equal to the exponent in the power minus ½ u transposed u. Now let me mention that the characteristic function of the vector X is closely related to the characteristic function of X0, because the characteristic function of X is the mathematical expectation of the exponent in the power i multiplied by the scalar product of u and X, and now instead of X I will write the formula A X0 + mu.

18:08

And now we substitute our expression for the characteristic function of X0 into this formula. What we'll get is the exponent in the power i multiplied by the scalar product of u and mu, multiplied by the exponent in the power minus ½ u transposed A A transposed u. And if we now denote A A transposed by the matrix C, we definitely get that the characteristic function is of this form. This C is symmetric, because C transposed = C, and it is also positive semi-definite, because any matrix which can be represented as A A transposed is positive semi-definite.
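The moments used in this step can be confirmed by simulation: a vector built as A X0 + mu from a standard normal X0 has mean mu and covariance A A transposed. A minimal sketch with made-up A and mu:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up parameters of the transformation.
mu = np.array([1.0, 2.0])
A = np.array([[1.0, 0.0],
              [0.5, 1.2]])

# X0: standard normal vector (independent N(0, 1) components), many draws.
X0 = rng.normal(size=(200_000, 2))

# X = A X0 + mu, applied to each draw (rows), hence the transpose of A.
X = X0 @ A.T + mu

print(np.allclose(X.mean(axis=0), mu, atol=0.02))                # mean is mu
print(np.allclose(np.cov(X, rowvar=False), A @ A.T, atol=0.05))  # covariance is A A^T
```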

Finally, let me mention that the opposite statement, that the second item follows from the first, is essentially already proven. Because if we now denote by A the matrix C in the power ½, it's easy to see that the characteristic function of the vector A X0 + mu will be exactly of this form. So this theorem is completely proven. And let me now show how the theorem helps us to answer some very interesting mathematical questions.

[MUSIC]