So, let us formulate the phone book problem in a general way. We have some function f, implemented as an oracle, which maps n-bit integers to n-bit integers (or m-bit integers, it doesn't matter here). There is exactly one point in the function's domain on which f outputs a, the desired value. This point we will call omega. In the phone book problem, a is a phone number and omega is a name: we have a phone number and we search for the name. Having this function defined, we can easily define and implement the function f_omega, which maps n-bit integers to one bit: f_omega(x) returns zero for any x not equal to omega, and on omega it returns one.

Now that we have the classical oracle, let's introduce the quantum one. No surprise here, we introduce it as usual. Here, the register x holds n qubits and the register y holds only one qubit. If we pass through this y register the Hadamard minus state, we get this expression as usual. So, we have this factor of minus one to the power of f(x) here, and we can define the projection of the U_f operator onto the argument subspace. So, we are going to consider only this projection. This projection we will call U_omega, and it works like this: on any basis vector x which is not equal to omega, U_omega acts like the identity, and on the vector omega, it acts like minus the identity. If we put that Hadamard minus state in the value register and restrict ourselves to the argument subspace, we get this operator U_omega here.

I want to rewrite this operator, which acts on the argument subspace, like this. Here we might need some explanation about this new notation, which we never met in the course before. So, let us recall the bra-ket notation. What we have here: this vector is a bra and this one is a ket. A bra is a row vector of conjugates. So, for example, if you have the vector x as a bra, this is a row vector: its components are organized in a row, and not the components themselves but their conjugates.
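To make the bra-ket bookkeeping concrete, here is a minimal numpy sketch. The vectors x and y are arbitrary illustrative values; the point is only that the bra is the conjugate transpose of the ket, and that the bracket is row-times-column matrix multiplication:

```python
import numpy as np

# Two illustrative ket vectors (column vectors) with complex entries.
x = np.array([[1 + 1j], [2 - 1j]])
y = np.array([[3j], [1]])

# The bra <x| is the conjugate transpose of the ket |x>: a row of conjugates.
bra_x = x.conj().T

# The bracket <x|y> is then just row-times-column matrix multiplication.
# It yields a 1x1 matrix, i.e. a scalar: sum over i of conj(x_i) * y_i.
inner = (bra_x @ y).item()

# The same scalar computed component by component, for comparison.
expected = sum(np.conj(x[i, 0]) * y[i, 0] for i in range(2))
print(inner)  # (5+4j), and expected agrees
```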
If you have a vector y as a ket, then it is a column vector; that's an ordinary vector. The dot product we met many times before in this course, and in this notation it is just the matrix multiplication of a row and a column. So, when you multiply the row and the column, you get the sum over i from one to n of x_i conjugate times y_i, and it is a scalar value. So, it is our dot product, and we see that in this notation it is just matrix multiplication of a row and a column.

But here we have the inverse order: first we have a column, and then we have a row. Again, if we employ matrix multiplication, a column times a row gives us a matrix. It is a square matrix in this case, because the dimension here and here is the same n, and it can be used as an operator in this space. So, we can act by this matrix on some other column vector, for example z. Matrix multiplication is associative, so instead of computing this matrix first and then applying it to z, we can first compute this product, which is the dot product of x and z. This is a scalar value, and then we multiply the vector by that scalar, which is just multiplication of a vector by a scalar. So, this thing here is an operator, and on any vector x this operator acts like this: we compute the dot product, the scalar product, of omega and x, and then multiply the vector omega by this scalar.

Now, with this new knowledge, we can return to our oracle. For any basis vector x which is not equal to omega, the oracle acts like this. The identity gives us just x, minus two times omega times the dot product of omega with x, which is zero: x is a basis vector and omega is a basis vector, they are not equal to each other, so they are orthogonal. Thus the oracle acts as the identity. On omega, we have omega from here, then we have this two omega times the dot product of omega with omega, which is one, and we get omega minus two omega, which is minus omega. So, minus the identity.
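The two cases above can be checked numerically. This is a small sketch, assuming a three-qubit argument register and an arbitrarily chosen marked index omega = 5; it builds the oracle as the identity minus twice the outer product of omega with itself, exactly as in the rewritten form:

```python
import numpy as np

n = 3                      # three qubits, so the argument space has dimension 8
N = 2 ** n
omega = 5                  # the marked index; an arbitrary choice for this demo

# |omega><omega| is a column times a row: an N x N matrix (the outer product).
ket_omega = np.zeros((N, 1))
ket_omega[omega, 0] = 1.0
U_omega = np.eye(N) - 2 * ket_omega @ ket_omega.conj().T

# On any basis vector x != omega the oracle acts as the identity ...
x = np.zeros((N, 1))
x[2, 0] = 1.0
print(np.allclose(U_omega @ x, x))                   # True

# ... and on |omega> it acts as minus the identity.
print(np.allclose(U_omega @ ket_omega, -ket_omega))  # True
```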
We see that it is indeed our quantum oracle: the projection of the operator U_f onto the argument subspace. Now, any algorithm acts on some input data. Usually we have something like this: zeros in the input register, one in the value register, we apply Hadamard to all of it, and after that we get this state. Grover's algorithm appears to be no exception to this rule. If we look at the projection of this vector onto the argument subspace, this is going to be the initial state of the system and the input data for Grover's algorithm. As we have this vector s, we can define another useful operator U_s, which acts like this.

Now, this part is a bit trickier. When some operator is given to us as an oracle, nobody asks us whether we can implement it. But when we introduce some operator on our own, like we did before with the QFT, we have to prove that we can implement it and that the implementation will be efficient. You can get some clues about the implementation of this operator U_s in the exercises for this week. Also, there are some examples in the quantum computer simulator, the link to which I already gave you in the previous lectures. So, we are not going to dwell here on the implementation of this U_s, and we proceed to the Grover iteration. Each iteration of Grover's algorithm looks like this: the initial state is s, then we apply U_omega to s, and then we apply this operator U_s, and this is what we get after the first iteration. After k iterations, we have something like this. So, let's see what this iteration does to our initial state s in the argument subspace of the system.
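The whole iteration can be sketched end to end in numpy. This is an illustrative simulation, not the circuit implementation: it assumes the standard reflection form of the diffusion operator, U_s = 2|s><s| - I, and the same three-qubit register and marked index omega = 5 as before. Running roughly (pi/4) * sqrt(N) iterations of U_s applied after U_omega concentrates the amplitude on omega:

```python
import numpy as np

n = 3
N = 2 ** n
omega = 5                  # the marked index; an arbitrary choice for this demo

# Uniform superposition |s>: the projection of the Hadamard-transformed
# all-zeros input onto the argument subspace.
s = np.full((N, 1), 1 / np.sqrt(N))

# Oracle U_omega = I - 2|omega><omega| and diffusion U_s = 2|s><s| - I.
ket_omega = np.zeros((N, 1))
ket_omega[omega, 0] = 1.0
U_omega = np.eye(N) - 2 * ket_omega @ ket_omega.conj().T
U_s = 2 * s @ s.conj().T - np.eye(N)

# One Grover iteration applies the oracle, then the diffusion operator.
state = s
k = int(round(np.pi / 4 * np.sqrt(N)))   # 2 iterations for N = 8
for _ in range(k):
    state = U_s @ (U_omega @ state)

probs = (np.abs(state) ** 2).ravel()
print(probs.argmax(), probs[omega])      # 5, ~0.945
```

Measuring the register at this point would return omega with probability about 0.945, compared to 1/8 for a uniformly random guess.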