In this video, we start to talk about Bayesian optimization.

So, what is Bayesian optimization?

Bayesian optimization is a method for finding the optimum of an expensive cost function.

This cost function is also called the objective function and is denoted f.

We assume that evaluating the objective function at a single point is expensive

and that its derivatives are unknown.

The goal of Bayesian optimization is to find the optimum of

the objective function using as few function evaluations as possible.

Consider an example with the following objective function f.

The goal is to find the minimum of this function using Bayesian optimization.

The optimization algorithm is as follows.

First, find an approximation of the objective function using previously calculated values,

that is, by solving a regression problem.

Then, using the approximation,

find the optimum point of an acquisition function.

Finally, sample the objective function at that point and repeat these steps.

The acquisition function is used during Bayesian optimization

to choose the next point at which to evaluate the objective function.

There is a variety of acquisition functions.

One of them is the Lower Confidence Bound, shown on the slide.

In this acquisition function,

mu is the mean of the approximation of the given objective function,

sigma is the standard deviation of the approximation,

and k is an adjustable parameter of this function.

There is also an Upper Confidence Bound for objective function maximization.
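Concretely, both bounds are simple functions of the approximation's mean and standard deviation. In this sketch the function names and the default value of k are my own choices, not from the lecture.

```python
import numpy as np

def lower_confidence_bound(mu, sigma, k=2.0):
    # Minimized when searching for the objective's minimum.
    return mu - k * sigma

def upper_confidence_bound(mu, sigma, k=2.0):
    # Maximized when searching for the objective's maximum.
    return mu + k * sigma

mu = np.array([0.0, 1.0, -0.5])    # GP mean at three candidate points
sigma = np.array([1.0, 0.2, 0.8])  # GP standard deviation at those points
# The candidate with the smallest LCB becomes the next sample point.
print(lower_confidence_bound(mu, sigma))
```

Larger k puts more weight on sigma, favoring uncertain regions; smaller k favors the current mean prediction.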

Consider how

the optimization works for the objective function shown on the slide.

Let's start the optimization from three observations.

At the first step of the Bayesian optimization,

find an approximation of the objective function f

using Gaussian processes and the known observations.

This approximation is shown as

a green line with a 3-sigma confidence region around it.
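This first step can be reproduced with an off-the-shelf GP regressor. The sketch below assumes a stand-in objective, three arbitrary observations, and an RBF kernel; the slide's actual function is not shown here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):
    # Stand-in for the objective on the slide (an assumption).
    return np.sin(3 * x)

X = np.array([[-2.0], [0.5], [2.0]])  # the three known observations
y = f(X).ravel()

# Fit a Gaussian process to the observations (the regression step).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)

grid = np.linspace(-3, 3, 200).reshape(-1, 1)
mu, sigma = gp.predict(grid, return_std=True)
# The band [mu - 3*sigma, mu + 3*sigma] is the 3-sigma confidence region.
lower, upper = mu - 3 * sigma, mu + 3 * sigma
```

Plotting mu as the green line and filling between lower and upper reproduces the picture on the slide: the band pinches to zero width at the observations and widens between them.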

In this example, the Lower Confidence Bound is used as the acquisition function.

This acquisition function is shown as the blue line in the figure.

At the second step of the Bayesian optimization,

find the minimum point x_4 of the acquisition function.

This minimum is shown as a blue dot in the figure.

At the third step of the Bayesian optimization,

sample the objective function at the newly found point x_4.

These steps are repeated several times.

Let's consider how the function approximation changes with iterations.

This is the approximation after the second iteration.

This is after the third iteration,

then after the 4th iteration.

This is after the 10th iteration, then after the 15th iteration,

and finally, after 20 iterations, the minimum of the objective function is found.

The Gaussian process approximates the function very well,

especially in the region of the minimum.

Also note that there are more

observations in the region of the minimum than in other regions.

At the end of this video,

I would like to remind you of the Bayesian optimization algorithm.

As the first step, find an approximation of the objective function

using previously calculated values, by solving a regression problem.

As the second step, using the approximation,

find the optimum point of the acquisition function.

After that, sample the objective function at this new point and repeat all these steps.

In the next video,

we'll talk about the exploration-exploitation

trade-off in Bayesian optimization.