0:00

Hello everyone.

This is Tural Sadigov, and today we're going to talk about SARIMA processes.

Our objectives are to describe seasonal ARIMA models, which are also called SARIMA models, and to rewrite seasonal ARIMA models using the backshift and difference operators.

So let's remember ARIMA processes X_t. If we let Y_t be Delta^d X_t (remember, Delta is the difference operator, 1-B, where B is the backshift operator, and d is the order of differencing), then we take the difference of X_t d times to obtain Y_t, and Y_t becomes an ARMA model, a mixed ARMA model. What does that mean?

It has autoregressive parts right here, the p terms, and it has moving average parts, which are a linear combination of the noises.

Now, when Y_t is ARMA, X_t becomes ARIMA, where d is the order of differencing.

We can also rewrite this using polynomial operators. For example, phi(B) is the autoregressive polynomial and theta(B) is the moving average polynomial, and this becomes our ARMA model.
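In symbols, the operator form described here can be written out as follows (standard ARIMA notation, with Z_t denoting the white noise):

```latex
\phi(B)\,Y_t = \theta(B)\,Z_t,
\qquad Y_t = \Delta^d X_t = (1-B)^d X_t,
```

so the ARIMA(p, d, q) process satisfies \phi(B)(1-B)^d X_t = \theta(B) Z_t, where \phi(B) = 1 - \phi_1 B - \cdots - \phi_p B^p and \theta(B) = 1 + \theta_1 B + \cdots + \theta_q B^q.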

Now, sometimes our data might contain some seasonality, and the way to think about this is the following. Let's say we are looking at the sales of refrigerators. If you look at the sales in August of this year and then August of last year, there might be some relationship between those two months. So there might be some seasonality going on, and in that case the observations might repeat themselves after every s observations; in this case, 12 observations.

So, for a time series of monthly observations, X_t might depend on annual lags. For example, X_t might depend on X_{t-12}, which is last August; on X_{t-24}, which was August two years ago; and so forth. In that case, we say that we have seasonality, and the span of the seasonality, or the period, is s = 12.

It is also possible that our data comes as quarterly earnings, for example. We have looked at such data: we're going to revisit the Johnson & Johnson data, which was about the quarterly earnings of a company. In that case, the span of the seasonality is just four. So we would like to discuss seasonal ARIMA models, which were developed by Box and Jenkins.

So, what is a pure seasonal ARMA process? Well, a seasonal ARMA process is basically an ARMA process, except that instead of lowercase p and q we use capital P and Q for the order of the autoregressive terms and the order of the moving average terms, and s is the span of the seasonality. And we have the following format. The only difference between this process and the ARMA process is that we now have B^s here.

The autoregressive polynomial is the following: 1 - Phi_1 B^s (not B) - Phi_2 B^{2s} (not B^2), and so forth. And the moving average polynomial is very similar (not exactly the same, but very similar), with B^s, B^{2s}, and so forth.
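Written out, a pure seasonal ARMA(P, Q)_s process takes the form (standard notation; Z_t is white noise):

```latex
\Phi(B^s)\,X_t = \Theta(B^s)\,Z_t,
```

with seasonal polynomials \Phi(B^s) = 1 - \Phi_1 B^s - \Phi_2 B^{2s} - \cdots - \Phi_P B^{Ps} and \Theta(B^s) = 1 + \Theta_1 B^s + \Theta_2 B^{2s} + \cdots + \Theta_Q B^{Qs}.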

Now, just like the mixed ARMA process, we want our pure seasonal ARMA process, or pure SARMA process, to be stationary and invertible. And for that reason, just like before, we're going to require that all the complex roots of these polynomials lie outside of the unit circle.

So let me give you an example: ARMA(1,0)_12. I have autoregressive order P = 1; the moving average order is zero, so I don't have any moving average terms; and the seasonality is 12. We basically have only an autoregressive polynomial of degree one, but it's not really degree one: it's degree one times s, which is 12.

And if I expand it and rewrite it, you can see that X_t here depends on annual lags. For example, if this is monthly data, it depends on X_{t-12}, plus some noise.
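As a quick illustration, here is a minimal numpy sketch that simulates this pure seasonal AR model, X_t = Phi_1 X_{t-12} + Z_t. The coefficient value 0.5 and the series length are illustrative choices, not from the lecture.

```python
import numpy as np

# Pure seasonal AR: ARMA(1,0)_12, i.e. X_t = Phi_1 * X_{t-12} + Z_t.
# |Phi_1| < 1 keeps the root of 1 - Phi_1 z^12 outside the unit circle.
rng = np.random.default_rng(42)
Phi1, s, n = 0.5, 12, 240
Z = rng.normal(size=n)                    # white noise
X = np.zeros(n)
for t in range(n):
    past = X[t - s] if t >= s else 0.0    # zero initial conditions
    X[t] = Phi1 * past + Z[t]
```

Each observation depends only on the value twelve steps back, which is exactly the annual-lag dependence described above.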

Let's look at ARMA(1,1)_12. In this case, we now have not only an autoregressive polynomial but also a moving average polynomial, again of degree one times s, which is 12. And if I expand it, we obtain that X_t depends on X_{t-12}, but it also depends on the noise from last year, if this were monthly data.

Now, in general, not just for a pure seasonal ARMA process: if you look at a seasonal ARIMA process, then we have seven parameters: p, d, q, capital P, capital D, capital Q, and s. And this is the polynomial form of that process. We have (1-B)^d, which is basically the difference operator applied d times; this is where the "I" in ARIMA comes from. And we also have seasonal differencing: (1-B^s)^D. So this is seasonal differencing, and that is non-seasonal differencing.

We have the usual autoregressive polynomial, but we also have a seasonal autoregressive polynomial. And if you look at the right-hand side, we have a moving average polynomial and a seasonal moving average polynomial as well, and all of them are specified right here.
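Putting the four polynomials and the two differencing operators together, the SARIMA(p, d, q)(P, D, Q)_s model reads (standard notation):

```latex
\Phi(B^s)\,\phi(B)\,(1-B)^d\,(1-B^s)^D X_t = \Theta(B^s)\,\theta(B)\,Z_t .
```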

In these SARIMA models, we basically have two parts: a non-seasonal part and a seasonal part. So p here is the order of the non-seasonal AR terms, d is the order of non-seasonal differencing, and q is the order of the non-seasonal moving average terms. Capital P is the order of the seasonal autoregressive terms; in other words, sometimes we call these SAR terms. Capital D is the order of seasonal differencing, in other words, of (1-B^s). And capital Q is the order of the seasonal MA terms, which we sometimes write as SMA terms.

Now, as in ARIMA processes, we usually don't do much differencing; in practice it's either one or two. So if capital D = 1, then the seasonal difference operator Delta_s applied to X_t is (1-B^s)X_t, which is basically X_t - X_{t-s}. So you look at the differences: if this were again monthly data, we are basically looking at the difference between the sales last August and this August. If D = 2, then we are looking at double differencing: it's going to be (1-B^s)^2. If I expand it and open it up, it becomes X_t - 2X_{t-s} + X_{t-2s}.
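The two seasonal-differencing formulas above can be checked numerically. This is a minimal numpy sketch; the toy series is an illustrative choice, not from the lecture.

```python
import numpy as np

s = 12                                       # span of the seasonality (monthly data)
x = (np.arange(48, dtype=float) + 1) ** 2    # toy monthly series
# D = 1: (1 - B^s) X_t = X_t - X_{t-s}
d1 = x[s:] - x[:-s]
# D = 2: apply (1 - B^s) a second time
d2 = d1[s:] - d1[:-s]
# Expanded form from the lecture: X_t - 2 X_{t-s} + X_{t-2s}
expanded = x[2 * s:] - 2 * x[s:-s] + x[:-2 * s]
```

Applying the seasonal difference twice reproduces the expanded three-term formula exactly.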

So let me give you an example of a SARIMA process: the SARIMA(1,0,0)(1,0,1)_12 model. The first triple is little p, little d, little q; the second is capital P, capital D, and capital Q; and the seasonality is 12. I can see that there is no differencing, so I don't expect any differencing in my model. And there are no non-seasonal moving average terms in my model, but there are seasonal moving average terms, seasonal autoregressive terms, and the usual non-seasonal autoregressive terms as well. So (1 - phi_1 B) is basically coming from this one; the degree of that polynomial is one. This one is coming from the seasonal autoregressive polynomial, whose degree is one times 12, which is 12. And this part is the seasonal moving average polynomial, with degree one times 12. If I expand this just like a polynomial, I can obtain that X_t actually depends on X_{t-1}. So X_t depends on the previous lag, it depends on the previous year, and, surprisingly, it also depends on X_{t-13}. Right? That is one lag before last year's data. And it also depends on the noise from last year as well.
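Concretely, the expansion narrated here is (standard notation):

```latex
(1-\phi_1 B)(1-\Phi_1 B^{12})\,X_t = (1+\Theta_1 B^{12})\,Z_t,
```

which, after multiplying out and moving all the X terms except X_t to the right-hand side, gives X_t = \phi_1 X_{t-1} + \Phi_1 X_{t-12} - \phi_1\Phi_1 X_{t-13} + Z_t + \Theta_1 Z_{t-12}.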

Let me give you another example: SARIMA(0,1,1)(0,0,1)_4. Here, we do not have autoregressive terms or seasonal autoregressive terms, and we do not have seasonal differencing, but we have moving average terms, seasonal moving average terms, and non-seasonal differencing. Now, the four here is the span of the seasonality, so you can think of this as quarterly data. So (1-B) comes from the non-seasonal differencing; (1 + theta_1 B) comes from the non-seasonal moving average terms; and (1 + Theta_1 B^4) comes from the seasonal moving average terms, with degree one times 4, which is 4.

If I expand this and put everything on the right-hand side, we obtain that X_t depends on X_{t-1}; this is because of the non-seasonal differencing. And then we have noises from previous lags: Z_t and Z_{t-1} come from the moving average part, and Z_{t-4} and Z_{t-5} come from the seasonal moving average part of the model.
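In operator form, this second example expands as (standard notation):

```latex
(1-B)\,X_t = (1+\theta_1 B)(1+\Theta_1 B^4)\,Z_t,
```

so that X_t = X_{t-1} + Z_t + \theta_1 Z_{t-1} + \Theta_1 Z_{t-4} + \theta_1\Theta_1 Z_{t-5}, matching the lags listed above.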

So, what have you learned? You have learned how to describe seasonal autoregressive integrated moving average models, and you have learned how to rewrite seasonal ARIMA models using the backshift and difference operators.
