Once again we run through our probability tree:

two competing claims: breast cancer and no breast cancer.

However, what has changed now is that this

patient is no longer an anonymous member of the population.

We've tested them once and they tested positive.

So we have some additional information about them, and

we should update our prior with this additional information.

In other words, we plug in the posterior from the previous iteration,

the previous test, to be our new prior.

The prior probability of having breast cancer is therefore updated to 12%, and

the probability of not having breast cancer to its complement, 88%. Next, we can run through our tree again.

Remember, nothing about the test has changed.

So the probability of testing positive given

a patient has breast cancer is still 78%.

And the probability of testing

negative given the patient has breast cancer is still 22%.

Similarly with the lower branch, nothing about the

test has changed, so the conditional probabilities of testing

positive or negative given the patient does not

have breast cancer are still 10% and 90%, respectively.

Once again, we're only interested in the branches where

the patient has tested positive. Because we're saying that the second

mammogram also yielded a positive result, we can

multiply through the branches to find our joint probabilities.

And these are going to change a little bit, because

our starting probabilities, the probabilities

in the first branch, have changed.

This time, our probability of having breast cancer and testing positive is

higher, at 0.0936, and our probability of having no breast

cancer and testing positive is 0.088.
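The branch multiplications above can be sketched in a few lines of Python; the variable names are mine, but the probabilities are the ones from the tree (a 12% updated prior, 78% sensitivity, and a 10% false-positive rate):

```python
# Updated prior: the posterior from the first positive test
p_cancer = 0.12
p_no_cancer = 0.88  # complement of the prior

# Test characteristics, unchanged from the first test
p_pos_given_cancer = 0.78
p_pos_given_no_cancer = 0.10

# Multiply through the positive branches to get the joint probabilities
joint_cancer_pos = p_cancer * p_pos_given_cancer            # 0.0936
joint_no_cancer_pos = p_no_cancer * p_pos_given_no_cancer   # 0.088
```

Note that with the updated prior, the joint probability for the breast-cancer branch now exceeds the one for the no-breast-cancer branch.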

In this example, we have reviewed

a Bayesian approach to statistical inference,

which involves setting a prior, collecting data, obtaining a posterior,

and updating the prior with the posterior from the previous step.
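The prior, data, posterior loop described above can be sketched as a small Python function (a sketch of my own, using the transcript's 78% sensitivity and 10% false-positive rate as defaults); feeding it the 12% updated prior gives the posterior after the second positive mammogram:

```python
def bayes_update(prior, sensitivity=0.78, false_positive_rate=0.10):
    """One step of Bayes' theorem: P(cancer | positive test).

    numerator   = P(cancer) * P(positive | cancer)
    denominator = P(positive), summed over both branches of the tree
    """
    numerator = prior * sensitivity
    denominator = numerator + (1 - prior) * false_positive_rate
    return numerator / denominator

# Posterior after the second positive test, starting from the
# 12% prior (itself the posterior from the first positive test):
posterior = bayes_update(0.12)  # 0.0936 / (0.0936 + 0.088) ≈ 0.515
```

Each additional positive result can be folded in the same way, by passing the latest posterior back in as the new prior.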

In addition, we got some practice working with conditional

probabilities, probability trees, and Bayes' theorem in general.
