Let's take a look at some example code that shows when implicit conversions happen,

and how you have to be careful about types.

First, we make an int called nHours and initialize it to 40.

Next, we make nDays and initialize it to seven.

Next, we have a float called average,

which is initialized to nHours divided by nDays.

Both nHours and nDays are ints, so this is integer division.

We have 40 divided by seven,

which for integer division truncates to five.

Now since we are assigning an int to a float,

the compiler will implicitly convert that integer result to a floating point number.

But it does that after the division,

so we end up initializing average with a value of 5.0.

Then we print out that 40 hours in 7 days is 5.0 hours per day.

This is not the right answer,

so let's fix this code.

Here we've made one small change to the code.

We have explicitly cast nDays to a float before we do the division,

which is called out with the red underline here.

We start the same way,

nHours is created and initialized to 40,

and nDays is created and initialized to seven.

However, now things go differently.

The divisor of this expression is now seven cast to a float,

so we need to evaluate the integer 40 divided by the floating point number 7.0.

Computers divide integers by integers

or floating point numbers by floating point numbers,

so the compiler has to implicitly convert

40 to a floating point number before doing the division.

Now we are doing floating point division,

and 40.0 divided by 7.0 equals about 5.71.

We now initialize average with this value, which means that when we print out our results,

we get the correct answer.

Now you have seen both implicit conversions as well as explicit casts.

Casting is something you use when you need to.

However, you should use it sparingly,

and only after you have thought through what you are converting between and why.