0:21
For the paddle to be interesting for our game,
we need it to be movable when the user touches the screen.
And so in order to do that, we need to handle touches and handle call-backs.
And I want to introduce how multi-touch works so that you can
extend it into your game, and before we go and implement it in a Breakout case study,
I want to give you an overview of what's happening there.
So at a high level,
what multi-touch is about is providing games a way to receive input.
Games often respond to touch events in custom ways.
Now this is in contrast to, for example,
the way UIKit catches tap signals from the user touching a UIView, or
perhaps even gesture recognizers that capture pinch and zoom moves.
Sometimes games and graphic heavy applications need finer control
over the interaction with the user and
multi-touch enables a developer to get that kind of control.
1:21
Tracking multiple touches is often part of this.
So sometimes it's necessary to be able to keep track of where more than one finger
is on the screen at a given time, and to have some effect or some kind of interaction
that's based on those multiple fingers touching the screen.
So high level that's what we're talking about.
Using multiple fingers on the screen and keeping track of that in order to do
something that we might find good for our game.
1:45
Now there are a lot of ways to handle touches.
And we've worked with a lot of them in the specialization,
primarily the standard controls.
So these are controls that are accessed via outlets.
This is like when we would control drag in the storyboard from a UI element
over to our view controller and that would create a function or a method within our
class that would get called when the user tapped on a button for example.
Or tapped on any other user interface element.
That's a standard control, it's the easiest.
It works for most of the high-level widgets that you get.
The second way to handle touches is through gesture recognizers.
So gesture recognizers are things you can add to your app so
that you can allow the operating system to interpret multiple touches and
only tell you when they meet certain criteria.
And these criteria are gestures that have already been defined and
are standard across the iOS platform.
So examples of that are the pinch gesture, or the zoom gesture,
or the two finger rotate gesture, the swipe and the flick gesture.
2:55
Those kinds of gestures are available to you through pre-written code so
that you don't have to write low level multi-touch event recognizers
to capture these gestures.
So if you find yourself starting to write multi-touch event handling for
these gestures, take a step back and look at the documentation for
gesture recognizers, which have already done that interpretation work for you and
will only give you a call when that particular gesture is seen.
3:22
The third way that you can handle touches, at least in the way that we're
thinking about it today, is through multi-touch events.
And so this is a very low-level feedback that you get from the iOS platform about
the status of multiple touches on the screen, how they're moving, and just how
they're changing over the life-cycle of the user interacting with the screen.
Very low level information from user interaction.
That's the one we're talking about in this lecture.
3:49
All right.
So it's handled through call-backs.
Multi-touch events are delivered to subclasses of the UIResponder class.
Now, some examples of subclasses of the UIResponder class include UIView,
or our old familiar one, UIViewController, or the new
one that we're working with in this Games, Sensors, and Media course, called SKScene.
So we could implement these call-backs by putting them into our objects,
thus overriding the default implementation from the UIResponder class.
So we can put them into our code and receive them and
do something with them rather than just letting them go un-handled.
All right, so
call-backs will be delivered to you if you are a subclass of the UIResponder class.
4:42
And you override these methods.
So the methods are, touchesBegan, touchesMoved,
touchesEnded and touchesCancelled.
Each one of those call-backs is delivered to you,
the developer, with a set of UITouch objects.
One object corresponding to each placement of a finger onto the screen,
or what the system thinks is a finger on the screen.
And then also you get an event,
which has a full collection of all of the touches, even
though some of them may not be associated with the call-back that's being called.
Now, you're also only going to get those call-backs if the view
that has those methods overridden is visible.
You're also only going to get those call-backs if that view is touched.
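To make that concrete, here's a minimal sketch of an SKScene subclass that overrides all four call-backs. The class name GameScene and the print statements are just placeholders for illustration.

```swift
import SpriteKit

// A minimal sketch, assuming a hypothetical SKScene subclass named GameScene.
// SKScene inherits from UIResponder, so overriding these four methods
// lets us receive the raw multi-touch call-backs.
class GameScene: SKScene {
    override func didMove(to view: SKView) {
        // The same switch as the checkbox in the scene editor.
        isUserInteractionEnabled = true
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // One UITouch per finger that just landed on the screen.
        print("began: \(touches.count) new touch(es)")
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Only the touches that actually moved appear in this set.
        print("moved: \(touches.count) touch(es)")
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        print("ended: \(touches.count) touch(es)")
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Delivered when the system interrupts the touches, e.g. a phone call.
        print("cancelled: \(touches.count) touch(es)")
    }
}
```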
5:43
So to do that, for example with SKScene, when the SKScene is selected,
you can go over to the property view, and you can look, and
you can see that there are two check boxes right there, two blue boxes.
One is User Interaction Enabled, and the other is Multiple Touch.
So, to get call-backs, you're going to need to check User Interaction Enabled.
6:25
All right so the life cycle and the way these touches work is like this.
Imagine that you have an iOS device and nothing is touching it and
suddenly the user places three fingers down on the screen.
As the user places those three fingers down on the screen, the hardware and
the operating system will recognize those as touches,
will create UItouch objects associated with each one of those, and
will deliver a call-back to the object in which those touches occurred.
The call-back in this case will be a call-back to touchesBegan.
7:01
The parameter that is passed to touchesBegan,
which goes by the name touches, will have three elements in the set.
That callback will also receive an event parameter.
That event parameter, within it,
has all of the touches that are in action right now.
That also happens to be three.
So three touches began in the set and then three touches are in the event object.
7:27
If after that,
the user keeps one finger steady and sort of spreads out the two other fingers,
then a new call-back is going to be delivered as part of the lifecycle of
the touches.
That call-back is going to be touchesMoved.
This call-back's touches parameter is only going to contain
two UITouch elements in the set.
Those are the two touches that have moved.
Now there are three active touches in play right now on the screen.
But because this is an indication of which one has moved,
only two will be passed in the touches set.
If you would like access to all three of the currently active touches,
then you can look into the UIEvent object and look for
the complete set of touches that are there.
So the UIEvent parameter in this case will contain three touch objects.
All three that are down.
Two of them will be the same as the ones that just moved.
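To make that distinction concrete, inside an SKScene subclass the moved touches and the full set of active touches can be read like this. This is a sketch; the counts in the comments assume the three-finger scenario just described.

```swift
// Sketch: the `touches` parameter holds only the touches that changed,
// while the event's allTouches property holds every active touch.
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    let movedCount = touches.count                      // 2 in this scenario
    let activeCount = event?.allTouches?.count ?? movedCount // 3 on screen
    print("\(movedCount) of \(activeCount) active touches moved")
}
```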
The next step in the lifecycle is that the user may lift up one finger, leaving two
fingers down.
And in this case, a view that has implemented the call-backs will receive
the touchesEnded call-back. In this case only one touch has ended.
And so the touches parameter will contain a set that has one object,
one UITouch, the UITouch which just disappeared.
Now if you look in the UIEvent object for
the property that has the collection of all of the touches, you'll find that there
are three there, two of which are still in progress and one which has ended.
8:58
Then if the user moves that original touch,
a new touchesMoved call-back will be sent.
Now in this case, only one of those touches will be part of the touches set,
the touch that just moved, the one that was stationary before.
9:15
Now if we go and we look in the event parameter, we look for
the collection of all touches, we won't be sure how many touches are there.
We know that there will be at least two, the two that are still being held down.
We know that there was one other touch and that touch is ended,
and it's not clear whether or not that one will be there or
that one will be removed from the overall event.
Regardless of whether it's there or not, every UITouch has a property
which indicates which phase of the lifecycle it's in.
9:44
Began, moved, ended.
Finally, if all of a sudden, you have touches down and a call comes in,
you're going to get the fourth call-back, which is touchesCancelled.
An incoming phone call is one example of when touches get cancelled.
So in this case we'll have at least two
UITouch objects cancelled due to the incoming phone call.
And there's a question mark there, because there are other ways in which
your touches may be cancelled, a phone call being just one of them.
The touches that are cancelled will be passed as part of the set.
And then that UIEvent object may hold more than two UITouch objects,
because maybe one more is already in the ended phase.
So that's kind of an idea of how these touches progress over time.
And one thing that is interesting to note is that those UITouch objects
are persistent across the whole movement cycle.
So if you keep a reference to one of those UITouches and
the operating system detects that it moves, your reference will reflect that
movement even though you're keeping track of it outside of these call-backs.
We'll see that come into play when we do our case study example of
moving the paddle in Breakout.
10:55
Now the UITouch object itself,
within code, has a couple properties that are of interest.
First of all, it has a location.
We look at locationInView in order to find
the coordinates of the touch within the view's coordinate space.
We also have a property which is the view in which the UITouch has occurred.
That's helpful if we want to try and
figure out what the position of the touch is on the overall screen.
You can do that by calling methods on the view object.
Another property that the UITouch has is a timestamp and
that indicates when the touch occurred.
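These properties can be read inside any of the call-backs. Here's a sketch; note that in an SKScene, SpriteKit adds a location(in:) variant that takes a node, so we pass self.

```swift
// Sketch: reading a UITouch's location, host view, and timestamp.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    for touch in touches {
        let point = touch.location(in: self)  // coordinates in the scene's space
        let hostView = touch.view             // the UIView the touch occurred in
        let when = touch.timestamp            // seconds since system boot
        print("touch at \(point) in \(String(describing: hostView)) at \(when)")
    }
}
```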
11:34
The UITouch object has several qualities as well.
For example it has the phase, which can be whether the touch is in the begin phase,
the moved phase, the ended phase or the cancelled phase.
11:45
It has a force and that's new with 3D touch.
So this is a number and I double checked the documentation, but
I believe it's between zero and one.
It indicates how hard a user has pressed in the process of creating that
UITouch object.
You can also determine what is the source or what is the type of that UITouch and
the types in this case mean, is it a direct touch,
meaning a user has touched the screen?
Is it an indirect touch?
And this is new with iOS 9, indicating that the user has somehow touched
the screen but hasn't done it physically.
I'm not exactly sure what Apple has in mind for that, but some other way of
interacting with the screen that doesn't entail physically touching the screen.
And then finally, if you're familiar with the iPad Pro,
there's the ability now to specifically indicate a touch has been initiated
with a stylus, with a special stylus that indicates that kind of touch.
Last, one more property that may be of interest to you is
the radius of the touch.
And this kind of indicates how fine the touch is or how fat the touch is.
So if you touch with your thumb, for example, you get a very large ellipse,
and the major radius will tell you how long that ellipse is along its major axis.
And if you rotate your finger, you can draw that ellipse and see that it rotates.
So the major radius tells you how much of your finger is pressed, or
how little.
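Those qualities are all properties on UITouch. Here's a sketch of inspecting them; note that in Swift the stylus case is spelled .pencil, and that force is only meaningful on hardware that supports 3D Touch.

```swift
// Sketch: inspecting a UITouch's qualities inside any of the call-backs.
for touch in touches {
    print(touch.phase)        // .began, .moved, .stationary, .ended, or .cancelled
    if touch.maximumPossibleForce > 0 {
        // Normalized pressure, roughly 0...1 on 3D Touch devices.
        print(touch.force / touch.maximumPossibleForce)
    }
    print(touch.type)         // .direct, .indirect, or .pencil (stylus)
    print(touch.majorRadius)  // radius of the touch ellipse, in points
}
```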
So for complex, multi-touch gestures, ones that experience began, moved, and
possibly having fingers lifted up and put back down, coming and going, like I said,
the UITouch objects are persistent across multiple call-back events.
So when one touch has been initiated, if it moves, you can keep a reference to that
UITouch object, and you'll know you have the same one every time.
13:30
These complex multi-touch gestures,
especially ones where you're trying to detect some kind of unusual push or
flick or zoom or four-finger zoom or some sort of wiggle or something like that.
That's going to require some code in order to recognize that event because you're
going to have to calculate whether that event occurred over the course of several
different call-backs, and that can be a little bit tricky.
So care needs to be taken in keeping track of those objects,
especially if you're talking about an arbitrary number of touches.
And our breakout example is not quite as complicated because we're really only
going to keep track of one, the dominant touch,
regardless of how many fingers, maybe, are touching the screen.
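Here's a sketch of that single-dominant-touch approach, in the spirit of the Breakout paddle. The scene class and the "paddle" node name are hypothetical; the key idea is that UITouch objects persist, so we can hold a reference to the first touch and ignore all others.

```swift
import SpriteKit

// Sketch: tracking one dominant touch across the lifecycle call-backs.
class PaddleScene: SKScene {
    private var trackedTouch: UITouch?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        if trackedTouch == nil {
            trackedTouch = touches.first // adopt the first finger down
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Only react if our tracked touch is among the ones that moved.
        guard let tracked = trackedTouch, touches.contains(tracked) else { return }
        // Move a hypothetical paddle node to follow the finger's x position.
        childNode(withName: "paddle")?.position.x = tracked.location(in: self).x
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        if let tracked = trackedTouch, touches.contains(tracked) {
            trackedTouch = nil // our finger lifted; stop tracking
        }
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        trackedTouch = nil // reset to a clean slate on cancellation
    }
}
```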
14:06
If you do find yourself keeping track of many different
touches at one time in order to recognize complex gestures
take a look at the Apple Multi-Touch Event Documentation for
further reading because there's some good examples of best known practices for
how to keep track of multiple touches across that life cycle.
There are a lot of details to keep in mind.
For example, make sure that you always implement all four of those events if
you're going to implement any of them.
And in particular, in the case of the event cancellation method, make sure that
during cancellation you reset the state of your gesture to a clean slate,
because you never know if you're going to receive another cancel immediately.
But in any event,
you know that that whole gesture is going to begin again from the beginning.
And you don't want any strange intermediate states in the gesture you're
trying to keep track of.
If you handle these events in a class that is a subclass of UIView,
UIViewController, or UIResponder, for example SKScene, make sure
you implement all of the event handling methods even if they end up doing nothing.
You don't want to mix whether the superclass is handling it or
your class is handling it in this case.
And if you're doing it in descendants of these three classes, don't
call the superclass implementation of the method either.
15:24
Your code should take care of them completely and then exit.
However, if you handle these events in a subclass of any other UIKit responder
class, you don't have to implement all of them, and in the methods you do
implement, make sure you call the superclass implementation as well.
In our case, we're going to see the former,
where we want to implement all four and not make the superclass call.
Make sure you don't do any sort of rounding of the touch positions in
your handling, because you're going to lose precision.
And the reason why is because for
legacy reasons, touches occur in a 320 by 480 coordinate space.
But of course that's much lower resolution than what the screen can handle, and
the higher-resolution screens can actually register twice that, or 640 by 960.
And so in order to keep backward compatibility, rather than giving a number
that old code didn't recognize, it reports half-pixel distances.
So it may say you touched at 100.5, where the half pixel reflects the greater
screen resolution.
16:28
So in summary, tracking multiple touches can be a very important part of
a game's capability.
It can be part of a unique user interaction that you have in your game and
that flexibility is available to you.
Even simple interactions can leverage the multi-touch events
without doing a lot of work to keep track of them, as we'll see.
It's easy to get access to those touches but
you have to take a lot of care to make sure the interaction works well.
What we'd like to do next is implement that in the breakout code and
show you one example of using that call-back functionality in order to
generate some user interactions.
Thank you for your attention.
[MUSIC]