And you should really think about how long the tasks will take.
If you're recruiting somebody and
you're asking them to do a ten minute task on their phone, that's quite different
from putting a system in the field and asking them to use it everyday for a year.
As you think about these things, think about your strategies for recruiting
participants, and also your strategies for compensation. If you are asking somebody,
perhaps a professional, to use your system every day for a year,
You need to compensate them for their time so that it's not a huge burden on them.
And lastly, I think it's really important to think about this idea of when the task
will be over.
I think this matters not just in the lab, but in the field as well.
But, let's say that the participant has finished the task that you set out for
them, but they still seem to be messing with the interface.
Maybe that's because they don't understand that they have actually
completed the task.
Will you stop them at that point, or will you let them continue until they
explicitly say, "I am done with this task"?
So all these things are really important to think about.
Overall, for these three concepts, for users and test setting, for methods and
metrics, for tasks and prompts, I would just say: make sure that these
are consistent with your goals for actually running that user study.
So you want to keep coming back to this idea of,
what do we hope to learn from this?
And are these questions going to help us get there?
Are these tasks going to help us get the information we need?
Is this the right metric for the information we need?
Is this the right setting for the information we hope to gather?
Are these the right users for us to be working with?