So those are all potentially helpful.
There are also some possible downsides to interactivity.
Some tools require plug-ins and specialized software,
which may make them less widely accessible.
And the increased use of interactivity may reintroduce social presence effects.
That is, by designing a questionnaire that seems more like a human than,
say, a paper questionnaire does, more like it's being administered
by a human interviewer,
we may inhibit honest responses to sensitive questions.
Let’s start with a discussion of progress indicators.
As I said, at face value this seems like it could be a positive
interactive feature to include,
namely providing respondents with feedback about how much of the task
they have completed.
Progress indicators can be graphical or just numerical.
The example on the bottom is kind of a mix.
The idea is that if respondents know how they're doing,
this will motivate them to finish the task,
and that it's the absence of this kind of feedback that makes the task
seem like it will never end.
So that's the question: do progress indicators have this positive
impact on completion? Do they reduce break-offs?
A number of experimental studies of progress indicators have been conducted.
And overall the conclusion is that progress indicators do not work this way.
They do not reduce break-offs, and they may actually increase them.
There's a meta-analysis by Villar and colleagues that
reviews quite a number of studies, and this is the general conclusion.
Progress indicators are especially likely to increase break-offs,
to do harm, with longer questionnaires.
The reason progress indicators seem to reduce completion,
especially with longer questionnaires, seems to be that
if the content of the feedback is that the respondent still has a long way
to go, then that actually is a deterrent.
So it's not just that the act of getting feedback is helpful and
motivates respondents to complete the task.
They're paying attention to what the feedback is telling them.
If the feedback tells them they've got a long way to go,
then in more cases than without that feedback,
they abandon the task; they break off.
One of the studies that demonstrated this quite clearly,
Conrad et al., manipulated how progress was calculated and displayed,
that is, what progress was shown at different points in the questionnaire.
So that's three different ways of doing this, plus a control condition with no
progress indicator.
In the constant-speed condition, the feedback showed that
as each page was completed, as each question was answered,
the amount completed increased in a linear way.
In what they call the Fast-to-Slow progress indicator, the display was
sort of rigged so that early progress seemed quite rapid.
That is, the feedback indicated respondents were advancing quickly
through the early items,
but then it slowed down for the later items.
And then the final condition was kind of the opposite:
progress accumulated very slowly over the course of the questionnaire
until the end, when it sped up.
They call that the Slow-to-Fast progress indicator.
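To make the manipulation concrete, here is a minimal sketch of how such progress curves might be computed, assuming a questionnaire with a known number of questions. The function name and the particular curve shapes are illustrative assumptions on my part, not the actual calculations used by Conrad et al.

```typescript
// Hypothetical sketch: three ways to map a respondent's true completion
// to the progress that is displayed. The sqrt/square curves are
// illustrative stand-ins for the study's Fast-to-Slow and Slow-to-Fast
// conditions, not the authors' exact formulas.

type ProgressStyle = "constant" | "fastToSlow" | "slowToFast";

function displayedProgress(
  answered: number,
  total: number,
  style: ProgressStyle
): number {
  const t = answered / total; // true fraction completed, in [0, 1]
  switch (style) {
    case "constant":
      return t;            // linear: displayed progress matches reality
    case "fastToSlow":
      return Math.sqrt(t); // rises quickly early, then slows down
    case "slowToFast":
      return t * t;        // crawls early, then speeds up near the end
  }
}

// Example: after 5 of 20 questions (25% actually done), the three
// styles would display 25%, 50%, and about 6%, respectively.
for (const style of ["constant", "fastToSlow", "slowToFast"] as const) {
  console.log(style, Math.round(100 * displayedProgress(5, 20, style)) + "%");
}
```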
So the results indicated that the Slow-to-Fast progress indicator,
where, again, respondents didn't see rapid progress until the very end,
if they were still in fact respondents at that point,
produced more break-offs than the Fast-to-Slow progress indicator,
almost 11% more.
The Slow-to-Fast progress indicator also produced
longer estimates of perceived survey length.
To respondents, the survey seemed to be taking longer than with
the Fast-to-Slow progress indicator, even though in fact it didn't take them longer.
So it really affected their perception of the task.
And the Fast-to-Slow progress indicator resulted in higher ratings of
how interesting the survey was than the Slow-to-Fast progress indicator.
So the way these indicators are designed makes a lot of difference.
The garden-variety progress indicator is linear, and
in fact the constant-speed progress indicator
did not produce reliably fewer break-offs than no progress indicator at all.
So the conclusion for
progress indicators is that the content of the information conveyed really matters.
Encouraging information helps, increasing completion.
Discouraging information hurts, promoting break-offs.
For long questionnaires it may be better not to provide progress indicators at all.
Yan and her colleagues found one situation
in which progress indicators increased completion rates:
when the invitation to participate in the survey promised
the task would be short, only 5 minutes, and the questionnaire was in fact short.
But when the questionnaire took more time than was promised,
progress indicators hurt.
So this suggests that only under kind
of atypical conditions are progress indicators helpful,
namely when respondents are told the questionnaire will be short
and in fact it is short.
And these are conditions under which they're probably least needed.
If the questionnaire is short and respondents are expecting it to be short,
then they really don't need any additional motivation to complete the task.
So progress indicators generally are not so helpful, it seems, and
they may be harmful.
There really is just this one
situation in which progress indicators seem to increase completion.
Let's turn to another interactive feature, known as constant sum or tally items.
This is a case where interactivity does seem to be beneficial overall.
So these are items in which the questionnaire requires
a set of answers to sum to a fixed total, like 100% or 24 hours.
In a study investigating the use of feedback about the sum
as it accumulates, Conrad and colleagues assigned respondents
completing constant sum items to one of four conditions:
what they call a concurrent tally, or running tally,
so that as each answer was entered, the tally increased;
a delayed message, shown only if the submitted tally did not equal 100%;
and a combination of the concurrent tally and the delayed feedback.
They also had a control condition with no feedback.
This is an example of the concurrent tally.
You can see down at the bottom there's a total,
which is just the sum of the answers above it,
and it increases as respondents enter their answers.
When respondents submit an answer that is what we call ill-formed,
that is, the components don't add up to 100%,
they're given a message after they've actually submitted the response.
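As a rough illustration, here is a minimal sketch of the two feedback mechanisms for a constant sum item whose parts must total 100%. The names, the target value's placement, and the message wording are hypothetical, not taken from the study's instrument.

```typescript
// Hypothetical sketch of concurrent vs. delayed feedback for a
// constant sum item. Assumes the answers must total exactly 100.

const TARGET = 100;

// Concurrent (running) tally: recomputed every time an answer changes,
// and displayed live beneath the component items.
function runningTally(answers: number[]): number {
  return answers.reduce((sum, a) => sum + (a || 0), 0);
}

// Delayed feedback: checked only when the respondent submits the page.
// Returns null for a well-formed answer, or a message for an ill-formed one.
function delayedMessage(answers: number[]): string | null {
  const total = runningTally(answers);
  return total === TARGET
    ? null
    : `Your answers total ${total}%, but they should total ${TARGET}%.`;
}

// Example: a respondent allocates percentages across three activities.
const answers = [40, 30, 20];         // sums to 90, not 100
console.log(runningTally(answers));   // 90 -> shown live as the tally
console.log(delayedMessage(answers)); // ill-formed -> message after submit
```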
So the results were that this kind of feedback improved, quote,
accuracy, by which we really mean the proportion of well-formed answers,
answers in which the components added up to the target, 100%.
Delayed feedback, getting that message that your answer was ill-formed, led to
slower responses than concurrent feedback, the running tally, or no feedback at all.
And that makes sense if you consider that respondents have
gone through the process of providing answers to as many of
those component questions as possible,
submitted them, and only then are told that the total was ill-formed.
In two follow-up studies,
the same authors assessed the impact on actual accuracy,
using direct measures of accuracy rather than just well-formedness.
They looked at the correspondence between respondents'
estimates of how much time they had spent on a variety of different activities
and published time-use estimates from
the American Time Use Survey.
And they made this comparison when the respondents were given concurrent feedback
or were not.
Then in another study they looked at respondents' estimates of
the duration of each section in the questionnaire they had just completed and
compared these with the actual durations.
And what they found is that feedback did improve accuracy by this more direct
measure in both studies,
although there was really no additional benefit from concurrent feedback;
even delayed feedback alone led to this benefit.
So in our next segment, we will talk about the delivery of definitions to
respondents so they can better understand the questions that are being asked.
And this can be done interactively, so that not every respondent gets a definition,
much as interviewers can do; we'll talk about that in the third lesson.
So see you in a moment.