Posted by: bbannan | March 15, 2010

Survey methods – Allison

If a survey will be used to gather data from your mobile app users, it’s important that the survey is written and executed correctly so it provides user data that is useful in driving your design. According to Kuniavsky in “Observing the User Experience,” “the best tool to find out who your users are and their opinions are is the survey” (p. 303). Kuniavsky even states, “surveys can produce a higher degree of certainty in your user profile than any qualitative research method or indirect analysis of user behavior such as log files” (p. 304).  That is a big claim for the value of surveys in collecting user data.  Whether you need demographic information such as age or gender, or a user’s interests and opinions, a survey is the best tool for gathering this type of information.  That said, I do believe other methods, such as a focus group, can delve deeper into a user’s thoughts and provide more insight than a survey.

A survey needs to be written and executed correctly or it will fail, and tight deadlines rarely leave room for a second attempt. Most importantly, you don’t want inaccurate or limited data that can’t be used to make changes to your prototype.  As for choosing a type of survey, there are really two options: online and paper.  In “Measuring the User Experience,” Tullis and Albert believe that online surveys are more beneficial in “capturing data on more subtle designs”.  I believe they can sometimes be limited in the data you obtain, but for the most part they allow easy access at any time, which increases user response rates, and many online tools tabulate the results for you.  There can be disadvantages with online surveys, though.  In Kevin B. Wright’s article, “Researching Internet-Based Populations: Advantages and Disadvantages of Online Survey Research, Online Questionnaire Authoring Software Packages, and Web Survey Services,” he states there can be sampling issues, such as self-selection bias and coverage problems (users with access to a computer vs. those without), as well as the possibility of users providing inaccurate demographic information, since it’s self-reported data.

Once you determine your goals, the information you want to collect from your users, and your schedule, you can write the questions.  Kuniavsky mentions two types of survey questions: open-ended and closed-ended.  Kuniavsky doesn’t provide a ratio for good survey design, but in my experience I’ve made the majority of the questions closed-ended and included two or three open-ended ones.  This allowed users to quickly complete the closed-ended multiple-choice questions and then provide written feedback at the end of the survey.  Kuniavsky offers the following best practices for writing a survey:

  • Keep questions specific, precise, and relevant (e.g. “Have you ever used a mobile app?”)
  • Don’t phrase questions in a negative way (e.g. “Which site are you not interested in?”)
  • Avoid vague words in your questions, such as “sometimes,” “around,” and “any”
  • Make sure every user has an answer that applies to them (e.g. include a “check all that apply” option or an “other” choice)
  • Incorporate Likert scales.  In multiple-choice questions you will often see a Likert scale, which provides 3-7 options along an answer range (e.g. “very useful” to “not at all useful”); see the sketch after this list.
  • Use follow-up questions to expand on previous questions
  • Provide opt-out answers (e.g. “don’t know” or “none of the above”)
  • Make sure the questions follow a logical order and build off one another
  • Provide instructions before the survey and offer a possible reward for completing the survey
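
To make the Likert-scale and opt-out advice above concrete, here is a minimal sketch in Python of how one closed-ended question following these guidelines might be represented and checked. This is my own illustration, not from Kuniavsky; the field names, labels, and five-point scale are assumptions.

```python
# A minimal sketch of a closed-ended survey question following the best
# practices above: a specific question, a 5-point Likert scale, and an
# explicit opt-out answer so every user has a choice that applies to them.
# Field names and labels are illustrative assumptions, not a standard.

LIKERT_5 = [
    "Very useful",
    "Somewhat useful",
    "Neutral",
    "Not very useful",
    "Not at all useful",
]

question = {
    "id": "q3",
    "text": "How useful is the app's search feature?",  # specific and precise
    "choices": LIKERT_5 + ["Don't know"],               # opt-out answer
}

def is_valid_response(question, answer):
    """A response counts only if it matches one of the offered choices."""
    return answer in question["choices"]

print(is_valid_response(question, "Somewhat useful"))  # True
print(is_valid_response(question, ""))                 # False: a skipped question
```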

Lastly, Kuniavsky mentions a couple of times the importance of doing a pre-pilot survey.  Just like the real survey, the pre-pilot survey is given to a small group of people (around 5-10) to check whether their responses fit the questions you have written.  I think this is a great idea.  In my experience, when I’ve conducted surveys at work without a pre-pilot survey, the final responses (specifically responses to the open-ended questions) did not fit the questions as I had hoped.  Even the closed-ended questions were skipped or answered incorrectly, because I had not given an “opt-out option” (Kuniavsky, p. 319).  If I had run a pre-pilot survey, I could have fixed both the open- and closed-ended questions and received the responses I was looking for.  Instead, I had to disregard a large number of responses and was left with minimal and possibly biased data.
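
To illustrate the cleanup problem just described, here is a rough sketch of tallying responses to one closed-ended question and counting how many are usable. The answer data is invented; a blank string stands for a skipped question, which is what you get when no opt-out option was offered.

```python
from collections import Counter

# Invented answers to a single closed-ended question; "" marks a skipped
# question, the situation that arises when no opt-out option is offered.
answers = ["Very useful", "", "Somewhat useful", "", "Don't know", "Very useful"]

tally = Counter(a if a else "(skipped)" for a in answers)
usable = sum(1 for a in answers if a)

print(tally)
print(f"usable responses: {usable} of {len(answers)}")  # usable responses: 4 of 6
```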

Additional Resources:

Article: “Researching Internet-Based Populations: Advantages and Disadvantages of Online Survey Research, Online Questionnaire Authoring Software Packages, and Web Survey Services”

http://jcmc.indiana.edu/vol10/issue3/wright.html

Some Helpful Survey Design Information:

http://www.surveysystem.com/sdesign.htm

Online Survey Software (Free to Use):

www.surveymonkey.com

http://www.surveygizmo.com

www.sparklit.com (Kuniavsky Recommendation)

www.zoomerang.com (Kuniavsky Recommendation)

phpesp.sourceforge.net (Kuniavsky Recommendation)

Responses

  1. I know that at my last job, we probably could have conducted a pre-pilot survey much better than we did. Although we had people in our office check that the questions made sense, it would have been helpful to run those survey questions by actual people taking the class to gauge the type of response each question drew. I think many times we rush through the survey, when that is one of the most important places to focus. This is where we can see our work as Instructional Designers come to fruition with actual users of what we create. Good points here, Alison!

  2. You make a very good point that Kuniavsky echoes multiple times in his book. When we recently did some phone interview surveys, we did not do a run-through, even though the questions made sense to us. Without actually answering the questions and taking the survey ourselves (considering the questions build off one another and create a sort of story or dialogue), we ended up having problems. We skipped questions, or the interviewee was confused about what the questions meant. All were open-ended as well. It makes sense to do a pre-pilot survey.

  3. Our textbooks contain very little discussion of traditional survey techniques (e.g., telephone, in-person, and paper-mailed surveys). I think in-person surveys can save significant time when used properly. Although they have little use in our development and evaluation of learning systems, in-person group surveys can be useful and convenient.

    In-person group surveys are conducted by using group-administered questionnaires which can (1) ensure a high response rate and (2) provide instant clarification for questions about the questionnaires.

    The difference between a group-administered questionnaire and a group interview or focus group is that in a group-administered questionnaire, each respondent completes and submits a paper questionnaire without group comment.

    In a group interview or focus group, an interviewer facilitates the session. The focus group (1) works together, (2) listens to each other’s comments and (3) answers the interviewer’s questions. Notes are taken for the entire group and no one completes an individual questionnaire.

  4. Thanks for sharing your thoughts. I also agree that in-person group surveys are very useful. I’ve conducted more of these types of surveys in my job than online, paper, or phone surveys combined. I think “groupthink” really helps and encourages each individual group member to provide more details and information than they would if giving this information on their own.

  5. Alison, I was especially interested in your blog topic since, as it turned out, my group did surveys for both rounds of user research. What we found most challenging, in my opinion, was writing the questions so that we wouldn’t get potentially biased data, as you mention above. It was very difficult the first round to phrase our questions in ways that would not be misleading but would also be very clear and concise. This task seemed to be easier for us in round 2, once we got the hang of things.

    One thing you mentioned above that I found really interesting was the pre-survey pilot. I never would have thought to do this. I can see how it would be very helpful to send out a pilot to test your questions before sending the survey out to hundreds or thousands of people and getting bad data back.

    Thanks for the useful information!

