Posted by: bbannan | March 22, 2012

Thoughts on Maintaining Ongoing Relationships – Hiba

After reading Chapter 12 of Kuniavsky’s (2003) Observing the User Experience: A Practitioner’s Guide to User Research, I realized that there is a lot more involved in understanding people’s relationship to a product than simply administering one-time usability testing techniques.

As Kuniavsky (2003) suggests, “in nearly all cases, you want people to get comfortable with your product and learn its more subtle facets” (p. 367). However, Kuniavsky (2003) also notes that “people’s use of a product and their relationship to it change with time” (p. 367). This means that the product must support long-term use, not just one-time use, since products should be useful to their users over extended periods of time (months, years, etc.). Thus, as Kuniavsky (2003) states, “knowing a point on the curve is valuable [an understanding of people’s experience right now], but it doesn’t define the curve. Knowing the shape of the curve can help you predict what people will want and help you design an appropriate way to experience it” (p. 367).

So, I am going to reflect on a few of the techniques mentioned by Kuniavsky in this chapter to hopefully give us better insight into understanding behavior and attitudinal changes over time and maintaining ongoing relationships with end users.


Before diving into the techniques, I want to recap some of the things that Kuniavsky (2003) says “happen” as a user progresses from newbie to expert (pp. 368-369):

  • Mistakes are made.
  • Mental models are built.
  • Expectations are set.
  • Habits are formed.
  • Opinions are created.
  • A context is developed.

As Kuniavsky (2003) states, “all of these changes affect the user experience of the product and are difficult to capture and understand in a systematic way unless you track their process” (p. 369).


Diary Studies

The first technique I want to discuss is diary studies. As Kuniavsky (2003) states, “diary studies are, as the name implies, based on having a group of people keep a diary as they use a product. They track which mistakes they make, what they learn, and how often they use the product… Afterward, the diaries are coded and analyzed to determine usage patterns and examined for common issues” (p. 369).


Two very important factors need to be determined before conducting a diary study:

  • The duration of the study.
  • The sampling rate of the study.

As Kuniavsky (2003) suggests, “since people aren’t diary-filling machines, picking a sampling rate and duration that won’t overly tax their time or bore them is likely to get you better-quality information” (p. 370).
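To make the duration/sampling-rate trade-off concrete, here is a minimal sketch of how a study plan might be laid out. The function name, the four-week duration, and the two-entries-per-week rate are all my own hypothetical choices, not anything from Kuniavsky:

```python
from datetime import datetime, timedelta

def prompt_schedule(start, weeks, entries_per_week):
    """Spread diary prompts evenly across the study period."""
    interval = timedelta(days=7 / entries_per_week)
    return [start + i * interval for i in range(weeks * entries_per_week)]

# A four-week study at two entries per week yields eight prompts,
# spaced three and a half days apart -- frequent enough to catch
# changes, sparse enough not to overly tax the diarists.
schedule = prompt_schedule(datetime(2012, 3, 26), weeks=4, entries_per_week=2)
print(len(schedule))
```

Raising the sampling rate in this sketch shortens the interval between prompts, which is exactly the burden-versus-detail balance the quote above warns about.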

Furthermore, there are two kinds of diary studies: unstructured and structured (Kuniavsky, 2003, p. 371).

  • Unstructured diary studies are participant driven. They are loosely structured and focus on getting the diarists to relate their everyday experiences, track their learning, and report problems as they encounter them.
  • Structured diary studies resemble extended surveys or self-administered usability tests. They are more strictly structured, with the remote guidance of a moderator, and focus on having the diarists perform specific tasks and examine specific aspects of a product. Diarists are then asked to report their experiences in a predetermined diary format: 1) survey-structured diaries, 2) usability test diaries, or 3) problem report diaries (Kuniavsky, 2003, p. 375).
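Since the chapter says diaries are “coded and analyzed to determine usage patterns,” here is a minimal sketch of what that tallying step could look like. The entries and the code labels are invented for illustration, not an actual coding scheme from the book:

```python
from collections import Counter

# Hypothetical coded diary entries: each entry is tagged with the
# issue categories a (fictional) analyst assigned to it.
entries = [
    {"day": 1, "codes": ["navigation", "terminology"]},
    {"day": 3, "codes": ["navigation"]},
    {"day": 7, "codes": ["terminology", "habit-formed"]},
    {"day": 10, "codes": ["habit-formed"]},
]

# Tally how often each code appears to surface the common issues.
tally = Counter(code for entry in entries for code in entry["codes"])
for code, count in tally.most_common():
    print(f"{code}: {count}")
```

The ordering of the tally (most frequent codes first) is what lets the analyst see which problems recur across diarists rather than being one-off complaints.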

Advisory Boards

An advisory board “consists of a group of users who are tapped by the development team whenever the team feels that they need input from end users on changes to the product” (Kuniavsky, 2003, p. 385). Kuniavsky (2003) mentions that, unlike focus group participants, the members of the board need to have specific qualities, including the following (p. 387):

  • They have to know the task.
  • They need to be end users, at least some of the time.
  • They should be articulate.
  • They should be available.

Most major firms, organizations, and institutions have established advisory boards that provide valuable user input and advice when solicited. For example, Mason has an advisory board for each major school/unit within the institution, and the U.S. Department of Education has a number of advisory committees that provide advice on specific policy and program issues.

Beta Testing

Beta testing is a “time-proven quality assurance (QA) technique” and one that I am personally very familiar with (Kuniavsky, 2003, p. 391). Conducted through a structured feedback system, beta testing is done at the very end of the development cycle: after content, graphics, interface, functionality, and navigation have all been agreed upon and finalized, end users test the product and report any issues they encounter to the developers. It is the last stop before the product gets packaged and published (Kuniavsky, 2003, p. 391).
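As a rough illustration of what a “structured feedback system” might capture per issue, here is a sketch of a problem-report record. The fields and example values are my own assumptions, not Kuniavsky’s format:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical structured beta-test problem report; every field
# here is an assumed example, not a standard schema.
@dataclass
class ProblemReport:
    tester: str
    summary: str
    steps_to_reproduce: str
    severity: str  # e.g., "blocker", "major", "minor"
    reported_at: datetime = field(default_factory=datetime.now)

report = ProblemReport(
    tester="tester01",
    summary="Audio does not play in module 3",
    steps_to_reproduce="Open module 3, click the play button",
    severity="major",
)
print(report.severity)
```

Structuring reports this way (rather than collecting free-form emails) is what lets a team sort, deduplicate, and prioritize issues before the product ships.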


Beta testing can be an effective technique in both educational and corporate settings. For example, I read an article that described the analysis of beta testing results from a study on using on-line modules for professional development in action research at schools in Florida. Little and King (2007) stated that “to address the need for responsive, individual, and contextualized support during the implementation process of evidenced-based instructional practices by teachers to determine impact of instruction, an on-line module in action research has been developed, implemented, and researched using a beta testing process.” Within the beta testing framework, Little and King (2007) also noted that “most of the data collected during the beta-testing was the result of surveys, interviews, and focus groups.” The results of the survey indicated that “both the knowledge and perceptions of the content of action research, as well as the process of implementing action research, could be effectively completed using an on-line environment” (Little & King, 2007).

On a personal note, the design and development cycle the training team and I use at work also involves alpha testing and beta testing. We develop electronic learning tools, or simply computer-based training courses, for our Air Force client. We first meet with the client to agree upon the content, graphics, and audio script that will be included in the course. Then comes alpha testing, where we send the course to a select list of subject matter experts (SMEs), who review it for accuracy and consistency in terms of the content/processes explained, functionality, and navigation. We address all comments made by the SMEs and update the course based on their feedback. Then comes beta testing, where we send the course again to that list of SMEs for their feedback on any issues they encounter while reviewing the course. We fix those issues as applicable, record the audio, package the course, and publish it to their learning system environment… and voilà, it’s done!


Final Thoughts


In conclusion, I think the most important thing I took away from this chapter is best said by Kuniavsky (2003) himself: “it’s important to know how products change and how people’s relationships to them shift as time goes on so that the products can be designed to grow with the knowledge and needs of its users” (p. 394). I will definitely keep the techniques and best practices I learned in this chapter in mind as my ISD team (Group 5) continues to design our Museum on the Mall conceptual design prototype.


Kuniavsky, M. (2003). Observing the user experience: A practitioner’s guide to user research. San Francisco, California: Morgan Kaufmann Publishers.

Little, M., & King, L. (2007). Using on-line modules for professional development in action research: Analysis of beta testing results. Journal of Interactive Online Learning, 6(2), 87-99.

Mason – Volunteer Leadership – Advisory Boards:

U.S. Department of Education – Boards & Commissions:



  1. Great summation of the chapter. I would like to see if my team can incorporate a diary study, or even an advisory board consisting of our current volunteers; both would be great additions for round 2 testing.

    At my job we also use beta testing for our content before it is loaded to the LMS. This approach is also used for our technical upgrades. While the alpha and beta phases have their place, I like the other usability tests that occur at a point where there is more flexibility to change the end product. Unfortunately, my work environment is a bit restrictive about making changes, and oftentimes I feel like the usefulness of a product is diminished since it was created in a vacuum and deployed to a large audience who have never seen it before. For software development, I think advisory boards and other usability tests are great and help produce a good product. I do prefer the more restrictive alpha and beta tests for elearning content, since oftentimes we have a lot of upfront conversation before a significant amount of work is put into recording audio and building complex scenarios in Flash. All content is created in PowerPoint during the analysis phase and discussed with the SMEs and end customer before the course is developed enough to go through the alpha and beta stages. Hopefully, as time progresses, more flexible usability tests will be accepted in software design, and the current processes will change to reflect this new way of conducting business and creating an end product that is wanted and usable by the customer and the intended audience.
