Posted by: bbannan | April 18, 2012

Creating Reports and Engaging Presentations – Jennifer

When I hear the word “report” or “presentation” I think of facts, data, stories, photos, videos and bullet points that help me understand and remember the message the speaker is conveying.  In my job I’ve seen many presentations that served their purpose and held their audience’s attention.  I work for a government locality and one of my responsibilities is to videotape training seminars for county agencies.  I’ve recorded at least a dozen training seminars and the ones I remember stood out for a reason.

A few years ago I recorded an eight-hour presentation on gang prevention for public safety employees, social workers and school officials, and I remember almost every detail of it!  Yes, I know most people would be interested in listening to a topic on gang prevention, but if you think about it, eight hours is a long time for people to be listening to a bunch of talking heads. Let me tell you, they found a way to keep everyone on the edge of their seats. They had videos with interviews of gang members and videos of gang initiations, and PowerPoints with photos, graphs, and bullet points with facts and stats.  They even brought in Samuel Logan, author of “This Is for the Mara Salvatrucha: Inside the MS-13, America’s Most Violent Gang.”  The point is they knew their target audience; they presented information that addressed the root of the problem and possible solutions that could be used to prevent youth from joining gangs.

I know Chapter 17 focuses on creating reports and presentations for products, but a lot of the techniques that I saw in the gang prevention seminar could be used in a user research presentation, or any presentation for that matter.  Below is a list of Do’s for creating user research reports and presentations that I compiled from various resources.

Report Do’s

Content: When organizing the findings of a product study for a user research report, keep the “rule of three” in mind: Must Know…Should Know…Nice to Know. This will help you stay on topic and communicate all the important elements that stakeholders will need to consider before they continue to move forward with the development of a product.

Parts that should be included in your report are: executive summary, method(s) for data collection, problems/limitations of the study, user profile and conclusion.

Executive Summary: In a few sentences describe the participants, data collection method and results.

Method: Explain the type of method you used to collect data from the participants of your study (e.g., surveys, interviews, focus groups, contextual inquiry, usability testing).

Problems/Limitations: State the problems and limitations you had while collecting data from participants.  Being honest will help keep a company from going in the wrong direction when developing a product.

User Profile: Give a brief description of the participants and a few quotes from people who participated in the product study.  This will help humanize your report and justify your findings.

Conclusion: The conclusion should sum up the main points of the report; this will help the target audience remember key points.    

Six Data Points for User Research Documentation

http://userexperience.evantageconsulting.com/2009/09/user-research-documentation/

Format: Based on your audience’s needs, choose an appropriate format for your report. It can be a formal written report, an e-mail, an in-person presentation, a web-based presentation, or a video conference call.

Reporting Software Tools

http://www.capterra.com/reporting-software?gclid=CO-gy4u8vK8CFWy-tgodz2UakQ

Audience: Develop different aspects of the same report for different members of the target audience.  This will keep the target audience from feeling patronized or confused.  Personal note: While teaching a video production class for a group of K-12 teachers, I started using terminology that some of the teachers were not familiar with, and I got called on it. I had a woman yell out, “Excuse me Miss, I don’t even know how to use my cell phone; what makes you think I know what a waveform is?” I felt so bad; after that, I started incorporating meaningful learning to help explain basic video production skills.

Presentation Do’s

Themes: Using a black background and yellow font as your theme color for your PowerPoint presentation is not the best option! I’ve seen it, trust me it doesn’t work. Choose themes that make your notes and visual aids easy to read and understand. Oh, and another pointer…when you insert graphs and charts…make sure they are legible and clearly labeled.  You have people like me who have really poor vision.

12 Tips for Creating Better PowerPoint Presentations

http://www.microsoft.com/atwork/skills/presentations.aspx

A/V Equipment: If possible, check to make sure any necessary equipment works in the room where you are planning to present.  This will save you from spending time fixing problems when you could be presenting.

Speaker: Please do not read your NOTES! There is nothing worse than someone reading their PowerPoint; instead, put the presentation into your own words and talk to the audience.  A PowerPoint should serve as an outline and provide key words or facts you want your audience to know and remember.

 

Practice:  Rehearse and time your presentation.  This will let you know whether you have enough time to share your results and ideas and answer questions from your target audience.

Toastmasters

http://www.toastmasters.org/

SMILE: This is my personal tip on presenting.  I have a dance instructor who always yells at us if we’re not smiling while we’re dancing. Afterward he imitates us (it’s funny) and tells us that an audience isn’t interested in watching a bunch of zombies dance around the stage (unless it’s Michael Jackson’s Thriller video, of course).   Anyway, the same thing goes when you’re talking to a group of people.  Smiling will help decrease your anxiety and make you more approachable when it comes to Q&A.

 

Q&A: Make sure you are listening to your audience’s questions and answer them to the best of your ability.  If you can’t answer their questions, make sure you follow up with the requested information.

Resources

Good and Poor Examples of Executive Summaries

http://unilearning.uow.edu.au/report/4bi1.html

12 Tips for Creating Better PowerPoint Presentations

http://www.microsoft.com/atwork/skills/presentations.aspx

18 Tips for Killer Presentations

http://www.lifehack.org/articles/communication/18-tips-for-killer-presentations.html

Presenting in Front of the Audience Best Practices and Techniques

http://www.powerpoint-presentation-power.com/presenting-in-front-of-the-audience-best-practices-and-techniques.html

13 Tips to Zap Your Butterflies When Speaking in Public

http://www.lifehack.org/articles/lifehack/13-tips-to-zap-your-butterflies-when-speaking-in-public.html

Toastmasters

http://www.toastmasters.org/

Corporate Presentation Advice

http://www.presentationmagazine.com/corporate-presentation-advice-7605.htm

References

Dube, S. (2009, September 1). Six Data Points for User Research Documentation. Retrieved from User Experience Blog: http://userexperience.evantageconsulting.com/2009/09/user-research-documentation/

Kuniavsky, M. (2003). Observing the User Experience: A Practitioner’s Guide to User Research. San Francisco: Morgan Kaufmann Publishers.

Young, S. H. (n.d.). 13 Tips to Zap Your Butterflies When Speaking in Public. Retrieved from Stepcase Lifehack: http://www.lifehack.org/articles/lifehack/13-tips-to-zap-your-butterflies-when-speaking-in-public.html

Young, S. H. (n.d.). 18 Tips for Killer Presentations. Retrieved from Stepcase Lifehack: http://www.lifehack.org/articles/communication/18-tips-for-killer-presentations.html

 

 

Posted by: bbannan | April 18, 2012

Achieving User-Centered Design – Kate

Over the course of our Instructional Design and Development program, we’ve learned a lot about user-centered design and how to best design our teaching (or products) with the experience of the end user in mind. There are many benefits to user-centered design for a corporation – efficiency (less product redesigning), reputation (positive experiences mean more word-of-mouth advertising), competitive advantage, user trust, and profits (Kuniavsky, 2003).  In Chapter 18, Kuniavsky discusses user-centered design in the context of corporate culture. Hill and Jones, as cited in Lund, describe corporate culture as the “specific collection of values and norms that are shared by people and groups in an organization and that control the way they interact with each other and with stakeholders outside the organization.”  We, as instructional designers, can learn all we want about user-centered design and research, but this knowledge does not improve our processes unless the corporate culture recognizes the value of and supports these techniques. Kuniavsky more eloquently states, “Unless the benefits and techniques of user-centered design and research are ingrained in the processes, tools, and mind-set of the company, knowledge will do little to prevent problems” (2003, p. 505).

The concept of a user-centered development process means designers and engineers need to think not about how to make the product but about how the product will be used and how to satisfy the needs of the users. As Kuniavsky states, the benefits of user-centered development are too often seen as inconsequential rather than as vital parts of the design process, and shifting this corporate culture towards user-centered design is often difficult. He recommends several ways to shift towards this design philosophy.

  • Introduce the change gradually to avoid major disruptions to how the organization currently works
  • Spend some time doing internal discovery on the current process (formal or ad-hoc) – this allows you to formulate your strategy better
  • Start small
  • Choose your method of attack: find an executive to institute a top-down process change, or float your ideas around informally so people get used to them before a formal introduction
  • Involve your stakeholders early – getting them in on the research is an easy way to sell them on the value and effectiveness of user-centered design
  • Show results with useful and convincing data and tailor your presentation to your audience

A real-life example of this corporate change was given by Arnold Lund of Microsoft Corporation. In his article he describes his experience trying to shift corporate culture towards a more user-centered design as creating “a virtuous cycle of self-reinforcing activity whose impact grows as it operates and virally spreads across the organization.” The initial goal of his team at Microsoft was to set a vision and direction built on user needs and desires. His team began by defining a user-centered version of the waterfall process (planning, requirements, design, development, deployment, and maintenance) that was used by their IT department. The team overlaid user-centered design activities onto each phase, making the process start with understanding the users, putting this understanding into a design, then testing, improving, and repeating the cycle. They also implemented UX training in the organization. The training was designed to “enable teams without UX to do a better job in what they designed; would enable those taking the classes to partner with UX people more effectively; and would help UX people to scale their impact and design direction beyond the simple number of UX people on a project” (Lund, 2010).

Lund also discusses leadership as an important factor in the culture shift towards user-centered design. The UX team should not be seen as a service team but rather take a leadership role—to develop a vision, inspire, motivate, equip and educate. “These leadership opportunities become the new inspiring examples of the vision from which we harvest assets and best practices that we drive through the organization and the cycle begins again” (Lund, 2010).

I found Lund’s article a very interesting read and one that put another perspective on ideas for cultural change as it relates to user-centered design. I hope you find it interesting as well!

Resources:

Kuniavsky, M. (2003). Observing the user experience: a practitioner’s guide to user research. San Francisco: Morgan Kaufmann Publishers.

Lund, A.M. (2010). Creating a user-centered development culture. Interactions 17(3): 34-38.

 

 

Posted by: bbannan | April 17, 2012

Chapter 14 – Competitive Research

Kuniavsky’s chapter on competitive research was enlightening for me.  Until now, I had thought of competitive research simply as the process of analyzing your competition’s products’ features and capabilities in order to make your products better.  It seems obvious to me now, but I hadn’t thought about using Kuniavsky’s approach of conducting user research with the competitor’s products, employing the various techniques outlined throughout Kuniavsky’s book – interviews, usability tests, surveys, etc.

This competitive research approach makes a lot of sense to me now, as my team engages in its own user research.  Discovering what a user or a potential user has to say about a product’s features – positive or negative – is much more powerful than just assuming what works or not from your own, somewhat biased, point-of-view.

According to Kuniavsky, a major benefit of competitive research is that it “ignores assumptions and/or constraints under which [a] product was created” (2003, p. 420).   User research with your own product could potentially be unbiased, but as a vested stakeholder in the entire design process, it can be difficult to be non-judgmental when considering a user’s perceptions about why your product is the way it is.  A user’s thoughts may be brushed off as irrelevant because they don’t match your goals or because they conflict with constraints in your design or development process.  With a competitor’s product, you probably have very little knowledge about the conditions under which that product was created, so you can keep a more open mind about the user feedback.

My team is not using competitive research, so I don’t have first-hand experience, but I still feel it would be a useful approach for any user experience research project.  For our project specifically, it could help us (1) see what features or approaches are already working in other applications (that we could borrow) and (2) see what content users already like, so we can make changes to our application, leading it to address content that is either currently unavailable in the market or is currently poorly presented.

In order to take advantage of competitive research, it’s important to accurately determine your competition.  Levinsohn and Feenstra identify two main factors that determine a product’s competition – whether the physical characteristics are similar and whether consumers find those physical characteristics relevant (1990, p. 201).

Desarbo, Grewal, and Wind take it a step further by defining market structure (the competition) “as the set of products judged to be substitutes within those usage situations in which similar patterns of benefits are sought…” (2006, p. 104).  This says that a competitive product isn’t necessarily one that has similar characteristics, but one that has similar benefits, or satisfies similar needs.  In this context, a competitor to an augmented reality tour could be the physical tour, or even a book about the topic, not just other mobile applications.

Desarbo, Grewal, and Wind also point out that cultural norms and brand prevalence may influence a user’s perceptions of products (2006, pp. 103-104).  So, even though a user states that they really like a product’s features, it may be that they simply identify with the brand.  This could be easy to fix by hiding the identity of the products, but, especially in usability testing, a particularly prevalent brand may be recognized by the users anyway.  So, even though a major benefit of user experience competitive research is that you get an actual user’s perspective rather than your own, that user perspective may still be biased.

However, within our context, I find this competitive research approach to be very useful because it forces us to think beyond the features of our application as we consider what needs we are actually satisfying.  For my team’s project, Museum on the Mall, we don’t want to just have the best augmented reality application available; we also want to be a viable alternative to someone actually visiting a museum, or sitting at home exploring a museum’s collection via the Internet, or reading a book about dinosaurs.  We want to connect directly with our users’ needs.

References:

Desarbo, W. S., Grewal, R., & Wind, J. (2006). Who competes with whom? A demand-based perspective for identifying and representing asymmetric competition. Strategic Management Journal, 27(2), 101-129. doi: 10.1002/sm.505

Levinsohn, J. & Feenstra, R. (1990). Identifying the competition. Journal of International Economics, 28(3-4), 199-215.

Kuniavsky, M. (2003). Observing the user experience: A practitioner’s guide to user research. San Francisco, CA: Morgan Kaufmann Publishers.

Posted by: bbannan | April 11, 2012

Innovation and Mash Up User Testing – Tangier

Not every project is the same, and not everyone perceives an event the same way.  Kuniavsky expands upon this level of thinking in Chapter 16, where he summarizes all of the different methods of usability testing found throughout the book but then gives us the freedom to go out and ‘try what works’.  Every situation is different, so following a prescribed process every time will not produce the desired outcome.  We must not be hesitant to step outside the usability testing box in order to get the best possible results and feedback.  Innovation doesn’t come from doing the same thing the same way over and over again; it comes from the willingness to fly by the seat of your pants sometimes until you reach the end goal with a product or website that is used and liked by your audience.

Kuniavsky delves deeper into how you can make these tests fit your research.  What if you want to do a focus group but the participants aren’t likely to be in the same physical room?  Then you should consider virtual focus groups using a conference call line or an online virtual meeting room.  The concepts of nominal and friction groups are also introduced in this chapter.  Both methods are mashups of the focus group with a different take on how it’s conducted.  In the case of nominal groups, participants are required to provide a written comment on the product or service being evaluated before they join the focus group.  Providing their written comments reduces the chance of a dominant member swaying everyone’s opinions and ‘group think’ taking over (Kuniavsky, 2003).  Friction groups, by choosing participants based on a single differentiating factor, ensure that there will be a healthy amount of discussion and that all possible options will most likely be touched on.

Eye tracking and parallel research are also two alternative methods for user testing.  Eye tracking literally tracks the movements of the center of the eye to see where on a website the user’s gaze stays the longest.  It can reveal the best placement for advertisements, text and anything else you wish to know in order to place important information where it will be seen by the most people.  Parallel research is an interesting method: it has two separate research teams working with the same data set but using different methods for usability testing and analysis.  This method cuts down on bias throughout each phase of the research process by running research in two different ways.  It can add confidence to your results if the concurrent research projects produce similar outcomes.

Participatory design is something that I wish we could implement at my job.  Basically, you involve your end users in the design process at the beginning to ensure that their perspectives are included.  This can cut down on situations where a program is created in a vacuum but is not usable by the end users it was intended for.  Kuniavsky sees this as a great way to create solutions for end users, but there is a risk that a board with limited vision may miss key factors that would increase the usability and likeability of a product.  Oftentimes at my job, we make an update to the LMS that we think, from a program management standpoint, is needed, but when the change reaches our end users, the point is sometimes missed.   I hope to implement this type of testing when my company eventually updates the user interface of our LMS.  Getting user input at the beginning will help us to create an interface that makes sense for our entire population.

Usability tests can also be combined to fit your needs.  Kuniavsky looks at combining focus groups with diaries and surveys, and usability tests with observational interviews, log files and task analysis.  These combinations can be used depending on your situation and also where you are in the design process.  Krueger and Casey (2009) researched the combination of the focus group with another testing method.  They found that oftentimes the focus group is the impetus for a survey, since it provides some basic information for questions that expand the researchers’ current realm of knowledge.  In my group, we are looking at doing a focus group for the second round of testing.  Since we are looking to have brand new participants, adding a survey to this portion of testing would be a good idea to gather demographic and other related information.  Although we are getting their feedback in the focus group, the survey will help us understand their background and possibly provide a view into how they reason.

 

References

Krueger, R. & Casey, M. (2009). Focus Groups: A Practical Guide for Applied Research. Thousand Oaks: SAGE Publications.

Kuniavsky, M. (2003). Observing the user experience: a practitioner’s guide to user research. San Francisco: Morgan Kaufmann Publishers.

 

I’ve learned over the years that if you want someone to pay attention to your message, you get the important details out first.  I find this is often an issue in emails.  People go on about all the small details, and the point that they really care about is buried somewhere in the middle, so you miss it when you skim through the bulk of the minutiae.

While this chapter focuses on using already published information and contracted professionals to help with data collection, I really felt that this point is relevant in every area of project development.  If you always keep it in the back of your mind, you’ll work smarter, so more of your time and energy can be devoted to getting to the essence of the project.

 In the world of user research I think it’s unlikely that you’ll find an exact match for what you need in data that has been previously published, but this can still be useful in helping to narrow your focus.  It seems to me that using contract researchers is a more viable option for getting relevant data in our arena of usability testing and user research.

Some Key Points to Remember When Deciding to Use Contract Researchers

Timing: Make the decision early in your process.  The research will still require a considerable amount of time, and you don’t want to get bogged down because you waited too long to get things rolling.

Expectations: Setting reasonable expectations is crucial in so many areas of research (not to mention life in general).  Unrealistic, unreasonable, or incongruous expectations between you and your contractors are likely to result in a waste of time, energy, and money.  Understand that they are going to have different perspectives, which you should use to enrich your knowledge, not replace it.

Selection: Choose your contractors wisely.  They are likely to have general areas of expertise.  This can be very helpful for providing you with assistance in selecting a solution that is more likely to be effective in a given situation.  Whether you use a formal RFP (Request for Proposal) method or an informal email/referral system, spend your effort here on the front end to ensure you get what you need on the back end.

Summary of Pros and Cons to using Research Contractors

Pros

  • Save Time
  • Save Money
  • Save Energy
  • Get data you can’t get on your own
  • Outsiders provide a high level of perspective unlike internal personnel
  • Provides a high level overview that can be used to provide focus for internal research

Cons

  • Data collection must be closely monitored to ensure it is trustworthy
  • Data collection must be closely monitored to ensure it is appropriately targeting your needs
  • There is a potential for interpretation bias using personnel who are less familiar with the content area
  • There may be a greater need for management of outside resources

The bottom line is that contractors are not miracle workers and using them does not necessarily make data collection easier.  You will still need to be closely involved in the process to ensure the data is usable.

 Remember: You Don’t Need To Reinvent The Wheel.

 Additional Resources:

American Education Research Association

http://www.aera.net/EducationResearch/tabid/10065/Default.aspx

This is an interesting site with a lot of information.  What drew me to it in relation to using contractors and previously published information for usability research is their DissertationGrants competition which “seeks to stimulate research on U.S. education issues using data from the large-scale, national and international data sets supported by the National Center for Education Statistics (NCES), NSF, and other federal agencies, and to increase the number of education researchers using these data sets.”

National Center for Education Statistics

http://nces.ed.gov/

As stated on their website: “The National Center for Education Statistics (NCES) is the primary federal entity for collecting and analyzing data related to education.”

This is a link to a paper that explores research contracting.

http://www.sshrc-crsh.gc.ca/about-au_sujet/publications/consultations/knowledge_transfer-transfer_connaissances_e.pdf

 The Outsourcing Institute

http://www.outsourcing.com/content.asp?page=01b/articles/intelligence/oi_top_ten_survey.html

This group is focused on business process outsourcing instead of research, but a lot of the information is applicable.  This in particular is a link to some “Top 10” lists related to successful outsourcing.

Reference:

Kuniavsky, M. (2003). Observing the user experience: a practitioner’s guide to user research. San Francisco: Morgan Kaufmann Publishers.

Posted by: bbannan | March 28, 2012

Log Files, Caches and Cookies – Heather

When reading Chapter 13, the section about log files seemed very relevant. Right off the bat I thought of a day when I was browsing women’s apparel at Macy’s (at the mall, not online) and this sort of dopey man was following a woman around asking her things like, “Where do you think we should put this line? What type of person would like this dress?” The woman actually tried to describe the kind of woman who would wear the dress, looked around and pointed out a customer who fit the profile. They were trying to identify where to place the items in the store based on the types of shoppers who were browsing specific sections.

Karl Groves of User-Centered Design, Inc. (2007) points out that “server log files are inappropriate for gathering usability data. They are meant to provide server administrators data about behavior of the server, not the behavior of the user.” While Kuniavsky agrees and points out that log files do have their problems, he goes on to state that they are also very helpful in identifying the types of moves you make on a website by “logging” where people go and what they look at. Typical access log elements include things like IP addresses, date and time, and the type of request made. All of this can be very helpful in determining what a user wants to see on a site.
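To make those elements concrete, here is a minimal sketch (the log line, pattern, and field names are illustrative, not taken from Kuniavsky or Groves) of pulling the typical access log elements out of a single Common Log Format entry in Python:

```python
import re

# One illustrative line in Apache's Common Log Format (made up for this example).
line = '192.0.2.10 - - [27/Mar/2012:14:05:12 -0400] "GET /exhibits/dinosaurs HTTP/1.1" 200 5123'

# Captures: host, timestamp, method, path, status code, bytes sent.
pattern = r'(\S+) \S+ \S+ \[(.*?)\] "(\S+) (\S+) \S+" (\d{3}) (\d+|-)'
match = re.match(pattern, line)
if match:
    host, timestamp, method, path, status, size = match.groups()
    print(host, timestamp, method, path, status, size)
```

Each of these fields answers a different question about the visit (who, when, what was requested, and whether it succeeded), which is exactly the kind of raw material a log analysis starts from.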

Imagine the dopey man following you around Macy’s documenting everything about your shopping experience…you just wouldn’t allow it. When it comes to the online experience, though, we sometimes get annoyed by features that raise privacy concerns, but for the most part we all welcome them if they save us time and money in the long run.

Cache

According to Kuniavsky (2003), “the most severe problems come from the caching of web pages.” The “problems” he is referring to concern the effect caching has on tracking user data. The term “caching” refers to local storage of remote data, designed to reduce network transfers and increase download speeds. Although caching is great from a user perspective, to the log analyzer it only creates problems. Because there may never be a connection between the user’s computer and the site’s server in order to fulfill the user’s request and have the page received, “web servers tend to translate access to cached pages incorrectly, thus making all site traffic completely inaccurate” (Groves, 2007). As a user I am on the fence.  I have found that caching can either be extremely helpful or completely sabotage what I’m working on when I am specifically looking for an update to the web page.
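To see why a cached page never shows up in the log, it helps to look at the response header that controls the behavior. Below is a minimal, hypothetical Flask sketch (the route and header values are invented for illustration) showing the trade-off between caching for the user and completeness for the log analyzer:

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/exhibits/dinosaurs")
def dinosaurs():
    resp = make_response("<h1>Dinosaurs</h1>")
    # Let browsers and proxies reuse this page for an hour. Repeat visits during
    # that hour never reach the server, so they never appear in the access log:
    # great for the user, a blind spot for the log analyzer.
    resp.headers["Cache-Control"] = "public, max-age=3600"
    # The opposite trade-off would be to force revalidation on every visit so
    # each view is logged, at the cost of extra requests:
    # resp.headers["Cache-Control"] = "no-cache"
    return resp

if __name__ == "__main__":
    app.run()
```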

Cookies

Session cookies can be used to gather information about a user’s navigation through a website. It seems that almost every site these days uses cookies for some purpose. Session cookies tend to have short expiration dates, usually minutes to hours. Identity cookies have longer expiration times and can stretch over multiple sessions.
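As a rough illustration of that difference, here is a small Python sketch (the cookie names and lifetimes are invented, not from Kuniavsky) using the standard library to emit a short-lived session cookie and a long-lived identity cookie:

```python
from http.cookies import SimpleCookie

cookies = SimpleCookie()

# Session cookie: expires quickly, just enough to tie together one visit.
cookies["session_id"] = "a1b2c3"
cookies["session_id"]["max-age"] = 30 * 60          # 30 minutes, in seconds

# Identity cookie: survives for months, so the site can recognize a returning
# visitor (and, as in the shopping-cart story below, remember what they left behind).
cookies["visitor_id"] = "u-98765"
cookies["visitor_id"]["max-age"] = 180 * 24 * 3600  # roughly six months

# Each entry renders as a Set-Cookie header the server would send to the browser.
for morsel in cookies.values():
    print("Set-Cookie:", morsel.OutputString())
```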

To continue my shopping theme, I’m reminded of visiting a site recently that alerted me that I had items in my shopping cart. To my surprise I added the items several months ago. I thought to myself, “I really need to delete my cookies more often.” This is definitely an identity cookie.

One thing I think could be extremely useful in the cookies category is the clickstream. “Clickstreams tell you the order of pages visited in a session, which specific pages were accessed, and how much time was spent on each page” (Kuniavsky, 2003). I think this concept would be a great help in Group 3’s second round of user testing. We’ve had varying results from the first round, and I think it would be interesting to track exactly what path the user independently takes once the site is functional. Of course our prototype will only function as a website, rather than an Xbox Kinect program, but it would still be beneficial to our research.
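For anyone curious what that tracking might look like in practice, here is a tiny sketch (the pages and timestamps are made up) that reconstructs a clickstream and time on each page from a session’s ordered page views:

```python
from datetime import datetime

# Hypothetical page views for one session, as (timestamp, page) pairs
# pulled from a log or analytics export.
views = [
    ("2012-03-27 14:05:12", "/home"),
    ("2012-03-27 14:05:40", "/exhibits"),
    ("2012-03-27 14:07:05", "/exhibits/dinosaurs"),
    ("2012-03-27 14:09:30", "/quiz"),
]

parsed = [(datetime.strptime(t, "%Y-%m-%d %H:%M:%S"), page) for t, page in views]

# The clickstream is simply the ordered list of pages visited...
print(" -> ".join(page for _, page in parsed))

# ...and time on each page is the gap until the next view
# (the last page's duration is unknown without an exit event).
for (t1, page), (t2, _) in zip(parsed, parsed[1:]):
    print(f"{page}: {(t2 - t1).seconds} seconds")
```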

References

Groves, K. (2007, October 24). The Limitations of Server Log Files for Usability Analysis. Retrieved March 27, 2012, from: http://www.boxesandarrows.com/view/the-limitations-of

Kuniavsky, M. (2003). Observing the user experience: a practitioner’s guide to user research. San Francisco: Morgan Kaufmann Publishers.

Posted by: bbannan | March 26, 2012

Customer Feedback and Log Analysis – Carmina

In Chapter 13, Kuniavsky (2003) focuses on the concepts of customer support and log analysis for products that are “live” (being used day-to-day by consumers).  Customer support includes the processes of supporting users while they use a product, and the data that comes out of that support.  Log analysis refers to gathering detailed data about how users are using the product via techniques such as tracking log files from websites.  He notes that while the usability testing that happens during design (which is what we’re focusing on this semester) is useful, it doesn’t actually get to the heart of real problems our users face.  The issue, Kuniavsky explains, is that the foundation of the data from usability testing is not actual users – it’s survey and focus group participants.  He notes that if you really want to improve your product long-term, you must eventually field it and collect information about how your real users are actually using the product in their daily lives.  Some of the benefits of this that Kuniavsky points out on page 396 are:

  • You get a sense of your users’ real mental models – how they talk and think about your product
  • You learn about users’ expectations, and how your product does or does not meet them
  • You learn about user frustrations
  • Through the data you gather on the points above, you are able to target future builds or enhancements for your product

The two methods Kuniavsky reviews in Chapter 13 at first glance seem better suited to the commercial world of product development.  I think he gets to a very important point, though, which is not new to us as instructional designers: you must continually evaluate your product after it has been made available to your users.  We’re all familiar with the many ways of doing this in the instructional design world – student evaluations, assessment results, implementation of the four levels of the Kirkpatrick Model, etc.

I have the most professional experience with the customer support techniques Kuniavsky reviewed.  My experience, however, has ranged greatly depending on the level of control I (or my company) have had over the methods with which I can gather that data.  I worked for many years at a company that developed and sold training courses for the general public.  At that company, I had a lot of customer support data at my disposal that I was able to regularly analyze and use for course maintenance and improvement.  As Kuniavsky mentions in the chapter, the amount of data can certainly be overwhelming.  My courses (which were instructor-led) were delivered hundreds of times per year, with 15 – 30 students in each session.  Part of my job was to work through the incredible amount of data from those course evaluations (and comments gathered by actual customer service reps) to identify trends and then use those trends to plan curriculum improvements.  I found Kuniavsky’s description of the coding process interesting, because I think my co-workers and I did that without really knowing it.  We would often categorize the comments by content area, activity type, etc. to try and narrow down on the trends as much as possible.

In contrast, more recently I have worked on contracts developing courses that are essentially handed to the customer for implementation and ongoing monitoring.  In these projects, if the customer hasn’t requested ongoing data analysis and support as part of the contract, I’m not able to provide it.  What often happens is that the data is gathered by the customer (usually via course evaluations), and then that data is handed to the team responsible for course improvements (which sometimes is my team, and sometimes is not).  Without direct control or influence over how the “customer support data” is collected in these cases, I sometimes feel like my ability to target the course improvements is limited.  One way of tackling this issue is to ask for the raw data that has been gathered so that my team can pick up at the coding and analysis steps — making sure we’re able to draw out the trends relevant to the project, and not just working with someone else’s summary of the data.

The data gathering techniques Kuniavsky describes for log analysis are only relevant to web-based products (which doesn’t really help ISDs who design and develop instructor-led or other paper-based solutions).  These techniques could, however, be very useful for our augmented reality design teams.  Using Kuniavsky’s chapter as a starting point, I found some great resources for how to implement analytics for mobile applications.  There are many companies that provide this service (either for free or for a fee), and I have listed a few of them in my resources.  These analytics companies examine many of the same areas Kuniavsky brought up, including:

  • Usage statistics (average session length; new vs. active users)
  • Benchmarks
  • Audience segmentation (demographics, user interest data)

I was also interested to find out that Nielsen, the company that we all know for their in-depth research on television viewing, has started a new push to look at mobile usage.  Their project is called “Nielsen Smartphone Analytics”, which “tracks and analyzes data from on-device meters installed on thousands of iOS and Android smartphones.” Nielsen has some interesting articles on their website that detail what they have found by segmenting the data (the link to their site is also in the resources, below).

Regardless of the amount of influence we have over customer support processes, or the amount of data we do (or don’t) have from in-depth data mining like log analysis, we do need to remember to step back and look at the big picture.  As Kuniavsky says on pages 398 and 399, “don’t jump to conclusions” and “resist the urge to solve problems as you learn about them.”  As we gather data on our products, we have to remember to step back, examine the trends, and prioritize our improvement efforts in order to tackle those items that will make the biggest difference to our users.

Resources:

Sample studies conducted by Nielsen, from the Nielsen Smartphone Analytics project:  http://blog.nielsen.com/nielsenwire/category/online_mobile/

Info from Google Analytics, including access to code:  http://code.google.com/apis/analytics/docs/mobile/overview.html

Flurry, an example of a company that does free mobile app analytics: http://www.flurry.com/product/analytics/index.html

An article that provides an overview of several companies that do analytics for mobile apps: http://mobile.tutsplus.com/articles/marketing/7-solutions-for-tracking-mobile-analytics/

Posted by: bbannan | March 22, 2012

Modern Diaries and Perpetual Beta Testing – Melissa

In chapter twelve, Kuniavsky (2003) details the benefits and drawbacks of ongoing user research through the use of diaries (both structured and unstructured) as well as advisory boards, beta testing, and telescoping. Many of the benefits and drawbacks that exist with these ongoing, user research relationships mirror those discussed in earlier chapters in the text. As such, one will not be restating them here, but instead will discuss two other related items—namely, a more modern approach to diaries and perpetual beta testing and the user/researcher relationship.

When the text was published in 2003, the technological landscape was very different. Kuniavsky touches on the use of journals, electronic and paper forms, and emails to collect users’ diary entries. While these methods still work, more options are available today. One believes that if this chapter were written today, Kuniavsky would discuss how blogging software like WordPress, Posterous, and Tumblr (among others) could also be utilized to more efficiently capture data from the user. While accessible from a web interface, these three platforms also accept posts via email, which may lessen the anxiety less technologically adept users might feel when trying to record their experiences and observations. An August 2010 USiT blog post describes setting up and using Posterous for user research, and Christopher Khalil’s 2009 presentation briefly touches on using other technologies like Twitter, Facebook, or video to perform the research. Khalil posits that using these technologies is now more natural than writing traditional diary entries. If one were participating in user research, a video diary would be one’s preferred method, as talking out loud seems more natural and would thus encourage more openness and disclosure. Bryan (2012) echoes these statements and describes video diaries as “[participant] lead” user research instead of researcher-guided, which is important because every user is different and users’ relationships to the product are unique to each “[user’s] goals, job, and the other tools they use” (Kuniavsky, 2003).

Researchers also benefit from the adoption of the aforementioned technologies in user research. Data is ready to be gathered and analyzed as soon as the user posts to the blog, and researchers can be notified via RSS feeds and services like Feed My Inbox (http://www.feedmyinbox.com/) that new observations are available for their review. In video diaries, the researchers can see the users’ environments and gather data about the context that would not have otherwise shown up in written diaries. Setting these benefits aside, researchers need to determine which data collection methods best suit their research process and not pick a method just because it is new and shiny. The research focus should be on gathering valuable information about the experience as the user interacts with the product in context, and in some cases paper diaries or online forms may be the way to go.
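As a rough sketch of how that notification loop might work (the feed URL is hypothetical, and feedparser is a third-party library, not something Kuniavsky discusses), a researcher could poll a diary blog’s RSS feed and collect only the entries not yet seen:

```python
import feedparser  # third-party library: pip install feedparser

# Hypothetical RSS feed for a participant's diary blog (Tumblr, Posterous, WordPress, etc.).
FEED_URL = "http://participant-diary.example.com/rss"

seen_links = set()  # in practice this would be persisted between polls

def fetch_new_entries():
    """Return diary entries we have not seen before, ready for coding and analysis."""
    feed = feedparser.parse(FEED_URL)
    new = [e for e in feed.entries if e.link not in seen_links]
    seen_links.update(e.link for e in new)
    return new

for entry in fetch_new_entries():
    print(entry.get("published", "no date"), "-", entry.title)
```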

The idea of beta testing as user research was not familiar prior to reading Kuniavsky, but one’s familiarity with beta testing is high. Many web services seem to be in “perpetual beta” (O’Reilly, 2005), or beta testing takes place for a long time. For example, Gmail, launched in 2004, exited beta testing in 2009 (Lapidos, 2009). O’Reilly (2005) states that “users must be treated as co-developers” when products are in perpetual beta, which mimics the approach taken in this course wherein prototype revisions are made after several rounds of user research. Kuniavsky presents beta testing as a usability testing option that occurs after development, but before the product is released. If O’Reilly’s statement regarding users as co-developers holds, could the case be made that beta testing is just another name for what the groups in the course have been doing all along, albeit with less than fully functional prototypes? One sees both sides of the argument, but is curious as to others’ opinions.

While long-term user research is not feasible in a sixteen-week course, Kuniavsky’s text presents the different types of user research methods that could be used if more time were available. In closing the chapter, he stresses that user research should not be solely focused on initial experiences, because products and users’ relationships with those products change over time, and products can be further developed to “grow with the knowledge and needs of its users” (Kuniavsky, 2003).

 

References

Bryan, P. (2012, January 23). Video diaries: a method for understanding new usage patterns. Retrieved March 17, 2012, from UXmatters: insights and inspiration for the user experience community: http://www.uxmatters.com/mt/archives/2012/01/video-diaries-a-method-for-understanding-new-usage-patterns.php

Khalil, C. (2009, September 04). The new digital ethnographer’s toolkit: capturing a participant’s lifestream. Retrieved March 17, 2012, from Slideshare.net: http://www.slideshare.net/chris_khalil/the-new-digital-ethnographers-toolkit-capturing-a-participants-lifestream

Kuniavsky, M. (2003). Observing the user experience: a practitioner’s guide to user research. San Francisco: Morgan Kaufmann Publishers.

Lapidos, J. (2009, July 07). Why did it take google so long to take gmail out of “beta”? Retrieved March 16, 2012, from Slate: http://www.slate.com/articles/news_and_politics/recycled/2009/07/why_did_it_take_google_so_long_to_take_gmail_out_of_beta.html

O’Reilly, T. (2005, September 30). What is web 2.0: design patterns and business models for the next generation of software. Retrieved March 15, 2012, from O’Reilly: http://oreilly.com/web2/archive/what-is-web-20.html

USiT. (2010, August 13). Using posterous as an online cultural probe (user research diary) – USiT blog. Retrieved March 17, 2012, from USiT blog – user experience at news digital media: http://usit.com.au/using-posterous-as-an-online-cultural-probe-u

Posted by: bbannan | March 22, 2012

Thoughts on Maintaining Ongoing Relationships – Hiba

After reading Chapter 12 of Kuniavsky’s (2003) Observing the User Experience: A Practitioner’s Guide to User Research, I realized that there is a lot more involved in understanding people’s relationship to a product than simply administering one-time usability testing techniques.

As Kuniavsky (2003) suggests, “in nearly all cases, you want people to get comfortable with your product and learn its more subtle facets” (p. 367). However, Kuniavsky (2003) also notes that “people’s use of a product and their relationship to it change with time” (p. 367). This means that the product must support long-term use, not just one-time use, since products should be useful to their users over extended periods of time (months, years, etc.). Thus, as Kuniavsky (2003) states, “knowing a point on the curve is valuable [an understanding of people’s experience right now], but it doesn’t define the curve. Knowing the shape of the curve can help you predict what people will want and help you design an appropriate way to experience it” (p. 367).

So, I am going to reflect on a few of the techniques mentioned by Kuniavsky in this chapter to hopefully give us a better insight in understanding behavior and attitudinal changes over time and maintaining ongoing relationships with end users.

Background

Before diving into the techniques, I want to recap some of the things that Kuniavsky (2003) says “happen” as a user progresses from newbie to expert (p. 368-369):

  • Mistakes are made.
  • Mental models are built.
  • Expectations are set.
  • Habits are formed.
  • Opinions are created.
  • A context is developed.

As Kuniavsky (2003) states, “all of these changes affect the user experience of the product and are difficult to capture and understand in a systematic way unless you track their process” (p. 369).

Diaries

The first technique I want to discuss is diary studies. As Kuniavsky (2003) states, “diary studies are, as the name implies, based on having a group of people keep a diary as they use a product. They track which mistakes they make, what they learn, and how often they use the product… Afterward, the diaries are coded and analyzed to determine usage patterns and examined for common issues” (p. 369).

 

Two very important factors need to be determined before conducting a diary study:

  • The duration of the study.
  • The sampling rate of the study.

As Kuniavsky (2003) suggests, “since people aren’t diary-filling machines, picking a sampling rate and duration that won’t overly tax their time or bore them is likely to get you better-quality information” (p. 370).

Furthermore, there are two kinds of diary studies: unstructured and structured (Kuniavsky, 2003, p. 371).

  • Unstructured diary studies are participant driven. They are loosely structured, and focus on getting the diarists to relate their everyday experiences, track their learning, and the problems they encounter as they encounter them.
  • Structured diary studies resemble extended surveys or self-administered usability tests. They are more strictly structured, with the remote guidance of a moderator, and focus on having the diarists perform specific tasks and examine specific aspects of a product. Diarists are then asked to report their experiences in a predetermined diary format: 1) survey-structured diaries, 2) usability test diaries, or 3) problem report diaries (Kuniavsky, 2003, p. 375).

Advisory Boards

An advisory board “consists of a group of users who are tapped by the development team whenever the team feels that they need input from end users on changes to the product” (Kuniavsky, 2003, p. 385). Kuniavsky (2003) mentions that the members of the board need to have specific qualities, unlike focus groups, including the following (p. 387):

  • They have to know the task.
  • They need to be end users, at least some of the time.
  • They should be articulate.
  • They should be available.

Most major firms/organizations/institutions have established advisory boards that help them obtain valuable user input and provide advice when solicited. For example, Mason has an advisory board for each major school/unit within the institution; so does the U.S. Department of Education, which has a number of advisory committees that provide advice on specific policy and program issues.

Beta Testing

Beta testing is a “time-proven quality assurance (QA) technique” and one that I am personally very familiar with (Kuniavsky, 2003, p. 391). Using a structured feedback system, beta testing is done at the very end of the development cycle. After content, graphics, interface, functionality, and navigation have all been agreed upon and finalized, end users test the product and report any issues they encountered to the developers. This is the last stop before the product gets packaged and published (Kuniavsky, 2003, p. 391).

 

Beta testing can be an effective technique used in both an educational and corporate setting. For example, I read an article that described the analysis of beta testing results of a study on using on-line modules for professional development in action research at schools in Florida. Little and King (2007) stated that, “to address the need for responsive, individual, and contextualized support during the implementation process of evidenced-based instructional practices by teachers to determine impact of instruction, an on-line module in action research has been developed, implemented, and researched using a beta testing process”. Within the beta testing framework, Little and King (2007) also noted that “most of the data collected during the beta-testing was the result of surveys, interviews, and focus groups”. The results of the survey indicated that “both the knowledge and perceptions of the content of action research, as well as the process of implementing action research, could be effectively completed using an on-line environment” (Little and King, 2007).

On a personal note, the design and development cycle the training team and I use at work also involves alpha testing and beta testing. We develop electronic learning tools, or simply computer-based training courses, for our Air Force client. We first meet with the client to decide upon the content, graphics, and audio script that will be included in the course. Then comes alpha testing, where we send the course to a select list of subject matter experts (SMEs) who review it for accuracy and consistency in terms of the content/processes explained, functionality, and navigation. We address all comments made by the SMEs and update the course based on their feedback. Then comes beta testing, where we send the course again to that list of SMEs for their feedback on any issues they may encounter while reviewing the course. We fix those issues as applicable, record audio, package the course, and publish it to their learning system environment… and voilà, it’s done!

 

Final Thoughts

 

In conclusion, I think the most important thing I took away from this chapter is best said by Kuniavsky (2003) himself: “it’s important to know how products change and how people’s relationships to them shift as time goes on so that the products can be designed to grow with the knowledge and needs of its users” (p. 394). I will definitely keep the techniques and best practices I learned in this chapter in mind as my ISD team (Group 5) continues to design our Museum on the Mall conceptual design prototype.

References

Kuniavsky, M. (2003). Observing the user experience: A practitioner’s guide to user research. San Francisco, California: Morgan Kaufmann Publishers.

Little, M., & King, L. (2007). Using on-line modules for professional development in action research: Analysis of beta testing results. Journal of Interactive Online Learning, 6(2), 87-99. Retrieved from http://www.ncolr.org/jiol/

Mason – Volunteer Leadership – Advisory Boards: http://supportingmason.gmu.edu/volunteer-leadership/advisoryboards.html

U.S. Department of Education – Boards & Commissions: http://www2.ed.gov/about/bdscomm/list/index.html?src=ln

This semester, my ISD Team (Group 5) knew immediately that we wanted to conduct a survey as part of our user research plan to refine our prototype. We had wanted to administer a survey last semester, but didn’t have enough time. With EDIT 752 focusing on user research, we were excited to finally have the opportunity. We started by brainstorming some research questions that we wanted our survey to answer and then drafted some potential survey questions. Our survey questions all seemed to be appropriate for our target audience and would provide valuable information that we could use to revise our prototype. However, they weren’t clearly (beyond a shadow of doubt) in line with our initially proposed research questions. This made me wonder whether a survey was really the best method for answering our research questions, or whether we were simply planning to do a survey because we just really wanted to do a survey.

As an ISD, I want all elements of my designs to fit in pretty little, clearly labeled packages. Obviously, I want my evaluation methods to align with my learning objectives, no matter how informal those objectives may be. At my day job, my customers become somewhat defensive when I ask them to explain why the learner needs to know particular content that has “always” been included in a course that I am redesigning. By asking the customer to explain this to me, I am not suggesting that the learner doesn’t really need to know the information in question. On the contrary, I need to understand the why so that I can determine the most effective how—that is, how to help the learner… well, learn.

After reading Chapter 11 of Kuniavsky’s (2003) Observing the User Experience: A Practitioner’s Guide to User Research, I realize that designing an effective survey is a lot like designing effective instruction. Surveys can be an extremely efficient and effective way to collect information from a large group of users in order to refine your user profile. That being said, insufficient planning can be disastrous. As Kuniavsky points out, poorly designed surveys “can ask the wrong people the wrong questions, producing results that are inaccurate, inconclusive, or at worst, deceptive” (p. 304). As ISDs, we witness all too often how insufficient planning and analysis usually produces ineffective training.

In terms of planning our user survey, Group 5 started off on the right track by first identifying our research goals and associated research questions. But how can we ensure that the survey questions we initially came up with are aligned with these goals and objectives?

Hypothetical Tables

After you’ve identified the survey questions you would like to ask, Kuniavsky suggests creating a grid to map each survey question to the instructions you intend to provide to the respondent, possible answers, and the reason for asking the question (p. 311). The latter is of utmost importance because researchers need to be able to justify the inclusion of every single question in their survey. In other words, researchers need to be able to express exactly why they are asking each question and what they intend to do with the resulting data. This includes identifying how the researchers intend to analyze the data.

Brace (2008) warns researchers to include only those “questions that are relevant to the objectives and not [to] be tempted to ask questions of areas that might be of interest but not relevant to the objectives.  To do so is to waste resources in terms of the time of everyone involved, including the respondents, and to spend money unnecessarily.” Hypothetical tables force the researcher to identify each potential survey question as either “need to know” or “nice to know.” Salant and Dillman (1994) assert, “‘Need to know’ is a critical criterion for every item on a questionnaire.”
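As a rough illustration (the questions, answers, and flags below are invented, not from Kuniavsky or Brace), the hypothetical table can even be kept as structured data so that any question lacking a justification, or marked merely nice to know, gets flagged before the survey goes out:

```python
# Each row of the hypothetical table: question, instructions, possible answers,
# reason for asking, and whether it is truly "need to know".
survey_grid = [
    {
        "question": "How many museum visits have you made in the past year?",
        "instructions": "Choose one.",
        "answers": ["0", "1-2", "3-5", "6 or more"],
        "reason": "Segments respondents by museum-going habits to refine the user profile.",
        "need_to_know": True,
    },
    {
        "question": "What is your favorite dinosaur?",
        "instructions": "Open response.",
        "answers": [],
        "reason": "",  # no justification: should be cut or rewritten
        "need_to_know": False,
    },
]

for row in survey_grid:
    if not row["reason"] or not row["need_to_know"]:
        print("Reconsider:", row["question"])
```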

Mock-Up Reports

Kuniavsky also recommends writing a mock-up survey report prior to administering the survey. The report should include the research goals, methodology, design description, sampling information, response rate, and data analysis with appropriate tables (p. 323). Clearly, it is impossible to include the actual data at this point, but including hypothesized results can help researchers ensure that they are asking respondents the appropriate survey questions to answer their research questions (p. 323). 

Decision Trees

Salant and Dillman (1994) advise researchers to think about surveys as a means to solving a problem. Once you identify the problem that needs to be solved, the next step is to determine what new information you need to solve it. Salant and Dillman encourage researchers to consider whether they really need to obtain new data in order to solve their problem. Re-evaluating the problem may help them see that a “survey won’t help them solve the problem they are trying to address.” Perhaps a different research strategy would better address the problem, or perhaps researchers really need to look at existing data in new ways.

Halteman (2011) reminds survey writers, “you owe it to your respondents to only ask questions from which the resulting data will be used to take action or make a decision.” Halteman explains, “The two most common types of unnecessary questions are asking about something that has already been decided and asking about things over which you have no control.”  In other words, if you don’t plan to do something with the data, or if you don’t have the authority to act on it, then it is a waste of resources (both yours and the respondents’) to ask the question.

Like Salant and Dillman, Vanek (2007) advises researchers to resist the urge to include “nice to know” questions in their surveys. Although “Learning for the sake of learning is admirable,” Vanek stresses that “learning something with the intent to act on it is far more practical.” To ensure that your survey questions are actionable, Vanek suggests developing a decision tree to “outline what actions will be most effective based on your data.” Having a well-thought-out action plan prior to administering the survey will also enable you to act more quickly and confidently once you’ve obtained your data.
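Sketched in code (the threshold values and actions below are purely hypothetical), such a decision tree is nothing more than a branch for each outcome you have already decided how to act on:

```python
def plan_action(percent_preferring_guided_tour: float) -> str:
    """Hypothetical decision tree: each action is chosen before the survey is fielded."""
    if percent_preferring_guided_tour >= 60:
        return "Prioritize the guided-tour feature in the next prototype revision."
    elif percent_preferring_guided_tour >= 30:
        return "Keep the feature but test it again with a larger sample."
    else:
        return "Drop the feature and reallocate development time."

# Once the data comes in, the path through the tree is already mapped out.
print(plan_action(42.0))
```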

Some Final Thoughts

After reading Kuniavsky’s Chapter 11, I’m pretty sure that Group 5 isn’t designing a survey simply for the sake of designing a survey. We have spent several months analyzing our target audience and our customer’s missions and goals, communicating with stakeholders, and conducting comparative analyses, and our instincts tell us that a survey is the way to go. Even so, we have decided to take a step back to re-evaluate our survey goals and perhaps attempt some of the strategies described here. We know that instincts alone aren’t enough—that we need to be able to articulate precisely what problem our survey data will help us solve, how each survey question ties back to our research questions, and what we plan to do with the data.

Those customers who become defensive when asked to explain why their learners need to know specific content usually have pretty good instincts, too. There’s usually a very good reason why that content has “always” been included in their course. The ISD’s job is to help the customer express the why so that the how produces the desired learning outcomes.

References

Brace, I. (2008). Questionnaire design: How to plan, structure and write survey material for effective market research (2nd ed.). [Books24x7 version] Retrieved from http://common.books24x7.com/toc.aspx?bookid=28480

Halteman, E. (2011, November 10). 10 common mistakes made when writing surveys—Part 2. SurveyGizmo. Retrieved on March 5, 2012, from http://www.surveygizmo.com/survey-blog/10-common-survey-mistakes-part-2/

Kuniavsky, M. (2003). Observing the user experience: A practitioner’s guide to user research. San Francisco: Morgan Kaufmann.

Salant, P. & Dillman, D. A. (1994). How to conduct your own survey. [Books24x7 version] Retrieved from http://common.books24x7.com/toc.aspx?bookid=4863

Vanek, C. (2007, May 11). What is a successful survey project? (Hint: It’s not just the data). SurveyGizmo. Retrieved March 5, 2012, from http://www.surveygizmo.com/survey-blog/what-is-a-successful-survey-project-hint-it%e2%80%99s-not-just-the-data/

 
