Posted by: bbannan | March 26, 2012

Customer Feedback and Log Analysis – Carmina

In Chapter 13, Kuniavsky (2003) focuses on the concepts of customer support and log analysis for products that are “live” (being used day-to-day by consumers).  Customer support includes the processes of supporting users while they use a product, and the data that comes out of that support.  Log analysis refers to gathering detailed data about how people actually use the product by examining the records the product itself generates, such as the log files from a website.  He notes that while the usability testing that happens during design (which is what we’re focusing on this semester) is useful, it doesn’t get to the heart of the real problems our users face.  The issue, Kuniavsky explains, is that the data from usability testing comes not from actual users but from survey and focus group participants.  His point is that if you really want to improve your product long-term, you must eventually field it and collect information about how your real users are actually using it in their daily lives.  Some of the benefits of doing so that Kuniavsky points out on page 396 are:

  • You get a sense of your users’ real mental models – how they talk and think about your product
  • You learn about users’ expectations, and how your product does or does not meet them
  • You learn about user frustrations
  • Through the data you gather on the points above, you are able to target future builds or enhancements for your product
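
To make the log analysis idea concrete: even a plain web server access log can answer basic questions about what users actually do.  Below is a minimal sketch (in Python) of the kind of first-pass analysis Kuniavsky is describing; the Apache-style log format and the "access.log" filename are my assumptions for illustration, not anything prescribed in the chapter.

    import re
    from collections import Counter

    # Minimal sketch: tally page hits and unique visitors from an
    # Apache-style access log.  The field layout is an assumed format,
    # not something specific to Kuniavsky's examples.
    LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+)[^"]*"')

    def summarize(log_path):
        hits = Counter()
        visitors = set()
        with open(log_path) as f:
            for line in f:
                m = LINE.match(line)
                if not m:
                    continue  # skip malformed lines rather than guess
                hits[m.group("path")] += 1
                visitors.add(m.group("ip"))
        return hits.most_common(10), len(visitors)

    top_pages, unique_ips = summarize("access.log")
    print("Top pages:", top_pages)
    print("Approximate unique visitors:", unique_ips)

Even a rough report like this (most-visited pages, approximate unique visitors) starts to surface the expectations and frustrations Kuniavsky lists above.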

The two methods Kuniavsky reviews in Chapter 13 at first glance seem better suited to the commercial world of product development.  I think he gets to a very important point, though, which is not new to us as instructional designers: you must continually evaluate your product after it has been made available to your users.  We’re all familiar with the many ways of doing this in the instructional design world – student evaluations, assessment results, implementation of the four levels of the Kirkpatrick Model, etc.

I have the most professional experience with the customer support techniques Kuniavsky reviewed.  My experience, however, has varied greatly depending on the level of control I (or my company) have had over the methods with which I can gather that data.  I worked for many years at a company that developed and sold training courses for the general public.  At that company, I had a lot of customer support data at my disposal that I was able to regularly analyze and use for course maintenance and improvement.  As Kuniavsky mentions in the chapter, the amount of data can certainly be overwhelming.  My courses (which were instructor-led) were delivered hundreds of times per year, with 15 – 30 students in each session.  Part of my job was to work through the incredible amount of data from those course evaluations (and comments gathered by actual customer service reps) to identify trends and then use those trends to plan curriculum improvements.  I found Kuniavsky’s description of the coding process interesting, because I think my co-workers and I did that without really knowing it.  We would often categorize the comments by content area, activity type, etc. to narrow in on the trends as much as possible.
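
For anyone curious what that coding step can look like when you automate the first pass, here is a small sketch in Python.  The categories and keywords are hypothetical, invented for illustration; they are not the codebook my team actually used, and a real analysis would still need a person to review the buckets.

    from collections import Counter

    # Rough sketch of keyword-based "coding": bucket free-text course
    # evaluation comments into categories, then count the categories
    # to surface trends.  The codebook entries below are made up.
    CODEBOOK = {
        "pacing":     ["too fast", "rushed", "slow", "pace"],
        "activities": ["exercise", "activity", "hands-on", "lab"],
        "materials":  ["handout", "slide", "workbook", "manual"],
        "instructor": ["instructor", "teacher", "presenter"],
    }

    def code_comment(comment):
        text = comment.lower()
        tags = {cat for cat, words in CODEBOOK.items()
                if any(w in text for w in words)}
        return tags or {"uncategorized"}

    def trend_report(comments):
        counts = Counter()
        for c in comments:
            counts.update(code_comment(c))
        return counts.most_common()

    comments = [
        "The pace felt rushed in module 2.",
        "More hands-on exercises, please.",
        "Handouts were out of date.",
    ]
    print(trend_report(comments))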

In contrast, more recently I have worked on contracts developing courses that are essentially handed to the customer for implementation and ongoing monitoring.  In these projects, if the customer hasn’t requested ongoing data analysis and support as part of the contract, I’m not able to provide it.  What often happens is that the data is gathered by the customer (usually via course evaluations), and then that data is handed to the team responsible for course improvements (which sometimes is my team, and sometimes is not).  Without direct control or influence over how the “customer support data” is collected in these cases, I sometimes feel like my ability to target the course improvements is limited.  One way of tackling this issue is to ask for the raw data that has been gathered so that my team can pick up at the coding and analysis steps — making sure we’re able to draw out the trends relevant to the project, and not just working with someone else’s summary of the data.

The data gathering techniques Kuniavsky describes for log analysis are only relevant to web-based products (which doesn’t really help ISDs who design and develop instructor-led or other paper-based solutions).  These techniques could, however, be very useful for our augmented reality design teams.  Using Kuniavsky’s chapter as a starting point, I found some great resources for how to implement analytics for mobile applications.  There are many companies that provide this service (either for free or for a fee), and I have listed a few of them in my resources.  These analytics companies examine many of the same areas Kuniavsky brought up, including:

  • Usage statistics (average session length; new vs. active users)
  • Benchmarks
  • Audience segmentation (demographics, user interest data)
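
To show how a couple of these numbers fall out of raw usage data, here is a small sketch in Python.  The session records and their layout (user, start, end) are invented for illustration; in practice an analytics provider like the ones listed in my resources computes these figures for you.

    from datetime import datetime

    # Sketch: compute average session length and new vs. returning
    # sessions from assumed raw session records.
    FMT = "%Y-%m-%d %H:%M"
    sessions = [
        {"user": "u1", "start": "2012-03-01 09:00", "end": "2012-03-01 09:06"},
        {"user": "u2", "start": "2012-03-02 14:10", "end": "2012-03-02 14:25"},
        {"user": "u1", "start": "2012-03-05 08:30", "end": "2012-03-05 08:41"},
    ]

    def avg_session_minutes(sessions):
        lengths = [(datetime.strptime(s["end"], FMT) -
                    datetime.strptime(s["start"], FMT)).total_seconds() / 60
                   for s in sessions]
        return sum(lengths) / len(lengths)

    def new_vs_returning(sessions):
        seen, new, returning = set(), 0, 0
        for s in sorted(sessions, key=lambda s: s["start"]):
            if s["user"] in seen:
                returning += 1
            else:
                new += 1
                seen.add(s["user"])
        return new, returning

    print("Average session length (minutes):", avg_session_minutes(sessions))
    print("New vs. returning sessions:", new_vs_returning(sessions))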

I was also interested to find out that Nielsen, the company we all know for its in-depth research on television viewing, has started a new push to look at mobile usage.  Their project is called “Nielsen Smartphone Analytics”, which “tracks and analyzes data from on-device meters installed on thousands of iOS and Android smartphones.”  Nielsen has some interesting articles on their website that detail what they have found by segmenting the data (the link to their site is also in the resources, below).

Regardless of the amount of influence we have over customer support processes, or the amount of data we do (or don’t) have from in-depth data mining like log analysis, we do need to remember to step back and look at the big picture.  As Kuniavsky says on pages 398 and 399, “don’t jump to conclusions” and “resist the urge to solve problems as you learn about them.”  As we gather data on our products, we have to remember to step back, examine the trends, and prioritize our improvement efforts in order to tackle those items that will make the biggest difference to our users.

Resources:

Sample studies conducted by Nielsen, from the Nielsen Smartphone Analytics project:  http://blog.nielsen.com/nielsenwire/category/online_mobile/

Info from Google Analytics, including access to code:  http://code.google.com/apis/analytics/docs/mobile/overview.html

Flurry, an example of a company that offers free mobile app analytics: http://www.flurry.com/product/analytics/index.html

An article that provides an overview of several companies that do analytics for mobile apps: http://mobile.tutsplus.com/articles/marketing/7-solutions-for-tracking-mobile-analytics/


Responses

  1. Carmina,

    I worked for many years in commercial hardware and software, and I have found myself wishing for log files more than once as an instructional designer. The catch is that software log files allow investigators to pinpoint the time and (potentially) the source of an error or unexpected behavior, but for learning environments, even online learning environments, we’re typically not concerned with bugs and functionality by the time the product gets in front of users. I wish it were as easy as reading a log file. (Funny aside: I’m working on a project right now about log analysis.)

    I can also sympathize with your observation that we’re often not hired all the way through evaluation. Talking about reading log files and feedback from customers during support assumes, as you pointed out, that you’re dealing with a product in operations and sustainment. This reminds me of Melissa’s blog on beta testing. In software, you release one or two early (but more or less fully functional) versions of your product to a limited audience for the express purpose of getting feedback and improving the product before a general “1.0” release.

    We sometimes do a similar thing with faculty and student pilots, but those are really more like alpha tests. It would be nice to work at an organization where you have the luxury of managing existing courses, evaluating and improving them over time.

    Log file analysis and customer support: interesting ideas to apply to instructional design.

    Heath

  2. Hi Carmina,

    Really nice summary! I agree that log files aren’t terribly useful when you’re not working on the web, but this is definitely something to consider as more and more content moves to web-based and mobile delivery. In general, I don’t think ISDs immediately jump to considering complex metrics like these for analysis. I was in a meeting the other day listening to an executive talk about the metrics they were gathering on their users (hoards of logs), and I started thinking about all the great ways that information could be used in conjunction with other research to target learning objectives. It was pretty overwhelming to consider, but in a good way. And that ties in perfectly to using this type of analysis for our AR projects. Like you, I’d love to see the paths that users take through our application and how long they spend in the different activities. That would be a great asset for further development.

    And thanks for the Nielsen information. I feel like this will be a good resource for my future. It makes sense that Nielsen would move in the direction of mobile analytics, but it’s nice to see that they are actually taking steps.

  3. At my job, we are currently looking at developing a community of practice (CoP) for the Air Force. Log file information would be a good starting point for tracking how people use the CoP after it is launched. Although information from log files and customer support is beneficial for improving a product, the data still has to be analyzed properly so that we can draw the right conclusions about how to make the CoP better.

  4. This post really made me think about the difference between approaching the creation of learning experiences as a project (per the PMI’s PMBOK, a temporary endeavor with a defined beginning and end) or a program (ongoing and implemented to consistently achieve certain results) in which continuous improvements are made based on data received/extracted from feedback surveys, log files, test scores, etc. Is there ever really a non-forced end to an instructional design (other than sunsetting outdated classes)? I know that we have classes I designed over 5 years ago that are still offered, but we have revised them repeatedly based on learner feedback and changing recommendations on how to use the software.

    For those of you who are professional instructional designers, how is evaluation/revision presented to clients who only order design and delivery? Is there any attempt to set up a system by which they can gather, code, and act on the feedback they collect after implementing the learning experience?

  5. Hi Carmina,

    I enjoyed your summary and reading about your experiences as an instructional designer developing course material. I thought your approach of asking for the raw data in your contract work was innovative and a great way to gather data for your team’s own analysis.

    Thanks for the helpful resources. I did not know Nielsen was getting into mobile analytics; it makes me wonder how long it will be before smartphone companies start sharing their own analytics.

