Posted by: bbannan | March 7, 2012

Survey design – Jennifer P.

Kuniavsky (2003) warns the reader about the importance of survey design, saying that poor design can lead to “ask[ing] the wrong people the wrong questions, producing results that are inaccurate, inconclusive, or… deceptive” (p. 304). Chapter 11 spends time introducing the reader to survey research. In a recent webinar, Phillips (2012) shared that survey questions should be written with the respondent in mind. Can they answer the question? Do they understand the meaning of the question? Do they know the answer? Are they able to answer in the terms required? And are they willing to answer the question? It’s very easy to write questions that fit our own schemas and interpretations; however, it’s much more difficult to write for other people. For the purpose of this post, I’ll be talking about how to write good questions that will get you what you want to know.

In his text, Kuniavsky (2003) offers a few suggestions for writing good survey questions. I’ll briefly review them here (as you’ve already read them!):

  • Don’t ask respondents to predict their behavior; focus on past behavior
  • Don’t ask negative questions as the words “not” or “never” can be easily overlooked by the respondent
  • Keep questions simple and don’t overload with additional concepts
  • Be specific; avoid using broad terms such as “sometimes” or “rarely”
  • Provide options that don’t make assumptions about the respondent or exclude respondents
  • Keep the language of questions consistent throughout the survey
  • Make sure the questions are relevant to the audience to reduce mortality (drop-off)
  • Use Likert-like scale questions
  • Create follow-up questions for certain answers (use “logic”; see the sketch after this list)
  • Include opt-out options such as “not applicable” or “none”
  • Give respondents an opportunity to leave comments
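A few of these guidelines (Likert-like scales, opt-out options, and follow-up “logic”) are easier to picture with a concrete sketch. The snippet below is purely illustrative Python, not anything from Kuniavsky’s text; the SurveyQuestion structure and its field names are my own assumptions, and most survey tools expose the same idea as “skip logic” or “branching” settings rather than code.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative only: a survey question with a Likert-like scale, an explicit
# opt-out option, and simple "logic" that triggers a follow-up question.

@dataclass
class SurveyQuestion:
    text: str
    options: list                                    # ordered response options
    opt_out: str = "Not applicable"                  # always give a way out
    follow_ups: dict = field(default_factory=dict)   # answer -> follow-up question

    def next_question(self, answer: str) -> Optional[str]:
        """Return the follow-up question for this answer, if any."""
        return self.follow_ups.get(answer)

likert = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

q1 = SurveyQuestion(
    text="The training materials helped me understand the content.",  # one concept only
    options=likert,
    follow_ups={"Strongly disagree": "What, specifically, was unclear about the materials?"},
)

print(q1.next_question("Strongly disagree"))  # prints the follow-up comment question
```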

Writing from the perspective of the American Evaluation Association, Ritter and Sue (2007) describe good questions as clear, short, unbiased, and relevant.

  • Clear means that the respondent’s understanding of the question matches the surveyor’s intent. Does the question convey what I want it to? A common obstacle to writing clear questions is the use of jargon or difficult words. Be sure to explain jargon (e.g., ISD means instructional systems design) and to make sure the language level matches your audience. But be wary of simple yet broad terms that may have multiple interpretations (e.g., does the term “weekend” include Friday, or is it just Saturday and Sunday?).
  • Ritter and Sue suggest keeping questions short, at no more than 20 words. Long questions can be intimidating to respondents and also run the risk of becoming overly complex, “double-barreled,” and confusing.  An example of a double-barreled question would be, “The materials used in the training session were visually appealing and helped me better understand the content.” Perhaps the materials were visually appealing but actually did not help the learner. How should the respondent answer?
  • Unbiased refers to questions that do not lead respondents toward answering a specific way (also called loaded or leading questions). For example, a biased question for a teacher would be “Are you interested in your students?” Most respondents would likely say “yes” because it would reflect poorly on them if they didn’t.
  • Just as Kuniavsky (2003) indicated, questions must be relevant to the respondents. In order for a question to be relevant, you must be sure to properly identify your target population. Questions that are irrelevant may be skipped because respondents don’t know the topic well enough to answer, while other respondents may choose a favorable answer so that they appear well informed on the topic even if they are not.

Perhaps the best advice offered by Kuniavsky (2003) is to pilot the questions before deployment. It can be difficult to recognize your own biases and mistakes. And as the author mentioned, it’s even more difficult to re-run your survey. This is especially true if you already have a small target population to begin with. Last summer I made that mistake with a technology survey for the employees of my company. I realized after I had the data that the questions weren’t really asking what I intended them to. Unfortunately I can’t re-run the survey because I’ll be targeting the same group of 350 employees. In another example, my company surveyed employees about the recent merger. One of the questions was not appropriate for the Likert-like scale used on the survey and was therefore confusing. The results showed this confusion, as the responses were split across the scale. Through a pilot, you come closer to ensuring the questions will gather the information that you want to know.

 

References

Kuniavsky, M. (2003). Observing the user experience: A practitioner’s guide to user research. San Francisco, CA: Morgan Kaufmann Publishers.

Phillips, P. (2012, February 28). Evaluation basics: Four challenges that make or break survey research. Webinar viewed from http://www.trainingmagnetwork.com/main/home

Ritter, L. A., & Sue, V. M. (2007). Using online surveys in evaluation. New Directions for Evaluation, (115), 29-36.

Chapter 10 in our text discusses usability testing and research.  It provides some excellent guidance on the process of collecting data from users, but two things about it stood out to me: one, it feels a little bit dated, and two, it feels reactive when we’re still in the midst of our design process.

When I say it feels a little bit dated, I mean it reads as if the target audience consists mainly of desktop computer users with limited familiarity with the Internet and e-Commerce.  The author was an early web developer and a pioneer in designing usable web sites in the early days of e-Commerce.  The principles in the chapter are still sound, but the process may need to be tailored for our purposes, given that our audience is almost exclusively Internet-savvy smartphone owners.

When I say it feels reactive while we’re still in the midst of our design process (to me, I should qualify), I mean that our compressed academic schedule means our designs are still in their infancy.  I feel like what I need first is a lesson on how to design usable mobile applications, rather than an exercise in how to put my potentially unusable application in front of users.  I’m an instructional designer by trade.  I have some software development experience (enterprise desktop and web-based courseware), but I feel out of my depth developing for mobile platforms that I’m unfamiliar with.

I know that a wealth of usability research for desktop and web-based software applications exists, and that over time that research produced a rough set of accepted usability guidelines for developers.  Avoid horizontal scrolling on web pages, for example.  So my hope was that equivalent research existed for mobile applications, and it does.

Jakob Nielsen’s name has already come up several times, most notably in week 2’s presentation, and he has conducted three rounds of extensive usability testing for mobile platforms.

NOTE: It’s worth mentioning here that there’s a difference between a mobile application, which is highly platform dependent, and a version of a web site that is optimized for mobile devices, which is largely platform independent and (subjectively) easier to develop.

Nielsen has been called the “guru of Web page usability” and “the world’s leading expert on user-friendly design.”  During a recent round of mobile web site user experience testing, his organization, Nielsen Norman Group (www.nngroup.com), combined three usability methods:

  • Diary Study: 14 participants from 6 countries logged all of their non-call mobile device usage for a week using Twitter, and filled out a questionnaire each day to provide more in-depth information.
  • User Testing: 48 participants (half men, half women, 33 in the U.S., 15 in the U.K.) participated in usability studies on their own phones, and the NN Group recorded their sessions with a document camera.
  • Cross-platform Review: the NN Group conducted a design review of 20 sites using 6 different phones.

In all, 62 users on four continents participated in reviewing 36 sites.  What they found is that “the mobile user experience is miserable.  The phrase ‘mobile usability’ is pretty much an oxymoron.  It’s neither easy nor pleasant to use the Web on mobile devices.”  The average success rate (i.e., the user accomplishes what he or she intends to accomplish successfully on the first attempt) was only 59%, about 20% lower than on a regular personal computer.

In other words, we shouldn’t feel too bad if our test results are kind of rough this semester.  Even the professionals have a lot of room for improvement in this arena.

Based on the data, Nielsen’s team has compiled over 100 suggestions, guidelines, and observations for building usability into your mobile sites and applications.  I’m going to share a few here that I found immediately relevant to our design exercise in 732/752.

Drop-down Boxes, Buttons, and Links

  • Clickable areas should be at least 1 cm x 1 cm.
  • Leave generous amounts of space around radio buttons, drop-down boxes, check boxes, scroll bars, links, etc.
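How big 1 cm is in pixels depends on the screen’s pixel density, so the first guideline has to be translated per device. A minimal sketch of that conversion follows; the 160 and 326 ppi figures are assumed example densities, not numbers from Nielsen’s report.

```python
# Convert a physical touch-target size (cm) into pixels for a given screen
# density (pixels per inch); there are 2.54 cm in an inch.

def min_target_px(cm: float = 1.0, ppi: float = 160.0) -> int:
    return round(cm / 2.54 * ppi)

print(min_target_px())         # ~63 px per side on an assumed 160 ppi screen
print(min_target_px(ppi=326))  # ~128 px per side on a denser screen
```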

Lists and Scrolling

  • All the items on a list should go on the same page if the items are text-only, and if they are sorted in an order that matches the needs of the task.
  • If the list contains items that download slowly (e.g., images), split the list into multiple pages and show just one page at a time.
  • If a list of items can be sorted according to different criteria, provide the option to sort that list according to all those criteria.
  • If a list contains items that belong to different categories, provide filters for users to narrow down the number of elements that they need to inspect.
  • If the list contains only one item, take the user directly to that item.
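For the paging and sorting guidelines above, the underlying logic is simple enough to sketch; the item fields and page size below are invented for illustration, and a real mobile site would implement this in its own front end or server code.

```python
# Illustrative helpers for the list guidelines: sort by any supported
# criterion, and show one page at a time for slow-loading lists.

items = [
    {"name": "Alpha", "price": 9.99, "rating": 4.2},
    {"name": "Bravo", "price": 4.50, "rating": 4.8},
    {"name": "Charlie", "price": 7.25, "rating": 3.9},
]

def sort_items(items, key):
    """Sort the list by whichever criterion the user picked."""
    return sorted(items, key=lambda item: item[key])

def page(items, page_number, page_size=2):
    """Split the list into pages and return only the requested one."""
    start = page_number * page_size
    return items[start:start + page_size]

print(page(sort_items(items, "price"), 0))  # first page of the price-sorted list
```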

NOTE: Nielsen separates users into “Searchers” and “Browsers” and advises site developers to think about which bucket their audience primarily falls into.  “Browsers want to kill time; they look for something interesting to do. Searchers look for one particular piece of information—they usually need that information to achieve a real-life goal (e.g., eat lunch, get to your store). Searchers want to be done with your site as quickly as possible, in order to go on and pursue their bigger goal.”

Navigation

  • Include navigation on the homepage of your mobile website.
  • On homepages of browsing sites, give priority to new content over navigation links.
  • Include a link to navigation on every page of your mobile website.
  • Do not replicate a large number of persistent navigation options across all pages of a mobile site.
  • Use breadcrumbs on sites with a deep navigation structure (many navigation branches). Do not use breadcrumbs on sites with shallow navigation structures.
  • Use links with good information scent (that is, links which clearly indicate where they take the users) on your mobile pages.
  • Use links to related content to help the user navigate more quickly between similar topics.
  • JavaScript and Flash do not work on many phones; do not use them.

Images and Video

  • Include images on your website only if they add meaningful content. Do not use images for decoration purposes only.
  • Do not use image sizes that are bigger than the screen. The entire image should be viewable with no scrolling.
  • For cases where customers are likely to need access to a higher resolution picture, initially display a screen-size picture and add a separate link to a higher resolution variant.
  • When you use thumbnails, make sure the user can distinguish what the picture is about.
  • If you have videos on your site, offer a textual description of what the video is about.
  • Clicking on the thumbnail and clicking on the video title should both play the video.
  • Indicate video length.
  • Specify if the video cannot be played on the user’s device.

Conclusion

These are just a few of the guidelines in the NN Group’s report that I felt were helpful when thinking about the mechanics of our application and designing usable screens.  The biggest things I took away from the research study were: 1) that this is hard to do, even for the pros, and 2) research exists to help you get started.  Nielsen’s report includes plenty of good and bad examples and screen shots that may give you some ideas.

References

The Nielsen Norman Group: http://www.nngroup.com/

Jakob Nielsen’s Website: http://www.useit.com/

Jakob Nielsen’s Column on Usability: http://www.useit.com/alertbox/

The reports can be downloaded from the NN Group’s web site for a hefty fee (about $300 for the latest one), but a little Google searching produced a PDF copy of the previous report, which I will upload to the wiki and link from the blog topic page.

Posted by: bbannan | February 23, 2012

Thoughts on Focus Groups – Bonnie-Elizabeth

Focus groups are a qualitative data-gathering technique more associated with product marketing than instructional design.  Often lampooned (see, for example, the Snickers shark focus group commercial or this comedy sketch mocking the group-think problem of focus groups), the focus group seems to be more of a punchline than a serious methodology.  Of course, given Kuniavsky’s painstakingly detailed instructions for running a focus group properly, it’s quite easy to see how one could go awry in the hands of an inexperienced researcher or moderator.  In retrospect, I chuckled at his calling focus groups “easy[]” at the chapter outset (p. 202) after going through the detailed instructions for microphone setup (pp. 225-226) and moderator skills (pp. 227-236) – the sheer difficulty of suppressing the urge to give yes-I’m-listening-to-you signals to participants takes this out of the ballpark of “easy” for me!

Given the number of people who volunteered to cover this particular chapter, and my penchant for procrastination and turning things in at the absolute last minute, I am sure that everyone has a good primer on the substantive content of Chapter 9 at this point.  Rather than expound on the salient points of the chapter, I thought I would speak a little to my experience and to a situation in which I intend to make use of the techniques that Kuniavsky details in “Focus Groups” in my professional life.

Not being an educator or instructional designer by trade, I often find myself struggling to relate to certain aspects of our IDD curriculum, and I often do not have opportunities to practice what we learn.  It is rare that I can apply any methodology exactly as described in an academic text, whether the issue is budget, time, or organizational culture.  In reading Chapter 9, though, it occurred to me that I was getting ready to undertake an exploratory focus group of sorts myself; I just didn’t realize it.  The timely discovery of Chapter 9, quite frankly, has also made me aware that if I’d proceeded without its guidance, I would have made a number of mistakes.

Last Monday, I inherited a group within my organization that increased the number of people I manage from 4 to 30.  While I know and have worked with these new-to-me folks for years, my view of what is right and wrong with their present structure is based entirely on an outside perspective; it does not reflect what the existing personnel see as right and wrong, nor does it address their professional aspirations or any suggestions they may have.  In reading through Kuniavsky’s commentary on when focus groups are appropriate and what they can be used for (pp. 202-204), I realized that this would be a near-perfect fit for what I need to do.  I am at a beginning stage where this data will help inform the path the department takes and what changes to prioritize (Kuniavsky, pp. 201-202).  I am interested in the present state of the department and what the employees value and are motivated by (Kuniavsky, p. 203).  This particular group is easy to separate into homogeneous populations of appropriate size (Kuniavsky, pp. 209-210) by years of experience and practice area.  I don’t need statistics, and I don’t need to generalize this information beyond this target group.

Had I not read the “Focus Groups” chapter, I would no doubt have walked into this experience unprepared.  I wouldn’t have had a guide.  I wouldn’t be prepared to deal with reticent participants or have tactics for dealing with hostile ones.  I fully intend to use the identify-write-prioritize-discuss technique described on pages 232-233.

I would love to tell you that this process went swimmingly and that I have enough data to set about designing a great professional development program and career track for my new employees, but I have not yet had the opportunity to conduct my focus groups.

I think my major takeaway from this has been that all of these techniques are tools that can be useful when judiciously applied, and that it can sometimes be helpful to look beyond formal instructional design contexts.  Any data-gathering technique used inappropriately is going to be unsuccessful, and I think the focus group suffers from its unfortunate reputation because its low cost and seemingly easy practice invite misuse.

Additional resources:

In terms of resources, I felt that the chapter spent a lot of time detailing how to prepare for and actually run the focus group but glossed over the data analysis piece.  To that end, here are some resources that address what you actually do with the focus group data you collect:

Reference

Kuniavsky, M. (2003). Observing the User Experience: A Practitioner’s Guide to User Research. San Francisco: Morgan Kaufmann.

Posted by: bbannan | February 21, 2012

1+1>2 or 1+1<0 — the Use of Focus Groups – Ying Wu

In Chapter 9 of Observing the User Experience, Kuniavsky discusses focus groups, one of the qualitative user research methods sometimes used in software or web site development. A focus group differs from individual interviews or surveys in that it is conducted in an environment where the users are interviewed and observed together as a group, led by a moderator, discussing and sharing their opinions on topics chosen by the researcher.

Because of their interactive nature, focus groups have drawn contradictory views on their ability to yield valid results for researchers. For novice instructional designers with little user research experience, deciding whether to choose focus groups as a user research methodology is both critical and difficult. Kuniavsky states that knowing when to use focus groups is one of the keys to using them successfully (p. 202).

In Observing the User Experience, Kuniavsky elaborates on how to conduct and analyze focus groups, and it’s not the primary goal of this reflection to reiterate those details. Instead, this reflection explores situations in which focus groups can be beneficial or detrimental to your user research, so that you have general criteria to follow and can make a wise decision about the choice of methodology.

What are Focus Groups

A focus group is a form of qualitative research in which a group of people are asked about their perceptions, opinions, beliefs, and attitudes towards a product, service, concept, advertisement, idea, or packaging (Henderson, 2009). The purpose of focus groups is not to infer, but to understand, not to generalize but to determine a range, not to make statements about the population but to provide insights about how people perceive a situation (Krueger, 1988).

Unlike individual interviews, a focus group is conducted by recruiting a group of participants with homogeneous backgrounds who are scheduled to be interviewed at the same time in the same location. According to Krueger (1988), focus groups must be small enough for everyone to have the opportunity to share insights and yet large enough to provide a diversity of perceptions. Kuniavsky suggests four groups with 6-8 participants in each group, but no fewer than 4 participants each.

When 1+1>2

According to Kuniavsky, in software or Web site development, focus groups are used early in the development cycle, when generating ideas, prioritizing features, and understanding the needs of the target audience are paramount (Kuniavsky, p. 201).

Focus groups are convenient to conduct and time-efficient in that you can get much more input from a group of participants than from one individual participant in a given time period. They provide a unique opportunity to see reality from the perspective of the user quickly, cheaply, and (with careful preparation) easily (p. 202).

Group discussion produces data and insights that would be less accessible without the interaction found in a group setting: listening to others’ verbalized experiences stimulates memories, ideas, and experiences in participants. This is also known as the group effect, where group members engage in “a kind of ‘chaining’ or ‘cascading’ effect; talk links to, or tumbles out of, the topics and expressions preceding it” (Lindlof & Taylor, 2002).

Kuniavsky states that “since focus groups can act as brainstorming sessions, it’s possible to achieve a synergy in which participants generate more ideas together than they could have come up with on their own” (Kuniavsky, p. 203).

When 1+1<0

However, Douglas Rushkoff argues that “focus groups are often useless, and frequently cause more trouble than they are intended to solve, with focus groups often aiming to please rather than offering their own opinions or evaluations, and with data often cherry picked to support a foregone conclusion” (Rushkoff, 2005). Because of the focus group setting, participants may swiftly alter their opinions to please other participants or the moderator, sometimes unconsciously. Focus groups can create situations that are deceptive both to the participants and to analysts who literally interpret statements made in focus groups rather than extracting their underlying attitudes (Kuniavsky, p. 205).

Focus groups are not good for usability information in that they can’t show you whether users can use a feature in practice. Nor are they statistically significant the way a survey can be; in other words, there is no guarantee that the proportion of responses in the group matches that of the larger population of users (p. 204). However, focus groups can give you a really good idea of why the audience behaves the way it does. Once the “why” has been determined, it can be verified through statistically significant research, such as a survey (p. 204).

Discussion

Lambert and Loiselle (2008) integrated focus group and individual interview data in their study of patterns in people’s cancer information-seeking behavior, and found that the “convergence of the central characteristics of the phenomenon across focus groups and individual interviews enhanced trustworthiness of findings.”

With the above said, this reflection is not meant to state definitively whether focus groups are an appropriate methodology for all user research. Depending on variables such as the research questions and the target audience, some researchers may find that focus groups spark more insight into their topics than individual interviews, making “1+1>2”, whereas others may find the results null or, even worse, misinterpret focus group results, making “1+1<0”.

References

Henderson, N. R. (2009). Managing moderator stress: Take a deep breath. You can do this! Marketing Research, 21(1), 28-29.

Kuniavsky, M. (2003). Observing the User Experience: A Practitioner’s Guide to User Research. San Francisco: Morgan Kaufmann Publishers.

Lambert, S. D., & Loiselle, C. G. (2008). Combining individual interviews and focus groups to enhance data richness. Journal of Advanced Nursing, 62(2), 228-237. doi:10.1111/j.1365-2648.2007.04559.x

Lindlof, T. R., & Taylor, B. C. (2002). Qualitative communication research methods (2nd ed.). Thousand Oaks, CA: Sage.

Rushkoff, D. (2005). Get back in the box: Innovation from the inside out. New York: Collins.

Posted by: bbannan | February 21, 2012

Focus Groups – Nouf

There are a variety of techniques available for collecting the necessary information when conducting research; among those techniques is the focus group. It is a method of information gathering that is recognized and valued by many researchers, educators, organizations, and community leaders. Kuniavsky’s chapter on focus groups gave me great insight into one of the significant methods of information gathering for user research and design research. Focus groups could definitely assist throughout the progress of one’s prototype during the semester.

What is a Focus Group?

A focus group is a series of discussions intended to collect participants’ perceptions, set in a “permissive, nonthreatening environment.”

(Krueger, 2000)

A focus group is a discussion conducted with a group of approximately 6 to 10 people, enough to give everyone the opportunity to express an opinion. The participants can be selected randomly, drawing from different populations and backgrounds to gain diverse points of view, or they can be selected non-randomly, based on who best meets the recruitment criteria, to obtain a particular point of view on a specific issue.  The main purpose of conducting a focus group is to gather opinions, perspectives, and suggestions from the participants in order to make appropriate decisions and modifications in the research being performed. Corporations, for instance, use focus groups for overall support in marketing their products.

Why conduct a focus group?

Reliable, valid information collected in a manner that takes stakeholders’ values and needs into consideration has the potential to reduce conflicts and provide leadership to decision-makers in organizations and communities.

(House and Howe, 1999)

 The purpose of conducting a focus group

  • Increases elaboration on a particular topic selected
  • Achieves broader insight into a specific topic
  • Helps in gaining a better understanding of a group of participants’ interpretations, perceptions, insights, knowledge, and attitudes toward a particular topic explored
  • Provides a clear explanation on why a certain opinion is held
  • Offers assistance in planning and designing a new product/idea
  • Offers assistance in improving a certain product/idea
  • Assists in finding a solution to an issue

When to use a Focus group?

As stated by McQuarrie (1996), when designing a research study, focus groups can be used in three modes:

  1. Stand-alone method: Focus groups are the only data collection method used.
  2. Supplementary to a survey: The focus group is used to enhance previously collected data; it can be conducted either before the survey to find out what the issues are or after the survey to help resolve them.
  3. Multi-method design: The study uses more than one method of collecting data.

ABCs of Focus Groups

Richard A. Krueger (1994) pointed out four basic steps in conducting a successful focus group:

  1. Planning: It is important to carefully plan the focus group several days ahead to ensure a smooth process and a better outcome; plan by clarifying the purpose of conducting the research, evaluating the resources required, deciding on the method and procedures to use, and finally preparing the questions.
  2. Recruiting: Recruit the participants you will interview so that the results are informative and useful, and choose an appropriate location and time to conduct the session.
  3. Moderating: Have the outlined questions available during the session, conduct the focus group session to gather the needed information, and make sure to record and store all essential data.
  4. Analysis and Reporting: Organize, summarize, and then analyze the data collected from the participants, using whatever analysis software is available; then interpret the records and publish the results (a small sketch of this step follows the list).
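As a small illustration of the Analysis and Reporting step: once focus group comments have been coded by theme, even a simple tally shows which themes dominated the discussion. The themes and comments below are invented examples, and this sketches only the counting step, not a full qualitative analysis.

```python
from collections import Counter

# Tally how often each theme came up, assuming comments have already been
# coded by theme (the codes and comments here are made up for illustration).
coded_comments = [
    ("career growth", "I want a clearer path to promotion."),
    ("training", "We rarely get time for professional development."),
    ("career growth", "Nobody has explained what the next level looks like."),
    ("workload", "Deadlines are unrealistic during the busy season."),
]

theme_counts = Counter(theme for theme, _ in coded_comments)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} comment(s)")
```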

Example of a Retailer Using Focus Groups:

Starbucks is a worldwide-recognized coffee shop and a great illustration of a retailer’s continuous use of focus groups to gain deep insight into customers’ likes and dislikes. The purpose of using focus groups is to enhance their products and ensure their customers’ satisfaction.  Many articles and resources on the web describe Starbucks’ use of focus groups, such as an article posted in July 2009, “Starbucks: Give Your Customers Free Stuff (For a Price)”.

The article introduces the notion of the gold card offered to Starbucks’ loyal customers, which was initially shaped through focus groups.  Focus group participants (loyal customers) expressed a desire to feel important and to be rewarded from time to time for regularly purchasing from Starbucks. To this day there are many benefits and VIP treatments for Starbucks’ loyal customers: holding a gold card shows the barista that the customer is loyal; with every purchase the cardholder earns a point, and upon reaching fifteen points the cardholder gets a free drink. Additionally, the cardholder may get a free cup of coffee when purchasing a bag of coffee beans, free pumps of syrup in their drinks, free refills, and a free cup of coffee when a new coffee is out. As Brad Stevens, Starbucks’ vice president of customer relationship management, says, “We could show we are listening to customer needs by offering value right off the bat.”

In conclusion, the gold card idea produced a positive outcome: “By the end of the second quarter of this year, the promotion had brought in $17.5 million in revenue from 700,000 card purchasers.” These figures are from 2009, but the card is still widely used today, which suggests a continued rise in revenue and positive outcomes that focus groups helped make possible.

Furthermore, one of the most recently posted articles on the web providing updated evidence on the topic of focus groups is “Starbucks draws big crowds.” It describes how the opening of a Starbucks at California State University was a result of focus group participants’ feedback and their suggestion to open a Starbucks on campus. The opening of the on-campus Starbucks drew a large crowd and happy customers.

Related & Useful Links:

Example of research conducted using focus groups:

Sample of focus groups questions:

Visual People: some helpful & informative videos on YouTube that explain focus groups

Resources:

House, E. R., & Howe, K. R. (1999). Values in evaluation and social research. Thousand Oaks, CA: Sage.

Krueger, R. A. (1994). Focus groups: A practical guide for applied research (2nd ed.). Thousand Oaks, CA: Sage.

Krueger, R. A., & Casey, M. A. (2000). Focus groups: A practical guide for applied research (3rd ed.). Thousand Oaks, CA: Sage.

Marsh, A. (2009, July 20). Starbucks: Give your customers free stuff (for a price). CBS News. Retrieved February 6, 2012, from http://www.cbsnews.com/8301-505125_162-51322691/starbucks-give-your-customers-free-stuff-for-a-price/

McQuarrie, E. F. (1996). The market research toolbox: A concise guide for beginners. Thousand Oaks, CA: Sage.


Weiner, J. (2012, February 8). Starbucks draws big crowds. Cal State Monterey Bay News. Retrieved February 6, 2012, from http://news.csumb.edu/news/2012/jan/26/starbucks-draws-big-crowds

As a former English teacher I tend to think in terms of character, plot, setting, and theme and apply them to nearly everything I do.  This allows me to make better sense of the world and examine how things work and how people react to different situations, which is extremely important when it comes to creating a successful user-centered product.  What the designer must do is create a story for the product and for whoever will be using it, and that becomes the story of your business.  The better the story you take the time to write, the better planned your end product will be.  Just as in creative writing, we follow a formula for the creation of the user experience: we create characters and scenery, give the characters a problem to solve, and keep track of the path they used to get there. A great storyteller knows that you must put as much detail as possible into a story to make it not only believable but also great. It is important to note that everyone acts differently in any given situation, and the writer must be aware of this. Everything about a character and the scene they are within affects how they will react to the problem they face. As instructional designers it is extremely important to remember all of this when creating a user experience.

First we must create the characters that will be using the product being designed.  Creating user profiles that fit your users is of the utmost importance when it comes to keeping the user at the center of the design of your product. Essentially, at the bare-bones level, user profiles are “more than just a face put on a list of demographic data; they are entities whom you work with to make a better product” (Kuniavsky, p. 130). Ultimately, you use the data to create the user profile that will most likely use your product, and this profile becomes an integral part of the conversation about the design. This is not an exact practice based solely on data; it is “based on your intuition, your judgment, and the information you have at hand” (Kuniavsky, p. 130). User profiling is a bit of an art and takes some skill to acquire, and over time you will be able to create highly developed profiles that are injected into all parts of the design and development process.

According to Kuniavsky, a typical user profile takes at least three weeks to develop by a team of roughly five to six people (p. 131). The profiling should include as much detail as possible, well beyond simple demographic data. The profiles should provide insight into the user’s life and how the product will fit into it, which involves an arduous process of clustering all of this information using post-it notes and a meeting room. To me this process seems a bit dated and overly reliant on cumbersome tactics. I prefer mind-mapping software like “Inspiration” that lets you easily get a big-picture look at a user profile as it takes shape and allows multiple people to work on the same project. Because of the arduous process described, I went searching for an easier way of working through it and found some interesting resources. One such resource is Dey Alexander Consulting, which has more articles than I could consume specifically about user profiling and another page solely for templates.  One article in particular struck my fancy because it was titled “Customer Story-Telling at the Heart of Business Success”; I believe that this title sums up exquisitely what a user profile is. Each customer is telling the story of the product and your business. You must create a good user profile to truly understand how users will interact with different pieces of the product, because each person will react differently.

This leads into “contextual inquiry,” which in essence is the scenery you are going to inject your user profile into.  The scenery of contextual inquiry is not just where the product will be used but how; according to Kuniavsky, this entire process takes much longer than user profiling and requires a team of roughly the same size.  I will not take the time to review the details of what is suggested, but the key point is that it must be based on data. Data collection for this scene must happen in the real world to better understand where the product will be used.  The main point I gathered from this part is that products may be created in a vacuum, but they are never used in one. During my research I found a wiki called Fluid with many resources on this particular subject. There is also a great website from the UK, called Webcredible, that uses many visuals to explain the process.

The problem presented to the character and the scenery is the “task analysis”: how the character reacts to the problem and ultimately solves it is, at its most barebones level, what task analysis captures. The designer takes note of everything the user will do with or within the product being designed for them, and breaks down why they did it. For example, “why did the user click on this button instead of that one?” A writer does something similar: a story is character-driven, and how the character reacts can sometimes surprise the writer, so the action must be revised to fit the story.  Once the designer has the user go through the entire process of “the story,” the designer must examine every step along the way and revise the product based on this new information.  A great resource for the task analysis of design is UXmatters. This website examines the tasks that are performed on a website we are all very familiar with.

In essence, an entire story must be created for the product and the business. As designers, it is our mission to help create this detailed story, which can help make a product successful.  A good story will help to create a good product and allow the end user to have a rich experience that keeps them coming back and using the product again and again.

 

Bibliography

Dey Alexander. (2009, August 18). Personas. Retrieved February 13, 2012, from http://www.deyalexander.com.au/resources/uxd/personas.html

Dey Alexander. (2009, August 18). Templates. Retrieved February 13, 2012, from http://www.deyalexander.com.au/resources/uxd/templates.html

Hornsby, P. (2010, February 8). Hierarchical task analysis. Retrieved February 14, 2012, from http://uxmatters.com/mt/archives/2010/02/hierarchical-task-analysis.php

Kuniavsky, M. (2003). Observing the User Experience: A Practitioner’s Guide to User Research. San Francisco: Morgan Kaufmann Publishers.

Ogle, D. (2009, May 1). Contextual inquiry. Retrieved February 13, 2012, from http://wiki.fluidproject.org/display/fluid/Contextual+Inquiry

Webcredible. (2009, September 1). Contextual inquiry. Retrieved February 13, 2012, from http://www.webcredible.co.uk/user-friendly-resources/web-usability/contextual-inquiry.shtml

 

 

Posted by: bbannan | February 14, 2012

Interviews … Not as Easy as They May Seem – Mimi

You’ve decided to gather data by conducting interviews.  You, of course, wonder what questions you should ask, in what order you should ask them, and what follow up questions you should have ready.  You also worry about forgetting an important question and what to do if your interviewee gives an unexpected answer or goes off on a tangent.  Do you pull them back or do you let them take you where you did not expect to go?

So, why do interviews?  Interviews can be rich sources of information.  The research questions themselves will drive the interview questions.  The questions need to focus on getting the needed information.  As Goldin (2000, p. 519) explains, the interview questions need “to take into account their research purposes. These may include (for example) exploratory investigation; refinement of observation, description, inference, or analysis techniques; …; and/or inquiry into the applicability of a model of teaching, learning, or problem solving.”  But, interviews alone do not present an entire picture.  Researchers justifiably want to triangulate in order to solidify their conclusions.  Tobin (2000, p. 492) reasons that “if multiple data sources produce a pattern that makes sense, then there is greater confidence that the pattern is not dependent on a particular form of data, such as field notes or interviews.”

So, pertinent questions which are asked in appropriate ways and recorded accurately are the hallmarks of productive interviews.  Pertinent questions are dependent on the research goals.  How do we present the questions in an appropriate manner?   Kuniavsky gives a reasonable list of strategies to employ while interviewing to address these concerns.  Having done a handful of research interviews myself, I can acknowledge that his list is a good one.  However, there are a few other considerations which I would also include.

So that I can avoid repetition, I will state that my first set of educational research interviews were done with four colleagues, all high school math teachers.

Here are my additions:

Do not assume you know what the interviewee means.  During interviews, my colleagues occasionally said to me, “oh, you know what I mean.”  I would ask them to assume that I did not know and then I asked for more explanation.  To my surprise, more often than not, I DID NOT KNOW what they meant.  I was not completely off the mark; but, wow, we were at different shades of grey on many occasions.  Asking them to elaborate and clarify was eye-opening and gave me great insights.  It follows then, that it would probably be even more dangerous and inaccurate to assume that you know what your interviewee means if you do not know them.

Kuniavsky describes the “neutral interviewer.”  Personal bias is a big concern.  “The more involved the researcher is, the greater the degree of subjectivity likely to creep into the observations” (Gay & Airasian, p. 213).  Everyone has their own biases, of course.  Learning to separate oneself from them during interviewing is no easy task.  However, it is important in order to garner the most from what your interviewees have to offer.  In analyzing interviews with science students who were engaging in problem solving, Clement (2000) found that the interviews revealed direct sources of misunderstandings and errors.   Goldin (2000) asserts that one of the values of good research, including interviews, is that it shows us other perspectives, but only if we are open to them.   Our ideas evolve as we learn; and, we learn to ask better questions, to listen more carefully, and, to combine observations with our listening.  Keeping our own opinions and viewpoints out-of-sight and out-of-mind during interviews allows the possibility of gaining perspectives outside of our own paradigms.   Some totally unexpected results are very possible.

Do not engage in conversations with your interviewees.  Johnson  & Christensen (2008) suggest establishing trust and rapport with your interviewees at the beginning of the interview process.  Starting the interview with fairly simple and nonthreatening questions is advised by Ary, Jacobs, & Razavieh (2002).   However, the researcher needs to keep her/his eye on the goal of the interview.   Falling into a comfortable conversation can happen very easily.   A classmate in my qualitative research class (EDRS 812) with Dr. Maxwell, told the class that she made an index card for herself with two words on it: SHUT UP.   She was interested in language and cultural studies and found herself so interested in what her interviewees were saying that she wanted to have conversations and discuss issues.  She used that card to help restrain herself.  It is difficult, however, especially when dealing with interesting people or people who you know well (not implying these are mutually exclusive groups).  But, the interviewee needs to be doing 99% of the talking.  Actually, you need the interviewee to do 99% of the talking.  “Listen more, talk less.  Listening is the most important part of interviewing” (Gay & Airasian, p. 213).   Keeping your opinion to yourself can be a difficult task; but, it is the bedrock of effective interviewing.  Researchers must  refrain from expressing approval, surprise, or shock at any of the respondents’ answers (Ary,  Jacobs, & Razavieh, 2002).

Lesh et al. (2000, p. 606) observed that, for some interviewers, “the ratio of ‘researcher talk’ to ‘student talk’ was nearly one-to-one;” but, for other equally adept interviewers, the “researcher talk” time was significantly less.  They based their research on the latter because the goal was to have the participants “reveal explicitly a great deal of information about their evolving ways of thinking.”  I infer from this that the more loquacious interviewers were not encouraging their interviewees to be as evocative about their thinking – largely because of the interviewees’ reduced air time.

Ask your interviewee her/his opinion of the interview.  Being an interviewee, as we all have been at least a couple of times, I have sometimes walked away from an interview wondering why certain questions were not asked of me and why obvious follow-up questions were not pursued.  So, my third suggestion is to get the interviewee’s opinion.  There are several ways to do this.  I was fortunate that the first several research interviews which I conducted were with colleagues.  At the end of the interview, with the recorder turned off, I asked how I could improve my interviewing.  One of my colleagues said that she was desperately trying to find the “right” answer (even though I repeatedly told her that I wanted her opinion and experiences) but that she could not read anything in my expression (or lack of).  This made me happy.  I did not want to influence her.  Another colleague told me that I only let a certain amount of silence occur.  He said that he liked to contemplate; but, after about 15 seconds or so, I would rephrase the question to elicit a response from him.   This did not make me happy.  I realized that I may have rushed him and missed out on some good insights.  But, I learned from that.

If asking your interviewee to give their opinion of your interviewing techniques is uncomfortable or inappropriate, you could always end the interview by asking if there was anything they wanted to talk about or anything you missed.  A follow-up email or survey could serve you well, too.

Reflect, reflect, reflect.  Asking questions so that valid responses are obtained and recording these responses accurately and completely, as suggested by Ary, Jacobs, & Razavieh (2002), seems almost too obvious to mention.  But, what should we be recording beyond the interviewee’s responses?  Hall (2000, p. 656) observes that “interviewing people raises all the usual problems of distinguishing what they say from what they actually do.”   Kuniavsky advises that “people won’t always say what they believe” and that the researcher should “watch for clues about what they really mean.”  Assuming that the two are not necessarily the same thing, then the idea of observational data becomes germane.  Creswell (2008) states that, at the end of an interview, the wise interviewer engages in “thanking the participant, assuring him or her of the confidentiality of the responses, and asking if he or she would like a summary of the results of the study.”  Additionally, it is wise to make notes after the interview to memorialize your thoughts and impressions.  You think you will remember; but, you will not.  Those little observations are fresh in your mind right after the interview.  You will thank yourself for taking a few minutes to write them down.  I was surprised by how much I had forgotten when I read my reflective notes.  Creswell (2008, p. 412) advises researchers to “write down comments which help explain the data, such as the demeanor of the interviewee or specifics about the situation, or personal feelings about the interview.”   I found this to be very valuable.

References

Ary, D., Jacobs, L. C., & Razavieh, A. (2002).  Introduction to research in education (6th Ed.).  Belmont, CA: Wadsworth.

Clement, J. (2000).  Analysis of clinical interviews: Foundations and model viability.  In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 547-589).  Mahwah, NJ: Erlbaum.

Creswell, J. W. (2008).  Educational research: Planning, conducting, and evaluating quantitative and qualitative research (3rd Ed.).  Upper Saddle River, NJ: Pearson.

Gay, L. R., & Airasian, P. (2003).  Educational research: Competencies for analysis and applications (7th Ed.).  Upper Saddle River, NJ: Merrill Prentice-Hall.

Goldin, G. A. (2000).  A scientific perspective on structured, task-based interviews in mathematics education research.  In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp.  517-545).   Mahwah, NJ: Erlbaum.

Hall, R. (2000).  Videorecording as theory.  In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 647-664). Mahwah, NJ: Erlbaum.

Johnson, B., & Christensen, L. (2008).  Educational research: Quantitative, qualitative, and mixed approaches (3rd Ed.)  Los Angeles, CA: Sage Publications.

Lesh, R., Hoover, M., Hole, B., Kelly, A., & Post, T. (2000).  Principles for developing thought revealing activities for students and teachers.  In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 591-648). Mahwah, NJ: Erlbaum.

Tobin, K. (2000).  Interpretive research in science education.  In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 487-512). Mahwah, NJ: Erlbaum.

Posted by: bbannan | February 8, 2012

The User Experience – Gloria

In chapter 4, Mike Kuniavsky introduces the concept of user experience by defining the user experience as continuous.  In his introduction to the chapter he states that “What they [the users] understand, affects not just what they can accomplish, but what attracts them to the product, and what attracts them to the product affects how willing they are to understand it”.   Although he makes it clear that defining user experience is a difficult task, he offers three general categories of work that may be helpful in understanding and creating the user experience for information management products: Information Architecture, Interaction Design, and Identity Design.

My husband, Bob, considers himself to be one of the most technically challenged people around today.  After much complaining about new technologies he becomes a late adopter and in the end adapts quite well.  About a year ago, Bob’s company provided him with an iPhone. Due to the clean and elegant interface of the iPhone, he was able to figure out how to download apps, set up his email account, add contacts, send text messages and emails, browse the web, and make phone calls.

Information Architecture is the implicit or explicit structure used to organize information in order to help users find what they need. Information architects aim to make implicit information explicit to the user by creating patterns, so that users understand and intuitively know what to do within the structure (with minimal frustration or getting lost).  Information architects pay special attention to their target audience: what they need and expect, what they think and understand about the structure and task, and the use of appropriate terminology and keywords. Demographics, web use profiles, appropriate terminology, and the audience’s mental models are key elements for architects as they create their product’s information architecture to maximize their users’ understanding and interest.  Profiles, surveys, contextual inquiry and task analysis, card sorting, and diaries are effective tools and techniques often used to gather information about the target audience and how they think.
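Card sorting in particular yields data that is commonly summarized by counting how often participants place the same cards in the same group. The sketch below shows that idea only; the card labels and the three participants’ groupings are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Count how often each pair of cards was grouped together across participants.
card_sorts = [
    [{"Hours", "Location"}, {"Tuition", "Financial aid"}],              # participant 1
    [{"Hours", "Location", "Parking"}, {"Tuition", "Financial aid"}],   # participant 2
    [{"Hours"}, {"Location", "Parking"}, {"Tuition", "Financial aid"}], # participant 3
]

pair_counts = Counter()
for groups in card_sorts:
    for group in groups:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

# Pairs grouped together most often suggest categories users find intuitive.
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```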

At MC, the target audience for professional development tutorials is in essence a captive audience of college faculty. Our department tries to determine the need by listening to faculty or receiving a direct request from department deans. After every workshop, an online survey provides the opportunity for every participant to provide feedback about what worked well and what did not work so well.  The survey also offers a textbox for suggestions or comments.   

Interaction Design does not really have a single user interface. Rather, “the interface can be thought of as everything that goes into the user’s immediate experience: what the user sees, hears, reads, and manipulates” (Kuniavsky, pg. 48). Presenting information in the clearest way for the user is a key task for designers, one that requires collecting very specific information about interaction and paying special attention to task flows, interface predictability, emphasis of key interface elements, and different audiences such as first-time users.

During the interaction design phase, architects rely on tools such as task analysis, focus groups, usability testing, and log analysis to determine the user’s interaction sequences, priorities, understanding, and general use.

I have noticed from using the device that the high-definition touch screen interface of the iPhone lends itself to all users, from the average to the sophisticated.  It is an all-in-one device that can be used as a mini-computer. I think the multi-touch screen is very attractive, and controls such as slide, drag, stretch, and pinch are easy to use.

Identity Design. “The style, the feeling, and the vibe” (Kuniavsky, pg. 50)

“The identity is the combination of what a site does, how it looks, what associations it evokes, its editorial voice, and how it emphasizes certain features over others” (Kuniavsky, pg. 50).

To make a strong identity, designers are consumed with getting it right by using the most effective editorial voice, visual themes, site features emphasis, and brand recognition.  These are elements that work together to set the site apart from its competitors by creating a unique and pleasant experience and a strong impression for the user.

For a lasting and memorable identity design, the designer focuses on anticipating the user’s immediate experience and emotional responses after researching and understanding the current users, the direction of the user’s attention, and the product’s references and associations, and after comparing the competitive strengths of the product.  User needs research data can be gathered from focus groups, surveys, and competitive analysis.

Creating a good product requires extensive and continuous research that can provide the most accurate and updated insight into the user’s preferences and needs.

Apple has done a great job of creating a strong identity design. Since I had been contemplating investing in an iPhone for years, I conducted my own little research study by asking every person I saw with an iPhone if they were happy with it.  The answer was a consistent “yes”.  My next question was “Is there anything you don’t like about your iPhone?” and the answer was a consistent “no”. The truth is, I have yet to find one person who does not like their iPhone. It is no surprise that iPhone users are the most brand-loyal, with 84% stating in a recent survey that they will purchase another iPhone as their next handset.

http://www.telegraph.co.uk/technology/apple/8915861/Apple-iPhone-users-most-brand-loyal.html

 

Figure 2 shows the numerous aspects and interactions that can influence the user experience.

 User Experience Graphic

A user experiences things in interaction with a product in a particular context of use, including social and cultural factors (Arhippainen, pg. 5).

 

Suggested Readings:

  1. A User-Centered Approach To Web Design For Mobile Devices by Lyndon Cerejo
    http://www.smashingmagazine.com/2011/05/02/a-user-centered-approach-to-mobile-design/

  2. The Importance of User Experience- the Poster!
    http://www.demystifyingusability.com/2006/09/the_importance_.html

  3. Project Rethink: The Circle of Google and User Experience
    http://www.projectrethink.org/tag/user-experience/

Resources:

Observing the User Experience: A Practitioner’s Guide to User Research by Mike Kuniavsky

Capturing User Experience for Product Design by Leena Arhippainen. University of Oulu. Link to document

Forlizzi, J. Towards a Framework of Integration and Experience As It Related to Product Design. [Web-document] Available: http://goodgestreet.com/experience/theory.html

Posted by: bbannan | February 8, 2012

The Research Plan – Boshra

The Research Plan

Why do designers need a research plan? Well, people who start their day with a piece of paper that says “Things to do” will know the answer. A research plan provides you with a list of goals to accomplish, organizes your time, and prioritizes your effort (Madrigal & McClain; Kuniavsky).

Planning the research can be the most difficult part: many people are affected by the research, and each demands different goals and tasks from the product; you have limited time and budget; and you need to make an optimal contribution to the product before its release.

In his book Observing the User Experience and in his blog post Crafting a User Research Plan (a great summary of chapter five), Kuniavsky explains the process of building a research plan in an easy, step-by-step way. He identifies three main components that should be covered in a research plan: goals, budget, and schedule. He thoroughly explains each part and then provides a good example of a research plan at the end of the chapter.
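To make those three components concrete, here is a minimal sketch in Python (my own illustration, not a template from Kuniavsky’s book; every item name and figure below is invented) showing how a small research plan could be captured as structured data:

    # A minimal research-plan sketch holding the three components Kuniavsky names:
    # goals, budget, and schedule. All entries are hypothetical examples.
    research_plan = {
        "goals": [  # research questions tied to stakeholder needs
            "Can first-time users complete a search in under one minute?",
            "Which features do stakeholders consider must-haves for launch?",
        ],
        "budget": {  # hypothetical line items, in dollars
            "participant incentives": 200,
            "online survey tool": 0,
        },
        "schedule": {  # hypothetical activities and target weeks
            "stakeholder interviews": "week 2",
            "usability test": "week 6",
        },
    }

    total_cost = sum(research_plan["budget"].values())
    print(f"{len(research_plan['goals'])} goals, estimated cost ${total_cost}")

Even a tiny structure like this makes it easy to check that every goal has a matching entry in the budget and the schedule before the research begins.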

Starting your research plan with a list of goals means two important things: (a) you will begin communicating with the participants who are directly affected by the product, and (b) you will combine the users’ experience and the company’s goals to establish a list of research questions.

Daniel Szuc suggested some components to keep in mind while working on a research plan:

  1. Choosing the right research method at the right time
  2. Understanding how your product fits into a larger product and business strategy
  3. Building on research success and showing your value
  4. Making everyone on the product team own the research outcomes
  5. Bridging research results back into improving the design
  6. Ensuring your research results get implemented and improve products

Goals

Kuniavsky argues that stakeholders should be aware of your research goals and needs so that they can value what you do and contribute time and data.

After creating a list of potential stakeholders and converting their needs and goals into research questions, you will probably have a long list of issues that would be time consuming to address. Kuniavsky suggests a method for prioritizing your goals. Janice Fraser also wrote a blog post called Setting Priorities that is helpful for designers who are new to research. Her method consists of four steps that take time but are worth doing, and these four steps can be combined with Kuniavsky’s table by adding an extra column for severity.

Step 1: Make a “Big List of Things To Do.”

“The Big List should include features you want to add, changes to make, new sections, and so on. Do this collaboratively… through brainstorming with coworkers and managers”.

Step 2: Organize your list according to Dependencies and Baseline items.

“Dependencies — You can’t launch X until you have Y.”

“Baseline — small number of things that you absolutely, positively must have in order to launch the project”.

Step 3: Have the appropriate coworkers score each item.

Ask stakeholders to score items in the list based on the following measurements:

Technical Feasibility: a score of 1 means low feasibility (hard, expensive, or time consuming); a score of 5 means it is easy to implement at an affordable cost.

Creative Feasibility: availability of content and creative staff.

Importance to the User: based on data gathered about user experience or from stakeholders who work closely with users.

Importance to the Business: have a business or marketing manager provide this score.

Step 4: Start implementing the issues that score high on feasibility, importance, and severity. Issues with low scores should be removed from the list. Figure 1 is an example of a list of prioritized goals.
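To make the scoring step concrete, here is a minimal Python sketch (my own illustration, not from Fraser’s post; the items and scores are invented, and a plain sum stands in for whatever weighting a real team might prefer) showing how the feasibility, importance, and severity scores could be tallied to rank the Big List:

    # Hypothetical Big List items, each scored 1 (low) to 5 (high) on:
    # technical feasibility, creative feasibility, importance to the user,
    # importance to the business, and severity.
    items = {
        "Offline mode":       (2, 3, 5, 4, 5),
        "Social media share": (5, 4, 2, 3, 1),
        "In-app glossary":    (4, 5, 4, 2, 3),
    }

    # Rank items by their total score, highest first.
    ranked = sorted(items.items(), key=lambda kv: sum(kv[1]), reverse=True)

    for name, scores in ranked:
        print(f"{name:20s} total={sum(scores):2d}  scores={scores}")

In practice the individual scores would come from the coworkers identified in Step 3, and the ranked output corresponds to the kind of prioritized list that Figure 1 illustrates.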

Budget

It is wise to calculate a research budget before implementing the research. For our educational projects, since we are not actually going to develop the mobile apps, equipment cost is our biggest challenge; we try to find tools with little to no cost for running our user research. Kuniavsky recommends starting the research with low-cost techniques and grouping research questions around them.

Schedule

It is important to make user research fit the current design cycle. Kuniavsky suggests a list of research methods that can be used at different points in the design cycle. However, if the product is in its final stage and you are asked to conduct user research, Kuniavsky advises against it: you will not have the time to make effective changes!

Demetrius Madrigal and Bryan McClain wrote a concise but thorough blog post, Planning User Research Throughout the Development Cycle, about conducting user research in different development phases.

Toward the end of each round of user research, Kuniavsky recommends revising your goals and questions as new information arrives from users and stakeholders. Madrigal and McClain also suggest changing your research method if it is not providing the information you are seeking. Steve Jobs was famous for saying, “People don’t know what they want until you show it to them” (Madrigal and McClain). Therefore, apply changes to the product based on research findings as much as possible, and keep stakeholders and users involved in the product’s new version.

References

Kuniavsky, M. Crafting a User Research Plan. http://www.adaptivepath.com/ideas/e000107

Fraser, J. (2002, April 23). Setting Priorities. http://www.adaptivepath.com/ideas/e000018

Madrigal, D., & McClain, B. (2011, December 5). Planning User Research Throughout the Development Cycle. http://www.uxmatters.com/mt/archives/2011/12/planning-user-research-throughout-the-development-cycle.php

Szuc, D. (2009, July 6). Finding Gold in Your User Research Results. http://www.uxmatters.com/mt/archives/2009/07/finding-gold-in-your-user-research-results.php (This article includes excellent sample questions for starting a user research plan.)

A great example of usability testing, with excellent sample questions and charts: https://sites.google.com/site/superuserfriendly/information-architecture—web-applications/usability-testing-results

Posted by: bbannan | February 1, 2012

Iterative Development – Beth

In Chapter 3, Kuniavsky discusses the concept of balancing the needs of the user against those of the other stakeholders involved in product development. The definition of a successful product (whether an application, website, or training package) differs among those stakeholders. A user looks for a functional, efficient, and desirable product. Advertisers want a product with high traffic and high awareness. A company wants a product that is profitable and promotable. In his article The seven habits of effective iterative development, Cardozo (2002) encourages designers to adopt a proactive attitude in order to develop partnerships with stakeholders. During the development process, designers should consult with all stakeholders to ensure the product is as successful as possible.

Iterative development is the process of refining a product through trial and error. During this semester, our groups will follow an iterative design process: in each version of development, we will examine, define, and create our mobile applications. During examination, questions will be asked, needs will be analyzed and evaluated, and possible solutions will be discussed. Cardozo (2002) advises designers to “seek first to understand, then to be understood.” Designers must understand the objectives and needs before determining solutions, and all stakeholders should be considered and consulted to understand their points of view. A designer who does not understand the needs of the stakeholders will not be able to create a successful product. Once the designers agree on solutions, those solutions are defined in greater detail, and as details emerge, design creation begins. Developers should be flexible and adaptable and should share a vision throughout the iterative development process, as there is no single right way to develop a product. I hope that after completing EDIT 732, each team will have developed a good understanding of the strengths and weaknesses of each team member; according to Cardozo, understanding how to work within a team is critical to success in iterative development.

Melzer et al. (2009) suggest that the design of mobile learning applications benefits from an iterative design process. They go on to suggest that, because an iterative design process involves student participation, it bridges the gap between classroom learning and experiences outside the classroom.

An example of iterative development is described by Kuniavsky at the end of Chapter 3.  Another example of an iterative design process can be found here: http://www.educause.edu.mutex.gmu.edu/EDUCAUSE+Review/EDUCAUSEReviewMagazineVolume46/MakinganApp/238422

After reading Chapter 3 and learning about the iterative design process, I realized that I follow this process at work but had no idea that I was doing so. My schedules for training development are divided into three phases: storyboard development, alpha development, and beta development. Training is developed in stages to ensure that all ideas are not delivered at once; Cardozo (2002) refers to this as developing a delivery habit to ensure that progress is made. At the end of each development period, I sit down with my client and a select group of users to obtain their input on the product. Changes are identified, and I work to incorporate them into the next version of my design. My process is cyclical, and although it takes time to find willing users to test my product, I have always felt that each piece of input I receive contributes positively to the final product I deliver.

References:

Cardozo, E. L. (2002). The seven habits of effective iterative development. http://www.ibm.com/developerworks/rational/library/1742.html

Fontana, A. (2011). Making an App. EDUCAUSE Review Magazine, 46(6), 108-109.

Melzer, A., Hadley, L., Glasemann, M., Günther, S., Winkler, T., & Herczeg, M. (2009). Iterative Design of Mobile Learning Systems for School Projects. Technology, Instruction, Cognition & Learning, 6(4), 235-251.
