Posted by: bbannan | February 15, 2009

Human Nature and Surveys – Tom Sakell Washington Post article

Over the weekend, The Washington Post ran a story on how two states differently tracked the number of female crabs caught in the Chesapeake Bay in 2008. Because each state collected information using different metrics, they came to two different conclusions: Maryland said the number of female crabs harvested grew 14%, while Virginia said its crop had dropped 37%.

At stake is a possible change in the quota allowed for crabbers in future years, and the crab population in the bay: More females mean more future crabs.

The Maryland survey had a usability flaw, according to The Post. Maryland crabbers were uncertain whether to tally crabs that were caught but thrown back as captured-and-thrown-back or as captured-and-sold. A Maryland state agency discovered the discrepancy when it polled seafood dealers and saw that the number of harvested female crabs had actually dropped by about 25%.
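The scale of this kind of error is easy to see with some invented numbers. The sketch below is a hypothetical illustration only; the trip figures are assumptions, not data from the article. It shows how a form with a single ambiguous "crabs caught" tally can inflate the reported harvest when thrown-back crabs are lumped in with crabs that were kept:

```python
# Hypothetical illustration (numbers invented, not from the article).
# Each trip record: (kept, thrown_back) counts of female crabs.
trips = [(40, 25), (55, 10), (30, 45)]

# Ambiguous form: one tally, so thrown-back crabs get counted as caught.
ambiguous_total = sum(kept + thrown for kept, thrown in trips)

# Disambiguated form: a separate field for crabs actually kept and sold.
harvested_total = sum(kept for kept, _ in trips)

print(ambiguous_total)   # 205 -- looks like a bigger harvest
print(harvested_total)   # 125 -- the actual harvest
```

With these made-up numbers, the ambiguous form overstates the harvest by 64%, which is the same shape of error the article describes: a question that fails a basic usability test produces a number that points in the wrong direction entirely.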

“This isn’t unpredictable, and . . . I’d argue it’s human nature,” Gina Hunt, deputy director of the Maryland Fisheries Service, told The Post.

I’d argue that usability testing could vastly improve the crab form for watermen, and that focus groups could help determine the best ways to communicate with this demographic.

How would you take human nature out of this survey?



  1. Since this experience had usability flaws, we can use them to talk about the nature of a usability study and how it would help with the problem of human nature.

    One of the reasons for usability testing is its very nature. Given that you have to employ some method for the testing process, it is necessary to have both controls and management of that process. Let’s look at the Maryland problem.

    One of the flaws in this particular experiment is evident in the report: “Simns said his association had instructed watermen to ‘somehow’ record the crabs they had caught but thrown back.” From the content of the report, it is evident that there was a great deal of misunderstanding among the participants concerning the outcome of the study. There is no evidence in the article of any effort to test, through a usability study conducted beforehand, the soundness of the actions the watermen were being asked to take.

    When administering usability testing, I think the question should be, “How do you take human nature into consideration and compensate for it?” If a usability study is to be properly set up, there must be a defined protocol; each participant must be fully aware of the protocol and agree to it; and both the participants and the testers must have a thorough understanding of it. For example, if this had been a usability test, the watermen’s belief that the results would affect their future earnings guaranteed that the results would be tainted; that, in turn, would have given the testers valuable insight into ways to compensate when the method was used with the general population.

    A second problem with this ‘study’ is the lack of proper procedures on the part of the people in Maryland who were conducting the survey. Early on, through monitoring and random interviews, they should have begun to see problems. It seems, from the article, that none of this happened. While it is not wise to attempt to direct a study, it is good practice to look for and recognize the warning flags that place a study in jeopardy.

    Now for the Virginia report. While the article stated that “Virginia officials had no such trouble,” it is hard to say much about that claim. We simply don’t know the extent of the protocol followed in Virginia. Without comparison information, or any information on the method the participants used in making decisions and recording results, no further comment can be made. The comment by Bob O’Reilly concerning the decision not to use 2008 data is interesting; however, we don’t know what the participants were told the study would be used for.

    All of this said, I would venture that the actions taken by Maryland did not meet the criteria for a usability study, and in fact indicate that no such testing took place prior to the real-time use of the surveys. The action being ‘tested’ was not controllable and did not represent a pilot for a set of parameters under consideration for future use. I expect the same is true for Virginia.

    Finally, as I indicated in the fourth paragraph, the idea of compensating for the impact of ‘human nature’ is valid when conducting usability testing; in fact, if you factor it into the process, it can indicate how the protocol for the product or method being tested can be improved.

  2. Thanks for your answer, Ed. This article raised a few questions for me.

    > How were the tests given: online, pen-and-paper, multiple choice, open answer?

    > It sounds like there were some qualitative answers where quantitative answers were needed.

    > Were these surveys intended for one purpose, with the state agency trying to extrapolate answers to other questions from them?

    I don’t have access to the waterman surveys, Maryland or Virginia. I guess I was put off by the implication in the article that the watermen were trying to influence the outcome of a possible quota change by giving bad information. After reading the article twice, I realized it was a one-source story: only the state agency was quoted, and only the state agency surmised that the watermen might have tried to influence the outcome.
