required reading

If you run usability tests, you need to read Usability Testing: You Get What You Pay For. I’m not sure about many of the conclusions she draws from the fact that different usability companies charge different rates and find different problems (Jared, maybe you have some thoughts on this). Her argument seems to boil down to “people are charging too little, so they must be doing it wrong.” I have no doubt a lot of people are doing it wrong; I’ve seen most of the mistakes she lists myself. But I suspect there is more to the differences in results and cost than simply right and wrong.

Her assessment of common mistakes made by novice usability testers, and how to correct them, is dead on. It is well worth reading and seriously asking yourself, “Am I guilty of this?” It’s rare to find a practical article with directly applicable advice; this is one. It reminds me of one of my favorite books, By People, For People. If you don’t have this book, I encourage you to pick it up. It deals with many of the questions Mayhew raises, such as sample size and how to avoid influencing think-aloud protocols.

1 Comment

  1. Melissa Bradley

    Mayhew’s article, a really good one, isn’t about the cost of usability testing per se. But it clearly warns about the difference between interpreted conclusions and “straight” data collection, a difference that can reveal much about the biases and pitfalls of untrained or novice testers.

    There’s an implied argument that better testing, with strict protocols and experienced, sensitive testers, will produce “truer” results. In that sense, the wheat will separate from the chaff: because accuracy can command a higher price, you WILL eventually “get what you pay for.” But all of this is common sense, or the American way, or at most implied, never stated outright.

    Christina, do you see any fundamental barriers to solving the problems Mayhew describes? Improving protocols, critical thinking and heuristics will increase the validity of the data collected, and there are many easily-referenced articles on improving survey design, task analysis, etc.

    How can testers best remove their blinders, their biases, their “fatal flaws”? What would be their incentive, and how would testers know they need help in the first place?

    I’d like to hear your thoughts. Thanks for pointing to her article!
