ROI and Design

A particularly good review in Boxes and Arrows this week: Report Review: Nielsen/Norman Group’s Usability Return on Investment, in which messieurs Merholz and Hersch take NNG to task:

“the report methodology is so fundamentally flawed that any financial analyst worth her salt would immediately question its findings.”

Bad statistical analysis and DOE procedure is probably the worst critique that could be leveled against something that purports to be quantifiable. The book might still be useful, perhaps, if one wanted to wave it about while talking to management, since Nielsen and Norman are two of the few names known outside the practice. But then you’d be chancing that the managers might read it, and then you’d look foolish. There was much grumbling around Y! when our copy made the rounds. I still feel Cost-Justifying Usability is the place to put your dough, be you designer or evaluator. After all, once usability testing reveals the problems, it’s design that fixes them. This marriage is the magic.

Larger questions arise, of course. As the “ROI us” movement gains steam, a few dissenters are starting to push back. Designers are often once removed from critical design decisions, and have trouble fully owning the results of their work (sometimes to their benefit, sometimes to their detriment). Design also often produces effects that are subtle and hard to measure, or that need to be measured over longer stretches of time, something hard to convince people to do in this ever faster-paced environment.

Intense and frequent measuring can also make design a slave to tiny jumps in quick numbers, and encourage treating the page as a series of tiny components to be optimized. Looking at Amazon lately, I wonder if their increasingly disjointed design is a result of their A/B testing. Sometimes you have to step back from the daily data, look at the design system, and take the leap of faith that a coherent design will create a long-term positive user experience, and go for it.

Now, obviously, one can choose to measure this too. One can run longer tests and discover whether what I’ve said is true. But will companies do so? If you practice data-driven design, what data drives you? Is it the right data? Is it enough data? Is it good data?