The latest interview in ACIA with Seth Gordon is full of nuggets of knowledge.

On IA:

“As the web consulting industry started to take shape, firms tried to create proprietary approaches that would separate them from the competition by giving the perception of a more disciplined and scientific focus on the work. It seems that just about all of the approaches were similar except for a few nuances or catchy acronyms. I’ve been in Thailand for a few weeks, and there is a common expression that I think perfectly fits some of the formalized IA approaches: ‘Same, same but different.’”

On Usability:

“In a misguided effort to measure the effectiveness of an architecture, many researchers assess variables such as time on task (how long it takes a user to complete a given task) and error rate and recovery (the number of errors and how users recover). While these may be relevant in certain situations, like defusing bombs or responding to 911 calls, I think they can be misleading when trying to measure the average user’s experience on the Internet.”

He goes on to recommend two other metrics that we’ve also been using at CIQ: frustration and misguided confidence (which he calls “Confidence of Accuracy”). Check it out!


Add Yours
  1. I also differ with the interview in that, from my view, the past few years tended to segregate elements of the IA/UX/usability research tasks into separate jobs. It seems that many of these roles have since merged back into single positions, or fall under one role in the process. The IA, it seems to me, is often the one who guides and performs the user testing research, evaluates the outcome, and ties the findings to tasks.

  2. Seems to me that most IAs will have to study software architecture/design in order for their skills to remain relevant in 2+ years.
