yes or no? maybe…


In which we discover that two companies decided to find out whether online branding is possible, useful and/or effective; and did so by creating a fake brand, YesSirNoSir.
The results are on the site. Also check out the related article Case Study – Online Branding? Yes Sir!


  1. 1

’course they don’t have a clue what branding means. but most don’t, so it doesn’t make them especially moronic.

  2. 2

    What I like is how everyone is glossing over an important stat from the study: The exposed group stated they were no more likely to purchase than the control group.

    If an advertiser doesn’t generate more sales, why spend the money on advertising?


  3. 4

the thing that i think everyone is missing is that all this data is missing any actual field results of any kind. the fact that they said they weren’t more likely isn’t really terribly meaningful all on its own. ask anyone… no one, by their own reckoning… is influenced by advertising. the whole thing is just a bunch of promotional hot air being blown by a desperate company trying to breathe some life into a terrifically anemic online ad market.

you can’t blame them for trying. but to take it more seriously than any other piece of self-promoting puffery seems like a mistake.

  4. 5
    Eric Scheid

    although of course we all know not to completely believe anything a test subject “states” as to future intentions and actions, right?

so either they need to follow up with an experiment that shows that action, or cite some other study showing that, as a general rule, brand awareness correlates with purchasing actions.

  5. 6

    Sorry, Eric, couldn’t let this one go 🙂 You said: “…or show some other study that correlates brand awareness with purchasing actions…” Surely you can’t be doubting the correlation between brand awareness and purchasing behaviour?

    About the self reporting issue: I’m not that familiar with measuring brand, but they say they are measuring “Brand Awareness, Message Association and Purchase Intent”. I’d agree Purchase Intent measuring is pretty dodgy if you want to get numbers on how many people will purchase, but they are measuring the *difference* between people who saw the ad and people who didn’t, not the actual numbers. So it’s not that bad. For the other measures (as far as I understand them), I don’t see a problem using self reporting.

    I’d say the whole self reporting issue is one of the reasons why there is such a fundamental gap between market researchers and usability people somehow: seems like marketers only want to use self reporting, and usability people only want to use observation. But why limit your data gathering methods?

    (And let me just agree up front there are too many dodgy marketers out there who don’t have a clue what they’re doing. I’ve worked with them. But if usability ever took off like branding did it would soon be the same.)

    It seems to me that this study, as far as I can evaluate it with my limited knowledge of brand measuring, stands. The methodology isn’t described in much detail (which undermines its credibility) but what’s there isn’t enough to condemn them.

Any thoughts? Hackles, what do you mean “of course, they don’t have a clue what branding means”? Are they wrong in measuring “brand awareness and message association”? (I’m not even sure what these mean exactly tbh)

    Time for my medication! (damn flu)

  6. 7

    not wrong, just presenting a very incomplete view. your points about self reporting are the points i was trying to make. hear, hear.
they are basing their crowing on recall, presumably aided recall. aided recall is not equivalent to branding. the other results didn’t strike me as really very impressive, plus they didn’t seem like a very robust set of metrics.
    good marketing people, in my experience, both use and demand both kinds of data – observational and self-reporting.
    aided and unaided recall, primacy, and other brand factors are all perfectly legitimate and helpful tools which are properly collected by interview or survey. even brand perceptions and brand associations are helpful and may be gathered through self-reporting, though the refinement of the instrument used to collect this data becomes magnitudes more important.
    my biggest issues aren’t with what they’ve collected, but what they haven’t collected, and their logical leaps in reporting ‘branding success’ chiefly on some increased awareness numbers.

  7. 8

    Signing up on the YesSirNoSir web site got me a powerpoint presentation in the mail. (Thank god for Autoresponders, no?)

So, the powerpoint claims a 175% increase in brand awareness. An impressive claim, until you see the raw numbers.

    They went from a 4% awareness to an 11% awareness.

That means that 89% (or about 9 out of 10) of the people they surveyed were unaware of the brand, despite the advertising.

    If my kid brought home a test paper with an 11 for a score, I wouldn’t be happy, even if it was raised from a 4.

When you deal with small numbers, any improvement looks like a massive improvement. In my experience, making a big deal about a 175% improvement over a base of 4 is an act of desperation.
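The arithmetic behind that headline number is worth making explicit. A minimal sketch, using only the raw figures quoted above (4% and 11% awareness):

```python
def lift(control: float, exposed: float) -> dict:
    """Compare the relative and absolute lift between two awareness rates,
    both given as percentages of respondents."""
    return {
        "relative_lift_pct": (exposed - control) / control * 100,  # the headline "175%"
        "absolute_lift_pts": exposed - control,                    # percentage points gained
        "still_unaware_pct": 100 - exposed,                        # how many never noticed
    }

result = lift(control=4.0, exposed=11.0)
print(result)
# relative lift is 175%, but the absolute gain is only 7 points,
# and 89% of respondents remain unaware of the brand
```

Reporting the relative figure without the base rate is how a 7-point change gets dressed up as a near-tripling.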

    On the subject of self reporting:

    Like any instrument, it has tolerances, biases, and recording errors. Self reporting is a useful data collection tool. Anyone using it had better know what the data means.

    In our work, we use a lot of self reporting data. We also correlate it to actual behaviors. We’re actively trying to identify those self reporting factors which can be reliable predictors of behavior and which ones aren’t.

    For example, after using a web site, it turns out that people’s rating of the speed of the site is not a predictor of the actual speed of the site. People will rate fast sites as slow and vice-versa.

    However, the perceived speed of the site is a predictor of whether users completed their tasks. If users accomplish what they came to accomplish, they are very likely to rate the site as fast. If they don’t, they rate it as slow.

    This means that we could potentially use the self reporting data of site performance to tell us if users are completing their tasks or not. However, we can’t use it for download times.

In our research we’ve never found a correlation between brand awareness and purchase behavior. It’s not a predictor. (Someone can be well aware of the Nike brand but have never bought a Nike product. People often buy products for which they have no previous brand knowledge.)

    There is a high correlation between brand engagement and repeat purchasing behavior. Brand engagement is far more complex than brand awareness. We think instruments like the Gallup CE^11 metrics can be an accurate predictor of brand engagement.

    We’ll be talking about our research into measuring brand effects at the UIE Research Forum, which is part of User Interface 6 West on March 21st in Burlingame, CA.

  8. 9

    regarding site speed observations, what people think is the fact is vastly more important than the fact. this is often true.

  9. 10

    That’s interesting stuff. Does anyone have pointers to proper research related to or an overview of methodologies to measure branding?

    I’ve read the Aaker book (“Building strong brands”) which has a chapter on measuring brand equity that’s pretty good, but I felt myself wanting more depth, more different perspectives.

Jared, you say you have found no correlation between brand awareness and purchase behaviour. That seems to go against common sense… unless brand awareness has no correlation with brand engagement. Can there be brand awareness without brand engagement? (I’m not familiar with that term; I’m assuming brand engagement means how much someone identifies with a brand?)

Comments are closed.