Last week Anthropic released findings from their new AI Interviewer tool. Among the headlines: people are optimistic about AI in business.
I want you to sit with that for a second.
Anthropic interviewed people using their AI product about whether they’re optimistic about AI. And discovered that… they are.
This is like polling people at a Tesla dealership about whether electric vehicles are the future.
The Anthropic Interviewer runs on Claude. Which means every single respondent is someone who not only uses AI regularly, but specifically uses Claude rather than ChatGPT, the default choice for most people. Claude users skew technical, early-adopter, and AI-curious by definition. They’re the nerds. (I say this with love. I’m one of them.)
So when this population says they’re bullish on AI for business, that’s not a finding. That’s a tautology.
The “A Study Said” Problem
Here’s what I see constantly: someone drops a link in Slack with “interesting research!” and suddenly that study becomes organizational truth. It gets cited in strategy decks. It shapes roadmaps. Nobody asks the basic questions: who paid for the research? What was the methodology? Who (and how many) did they talk to?
Product managers and designers are particularly vulnerable to this because we’re trained to respect data. “Data-driven” is a compliment in our world. But treating all data as equally valid is just innumeracy wearing a lab coat.
The Three Questions Framework
Before you cite any research – before you even finish reading it – ask:
1. Who did they have access to?
Every study has a recruitment problem. Anthropic talked to Claude users because that’s who they had access to. Academic researchers often rely on college students because they’re convenient. Market research firms use panels of people who’ve signed up to take surveys for money – which is its own weird population.
The question isn’t whether the sample is biased. It always is. The question is how it’s biased and whether that bias matters for the claim being made.
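If you want to feel how much a recruitment skew can distort a headline number, here’s a minimal sketch. Every number in it is invented for illustration: the true optimism rate, the opt-in rates, the sample size. Nothing comes from Anthropic’s actual data; it just shows the mechanism.

```python
import random

random.seed(42)

# Hypothetical population: only 30% are genuinely optimistic about AI at work.
# (Made-up number, purely for illustration.)
population = [random.random() < 0.30 for _ in range(100_000)]

# Convenience sample: optimists are far more likely to opt in to an
# AI-run interview. These opt-in rates are invented, not from any study.
def opts_in(optimistic: bool) -> bool:
    return random.random() < (0.60 if optimistic else 0.10)

sample = [person for person in population if opts_in(person)]

print(f"True optimism rate:    {sum(population) / len(population):.0%}")
print(f"Sampled optimism rate: {sum(sample) / len(sample):.0%}")
# Expect roughly 30% vs. roughly 70%: same people, but the opt-in step
# turns a minority view into an apparent consensus.
```

That gap between 30% and 70% is the whole problem. Nobody lied, nobody fudged the data; the recruitment step did all the work.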
2. What does that population actually represent?
This is where the Anthropic example gets interesting. Another finding was that scientists who use AI heavily still don’t fully trust it.
That’s actually a meaningful signal. Because if anyone should be confident in AI by now, it’s power users in technical fields. Their skepticism tells you something real about where the technology currently falls short.
Same data source, completely different interpretive frame. The “people are optimistic” finding is meaningless because of the sample. The “power users still don’t trust it” finding is meaningful because of the sample.
3. What are you trying to decide?
Here’s where I’ll give you some grace: not every decision requires bulletproof data.
If you’re trying to validate a major product bet that’ll take six months to build, you need strong signals from representative users. Convenience samples won’t cut it.
But if you’re looking for inspiration – just trying to generate ideas about what might make your app better – a few conversations with any users can spark something useful. Weak signals are fine for ideation. They’re dangerous for validation.
Know which mode you’re in before you evaluate the research.
Read the Boring Methodology Section
Most of the research that circulates in product organizations is convenience-sampled, self-selected, or conducted by companies with an obvious stake in the findings. That doesn’t make it worthless. But it does mean you should treat it as “one perspective from a specific population” rather than “the truth about users.”
The Anthropic Interviewer will probably generate some genuinely useful findings. It’s a clever tool. But every single thing it produces will be filtered through the lens of “things Claude users think” – which is a very specific slice of humanity.
Next time someone drops a study in your Slack channel, don’t just read the findings. Read the methodology section. Look at who they recruited and how. Ask yourself what that population actually represents.
Then decide if it’s signal or noise.