When you’re not a pet rock: Six qualitative research sins, part 2

A slightly different version of this article originally appeared in Quirk’s Marketing Research Review, May 2005, page 40.

Part 2 in a 6-part series. 

The second sin, or ‘Presto! Let there be quant.’
Under the illusion of “representativeness,” researchers may bring quantitative instruments into the qualitative setting and report the aggregate (or worse, subgroup) results as if they represented individual data points, thereby choosing a quicksand pit as a building site. It’s elementary, my dear readers: if you interview 38 people in your “national” qualitative project, whether singly or in groups, whether they represent 38 metro areas or three, you do not have an n of 38 independent cases. Only respondents in a few areas had a non-zero chance of selection; there are more than 38 metro areas in the U.S.; three of your respondents may have signed up with the same research center as friends; and so on.

The misconception that qualitative findings should be cut-and-pasted into quant design rests on this faulty premise as well, but that’s another story.

Qual must provide context that numbers can neither replace nor explain, or there’s no reason to do it. It’s reasonable to ask what someone would anticipate doing under certain circumstances, or how, if at all, participants would differentiate various stimuli. However, those answers are integrally connected to the “what, when, where, why, how” that the rest of the interview has presumably been about. Understanding this connection is the “beef” into which marketing can sink its teeth. If clients ask for quant instruments in exploratory settings, I politely explain why these could compromise our objectives, and then outline what the research will do.

There’s nothing wrong with yes/no and structured or numeric questions as they might occur in real conversations. There is something wrong with aggregating the results as if they were the Harris Poll, or separating them from their context. This also argues against routine “head counts” for questions or forced differentiation. The information the client needs should be in the verbatims, not a show of hands. Just because we can force respondents to comment that layout A is very “green” doesn’t mean we learned anything.

If we aren’t presenting stimuli that can evoke different reactions and preferences and allowing exploration as to why the responses differ, we have brought inadequate stimuli to the table; torturing the respondents all night won’t change that.

As for the notion that card sorts, rankings, ratings and such will “facilitate discussion,” in over 20 years of interviewing (and twice that as a conversationalist), I can’t recall ever needing a quantitative catalyst. Do you? Sometimes, perhaps, these tools are attempts to substitute for conversational skill or product-category knowledge. But interviewers who look or act ill at ease should be given more preparation and training, or replaced, not handed stacks of forms. Maybe good conversations aren’t as easy to sell (sounds too simple?) or even deliver. But the effort is well worth it.

Besides wasting time, superimposing quant reroutes the discussion. Mid-conversation with your friend, do you ask, “How was your date with George? Here, do this attribute-rating task so I can more fully understand your viewpoints”? When we try later to reconcile free-flowing conversation with eked-out data, we are no longer doing qual work, or anything else useful.

In the next part of this article, I’ll explore the perils of using attributes and “trade-offs” in qualitative research.
