
Collecting data from real people is both an art and a science.

Any researcher will tell you that your insights are only as good as the questions you ask. Or, in other words, “junk in, junk out.” 

Part of getting good quality data relies on knowing how different aspects of survey design influence the respondent experience. One of the most fundamental of these, regardless of which field of social research you’re working in, is avoiding bias.

But what do we mean when we talk about bias? The Oxford dictionary defines bias as “Systematic distortion of results or findings from the true state of affairs, or any of several varieties of processes leading to systematic distortion.” 

The key word here is distortion.

Bias happens when we distort the ultimate truth we’re looking for because of flaws in our research design. 

There are many reasons why participants can be swayed to answer in one direction or another. Experienced market researchers have seen what this looks like in practice – as well as its effects on findings and, ultimately, the bottom line of their clients. 

When companies spend thousands of dollars (or more) on research, they need to know the results they’re getting are reliable: these often guide a lot of big, costly decisions.

The good news is there’s a lot that researchers can do to both spot and mitigate the effects of bias.

Designing against sampling bias

Before putting pen to paper and drafting questions, thoughtful survey design begins with a sampling plan. Recruiting a sample that’s representative of the broader population you’d like to draw conclusions about is crucial; otherwise, the insights only apply to the specific group of people surveyed. 

When it comes to sampling, there’s a lot of room for bias. 

In the early days, standard market research practice was to interview respondents face-to-face or on the telephone. This meant going door-to-door to find willing participants, calling names out of a phone book, or, as was sometimes the case, interviewing people out in the world. 

While the former two options offer researchers more control, the latter is very vulnerable to sampling bias. 

Let’s say, for example, you want to research in-store consumer shopping habits. An easy way to do this might be to ask people who happen to be in a mall to participate in your research. 

While we can probably assume these respondents are, in some form, “shoppers”, there’s no way of telling whether this sample broadly reflects the “shopper” population in the place we’re trying to understand. 

There’s a ton of things that influence who our mall participants are and their distinct behaviors. For example:

  • Will we be recruiting participants on a weekday (when many adults are at work) or on a weekend? 
  • Are we researching close to a holiday, when lots of people who don’t normally go to malls are out shopping? 
  • What’s the makeup of the mall – is it mainly premium stores that attract wealthier, more affluent patrons? 
  • Is it hard to get to – meaning only those with access to their own cars can shop there? 
  • What about all the shoppers who decline to participate? 

Those who decline are often very different from those who are willing, and that gap skews results through non-response bias.

In this hypothetical study, there’s really no way to generalize our findings from willing mall participants to the greater population of shoppers.

Quotas and representation

Since most consumer research has moved online, the effects of sampling bias are less dramatic than in the previous example. But there are still major considerations in avoiding this pitfall. 

It’s crucial to work with reputable and experienced panel providers who cast a wide net in how and where they recruit respondents online. 

Setting quotas for demographic indicators – such as age, gender, race or ethnicity, income, and education – is also important in avoiding bias. The key is ensuring your sample looks like the broader population you’re studying. 

Even with quotas, data must be weighted – meaning the survey sample is “corrected” mathematically to more accurately mirror the demographic distribution of the population in question. 
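
To make the “corrected mathematically” part concrete, here’s a minimal sketch of the simplest case – weighting on a single variable. The age bands and shares below are hypothetical, and production weighting usually balances several variables at once (for example via raking), but the core arithmetic is the same: each respondent’s weight is their group’s share of the population divided by its share of the sample.

```python
# Minimal sketch of one-variable post-stratification weighting.
# The age bands and shares are hypothetical, not real figures.

# Target distribution in the population you want to represent
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Distribution the survey actually collected
# (here, younger respondents are over-represented)
sample_share = {"18-34": 0.45, "35-54": 0.35, "55+": 0.20}

# Every respondent in a group gets the same correction factor:
# the group's population share divided by its sample share
weights = {
    group: population_share[group] / sample_share[group]
    for group in population_share
}

print(weights)
# ≈ {'18-34': 0.67, '35-54': 1.0, '55+': 1.75}
# Over-represented groups are weighted down (< 1),
# under-represented groups are weighted up (> 1).
```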

Priming the respondent

Apart from sampling, there are key elements of bias to try and avoid in questionnaire design. 

One of these is called priming. According to Marketing Society, this happens when “our brains make unconscious connections to our memory so that exposure to a prime increases the accessibility of information already existing in the memory”. 

Essentially, respondents in your survey already have a memory stored, but you’ve boosted their recall. Here’s an example: 

Say you’re writing a survey to understand consumer perceptions of an ad. 

First you ask them questions about the brand that created the ad, mention the campaign the ad was featured in, and outline products or services the brand provides. 

When you finally show respondents the ad, they’re more likely to say they recognize it and to react more positively than if you’d let them respond “cold” – without any information about the brand, its products, or its campaigns. 

As the example shows, priming can play a big role in inflating findings. 

When trying to measure things like brand awareness, brand affinity, or ad recall, it’s especially important to keep this type of bias in mind. 

Leading the respondent

Leading, another form of creating bias, is exactly what it sounds like – structuring surveys or questions to “lead” people in responding a certain way. 

Questions can be leading in many ways: linking together numerous ideas so a statement becomes conditional, assuming prior knowledge, or taking a coercive tone. 

Take, for example, two versions of the same question: 

How big of a problem do you think the plastics crisis is for our oceans? 

  1. Huge problem
  2. Big problem 
  3. Not a big problem 
  4. Not a problem at all 

This is leading for a number of reasons. First off, its wording assumes that respondents think that plastic in the oceans is, to some degree, a problem. Second, it catastrophizes the topic by referring to ocean pollution as a “crisis.” Third, it creates a sense of personal responsibility for the respondent by using the word “our.” Reducing bias in this question might look like this: 

Do you think plastic pollution in the oceans is…

  1. A huge problem 
  2. A big problem
  3. Not a big problem
  4. Not a problem at all 

Order and randomization 

When it comes to question design, randomization is a researcher’s best friend. 

It helps combat the effects of priming and leading by varying the order of sections, questions, or answer options each time someone takes a survey. 

For listed options within a question, randomization is standard practice whenever a fixed order isn’t required (as it is for, say, time intervals or an agreement scale). This mitigates order bias – people’s tendency to select options at the beginning and end of lists rather than the middle. 
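
To illustrate, here’s a minimal sketch in Python – with a made-up question – of the rule above: shuffle unordered answer lists per respondent, but leave inherently ordered scales fixed.

```python
import random

# Minimal sketch of option-order randomization; the question and
# options are invented for illustration.
question = "Which of these matters most when choosing where to shop?"
options = ["Price", "Location", "Product range", "Customer service"]

# Shuffle the list independently for each respondent, so no option
# benefits from always sitting at the top or bottom
presented = random.sample(options, k=len(options))
print(question)
for option in presented:
    print("-", option)

# Scales with an inherent order are the exception: shuffling an
# agreement scale would only confuse respondents, so it stays fixed
agreement_scale = ["Strongly disagree", "Disagree", "Agree", "Strongly agree"]
```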

Keeping lists short also helps, so middle options don’t get lost in the mix. 

When it comes to Likert scales, such as agreement, satisfaction, or likelihood, many researchers choose to order these from most negative to most positive. 

It can feel unnatural, but it works against a double whammy: order bias stacked on top of acquiescence bias – people’s tendency to answer agreeably.

Social desirability and the interviewer effect 

Acquiescence bias is an example of how social conditioning impacts research, as it’s people’s aversion to being impolite or disagreeable that creates it. 

Social conditioning plays a big role in skewing research in general. Often, the effect is so strong that people will respond in ways that make their behavior seem “better” or more “acceptable” rather than what’s truthful – despite a survey being both confidential and anonymous. This is called social desirability bias. 

One of the most cited (and studied) examples of this bias is in asking respondents about their alcohol consumption, which many people tend to downplay in survey research. 

In other cases, participants might over-report on socially “good” behaviors – like recycling, voting, or donating to charities. 

While social desirability bias can happen in any mode of research, there’s an added risk when a researcher is directly involved in data gathering, such as through face-to-face interviews, telephone interviews, or focus groups.  

Called the “interviewer effect,” this type of bias happens when a participant’s interaction with a researcher influences their responses. An interviewer’s background – like their age or gender – might impact how comfortable participants feel in responding honestly to certain questions they pose. 

Verbal and nonverbal cues that the interviewer may reveal, despite their best intentions to remain neutral, can also have a big influence.  

Culture matters

A key point to understand with these types of biases is that, as with anything socially constructed, it’s ultimately culture that shapes them. 

Culture dictates the expectations and norms around what is “appropriate”, “acceptable,” and “polite” in a society. So we can expect acquiescence bias, social desirability, and the interviewer effect to vary quite a bit depending on where the research is being done. 

One of the most common examples is acquiescence: respondents in collectivist societies, such as India or China, show a stronger preference for expressing agreement than respondents in more individualistic ones, like the U.S. 

How extreme responses are varies too. In some collectivist cultures, particularly in East Asia, response styles are more moderate – with participants choosing the mid-points of scales rather than agreeing or disagreeing with statements strongly. 

In the U.S., the opposite is true; respondents tend to show stronger agreement or disagreement. And in countries like India and Brazil, this extreme response style is even more pronounced. 

While cultural bias can never be fully controlled for in global research, it’s important to be aware of it and take it into account in analysis.

