
5 Research Biases that Kill Products



A product built through biased research will die a painful death from poor adoption.

Cognitive biases are among the imperfections that make us human, but they are bad news for product marketers. A bias, simply put, is an idea or prejudice that the respondent or the researcher brings to the research process and that can distort research findings.

Biases can affect all phases of customer development, from survey design to data collection and analysis. Unless the product marketer is aware of these biases and takes steps to reduce their impact, the results of customer discovery may become misleading.

Let’s take a closer look at the five most pervasive cognitive biases and see how we can reduce their impact.

#1. Acquiescence Bias

Moderator: Will you buy A instead of B?

Respondent: Yes, absolutely (whatever)!

Acquiescence bias, also known as “yea-saying” bias, occurs when research subjects display a tendency to agree with whatever they are asked. They seem to think every idea is a good one. It happens because saying yes is the easy way out; thinking and carefully weighing each option takes time and energy. People may say they would buy your product under any circumstances and with any set of features you offer, but that doesn’t mean they really would.

The tendency to answer every question with a yes becomes more pronounced as the interview drags on, because people just want to get it over with. This is especially true when you use tech jargon and pitch functionality you merely assume your interviewee understands.

To avoid acquiescence bias, the researcher should vary the order and logic of the questions so the effect can be detected and corrected. For instance, if you’re comparing A and B, you might ask the following two questions, separated by a few others:

I think A is better than B (yes/no)

I would buy A only if B is not available (yes/no)

If the person says yes to both questions, acquiescence bias is at play. It’s an oversimplification, but you get the idea. You can also shorten the interview, change the order of the questions, and ask probing questions to keep this bias in check.
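If your survey responses land in a spreadsheet or database, the paired-question check above is easy to automate. Here is a minimal sketch; the question identifiers and the pairing scheme are hypothetical, not from any particular survey tool:

```python
# Pairs of logically incompatible yes/no questions, asked a few questions apart.
# If a respondent answers "yes" to both, acquiescence bias is likely at play.
CONTRADICTORY_PAIRS = [
    # "I think A is better than B" vs. "I would buy A only if B is not available"
    ("q_a_better_than_b", "q_buy_a_only_if_no_b"),
]

def flag_acquiescence(answers, pairs=CONTRADICTORY_PAIRS):
    """Return the question pairs where a respondent agreed with both
    statements, even though the statements contradict each other."""
    return [
        (q1, q2)
        for q1, q2 in pairs
        if answers.get(q1) == "yes" and answers.get(q2) == "yes"
    ]

respondent = {
    "q_a_better_than_b": "yes",
    "q_buy_a_only_if_no_b": "yes",
}
print(flag_acquiescence(respondent))
# -> [('q_a_better_than_b', 'q_buy_a_only_if_no_b')]
```

Respondents who trip several of these pairs are candidates for follow-up probing, or for excluding their answers from the analysis.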

#2. Social Desirability Bias

When interviewing a client after their product failed to get anywhere close to the projected sales figures, we asked them if they had done customer research. “Yes, we consulted everyone in our organization and people were unanimous it was a great idea,” replied the CEO.

In truth, the product never had a pull with its intended consumers. The employees the company had surveyed were probably affected by social desirability bias.

Social desirability bias leads research subjects to answer questions in a way they think will be socially acceptable and well received. It’s hard to criticize an idea when it comes from the boss, a vendor, a partner, or a prominent software company, so most people just go with the flow.

Plus, entrepreneurs/product marketers don’t really want to hear that what they are dedicating themselves to is not desired.

To build successful products you need to speak with people who are unbiased, people who don’t know who you are. And you need to engage the user, the buyer, and the influencers. Don’t discount the negative feedback; it has more value than the good stuff.

Controlling social desirability bias involves asking questions in a way that shows it’s okay to answer in an undesirable way. For instance, instead of asking “Do you think it’s a great idea?” you can ask, “What would you say about this idea if it were coming from our competitor?”

#3. Sponsor Bias

Larger organizations, when they feel the need for customer research, invite people to the company’s offices for focus group interviews. Few of these companies realize that the respondents don’t come alone: they bring sponsor bias with them. This is the bias that colors people’s behavior and opinions when they know who is sponsoring the research. “If it’s from Apple, it must be good.”

Moderators can control this bias by maintaining a neutral stance, making it clear to the subjects that the moderator has no affiliation with the product or brand in question. Conducting the research at a neutral venue also helps. Even better, go into your prospect’s environment and let them answer your questions in their working frame of mind.

#4. Confirmation Bias

Human as they are, researchers have their own biases and prejudices that can affect their interpretation of research data. Confirmation bias is the most common type of bias in research, and the hardest to identify and remove. It happens when the researcher has some preconceived faulty hypothesis or assumption, which they tend to confirm by using supporting facts from research, and discarding facts that do not support their idea.

This bias was responsible for the New Coke horror story now taught as a classic to managers and researchers. The product turned out to be a disaster even though more than 200,000 taste trials were conducted, in which the majority of people preferred New Coke over old Coke and Pepsi. As it turned out in hindsight, taste was not the only reason people drank Coke. But to the researchers it was, so they stopped short of further research and gave New Coke the green light.

Confirmation bias stems from deeply rooted attitudes and beliefs that we use to understand and interpret information. In the software world, we often think all users are like us, with similar problems and technical capabilities.

Confirmation bias causes researchers to become narrowly focused on a single hypothesis. Continually challenging preconceived assumptions and reevaluating respondents’ answers can help minimize it. In the software world, where product marketers are valued for their bias toward action, reevaluation rarely happens. We tend to move forward, move fast, and assume the world will catch up with us. It often does, but the timing is unpredictable.

#5. Escalation Bias

Jimi Heselden, the multi-millionaire owner of the company that makes Segway motorized scooters, was found dead in a river after plunging 80 feet over a cliff while riding one of his vehicles. Ten years earlier, the Segway, code-named “Ginger,” had made headlines with promises to revolutionize personal transportation. It was touted as the next big thing after the PC.

Most consumers probably lost enthusiasm when the first ads of the prototype were released, showing people who looked like acrobats riding weird-looking chariots. The company was undeterred. Ginger hit the market with a daunting $5,000 price tag. Instead of selling the estimated 10,000 units a week, the company sold only 24,000 scooters in five years. Everyone could see it coming after watching the ads, but investments had already been made in R&D, so the company decided to throw good money after bad.

Escalation bias can result when initial impressions indicate positive sentiment about a product or idea, but subsequent research contradicts the earlier findings. This can be the result of placing too much confidence in the early research, pressure from management, the desire to innovate, and/or one of the biases discussed above.

While the Segway example is a well-known punchline today, it’s not unusual. Escalation bias happens all the time in the software world. In the 1980s I was part of a team called in to rescue a failed product launch. The PC1000 was revolutionary: a handheld device that enabled collision repair estimators to write estimates electronically while in the field. After years of development and great marketing fanfare, the product was launched at our largest client. While our client’s executives loved the concept of automating collision repair estimates, their estimators did not. Adoption was zero.

Turns out, the only people who could use the software were our product marketing team. And only in a controlled office environment.

We had used the latest technology to turn a relaxed job that took 5 hours a day into a 9-10 hour work day. No longer could the estimators pick up their kids from school, or hit the gym before going home. To avoid working the “modern” way, our client’s employees sabotaged their devices. Units went missing. Estimators called in sick and took vacation.

Back at our headquarters, fingers were pointed in all directions. It was the hardware manufacturers’ fault (it wasn’t). More investments were made. Engineering rewrote code. More quality assurance resources were added.  Training was enhanced.

Our product marketing team had so much invested in launching the revolutionary PC1000 (renamed the Pecker1000 or PieceofChit1000 by estimators) that feedback from users didn’t factor into development. Having the executive team on board was what mattered. It wasn’t until our client threatened to pull the entire account that failure was admitted.

Whatever the reason, a positive initial impression makes the company invest in the idea, make plans, allocate resources, and so on. Later on, if more in-depth research suggests the product will not be as successful as anticipated, researchers tend to ignore or discount the contradicting information. The company pushes ahead with a bad product until it is, without a doubt, nonviable.

It’s irrational, but so are all biases.

Visit me at Savvy Marketing for more tips on how to build software that gets adopted. I’m Pam Swingley, founder of Savvy. Our marketing services help B2B technology companies succeed: we connect product marketers to customers for market validation, fill sales pipelines with qualified leads, and supercharge anemic social media accounts. Results are backed by decades of tech marketing success with Fortune 1000 companies (ADP and Oracle), startups, and mid-sized software firms. Say hello to savvy marketing at www.savvyinternetmarketing.com.