Sometimes you need a definitive measurement. While we ask buyers and customers a range of qualitative and probing questions, we also love to ask them to quantify their perceptions. With that information, we can provide a host of metrics that precisely pinpoint the areas needing focus to win more business and retain more customers.
A basic output in our TruVoice software would look something like this:
When asking these types of questions, we favor a 0-to-10 scale. Why? In 13 years of experience, we’ve found it works best. It’s that simple. It matches how buyers and customers in a B2B situation tend to differentiate vendors and perform the evaluations.
We find three distinct benefits that make the 11-point scale ideal.
Benefit 1: The “magic zero” effect
The most common point of confusion for a respondent is understanding which side of the scale is positive and which is negative. People often laugh when I say this, but it’s true. For example, a respondent could transpose the values on a 1-to-10 scale, assuming 1 is high and 10 is low. (In fact, when we’ve used this scale in the past, we found around 5% of respondents transposed the values. If you don’t have a chance to follow up with the respondent via phone, it could really throw off your results.)
Enter the magic zero. Because zero is intuitively understood as a low or undesirable rating, including it in your scale somewhat “magically” clears up confusion about the high and low of the scale.
Benefit 2: Balanced scale with a true mid-point
With 11 rating options, the 0-to-10 scale has a true midpoint (5) that respondents can actually select. Unbalanced rating scales, by contrast, have midpoints that can’t be selected, such as 5.5 for a 1-to-10 scale. It’s important to offer a true “average” option so respondents can indicate when something was neither exceptional nor poor.
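The arithmetic here is simple: a scale’s midpoint is the average of its endpoints, and it is only selectable when it lands on a whole number. A quick illustration (the function name is hypothetical, for demonstration only):

```python
def midpoint(low, high):
    """Midpoint of an integer rating scale: the average of its endpoints."""
    return (low + high) / 2

# A 0-to-10 scale has a midpoint respondents can actually pick:
print(midpoint(0, 10))  # 5.0 -> a selectable rating
# A 1-to-10 scale does not:
print(midpoint(1, 10))  # 5.5 -> falls between two ratings
```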
Technically speaking, a balanced scale can be considered an interval measurement while an unbalanced scale cannot. Some people would argue you can only compute a true average with an interval (balanced) scale. I’m not sure it’s fair to say a 0-to-5 scale can’t give you a true average, but we love averages, so I’ll go with it.
Benefit 3: Increased variability and differentiation
When it comes to understanding something as complex as a buyer or customer’s perceptions of your solution, precision is preferred. An 11-point system provides significantly more room for buyers to express differentiation than a 5-point scale. For example, while a good and very good solution may both receive a 4 rating on a 5-point scale, the same solutions would likely receive different ratings on a larger scale, such as 6 and 8. Don’t force your respondent to select the next best alternative.
In our research, we’re also looking for differentiation and an 11-point scale has been shown to increase the variability in responses.
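To make the loss of differentiation concrete, here is a minimal sketch. The linear rescale and function name are illustrative assumptions, not our survey method; the point is simply that collapsing 11 rating options into 5 forces some distinct ratings to merge:

```python
def to_five_point(rating_0_to_10):
    """Linearly rescale a 0-10 rating onto a 1-5 scale and round.

    Illustrative only: any mapping from 11 options down to 5 must
    merge some ratings that respondents meant to be different.
    """
    return 1 + round(rating_0_to_10 * 4 / 10)

# Two vendors a respondent rated differently on the 0-to-10 scale
# become indistinguishable on the 5-point scale:
print(to_five_point(5), to_five_point(6))  # both collapse to 3
```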
Another notable advantage: your “top box” and “bottom box” calculations will be more powerful. At Primary Intelligence, we calculate the top box (exceptional ratings) as ratings above the 80th percentile of the scale, and the bottom box (very poor ratings) as ratings below the 50th percentile. On a 0-to-10 scale, that equates to 9 and 10 as the top box. On a 1-to-5 scale, the top box includes 4 and 5 ratings. This means that even though a 4 rating is just above average (3), it’s counted as “exceptional,” diluting the power of the top box calculation.
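The box calculation described above can be sketched in a few lines. The function name, default thresholds (9–10 as top box, 0–5 as bottom box on a 0-to-10 scale, per the definition above), and sample ratings are illustrative assumptions:

```python
def box_scores(ratings, top_min=9, bottom_max=5):
    """Return the share of ratings in the top box and bottom box.

    Defaults assume a 0-to-10 scale where 9-10 is "exceptional"
    and 0-5 is "very poor", as described in the article.
    """
    n = len(ratings)
    top = sum(r >= top_min for r in ratings) / n
    bottom = sum(r <= bottom_max for r in ratings) / n
    return top, bottom

# Hypothetical responses: 10 and 9 land in the top box, 5 and 3 in the bottom.
top, bottom = box_scores([10, 9, 8, 7, 5, 3])
```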
Additional Tips for Rating Scales
Remember, the goal is to get the most accurate measurement of the buyer or customer’s perceptions – avoid any risk of confusion.
- Keep the scale consistent throughout the interview. Don’t mix in alternate rating options or reverse the meaning of a rating (e.g.: “10” is excellent in one question but terrible in another).
- Include a legend or label on your scales to clarify the values (e.g.: “10 = Excellent, 5 = Average, 0 = Terrible”).
- Include answer options outside of the scale, such as “Not applicable,” “Refused,” or “Unsure” to ensure an exhaustive answer set.
- Evaluate the scale against what you are trying to figure out. Don’t need much differentiation? A smaller scale could work better. Are respondents struggling to quantify their answers? Maybe text answers would work better. Use whichever answer set will guarantee the best respondent experience while meeting your research needs.
Suggested Additional Reading
There is a lot of online discussion and academic papers on this topic – some of them agree with me, some don’t. My favorite is a simple discussion on Quora that I think is worth the read: Is there a better alternative to the 5-star rating system?