Confronting ‘Trickster’ Figures In Market Metrics


July 12, 2019
Data Analysis
Chris Mumford - cmumford@aethoscg.com

Numbers are Tricky Business 
Data has arguably become the most valuable currency in business today. It’s the food that nourishes companies and gives them the sustenance to act and react. It underpins the sky-high valuations of firms like Facebook; without it, AI would be just two letters of the alphabet. Data has also become the accepted crutch that supports nearly all decision making. “What does the data say?” For business leaders, when it comes to issues like profitability, market share or brand equity, the more facts, figures and information they have, the better they feel.

We place huge faith in data, often blindly. People want to believe the black and white figures in front of them; it feels safe and reassuring. In the hospitality sector alone, data has spawned a whole industry of firms that specialize in collecting, housing, distributing, compiling and helping users to understand it – data related to bookings, food cost, productivity, guest reviews, water usage, market demand, average spend, CRM, etc. 
 
Is data always our friend? The consumer group Which?, for example, recently investigated customer ratings on Amazon and found the site was inundated with fake five-star reviews. Can data always be trusted? No. Often, lying in wait to confuse or sabotage the conclusions business leaders draw from it is the Trickster – one of several universal forces or patterns within the human mind that psychiatrist Carl Jung termed archetypes. This one specifically represents the irrational, chaotic and unpredictable side of thought and behavior.
 
These manifestations of chaos exist on balance sheets. Marketers often interpret and act on market metrics somewhere between reality and fantasy, and are likely completely unaware of it. In sales and marketing, market intelligence often comes from consumer reviews and ratings – formal satisfaction surveys, TripAdvisor ratings, Michelin stars or Net Promoter Scores (NPS), the last of which is frequently held up as a best practice metric. This index gauges the loyalty of a company’s customer relationships, and some research indicates it correlates with revenue growth. NPS scores range from -100 (every respondent is a detractor) to +100 (every respondent is a promoter). Benchmarks vary by industry, but a positive NPS (i.e., higher than zero) is generally deemed good, an NPS of +50 is considered excellent, and scores over +70 are exceptional. 
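The NPS arithmetic itself is simple – respondents rate 0–10, those scoring 9–10 count as promoters, 0–6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch (the function name and sample ratings are illustrative, not from any real survey):

```python
def net_promoter_score(ratings):
    """Return NPS on the -100..+100 scale for a list of 0-10 ratings.

    Promoters rate 9-10, detractors 0-6; 7-8 are passives and
    count only toward the total number of respondents.
    """
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / n)

# 4 promoters and 2 detractors out of 8 respondents -> (50% - 25%) = 25
print(net_promoter_score([10, 9, 8, 7, 6, 10, 2, 9]))  # 25
```

Note that the passives (7–8) vanish from the numerator entirely – one of the reasons the headline number hides so much of the underlying response distribution.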
 
Such scores look straightforward, but leading-edge research in psychometrics, tests and measurements increasingly suggests that market-driven metrics like these are often, in effect, Trickster figures.
 
What You See is Rarely What You Get
Lurking under NPS scores you’ll often find critical, consumer-based nuances or market-based idiosyncrasies that can distort responses, skew trends and promote misleading conclusions. Specifically, traditional consumer ratings, like NPS, have at least four inherent limitations:
  • The usual approach of summing test scores or ratings doesn’t provide linear (i.e., interval-level) measures of the underlying variable. Group differences and treatment effects are distorted. 
  • Traditional scaling approaches treat all questionnaire items as equal, thereby making it difficult to select those questions that are most appropriate for a specific population of respondents. 
  • The standard raw score approach doesn’t recognize that some items may be biased, so that respondents with identical trait levels receive systematically different scores – e.g., younger respondents may endorse some questions more (or less) often than older respondents with equal trait levels. 
  • Traditional scaling approaches offer no indicators of the internal validity of respondents’ scores. Aberrant response records can’t be identified. You can’t determine whether low scores are due to low trait levels, respondents’ misunderstanding or incomplete processing of the questions. 
 
Moreover, there’s a sinister fifth limitation related to rater-severity. Put simply, not everyone uses ratings or scoring systems the same way. Some people are harder or easier to please than others. Think back to your school report card. Although they used the same grading system, some teachers were lenient in their scoring while others were strict. 
 
This type of respondent bias unfortunately also applies to consumer ratings like TripAdvisor reviews and NPS scores. Many marketers know this intuitively. For example, British consumers seem more difficult to please, so they give systematically lower (i.e., stricter) ratings on products or services. In contrast, American consumers are relatively easier to please, so they give systematically higher (i.e., more lenient) ratings on the same products or services. The net effect is that both consumer sets are using the same NPS scheme but in significantly different ways, thereby not producing a true apples-to-apples comparison. 
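A toy illustration of how severity bias corrupts comparisons (the numbers are invented for demonstration): give two markets identical underlying satisfaction, but let one set of raters score one point stricter, and the headline NPS figures diverge wildly.

```python
def nps(ratings):
    """NPS on the -100..+100 scale for a list of 0-10 ratings."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / n)

# Same ten guest experiences, rated by two hypothetical populations.
true_scores = [9, 9, 8, 7, 10, 9, 6, 8, 10, 7]
lenient = true_scores                            # easier-to-please raters
strict = [max(0, r - 1) for r in true_scores]    # same experiences, one point harsher

print(nps(lenient))  # 40
print(nps(strict))   # -10
```

A one-point shift in rater severity swings this example from a healthy +40 to a negative score – the experiences being rated never changed.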
 
This problem is extremely pervasive, said Rense Lange, a Ph.D. statistician with the Polytechnic Institute of Management and Technology in Portugal and a published expert in psychometrics and quality control models. “Over 20 years of scientific research demonstrates that raters consistently differ in severity, even when using the same scale or review system.” So, how can organizations that rely on consumer ratings manage these severity biases so that their data-mining accurately guides business decisions? 
 
Lange pointed to two solutions. First, it’s possible to change rater severity with two to three days of structured training, but this is difficult, time-consuming and totally infeasible for online reviews or NPS scores that depend on consumer interaction. Plus, there’s the added complexity that when raters or reviewers are trained to be meticulous and contemplative, they tend to retreat to safe, middle-rating categories. In other words, if you tell people they’re too easy or too hard in their grading or reviewing, they avoid their respective extremes and gravitate toward the middle. Consistent middle-of-the-road scores or ratings essentially dilute or hide the positive or negative feedback that organizations need. So, it seems the Trickster strikes again. 
 
There’s a second and more technologically grounded solution, however, that levels or equates ratings across different reviewers with different severity biases. And, as Lange noted, it has been proven effective in courtroom settings with judges’ attitudes toward leniency in rulings, with HR and manager performance appraisal reviews, and even consumer product reviews on Amazon.com. It’s an advanced statistical analysis known as multifacet Rasch scaling (MFRS), and it can be performed by qualified operators using the FACETS computer software by Michael Linacre. 
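The core intuition behind this kind of equating can be sketched without the full machinery. The real MFRS model is a logistic model over ordered rating categories, fitted by software like FACETS; the toy version below (my own simplification, not the actual method) models each observed rating as item quality minus rater severity and estimates both by alternating passes:

```python
def equate(ratings, n_iter=50):
    """Simplified severity-equating sketch, NOT the full Rasch model.

    ratings: list of (rater, item, score) tuples.
    Models score ~= quality[item] - severity[rater] and alternately
    re-estimates both sets of parameters.
    Returns (quality, severity) dicts.
    """
    raters = {r for r, _, _ in ratings}
    items = {i for _, i, _ in ratings}
    severity = {r: 0.0 for r in raters}
    quality = {i: 0.0 for i in items}
    for _ in range(n_iter):
        for i in items:  # quality = mean of severity-corrected scores
            obs = [s + severity[r] for r, it, s in ratings if it == i]
            quality[i] = sum(obs) / len(obs)
        for r in raters:  # severity = mean shortfall vs. estimated quality
            obs = [quality[i] - s for rr, i, s in ratings if rr == r]
            severity[r] = sum(obs) / len(obs)
    return quality, severity

# Invented example: a harsh and an easy reviewer rate the same two hotels.
data = [("harsh", "hotelA", 6), ("harsh", "hotelB", 8),
        ("easy",  "hotelA", 8), ("easy",  "hotelB", 10)]
quality, severity = equate(data)
```

On this toy data the procedure assigns the harsh rater a severity one point above the easy rater’s, and the severity-adjusted hotel qualities agree: hotelB outranks hotelA by two points for both reviewers once the rater effect is removed.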
 
Companies that care about accurate insights from data-mining – especially using information derived from customer ratings and reviews – are strongly encouraged to confront the Trickster in market metrics by never taking raw numbers like NPS scores at face value. The best practice is to leverage the power and accessibility of leading-edge software and statistical analyses to understand this type of data with nuance and intelligence. Most organizations don’t even know the problem exists. And if they are aware, they likely won’t take the time to actually fix it with MFRS. Those that do, however, will find they have a far greater handle on their data and, by avoiding the Trickster, are ultimately able to make better decisions. For those companies, the value of data will go up even further.
 
©2019 Hospitality Upgrade 
This work may not be reprinted, redistributed or repurposed without written consent. For permission requests, call 678.802.5302 or email info@hospitalityupgrade.com.

