
T.M.I. (too much information)



November 01, 2009
Data Application
Michael Schubach, CHTP, CHAE - mschubach@trumphotels.com
Lisa Phillips - lisa@guestrategy.com


IBM would like for us to build a smarter planet.  Apparently, now that humankind has unraveled the ancient mysteries of data storage and retrieval, our species is ready to progress to a higher level of being. 

Listen to a 30-second IBM advertisement touting this evolutionary step:
Every day we generate eight times the information found in all U.S. libraries.  Where does it come from?  Stored transactions, market movements, e-mails, photos, videos, blogs.  What if technology could capture all this information and turn it into intelligence? We could identify patterns faster.  We could predict with greater confidence, convert data into actions.  Smarter information means smarter decisions.  Smarter decisions build a smarter planet.  …Let’s build a smarter planet. - IBM television commercial
New Intelligence:
Smarter Information Management

What’s wrong with this noble endeavor? At first glance, nothing.  What could possibly be wrong with having more data at our fingertips?  After all, isn’t having data what separates us from the animals?  Aren’t we the smartest generation in history precisely because of our printout mountains majesty?  And if the data just happens to be smarter (IBM contends it could happen) then having vast quantities of it must certainly be next to godliness. It seems as though you just can’t get too much of a good thing – an ocean is better than a puddle, right?  Of course, right… unless you don’t swim very well.
Although our “super size” mentality causes us to salivate at the thought of unlimited information, nothing would be a bigger disaster.  The weak link in the chain that IBM proposes is neither the quality of the information itself nor the systems that would store, organize and present the output.  The problem is us – we humans are far too limited to cope with copious data.  Smarter information does not necessarily produce smarter decisions, and there are instances where the over-abundance of data produces stupid, even deadly decisions.  And better predicting?  Why just consider how accurate our government has been at predicting economic pitfalls and subsequent recovery based on the profuse supplies of data they currently maintain.  Our government is not just accurate but also highly insightful and in universal agreement, right?  Of course, right… or perhaps not. 

Let me offer up a less loaded example.  Malcolm Gladwell (my new favorite author) analyzes the process of human decision making in his book, "Blink: The Power of Thinking Without Thinking" (Back Bay Books/Little, Brown and Company, 2005).  Gladwell tells the story of Brendan Reilly, a physician who joined Cook County Hospital in 1996 and took on the task of helping Chicago’s physicians make one of those smarter decisions that IBM so loves. 

The problem he tackled was a life-and-death decision their ER doctors constantly faced: which of the 30-odd patients who arrived daily complaining of chest pains were having heart attacks and which weren’t?  A wrong decision can turn seriously ill patients away or, perhaps worse, squander precious hospital resources on those suffering from nothing more serious than heartburn.  Electrocardiograms aren’t completely reliable – a healthy individual can produce an unhealthy looking EKG and vice versa.  The tests that give indisputable results take up to 12 hours to process, so what do ER doctors on the firing line do?  They consider the patient’s health (and perhaps the cost of a malpractice suit) and opt to err on the side of caution.  Only 10 percent of the patients with chest pains admitted to U.S. hospitals actually suffer a heart attack during or immediately after their stay.  Hospital expenses can run an easy $2,000 a day per bed, and cardiac ward patients typically stay for observation up to three days.  Do the math and see what a 90 percent error margin costs our overtaxed healthcare system.

Brendan Reilly’s research into the problem surfaced a statistical study done by Dr. Lee Goldman, a Harvard-educated physician whose study was funded by the U.S. Navy.  (The Navy needed to be able to discern heart attack from heartburn during submarine voyages.) The upshot: regardless of symptoms presented, Dr. Goldman felt he could reliably spot a heart attack with an EKG and only three other points of data.  A pre-printed Goldman decision tree could, in very few steps, decide if a patient should be sent to the cardiac care unit or to the drug store for a roll of Tums.  Reilly tested the Goldman algorithm in Cook County for two years, where it outperformed the attending physicians’ highly educated hunches by an astounding 70 percent. 
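To make the point concrete, here is a toy sketch of what a Goldman-style chart looks like as logic.  The factor names follow Gladwell’s popular account (unstable angina, fluid in the lungs, systolic blood pressure under 100); the branching and thresholds are purely illustrative, not the actual clinical instrument:

```python
def goldman_triage(ekg_shows_acute_ischemia: bool,
                   unstable_angina: bool,
                   rales: bool,
                   systolic_bp: int) -> str:
    """Toy decision tree: an EKG reading plus three data points,
    and nothing else, drives the admit-or-release call."""
    # Count the three urgent risk factors from Gladwell's account.
    risk_factors = (int(unstable_angina) + int(rales)
                    + int(systolic_bp < 100))
    if ekg_shows_acute_ischemia and risk_factors >= 1:
        return "cardiac care unit"
    if ekg_shows_acute_ischemia or risk_factors >= 2:
        return "observation bed"
    return "home (or the drug store for Tums)"
```

Note what the function deliberately ignores: age, weight, smoking history, diabetes – all the smarter lifestyle data that, for this one yes-or-no decision, only got in the doctors’ way.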

Was this new method immediately embraced by the medical staff?  Hardly. Goldman’s statistics flew in the face of a wealth of medical data and derived conventional wisdom.  An ambulance would arrive with an old, overweight man with a history of smoking, high blood pressure and diabetes complaining of acute chest pains.  Any process that resulted in a recommendation that he be sent home had to be seriously flawed. Physicians deduce their findings based on hundreds of facts resulting from thousands of studies; besides, they aren’t a group likely to doubt the evidence of their own eyes.  But the Goldman chart proved that a great abundance of smarter lifestyle data was irrelevant when making that one particular yes-or-no decision. Well-read physicians couldn’t bring themselves to imagine that an old, overweight smoker with high blood pressure, diabetes and chest pains was suffering from nothing more serious than indigestion.  The real casualties in this situation were the doctors who succumbed to an overload of interesting yet (in this case) irrelevant information. 

And yet we data gatherers still want more. Lisa Phillips, founder of GueSTrategy Consulting, veteran hospitality marketing maven and adjunct instructor at North Central Texas College, confirms the “more is better” mentality that she encounters in her practice.  “My clients not only want more rows in their database, but more columns as well. The majority focus is really on database growth rather than database quality.  Although many clients do a good job of leveraging their data, when it comes to starting a new campaign, the vast majority still falls back on the standard measures of RFM (recency, frequency and monetary value) in selecting mailing list recipients, who all get the same message.  The additional data is playing no significant role in their marketing efforts.”  In short: more data, same results.  What’s so smart about that? 
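The standard RFM cut Phillips describes really is this simple.  The sketch below is a minimal illustration – the guest rows, field names and thresholds are all hypothetical, not any real property-management schema – and it shows how all those extra columns sit unused while three numbers pick the list:

```python
from datetime import date

# Hypothetical guest-history rows; field names are illustrative only.
guests = [
    {"name": "Ames",  "last_stay": date(2009, 9, 1),  "stays": 8, "spend": 4200.0},
    {"name": "Brook", "last_stay": date(2007, 1, 15), "stays": 1, "spend": 180.0},
]

def rfm_score(g, today=date(2009, 11, 1)):
    # One point each for recency, frequency and monetary value.
    recency_ok   = (today - g["last_stay"]).days <= 365
    frequency_ok = g["stays"] >= 3
    monetary_ok  = g["spend"] >= 1000
    return int(recency_ok) + int(frequency_ok) + int(monetary_ok)

# Everyone clearing the bar lands on the list -- and gets the same message.
mailing_list = [g["name"] for g in guests if rfm_score(g) >= 2]
```

Every other column in the database could vanish tomorrow and this campaign would not change by a single recipient.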

So why is all that other stuff being stored?  We store data because we can – because we believe we live in a world of infinite storage capacity that allows us to store anything and everything forever.  We’ve been lulled into complacency with mistaken notions such as disk is cheap and e-mail is free.  We hoard guest history as if prizes were awarded each year for the biggest files.  And you can forget about a guest history exit plan – unrepentant sinners get out of purgatory faster than you’ll get out of a hotel’s guest history file. 

Too much information obscures information that really matters.  This is the wise message of major crime novelists as well as that insidious artist responsible for Where’s Waldo.   The best place to hide something is in plain sight, surrounded by a vast array of similar objects.  The challenge is filtering out the clutter in order to find what is relevant.  Useless records are not benign: storing them costs money and campaigning to them is a waste of time, ink and electricity.  More to the point, you soon can’t see the forest for the trees, so in wild desperation you market to everything in your path and begin to convince yourself that one-half of one-tenth of 1 percent is a pretty decent open rate, all things considered. 

Does that make you feel as though you’re on a smarter planet?
For the record, there are other pieces of information that could be in our guest history files and could improve the application’s functionality, but I sincerely wonder what the result would be if we actually did manage to capture and store eight times the information found in all U.S. libraries every day.  Would utter brilliance be within our grasp or would we perish in an avalanche of crap from Facebook and Twitter?  Would we be IBM smarter or would we completely lose sight of an already massive forest for the addition of a few hundred billion more trees?  Unlike IBM, I don’t see the blue sky for all the foliage.  I agree with a very basic Gladwellian proposition: our problem today is not a lack of information but rather a lack of knowledge.  Trying to solve a knowledge gap by throwing more random data at it only makes it worse. The Goldman lesson is not how many data points do we store but rather which data points are required for us to make the best decisions.  Once that determination is made, our job is to ignore the irrelevant and to follow our decision tree in order to eliminate unnecessary records.  

One of the best measures of guest record relevance is one that we don’t typically see stored in guest history files: the guest’s satisfaction survey results.  Phillips said, “There is probably no stronger indicator of a guest’s propensity to return than their satisfaction survey scores.  And when they’re finished visiting your property, they will flat out tell you they’re not coming back.  Once you overlay survey results, you can see that up to 40 percent of guest records can be immediately removed from guest history.” 

To get such great guest indicators on file you have only to ask and then record the answer.  Besides the typical five-point scale ratings (from excellent to hellish), your surveys should include two basic yes-or-no questions as well: would you recommend our hotel to your friends, and will you stay with us again?   Willingness to recommend can be the better indicator of satisfaction. Oftentimes well-satisfied guests choose not to return because of personal situations or a plain old been-there-done-that readiness to move on.  When your guest is kind enough to provide you with no-thank-you feedback, return the favor and let them live their lives without perpetual spam.  Begging just makes you look needy.
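The survey overlay Phillips recommends amounts to one simple filter over guest history.  The sketch below is a hypothetical illustration (record layout and field names are mine, not any real system’s): keep everyone except the guests who flat out told you no on both questions, and treat a missing survey as no verdict at all:

```python
# Hypothetical guest-history records with the two yes/no answers overlaid;
# None means no survey on file for that guest.
guest_history = [
    {"guest": "Smith", "recommend": True,  "will_return": True},
    {"guest": "Jones", "recommend": False, "will_return": False},
    {"guest": "Lee",   "recommend": True,  "will_return": None},
]

def worth_keeping(record):
    # Drop only guests who explicitly said no to both questions;
    # an unanswered survey is not a no-thank-you.
    return not (record["recommend"] is False
                and record["will_return"] is False)

pruned = [r for r in guest_history if worth_keeping(r)]
```

The design choice worth noting: the filter only ever removes records on the guest’s own say-so, which is exactly the no-thank-you courtesy described above.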

Good data collection is an art – and a complicated one at that – but don’t confuse your guest history file with a museum.  To keep data useful, removing obsolete records is easily as important as collecting records in the first place.  Unlike fine art, guest records don’t become more valuable over time – they expire.  Records that consume resources without producing results are indigestion sufferers occupying valuable hospital beds, and it’s high time for us to be checking them out.   There is a real possibility that we could be living in a smarter, more productive world with less information well presented rather than a big blue boatload more.  

Michael Schubach, CHTP, CHAE, is CIO for the Trump Hotel Collection. When he is not trying to make guest history more meaningful for the planet, he can be reached at mschubach@trumphotels.com. Schubach would like to extend thanks and appreciation to Lisa Phillips for her contributions to this article. She can be reached at lisa@guestrategy.com.
