Reminiscent of the scene in Alfred Hitchcock’s “The Birds,” when the sky turns black with attacking avians, I often feel as if I’m being attacked by throngs of bad statistics when I read today’s insurance and business press. “The Birds” was just a movie, but bad statistics can draw real blood. Bad statistics lead to bad strategies and bad decisions. For proof, look no further than our economic malaise.

While some people rely on statistics, others avoid them at all costs. Some people believe Benjamin Disraeli’s famous saying, “There are three kinds of lies: lies, damned lies and statistics.” I would qualify that: bad statistics are damned lies, while good statistics are key to intelligent management and a huge competitive advantage. That makes it essential to distinguish between the two. Sometimes advanced education is required to separate the good from the bad, but usually common sense will suffice. Here are some examples:

Check the data source

I recently examined a PowerPoint from an agency consultant. I was jealous of his beautiful, artistically crafted graphs. Looking more closely, though, I realized they were just art. There was no real data behind them. The consultant had started with how he wanted the charts to look, then created the data that would generate his desired charts. He had not gathered any real data.

The lesson: Always check the data source. The best data comes from a random sample; if the sample is not random, be suspicious of it. Be particularly cautious when the organization that produced the study is also selling something, because such a study is more likely to be biased.
Beware of averages

Studies presenting only averages are of little use because the information about what creates better or worse performance is absent, as is the variance. An average is not a median, so a few extreme scores can greatly skew the result. And even when a study breaks out the numbers for the top or bottom performers, that data often is meaningless because the numbers are still just averages. Many factors affect an average; without more information, how do you know what to do about it?
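A minimal sketch of the problem, using invented book sizes rather than figures from any real study, shows how a single outsized book drags the average up while the median barely moves:

```python
from statistics import mean, median

# Hypothetical producer books of business, in annual commission dollars.
producer_books = [180_000, 210_000, 240_000, 260_000, 290_000]
# Add one outsized book and watch the "average producer" move.
with_outlier = producer_books + [1_500_000]

print(f"mean without outlier:   {mean(producer_books):>12,.0f}")
print(f"median without outlier: {median(producer_books):>12,.0f}")
print(f"mean with outlier:      {mean(with_outlier):>12,.0f}")
print(f"median with outlier:    {median(with_outlier):>12,.0f}")
```

On these made-up numbers, the mean jumps from roughly $236,000 to about $447,000 once the outlier is added, while the median only moves from $240,000 to $250,000. A study quoting only the $447,000 “average” describes almost no one on the team.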
For example, an agency was comparing its producers with a study showing the average producer’s book was $300,000. Was its $250,000 producer a poor performer? Possibly, except she built her $250,000 book in half the time. How about another producer with $350,000? Is he great? Perhaps, except his book was given to him.

Possibly the biggest problem with averages is the general assumption that every statistic falls along a normal curve. Some sophisticated users compound the mistake by assuming normal confidence levels and normal variation out in the tails. The truth is that not all distributions are normal, and in reality, especially in financial markets, the extreme tails often do not behave the way a normal curve predicts, even when a normal curve otherwise fits.

The lesson: When a study shows only averages, place very little weight on the results if you are trying to determine cause and effect.

Context is critical

Changing the context lets people mislead others on purpose. For example, if I wanted to pump up an agency owner, I could expound on how great his agency’s 88 percent retention rate was without mentioning that most of his competitors are doing even better. At 88 percent retention, the reality is the agency is likely doing something wrong.

Another great example: “Our producers are awesome! They each wrote more than $200,000 in new commissions last year!” Are the producers new or established? Is this a small or large agency? Does this include program business? And most importantly, what difference does it make how much new business the agency writes if all of it goes out the back door? I’ve met awesome new-business producers who could write the equivalent of 30 to 40 percent of their books in new business each and every year. Of course, their retention rates were around 65 percent, so the new business roughly replaced what walked out the door and the book merely treaded water. New business only counts if it is retained. These issues greatly affect the terrific-sounding “$200,000 in new business” statistic.

The lesson: Check the context of the statistic. Do factors exist that might mitigate the statistic’s usefulness?

Mixing & matching

Be cautious of studies that compare apples to oranges. A common and very misleading mismatch is using EBITDA (earnings before interest, taxes, depreciation and amortization) to compare companies growing organically with companies growing by acquisition. The problem is that EBITDA excludes almost the entire cost of acquisition growth, because the interest on acquisition debt and the amortization of what was purchased are added back, while the cost of organic growth stays fully in the expense line. I do not have space to go into all the details here, but suffice it to say that when the two types of growth are compared using EBITDA, the acquiring firm is virtually guaranteed to look hugely better. The key is to look at cash flow. When a firm grows organically, its cash flow and its profit usually are very close to the same. When a firm grows by acquisition, cash flow often is much less than profit. But because too many people do not understand the implications of EBITDA versus cash flow, a lot of bad acquisitions, growth decisions and even loans have been made.
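To make the apples-to-oranges problem concrete, here is a rough sketch using entirely invented revenue, expense and debt-service figures (assumptions for illustration, not data from any study): the acquirer posts the better EBITDA, yet its cash flow tells the opposite story.

```python
# All figures are invented for illustration only; real deal accounting is far
# more involved than this sketch.

def ebitda(revenue, operating_expense):
    # Earnings before interest, taxes, depreciation and amortization:
    # interest on acquisition debt and amortization of the purchased book
    # never show up here.
    return revenue - operating_expense

def cash_flow(revenue, operating_expense, debt_service):
    # What is actually left over once the acquisition debt is serviced.
    return revenue - operating_expense - debt_service

# Organic grower: new revenue is paid for through the expense line
# (producer compensation, marketing), so EBITDA and cash flow stay close.
organic_ebitda = ebitda(5_000_000, 4_000_000)
organic_cash = cash_flow(5_000_000, 4_000_000, debt_service=0)

# Acquirer: the purchased book lifts revenue with little added expense, but
# the purchase was financed. EBITDA ignores the debt service; cash flow cannot.
acquirer_ebitda = ebitda(5_000_000, 3_800_000)
acquirer_cash = cash_flow(5_000_000, 3_800_000, debt_service=900_000)

print(f"organic grower: EBITDA {organic_ebitda:>11,}  cash flow {organic_cash:>11,}")
print(f"acquirer:       EBITDA {acquirer_ebitda:>11,}  cash flow {acquirer_cash:>11,}")
```

On these made-up numbers the acquirer shows the higher EBITDA ($1.2 million versus $1 million) but keeps far less cash ($300,000 versus $1 million) once the acquisition debt is serviced, which is exactly the distortion the lesson below warns about.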
The lesson: The statistic or measure must match the purpose. Do not compare apples to oranges!