5 ways to make stats in content marketing more credible

Suspicious Statistics in Content Marketing

As marketers, we want to be better consumers of data. Presented with data and its analysis, we want to be able to judge its accuracy and relevance to our decision making. We want to gauge its ambiguity and uncertainty, even though on the surface we’re being presented with quantified “facts.” We want to detect bias and account for it.

So let’s start with our own statistics in content marketing.

Because, seriously, too many of the stats that are appearing in content marketing these days smell fishy. I don’t want to pick on anyone in particular — there are too many folks doing this to unfairly single out one — so I’ll give you a hypothetical example:

Company X reports that their latest state-of-the-industry survey reveals 72% of marketers are engaging in — or plan to engage in — hamster optimization. Clearly hamster optimization is big! And isn’t that great, because coincidentally Company X just happens to be a hamster optimization provider…

You’ve certainly seen examples like that. Some are blatantly biased. Others are a little more subtle. But the content marketing arms race has fueled a steady stream of newsworthy-but-questionably-justified claims like these.

Now, being biased is not necessarily a terrible thing — as long as you disclose your bias and don’t try to sweep it under the rug. Qualify the data on which your statistics are based, so that readers can make a fair assessment of the context and relevance of your findings. After all, you presumably want your readers to trust you. That’s kind of the bigger brand mission with your content marketing in the first place, right?

5 ways to make your statistics more authentic

There are five things you can — and should — do when presenting survey statistics:

  1. Note the sample size — how many people participated.
  2. Break down the basic firmographics of the participants.
  3. Describe how the participants were selected.
  4. Include the original question and answer choices.
  5. Define nomenclature that may be highly subject to interpretation.

The first, noting sample size, most people already do. If you don’t, start now, because it’s an immediate red flag if it’s not stated — and because n = 20 is very different from n = 2,000. Conclusions drawn from small samples are weaker than those from large samples. And while there may still be value in sharing results from undersized samples, that value is more anecdotal than statistical. Play fair and let people know that.

How large should your sample size be — how big should n be? It depends on some slightly technical parameters such as confidence level and confidence interval. But here’s a simple sample size calculator you can use with some basic examples to put you in the ballpark.
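
If you’d rather see the arithmetic behind calculators like that, here is a minimal sketch in Python of the standard formula for estimating a proportion. The confidence levels and the worst-case assumption of p = 0.5 are illustrative defaults, not anything specific to a particular calculator:

```python
import math

# Z-scores for common confidence levels.
Z_SCORES = {90: 1.645, 95: 1.96, 99: 2.576}

def sample_size(confidence_pct=95, margin_of_error=0.05, p=0.5):
    """Responses needed to estimate a proportion within +/- margin_of_error."""
    z = Z_SCORES[confidence_pct]
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

def margin_for(n, confidence_pct=95, p=0.5):
    """Rough +/- range around a reported percentage for a sample of size n."""
    z = Z_SCORES[confidence_pct]
    return z * math.sqrt(p * (1 - p) / n)

print(sample_size(95, 0.05))        # 385 responses for +/-5% at 95% confidence
print(round(margin_for(20), 3))     # 0.219 -> roughly +/-22 points when n = 20
print(round(margin_for(2000), 3))   # 0.022 -> roughly +/-2 points when n = 2,000
```

The last two lines make the earlier point concrete: the same headline percentage carries roughly ten times more uncertainty at n = 20 than at n = 2,000.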

The second, breaking down the basic firmographics of participants, is unfortunately less common. Firmographics are things such as the size of participants’ companies in revenue or employees, their geographic region, their industry, whether they’re B2B or B2C, etc. You may also consider including the level of the participants — mid-level managers, senior directors, top executives, etc.

You don’t have to go overboard, but even a little bit of this information goes a long way towards qualifying your results. If all your participants were mid-level managers from enterprises with $100 million or more in revenue, that’s probably a very different story than if your data comes from top executives at small businesses with less than 50 employees.

The third, describing how participants were selected, is the difference between the amateurs and the pros. Any professional research will disclose how the participants were found, enticed, and qualified, usually under the heading “methodology.” Here’s an example from a report on business analytics by Harvard Business Review and SAS:

[Sample methodology section from the Harvard Business Review and SAS report]

This is super important because selection bias — a set of characteristics or circumstances that influenced the selection of participants — can profoundly skew the results of a study.

For example, let’s go back to our hypothetical hamster optimization provider, Company X. For their survey, they reach out to their blog subscribers, Twitter followers, and Facebook fans to participate. It should come as no surprise that a sample of that population — people who follow Company X — would have pretty positive views on hamster optimization. (“Hamster optimization rules!”)

The results would likely be quite different if Company X invited participants from a random set of Harvard Business Review subscribers. (“What the heck is hamster optimization?”)

To be honest, selection bias is almost impossible to avoid — especially in industry studies with modest budgets, which is usually the case in guerrilla content marketing. That’s okay. Just disclose your selection methodology so that readers can adjust their interpretation with that bias in mind.

However, for Company X to pull participants mostly from its own universe, yet report the findings as if they represent a more general population, would be disingenuous.

If you fear that disclosing your selection methodology could undermine the results of your study — that should be a warning bell — then you might consider ponying up money to find a less biased population to sample. This is one of the services that professional industry analyst firms offer. They’re not free from selection bias either, of course, but their audiences are usually much less biased than the ecosystem around a single vendor.

The fourth, including the original question and answer choices, helps make sure that you, your survey participants, and your content consumers are all talking about the same thing.

If you ask a question like, “Do you use data-driven decision making?” and get Y% who respond “yes” — but then in your report you write, “Y% are data-driven decision makers” — you’re changing the meaning. Participants may have answered the question thinking that they occasionally use data-driven decision making along with other experience-driven approaches. But the statement in your report could be interpreted that those participants predominantly or exclusively use data-driven decision making.

This effect can be subtle or significant. But it’s easy to avoid problems by simply restating the question and answer choices verbatim. You can add other narrative around that, but then you’re clear about what’s data and what’s narrative.

Finally, defining nomenclature that may be highly subject to interpretation, in both the survey and the report, avoids misinterpretations. For instance, if you’re surveying how many marketing teams have a “marketing technologist” on staff, you might want to define who qualifies as a marketing technologist. An IT person working in marketing? A web developer? A marketing automation specialist? Depending on the definition, you may get very different results.

Particularly with so many new terms popping up in our profession, including brief definitions in your study can help reduce the risk of wildly different interpretations impacting the accuracy of your analysis.

This certainly isn’t an exhaustive list of survey and statistical analysis dos and don’ts. But if we could raise the bar on survey-driven content marketing to address these five issues, it would make that content more valuable than a quick flurry of soundbite tweets — it would provide information that our readers could actually use in more data-driven decision making.

And it would make you a more credible source in the eyes of your audience — as all good content marketing should.


7 thoughts on “5 ways to make stats in content marketing more credible”

  1. These are very good tips – particularly the selection bias issue. Data is never wrong – but collection methods may not be fit for purpose (whether that means your microscope is broken or you only ask your close friends to fill in an important survey).

    Something I would suggest prior to following Scott’s advice on presenting the data acquired would be to check your results against previous studies on the topic – if there are any – and if time and budget allow.

    Finding relevant research will give you a starting point as long as you evaluate it (along with the collection methods) critically, keeping in mind everything Scott has said above. If it has similar conclusions to yours, then great – you’ve got more material to back up your discussion. If it seems to say the opposite, then maybe you can find out why based on the methods used by yourself and the source. Either way it should be an interesting part of the process!

    1. That’s a great suggestion, Hywel — thank you!

      It’s actually what inspired this post. I was looking at three different studies that superficially appeared to have asked the same question, but their results were wildly different. It took some digging to determine that undisclosed firmographics and some major selection bias were responsible for the divergence in their results.

  2. Scott,
    I really enjoyed this post. It was very timely because I was just having a discussion yesterday about the Gartner study you shared about the rise of the Chief Digital Officer. A rather large, rather notable consulting firm that conducted in-depth research on the CIO suite had a very conflicting experience with that. Not saying Gartner’s wrong or the other firm’s right, just that it’s hard to make a comparison without knowing the firmographics of the two studies. Thanks for reminding everyone of the benefits of being transparent.

    Also, if you have any insights on how to increase our hamster conversions that would be awesome!

    1. Thanks for the comment, Jason — glad this was helpful.

      The major analyst firms are interesting in this regard. The folks actually doing the research are generally very professional and pay attention to all these details, and the full reports that they issue almost always contain details on firmographics and research methodology. But those full reports aren’t generally available to the public — you have to pay to see them.

      However, the marketing/PR teams at those companies often then take a couple of soundbite statistics from those reports and promote them WITHOUT those accompanying details. Of course, those are the stats that tend to get widely picked up and reiterated across blogs and publications and other content marketing (I’m as guilty of this as anyone). But without enough mooring to their original context, those stats can be easily misinterpreted.

      My humble plea to the PR folks in those firms: include some basic firmographics and methodology, even if it’s in a footnote. It only makes your claim more authoritative.

      Separately, I’ll let you know when my HAAS (hamster-as-a-service) product is ready for beta.

  3. Scott,

    Glad to see you join the fight for integrity in marketing research! I have a 6th item to add to your list: testing for statistical differences. Too often researchers draw conclusions on data that just looks different. But you can’t tell if a difference is real by looking at it. Statistical significance is related to sample size as well as variability in the data. You can’t know if the differences are significant without doing the mathematical calculations (or having the computer do it for you!).

    In the absence of statistical testing, researchers risk fabricating meaningless conclusions. I certainly wouldn’t want to base my marketing strategy on meaningless conclusions.
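
    For instance, a quick two-proportion z-test is enough to check whether two reported percentages really differ. Here is a rough sketch with made-up numbers (not from any study mentioned here):

    ```python
    import math

    def two_proportion_z(p1, n1, p2, n2):
        """Two-sided z-test for the difference between two sample proportions."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Did 62% of 150 respondents really answer differently than 72% of another 150?
    z, p = two_proportion_z(0.62, 150, 0.72, 150)
    print(round(z, 2), round(p, 2))  # -1.84 0.07 -> not significant at the 5% level
    ```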

    Keep up the good work Scott!

    Julie

  4. Pingback: Weekly Trumpets: The More Things Change, the More They Stay the Same | Ghergich & Co.

  5. Scott,

    Excellent points. Often, when “trumpeting the stats,” we as marketers tend to jump on what reinforces our viewpoint and what we’re offering, without taking the time to dig deeper. After all, why would we? The data already validates what we’ve been preaching.

    Your piece points out why we should, indeed, turn at least a few shovels full before telling the world how the latest study backs up what we’ve been saying all along. The trouble for many is that even when the study agrees with our viewpoints, it often takes a bit more work to get at the numbers behind it, even if we are of a mind to disclose them.

    In the “at least a post a day” world in which we live, people are more inclined to take shortcuts. Perhaps your piece will push more marketers to look at the foundation, before publishing pictures of all the pretty buildings.

    Thanks,

    Steve
