Some of the most respected tech researchers have blasted cyber security surveys, describing them as “mindless” and “hopelessly flawed.”

Microsoft researchers Dinei Florencio and Cormac Herley didn’t mince words in their 2011 report, Sex, Lies and Cybercrime Surveys.

“Our assessment of the quality of cyber crime surveys is harsh: they are so compromised and biased that no faith whatever can be placed in their findings,” wrote the researchers. “We are not alone in this judgement.”

More recently, well-known Gartner analyst Anton Chuvakin wrote a blog post likening some cyber security statistics and polls to “comedy.” Because of these misleading figures, he said, the maturity of the cyber security market is wildly overestimated.

None of this is to say that cyber security surveys don’t serve a purpose. Most of them are well-meaning attempts to understand security problems, threats, solutions and the effectiveness or market penetration of certain technologies.

However, for the specific reasons outlined below, even well-intentioned surveys can disseminate inaccurate, misleading information and overly generalized assumptions.

Vendor bias

Cyber security surveys are often commissioned by vendors as a marketing tool. They want to understand the problems and pain points potential customers face so they’re in a better position to pitch their product as the solution. They also hope media outlets will report on the branded survey results, boosting the company’s name recognition.

This isn’t inherently bad. Vendors should understand their market, which requires feedback from real technology users, and there’s nothing wrong with brand promotion. The problem lies in the fact that some vendors ask leading questions in an effort to get the answers they want, rather than the actual answers. Questions are cleverly worded so that respondents feel compelled to answer a certain way, even if they don’t realize it.

Surveys where vendors partner with an independent and well-respected research firm are less likely to have this bias, so they’re more reliable. Even when the vendor funds the survey, the research firm crafts the questions and writes the final report.

Small sample sizes

The thoughts and opinions of, say, 150 CTOs in North America do not necessarily reflect the thoughts and opinions of every CTO in the world. Most of us understand this to be true in theory, yet we still give credence to surveys that are very limited in scope.

Even surveys that are considered big or broad might be based on the responses of a couple thousand people. These surveys can highlight trends or patterns that are likely true, but they shouldn’t be used to make sweeping, resolute judgments. Always pay attention to the sample size and weigh that number in your assessment of the findings.
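
To put sample size in perspective, here is a minimal sketch in Python of how the margin of error around a reported percentage shrinks as the number of respondents grows, using the standard normal approximation at roughly 95 percent confidence; the sample sizes below are chosen purely for illustration.

    import math

    def margin_of_error(sample_size: int, proportion: float = 0.5, z: float = 1.96) -> float:
        """Approximate 95% margin of error for a survey proportion.

        Uses the normal approximation; proportion=0.5 is the worst case,
        giving the widest possible margin.
        """
        return z * math.sqrt(proportion * (1 - proportion) / sample_size)

    # Illustrative sample sizes: a small CTO panel versus a "big" survey.
    for n in (150, 500, 2000):
        print(f"n={n:>4}: +/- {margin_of_error(n) * 100:.1f} percentage points")

Even a survey of a couple thousand people carries a margin of a few percentage points, and that is before accounting for any bias in how the respondents were chosen.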

If you’re ever in doubt about the reliability of a survey, ask a respected technology researcher or consultant for his or her opinion. Tech analysts, in particular, have their finger on the pulse of the security industry, so they can usually identify skewed or suspect results.

Selection bias

Selection bias occurs when the sample selected is not representative of an entire population. As with scientific studies, accuracy and reliability depend upon the attributes, backgrounds, experiences and demographics of the sample matching those of the overall group.

Selection bias can skew survey results by making an issue or problem appear larger than it is. This often occurs when individuals are allowed to self-select whether they qualify to participate – say, for example, identifying themselves as a cyber security expert when there’s no independent verification that they really have those credentials. Selection bias can also lead to the assumption that the unique experiences of a few individuals reflect the experiences of a much larger group or population, even though that might not be the case.
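
As a rough illustration of how much self-selection can inflate a headline number, here is a small Python simulation; the population size, true breach rate and opt-in rates are assumptions chosen for the example, not real figures.

    import random

    random.seed(0)

    # Assumed population: 10,000 IT staff, 10% of whom have experienced a breach.
    population = [True] * 1_000 + [False] * 9_000

    def opts_in(experienced_breach: bool) -> bool:
        # Assumed opt-in rates: people affected by a breach are far more
        # likely to volunteer for a "breach experiences" survey.
        return random.random() < (0.30 if experienced_breach else 0.05)

    respondents = [p for p in population if opts_in(p)]

    true_rate = sum(population) / len(population)
    survey_rate = sum(respondents) / len(respondents)

    print(f"True breach rate:         {true_rate:.0%}")
    print(f"Rate among self-selected: {survey_rate:.0%}")

With these assumed opt-in rates the survey reports a breach rate several times the true one, even though every individual answer is honest; the distortion comes entirely from who chose to respond.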

To avoid being fooled by selection bias, always look into the methodology that was used to arrive at survey results. How were participants selected, and how were they qualified? What is the margin of error? If the methodology isn’t disclosed, consider that a red flag.

The human element

Humans are inevitably going to make some mistakes when answering survey questions. It’s surprisingly common for respondents to inaccurately assess things like the strength of their organization’s security defenses. Sometimes they overestimate how prepared the company is for a data breach, and other times they underestimate.

As John E. Dunn wrote in a PC World article, this can be partially explained by the Dunning-Kruger effect, whereby people with little skill or knowledge overestimate their own ability. Inaccuracies can also arise because respondents only know the cyber security practices of their own organization. They don’t know how good or bad others are, so they might think they’re doing great.

Human error is impossible to avoid, but, again, it’s crucial to look at the methodology behind the survey. Were the people surveyed the most qualified within the organization to assess cyber security health? Always look at a variety of surveys and research reports, too, rather than basing your opinions and actions on a single poll.