The Do-It-Yourself Guide to Valid Marketing Research, Part 2
Apr 1, 2006 8:00 AM, By Don Kreski
Conducting your own survey research.
Market research can be an enormous help in decision-making, but it’s not often used in the AV industry. Perhaps that’s because of the cost or our industry’s lack of research experience.
That doesn’t have to be the case. Last month we explored how you can do valid, inexpensive qualitative research; this month we’ll look at doing your own surveys.
Survey research makes a lot of sense if you need to make an important decision or if a question comes up involving a lot of money. There are times when you can just turn your sales force loose and see what happens, but often it’s a big help to measure your customers’ preferences before you act.
For example, when I was marketing manager at a Chicago-area AV integrator, we spent almost two percent of our gross revenue on marketing activities, including a website, catalog, direct mail, email, literature, and advertising programs. I surveyed our customers to find out which of these they used regularly and which they ignored. Their answers were not always what we expected (for example, the paper catalog scored much higher than any of us would have thought), but they were a big help in deciding how to allocate our funds.
What’s the best method to distribute our surveys? Should we use a mail, email, web-based, telephone, or perhaps a face-to-face instrument? To answer a question like this, it helps to understand the science behind survey research.
Who to survey
The keys to any survey are the clarity of the questions and the size and representativeness of the sample you choose.
First, let’s look at the reliability of your survey. The confidence that you can have in a given result comes from a formula that relates the sample size to an acceptable magnitude of error. Researchers most commonly use three sample sizes derived from this formula.
If you use a sample of 100, you can be confident that in 95 percent of the surveys you do, your results will be within ± 10 percent of the true result. This sample size is normally used only for rough estimates.
If you use a sample of 400, you can be confident that in 95 percent of your surveys, your results will be within ± 5 percent of the true result. This sample size is the one most often used in market research studies.
If you use a sample of 1,000, you can be confident that in 95 percent of the surveys you do, your results will be within ± 3.2 percent of the true result. Samples of 1,000 to 2,000 are most commonly used in political polls.
Surprisingly, these numbers do not depend on the size of the population you’re trying to understand, as long as that population is much larger than the sample. So whether you’re looking at a market of 5,000 or a nation of 200 million, the same sample size will give the same reliability level for your survey.
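All three figures come from the standard margin-of-error formula for a proportion at 95 percent confidence, using the worst case of a 50/50 split. A short sketch (the function name is mine) reproduces the article’s rounded numbers:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error at 95 percent confidence (z = 1.96)
    for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Prints roughly ±9.8%, ±4.9%, and ±3.1% -- the article's ±10, ±5,
# and ±3.2 percent figures, rounded.
for n in (100, 400, 1000):
    print(f"n = {n:4d}: ±{margin_of_error(n) * 100:.1f}%")
```

Note that n appears only under the square root, which is why quadrupling the sample (100 to 400) merely halves the error, and why population size never enters the formula.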
There are three things you need to accomplish in choosing a sample for a survey: the population you sample from has to represent your market accurately, the sample has to be random, and you have to stick to your sample.
First, define your population; next, identify a representative source of names and contact information for that population. If you’re looking at your own customers, you might use your whole customer list; if it’s a vertical market, consider magazine subscription lists, association membership rolls, or possibly trade show attendance lists. Next, you need to choose people randomly from that list. Excel can generate random numbers with the formula =RANDBETWEEN(1, N), where N is the number of names on your list (discard any duplicates it produces). You’ll get a string of numbers, say 3, 15, 37, 89, and so on. To use them, simply pick the third person on your list, then the 15th, the 37th, and so on.
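If your list lives outside Excel, the same random draw takes a few lines of Python; the customer list here is a made-up placeholder. Python’s `random.sample` draws without replacement, so unlike repeated RANDBETWEEN values, it can never pick the same person twice:

```python
import random

# Hypothetical source list of 5,000 names drawn from your customer database
customer_list = [f"Customer {i}" for i in range(1, 5001)]

# Draw 400 distinct names at random -- sample() never repeats an entry
sample = random.sample(customer_list, k=400)
```

The choice of k = 400 matches the ±5 percent sample size discussed above; substitute 100 or 1,000 for the other reliability levels.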
Choosing the medium for your survey
For your survey to be as valid as your sample size suggests, you have to get a completed survey from every randomly chosen participant. Obviously you won’t reach everyone, and not everyone will agree to answer your questions. But each time you fail, you introduce error, because the reasons why people refuse can influence your results.
For example, if you’re surveying pastors, you can expect the busiest pastors to be the ones most likely to refuse your survey. Why are they busy? Well, very likely because they have large, growing congregations, the kind that most need AV systems. Leave them out and you may exclude the opinions of your best potential customers.
Therefore, to get the best possible results, you have to do your best not to leave anyone out. If you don’t get an answer, call that person back. If he won’t agree to participate, explain how important his answers are; if he still refuses, offer payment or another incentive. To address privacy concerns, promise to keep answers anonymous.
A good way to judge whether a survey is valid is to ask if participants were selected randomly. You’ll often see surveys posted on websites, with readers invited to participate if they wish. Ask yourself what these self-selected participants can tell us. Is your market really defined by people who happen to visit that page and have time on their hands to answer?
For all these reasons, telephone surveys tend to be the most valid, with email, if conducted carefully, a close second. When I do telephone research, I normally hire a telemarketing firm to do the calling. You can also do well if you have an in-house telemarketer, but it’s a tough project for a secretary or clerk. Email is easier to administer, but anti-spam programs make it that much harder to get through.
Whatever method you choose, you need a way to track who responds to your survey. By that I mean not what a given person answers (results should be anonymous), but whether they’ve answered. Decide how many times to re-contact those who have not (three times is a common choice), decide if you’ll accept substitutions from the same organization (an associate pastor if the pastor refuses), and consider alternate ways to re-contact non-participants (follow up an email survey with telephone calls). When you give up on one person, go to a B list, also randomly chosen.
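The bookkeeping described above can be sketched in a few lines of Python; the names, the three-attempt limit, and the simple answered/not-answered model are all illustrative assumptions, not a prescribed system:

```python
MAX_ATTEMPTS = 3  # a common choice for the re-contact limit

def run_survey(a_list, b_list, answers):
    """Track who has completed the survey -- not what they answered.
    After MAX_ATTEMPTS failed tries, substitute the next name from the
    (also randomly chosen) B list. `answers` maps each name to True
    (will answer) or False/missing (never reached)."""
    completed, pending = set(), list(a_list)
    while pending:
        name = pending.pop(0)
        for attempt in range(MAX_ATTEMPTS):
            if answers.get(name, False):
                completed.add(name)
                break
        else:
            # Gave up on this person; draw a replacement from the B list
            if b_list:
                pending.append(b_list.pop(0))
    return completed

# Hypothetical run: Pastor B never answers, so Pastor C is substituted
completed = run_survey(["Pastor A", "Pastor B"], ["Pastor C"],
                       {"Pastor A": True, "Pastor C": True})
```

Keeping the completion log separate from the answers themselves, as here, is what lets you honor the anonymity promise while still chasing non-respondents.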
Dealing with bias
There’s another important potential problem.
You have to be sure that the questions you ask are straightforward and meaningful. For example, you may remember when Coca-Cola tried to change their formula in 1985. They based that decision on 180,000 blind taste tests, the largest sample ever taken in a market research study. The tests proved that most consumers preferred the taste of the new Coke, but the researchers never took the reactions of loyal customers into account. Sample size means nothing if you ask the wrong question or an ambiguous question.
For this reason, no matter what your delivery medium, do the first five or 10 surveys yourself by telephone. Does the person you’re talking to always understand what you’re asking? If you’re using a multiple-choice format, do their answers fit into your choices? When you’re finished, explain what you’re trying to measure and see if they have any comments. Chances are you’re going to want to revise at least a couple of your questions based on what you learn.
Another big problem is bias in interpretation. There’s a natural human tendency to fit new data into old opinions, rather than the other way around. One way to minimize bias is simply to write up a report. It’s a lot easier to look at something objectively if you have it in black and white. You can write a long, detailed report or simply put what you learn into tables, but make sure you write it down.
What happens if you go through all this effort and your boss or fellow managers refuse to accept something you’ve learned? First, know that it’s going to happen. None of us are free from bias. But second, realize that without hard data, every decision you make is going to be based almost completely on bias. Your decisions will never be perfect, but you can make much better decisions with research. That will mean more profits for your company.
Next month we’ll take a look at the most commonly used survey instrument, the customer-satisfaction card.
Questions? You can reach Don Kreski at www.kreski.com/contact.html.