Survey-based research: Performing the survey

Edward A. Panacek, MD, MPH
Edward A. Panacek, MD, MPH, is professor of emergency medicine and clinical toxicology at the UC Davis Medical Center in Sacramento, California.
      This is the second article in this series on performing survey-based research. The first article provided an overview of the field of surveys and covered basic issues in survey design. Read that article first. It included a list of 10 commandments recommended for performing valid survey research and discussed many of them. This installment covers the remainder of that list, as well as issues of performing the survey itself, collecting the data, and considering some unique analytic approaches.

      Pilot Test the Survey

Pilot testing is generally advisable for any research project. The exception would be when the investigator already has extensive experience in the field and has successfully used the research instruments previously. However, any new survey instrument should be pilot tested, particularly if it has undergone only “face” or “content” validation (see the prior article). Even a survey instrument validated in a previous study may not perform the same in a new environment, so pilot testing is still advisable. Pilot testing can be performed using colleagues or research assistants, but is best when it involves actual study candidates. The goal is to identify items that are misunderstood and closed-ended questions that do not include all of the relevant options. Never assume a question is clear until it has been pilot tested. Pilot testing also helps further validate a survey questionnaire before full implementation.

      Plan for Nonresponders

Surveys often suffer from unexpectedly low response rates. Extremely large national marketing surveys may average only a 10% response rate; however, using complex sampling and response-weighting techniques, they are still capable of producing accurate results. Novices do not have access to such sophisticated design and analysis approaches and need much higher response rates to ensure valid results. Moderately large surveys, such as one of all flight nurses, usually achieve response rates in the 40% to 50% range. Again, if scientifically valid sampling and analysis techniques are used, the results can be representative and valid.
      However, most novice researchers engage in smaller surveys, such as surveys of all of the helicopter emergency medical services programs in the United States or in their state. The target response rates for such studies should be at least 85%. The reason relates to a simple question: If all the subjects who did not respond to the survey had responded with answers that were the opposite of the responses obtained, would it substantially change the study results and conclusions? Generally, if only 5% to 10% of the survey population had entirely different answers, the main survey results would not substantially change. However, if the response rate is only 50% and, if the nonresponding 50% had very different answers, it could completely change the study results and conclusions. Therefore, high response rates, at least 85%, are sought in smaller, simpler surveys. Otherwise, the results and conclusions could be invalid. All surveys should have a goal of a high response rate and a prospective plan for dealing with nonresponders in place before beginning the study.
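The worst-case question above can be checked with simple arithmetic. The following is a minimal sketch in Python, using invented counts for a hypothetical yes/no survey item; it bounds the true proportion by assigning all nonresponders first one answer and then the other.

```python
# Worst-case sensitivity check for nonresponse (hypothetical numbers).
# Question: could the nonresponders, answering "opposite," flip the result?

population = 200            # surveys sent
responses = 170             # surveys returned (85% response rate)
yes_among_responders = 119  # responders who answered "yes" (70%)

nonresponders = population - responses

# Bound the true "yes" proportion by assuming every nonresponder
# answered "yes" (upper bound) or "no" (lower bound).
lower = yes_among_responders / population
upper = (yes_among_responders + nonresponders) / population

print(f"Observed among responders: {yes_among_responders / responses:.0%}")
print(f"True proportion lies between {lower:.0%} and {upper:.0%}")
# At an 85% response rate both bounds stay above 50%, so the conclusion
# holds; rerun with responses = 100 and yes_among_responders = 70 and
# the interval straddles 50%, leaving the conclusion in doubt.
```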
      There are established techniques for improving response rates. Not all are applicable to every project. Consider the perspective of a respondent who receives the survey. What would make it more likely for them to complete it?
      The first principle is to keep the survey short and focused. Many people will not answer a long survey, so shorten wherever possible. Focus on the questions truly necessary to answer the primary research question. Eliminate other peripheral or redundant questions.
Second, make the survey user-friendly. Poorly organized surveys that are difficult to follow are not well received. Computer software makes it easy to improve the design and readability of surveys. Take the time to do so, and solicit suggestions from volunteer reviewers.
      Third, people respond better when they believe the survey is important. If you can generate a sense of importance for your study, response rates will increase. There are multiple options, depending on the nature of the project. Having the official endorsement of a national professional organization, such as the National Emergency Medical Services Pilots Association, can be quite helpful. Preceding the survey with a letter or email notifying the target audience in advance that it has the official support of the organization, or emphasizing the importance of the project, can improve response rates.
      Fourth, incentives can also help increase survey response rates. Small monetary gifts or gift certificates can be remarkably effective. However, this requires a study budget sufficient to support that option, which many novice researchers may not have.
Fifth, every survey that uses a mailing (direct postal or email) distribution should plan on second or even third mailings to increase the response rate. This should be built into the study plan; for example, if postage is involved, the study budget should include the cost of possible additional mailings. There are different approaches to performing multiple mailings. The simplest is to send another mailing to the entire study population, but this may be perceived as a nuisance by individuals who have already responded.
A more effective approach is to target repeat mailings to the nonresponders, but this requires tracking those individuals. The most common tracking approach that still maintains anonymity is the “double envelope” technique. The outside “reply” envelope has a code on it that identifies the individual or the site. When an envelope is received, that respondent is checked off the response list, and the envelope is discarded. The “inside” second envelope (or at least the inner response form) lacks identifiers, so the answers cannot be traced directly to the respondent, maintaining anonymity. This is the preferred approach.
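The bookkeeping behind the double-envelope technique is simple enough to sketch. The Python fragment below uses invented codes and data structures; the essential point is that the response list and the answers are never joined.

```python
# Minimal bookkeeping for the double-envelope technique (hypothetical data).
# The outer-envelope code identifies who responded; the inner form does not.

sent = {"001": "Program A", "002": "Program B", "003": "Program C"}
responded = set()

def log_outer_envelope(code: str) -> None:
    """Check the code off the response list, then discard the envelope."""
    responded.add(code)

# Inner forms are stored separately, with no link back to any code.
anonymous_answers = []

def log_inner_form(answers: dict) -> None:
    anonymous_answers.append(answers)

log_outer_envelope("002")
log_inner_form({"q1": "yes", "q2": 4})

# The second mailing goes only to nonresponders.
nonresponders = [code for code in sent if code not in responded]
print("Remail:", [sent[c] for c in nonresponders])
```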
      When these techniques fail to result in an adequately high response rate to a mailed survey, there is the option of selected follow-ups. For this, a representative sample subset of nonresponders is selected, and they are directly encouraged to respond. This could involve personal pleas to colleagues or increased incentives. It could involve individual phone calls to elicit responses directly by phone. Although there are important issues involved in each approach, the goal is always to obtain more data or answers from the nonresponding population. This is important not just to increase the response rate but also to help interpret the study results during the analysis phase (discussed later).
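One defensible way to choose that representative subset is a simple random sample of the nonresponder list, as in this minimal sketch (the codes and sample fraction are hypothetical):

```python
import random

# Hypothetical nonresponder codes carried over from response tracking.
nonresponders = ["001", "003", "007", "012", "015", "018", "021", "024"]

random.seed(42)  # reproducible selection for the study record
follow_up = random.sample(nonresponders, k=max(1, len(nonresponders) // 4))
print("Contact directly:", follow_up)
```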

      Issues in Designing the Survey Instrument

This article does not have sufficient space to examine each of the important issues in survey content and construction; the reader is referred to a list of in-depth references for more information (see Suggested Readings). However, a few basic principles warrant mention. First, addressing “sensitive” topics, such as drug use or sexual behavior, can be a problem in survey research. This is an area in which it is very important to pilot test the survey instrument in a relevant population, because it will help identify items that are particularly problematic. When concerns exist regarding overly sensitive questions, it is important to offer the respondent a “decline to answer” option. That way, it is clear that the individual did not simply skip the question but rather chose not to answer, a distinction that matters in the data analysis phase.
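In the data set itself, that distinction survives only if declines and blanks are coded differently. A minimal sketch, with hypothetical sentinel codes:

```python
# Distinct codes keep "declined to answer" separate from "left blank"
# so the analysis phase can treat them differently (hypothetical coding).
DECLINED = -97   # respondent chose "decline to answer"
SKIPPED = -99    # item left blank

responses = [5, 3, DECLINED, 4, SKIPPED, 2]

valid = [r for r in responses if r >= 0]
declined = sum(1 for r in responses if r == DECLINED)
skipped = sum(1 for r in responses if r == SKIPPED)

print(f"n={len(valid)}, declined={declined}, skipped={skipped}")
```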
The order of questions can be important. Questions that individuals might object to answering outright should be put toward the end of the survey. Start with questions that are less threatening or inflammatory and “warm up” to the more problematic ones. This increases the likelihood of completion: once individuals have completed most of a survey, they have a vested emotional interest in completing the rest, even if the later questions are less comfortable.
      It is believed that the average American reads at a fifth-grade level, so survey instruments, and other research materials, should be geared to that level. Some word processing programs allow documents to be checked for readability. Avoid the use of medical jargon and terms that the lay public may not understand. In some settings, the survey may need to be translated into additional languages to have an adequate sample of the patient population.
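Readability can also be checked programmatically. The sketch below computes the Flesch-Kincaid grade level; it uses a rough vowel-run heuristic for syllable counting, so treat its output as approximate.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, minimum one per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences)
    + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)

question = "Do you like being a flight nurse?"
print(f"Grade level: {fk_grade(question):.1f}")  # aim for about 5th grade
```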

      Using Scales in Study Questions or Data Forms

Many studies and most surveys use some form of scale to quantify responses. Scales are readily transformed into numeric variables that can be tabulated and analyzed statistically, and they are particularly good for converting subjective answers into quantitative data. Numerous scales exist; the most common are briefly described below and are covered in greater detail in a later part of this series.
Likert scales generally present five answer options that represent ordinal data. A statement or question is provided, and the respondent is asked to select the most appropriate response. The responses are ordered by degree, rank, or severity. For example, if asked, “Do you like being a flight nurse?” the possible answers could be “definitely yes,” “yes,” “not sure,” “no,” and “definitely not.” Likert scales are perhaps the most common scales in clinical research.
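For analysis, each response maps to an ordinal code. A minimal sketch using the flight-nurse item above; the 1-to-5 mapping is a common convention, not something prescribed here, and because the data are ordinal the median is the appropriate summary.

```python
# Map the five Likert responses to ordinal codes (hypothetical item).
LIKERT = {"definitely yes": 5, "yes": 4, "not sure": 3,
          "no": 2, "definitely not": 1}

answers = ["yes", "definitely yes", "not sure", "yes", "no"]
scores = sorted(LIKERT[a] for a in answers)

# Ordinal data: report the median rather than the mean.
median = scores[len(scores) // 2]
print(f"Median response: {median} ({len(scores)} respondents)")
```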
      A numeric description scale (NDS) is similar to a Likert scale but generally includes more answer options and also provides numbers to anchor the responses along an ordinal scale. It usually provides descriptions at either end and asks the individual to select a number. The 10-point pain scale is the most commonly used example of this scale.
The visual analog scale (VAS) is the third most common scale in clinical research, after the Likert scale and the NDS. The VAS provides a horizontal line on a piece of paper, representing all possible answers, and asks the respondent to make a mark along that line. It has anchors at either end, such as “none” to “all” or “easy” to “severe.” The line is generally 10 cm (100 mm) in length, and the score is calculated by measuring from the 0 (negative) end to the respondent's mark, in millimeters. VAS scores are commonly used in pain research. Because the scale offers 100 possible answers (in 1-mm increments), it is sometimes treated as continuous data and analyzed with parametric statistical tests; however, using a VAS in this way is controversial.
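Scoring a VAS is just measurement, and the continuous-versus-ordinal controversy shows up immediately in the choice of summary statistic. A minimal sketch with invented measurements:

```python
# Scoring a 100-mm VAS (hypothetical measurements, mm from the left anchor).
marks_mm = [23, 47, 75, 31, 62]

# Treated as continuous data, a mean is commonly reported, though (as noted
# above) parametric treatment of VAS scores is controversial; the median is
# the conservative, ordinal-safe alternative.
mean_score = sum(marks_mm) / len(marks_mm)
median_score = sorted(marks_mm)[len(marks_mm) // 2]
print(f"Mean: {mean_score:.1f} mm, median: {median_score} mm")
```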

      Internet-Based Surveys

With the rapid evolution of the internet and computer-based techniques, the ability to conduct online surveys or mass email-based surveys increases every year. The cost of email is quite low, if not free, and web-based surveys may also be inexpensive, depending on the program and site used, making these techniques attractive. However, as of 2006, when internet-based surveys were analyzed in terms of the nature of the responding population, respondents were still found to be a markedly skewed population, not a representative sample. This was true even for physician organizations, where computer access and general computing skills were assumed to be universal. Therefore, be very cautious about using the internet as an instrument for performing a survey; the external validity of the results, if not the internal validity, may be very questionable.

      Survey-Specific Data Analysis Approaches

Data analysis for surveys can involve unique approaches, especially when response rates are low. The best approach is always to take steps to increase responses, but if the rate remains low, special analytic techniques can help. These techniques are based on extensive prior studies of survey respondents: individuals who respond first (earliest) to a survey differ in important ways from those who respond late and those who do not respond at all, and understanding these differences can assist the analysis. The earliest responders are generally those with the strongest opinions on the subject, either positive or negative. Nonresponders usually have much less interest in the subject; they also tend to come from lower income groups, with fewer resources and less education, and language barriers are more common. Most importantly, however, nonresponders tend to have answers most similar to those of late responders.
      When response rates are low, the first step is to analyze the demographics of the responders versus the nonresponders, when such information is available. If the nonresponding population is not substantially different from the responding population, it can be argued that their answers would not be substantially different and would not change the results.
However, many studies lack the detailed demographic information needed for such an analysis. Another option is to divide the responding population into quartiles by response time. The answers of the last quartile to respond (late responders) can then be compared with those of the first two quartiles (early responders) to look for important differences. If there are no substantive differences between the two groups, it can be argued that the nonresponding population would also not be different, based on the knowledge that nonresponders tend to have answers most similar to those of late responders. If, however, the late responders' results are substantially different from those of the early responders, the nonresponding population could be quite problematic. In such a situation, it is best to perform a focused follow-up of a representative sample of the nonresponders whenever possible; otherwise, the survey results could be invalid.
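A minimal sketch of this quartile comparison, using invented yes/no responses ordered by arrival time and a chi-square test from SciPy; with so few observations the p-value is illustrative only.

```python
from scipy.stats import chi2_contingency

# Hypothetical responses ordered by arrival time; 1 = "yes", 0 = "no".
responses_in_order = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1]

q = len(responses_in_order) // 4
early = responses_in_order[: 2 * q]   # first two quartiles to respond
late = responses_in_order[-q:]        # last quartile to respond

table = [
    [sum(early), len(early) - sum(early)],  # early: yes, no
    [sum(late), len(late) - sum(late)],     # late:  yes, no
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"Early yes-rate {sum(early)/len(early):.0%}, "
      f"late yes-rate {sum(late)/len(late):.0%}, p = {p:.2f}")
# A substantial early-vs-late difference suggests the nonresponders
# (who resemble late responders) could change the study conclusions.
```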

      Survey Research Costs

Novice researchers often underestimate the actual cost of surveys. Even though surveys are generally less expensive than other research designs, they still require resources: study personnel time, photocopying, and postage and mailing costs. Academic institutions may have discounted mass-mailing postage rates that can help, but mailing is not free, and if postage is used, the budget should include costs for second and perhaps third mailings. Face-to-face interviews with paid research assistants can average $30 to $100 per subject; using volunteers or unpaid research assistants saves money but may compromise the quality of the results. The cost of telephone follow-up should also be considered. Finally, surveys can require complicated statistical analyses, so including statistical consultation in the budget is recommended.
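A back-of-envelope calculation makes these line items concrete. Every unit cost below is an invented placeholder; substitute local rates.

```python
# Back-of-envelope survey budget (all unit costs are hypothetical).
n = 200                    # surveys mailed
mailings = 3               # initial mailing plus two follow-ups
postage_per_piece = 0.60   # outbound postage, USD
reply_postage = 0.60       # prepaid reply envelope, USD
copying_per_survey = 0.25  # photocopying, USD

mailing_cost = n * mailings * (postage_per_piece + copying_per_survey)
reply_cost = n * reply_postage   # one prepaid reply envelope per survey
phone_follow_up = 20 * 5.00      # 20 calls at an assumed $5 each
stats_consult = 500.00           # assumed fixed consultation fee

total = mailing_cost + reply_cost + phone_follow_up + stats_consult
print(f"Estimated budget: ${total:,.2f}")
```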

      Conclusion

A large body of literature describes accepted scientific methods for performing survey research. This brief article cannot go into detail, and the reader is referred to textbooks and references in the field for further information. Although novices might consider surveys a relatively easy form of research, there are established principles for their proper performance. This article has described major and common pitfalls in performing survey research and steps to take to avoid them.

      Suggested Readings

      The Sage Publications (Thousand Oaks, Calif.) Survey Research Kit contains:
      • The Survey Handbook
      • How to Ask Survey Questions
      • How to Conduct Self-Administered and Mail Surveys
      • How to Conduct Interviews by Telephone and in Person
      • How to Design Surveys
      • How to Sample in Surveys
      • How to Measure Survey Reliability and Validity
      • How to Analyze Survey Data
      • How to Report on Surveys