We present a working review of survey methods based on market research technology. The structure of questionnaires, their distribution and analysis, are considered, together with techniques for increasing response rates.
- research methods
Questionnaire surveys are a cheap and quick research tool used by many medical researchers to investigate various aspects of health and disease. Their popularity is founded on the speed with which results can be obtained without significant capital investment. However, there is a commonly held view that, because of these elements, questionnaires can be easily constructed and used without training. It is such thinking that has led to a limited respect for this methodology and which fails to recognise the discipline that should be applied to their development.
Questionnaire studies can be used in the systematic collection of information and may help to define the incidence of disease, identify aetiological factors and investigate quality of life, as well as predict some aspects of behaviour. Despite these possibilities, only limited interest has been shown by the medical profession in the further development of this methodology.1 Particular criticism has been levelled at survey techniques when they fail to pay attention to reliability and validity.2 Despite such problems, considerable effort has been made by market survey organisations and social scientists to improve questionnaire quality and in this review it is largely their work on better administration and improved structure which will be considered. With appropriate attention, questionnaires can provide reliable information. For example, in a Swedish postal study of bowel symptoms, a medical examination of a sub-sample of people showed there was both good reproducibility and validity of the questionnaire response.3 4 Indeed there are a large number of validated questionnaires in the public domain and many are available for use by researchers, although sometimes at a cost.
What social survey methods are available to the clinician?
Typical techniques that may be used include face-to-face interviews, telephone interviews, or mail surveys. The choice of a method will depend upon:
complexity of questions asked
required amounts of data
desired accuracy of responses
time requirements to complete the project
acceptable level of non-response
cost of the investigation
Questions may be of an open or closed type. Open questionnaires tend to encourage qualitative responses, while more structured ones give quantitative answers. Qualitative responses allow the respondent to comment on the question in a general way, but they are harder to analyse and do not lend themselves to easy measurement. In contrast, quantitative studies require ‘yes’ or ‘no’ type answers and allow a numerical value to be attached to the response.
Face-to-face interviews are probably best suited for qualitative studies. They can be classified according to their degree of structure and directness. Structure deals with the amount of freedom given to the interviewer in adapting the questionnaire to the unique situation posed by each interview. Directness involves the extent to which the respondent is aware of the nature and purpose of the survey. Respondent awareness of the survey's purpose may be significantly increased by unstructured interviews, and therefore their place is probably in qualitative research or the initial development of a structured questionnaire.6 Errors can be introduced by misinterpretation of replies given by the subject, although this may be limited by tape recording the interview. However, such an approach will not eliminate errors due to misinterpretation of answers during data analysis. The greatest variance is seen with attitudinal questions, but there is only limited information on how to predict which items will produce the largest degree of variance.7 8 Despite these comments, it is surprising that interpersonal and analytical skills do not correlate with the success of a face-to-face interview.9 Consequently, the value of specific training of interviewers when questionnaires are well designed and pretested has been questioned, although it is probably of some importance.
Face-to-face interviews tend to be expensive and time consuming. Telephone interviews are cheaper to conduct but, while still fraught with all the problems of face-to-face interviews, have, in addition, problems peculiar to themselves. Of course, telephone surveys reach only that section of society which owns a telephone. In the early 1930s this led to significant selection bias,10 but this is less so in the 1990s. Telephone interviews can prove a quick, effective method of investigation provided appropriately trained interviewers are used.11 However, refusal rates can double from 21% to 41% when interviews last more than 5 minutes.12 In addition, there is evidence that interviewer intonation may affect the outcome of yes/no or agree/disagree surveys13-15 and so produce a larger number of positive responses than expected.
Mail surveys are useful when there is a sharply defined hypothesis or a clear focus on the type of information needed. The major issues are of non-response bias, response quality and item non-response. In addition, they allow only limited control over whether the intended respondent or someone else completes the form.
Five major issues arise when designing a questionnaire:
the need for the data: data should be critical to any analysis and irrelevant questions excluded
the ability of the question to produce data: questions should be unambiguous
the ability of the respondent to answer accurately
the willingness of the respondent to answer accurately
the potential for external events to bias the answer.
When respondents are forced to rely on their memory for specific facts, three aspects of forgetting can affect their responses:
omission: an individual is unable to recall an event that actually took place, which can be a particular problem for some elderly people
telescoping: an individual remembers an event as occurring more recently than it actually happened; substantial effects start to appear in periods as short as a week
creation: an individual ‘remembers’ an event that did not occur.
Phrasing of questions
It is critical that respondents and researcher assign exactly the same meaning to the question. In order to minimise differences the following issues need to be addressed.
are the words, separately and in total, understandable to the respondents?
are the questions ‘loaded’ in any respect?
are all the alternatives involved in the questions clearly stated?
are any assumptions implied by the questions clearly stated?
what frame of reference is the respondent being asked to assume?
Questions can be open-ended, multiple-choice or dichotomous. Regardless of which type is used, the data collected can be very accurate. This comparability between question types has been shown in a number of studies, including one of co-morbidity, health resource utilisation, hospitalisations, and medication use.16 Open-ended questions are the most difficult to analyse, but give an opportunity for respondents to express their views and beliefs and should be included in a questionnaire.
The first questions should be simple, objective and interesting. The overall questionnaire should move from topic to topic in a logical manner with all questions on one topic being completed before the respondent moves to the next. Individual questions should avoid suggesting answers to following questions. The use of dichotomous questions (a yes/no answer) is associated with significant reproducibility and reliability. This is particularly so when collecting factual information about health.17
Before a questionnaire is distributed, a draft should be pretested and then piloted. With these results the form should then be re-evaluated.1 The validity of the questionnaire will initially be restricted to the population on which it was tested. Factors such as cultural differences, age, and sex, will all affect its validity and before it can be used in a wider setting this will again need to be checked. For these reasons it is important that the pilot testing is on a representative group drawn from the final population to be tested. Once a survey has been completed there should be a post-enumeration survey which will check on the reliability of the data that have been collected. If this second survey uses a different style of question it can show any variability within the data. The accuracy of a sample of the data should be checked from as many independent sources as possible.18
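One way of quantifying the reliability check described above is to compare answers from the original survey with those from the post-enumeration re-survey. The sketch below, with purely hypothetical data, computes Cohen's kappa for a dichotomous (yes/no) item, which corrects raw agreement for the agreement expected by chance:

```python
# Sketch: test-retest agreement between an original survey and a
# post-enumeration re-survey, using Cohen's kappa (hypothetical data).

def cohens_kappa(first, second):
    """Agreement between two waves of dichotomous (1 = yes, 0 = no)
    answers from the same respondents, corrected for chance agreement."""
    assert len(first) == len(second)
    n = len(first)
    observed = sum(a == b for a, b in zip(first, second)) / n
    # chance agreement from the marginal 'yes' rates of each wave
    p1_yes = sum(first) / n
    p2_yes = sum(second) / n
    chance = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
    return (observed - chance) / (1 - chance)

# hypothetical answers from ten respondents, surveyed twice
wave1 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
wave2 = [1, 1, 0, 1, 1, 1, 1, 0, 0, 0]
print(round(cohens_kappa(wave1, wave2), 2))  # prints 0.58
```

A kappa near 1 indicates good reproducibility; values near 0 suggest the item is answered inconsistently and should be reworded or dropped.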
As early as 1961, Scott19 was able to identify some of those factors associated with response rates greater than 90%. In subsequent years additional elements have been identified and they include:
the nature of the sponsor, eg, sponsorship by a professional body led to a high response rate amongst psychiatrists21
a relatively short and non-contentious questionnaire which includes a description of the purpose and benefits of the study
notification by mail or telephone22
a handwritten note attached to the questionnaire
a supporting letter from the patient's general practitioner.
There is relatively little the researcher can do to shorten the time to achieve a response but a small-scale preliminary mailing can be used to identify likely problems with response rates and predict the number and timing of responses to the final questionnaire. It generally requires two weeks to receive most of the responses to a single mailing. A mail survey with only one follow-up mailing and no pre-notification will require at least three weeks for data collection.
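The projection from a preliminary mailing can be sketched very simply: scale the pilot's weekly return pattern up to the size of the main mailing. The figures below are hypothetical, chosen to mirror the observation that most responses to a single mailing arrive within two weeks:

```python
# Sketch: projecting the response timetable of a full mailing from a
# small preliminary mailing (hypothetical figures).

def project_responses(pilot_returns_by_week, pilot_size, main_size):
    """Scale the pilot's weekly return pattern up to the main mailing."""
    rates = [r / pilot_size for r in pilot_returns_by_week]
    return [round(rate * main_size) for rate in rates]

# e.g. a 50-form pilot returned 20, 10 and 3 forms in weeks 1-3;
# a 1000-form mailing would then be expected to yield roughly:
print(project_responses([20, 10, 3], pilot_size=50, main_size=1000))
# prints [400, 200, 60]
```

This assumes the main sample behaves like the pilot, which is why the pilot should be drawn from the same population as the final survey.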
In general, although the exact nature and cause may be variable, 3–8% of items in any questionnaire are usually left blank.23 Age, education, and sex have all been thought important,24 as well as lack of familiarity with the topic under investigation. Use of questions which seek an opinion can lead to a poor response.23 Some of these problems can be overcome by question simplification, use of larger type, and asking the same question in several different ways. Although anonymity may be expected to reduce item non-response rates, this has been disputed.20 25 26
In general, the lower the response rate the higher the probability of non-response error. However, this is not always so.24 27 This effect can be reduced by weighting samples, eg, assuming those who take longest to respond to a questionnaire are most similar to non-responders and weighting them appropriately.
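The weighting adjustment mentioned above can be sketched as follows, with hypothetical figures: if the slowest respondents are assumed to resemble non-responders, late returns are up-weighted so that they also stand in for those who never replied.

```python
# Sketch: weighting late respondents to represent non-responders
# (hypothetical data; 1 = yes, 0 = no on a dichotomous item).

def weighted_estimate(early, late, n_nonresponse):
    """Weighted 'yes' proportion: each late respondent carries extra
    weight to represent the non-responders they are assumed to resemble."""
    w_late = (len(late) + n_nonresponse) / len(late)
    total_weight = len(early) + w_late * len(late)
    weighted_yes = sum(early) + w_late * sum(late)
    return weighted_yes / total_weight

early = [1] * 60 + [0] * 40   # 100 early returns, 60% 'yes'
late = [1] * 8 + [0] * 12     # 20 late returns, 40% 'yes'
print(round(weighted_estimate(early, late, n_nonresponse=30), 3))
# prints 0.533 (the unweighted estimate would be 0.567)
```

The adjustment only helps if the underlying assumption (late respondents resemble non-responders) actually holds, which is exactly the caveat raised in the text.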
Response quality refers to the difference between factual and self-reported data, and it is closely correlated with respondents' ability to follow instructions. The length of a questionnaire is probably unimportant, although lack of interest or motivation may produce the ‘straight line’ answering phenomenon28 29 and, in practice, many respondents are put off by questionnaires longer than a single A4 page.
STRATEGIES TO DEAL WITH POOR RESPONSE RATES
Methods used to encourage individuals to respond to mail surveys such as prenotification, cover-letter messages and follow-up contacts can reduce the accuracy of responses.30 Such techniques may encourage ‘guessing’ by uninformed respondents7 but this is not always the case and they can also increase accuracy.28
The provision of first class postage can generate an additional 9% response rate, as shown in a study by Armstrong and Lush.31 Monetary incentives can also increase response rates substantially,28 especially if they are prepaid.32 33 Other factors which may favourably influence response rates include the physical appearance of the questionnaire, including paper colour. Pink paper has been favoured since the 1920s, although evidence for its value is lacking.34
STRATEGIES TO MONITOR QUALITY
After each successive wave of contact with a group of potential respondents the researcher should run a sensitivity analysis: its purpose is to ascertain how different non-respondents would need to be from respondents to alter the significance of the data supplied by current respondents. If the most extreme foreseeable answers by the non-respondents would not alter the decision, no further efforts are needed. If the non-respondents could alter the decision then the researcher should examine the trend over the first, second and third mailings. The attributes of the non-respondents are assumed to be similar to a projection of the trend between early and late respondents. Unfortunately these trends may not hold and should only be used when there are logical reasons to believe the trend will apply to the non-respondents.35
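For a dichotomous item, the extreme-case step of this sensitivity analysis can be sketched as below (hypothetical figures): compute the range the overall 'yes' proportion could take if every non-respondent answered one way or the other.

```python
# Sketch: extreme-case bounds on a survey proportion given non-response
# (hypothetical figures).

def extreme_bounds(yes_among_respondents, n_respondents, n_nonrespondents):
    """Lowest and highest possible overall 'yes' proportions."""
    n_total = n_respondents + n_nonrespondents
    low = yes_among_respondents / n_total                        # all non-respondents say 'no'
    high = (yes_among_respondents + n_nonrespondents) / n_total  # all say 'yes'
    return low, high

# 120 of 200 respondents said 'yes'; 100 people never replied
low, high = extreme_bounds(120, 200, 100)
print(round(low, 2), round(high, 2))  # prints 0.4 0.73
```

If any conclusion drawn from the survey would be the same anywhere in this interval, non-response cannot alter the decision and no further follow-up is needed.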
Another and perhaps better method is to sub-sample non-respondents and approach them through personal or telephone interviews. The results can then be projected to the entire group and the overall survey results adjusted to take non-respondents into account.
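This adjustment can be sketched as a weighted combination (hypothetical figures): the sub-sample's response rate is projected onto all non-respondents, and each group is weighted by its share of the original sample.

```python
# Sketch: adjusting an overall estimate using a follow-up sub-sample of
# non-respondents, e.g. contacted by telephone (hypothetical figures).

def adjusted_estimate(resp_yes, n_resp, sub_yes, n_sub, n_nonresp):
    """Project the sub-sample's 'yes' rate onto all non-respondents,
    then combine the two groups in proportion to their size."""
    p_resp = resp_yes / n_resp
    p_nonresp = sub_yes / n_sub   # assumes the sub-sample represents all non-respondents
    n_total = n_resp + n_nonresp
    return (p_resp * n_resp + p_nonresp * n_nonresp) / n_total

# 150 of 250 respondents said 'yes'; of 50 non-respondents interviewed
# by telephone (out of 150 non-respondents in total), 15 said 'yes'
print(round(adjusted_estimate(150, 250, 15, 50, 150), 3))
# prints 0.488 (versus 0.6 among respondents alone)
```

The key assumption, as in the text, is that the interviewed sub-sample is representative of non-respondents as a whole.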
Mail surveys can be effective methods of collecting data. Provided basic rules about design, reliability, and validity are adhered to and some methods of achieving adequate response rates are adopted, the data will be robust. This is especially so if some account is taken of non-responders. From such data valid conclusions can be drawn. Such research can and will continue to stimulate hypotheses and improve quality of care. However, in view of the ‘questionnaire fatigue’ now experienced by some doctors,36 it is essential that data collected in this way have real value to the participants in the study and not just the researchers.