ISPUB.com / Internet Scientific Publications
The Internet Journal of Epidemiology, Volume 14, Number 1

Review

Participation Rates In Epidemiology Studies And Surveys: A Review 2007–2015

C Keeble, P D Baxter, S Barber, G R Law

Keywords

bias, non-response, participation, rates, selection

Citation

C Keeble, P D Baxter, S Barber, G R Law. Participation Rates In Epidemiology Studies And Surveys: A Review 2007–2015. The Internet Journal of Epidemiology. 2016 Volume 14 Number 1.

DOI: 10.5580/IJE.34897

Abstract

Understanding the factors associated with participation is key to addressing the problem of declining participation rates in epidemiological studies. This review aims to summarise factors affecting participation rates in articles published during the last nine years and to compare these with previous findings, to determine whether the research focus on non-participation has changed and whether findings have been consistent over time.

Web of Science was used to search the titles of English-language articles from 2007–2015 for a range of synonymous words concerning participation rates. Predefined inclusion criteria were used to determine whether the resulting articles referred to participation in the context of study enrolment. Factors associated with participation were extracted from the included articles.

The search returned 626 articles, of which 162 satisfied the inclusion criteria. Compared with pre-2007, participant characteristics generally remained unchanged, but were topic-dependent. An increased focus on study design and a greater use of technology for enrolment and data collection were found, suggesting a transition towards technology-based methods.

In addition to seeking increased participation rates, studies should consider any bias arising from non-participation. When reporting results, authors are encouraged to include a standardised participation rate, a calculation of potential bias, and a suitable statistical method where needed. Requiring these elements in journals would allow easier comparison of results between studies.

Abbreviations: Missing at Random (MAR), Missing Not at Random (MNAR), Doctor of Philosophy (PhD), Short Message Service (SMS).

 

Introduction

Identification of the factors associated with participation could help to understand why participation rates have been declining over the last 30 years or more [1, 2]. Reasons for the decline itself include an increasing number of refusals, more stringent participant criteria and changes in lifestyle [1]. Refusals can be explained by the increasing number of research requests, general decreases in volunteering and the increased expectations of the participants, while changes in lifestyle include more mobile telephones, fewer telephone directories and longer working hours [1]. Regardless of the underlying motivation, non-participation can lead to participation bias and cause the results to not be generalisable to the intended population [1].

A range of definitions has been suggested for calculating participation rates [3], which can cause problems when comparing studies. Although standard calculation formulae developed by experts exist [3, 4], as far as we are aware there is no formal consensus, even within journals, on which rate to adopt or which calculation to use [5]. This may result in researchers selecting the definition or formula which shows their work most favourably. For a given definition, the rate can also differ merely because of the assumptions made by the researcher, leading to different rates being quoted from the same study or survey [6].
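As an illustration of how a standardised formula pins down these assumptions, the sketch below computes a response rate in the style of AAPOR's Response Rate 1, in which complete interviews are divided by all eligible and potentially eligible cases. The function name and the disposition counts are illustrative choices, not a prescribed implementation.

```python
def response_rate_1(complete, partial, refusal, non_contact, other, unknown=0):
    """Sketch of an AAPOR-style Response Rate 1: complete interviews
    divided by all eligible cases, counting every case of unknown
    eligibility as eligible (the most conservative assumption)."""
    denominator = complete + partial + refusal + non_contact + other + unknown
    return complete / denominator

# Hypothetical disposition counts: 500 complete interviews, 50 partials,
# 100 refusals, 200 non-contacts, 25 other non-interviews and 125 cases
# of unknown eligibility give 500 / 1000 = 0.5.
rate = response_rate_1(500, 50, 100, 200, 25, 125)
```

Reporting which disposition categories enter the denominator, alongside the rate itself, is what makes the figure reproducible and comparable between studies.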

In 2007, a detailed review of participation rates in epidemiology studies was conducted, including what was known about who participates in epidemiologic studies [1]. However, we could find no such review since then. Briefly, the 2007 review found that participation was associated with individual characteristics such as age, sex, race/ethnicity, socioeconomic status, education level, employment status and marital status [1]. Regarding study design, participation rates tended to be higher for face-to-face recruitment than for less personal means and, understandably, for studies requiring less commitment from the participant [1]. Studies offering monetary incentives were generally found to increase participation, as were those offering a choice between modes such as paper surveys or telephone communications [1]. Web-based surveys were beginning to be utilised and were found to be particularly useful when recruiting young or college participants, but they were not universally successful, partly owing to concerns such as data security [1].

In an attempt to increase participation rates, researchers have utilised these findings and incorporated incentives into their studies [7] or oversampled groups of people known to participate less frequently [8]. While these approaches may aid participation [9], they may not reduce the bias associated with non-participation. “(Non)participation bias refers to the systematic errors introduced in the study when reasons for study participation are associated with the epidemiological area of interest” [1]. Non-participation can therefore lead to participation bias, but bias does not always occur [6, 5, 10]. A study with a very low participation rate may contain little or no bias, while another study with high participation rates may have considerable problems associated with participation bias [5, 11]. Participation rates are often reported for a study as a whole; however, participation bias may vary from one estimate within a study to another [12], making participation rates a poor proxy for participation bias [6]. Participation bias is known to invalidate conclusions and generalisations which would otherwise be drawn, yet unfortunately its consideration is frequently omitted from articles [13].

Advances in technology, increased use of the Internet, more open data and increased data sharing have all occurred in recent years. These changes may have affected the way in which data are sought and recorded, and in turn may have affected participation rates. In addition, societal shifts may have led to differences in participant characteristics. This work is intended to build upon the findings of the previous review [1], which summarised studies prior to 2007, and so includes more recent articles from 2007–2015. We wish to know whether these developments have influenced the type of person participating and the way in which they do so. This will be answered using a literature review, with the findings available to inform future work requiring participants, or for analysis by behavioural psychologists with a view to a multidisciplinary approach to understanding participation.

Method

Inclusion Criteria

Web of Science [14] was used to search titles of English (language) articles from 2007–2015 for a range of synonymous words concerning participation rates. The title search used on 8th September 2015 was TI=("selection rate*" OR "participat* rate*" OR "nonresponse rate*" OR "response rate*" OR "nonparticipat* rate" OR "cooperat* rate*" OR "noncooperat* rate*"). This returned 626 articles for further consideration.

The abstract of each of the 626 articles was read to determine whether the article met the next phase of the inclusion criteria, which ensured participation rates were in relation to a study or survey. Specifically, participation here refers only to the willing enrolment, or involvement, of an individual in a survey or study, where adequate data are provided to assist the research question. Synonyms of participation include ‘(self-)selection’, where an individual volunteers, ‘cooperation’, where an individual agrees to be involved, and ‘response’, relating to, say, the return of a completed questionnaire. These synonyms are therefore used in the context of participation in research rather than in the general sense of each term. Linking these terms is the willingness of the individual to contribute data. Similarly, non-response, non-cooperation and non-participation were of interest, to understand those individuals who decline a survey or study. If the abstract was not sufficiently detailed to determine inclusion, the full text was sought and read. All study designs were included, such as cohort studies, case-control studies, trials and surveys, with the overarching requirement that the individual had to consent to involvement in the data collection, that is, willingly participate.

From the 626 article abstracts read, 162 articles satisfied the inclusion criteria. The results included a brief summary of each article, its year of publication and any participation findings. The results were later split into two sections: those concerning the person participating and those relating to the study design.

Exclusion Criteria

Unintended interpretations of the search terms such as ‘response’ to an intervention, ‘participation’ in a physical activity, or ‘cooperation’ with an event were not of interest, and hence these articles were excluded from the review.

During the final phase of the inclusion criteria, 464 articles were excluded, the main reason being that the term ‘response’ related to a patient response to a drug or treatment (282). Other reasons were repeated articles (6), articles regarding best practice (67), articles where ‘participation’ described the uptake or acceptance of an intervention (26), articles investigating the labour force participation rate (22), articles where ‘participation’ described involvement in a sport or activity (27) and articles where ‘response’ described a reaction to a stimulus or similar (34).

Results

Participant Characteristics

Characteristics of the people found to participate, or not, are listed starting with the most frequently reported theme, and their correspondence with previous findings is noted.

Age was found to differ between participants and non-participants, as in the 2007 review [1], with studies reporting findings such as those who were 30+ [15], 40+ [16], 51+ [17], 75+ [18] or older [19, 20, 21, 22, 23] being more likely to participate. Although these studies used different age categories, they each concluded that older people were more likely to participate than younger people. One study simply stated that age was important [24], while another found those who were younger [25] were more likely to participate in a text messaging study, although this may be a finding unique to text messaging.

Higher education levels were associated with higher participation rates in studies [26, 27, 28], or the education level of participants was found to differ by sampling technique [20]. This was a known characteristic associated with increased participation in 2007 [1]. Being a homeowner was also found to be associated with increased participation probability [28]. There may be an association between education levels and homeownership, or between homeownership and age. Employment type was associated with participation [24]; full-time employment was associated with lower participation rates [20, 23], while unemployment was associated with increased participation rates for studies offering incentives [29]. This may be related to the amount of free time potential participants have to complete a survey or be involved in a study, but does contradict the findings in 2007 [1].

Race and ethnicity differed between those who chose to participate and those who did not [24]. Those more likely to participate were found to be non-Asian [30], white [31, 32, 33], or Western [34], generally agreeing with the previous review [1]. Participation was found to differ by country [35], which may incorporate factors such as ethnicity and race. Location generally was also found to differ between participants and non-participants [24, 36], with those in rural locations more likely to participate [32]. Location may be associated with other factors discussed earlier, such as employment status, education level and homeownership.

Sex was found to be associated with participation [24, 22], with females more likely to participate than males [19, 34, 32, 37, 16, 38], as commonly found in studies through time [1].

Smoking status was found to be associated with participation [39], with non-smokers (or those who are not lifelong smokers) usually more likely to participate [40, 41, 28, 23], as also found in the earlier review [1]. Smoking may be a factor specifically related to the study of interest, since it is unlikely to be recorded routinely for all studies.

Marital status was found to differ between participants and non-participants; with those classed as married [28] or not single [15] being more likely to participate, again agreeing with previous findings [1].

Socioeconomic class was associated with participation, with those categorised as not lower class [34] or not manual social class [28] being more likely to participate. Similarly, previous work has concluded that upper class people or those with a higher socioeconomic status are more likely to participate [1].

Physicians with fewer than 15 years’ experience were found to be more likely to participate than those with more experience [32], which may be specific to physicians or even to this particular study. Mental health problems were associated with lower participation [34], although this is a variable which may only be recorded in studies where mental health is of interest. Obesity was found to be associated with lower response rates [28], but again obesity is a factor often recorded only in studies associated with weight. Multiparous women, or women with preterm deliveries, were less likely to participate in a pregnancy study [15]; these variables are likely to be recorded only in pregnancy or pregnancy-related studies. Lower pain intensity was found to be associated with increased participation probability [23] when the study concerned surgery, which may or may not be generalisable to other surgery studies. These factors are less commonly recorded and hence cannot easily be compared with the 2007 review findings.

Heavy drinkers were assumed to be less likely to participate in alcohol consumption studies [42]. Although specific to this study, or studies of alcohol consumption, it may be that people who indulge in habits with negative connotations are less likely to participate in a study regarding that aspect of their lifestyle. Alternatively, one’s function may be impaired by overindulgence in particular areas such as alcohol consumption or drug use and hence this may affect their participation in a study or their completion of a survey.

Cases were found to be more likely to participate than controls [25], as found in the previous review and frequently in case-control studies [1]. This may be related to their motivation to participate: to find a cause or potentially a cure.

Study Design

Investigations into the procedures or details within a study or survey which may be associated with participation are summarised here, with the most frequently reported themes listed first within each topic. Some are specific to particular studies, whereas others could be generalised to a range of data collection methods or study topics.

Study Design: Prior to the Study

Participation was found to increase with incentives or free gifts in some studies [43, 44, 45, 46, 47, 48, 49, 50, 26, 51, 52, 53, 54, 55, 20, 56, 57, 58, 59], but not in others [60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 21, 74]. Some found that small incentives were not quite sufficient to encourage potential participants [75], while larger incentives were [76]. There were studies comparing sizes of incentives with participation rates, which could help to determine a threshold amongst certain populations of interest, although this may not generalise to all populations. Often, those studies which found incentives did not help enrolment were those offering less valuable incentives. Incentives were also usually more successful in studies which sought to enrol those who are less wealthy, or those who are busy and expect compensation for their time. A small incentive such as a free pen may be sufficient for a short survey collecting non-personal data, but a larger incentive may be required for a survey requiring a blood sample, sensitive data or a significant time commitment. The immediacy of the incentive was also important [77, 78], that is, whether the incentive was given at the time of enrolment or promised at a later date. This mixed influence of incentives was also found in the previous review [1].

Prenotification was found to be helpful in some studies [76, 79, 80, 56, 22, 81], but not in others [82, 83, 84, 85], even when personalised [86]. In 2007 it was thought to be a positive measure [1]. The type of prenotification used was generally found to be unimportant [87]. However, advance mailing of the questionnaire before a telephone survey was found to be associated with reduced participation rates [88].

Study Design: Mode of Contact

Paper surveys have been found to be effective [52], to be required in addition to electronic surveys [89, 90], to be better than web surveys (completed online) [50, 91, 92, 51, 75] or electronic surveys (completed electronically but not necessarily using the Internet) [93, 94, 95, 73], and to be advantageous over telephone surveys [96]. Conversely, an investigation into organisation surveys found participation rates in electronic studies to be as good as or higher than mail [62]. Web surveys were found to be better than mail surveys in a study of PhD (Doctor of Philosophy) holders [97], although offering a web option was associated with decreased participation in another study [69]. Item non-response was similar in web and mail surveys [98], but online surveys were better for open-ended and text answers in a study of item non-response [99]. For web surveys, a welcome screen which described the survey as short and included less information regarding privacy was found to be most effective [100]. Recruitment using a direct email was more successful than through a newsletter [49], and tablet device surveys [101] or Facebook [102] were found to help recruit reluctant or hard-to-reach potential participants. Exclusively online surveys were found not to be suitable for a doctors’ survey [103], generally not effective in a medical practitioner survey [104], or less effective than other modes [105].

Telephone calls can be useful [43, 106, 81], and a simple positive relationship exists between the number of calls made and the response rate [107]. Utilising multiple sources to obtain a telephone number, followed by multiple phone call attempts and postal approaches, was successful at increasing participation rates in one study [108], although this could be viewed as unethical and as a form of harassment or coercion.

Short message service (SMS) was successful in an arthritis study [109] and an SMS reminder was found to increase response rates [37]. Text messaging an invitation received faster responses than email invitations [110] and particular combinations were found to be highly effective, such as an SMS prenotification followed by an email invitation [111].

The previous review also found differences in participation between survey modes [1], but with less emphasis on modes utilising modern technology such as web surveys and SMS. Recent advances in technology may alter the effectiveness of each mode of recruitment now and in future research.

Study Design: Survey Delivery Mode & Design

Mailing was found to be an effective mode [43, 112, 113, 114, 56, 38], and better than emailing [63], although being handed a survey by an acquaintance was found to be more effective than mailing in studies involving older communities [115]. Priority [74] or registered mail [116, 56] was found to be associated with higher response rates [45], but tracked mailing was associated with lower rates [117].

Repeated mailing [74] and reminders [118, 119, 120, 121] successfully increased participation rates, as did rewording the reminder [55]. Follow-up generally was viewed as useful [52, 122], with follow-up more effective for mail than web surveys [123], but not helpful in all cases [113]. One study even found reminders to be associated with decreased participation rates [62]. Sending a newsletter initially was found to be more beneficial than sending a reminder later [124] and electronic reminders were not found to improve response rates in postal studies [125]. This generally supports the previous review finding of increased participation with follow-up [1].

Response rates were not found to differ with envelope type [126], envelope colour [127], whether the material was aesthetically pleasing [128] or enhanced [17], or whether the envelope contained a teaser regarding an incentive [70]. However, the invitation design was found to be significant [129], as were the size and colour of the paper [130]. The location of the respondent code (on the survey itself or on the return envelope) was not found to significantly affect participation rates [131], and neither was numbering the questionnaires [132]. Inclusion of a return stamp aided participation rates [43], and stamped envelopes were found to be more effective than business reply envelopes [124]. Investigations into these factors were not so common in 2007 [1] and so show a recent shift in focus towards how to improve participation rates.

Study Design: Choice and Personalised Surveys

The illusion of a choice between surveys (in fact just a different ordering of questions) was found to increase participation [133], as was locating the demographic data at the start of the survey [134]. Presenting the survey in multiple languages also increased participation rates [135, 136], whereas single-sided (as opposed to double-sided) questionnaires and the Internet were not found to produce significantly improved response rates [137]. Survey length was significant in some studies [43, 120, 138, 121], but not in others [63, 87, 64, 139]. Participation differed with the time of day [37] and with the day of the week in some studies [56, 37], but not in others [117]. These are again areas not covered by the 2007 review [1], and so show recent developments in investigations into participation rates.

A choice of survey mode (i.e. electronic, paper, etc.) possibly increases participation [43, 140, 141, 48, 142, 143], but does not necessarily reduce the error associated with non-participation [141]. These views were also found in 2007 [1]. However, in another study, the addition of a fax option was found to increase response rates, while other electronic options were not [144]. Multiple contact methods can increase participation rates [145], and the preferred survey mode was found to differ between participants of different professions [123]. Similar findings were reported in 2007 [1].

Personalisation of the survey, such as through tailored letters or interaction with the potential participants, was associated with increased response rates in some studies [146, 43, 147, 148, 120, 122, 55, 56], but not in others [149, 150, 151]. Personalisation is another more recent consideration in studies of participation [1]. A persuasive message can be helpful [152] and surveys at an institutional level are more successful at recruiting respondents than those conducted nationally [95].

Study Design: Specific Studies

Participation rates were associated with features exclusive to particular studies, such as the number of days prior to surgery in an arthroplasty study [153], or the type of cancer amongst cancer patients [64]. A child-focused protocol was also found to be more effective in children’s health research than a parent/teacher or teacher-only protocol [154]. A survey of male escorts [155] found increased response rates when the researcher posed as a client rather than a researcher, although this use of deception may be seen as unethical. Sending a female recruiter to recruit male participants increased participation rates [79], as did having a dedicated centre for data collection rather than a generic centre [36]. Generally, the survey content was found to affect participation rates [156], including whether samples such as saliva or blood were required [157]. These findings, specific to particular studies, are not easily comparable with the 2007 review.

Expert help was useful in one study [158], as was endorsement [43], but the addition of a logo or a senior faculty member’s signature was not found to be helpful [159]. One view is that potential participants need to be intrinsically motivated for participation to occur [65], although offering the results from the study was not found to increase participation rates [160].

Study Design: Opt-Out

Allowing potential participants to actively decline a postal questionnaire, rather than actively agree, may be one way to increase participation rates [161], since active consent was found to reduce participation [162]. Alternatively, using default settings in a web survey could be useful [163], but this approach has the potential to lead to biased results with an excess of default responses.

Discussion

Consistency and Changes Through Time

Changes over time have not generally affected the demographics of participants. Only employment status contradicted previous findings [1], with three studies concluding a negative association between employment and participation [23, 20, 29]. One of these studies could be explained by the inclusion of incentives [29] raising participation rates among unemployed people, but the other two concluded that full-time employment was associated with decreased participation, possibly showing a shift in participant demographics. However, this small number of studies is not sufficient to draw any definitive conclusions.

In recent years, greater attention has been paid to techniques which increase participation. Studies researching envelope size, colour, style and composition are examples, with the results seen to differ by target population. This valuable information can be used to inform future studies, to ensure resources are not wasted and that the most suitable sample group is obtained. However, increased participation does not necessarily lead to reduced participation bias, since those participating may still differ from those who do not [141, 57].

The greatest change over time relates to participant recruitment and interaction. Although paper surveys remain the predominant survey mode, web-based approaches are increasingly being employed for recruitment and electronic tools are being utilised during data extraction. Technology has advanced greatly in recent years and is expected to continue to do so, suggesting an even greater involvement of electronic devices in future research. The availability of tablets and smartphones has allowed users to participate ‘on-the-go’ and complete surveys at a time convenient to them. Facilities such as Facebook enable studies to be advertised easily and encourage the involvement of previously hard-to-reach participants. The Internet grants researchers the ability to quickly contact and enrol participants from all over the world, rather than being restricted to those nearby. Advances in technology and the wider availability of devices, in conjunction with social media, could result in significantly higher participation rates, particularly for studies where physical contact is not required. Even for studies requiring contact for blood or urine samples, advertisements can be circulated more widely. There will of course be studies for which this information will not be helpful; examples include recruitment in locations where modern technology is not common, or populations unable or unwilling to use technology. In some instances, this ‘digital divide’ could lead to increased participation bias.

Limitations and Assumptions

One criticism of this review, though one which does not limit the findings, could be the search terms, since 282 of the results related to treatment response and did not satisfy the inclusion criteria. Although common words such as ‘virologic’ or ‘pathologic’ were used in these studies, no exhaustive list of terms would have excluded all treatment articles. This increased data collection time, but ensured no relevant studies were missed. The search was conducted using only the article titles, assuming that research relating to participation would use this or a similar word in the title. The abstract and keywords were trialled for inclusion in the Web of Science search, but since words such as ‘cooperation’ and ‘participation’ are used so frequently in the English language, many results unrelated to the research question were returned. One article met the inclusion criteria but could not be included in the review as it was unavailable using the means available [164]; it compares email and postal survey methods, but its conclusion is unknown.

There were instances where articles reported the same data set, either because the data appeared in multiple studies or because meta-analyses, included to contribute studies not otherwise captured, drew on the same data. Although this may have altered the findings, the effect should be reduced by the large sample of articles reviewed. Repeated articles were excluded from the review.

Study-specific findings were included, perhaps questionably, to demonstrate successful tactics for participation. Since the future direction of studies requiring participation is unknown, it may be that topics rarely studied now increase in frequency in the future, rendering these specific findings generalisable, hence they were not excluded.

Some articles assumed a causal link between study design and response rates, but it is recognised that these may only be associations. Some, such as reminders resulting in reduced participation rates, seem unlikely to be causal.

Association Between Participation Factors

Many of the variables found to be associated with participation may be linked, for example it may be that higher proportions of older people live in rural locations or that more employed people live in urban areas. These are merely speculations, but these apparent reasons for participation or non-participation may be due to another recorded or unrecorded factor for which the identified reason acts as a proxy. Also some variables may differ between participants and non-participants, but may not have been recorded. For example, sex and age are often recorded, but factors such as obesity or pain intensity may only be recorded if relevant to the study. There is always the possibility of unidentified or unrecordable factors being associated with participation.

The Future of Participation

Although factors affecting participation have been considered, some authors correctly highlight that increased participation does not necessarily result in reduced participation bias [141, 57]. Using techniques such as incentives to increase participation rates may in fact increase bias. A shift of focus from participation rates to bias may save time and resources by not chasing unwilling participants, which in turn could be used to increase the sample size with willing participants or to conduct a detailed participation bias analysis [6, 165]. To aid this shift, journals could insist all surveys or studies requiring participants detail a participation bias calculation, for judgment by the reader. Alternatively journals could adopt standardised formulae to calculate rates such as those proposed by The American Association for Public Opinion Research (AAPOR) [3], which would at least provide guidance to researchers and allow easier comparisons between studies.

Regardless of the requirements imposed by journals, authors should provide a participation statement so the reader can compare sample and population characteristics, judge how well the population is represented, and hence assess the generalisability and validity of the results. Providing details of the population of interest can also help in assessing bias; for example, a study may have more female than male participants, but if the study concerns breast cancer survivors, a higher proportion of females is expected. Unfortunately, details of the expected population of interest were not available for all studies reviewed here.

Where participation bias may be a concern, methods developed to reduce this form of bias should be considered. If non-participation causes data to be missing at random (MAR), multiple imputation [166] can be used to replace missing values with estimates calculated from the recorded variables. When non-participation causes data to be missing not at random (MNAR), external resources such as population-level data can be used to draw conclusions [167]. Alternatively, sensitivity analyses can help to estimate the direction and magnitude of participation bias, so that estimates can be adjusted accordingly [168]. The choice of an appropriate method for reducing participation (or selection) bias can be eased by using a guidance tool [169], to ensure the study conclusions are optimal. Researchers should consider participation bias, and readers should not dismiss findings outright on the grounds of low participation rates.
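The sensitivity-analysis idea can be sketched with a simple worked example for a binary outcome: the overall prevalence is a weighted average of the prevalence observed among participants and an assumed prevalence among non-participants, and varying the assumed value bounds the possible bias. All figures below are invented for illustration.

```python
def adjusted_prevalence(p_participants, p_nonparticipants, participation_rate):
    """Overall prevalence as a participation-rate-weighted average of the
    prevalence among participants and an assumed value for non-participants."""
    return (participation_rate * p_participants
            + (1 - participation_rate) * p_nonparticipants)

# Invented scenario: 30% prevalence observed among participants, with a
# 60% participation rate. Vary the assumed prevalence among the missing
# 40% to see how far the true overall prevalence could plausibly lie
# from the observed 30%.
for p_np in (0.20, 0.30, 0.40):
    overall = adjusted_prevalence(0.30, p_np, 0.60)
    print(f"assumed {p_np:.0%} among non-participants -> overall {overall:.1%}")
```

Here the overall prevalence ranges from 26% to 34% across the assumed values, which quantifies the direction and magnitude of the potential participation bias rather than simply dismissing the estimate.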

Non-participation is still an issue in studies and surveys, with different study designs and topics of interest suffering from non-participation in different ways and for different reasons. It is unlikely that one strategy would increase participation rates or reduce participation bias for all studies, but insight and knowledge gained from articles such as those covered here should be used to aid future work. Even negative findings highlight where researchers should not focus their efforts, and hopefully the areas which should be targeted have been identified.

Funding

Claire Keeble is funded by an MRC Capacity Building Studentship. Paul D Baxter, Stuart Barber and Graham Richard Law are funded by HEFCE. The funding sources had no involvement in the study design, in the collection, analysis and interpretation of data, in the writing of the report or the decision to submit the article for publication.

References

[1] Galea S, Tracy M. Participation rates in epidemiologic studies. Annals of Epidemiology 2007;17(9):643–53.
[2] Hartge P. Raising response rates: Getting to yes. Epidemiology 1999;10(2):105–7.
[3] The American Association for Public Opinion Research. Standard definitions, final dispositions of case codes and outcome rates for surveys. http://aapor.org; 2011. Accessed online: 16/12/11.
[4] Council of American Survey Research Organizations. Research guidelines. https://www.casro.org; 2015. Accessed online: 27/08/2015.
[5] Johnson T, Wislar, J. Response rates and nonresponse errors in surveys. The Journal of the American Medical Association 2012;307(17):1805–6.
[6] Davern M. Nonresponse rates are a problematic indicator of nonresponse bias in survey research. Health Services Research 2013;48(3):905–12.
[7] Singer E, Groves R, Dillman D, Eltinger J, Little R. The Use of Incentives to Reduce Nonresponse in Household Surveys. New York: Wiley; 2002.
[8] Rogers A, Murtaugh M, Edwards S, Slattery M. Contacting controls: Are we working harder for similar response rates, and does it make a difference? American Journal of Epidemiology 2004;160(1):85–90.
[9] Beebe T, Davern M, McAlpine D, Call K, Rockwood T. Increasing response rates in a survey of Medicaid enrollees: The effect of a prepaid monetary incentive and mixed modes (mail and telephone). Medical Care 2005;43(4):411–4.
[10] Carter K, Imlach-Gunasekara F, McKenzie S, Blakely T. Differential loss of participants does not necessarily cause selection bias. Australian and New Zealand Journal of Public Health 2012;36:218–22.
[11] Rindfuss R, Choe M, Tsuya N, Bumpass L, Tamaki E. Do low survey response rates bias results? Evidence from Japan. Demographic Research 2015;32:797–828.
[12] Groves R. Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly 2006;70(4):646–75.
[13] Keeble C, Barber S, Law G, Baxter P. Participation bias assessment in three high impact journals. Sage Open 2013;3(4):1–5.
[14] Thomson Reuters. Web of Science. http://webofknowledge.com; 2015.
[15] O’Keeffe L, Kearney P, Greene R. Pregnancy risk assessment monitoring system in Ireland: Methods and response rates. Maternal and Child Health Journal 2015;19(3):480–6.
[16] Matias-Guiu J, Serrano-Castro P, Mauri-Llerda J, Hernandez-Ramos F, Sanchez-Alvarez J, Sanz M. Analysis of factors influencing telephone call response rate in an epidemiological study. The Scientific World Journal 2014;2014.
[17] Hall A, Sanson-Fisher R, Lynagh M, Threlfall T, D’Este C. Format and readability of an enhanced invitation letter did not affect participation rates in a cancer registry-based study: A randomized controlled trial. Journal of Clinical Epidemiology 2013;66(1):85–94.
[18] Chen K, Lei H, Li G, Huang W, Mu L. Cash incentives improve participation rate in a face-to-face survey: An intervention study. Journal of Clinical Epidemiology 2015;68(2):228–33.
[19] Hara M, Higaki Y, Imaizumi T, Taguchi N, Nakamura K, Nanri H, et al. Factors influencing participation rate in a baseline survey of a genetic cohort in Japan. Journal of Epidemiology 2010;20(1):40–5.
[20] Perez D, Nie J, Ardern C, Radhu N, Ritvo P. Impact of participant incentives and direct and snowball sampling on survey response rate in an ethnically diverse community: Results from a pilot study of physical activity and the built environment. Journal of Immigrant and Minority Health 2013;15(1):207–14.
[21] Koloski N, Jones M, Eslick G, Talley N. Predictors of response rates to a long term follow-up mail out survey. Plos One 2013;8(11).
[22] McLean S, Paxton S, Massey R, Mond J, Rodgers B, Hay P. Prenotification but not envelope teaser increased response rates in a bulimia nervosa mental health literacy survey: A randomized controlled trial. Journal of Clinical Epidemiology 2014;67(8):870–6.
[23] Nota S, Strooker J, Ring D. Differences in response rates between mail, e-mail, and telephone follow-up in hand surgery research. Hand 2014;9(4):504–10.
[24] Tate R, Jones M, Hull L, Fear N, Rona R, Wessely S, et al. How many mailouts? Could attempts to increase the response rate in the Iraq war cohort study be counterproductive? BMC Medical Research Methodology 2007;7(51).
[25] Li Y, Wang W, Wu Q, van Velthoven M, Chen L, Du X, et al. Increasing the response rate of text messaging data collection: A delayed randomized controlled trial. Journal of the American Medical Informatics Association 2015;22(1):51–64.
[26] Liu S, Geidenberger C. Comparing incentives to increase response rates among African Americans in the Ohio pregnancy risk assessment monitoring system. Maternal and Child Health Journal 2011;15(4):527–33.
[27] Lippmann S, Frese T, Herrmann K, Scheller K, Sandholzer H. Primary care research - Trade-off between representativeness and response rate of GP teachers for undergraduates. Swiss Medical Weekly 2012;142.
[28] Stafford M, Black S, Shah I, Hardy R, Pierce M, Richards M, et al. Using a birth cohort to study ageing: Representativeness and response rates in the National Survey of Health and Development. European Journal of Ageing 2013;10(2):145–57.
[29] Baron J, Breunig R, Cobb-Clark D, Gorgens T, Sartbayeva A. Does the effect of incentive payments on survey response rates differ by income support history? Journal of Official Statistics 2009;25(4):483–507.
[30] Talaulikar V, Hussain S, Perera A, Manyonda I. Low participation rates amongst Asian women: Implications for research in reproductive medicine. European Journal of Obstetrics & Gynecology and Reproductive Biology 2014;174:1–4.
[31] Kim S, Tucker M, Danielson M, Johnson C, Snesrud P, Shulman H. How can PRAMS survey response rates be improved among American Indian mothers? Data from 10 states. Maternal and Child Health Journal 2008;12 Suppl 1:119–25.
[32] Wiebe E, Kaczorowski J, MacKay J. Why are response rates in clinician surveys declining? Canadian Family Physician 2012;58(4):E225–8.
[33] Marks P, Babcock B, Cillessen A, Crick N. The effects of participation rate on the internal reliability of peer nomination measures. Social Development 2013;22(3):609–22.
[34] Bjertness E, Sagatun A, Green K, Lien L, Sogaard A, Selmer R. Response rates and selection problems, with emphasis on mental health variables and DNA sampling, in large population-based, cross-sectional and longitudinal studies of adolescents in Norway. BMC Public Health 2010;10.
[35] Beghin L, Huybrechts I, Vicente-Rodriguez G, De Henauw S, Gottrand F, Gonzales-Gross M, et al. Main characteristics and participation rate of European adolescents included in the HELENA study. Archives of Public Health 2012;70(1):14.
[36] Banks E, Herbert N, Rogers K, Mather T, Jorm L. Randomised trial investigating the relationship of response rate for blood sample donation to site of biospecimen collection, fasting status and reminder letter: The 45 and up study. BMC Medical Research Methodology 2012;12.
[37] Tolonen H, Aistrich A, Borodulin K. Increasing health examination survey participation rates by SMS reminders and flexible examination times. Scandinavian Journal of Public Health 2014;42(7):712–7.
[38] Garcia I, Portugal C, Chu LH, Kawatkar A. Response rates of three modes of survey administration and survey preferences of rheumatoid arthritis patients. Arthritis Care & Research 2014;66(3):364–70.
[39] Beebe T, Talley N, Camilleri M, Jenkins S, Anderson K, Locke G. The HIPAA authorization form and effects on survey response rates, nonresponse bias, and data quality - A randomized community study. Medical Care 2007;45(10):959–65.
[40] Owen-Smith V, Burgess-Allen J, Lavelle K, Wilding E. Can lifestyle surveys survive a low response rate? Public Health 2008;122(12):1382–3.
[41] Pandeya N, Williams G, Green A, Webb P, Whiteman D. Do low control response rates always affect the findings? Assessments of smoking and obesity in two Australian case-control studies of cancer. Australian and New Zealand Journal of Public Health 2009;33(4):312–9.
[42] Meiklejohn J, Connor J, Kypri K. The effect of low survey response rates on estimates of alcohol consumption in a general population survey. Plos One 2012;7(4).
[43] VanGeest J, Johnson T, Welch V. Methodologies for improving response rates in surveys of physicians - A systematic review. Evaluation & the Health Professions 2007;30(4):303–21.
[44] Keating N, Zaslavsky A, Goldstein J, West D, Ayanian J. Randomized trial of $20 versus $50 incentives to increase physician survey response rates. Medical Care 2008;46(8):878–81.
[45] Thorpe C, Ryan B, McLean S, Burt A, Stewart M, Brown J, et al. How to obtain excellent response rates when surveying physicians. Family Practice 2009;26(1):65–8.
[46] Brennan M, Charbonneau J. Improving mail survey response rates using chocolate and replacement questionnaires. Public Opinion Quarterly 2009;73(2):368–78.
[47] Hawley K, Cook J, Jensen-Doss A. Do noncontingent incentives increase survey response rates among mental health providers? A randomized trial comparison. Administration and Policy in Mental Health and Mental Health Services Research 2009;36(5):343–8.
[48] Balajti I, Darago L, Adany R, Kosa K. College students’ response rate to an incentivized combination of postal and web-based health survey. Evaluation & the Health Professions 2010;33(2):164–76.
[49] Doerfling P, Kopec J, Liang M, Esdaile J. The effect of cash lottery on response rates to an online health survey among members of the Canadian Association of Retired Persons: A randomized experiment. Canadian Journal of Public Health-Revue Canadienne De Sante Publique 2010;101(3):251–4.
[50] Crews T, Curtis D. Online course evaluations: Faculty perspective and strategies for improved response rates. Assessment & Evaluation in Higher Education 2011;36(7):865–78.
[51] Jacob R, Jacob B. Prenotification, incentives, and survey modality: An experimental test of methods to increase survey response rates of school principals. Journal of Research on Educational Effectiveness 2012;5(4):401–18.
[52] Martins Y, Lederman R, Lowenstein C, Joffe S, Neville B, Hastings B, et al. Increasing response rates from physicians in oncology research: A structured literature review and data from a recent physician survey. British Journal of Cancer 2012;106(6):1021–6.
[53] Olsen F, Abelsen B, Olsen J. Improving response rate and quality of survey data with a scratch lottery ticket incentive. BMC Medical Research Methodology 2012;12.
[54] Dykema J, Stevenson J, Kniss C, Kvale K, Gonzalez K, Cautley E. Use of monetary and nonmonetary incentives to increase response rates among African Americans in the Wisconsin pregnancy risk assessment monitoring system. Maternal and Child Health Journal 2012;16(4):785–91.
[55] Sauermann H, Roach M. Increasing web survey response rates in innovation research: An experimental study of static and dynamic contact design features. Research Policy 2013;42(1):273–86.
[56] Pit S, Vo T, Pyakurel S. The effectiveness of recruitment strategies on general practitioner’s survey response rates - A systematic review. BMC Medical Research Methodology 2014;14.
[57] Parsons N, Manierre M. Investigating the relationship among prepaid token incentives, response rates, and nonresponse bias in a web survey. Field Methods 2014;26(2):191–204.
[58] Murdoch M, Simon A, Polusny M, Bangerter A, Grill J, Noorbaloochi S, et al. Impact of different privacy conditions and incentives on survey response rate, participant representativeness, and disclosure of sensitive information: A randomized controlled trial. BMC Medical Research Methodology 2014;14.
[59] Abdulaziz K, Brehaut J, Taljaard M, Emond M, Sirois MJ, Lee J, et al. National survey of physicians to determine the effect of unconditional incentives on response rates of physician postal surveys. BMJ Open 2015;5(2):e007166.
[60] Jamtvedt G, Rosenbaum S, Dahm K, Flottorp S. Chocolate bar as an incentive did not increase response rate among physiotherapists: A randomised controlled trial. BMC Research Notes 2008;1:34.
[61] Harris I, Khoo O, Young J, Solomon M, Rae H. Lottery incentives did not improve response rate to a mailed survey: A randomized controlled trial. Journal of Clinical Epidemiology 2008;61(6):609–10.
[62] Baruch Y, Holtom B. Survey response rate levels and trends in organizational research. Human Relations 2008;61(8):1139–60.
[63] Grava-Gubins I, Scott S. Effects of various methodologic strategies survey response rates among Canadian physicians and physicians-in-training. Canadian Family Physician 2008;54(10):1424–30.
[64] Kelly B, Fraze T, Hornik R. Response rates to a mailed survey of a representative sample of cancer patients randomly drawn from the Pennsylvania Cancer Registry: A randomized trial of incentive and length effects. BMC Medical Research Methodology 2010;10.
[65] Bruggen E, Wetzels M, de Ruyter K, Schillewaert N. Individual differences in motivation to participate in online panels: The effect on response rate and response quality perceptions. International Journal of Market Research 2011;53(3):369–90.
[66] Stange J, Zyzanski S. The effect of a college pen incentive on survey response rate among recent college graduates. Evaluation Review 2011;35(1):93–9.
[67] Clark M, Rogers M, Foster A, Dvorchak F, Saadeh F, Weaver J, et al. A randomized trial of the impact of survey design characteristics on response rates among nursing home providers. Evaluation & the Health Professions 2011;34(4):464–86.
[68] Viera A, Edwards T. Does an offer for a free on-line continuing medical education (CME) activity increase physician survey response rate? A randomized trial. BMC Research Notes 2012;5:129.
[69] Medway R, Fulton J. When more gets you less: A meta-analysis of the effect of concurrent web options on mail survey response rates. Public Opinion Quarterly 2012;76(4):733–46.
[70] Ziegenfuss J, Burmeister K, James K, Haas L, Tilburt J, Beebe T. Getting physicians to open the survey: Little evidence that an envelope teaser increases response rates. BMC Medical Research Methodology 2012;12.
[71] Glidewell L, Thomas R, MacLennan G, Bonetti D, Johnston M, Eccles M, et al. Do incentives, reminders or reduced burden improve healthcare professional response rates in postal questionnaires? Two randomised controlled trials. BMC Health Services Research 2012;12.
[72] van der Mark L, van Wonderen K, Mohrs J, Bindels P, Puhan M, ter Riet G. The effect of two lottery-style incentives on response rates to postal questionnaires in a prospective cohort study in preschool children at high risk of asthma: A randomized trial. BMC Medical Research Methodology 2012;12.
[73] Dykema J, Stevenson J, Klein L, Kim Y, Day B. Effects of e-mailed versus mailed invitations and incentives on response rates, data quality, and costs in a web survey of University faculty. Social Science Computer Review 2013;31(3):359–70.
[74] Bakan J, Chen B, Medeiros-Nancarrow C, Hu J, Kantoff P, Recklitis C. Effects of a gift certificate incentive and specialized delivery on prostate cancer survivors’ response rate to a mailed survey: A randomized-controlled trial. Journal of Geriatric Oncology 2014;5(2):127–32.
[75] Pit S, Hansen V, Ewald D. A small unconditional non-financial incentive suggests an increase in survey response rates amongst older general practitioners (GPs): A randomised controlled trial study. BMC Family Practice 2013;14.
[76] Dykema J, Stevenson J, Day B, Sellers S, Bonham V. Effects of incentives and prenotification on response rates and costs in a national web survey of physicians. Evaluation & the Health Professions 2011;34(4):434–47.
[77] Kanaan R, Wessely S, Armstrong D. Differential effects of pre and postpayment on neurologists’ response rates to a postal survey. BMC Neurology 2010;10.
[78] James K, Ziegenfuss J, Tilburt J, Harris A, Beebe T. Getting physicians to respond: The impact of incentive type and timing on physician survey response rates. Health Services Research 2011;46(1):232–42.
[79] Keusch F. How to increase response rates in list-based web survey samples. Social Science Computer Review 2012;30(3):380–8.
[80] Mitchell N, Hewitt C, Lenaghan E, Platt E, Shepstone L, Torgerson D, et al. Prior notification of trial participants by newsletter increased response rates: A randomized controlled trial. Journal of Clinical Epidemiology 2012;65(12):1348–52.
[81] MacLennan G, McDonald A, McPherson G, Treweek S, Avenell A, Group RT. Advance telephone calls ahead of reminder questionnaires increase response rate in non-responders compared to questionnaire reminders only: The RECORD phone trial. Trials 2014;15.
[82] Hammink A, Giesen P, Wensing M. Pre-notification did not increase response rate in addition to follow-up: A randomized trial. Journal of Clinical Epidemiology 2010;63(11):1276–8.
[83] Koopman L, Donselaar L, Rademakers J, Hendriks M. A prenotification letter increased initial response, whereas sender did not affect response rates. Journal of Clinical Epidemiology 2013;66(3):340–8.
[84] Carey R, Reid A, Driscoll T, Glass D, Benke G, Fritschi L. An advance letter did not increase the response rates in a telephone survey: A randomized trial. Journal of Clinical Epidemiology 2013;66(12):1417–21.
[85] Xie Y, Ho S. Prenotification had no additional effect on the response rate and survey quality: A randomized trial. Journal of Clinical Epidemiology 2013;66(12):1422–6.
[86] Hart A, Brennan C, Sym D, Larson E. The impact of personalized prenotification on response rates to an electronic survey. Western Journal of Nursing Research 2009;31(1):17–23.
[87] Beebe T, Rey E, Ziegenfuss J, Jenkins S, Lackore K, Talley N, et al. Shortening a survey and using alternative forms of prenotification: Impact on response rate and quality. BMC Medical Research Methodology 2010;10.
[88] Byrne C, Harrison J, Young J, Selby W. Including the questionnaire with an invitation letter did not improve a telephone survey’s response rate. Journal of Clinical Epidemiology 2007;60(12):1312–4.
[89] Kroth P, McPherson L, Leverence R, Pace W, Daniels E, Rhyne R, et al. Combining web-based and mail surveys improves response rates: A PBRN study from PRIME Net. Annals of Family Medicine 2009;7(3):245–8.
[90] Funkhouser E, Fellows J, Gordan V, Rindal D, Foy P, Gilbert G, et al. Supplementing online surveys with a mailed option to reduce bias and improve response rate: The national dental practice-based research network. Journal of Public Health Dentistry 2014;74(4):276–82.
[91] Rolfson O, Salomonsson R, Dahlberg L, Garellick G. Internet-based follow-up questionnaire for measuring patient-reported outcome after total hip replacement surgery-reliability and response rate. Value in Health 2011;14(2):316–21.
[92] Boschman J, van der Molen H, Frings-Dresen M, Sluiter J. Response rate of bricklayers and supervisors on an internet or a paper-and-pencil questionnaire. International Journal of Industrial Ergonomics 2012;42(1):178–82.
[93] Shih TH, Fan X. Comparing response rates in e-mail and paper surveys: A meta-analysis. Educational Research Review 2009;4(1):26–40.
[94] Crouch S, Robinson P, Pitts M. A comparison of general practitioner response rates to electronic and postal surveys in the setting of the National STI Prevention Program. Australian and New Zealand Journal of Public Health 2011;35(2):187–9.
[95] Yarger J, James T, Ashikaga T, Hayanga A, Takyi V, Lum Y, et al. Characteristics in response rates for surveys administered to surgery residents. Surgery 2013;154(1):38–45.
[96] Rookey B, Le L, Littlejohn M, Dillman D. Understanding the resilience of mail-back survey methods: An analysis of 20 years of change in response rates to National Park surveys. Social Science Research 2012;41(6):1404–14.
[97] Barrios M, Villarroya A, Borrego A, Olle C. Response rates and data quality in web and mail surveys administered to PhD holders. Social Science Computer Review 2011;29(2):208–20.
[98] Wolfe E, Converse P, Oswald F. Item-level nonresponse rates in an attitudinal survey of teachers delivered via mail and web. Journal of Computer-Mediated Communication 2008;14(1):35–66.
[99] Denscombe M. Item non-response rates: A comparison of online and paper questionnaires. International Journal of Social Research Methodology 2009;12(4):281–91.
[100] Haer R, Meidert N. Does the first impression count? Examining the effect of the welcome screen design on the response rate. Survey Methodology 2013;39(2):419–34.
[101] Parker M, Manan A, Urbanski S. Prospective evaluation of direct approach with a tablet device as a strategy to enhance survey study participant response rate. BMC Research Notes 2012;5:605.
[102] Bolanos F, Herbeck D, Christou D, Lovinger K, Pham A, Raihan A, et al. Using facebook to maximize follow-up response rates in a longitudinal study of adults who use methamphetamine. Substance Abuse: Research and Treatment 2012;6:1–11.
[103] Scott A, Jeon SH, Joyce C, Humphreys J, Kalb G, Witt J, et al. A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors. BMC Medical Research Methodology 2011;11.
[104] Aitken C, Power R, Dwyer R. A very low response rate in an on-line survey of medical practitioners. Australian and New Zealand Journal of Public Health 2008;32(3):288–9.
[105] Manfreda K, Bosniak M, Berzelak J, Haas I, Vehovar V. Web surveys versus other survey modes - A meta-analysis comparing response rates. International Journal of Market Research 2008;50(1):79–104.
[106] Sinclair M, O’Toole J, Malawaraarachchi M, Leder K. Comparison of response rates and cost-effectiveness for a community-based survey: Postal, internet and telephone modes with generic or personalised recruitment approaches. BMC Medical Research Methodology 2012;12.
[107] Westrick S, Mount J. Effects of repeated callbacks on response rate and nonresponse bias: Results from a 17-state pharmacy survey. Research in Social & Administrative Pharmacy 2008;4(1):46–58.
[108] Kiezebrink K, Crombie I, Irvine L, Swanson V, Power K, Wrieden W, et al. Strategies for achieving a high response rate in a home interview survey. BMC Medical Research Methodology 2009;9.
[109] Christie A, Dagfinrud H, Dale O, Schulz T, Hagen K. Collection of patient-reported outcomes: Text messages on mobile phones provide valid scores and high response rates. BMC Medical Research Methodology 2014;14.
[110] de Bruijne M, Wijnant A. Improving response rates and questionnaire design for mobile web surveys. Public Opinion Quarterly 2014;78(4):951–62.
[111] Bosnjak M, Neubarth W, Couper M, Bandilla W, Kaczmirek L. Prenotification in web-based access panel surveys - The influence of mobile text messaging versus e-mail on response rates and sample composition. Social Science Computer Review 2008;26(2):213–23.
[112] Converse P, Wolfe E, Huang X, Oswald F. Response rates for mixed-mode surveys using mail and e-mail/web. American Journal of Evaluation 2008;29(1):99–107.
[113] Bonevski B, Magin P, Horton G, Foster M, Girgis A. Response rates in GP surveys trialling two recruitment strategies. Australian Family Physician 2011;40(6):427–30.
[114] Hardigan P, Succar C, Fleisher J. An analysis of response rate and economic costs between mail and web-based surveys among practicing dentists: A randomized trial. Journal of Community Health 2012;37(2):383–94.
[115] Edelman L, Yang R, Guymon M, Olson L. Survey methods and response rates among rural community dwelling older adults. Nursing Research 2013;62(4):286–91.
[116] Pedrana A, Hellard M, Giles M. Registered post achieved a higher response rate than normal mail - A randomized controlled trial. Journal of Clinical Epidemiology 2008;61(9):896–9.
[117] Akl E, Gaddam S, Mustafa R, Wilson M, Symons A, Grifasi A, et al. The effects of tracking responses and the day of mailing on physician survey response rate: Three randomized trials. Plos One 2011;6(2).
[118] Cook J, Dickinson H, Eccles M. Response rates in postal surveys of healthcare professionals between 1996 and 2005: An observational study. BMC Health Services Research 2009;9.
[119] Horn R, Jones S, Warren K. The cost-effectiveness of postal and telephone methodologies in increasing routine outcome measurement response rates in CAMHS. Child and Adolescent Mental Health 2010;15(1):60–3.
[120] Sahlqvist S, Song Y, Bull F, Adams E, Preston J, Ogilvie D, et al. Effect of questionnaire length, personalisation and reminder type on response rate to a complex postal survey: Randomised controlled trial. BMC Medical Research Methodology 2011;11.
[121] Khamisa N, Peltzer K, Ilic D, Oldenburg B. Evaluating research recruitment strategies to improve response rates amongst South African nurses. SA Journal of Industrial Psychology 2014;40(1):1–7.
[122] Trapp G, Giles-Corti B, Martin K, Timperio A, Villanueva K. Conducting field research in a primary school setting: Methodological considerations for maximizing response rates, data quality and quantity. Health Education Journal 2012;71(5):590–6.
[123] Shih TH, Fan X. Comparing response rates from web and mail surveys: A meta-analysis. Field Methods 2008;20(3):249–71.
[124] Wakabayashi C, Hayashi K, Nagai K, Sakamoto N, Iwasaki Y. Effect of stamped reply envelopes and timing of newsletter delivery on response rates of mail survey: A randomised controlled trial in a prospective cohort study. BMJ Open 2012;2(5).
[125] Man MS, Tilbrook H, Jayakody S, Hewitt C, Cox H, Cross B, et al. Electronic reminders did not improve postal questionnaire response rates or response times: A randomized controlled trial. Journal of Clinical Epidemiology 2011;64(9):1001–4.
[126] Ziegenfuss J, Tilburt J, Lackore K, Jenkins S, James K, Beebe T. Envelope type and response rates in a survey of health professionals. Field Methods 2014;26(4):380–9.
[127] Mitchell N, Hewitt C, Torgerson D, Group ST. A controlled trial of envelope colour for increasing response rates in older women. Aging Clinical and Experimental Research 2011;23(3):236–40.
[128] Kereakoglow S, Gelman R, Partridge A. Evaluating the effect of aesthetically enhanced materials compared to standard materials on clinician response rates to a mailed survey. International Journal of Social Research Methodology 2013;16(4):301–6.
[129] Kaplowitz M, Lupi F, Couper M, Thorp L. The effect of invitation design on web survey response rates. Social Science Computer Review 2012;30(3):339–49.
[130] Beebe T, Stoner S, Anderson K, Williams A. Selected questionnaire size and color combinations were significantly related to mailed survey response rates. Journal of Clinical Epidemiology 2007;60(11):1184–9.
[131] King K, Vidourek R. Effect of respondent code location on survey response rate. Psychological Reports 2011;109(3):718–22.
[132] Kundig F, Staines A, Kinge T, Perneger T. Numbering questionnaires had no impact on the response rate and only a slight influence on the response content of a patient safety culture survey: A randomized trial. Journal of Clinical Epidemiology 2011;64(11):1262–5.
[133] Pickett J, Metcalfe C, Baker T, Gertz M, Bedard L. Superficial survey choice: An experimental test of a potential method for increasing response rates and response quality in correctional surveys. Journal of Quantitative Criminology 2014;30(2):265–84.
[134] Teclaw R, Price M, Osatuke K. Demographic question placement: Effect on item response rates and means of a veterans health administration survey. Journal of Business and Psychology 2012;27(3):281–90.
[135] Moradi T, Sidorchuk A, Hallqvist J. Translation of questionnaire increases the response rate in immigrants: Filling the language gap or feeling of inclusion? Scandinavian Journal of Public Health 2010;38(8):889–92.
[136] Brick JM, Montaquila J, Han D, Williams D. Improving response rates for Spanish speakers in two-phase mail surveys. Public Opinion Quarterly 2012;76(4):721–32.
[137] Fluess E, Bond C, Jones G, Macfarlane G. The effect of an internet option and single-sided printing format to increase the response rate to a population-based study: A randomized controlled trial. BMC Medical Research Methodology 2014;14.
[138] Choudhury Y, Hussain I, Parsons S, Rahman A, Eldridge S, Underwood M. Methodological challenges and approaches to improving response rates in population surveys in areas of extreme deprivation. Primary Health Care Research & Development 2012;13(3):211–8.
[139] Bolt E, van der Heide A, Onwuteaka-Philipsen B. Reducing questionnaire length did not improve physician response rate: A randomized trial. Journal of Clinical Epidemiology 2014;67(4):477–81.
[140] O’Toole J, Sinclair M, Leder K. Maximising response rates in household telephone surveys. BMC Medical Research Methodology 2008;8.
[141] Dillman D, Phelps G, Tortora R, Swift K, Kohrell J, Berck J, et al. Response rate and measurement differences in mixed-mode surveys using mail, telephone, interactive voice response (IVR) and the Internet. Social Science Research 2009;38(1):3–20.
[142] Nguyet T, Dilley J. Achieving a high response rate with a health care provider survey, Washington State, 2006. Preventing Chronic Disease 2010;7(5).
[143] Olson K, Smyth J, Wood H. Does giving people their preferred survey mode actually increase survey participation rates? An experimental examination. Public Opinion Quarterly 2012;76(4):611–35.
[144] Nicholls K, Chapman K, Shaw T, Perkins A, Sullivan M, Crutchfield S, et al. Enhancing response rates in physician surveys: The limited utility of electronic options. Health Services Research 2011;46(5):1675–82.
[145] Fear N, Van Staden L, Iversen A, Hall J, Wessely S. 50 ways to trace your veteran: Increasing response rates can be cheap and effective. European Journal of Psychotraumatology 2010;1.
[146] Blohm M, Hox J, Koch A. The influence of interviewers’ contact behaviour on the contact and cooperation rate in face-to-face household surveys. International Journal of Public Opinion Research 2007;19(1):97–111.
[147] Rao P. International survey research understanding national cultures to increase survey response rate. Cross Cultural Management - An International Journal 2009;16(2):165–78.
[148] Munoz-Leiva F, Sanchez-Fernandez J, Montoro-Rios F, Ibanez-Zapata JA. Improving the response rate and quality in web-based surveys through the personalization and frequency of reminder mailings. Quality & Quantity 2010;44(5):1037–52.
[149] Olson K, Lepkowski J, Garabrant D. An experimental examination of the content of persuasion letters on nonresponse rates and survey estimates in a nonresponse follow-up study. Survey Research Methods 2011;5(1):21–6.
[150] Luiten A. Personalisation in advance letters does not always increase response rates. Demographic correlates in a large scale experiment. Survey Research Methods 2011;5(1):11–20.
[151] Dembosky J, Haviland A, Elliott M, Kallaur P, Edwards C, Sekscenski E, et al. Does naming the focal plan in a CAHPS survey of health care quality affect response rates and beneficiary evaluations? Public Opinion Quarterly 2013;77(2):455–73.
[152] Misra S, Stokols D, Marino A. Using norm-based appeals to increase response rates in evaluation research: A field experiment. American Journal of Evaluation 2012;33(1):88–98.
[153] Wang W, Geller J, Kim A, Morrison T, Choi J, Macaulay W. Factors affecting response rates to mailed preoperative surveys among arthroplasty patients. World Journal of Orthopedics 2012;3(1):1–4.
[154] Claudio L, Stingone J. Improving sampling and response rates in children’s health research through participatory methods. Journal of School Health 2008;78(8):445–51.
[155] Pruitt M. Deviant research: Deception, male Internet escorts, and response rates. Deviant Behavior 2008;29(1):70–82.
[156] Rashidian A, van der Meulen J, Russell I. Differences in the contents of two randomized surveys of GPs’ prescribing intentions affected response rates. Journal of Clinical Epidemiology 2008;61(7):718–21.
[157] Hansen T, Simonsen M, Nielsen F, Hundrup Y. Collection of blood, saliva, and buccal cell samples in a pilot study on the Danish nurse cohort: Comparison of the response rate and quality of genomic DNA. Cancer Epidemiology Biomarkers & Prevention 2007;16(10):2072–6.
[158] Frass L, Hopkins X, Smith L, Kyle J, Vanderknyff J, Hand G. A collaborative approach in workforce assessment: South Carolina’s strategies for high response rates. Health Promotion Practice 2014;15:14S–22S.
[159] van Wonderen K, Mohrs J, IJff M, Bindels P, ter Riet G. Two simple strategies (adding a logo or a senior faculty’s signature) failed to improve patient participation rates in a cohort study: Randomized trial. Journal of Clinical Epidemiology 2008;61(10):971–7.
[160] Ziegenfuss J, Shah N, Deming J, Van Houten H, Smith S, Beebe T. Offering results to participants in a diabetes survey effects on survey response rates. Patient-Patient Centered Outcomes Research 2011;4(4):241–5.
[161] Stenhammar C, Bokstrom P, Edlund B, Sarkadi A. Using different approaches to conducting postal questionnaires affected response rates and cost-efficiency. Journal of Clinical Epidemiology 2011;64(10):1137–43.
[162] Ellwood P, Asher M, Stewart A, Group IPIS. The impact of the method of consent on response rates in the ISAAC time trends study. International Journal of Tuberculosis and Lung Disease 2010;14(8):1059–65.
[163] Jin L. Improving response rates in web surveys with default setting. The effects of default on web survey participation and permission. International Journal of Market Research 2011;53(1):75–94.
[164] Berman D, Tan L, Cheng T. Surveys and response rates. Pediatrics in Review 2015;36(8):364–6.
[165] Groves R, Peytcheva E. The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly 2008;72:167–89.
[166] Sterne J, White I, Carlin J, Spratt M, Royston P, Kenward M, et al. Multiple imputation for missing data in epidemiological and clinical research: Potential and pitfalls. BMJ 2009;338:2393–7.
[167] Keeble C, Barber S, Baxter P, Parslow R, Law G. Reducing participation bias in case-control studies: Type 1 diabetes in children and stroke in adults. Open Journal of Epidemiology 2014;4(3):129–34.
[168] Kleinbaum D, Morgenstern H, Kupper L. Selection bias in epidemiological studies. American Journal of Epidemiology 1981;113(4):452–63.
[169] Keeble C, Law G, Barber S, Baxter P. Choosing a method to reduce selection bias: A tool for researchers. Open Journal of Epidemiology 2015:155–62.

Author Information

Claire Keeble
Division of Epidemiology and Biostatistics, University of Leeds
Leeds, West Yorkshire, United Kingdom
c.m.keeble@leeds.ac.uk

Paul D Baxter
Division of Epidemiology and Biostatistics, University of Leeds
Leeds, West Yorkshire, United Kingdom
p.d.baxter@leeds.ac.uk

Stuart Barber
Department of Statistics, University of Leeds
Leeds, West Yorkshire, United Kingdom
stuart@maths.leeds.ac.uk

Graham Richard Law
Division of Epidemiology and Biostatistics, University of Leeds
Leeds, West Yorkshire, United Kingdom
G.R.Law@leeds.ac.uk
