Impact of targeted wording on response rates to a survey of general practitioners on referral processes for suspected head and neck cancer: an embedded randomised controlled trial
Rebecca L. Venchiarutti 1 2 *, Marguerite Tracy 1, Jonathan R. Clark 2 3 4, Carsten E. Palme 2 3 4, Jane M. Young 1 5
1 Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Camperdown, NSW 2006, Australia.
2 Sydney Head and Neck Cancer Institute, Department of Head and Neck Surgery, Chris O’Brien Lifehouse, 119-143 Missenden Road, Camperdown, NSW 2050, Australia.
3 Central Clinical School, Faculty of Medicine and Health, The University of Sydney, Camperdown, NSW 2006, Australia.
4 Royal Prince Alfred Institute of Academic Surgery, Sydney Local Health District, Camperdown, NSW 2050, Australia.
5 The Daffodil Centre, The University of Sydney, a joint venture with Cancer Council NSW, Camperdown, NSW 2006, Australia.
Journal of Primary Health Care 14(3) 200-206 https://doi.org/10.1071/HC21095
Published: 10 June 2022
© 2022 The Author(s) (or their employer(s)). Published by CSIRO Publishing on behalf of The Royal New Zealand College of General Practitioners. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND)
Abstract
Introduction: Low response rates to surveys can lead to non-response bias, limiting generalisability of findings. When survey topics pertain to uncommon conditions, the decision of general practitioners (GPs) to complete a questionnaire may be swayed by the perceived relevance of the questionnaire content to their practice.
Aim: To explore whether targeted wording of a questionnaire for GPs about head and neck cancer referral patterns affects response rates.
Methods: A randomised controlled trial was embedded into a larger survey on referral practices for head and neck cancer among GPs in New South Wales, Australia. GPs were randomly allocated to receive versions of the study material with explanatory text written using either a ‘symptom-frame’ or a ‘cancer-frame’; however, the questions and responses were the same in both groups.
Results: The overall response rate was 10.9% (196/1803). The response rate was 10.6% for the ‘cancer-frame’ version and 11.1% for the ‘symptom-frame’ version. After adjusting for practice location and GP sex, the difference in response rate based on wording was not significant (difference 0.5% [95%CI: −2.4, 3.4%]). A sub-analysis showed that GPs practicing in regional New South Wales were more likely to respond to the survey than those practicing in metropolitan New South Wales, independent of intervention group or sex (AOR 1.61 [95%CI: 1.12, 2.31]; P = 0.01).
Discussion: The wording ‘frame’ of the survey did not appear to impact response rates in a survey of referral practices for suspected head and neck cancer; however, the significantly higher response rate from regional GPs warrants further investigation as to whether the content was considered more salient to their practice.
Keywords: diagnosis delay, general practice, geography, head and neck cancer, health-care access, primary care, randomised controlled trial, variation.
WHAT GAP THIS FILLS
What is already known: Low response rates to surveys remain a bugbear for methodologists. This is especially pertinent for uncommon topics or conditions, where participants may self-exclude if they have limited experience with the topic or condition. There is some evidence that potential participants are more likely to respond if the questionnaire is more relevant to them. We set out to determine whether differential wording of a questionnaire could improve response rates to a survey of general practitioners (GPs) in Australia about head and neck cancer referral patterns, depending on whether a ‘symptom-frame’ or ‘cancer-frame’ was used.
What this study adds: Although there was no difference in the proportion of potential participants responding based on the questionnaire ‘frame’ received, we did find that GPs practicing in regional Australia were more likely to respond. This provides some evidence that the survey was more salient to their practice, and could be used to justify more tailored questionnaires for GPs to improve response rates in surveys.
Introduction
Response rates to surveys of general practitioners (GPs) vary greatly, ranging in the literature from 7 to 67%.1–4 Low response rates can result in underpowered studies, raise concerns about non-response bias, and limit the generalisability of findings. Methods to increase response rates vary depending on whether surveys are administered electronically or in hard copy,5,6 but include incentives, pre-notification, follow up, and personalised approaches. A meta-analysis conducted by Edwards et al.5 found that more interesting or salient questionnaires had approximately double the odds of response compared with less interesting or non-salient versions (OR 2.00, 95%CI 1.32, 3.04). This suggests that questionnaire topic and content should be considered when designing surveys.
Cancer is a relatively uncommon presentation in primary care: it is estimated that Australian GPs will see only four ‘serious’ cancer cases per year.7 For uncommon cancers, such as head and neck cancer, presentations are far fewer. Data from Finland suggest that GPs may see only two new cases of head and neck cancer in their career,8 yet some 11% of symptoms experienced by patients visiting a GP may also be experienced by patients with head and neck cancer. This raises the question of whether GPs self-exclude from surveys on head and neck cancer because they see these presentations infrequently in practice, making it challenging to obtain an optimal sample of survey participants and to ensure external validity of results and relevance to practice.
Though cancer diagnoses in primary care are rare, associated symptoms are relatively common, and so it is plausible that GPs may be more willing to participate in research that addresses the latter. Therefore, we conducted this study to investigate whether the framing of the study materials affected survey response rates. The aim of this study was to determine whether targeted wording impacted on response rates to a survey of Australian GPs. We hypothesised that response rates would be higher among GPs provided with a questionnaire written with a ‘symptom-frame’ compared to a ‘cancer-frame’.
Methods
Trial design
This randomised controlled trial was embedded within a survey of GPs investigating referral processes for suspected head and neck cancer. The aim of the survey was to investigate geographical variation in self-reported management by GPs of patients with symptoms suggestive of head and neck cancer in the context of possible delayed referral for head and neck cancer in primary care.
Participants
GPs were identified from publicly available lists in two Primary Health Networks in New South Wales, Australia – the Central and Eastern Sydney Primary Health Network,9 and the North Coast Primary Health Network.10 These Primary Health Networks were selected as they mirrored the sampling area of patients who were being recruited for a related study of routes and times to diagnosis and treatment of head and neck cancer in New South Wales. GPs were eligible if they were a practicing GP within the Central and Eastern Sydney Primary Health Network or North Coast Primary Health Network. A total of 1875 unique GPs were identified (n = 1556 from the Central and Eastern Sydney Primary Health Network, n = 319 from the North Coast Primary Health Network) across 660 practices. GPs were excluded if they were a trainee, deceased, retired, unable to be contacted (i.e. return to sender with no forwarding address), on extended leave, or had moved out of the area.
Interventions
Two versions of the same survey materials were written for the study, with minor variations based on the frame (Table 1). Group A (‘symptom-frame’) received a version of the study materials (advance notification flyer, invitation letter, participant information sheet, and questionnaire) using the term ‘red-flag upper aerodigestive symptoms’. Group B (‘cancer-frame’) received a version of the study materials using the term ‘suspected head and neck cancer’. Survey administration procedures were identical in both groups, with one advance notification flyer, followed by a mailed version of the questionnaire (with a hand-signed invitation letter, participant information sheet, and reply paid envelope). Up to three mailed reminders were sent and each mail out contained a sachet of coffee and a teabag as an unconditional, non-monetary incentive to participate. The surveys were administered between 3 February 2020 and 18 June 2020.
Outcomes
The pre-specified primary outcome was the response rate, defined as the proportion of GPs in each group returning the completed survey by mail. A non-responder was defined as a participant who did not return completed study material after three postal reminders, or within 6 weeks of the sent date of the third reminder.
Sample size
Assuming a maximum response rate of 27%11 and a sampling frame of 1875 GPs, we calculated that a sample size of 434 per group would be needed to detect an 8% absolute difference in response rates, with 80% power and 5% alpha.
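For illustration, the figure of 434 per group is consistent with the standard normal-approximation formula for comparing two independent proportions, assuming response rates of 27% versus 19% and a two-sided test; the sketch below is an illustrative check only and does not reproduce the authors' actual software or calculation.

```python
from math import ceil
from scipy.stats import norm

# Assumed response rates: 27% baseline and an 8% absolute difference (27% vs 19%)
p1, p2 = 0.27, 0.19
alpha, power = 0.05, 0.80

z_a = norm.ppf(1 - alpha / 2)  # ~1.96 for a two-sided 5% alpha
z_b = norm.ppf(power)          # ~0.84 for 80% power

p_bar = (p1 + p2) / 2
n_per_group = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
               / (p1 - p2) ** 2)
print(ceil(n_per_group))  # 434 per group
```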
Randomisation
The randomisation sequence was computer-generated with a 1:1 ratio and stratified by location (regional or metropolitan New South Wales) and practice (to mitigate the risk of contamination between study groups at practices). The random allocation sequence was generated by a statistician who was unaware of which GPs would receive each assignment, and group assignments were then allocated by the study coordinator.
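A minimal sketch of how such a stratified 1:1 allocation sequence could be generated is shown below; the data structure, field names, and seed are illustrative assumptions and do not represent the statistician's actual procedure.

```python
import random

def allocate_one_to_one(gps, seed=42):
    """Illustrative stratified 1:1 allocation (not the authors' actual code).

    `gps` is a list of dicts with hypothetical keys 'id' and 'stratum'
    (e.g. a combination of practice and regional/metropolitan location).
    Within each stratum, GPs are shuffled and split evenly between
    Group A ('symptom-frame') and Group B ('cancer-frame').
    """
    rng = random.Random(seed)
    strata, allocation = {}, {}
    for gp in gps:
        strata.setdefault(gp["stratum"], []).append(gp["id"])
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        for gp_id in members[:half]:
            allocation[gp_id] = "A: symptom-frame"
        for gp_id in members[half:]:
            allocation[gp_id] = "B: cancer-frame"
    return allocation
```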
Statistical analysis
The difference in response rates (primary outcome) between intervention groups was compared using the Chi-squared test on an intention-to-treat basis. The effects of other factors on response (sex and practice location) were analysed using logistic regression, including interaction terms (sex × intervention group and practice location × intervention group). Raw and adjusted odds ratios were calculated, with 95% confidence intervals constructed around the estimates. Statistical significance was taken as P < 0.05 and analyses were conducted using SPSS Statistics for Windows Version 25 (IBM Corp., Armonk, NY, USA).
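The same analyses could be run in open-source software along the following lines; this is a hedged sketch in Python (the study itself used SPSS), and the file and column names are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# Hypothetical dataset with one row per GP surveyed; assumed columns:
# responded (0/1), group ('symptom'/'cancer'), sex ('F'/'M'), location ('metro'/'regional')
df = pd.read_csv("gp_survey.csv")

# Primary outcome: chi-squared test of response by intervention group (intention-to-treat)
chi2, p_value, dof, expected = chi2_contingency(pd.crosstab(df["group"], df["responded"]))

# Logistic regression with group x sex and group x location interaction terms
model = smf.logit("responded ~ group * sex + group * location", data=df).fit()
adjusted_ors = np.exp(model.params)     # adjusted odds ratios
or_ci = np.exp(model.conf_int())        # 95% confidence intervals
```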
Ethical approval
Ethical approval for the study was sought and granted by the Sydney Local Health District Human Research Ethics Committee (SLHD HREC) – RPA Zone (Protocol Number X19-0366 & 2019/ETH00449). Site governance approval was granted by the RPA Hospital Governance Office (approval number 2019/STE16418).
Results
The surveys were sent to 1875 GPs in total (930 in Group A ‘symptom-frame’ and 945 in Group B ‘cancer-frame’). Fig. 1 depicts the flow of the participants and responses over the study period. In total, 72 participants were excluded and did not receive the survey (n = 32 in Group A and n = 40 in Group B). The overall response rate to the survey was 10.9% (196/1803).
Effect of intervention on response rate
Response rates by group are presented in Table 2. In Group A, 100 participants completed and returned the questionnaire (11.1%), and in Group B, 96 participants completed and returned the questionnaire (10.6%). The difference was not statistically significant (0.5% [95%CI: −2.4, 3.4%]; P = 0.73).
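As a rough check, the reported interval is consistent with a normal-approximation confidence interval for a difference in two independent proportions, using the group denominators given above (898 and 905); the short calculation below is illustrative only and is not the authors' SPSS output.

```python
from scipy.stats import norm

# Responders and GPs surveyed per group (figures reported above)
r_a, n_a = 100, 898   # Group A, 'symptom-frame': 11.1%
r_b, n_b = 96, 905    # Group B, 'cancer-frame': 10.6%

p_a, p_b = r_a / n_a, r_b / n_b
diff = p_a - p_b                                              # ~0.005 (0.5%)
se = (p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b) ** 0.5   # standard error of the difference
z = norm.ppf(0.975)                                           # ~1.96 for a 95% interval
ci = (diff - z * se, diff + z * se)                           # ~(-0.023, 0.034), close to the reported (-2.4%, 3.4%)
```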
Effect of GP factors on response rate
When testing the effect of GP factors on response rates, practice location was the only factor that demonstrated an effect. GPs practicing in regional New South Wales were more likely to respond to the questionnaire (adjusted odds ratio [AOR] 1.61 [95%CI: 1.12, 2.31]; P = 0.01) (Table 3). However, there was no differential effect of the intervention by practice location. All P values for interaction terms were >0.05.
Discussion
We explored whether targeted wording within study materials sent to GPs impacted on response rates in a study of referral processes for head and neck cancer. We found no significant difference in response rates based on the wording of the questionnaire. However, a sub-analysis showed that response rates to the questionnaire were higher among GPs practicing in regional New South Wales compared to those practicing in metropolitan New South Wales, independent of the intervention group or participant sex (AOR 1.61 [95%CI: 1.12, 2.31]; P = 0.01).
Our initial hypothesis was that GPs who received the questionnaire using the ‘symptom-frame’ would be more likely to respond, as this framing may be more relevant to their practice than ‘suspected head and neck cancer’; however, we found no evidence to support this hypothesis. The interesting finding of a higher response rate among regional GPs after adjusting for participant factors warrants further consideration. Leverage–saliency theory12 proposes that attributes of a survey request (such as its topic) hold different importance, or ‘leverage’, for different potential participants, and that when the content of a survey factors into the decision to participate, non-response bias can result. Groves, who proposed this theory, later demonstrated that response rates to a questionnaire on a specific topic (e.g. ‘education and schools’) vary based on the sampling frame.13 In that study, response rates to the education survey were highest in the sampling frame consisting of teachers, whereas a survey regarding ‘childcare and parents’ had the highest response rates from new parents. Although it is not possible to test this theory in the current study due to the absence of control groups, it is possible that the response rate was higher among regional GPs because they considered the general content (referral pathways for head and neck cancer) to be more relevant to their practice.
The primary limitation of this study is the low response rate (10.9%), and although other surveys have reported similarly low response rates among Australian GPs,14 this brings into question the generalisability of the results. However, a significant strength was that post hoc calculations showed the study was still sufficiently powered, and we were able to detect a difference in response rates based on location of practice. Another strength was the use of rigorous methods in the design of the questionnaire, which incorporated elements demonstrated to improve response rates to postal surveys.5,6 It is possible that the response rate may have been even lower had these elements not been included in the design of the questionnaire and survey administration. However, recent evidence suggests that self-selection of survey mode (postal or electronic) is an effective method to maximise response rates in surveys of medical practitioners,15 which was not implemented in this study owing to limitations in obtaining email addresses for all eligible GPs. The survey was also administered during the peak of coronavirus disease 2019 (COVID-19) infections in New South Wales in early 2020, which may also have impacted on response rates.
The findings from this study highlight an ongoing challenge in surveying GPs about uncommon topics or conditions. Despite the use of evidence-based approaches to health-care surveys, the response rate was low (10.9%). This warrants further investigation into optimal sampling strategies for surveys of GPs on uncommon topics or conditions, especially given the importance of high-quality studies to inform practice and policy. Additional insight into how best to engage GPs as research participants could reduce research waste and improve translation of research into practice, with mutual benefits for both research and practice.
Conclusion
To our knowledge, this is the first study to test differential wording on the response rate of GPs to a survey of cancer referral practices, and highlights the challenge of optimising response rates for surveys about uncommon topics or conditions. Although there is some evidence in the literature that questionnaire saliency can improve response rates to questionnaires, we did not find evidence of this in the current study. It is possible that the minor wording differences used in this study were too small to have any impact on response rates. Future studies testing this approach should account for this possibility in the design of questionnaires.
Data availability
The data that support this study will be shared upon reasonable request to the corresponding author.
Conflicts of interest
The authors declare no conflicts of interest.
Declaration of funding
This study received funding from the Primary Care Collaborative Cancer Clinical Trials Group (PC4) in 2018 under the PC4 Training Award. The funder played no role in the design of the study.
References
[1] Bonevski B, Magin P, Horton G, et al. Response rates in GP surveys - trialling two recruitment strategies. Aust Fam Physician 2011; 40 427–30.
[2] Freed GL, Turbitt E, Kunin M, et al. General practitioner perspectives on referrals to paediatric public specialty clinics. Aust Fam Physician 2016; 45 747–53.
[3] Pirotta M, Kotsirilos V, Brown J, et al. Complementary medicine in general practice - a national survey of GP attitudes and knowledge. Aust Fam Physician 2010; 39 946–50.
[4] Young JM, O’Halloran A, McAulay C, et al. Unconditional and conditional incentives differentially improved general practitioners’ participation in an online survey: randomized controlled trial. J Clin Epidemiol 2015; 68 693–97.
[5] Edwards PJ, Roberts I, Clarke MJ, et al. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev 2009; 2009 MR000008
[6] Pit SW, Vo T, Pyakurel S. The effectiveness of recruitment strategies on general practitioner’s survey response rates – a systematic review. BMC Med Res Methodol 2014; 14 76
[7] National Cancer Control Initiative. The Primary Care Perspective on Cancer - An Introductory Discussion. 2003. Available at https://www.canceraustralia.gov.au/sites/default/files/publications/primarycaresummary1_504af01edc922.pdf [Accessed 12 October 2020]
[8] Alho O-P, Teppo H, Mäntyselkä P, et al. Head and neck cancer in primary care: presenting symptoms and the effect of delayed diagnosis of cancer cases. CMAJ 2006; 174 779–84.
[9] PHN Central and Eastern Sydney. About Central and Eastern Sydney PHN. 2021. Available at https://www.cesphn.org.au/who-we-are/about-cesphn [Accessed 19 May 2021].
[10] PHN North Coast. About NCPHN. 2021. Available at https://ncphn.org.au/about [Accessed 19 May 2021].
[11] Rashidian A, van der Meulen J, Russell I. Differences in the contents of two randomized surveys of GPs’ prescribing intentions affected response rates. J Clin Epidemiol 2008; 61 718–21.
[12] Groves RM, Singer E, Corning A. Leverage-saliency theory of survey participation: description and an illustration. Public Opin Q 2000; 64 299–308.
[13] Groves RM, Presser S, Dipko S. The role of topic interest in survey participation decisions. Public Opin Q 2004; 68 2–31.
[14] Rose PW, Rubin G, Perera-Salazar R, et al. Explaining variation in cancer survival between 11 jurisdictions in the International Cancer Benchmarking Partnership: a primary care vignette survey. BMJ Open 2015; 5 e007212
[15] Brtnikova M, Crane LA, Allison MA, et al. A method for achieving high response rates in national surveys of U.S. primary care physicians. PLoS One 2018; 13 e0202755