Methodological issues in the design and conduct of public health computer assisted telephone interview surveys: the case of informal carers in Australia
Nina Van Dyke, The Social Research Centre, Level 1, 262 Victoria Street, North Melbourne, Vic. 3051, Australia. Email: nina.vandyke@srcentre.com.au
Australian Journal of Primary Health 15(2) 132-138 https://doi.org/10.1071/PY09008
Published: 5 June 2009
Abstract
The academic literature contains surprisingly little information regarding the design and conduct of surveys dealing with sensitive social issues. The present paper is an attempt to help fill that gap so that other researchers conducting similar projects can learn from our experience. In particular, I focus on the various challenges we encountered in carrying out a computer assisted telephone interview (CATI) survey of informal carers in Australia, our responses to these challenges and our learnings from this endeavour. In the present article, I discuss the following issues: cost-efficient sampling for small numbers; opt-out versus opt-in approaches to respondent participation; status errors in administrative data; reducing respondent refusals; interviewing non-English speakers; questionnaire topic order; carers who care for more than one person; and interviewer training, including interviewer and/or respondent distress. The conclusions were: (1) carers are generally willing and able to answer quite sensitive questions around caring, despite the fact that they may become distressed in doing so; (2) carers are willing to answer a rather long (25 min) telephone survey; (3) thorough interviewer training is critically important, with an emphasis on achieving a balance between sensitivity and efficiency; and (4) respondents should be given the opportunity at the end of the interview to make additional comments and to provide their contact details should they desire follow up from an appropriate authority.
Additional keywords: caregivers, questionnaire.
Acknowledgements
I would like to thank Rosie Briscoe, Robyn Graham, Kwee Phua, John Nielsen and the interviewers at Oz Info for their outstanding work on the interviewing phase of this research. I would also like to thank Jongsay Yong, Guyonne Kalb, M. Kent Jennings, C. Dean Goodman, and Ross Williams for their comments on draft versions of this article.
3 Boland M, Sweeney MR, Scallan E, Harrington M, Staines A (2006) ‘Emerging advantages and drawbacks of telephone surveying in public health research in Ireland and the UK’ (BMC Public Health 6, 208) found that advance letters were more effective in increasing response rate than were postcards, which were more effective than no advance warning. They also found that letters were the most cost-effective.
4 Please contact the author for copies of the questionnaires.
5 These data consisted of administrative data from Centrelink. Centrelink is a federal government agency that delivers a range of services to the Australian community. CA and CP are government support programs for caregivers. CA is a supplementary payment available to people who provide daily care and attention at home for an adult or child with a disability or severe medical condition. CA is not income or assets tested and may be paid in addition to an income support payment (for example, a social security income support payment). CP provides income support to people who, due to the demands of their care-giving role, are unable to support themselves through substantial workforce participation. It is subject to income and assets tests and is paid at the same rate as other social security pensions. Those who received both were considered ‘CP’ for our study.
6 Compared with the 13% of the population that the 2003 Australian Bureau of Statistics Survey of Disability, Ageing and Carers found provides such care, Centrelink administrative data show that, between 2000 and 2004, less than 1% of the Australian population received CP and between 1 and 2% received CA.
7 From our original population of 785 080 caregivers, we first chose a 10% random sample. From these, we kept only ‘new’, ‘long-term’, and ‘past recipients’, which further reduced the sample size to 36 242. ‘New caregivers’ were defined as those who received either type of support payment for the first time during the 6-month period from 1 July 2004 to 31 December 2004. ‘Long-term caregivers’ were defined as those who had been on either form of payment for more than 2 years, since 1 January 2003 or earlier, either continuously with no gap or with gaps of not more than seven 2-week periods, and were receiving payments on 31 December 2004. ‘Past caregivers’ were those who ceased to receive either type of payment between 1 January 2004 and 31 December 2004, and who prior to that had received payments for at least 6 months. The rationale for this classification is to approximate years of caring, so that we can assess whether the impacts of care giving on caregivers vary by duration of care. This classification does not cover all caregivers in the sampling frame; in particular, caregivers who received support payments for more than 6 months but less than 2 years were not included in the sampling design. From these remaining caregivers, we chose random samples of 1200 caregivers within each of six different stratifications (payment type/length of care), to arrive at a sample size of 7200. Please contact the author for further details on this sampling process.
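The stratified draw described in this footnote (grouping eligible recipients by payment type and caring-duration category, then sampling a fixed number from each of the six strata) can be sketched as follows. This is an illustrative reconstruction only: the record format, field names and the `draw_stratified_sample` helper are assumptions for the example, not the study's actual code.

```python
import random
from collections import defaultdict

def draw_stratified_sample(records, per_stratum=1200, seed=0):
    """Group recipient records into (payment type, duration category)
    strata and draw a simple random sample of fixed size from each.

    records: iterable of (recipient_id, payment_type, category) tuples,
    e.g. payment_type in {"CA", "CP"} and
    category in {"new", "long-term", "past"}.
    """
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    strata = defaultdict(list)
    for rec_id, payment_type, category in records:
        strata[(payment_type, category)].append(rec_id)
    sample = []
    for ids in strata.values():
        # min() guards against a stratum smaller than the target size
        sample.extend(rng.sample(ids, min(per_stratum, len(ids))))
    return sample
```

With two payment types and three duration categories, six strata of 1200 give the 7200 sample described above; sampling without replacement within each stratum ensures no recipient is selected twice.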
8 Please contact the author for a copy of the Rejection Management Script.
9 That is, 852 h without breaks.
10 There are many other ways of computing response rate. For example, based on the number of potential respondents before the opt-out process, the response rate is approximately 20%.
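The point of this footnote is that the reported response rate depends on the denominator chosen. A minimal sketch, using hypothetical counts for illustration only (not the study's actual figures):

```python
def response_rate(completes, denominator):
    """Completed interviews as a share of a chosen base population."""
    return completes / denominator

# Hypothetical counts: the same 1000 completed interviews yield very
# different rates depending on the base used.
completes = 1000
mailed = 5000   # e.g. everyone sent a letter, before the opt-out process
agreed = 1600   # e.g. the pool remaining after opt-outs and non-contacts

print(round(response_rate(completes, mailed) * 100, 1))  # → 20.0
print(round(response_rate(completes, agreed) * 100, 1))  # → 62.5
```

This is why response rates from different surveys are only comparable when the same definition of the eligible base is used.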
11 OzInfo, the survey company that conducted this survey, was capable of translating survey questions into 13 different languages.
12 In the numerous telephone surveys I have been involved in, income questions often have the highest refusal rates (typically around 5%). Presumably this high rate reflects general sensitivity regarding personal financial matters, as well as such a question having little obvious connection to the purpose of the survey.
13 At the time of this project, our expert consultant was enrolled in a Master of Social Work (Research) in the Arts Faculty at the University of Melbourne and undertaking a research project on the mistreatment of older people, which was to be turned into a PhD. Please contact the author for a copy of the Interviewer Training Brief.
14 Keeter et al. (2000) compared the response rates for two identical random digit dialing national (USA) telephone surveys in which different levels of effort were exerted. The ‘standard’ survey was conducted over a 5-day period and used a sample of adults who were at home when the interviewer called. The ‘rigorous’ survey was conducted over an 8-week period and used random selection from among all adult household members. Response rates were 36.0% for the ‘standard’ effort and 60.6% for the ‘rigorous’ effort.
15 CyBulski and Ciemnecki (2000), in a telephone survey of people with disabilities, also found that respondents were willing to answer sensitive questions, happy to be interviewed, and willing to answer long surveys (22 and 44 min).
16 In this case, it would be important to conduct multivariate analyses to control for such variables as age and sex, which we know are considerably different among caregivers as compared with the general population.