Abstract
Objective:
Evolving privacy and confidentiality regulations make achieving high completion rates in longitudinal studies challenging. Periodically reviewing the methods researchers use to retain participants throughout the follow-up period is important. We review the effectiveness of methods to maximize completion rates in a 1-year longitudinal study of repeat driving-under-the-influence (DUI) offenders.
Method:
During the course of 21 months, we attempted to follow up with 704 participants of a licensed residential treatment facility for repeat DUI offenders. High rates of lifetime Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, substance-use disorders (97.6%) and non-substance-, non-gambling-related psychiatric disorders (44.5%) among the sample made tracking participants difficult. To locate participants and complete follow-up interviews, we obtained baseline contact information, contacted collaterals, sent mailed reminders, searched Internet databases, and offered a monetary incentive for completing study interviews.
Results:
We located 608 participants with active telephone numbers (87.4%) and completed interviews with 488 (70.1% of the entire eligible sample and 80.3% of those with active telephone numbers), after an average (SD) of 8.6 (9.1) calls (median = 5.0). Increasing the number of calls continued to yield additional completions at 10, 20, and 30 calls; at approximately 40 telephone calls, the potential return for additional calls did not justify the added effort.
Conclusions:
These results suggest that researchers need to (1) employ more than 10 telephone calls to adequately track difficult-to-follow substance-using populations, and (2) prepare for a subsample of participants who might require more extensive contact. These results highlight the importance of using empirical guidelines to plan estimates for the number of contacts needed to achieve an adequate follow-up completion rate.
Longitudinal studies are necessary to investigate the antecedents, mediators, and consequences of various complex behavior patterns (e.g., substance-use disorders). However, attrition often limits the value of longitudinal research, particularly research with psychologically vulnerable and transient populations. High rates of attrition can compromise the external validity of study findings, because those who drop out might differ from those who complete a study (Hansen, 1985). Researchers disagree as to what extent and at what percentage attrition endangers the reliability of study results.
Longitudinal studies with samples of alcohol or illicit substance users reflect this variation. We conducted a PsycINFO search of the peer-reviewed literature published in 2007 using the term “substance abuse OR alcohol abuse OR drug abuse AND longitud* OR prospectiv*” to determine recent completion rates in studies similar in design to our own. We excluded studies that did not sample adult substance users or whose follow-up periods fell outside the range of 6 months to 2 years. Although the mean, median, and modal completion rates were all 73%, rates ranged widely from 43% to 100% (Bacharach et al., 2007; Burke and Gregoire, 2007; Davey et al., 2007; Freyer et al., 2007; Fridell et al., 2007; Gossop et al., 2007; Jason et al., 2007; Matzger and Weisner, 2007; Mensinger et al., 2007; Mihai et al., 2007; Moggi et al., 2007; Niv and Hser, 2007; Robinson et al., 2007; Wade et al., 2007; Williams et al., 2007; Williamson et al., 2007). It is worth noting,
…a response rate in excess of 85% is viewed as an excellent rate of return. It would need a peculiar set of circumstances to throw off your results by very much. Response rates in the 70% to 85% range are viewed as very good. Responses in the 60% to 70% range are considered acceptable, but you begin to be uneasy about the characteristics of nonresponders (Mangione, 1995, pp. 60–61).
Longitudinal studies involving populations of alcohol or illicit substance users are especially prone to attrition because these populations can be transient, require inpatient hospital stays, and have co-occurring psychiatric disorders. Further, the sensitive nature of many questions calls for special attention to privacy rights when attempting to contact participants for follow-up (Cunradi et al., 2005; Scott, 2004; Shaffer et al., 2007).
Although statistical techniques, to some extent, can evaluate and adjust for the impact of low retention rates (Cotter et al., 2002), this solution is not ideal. Statistical techniques that impute or substitute for missing data (Boys et al., 2003) only provide a best guess about the typical characteristics of dropouts, thereby losing potentially important information about these respondents and limiting the generalizability of the study findings.
Given the scope and impact of the problem of attrition on addiction studies, identifying cost-effective ways to retain longitudinal samples is crucial. Despite the importance of prospective research for the field, investigations focusing on retention strategies are limited. The current article reviews the literature about techniques used to retain participants; provides a description of the five methods we used to follow a sample of repeat driving-under-the-influence (DUI) offenders; and analyzes our primary mode of contact, weekly telephone calls, carried out until the end of the project.
A gold standard: Near-perfect retention in a sample of injection drug users
In 1992, using a prospective cohort design, Cottler et al. (1996) achieved a completion rate of 96.6% at 18 months with a population of injection drug users. The population included 479 adults: 203 in a drug-free treatment program, 74 in methadone treatment, and 202 participants not in treatment who acted as controls.
To achieve a high completion rate, Cottler et al. (1996) began by having participants complete an extensive future contact sheet at baseline along with their consent form. Researchers conducted follow-up interviews every 3 months, likely bolstering the 18-month retention rate. They located and interviewed respondents using telephone numbers from each participant's contact sheet. Researchers tried to convert refusals by offering monetary incentives for completing interviews and increasing the incentive as the follow-up deadline approached to encourage resistant participants. For participants they could not reach by telephone, two research team members conducted field tracking (e.g., searching common hangouts) and searched public and private databases (e.g., credit checks and social security offices) for up-to-date contact information. For the most difficult-to-follow participants, the researchers went beyond the tracking methods likely covered by the consent form and asked hospitals and social workers thought to be associated with the participant for contact information. These methods were effective: In total, the researchers interviewed 455 subjects; 371 interviews involved telephone tracking, 138 involved database tracking, and 111 involved field tracking. (The groups are not mutually exclusive.)
Changes since Cottler et al.: New privacy standards
Since Cottler et al. (1996) published their study more than a decade ago, changes in participants' privacy rights have led researchers to alter their methods for locating, contacting, and completing follow-up interviews. The Health Insurance Portability and Accountability Act (45 CFR 164.502 (a) (1) (iii)) and its “Privacy Rule,” which requires health and research entities to put safeguards in place to protect against the release of personal health information, have increased attention to keeping participant health information private and secure. Institutional review boards (IRBs) have reacted to these privacy changes by developing more conservative policies about what qualifies as a violation of privacy and confidentiality. For example, tracking participants by contacting hospitals or social service agencies is now nearly impossible: healthcare workers are not authorized to release any information to researchers without consent. Some IRBs will not grant researchers permission to search for participants in their neighborhoods or at service settings, especially when clear permission from the participant is unavailable (H.J. Shaffer, S.E. Nelson, and D.A. LaPlante, personal communication, September 19, 2008). IRBs now require explicit participant permission for such tracking strategies; in Cottler et al.'s study, there is no evidence that participants gave explicit consent to the more aggressive tracking strategies employed.
Changes since Cottler et al.: Technology advances
In addition to new Health Insurance Portability and Accountability Act (HIPAA) and IRB privacy requirements, technological advances also have changed the landscape of follow-up telephone interviews and reliance on traditional telephone-based tracking techniques. According to subscriber estimates from the Cellular Telecommunications Industry Association (CTIA) and population estimates from the Bureau of the Census, the number of people in the United States subscribing to wireless carriers climbed from approximately 44 million (16.6% of the U.S. population) in 1996 to 233 million (77.9% of the U.S. population) in 2006 (CTIA–The Wireless Association, 2008; Bureau of the Census, 2000, 2008). From 1995 to 2000, there was a 34.8% increase in the number of people with caller ID and a 20.5% decrease in those likely to answer an unknown number (Tuckel and O'Neill, 2002). Caller ID and voicemail allow participants the opportunity to screen unwanted telephone calls and avoid speaking with researchers or completing a follow-up interview.
Assessing the effectiveness of conventional methods
Not surprisingly, spending a lot of money (Stephens et al., 2007), time (Desmond et al., 1995), and effort (Cottler et al., 1996) are crucial to retaining difficult-to-follow populations. Researchers tend to use five primary methods to complete follow-up interviews: (1) calling participants, (2) calling their friends and family members (i.e., collaterals), (3) mailing reminders, (4) searching public records for updated or missing contact information, and (5) providing monetary incentives. To facilitate these five methods, researchers often collect comprehensive contact information at baseline, keep detailed records, and allow interview time and location flexibility.
Number of callback attempts.
Researchers often use repeated contact attempts (e.g., telephone calls) as an effective method for tracking difficult-to-follow participants for follow-up studies (Desmond et al., 1995; Meyers et al., 2003). However, as they plan their budgets and time, most researchers do not know how many calls will be necessary to reach their target completion rate. For example, no longitudinal studies with samples of alcohol or other illicit substance users that we reviewed explicitly identified the maximum number of callback attempts. Having an expectation of how many calls will be necessary to reach a target completion rate can help researchers prepare more precise study protocols (e.g., set time limits, decide on additional tracking techniques, and ultimately set a reasonable budget).
We found three peer-reviewed studies of substance-using populations that explicitly reported the number of callbacks needed to obtain certain completion rates. The Chicago Target Cities (Scott et al., 2000) and the Drug Outcome Monitoring Study (Dennis et al., 2000), both reviewed in Scott (2004), reported a 70% completion rate after 20 and 23 callbacks, respectively. Although Cottler et al. (1996) reported an average (SD) of 7.8 (10) contact attempts to reach their sample, a subsample of difficult-to-track participants required an average of 30.2 (19.5) contact attempts. All three studies tracked their participants more than 10 years ago, before more restrictive regulations; therefore, the techniques used might not be suitable for or generalize to studies conducted today.
Collaterals.
Researchers often ask participants for collateral (i.e., friends and family members) contact information; this information allows researchers to call these collaterals for updated participant contact information (Boys et al., 2003; Cotter et al., 2002; Cottler et al., 1996; Vaillant, 1975). Some researchers ask for certain collaterals, such as neighbors or out-of-town relatives, who might be less likely than the participant to live a transient lifestyle (Meyers et al., 2003).
Mailed reminders.
Tracking participants by mail can save time and money. Researchers can send letters to participants to keep in contact between baseline and follow-up. This strategy provides an early opportunity to detect change of address (Meyers et al., 2003). Letters also remind participants to expect a follow-up call before any other tracking methods begin (Stephens et al., 2007). Another type of mailing reminds people who are unresponsive to calls about the study and asks them to contact the research team (Boys et al., 2003; Lapham et al., 2000). Boys et al. (2003) reported 12% of their total sample replied after receiving a postcard reminding them to return mailed follow-up surveys.
Public records (Internet searches).
Some researchers recommend using Internet people-searches and reverse searches to help find participants (Cotter et al., 2002, 2005; Zand et al., 2006) or collaterals who are less likely to have changes in contact information (Meyers et al., 2003; Passetti et al., 2000). Reverse searches allow users to enter a telephone number and receive other contact information (Cotter et al., 2002; Passetti et al., 2000). Passetti et al. (2000) recommend using Internet people searches regularly throughout the follow-up period because they are updated every few months. Researchers also can use the Internet to access public records, usually available through government agencies (Cotter et al., 2002).
Monetary incentives.
Researchers often include incentives for interview completion. Higher incentives, predictably, lead to higher completion rates (Festinger et al., 2008) and speed up the follow-up process (Croft et al., 2007).
Items essential to successful project operations
The success of the five primary methods discussed above is contingent on several other important items. These include obtaining informed consent and useful contact information, accurate record keeping of contact attempts, and flexibility among the research team.
Informed consent and the contact information form.
Foresight during study design is essential to improving completion rates. The contact information that participants provide during their baseline interview includes the information necessary for longitudinal tracking (Cottler et al., 1996; Meyers et al., 2003); however, one study (Stephens et al., 2007) found a weak relationship between the amount of information collected at baseline and completion rates. The type of information, more than the amount, influences follow-up. Given current privacy regulations, unless participants explicitly agree, using public or private records such as voter-registration records (Stephens et al., 2007), motor vehicle directories (Lapham et al., 2000), records of debit card use (Des Jarlais et al., 2005), or other information collected on a contact sheet to track participants might not gain IRB approval. Participants who provide this information imply consent, but consent forms must clearly give permission for researchers to use this information (Desmond et al., 1995; Stinchfield et al., 1994; Vaillant, 1975).
Record keeping and flexibility.
Methodical record keeping of all contact attempts likely increases retention, regardless of the budget and time constraints of a study (Cottler et al., 1996; Meyers et al., 2003). Tracking the day and time of each contact attempt allows researchers to try different times of day and days of the week. This becomes important when researchers come in contact with collaterals who might know a participant's availability, removing much of the guesswork for researchers. Flexibility among the research team when tracking, scheduling, and completing interviews decreases attrition (Cotter et al., 2005; Marsden et al., 2005). For example, researchers should be prepared to work flexible hours (Marsden et al., 2005), because approximately 20% more completed follow-up interviews occur during the weekend than during business hours (Weeks et al., 1987). In an effort to be flexible, Meyers et al. (2003) conducted interviews in various locations around the community and sometimes in prison. Finally, researchers must consider the length of the follow-up interview: every 10 minutes of extended interview time corresponds to a 7% decrease in response rate (McCarty et al., 2006).
Present study
In the present study, our aim was to locate 704 repeat DUI offenders for a telephone interview 1 year after they completed a 2-week residential program. In this article, we examine the completion rate achieved by applying five tracking methods during a lengthy follow-up period: making telephone calls, contacting collaterals, mailing reminders, searching public databases, and providing monetary incentives. This article extends the current literature in two important ways. First, we examine the rate and trajectory of return using multiple tracking methods for this telephone survey. Second, we employ a large number of callbacks during the course of our study, providing an opportunity to analyze the completion rate at various numbers of callbacks. This analysis provides future researchers with an empirical guide for estimating the necessary number of callbacks to achieve a desired rate of participant response.
Method
Participants
Participants were 704 repeat DUI offenders who completed a psychiatric interview while at a 2-week inpatient treatment program; the participants agreed at baseline to complete a follow-up interview after 1 year. The program, the Middlesex Driving Under the Influence of Liquor Program (MDUIL), was a sentencing alternative to incarceration for these offenders. Both the baseline and follow-up interview included the Composite International Diagnostic Interview (Kessler and Ustun, 2004), a comprehensive, standardized, and computer-guided instrument for the assessment of substance use and other psychiatric disorders according to the definitions and criteria of ICD-10 (World Health Organization, 1992) and DSM-IV (American Psychiatric Association, 1994). Participants who completed the baseline interview received a $25 gift card.
Among MDUIL participants, the average age was 40 years old; 88% were white, 82% were male, 72% had a high school diploma or less, and 68% had a yearly income less than $20,000. Thirty-five percent had more than two previous DUI arrests. Ninety-eight percent (97.6%) of participants qualified for a lifetime alcohol-use disorder, and 40.6% qualified for a lifetime drug-use disorder at baseline. Forty-five percent (44.5%) also qualified for a lifetime non-substance-related psychiatric disorder, the most common of which were conduct disorder and posttraumatic stress disorder. Additional details about the baseline sample are reported in Shaffer et al. (2007).
Procedure
Baseline information.
To assist with follow-up contact, all participants who completed the informed consent and baseline interview and who agreed to participate in the follow-up interview filled out a comprehensive contact form. This form asked participants for their date of birth, address, and telephone numbers (i.e., home and work). An optional section requested contact information for three collaterals not living in the participants' household. Approximately half (52.5%) of the participants provided this collateral information.
Reminder postcards.
Every 2 weeks from February 20, 2005, until April 23, 2006, excluding 1 week in April 2005 and 1 week in December 2005, MDUIL admitted a new cohort from which we collected baseline information. Two weeks before each cohort's 1-year follow-up interview, we sent reminder postcards to participants who consented to follow-up interviews. These postcards alerted participants that we would be contacting them to schedule and complete a follow-up interview over the telephone and that this interview would last between 45 minutes and 1 hour. The postcard reminded them that they had agreed to the follow-up interview, noted the $50 reimbursement, and requested that they contact us if any of their contact information had changed during the previous year.
Contact attempts.
We created a spreadsheet for research interviewers to log each contact attempt for the 1-year follow-up interview. After each attempted telephone call, the interviewer recorded the date and outcome of the contact attempt. We defined an attempt to contact as any telephone call initiated by the research team to a participant or official collateral (i.e., those collaterals provided to us at baseline), whether or not the call yielded any information or direct communication. We did not include mailings or participant-initiated calls as contact attempts.
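A minimal version of such a contact-attempt log can be sketched in Python. This is an illustration only: the field names and outcome codes below are hypothetical, not the study's actual spreadsheet columns.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative outcome codes; the study's actual categories are not specified.
OUTCOMES = {"no_answer", "voicemail", "spoke_collateral",
            "scheduled", "completed", "refused", "disconnected"}

@dataclass
class ParticipantLog:
    """One participant's record of team-initiated contact attempts."""
    participant_id: str
    attempts: list = field(default_factory=list)  # (date, outcome) tuples

    def record(self, when: date, outcome: str) -> None:
        if outcome not in OUTCOMES:
            raise ValueError(f"unknown outcome: {outcome}")
        self.attempts.append((when, outcome))

    @property
    def n_attempts(self) -> int:
        # Per the study's definition, every team-initiated call counts,
        # whether or not it yielded information or direct communication.
        return len(self.attempts)

log = ParticipantLog("P001")
log.record(date(2006, 2, 20), "voicemail")
log.record(date(2006, 2, 27), "completed")
print(log.n_attempts)  # 2
```

Recording the outcome alongside the date is what later allows the per-participant callback counts analyzed in the Results section to be tallied.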
We maintained weekly contact attempts for 21 months from the first calls to the first cohort until the project ended. We continued calling each cohort until the close of the study; therefore, the first cohort had a potential 21 months to complete, although none took longer than 18 months to complete, and the last cohort had up to 7 months to complete. Figure 1 illustrates this timeline.
Figure 1.
Follow-up timeline for each cohort; m = month
If an attempt failed to yield a completed follow-up interview, we continued to call once every week (or more, if returning a call) throughout the course of the study follow-up phase. Upon reaching a telephone answering machine, we left voicemails with our name, the organization we were calling from, and a reminder that this was a research study and that the participant had consented to participate in it about 1 year ago. According to the IRB requirements regulating this research, messages on voicemails could not reveal the purpose or topic of the study.
When we reached a participant, we made an effort to complete the follow-up interview at that time. If this was not possible, we scheduled the interview for a mutually agreeable time, typically within 1 week. Twenty-four hours before a scheduled interview, we called and left a reminder message about the upcoming interview. On the day of an interview, if a participant did not answer the telephone on the first attempt, we continued to contact the participant every 15 minutes for 1 hour. If they still did not respond, we called them the next day as well. To maximize response, we varied the time of day, the day of the week, the researcher making this telephone call, and the number from which we were calling.
Alternative avenues to make contact.
At baseline, participants provided the contact information for official collaterals. When repeated attempts to contact the participant failed to produce a completed follow-up interview, we attempted to contact collaterals to ask for current participant contact information. If the collateral refused to release this information, we asked that they pass a message along to the participant. We also sent modified versions of the introductory reminder postcard to nonresponsive participants throughout the follow-up period. After months of attempting to reach nonresponsive participants, we increased the value of the gift card from $50 to $75 and sent another mailing to these participants publicizing the increase. When we exhausted all interpersonal avenues for contacting participants, we used Internet databases to locate a telephone number and/or address based on participant name and date of birth.
Refusals.
If any participants refused to complete the follow-up interview, we attempted to convert their decision (i.e., change their mind). Three common reasons for not completing the study included lack of time, the length of the interview (45–60 minutes), and the cost of using a cellular telephone. To counter these arguments, we interviewed during off hours, broke interviews into several shorter blocks, and offered to reimburse parking costs for participants who opted for an in-person interview in our office. We also provided a more detailed rationale for the study and the importance of their participation.
Results
Callbacks
Figure 2 illustrates the follow-up outcomes for our participants. Our research team located 608 participants, spoke directly with 572 participants, and completed follow-up interviews with 488 of the original sample of 704. Eight of those original 704 were deceased, reducing the potential sample to 696, resulting in an overall completion rate of 70.1% (i.e., 488/696) for the follow-up interview. The mean (SD) number of callback attempts needed to complete a follow-up interview was 8.6 (9.1), and the median number of callbacks was 5.0. Participants who completed the follow-up interview after we reached a 60% completion rate required significantly more calls to complete interviews (mean = 21.9 [13.3]) than participants who completed before we reached the 60% threshold (mean = 6.6 [5.9]; t = −15.74, 486 df, p < .001).
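The completion-rate arithmetic above can be checked directly. The counts are taken from the text; this is arithmetic only, not a reanalysis of the data.

```python
# Reported counts: deceased participants are removed from the
# denominator before computing the overall completion rate.
original_sample = 704
deceased = 8
completed = 488
located = 608            # participants with active telephone numbers

eligible = original_sample - deceased          # 696 eligible participants
overall_rate = completed / eligible * 100      # rate among eligible
located_rate = completed / located * 100       # rate among located

print(f"{overall_rate:.1f}% of eligible, {located_rate:.1f}% of located")
# 70.1% of eligible, 80.3% of located
```

These match the 70.1% and 80.3% figures reported in the abstract.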
Figure 2.
Follow-up flowchart outlining the outcomes of the sample
Figure 3 shows two similar but distinct trajectories when comparing the number of callbacks by the completion rate for two groups. Among all participants, except those who were deceased (i.e., eligible participants, n = 696), approximately 70% completed the follow-up interview after 40 callbacks. Supplementary calls did not yield a meaningful number of additional completed interviews. Among participants we located, which excluded deceased and incarcerated participants, as well as participants with disconnected numbers (i.e., located participants, n = 608), 70% completed the interview after 19 or fewer callbacks. The response rate continued to increase until we reached 40 callbacks. At this point, 79.6% of located participants completed the interview, yielding an asymptotic completion rate; that is, additional callbacks yielded only a few additional completed interviews.
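The trajectories in Figure 3 are cumulative: at each callback count k, the curve shows the proportion of the denominator group whose interview was completed within k calls. A sketch of that computation, assuming a per-interview record of how many calls it took, is below; the demo numbers are invented, not the study's data.

```python
def completion_curve(callbacks_to_complete, n_denominator, max_calls=50):
    """Cumulative completion rate (as a percentage) after k callbacks.

    callbacks_to_complete: for each completed interview, the number of
    calls it took; n_denominator: group size (e.g., eligible or located
    participants). Returns the rate for k = 1 .. max_calls.
    """
    return [100 * sum(1 for c in callbacks_to_complete if c <= k) / n_denominator
            for k in range(1, max_calls + 1)]

# Hypothetical illustration: most interviews complete within a few
# calls, while a long tail takes many more.
demo = [1, 2, 2, 3, 5, 8, 13, 21, 34]
curve = completion_curve(demo, n_denominator=10, max_calls=40)
print(curve[4], curve[39])  # rate after 5 calls vs. after 40 calls
# 50.0 90.0
```

Plotting such a curve for both denominators is what reveals the plateau: the point (here, around 40 calls) past which additional callbacks yield few additional completions.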
Figure 3.
Number of callbacks by completion rate. Note: Eligible participants excludes deceased participants; located participants excludes deceased participants, participants with disconnected numbers, and incarcerated participants
Among those who did not complete the interview, 41 participants explicitly refused to complete the follow-up interview. Seventy-nine participants were “soft refusers,” meaning that they avoided us by never answering the telephone, missed at least one scheduled interview, or spoke with us but did not schedule an interview.
One third of participants had a disconnected number at some point during the follow-up period. We completed follow-up interviews with 93 of the 234 participants whose number was ever disconnected (39.7%). At the conclusion of the study, among all 704 participants, we were unable to complete interviews with 132 (18.8%) participants because their telephone number was disconnected or wrong; they lived in a halfway house, were incarcerated, or were deceased; or they never once answered their telephone.
Participants who completed the 1-year follow-up interview averaged 77 (106) days (median = 26) from first attempted phone call to a completed interview. Although the follow-up period was as long as 21 months, no participants who completed the interview took longer than 18 months from the start of their follow-up window to the end of the study. Figure 4 illustrates the days to a completed interview versus the number of participants who completed the interview. This figure reveals three distinct groups: 53.1% of participants completed their interview during the first month (n = 259) of their follow-up window, 33.4% of participants completed within 2–6 months (n = 163), and 13.5% of participants completed after 7 months (n = 66). Comparing the required callbacks for these three groups reveals that participants who took 7 months or longer to complete the follow-up required on average 24.6 (12.4) callbacks, more than twice the average for those who completed within 2–6 months (mean = 10.5 [5.2]), and more than six times the average for those who completed in the first month (mean = 3.5 [2.3]; F = 375.5, 2/485 df, p < .001).
Figure 4.
Distribution of interview completions by follow-up days
Compared with the rest of the sample who completed interviews, the last 10% of participants who completed during the last 5 months of the study were more likely to have lifetime diagnoses of conduct disorder and attention deficit disorder (χ2 = 7.43 and 7.07, respectively, 1 df, p < .01).
Collaterals
Four hundred fifty-four participants (64.5%) had official or nonofficial collaterals (i.e., collaterals whose contact information was not provided at baseline but whom we talked with while attempting to contact participants). We attempted to contact 198 collaterals and reached 174 (87.8%). Of these, 117 provided new information, 51 provided incorrect or useless information, and 6 refused to provide us with any information. We cannot identify with precision how many completions official or nonofficial collateral assistance produced. However, 34 participants completed after we obtained new contact information from their collaterals, and 69 participants completed after we obtained other new information (e.g., best time to call) from their collaterals. Thus, collaterals potentially increased the completion rate by approximately 5%–10%. Collaterals also informed us of deceased or incarcerated participants. Participants who provided collaterals at baseline were more likely to complete the study (73.9%) than those who did not provide collaterals (63.6%), whether we spoke to their collaterals or not (χ2 = 8.7, 1 df, p < .01).
Three months before the study closed, we reached a 60% completion rate, which prompted a mass mailing to 262 (37.2%) participants who were still not interviewed and had a valid address on file. When the completion rate reached 63%, we sent one more reminder to 164 (23.3%) participants with valid addresses who had yet to complete the follow-up interview.
The majority of the 416 reminder postcards from the two mass mailings went unanswered. The United States Postal Service returned 70 (16.8%) because of incorrect addresses, of which 13 (3.1%) had forwarding addresses. Eighteen participants (4.3%) who received mailings sent updated information, and 15 participants (3.6%) completed follow-up interviews as a direct result of these two mailings.
Internet searches
To correct for invalid contact information, we attempted to locate new information from Internet databases. Internet databases yielded invalid numbers for approximately 72% of the 115 participants for whom we searched but led to completed follow-up interviews for 16 people (14%).
Discussion
Longitudinal studies of people with substance-use disorders and other co-occurring disorders are a challenge because of the potential transience and resistance among this group. Participants in our sample had criminal backgrounds (e.g., repeat DUI offenses) and, for 98%, lifetime histories of psychiatric or substance-use disorders (Shaffer et al., 2007). The large number of disconnected numbers and the number of mailings returned because of an incorrect address reveal that these participants were transient. However, despite the difficulties, it is indeed possible to track participants prospectively and achieve an adequate completion rate using rigorous telephone survey methods. An adequate completion rate among a community survey sample is 70% (Hulley et al., 1988; Mangione, 1995). Primary among our findings is that this type of sample required more time and callbacks to achieve an acceptable completion rate than investigators might plan. This finding has implications for research proposals (e.g., budgeting, sample size, and estimates of completion rate).
Our primary method for tracking participants was telephone calls. This method yielded the most completed interviews. As Figure 3 illustrates, the rate of return for additional calls did not begin to level off until after 20 calls. Only after 40 calls did the potential return for additional calls fail to justify the added effort. These results reveal that it is necessary to plan for an extended effort to achieve adequate follow-up samples. To illustrate, approximately half of those who completed interviews in this study required 2 to 18 months of contact efforts before completing. Gaining access to the most difficult-to-contact subgroup, despite the disproportionate time and effort involved, was essential to our project. Indeed, an analysis revealed that this difficult-to-interview subgroup was clinically different from the noncompleters and earlier completers.
The most useful method for updating participant contact information was communication with collaterals. Collaterals confirmed current information, provided us with new contact information, passed along messages, and in some instances tracked down participants. They did all of this without receiving reimbursement or detailed information about the study. Our results revealed that collateral-provided information led to twice as many completed interviews as Internet databases. It also is worth noting that participants who did not provide collaterals at baseline might represent a more difficult-to-follow subgroup than participants who provided collateral information.
The two mass mailings sent to participants we could not track using provided telephone numbers or collaterals yielded a low rate of updated contact information and completed interviews. Based on the cost of postage sent to all participants, each interview completed as a direct or indirect result of the mailings cost $22.74.
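The per-interview cost reported above is simple arithmetic: total mailing cost divided by the number of interviews attributed to the mailings. The sketch below is illustrative only; the per-postcard unit cost and the attribution count are assumed values (the article reports only the resulting $22.74 figure), not figures taken from the study budget.

```python
# Illustrative cost-per-completed-interview arithmetic. The unit cost and
# attribution count below are assumed values, not figures from the article.
postcards_sent = 416          # two mass mailings (262 + 164)
cost_per_postcard = 0.82      # assumed printing + postage cost per piece
interviews_attributed = 15    # assumed number of interviews credited to mailings

cost_per_interview = postcards_sent * cost_per_postcard / interviews_attributed
print(f"${cost_per_interview:.2f} per completed interview")  # → $22.74
```

The same calculation, run against a project's actual postage receipts, lets a team compare the marginal cost of a mailing campaign against the marginal cost of additional telephone callbacks.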
For participants who did not provide collaterals, we used Internet databases to locate new contact information. This proved to be the least helpful method of gathering additional contact information. Many of the numbers these databases provided were wrong, disconnected, or connected only to a generic voicemail; in the latter cases, determining whether a number was correct was impossible. Compared with collateral-derived information, Internet databases yielded a quarter of the contact information and half as many completed interviews. Nevertheless, for participants who refused to provide collaterals, such databases are a viable option.
Taken together, these techniques reveal that persistence was crucial for achieving a high completion rate in this sample. Although we reached a 70% completion rate among active numbers with 19 or fewer callbacks, Figures 3 and 4 show that we continued to improve our completion rate to 80% of active numbers by employing up to 40 callbacks and continuing our efforts for more than 7 months.
Limitations
During the study period, to allow for as many weekly callbacks as possible, we had a lengthy follow-up window (i.e., potentially 21 months); many follow-up periods are shorter, limiting the possible number of callbacks. With the exception of one face-to-face follow-up interview, we conducted all interviews by telephone; therefore, these results might not generalize to studies that conduct interviews in person or use some other form of communication. We also cannot confidently determine which of the follow-up methods we used (e.g., flexible calls, collaterals, mailings, Internet searches, and monetary incentives) had the greatest influence on the rate of completed follow-up interviews. For example, our evening calls might have been more effective than our daytime calls. Alternatively, an unanswered reminder letter might have prompted a participant to stop screening our calls and complete the interview; in that case, the mailing would have been indirectly responsible for a completed interview.
Although the criminal, psychiatric, and substance-using history of our repeat DUI offender population leads us to believe this sample is comparable to other difficult-to-track populations (e.g., other substance-use disordered populations), this is an assumption, and our results might not generalize beyond repeat DUI offenders. More research will be necessary to determine whether completion rates and numbers of callbacks are similar among other populations.
Future directions
In addition to the methods already discussed, we have identified six methods we did not use that might make it easier for future researchers to contact difficult-to-track participants. First, when designing a consent form, anticipate that explicit permission to contact outside agencies is necessary; merely asking participants for such contact information implies, but does not document, consent. In particular, our IRB prohibited us from contacting treatment programs, probation officers, and other community settings that would have helped us track participants for follow-up. Second, employing newer technologies might have aided follow-up. Our contact sheet did not ask for an email address, which would have allowed us to reach participants with a disconnected telephone number and an invalid address. Third, more use could be made of the Internet, with a website of information about the study and a contact link for participants. Fourth, having a specific, dedicated follow-up phone line would have prevented confusion in contacting us. Fifth, budgeting for constant contact from baseline through follow-up would also have been cost effective. Finally, sending a birthday card, holiday greeting card, or a project update would have kept participants engaged in the project, reduced refusals, and alerted us immediately to changed addresses (Scott, 2004).
Conclusion
Researchers often fail to propose budgets that allow for the extended use of the tracking and contact methods required to achieve adequate response and completion rates. Systematic planning and flexibility are essential for completing a project within a budget. Advances in technology and personal communication indicate a need for researchers to adapt the ways they conduct longitudinal studies and to attempt new methods while maintaining the same diligence and persistence that researchers have used for decades.
This study provides a guide for the development of efficient research plans with difficult-to-follow populations. Researchers studying difficult populations using longitudinal methods should design their budget and timeline knowing that the project likely will require more than 10 and perhaps as many as 40 callbacks to reach a successful completion rate. If researchers conduct pilot testing or if the project includes multiple waves of follow-up, researchers should consider using the first wave to empirically determine the correct number of contact attempts to reach an acceptable completion rate for later waves.
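One way to operationalize this first-wave calibration is to record, for each completed pilot interview, how many call attempts it required, and then find the smallest call cutoff whose cumulative share of completions meets a target rate. The sketch below is not the authors' procedure; the pilot data and the `calls_needed` helper are hypothetical, shown only to illustrate the computation.

```python
from collections import Counter

# Hypothetical pilot-wave data: number of calls each completed interview took.
calls_to_complete = [1, 2, 2, 3, 5, 5, 8, 12, 19, 40]

def calls_needed(calls_to_complete, target=0.80):
    """Smallest call cutoff capturing at least `target` share of completions."""
    counts = Counter(calls_to_complete)
    total = len(calls_to_complete)
    cumulative = 0
    for n_calls in sorted(counts):
        cumulative += counts[n_calls]
        if cumulative / total >= target:
            return n_calls
    return max(counts)

print(calls_needed(calls_to_complete))  # → 12
```

With the hypothetical data above, 80% of completions arrive by the 12th call, so a later wave could budget roughly that many attempts per participant, plus a reserve for the hardest-to-reach subgroup.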
The landscape for conducting longitudinal studies with difficult populations is quickly changing. Keeping technology and innovation on the side of the researcher is essential to maintaining high retention rates. Ultimately, however, response rates are a direct function of the number and frequency of efforts to contact participants and can be bolstered by persistence, planning, time, and new techniques for retaining participants.
Acknowledgments
The authors extend special thanks to the directors and counselors at MDUIL—Charles Karayianis, Michael Kennedy, Jim Barry, Christine Breen, Daniel Gallo, Karen Horrigan, and Michael Jezylo—as well as the entire MDUIL staff for their collaboration on this project. The authors also thank Andy Boudreau, Sarbani Hazra, Juan Molina, Siri Odegaard, Allyson Peller, Christine Thurmond, Jenna Tonet, and Audrey Tse for their support and work on this project.
Footnotes
This research was supported by National Institute on Alcohol Abuse and Alcoholism grant R01 AA014710-01A1. The authors also received support from the National Center for Responsible Gaming, bwin Interactive Entertainment AG, National Institute on Drug Abuse, National Institute of Mental Health, and the Las Vegas Sands Corporation.
References
- American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). Washington, DC: American Psychiatric Association; 1994.
- Bacharach SB, Bamberger PA, Cohen A, Doveh E. Retirement, social support, and drinking behavior: A cohort analysis of males with a baseline history of problem drinking. J. Drug Issues. 2007;37:525–548.
- Boys A, Marsden J, Stillwell G, Hatchings K, Griffiths P, Farrell M. Minimizing respondent attrition in longitudinal research: Practical implications from a cohort study of adolescent drinking. J. Adolesc. 2003;26:363–373. doi: 10.1016/s0140-1971(03)00011-3.
- Bureau of the Census. Annual Population Estimates, 2000. Washington, DC: U.S. Census Bureau, Population Division; 2000.
- Bureau of the Census. Annual Population Estimates, 2009. Washington, DC: U.S. Census Bureau, Population Division; 2008. (www.census.gov/popest/national/national.html).
- Burke AC, Gregoire TK. Substance abuse treatment outcomes for coerced and noncoerced clients. Hlth Social Work. 2007;32:7–15. doi: 10.1093/hsw/32.1.7.
- Cotter RB, Burke JD, Loeber R, Navratil JL. Innovative retention methods in longitudinal research: A case study of the developmental trends study. J. Child Fam. Stud. 2002;11:485–498.
- Cotter RB, Burke JD, Stouthamer-Loeber M, Loeber R. Contacting participants for follow-up: How much effort is required to retain participants in longitudinal studies? Eval. Prog. Plann. 2005;28:15–21.
- Cottler LB, Compton WM, Ben-Abdallah A, Horne M, Claverie D. Achieving a 96.6% follow-up rate in a longitudinal study of drug abusers. Drug Alcohol Depend. 1996;41:209–217. doi: 10.1016/0376-8716(96)01254-9.
- Croft JR, Festinger DS, Dugosh KL, Marlowe DB, Rosenwasser BJ. Does size matter? Salience of follow-up payments in drug abuse research. IRB: Ethics Human Res. 2007;29(4):15–19.
- CTIA—The Wireless Association. Annualized Wireless Industry Survey Results, December 1985 to December 2008. Washington, DC: CTIA; 2008. (http://files.ctia.org/pdf/CTIA_Survey_Year_End_2008_graphics.pdf).
- Cunradi CB, Moore R, Killoran M, Ames G. Survey nonresponse bias among young adults: The role of alcohol, tobacco, and drugs. Subst. Use Misuse. 2005;40:171–185. doi: 10.1081/ja-200048447.
- Davey MA, Latkin CA, Hua W, Tobin KE, Strathdee S. Individual and social network factors that predict entry to drug treatment. Amer. J. Addict. 2007;16:38–45. doi: 10.1080/10601330601080057.
- Dennis ML, Scott CK, Godley MD, Funk R. Predicting outcomes in adult and adolescent treatment with case mix vs. level of care: Findings from the Drug Outcome Monitoring Study. Drug Alcohol Depend. 2000;60:S51–S52.
- Des Jarlais DC, Perlis TE, Settembrino JM. The use of electronic debit cards in longitudinal data collection with geographically mobile drug users. Drug Alcohol Depend. 2005;77:1–5. doi: 10.1016/j.drugalcdep.2004.06.010.
- Desmond DP, Maddux JF, Johnson TH, Confer BA. Obtaining follow-up interviews for treatment evaluation. J. Subst. Abuse Treat. 1995;12:91–102. doi: 10.1016/0740-5472(94)00076-4.
- Festinger DS, Marlowe DB, Dugosh KL, Croft JR, Arabia PL. Higher magnitude cash payments improve research follow-up rates without increasing drug use or perceived coercion. Drug Alcohol Depend. 2008;96:128–135. doi: 10.1016/j.drugalcdep.2008.02.007.
- Freyer J, Coder B, Bischof G, Baumeister SE, Rumpf H-J, John U, Hapke U. Intention to utilize formal help in a sample with alcohol problems: A prospective study. Drug Alcohol Depend. 2007;87:210–216. doi: 10.1016/j.drugalcdep.2006.08.018.
- Fridell M, Hesse M, Billsten J. Criminal behavior in antisocial substance abusers between five and fifteen years follow-up. Amer. J. Addict. 2007;16:10–14. doi: 10.1080/10550490601077734.
- Gossop M, Stewart D, Marsden J. Readiness for change and drug use outcomes after treatment. Addiction. 2007;102:301–308. doi: 10.1111/j.1360-0443.2006.01681.x.
- Hansen WB, Collins LM, Malotte CK, Johnson CA, Fielding JE. Attrition in prevention research. J. Behav. Med. 1985;8:261–275. doi: 10.1007/BF00870313.
- Hulley SB, Gove S, Browner WS, Cummings SR. Choosing the study subjects: Specification and sampling. In: Hulley SB, Cummings SR, editors. Designing Clinical Research: An Epidemiologic Approach. Philadelphia, PA: Lippincott Williams & Wilkins; 1988. pp. 18–30.
- Jason LA, Davis MI, Ferrari JR. The need for substance abuse after-care: Longitudinal analysis of Oxford House. Addict. Behav. 2007;32:803–818. doi: 10.1016/j.addbeh.2006.06.014.
- Kessler RC, Ustun TB. The World Mental Health (WMH) Survey Initiative version of the World Health Organization (WHO) Composite International Diagnostic Interview (CIDI). Int. J. Meth. Psychiat. Res. 2004;13:93–121. doi: 10.1002/mpr.168.
- Lapham S, Baum G, Skipper B, Chang I. Attrition in a follow-up study of driving while impaired offenders: Who is lost? Alcohol Alcsm. 2000;35:464–470. doi: 10.1093/alcalc/35.5.464.
- McCarty C, House M, Harman J, Richards S. Effort in phone survey response rates: The effects of vendor and client-controlled factors. Field Meth. 2006;18:172–188.
- Mangione TW. Mail Surveys: Improving the Quality. Thousand Oaks, CA: Sage; 1995.
- Marsden J, Stillwell G, Barlow H, Taylor C, Boys A, Hunt N. An evaluation of a brief intervention model for use with young non-injecting stimulant users. Drugs: Educ. Prev. Policy. 2005;12(Suppl. No. 1):90–93.
- Matzger H, Weisner C. Nonmedical use of prescription drugs among a longitudinal sample of dependent and problem drinkers. Drug Alcohol Depend. 2007;86:222–229. doi: 10.1016/j.drugalcdep.2006.06.010.
- Mensinger JL, Lynch KG, Tenhave TR, McKay JR. Mediators of telephone-based continuing care for alcohol and cocaine dependence. J. Cons. Clin. Psychol. 2007;75:775–784. doi: 10.1037/0022-006X.75.5.775.
- Meyers K, Webb A, Frantz J, Randall M. What does it take to retain substance-abusing adolescents in research protocols? Delineation of effort required, strategies undertaken, costs incurred, and 6-month post-treatment differences by retention difficulty. Drug Alcohol Depend. 2003;69:73–85. doi: 10.1016/s0376-8716(02)00252-1.
- Mihai A, Damsa C, Allen M, Baleydier B, Lazignac C, Heinz A. Viewing videotape of themselves while experiencing delirium tremens could reduce the relapse rate in alcohol-dependent patients. Addiction. 2007;102:226–231. doi: 10.1111/j.1360-0443.2006.01667.x.
- Moggi F, Giovanoli A, Strik W, Moos BS, Moos RH. Substance use disorder treatment programs in Switzerland and the USA: Program characteristics and 1-year outcomes. Drug Alcohol Depend. 2007;86:75–83. doi: 10.1016/j.drugalcdep.2006.05.017.
- Niv N, Hser Y-I. Women-only and mixed-gender drug abuse treatment programs: Service needs, utilization and outcomes. Drug Alcohol Depend. 2007;87:194–201. doi: 10.1016/j.drugalcdep.2006.08.017.
- Passetti LL, Godley SH, Scott CK, Siekmann M. A low-cost follow-up resource: Using the World Wide Web to maximize client location efforts. Amer. J. Eval. 2000;21:195–203.
- Robinson EAR, Cranford JA, Webb JR, Brower KJ. Six-month changes in spirituality, religiousness, and heavy drinking in a treatment-seeking sample. J. Stud. Alcohol Drugs. 2007;68:282–290. doi: 10.15288/jsad.2007.68.282.
- Scott CK. A replicable model for achieving over 90% follow-up rates in longitudinal studies of substance abusers. Drug Alcohol Depend. 2004;74:21–36. doi: 10.1016/j.drugalcdep.2003.11.007.
- Scott CK, Muck RD, Foss MA. The impact of centralized intake on access to treatment and satisfaction with intake procedures. In: Levy JA, Stephens RC, McBride DC, editors. Emergent Issues in the Field of Drug Abuse. Stamford, CT: JAI Press; 2000. pp. 131–150.
- Shaffer HJ, Nelson SE, Laplante DA, Labrie RA, Albanese M, Caro G. The epidemiology of psychiatric disorders among repeat DUI offenders accepting a treatment-sentencing option. J. Cons. Clin. Psychol. 2007;75:795–804. doi: 10.1037/0022-006X.75.5.795.
- Stephens RC, Thibodeaux L, Sloboda Z, Tonkin P. Research note: An empirical study of adolescent student attrition. J. Drug Issues. 2007;37:475–488.
- Stinchfield RD, Niforopulos L, Feder SH. Follow-up contact bias in adolescent substance abuse treatment outcome research. J. Stud. Alcohol. 1994;55:285–289. doi: 10.15288/jsa.1994.55.285.
- Tuckel P, O'Neill H. The vanishing respondent in telephone surveys. J. Advert. Res. 2002;42:26–48.
- Vaillant GE. Evaluation of treatment through the long-term follow-up. In: Bostrum H, Larsson T, Ljungstedt N, editors. Drug Dependence: Treatment and Treatment Evaluation. Stockholm, Sweden: Almqvist & Wiksell International; 1975. pp. 262–269.
- Wade D, Harrigan S, McGorry PD, Burgess PM, Whelan G. Impact of severity of substance use disorder on symptomatic and functional outcome in young individuals with first-episode psychosis. J. Clin. Psychiat. 2007;68:767–774. doi: 10.4088/jcp.v68n0517.
- Weeks MF, Kulka RA, Pierson SA. Optimal call scheduling for a telephone survey. Publ. Opin. Q. 1987;51:540–549.
- Williams EC, Horton NJ, Samet JH, Saitz R. Do brief measures of readiness to change predict alcohol consumption and consequences in primary care patients with unhealthy alcohol use? Alcsm Clin. Exp. Res. 2007;31:428–435. doi: 10.1111/j.1530-0277.2006.00324.x.
- Williamson A, Darke S, Ross J, Teesson M. The effect of baseline cocaine use on treatment outcomes for heroin dependence over 24 months: Findings from the Australian Treatment Outcome Study. J. Subst. Abuse Treat. 2007;33:287–293. doi: 10.1016/j.jsat.2006.12.009.
- World Health Organization. The ICD-10 Classification of Mental and Behavioral Disorders: Clinical Descriptions and Diagnostic Guidelines. Geneva: World Health Organization; 1992.
- Zand D, Thomson NR, Dugan M, Braun JA, Holterman-Hommes P, Hunter PL. Predictors of retention in an alcohol, tobacco, and other drug prevention study. Eval. Rev. 2006;30:209–222. doi: 10.1177/0193841X05281160.