
RECRUITMENT INTO THE ALZHEIMER PREVENTION TRIALS (APT) WEBSTUDY FOR A TRIAL-READY COHORT FOR PRECLINICAL AND PRODROMAL ALZHEIMER’S DISEASE (TRCPAD)

S. Walter1, T.B. Clanton1, O.G. Langford1, M.S. Rafii1, E.J. Shaffer1, J.D. Grill3, G.A. Jimenez-Maggiora1, R.A. Sperling2, J.L. Cummings4, P.S. Aisen1 and the TRC-PAD Investigators*

1. Alzheimer’s Therapeutic Research Institute, University of Southern California, San Diego, CA, USA; 2. Center for Alzheimer Research and Treatment, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; 3. Institute for Memory Impairments and Neurological Disorders, University of California, Irvine, CA, USA;
4. Department of Brain Health, School of Integrated Health Sciences, University of Nevada, Las Vegas, NV, USA; Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, USA; *TRC-PAD Investigators are listed at www.trcpad.org

Corresponding Author: S. Walter, Alzheimer’s Therapeutic Research Institute, University of Southern California, San Diego, CA, USA, waltersa@usc.edu

J Prev Alz Dis 2020;7(4):219-225
Published online August 11, 2020, http://dx.doi.org/10.14283/jpad.2020.46

 


Abstract

Background: The Alzheimer Prevention Trials (APT) Webstudy is the first stage in establishing a Trial-ready Cohort for Preclinical and Prodromal Alzheimer’s disease (TRC-PAD). This paper describes recruitment approaches for the APT Webstudy.
Objectives: To remotely enroll a cohort of individuals into a web-based longitudinal observational study. Participants are followed quarterly with brief cognitive and functional assessments, and referred to Sites for in-clinic testing and biomarker confirmation prior to enrolling in the Trial-ready Cohort (TRC).
Design: Participants are referred to the APT Webstudy from existing registries of individuals interested in brain health and Alzheimer’s disease research, as well as through central and site recruitment efforts. The study team utilizes Urchin Tracking Modules (UTM) codes to better understand the impact of electronic recruitment methods.
Setting: A remotely enrolled online study.
Participants: Volunteers who are at least 50 years old and interested in Alzheimer’s research.
Measurements: Demographics, and the recruitment source of each participant as measured by UTM codes.
Results: 30,650 participants consented to the APT Webstudy as of April 2020, with 69.7% resulting from referrals from online registries. Emails sent by the registry to participants were the most effective means of recruitment. Participants are distributed across the US, and the demographics of the APT Webstudy reflect the referral registries, with 73.1% female, 85.0% highly educated, and 92.5% Caucasian.
Conclusions: We have demonstrated the feasibility of enrolling a remote web-based study utilizing existing registries as a primary referral source. The next priority of the study team is to engage in recruitment initiatives that will improve the diversity of the cohort, towards the goal of clinical trials that better represent the US population.

Key words: Trial-ready cohort, online registry, remote recruitment, web-based, preclinical, Alzheimer’s disease, prevention.


 

Background

Identifying eligible participants for early intervention Alzheimer’s disease (AD) clinical trials continues to be a significant challenge in the field (1, 2). The overarching aim of the Trial-Ready Cohort for Preclinical and Prodromal Alzheimer’s Disease (TRC-PAD) program is to accelerate enrollment for early stage AD clinical trials (3). This will be accomplished by identifying and screening participants to confirm eligibility for these trials, including amyloid biomarker confirmation, and then monitoring and maintaining engagement with these participants through longitudinal visits until an appropriate trial is available. The considerations behind the design of TRC-PAD are described by Aisen et al. (4). The first step in establishing the Trial-ready Cohort (TRC) was to recruit participants into the Alzheimer Prevention Trials (APT) Webstudy, an online assessment tool designed to serve as a feeder to the in-person TRC-PAD cohort. We projected the APT Webstudy would require between 25,000 and 50,000 participants, with at least 20% of participants from under-represented communities, in order to identify enough eligible participants for a planned TRC of n=2,000. The APT Webstudy program requires secure and scalable informatics infrastructure (5), as well as an algorithm to identify participants and rank them by risk of brain amyloidosis and development of AD dementia (6). These elements of the program are described in separate papers in this series.
The APT Webstudy was launched as clinical trials have increasingly utilized web-based tools, including registries, to improve efficiency in screening (7-9). Although leveraging registries to recruit for clinical trials is not a new concept, the establishment of online registries has broadened access to participants who are interested and eligible for studies (10-13). Going further than remote recruitment, Orri et al (14) conducted the first entirely web-based clinical trial run under an Investigational New Drug (IND) application. Digital tools allow researchers to optimize the use of mobile technologies in clinical trials, respond to the preferences of participants (15), and measure and fine-tune communication methods (16). To our knowledge, TRC-PAD is the first program inviting participants from various existing registries to join a longitudinal Webstudy with identification and referral of high-risk individuals to an in-person TRC. In this article, we describe the preliminary experience of efforts to recruit to the APT Webstudy, including from national and local registries, as a unifying path to enrollment in TRC-PAD.

 

Methods

APT Webstudy Experience

Participants log in using either their existing social login credentials or by creating an account and providing a username, email address, and password. Once logged in, participants are considered ‘registered.’ The Webstudy is designed as a ‘walk-through’ experience, with each new section opening after completion of the previous one. The sections are: Step 1: Personal Profile; Step 2: Consent; Step 3: Lifestyle; Step 4: Remote cognitive and functional assessments; Step 5: Review scores. Sections are described in more detail in a separate paper in this series (17). The questionnaires and assessments were designed to be brief, with a target duration of 15 minutes.

Recruitment

APT Webstudy participants are recruited from multiple sources. For the purposes of this paper, the term registry refers to an online registry, study, or service matching individuals interested in participating in studies or clinical trials to prevent or delay AD dementia. Early in its development, the TRC-PAD study team established partnerships with each of the largest “Feeder” registries and, in collaboration with the managing team or investigators, developed a referral strategy based on the registry’s unique population and existing communication pathways. Each strategy began small and was expanded once we were able to ensure the stability of the Webstudy infrastructure, as well as our capacity to provide user support. Outreach took the form of direct email campaigns, features highlighting the APT Webstudy on the registry website, e-newsletters, and social media posts. In addition to referrals from registries, both central and site-based strategies were employed.

UTM Codes

Urchin Tracking Modules (UTM) were generated to track participants who registered for the APT Webstudy in response to digital outreach, and were embedded in emails, webpages, and social media advertisements. For some registries, although various outreach activities were utilized, all responses linked back through the registry website, requiring use of a single UTM and limiting our ability to understand the response rates to different digital communications. Recruitment strategies that did not utilize a UTM included printed materials (i.e., brochures, newsletters, and magazines) and earned media (i.e., online and print newspaper articles).
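The mechanics of UTM tagging can be illustrated with a minimal sketch. The parameter values, landing URL, and campaign names below are hypothetical; the study's actual UTM taxonomy is not published here.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def tag_url(base_url, source, medium, campaign):
    """Append standard UTM parameters to a landing-page URL."""
    params = urlencode({
        "utm_source": source,      # e.g., the referring registry
        "utm_medium": medium,      # e.g., "email" or "social"
        "utm_campaign": campaign,  # e.g., a specific email batch
    })
    return f"{base_url}?{params}"

def attribute(url):
    """Recover UTM parameters from a visited URL for referral attribution."""
    qs = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in qs.items() if k.startswith("utm_")}

# Hypothetical example: a link in an email batch from a feeder registry
link = tag_url("https://example.org/apt", "apr", "email", "jan2019_batch1")
print(attribute(link))
# → {'utm_source': 'apr', 'utm_medium': 'email', 'utm_campaign': 'jan2019_batch1'}
```

Because every link in a given email or advertisement carries the same tags, registrations can be grouped by source and medium, which is exactly what limits attribution when a registry funnels all responses through a single tagged link.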

The Alzheimer’s Prevention Registry (APR) (www.endalznow.org)

APR was launched in October 2012 by the Banner Alzheimer’s Institute with the aim of providing a shared resource to the AD scientific community to facilitate enrollment in studies to prevent AD. In 2015, APR began offering an optional APOE genotyping program (GeneMatch) to members ages 55-75 to help match individuals to research studies. As of August 2018, APR had enrolled a total of 320,000 participants, with 75,351 agreeing to the GeneMatch program and approximately 75,000 agreeing to be contacted by researchers (18). APR participants are primarily women (65.6%); 45.5% are Caucasian, 1.8% are Hispanic/Latino, and less than 1% are from other underrepresented groups. It should be noted that these percentages reflect only the 60.8% of APR participants who provided their race or ethnicity (Table 1) (19). 14% of APR participants are age 50-59, 35% age 60-69, and 23% age 70-79 (Table 1). The APT Webstudy recruitment strategy began with a pilot phase in April 2018, with batches of emails sent from APR to 7,293 individuals (Figure 1). This was followed by an article in the APR quarterly newsletter introducing the APT Webstudy and posts on APR’s social media accounts. In January 2019, emails were sent in batches to 75,000 registrants inviting them to join the APT Webstudy. In March and April 2020, follow-up emails were sent to participants who had not opened the email or clicked the link for the APT Webstudy, with additional reminders scheduled for May 2020.

Alzheimer’s Association TrialMatch (trialmatch.alz.org)

Alzheimer’s Association TrialMatch (trialmatch.alz.org) is a free online matching service that utilizes users’ information to generate a custom report of clinical trials for which they may be a good fit. TrialMatch has a large pool of 322,997 users, with 134,148 providing contact and personal information. Individuals enrolled in TrialMatch indicate whether they are a healthy volunteer (52.8%), a caregiver looking for clinical trials for someone else such as a family member with AD (31.7%), or a person living with the disease looking for trials (13.3%). A small percentage (2.2%) of users are entered into TrialMatch by a physician or researcher. Individuals under 50 comprise 35% of the healthy volunteers and 20% of all TrialMatch participants; 69% of TrialMatch users are over the age of 50. Participants are 73.4% Caucasian, 4.5% Hispanic/Latino, and 65% are women. Women comprise 78% of the healthy volunteers and 54% of caregivers looking for trials for someone else. 22% of TrialMatch users either care for someone with a diagnosis of AD or have a diagnosis of AD. The first APT Webstudy recruitment campaign began in March 2019, with direct emails targeting 48,000 TrialMatch users living within 200 miles of potential TRC-PAD clinical sites. An additional 33,000 users were invited to join the APT Webstudy beginning in December 2019. Emails were sent in batches of 5,000 twice a week, and the campaign was ongoing at the time of this manuscript.

The Brain Health Registry (BHR) (brainhealthregistry.org)

The Brain Health Registry (BHR) (brainhealthregistry.org) collects longitudinal health, cognitive, and lifestyle data through detailed self-report questionnaires and online cognitive tests (Cogstate, Lumosity, and MemTrax) (16). BHR was launched in 2014 and currently has baseline data on 56,982 participants. BHR participants are 80.9% Caucasian, 5.3% Hispanic/Latino, and 73.9% women, with 73% of participants over the age of 50 (20) (Table 1). The BHR team sent emails to 18,240 participants inviting them to register for the APT Webstudy beginning in March 2019 (Figure 1). Emails were sent in batches of 500 every week. If participants do not respond, two follow-up emails are sent, with a second set of reminder emails sent 231 and 238 days after their initial email contact. The BHR team also featured the APT Webstudy in their e-newsletter.

Table 1. Feeder Registries and APT Demographics

The Cleveland Clinic Healthy Brains Registry (healthybrains.org)

 

The Cleveland Clinic Healthy Brains Registry (healthybrains.org) is a longitudinal, web-based symptomatic and lifestyle assessment (21), with over 13,000 registrants, over half of whom have expressed interest in enrolling in clinical trials. HealthyBrains has registrants and newsletter subscribers from across the nation, with the highest numbers of registrants in the US states of Ohio, Nevada, California, and Florida. Registrants were invited to join the APT Webstudy through an article on the HealthyBrains website in May 2018, followed by features in two newsletters sent by email (Figure 1).

Figure 1. Alzheimer Prevention Trials (APT) Webstudy: Feeder Registry Recruitment Campaign Timeline

 

UCI Consent-to-Contact (C2C) Registry (c2c.uci.edu)

UCI Consent-to-Contact (C2C) Registry (c2c.uci.edu) is a confidential online tool to help match local volunteers in Orange County, CA, with research studies at the University of California, Irvine (22). Registrants enroll by providing an email address or by phoning the research site, then remotely complete a series of questions regarding medical history and research interests. Beginning in July 2019, 7,300 C2C participants were invited by email to join the APT Webstudy (Figure 1).

Other sources

Anticipating that the registry-based approach would have limitations, especially in identifying eligible participants from under-represented groups, the APT Webstudy team developed recruitment strategies utilizing the TRC-PAD site network as well as other central activities. Sites participating in the TRC-PAD cohort study were identified early in the development of the program, with some agreeing to work locally to recruit participants to the APT Webstudy. Each of the TRC sites was invited to utilize its own databases of individuals interested in clinical research and email information about the APT Webstudy. The TRC-PAD study team provided flyers, postcards, newsletter and email template language, social media content, and leaflets describing the APT Webstudy. Language for these materials was approved by the Institutional Review Board (IRB), and UTM codes were generated where appropriate. Sites also held community outreach events, partnered with other local community organizations to share information about the study, advertised on social media, and posted information about the Webstudy on their own webpages. Central recruitment efforts included generating earned media (newspaper articles, online and print magazine articles, and local TV interviews) and posting the study on websites for clinical trials and AD. The earned media stories included an article in the San Diego Union Tribune in January 2018 and two letters to the editor in May 2019, in local papers with circulations of 80,000 (Charleston, SC) and 150,000 (Lexington, KY), respectively. Grand Magazine published an online piece about the APT Webstudy on August 12, 2019, generating 54,000 impressions. The Saturday Evening Post, with a circulation of 302,000 and a majority of readers over the age of 45, included APT in its January/February 2020 print edition. So far, the only paid advertising was in the form of Facebook advertisements.
Facebook ads ran in eight markets for two weeks in November 2018 for a cost of $12,000, and six markets for 5 weeks in August-September 2019 for a cost of $3,000. The ads were targeted geographically and to the largest minority population in each location, based around the location of TRC sites.

 

Results

APT Webstudy Enrollment: At the time of preparing this manuscript, 30,650 participants have consented to the APT Webstudy. Recruitment strategies for the first year were a mix of central and local efforts (Figure 1). The first notable increase was in January 2018, following local newspaper coverage. In March 2018, email referrals were piloted from the APR Registry. In April 2018, APR and HealthyBrains introduced the Webstudy in their newsletters. In the first year, an average of 388 participants per month consented to the APT Webstudy. The APR email referrals began in earnest in January 2019, leading to a dramatic increase in consented participants, with 5,196 consenting in January 2019 (Figure 1). This was followed by email referrals from TrialMatch and BHR. In the second year, participants consented to the APT Webstudy at an average rate of 1,514 per month.

Demographics

Participants in the APT Webstudy have a mean age of 64.56 years, with a majority of participants ages 50-59 (28.9%) and 60-69 (44.1%) (Table 1). Most participants identify as women (73.0%), are white (92.5%), and have more than a high school education (85.0%). 2.3% of APT Webstudy participants self-describe as Hispanic/Latino. Although most participants are retired or not working (53.2%), a significant percentage are employed either full-time (30.6%) or part-time (14.7%) (Table 2). A majority of participants have a family history of AD (62.6%) and do not have a personal diagnosis of AD (94.6%). Further details on lifestyle and medical history are provided in Tables 2 and 3.

Table 2. APT Webstudy Health and Lifestyle

Table 3. APT Webstudy Recruitment by Referral Source

 

Enrollment by Referral sources

At this point in recruitment to the APT Webstudy, registries were the primary source of participants, with referrals accounting for 69.69% of consented individuals according to UTM codes. APR was by far the biggest contributor, with 38.98% of all APT Webstudy consented participants, followed by 25.40% referred by TrialMatch. Those referred by APR were also slightly more likely to both register and consent to APT (Table 3). Altogether, 15.9% of the APR participants who were contacted consented to APT, compared to 9.8% or less for the other registries. Email (32.92%) and websites (40.78%) were the most common modes of referral; however, website visits were largely driven by email campaigns. Central media efforts that could be tracked with UTM resulted in 234 participants. The central Facebook ads accounted for 7,800 and 3,000 clicks in the two campaigns, respectively, which translated to 0.15% of consenting participants.
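The contact-to-consent figures follow directly from the counts reported above; a minimal back-of-envelope sketch reproduces them. Note the denominators are approximate, and the TrialMatch total is our sum of the two email waves (48,000 + 33,000) rather than a figure stated in the paper.

```python
# Reproduce the approximate contact-to-consent rates from the reported counts.
total_consented = 30650  # APT Webstudy consents as of April 2020

registries = {
    # name: (approx. registrants contacted, fraction of consents attributed via UTM)
    "APR":        (75_000, 0.3898),
    "TrialMatch": (81_000, 0.2540),  # assumed: 48,000 + 33,000 contacted
}

for name, (contacted, share) in registries.items():
    # consented via this registry = total consents * attributed share
    rate = total_consented * share / contacted * 100
    print(f"{name}: {rate:.1f}% of contacted registrants consented")
# APR: 15.9% of contacted registrants consented
# TrialMatch: 9.6% of contacted registrants consented
```

The APR result matches the 15.9% cited above, and the TrialMatch estimate is consistent with the "9.8% or less" reported for the other registries.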

Geographic Distribution

APT Webstudy participants reside in each of the 50 United States (US), the District of Columbia, and Puerto Rico. States with the highest number of consented participants include California (16.63%), Florida (5.65%), New York (4.67%), Texas (4.66%), and Virginia (4.38%). International location is not currently collected. Participants consented to the APT Webstudy reside in 1,931 (60%) of US counties. The top ten counties with participants consented to APT are San Diego County, CA (n=1,621); Orange County, CA (n=861); Maricopa County, AZ (n=764); Los Angeles County, CA (n=612); Cook County, IL (n=443); Charleston County, SC (n=384); Fayette County, KY (n=279); King County, WA (n=270); Pima County, AZ (n=239); and Middlesex County, MA (n=238) (Figure 2).

Figure 2. APT Webstudy Enrollment: Heatmap of US Counties

 

Discussion

We have demonstrated that online registries are not only a feasible but also an effective method to identify and recruit participants for a Webstudy. Participants in a registry have already demonstrated an interest in research and a willingness to provide information about themselves. In addition, registries have communication infrastructure and digital platforms designed to engage individuals through educational materials, newsletters, and other outreach, which may lead to higher rates of referral. UTM codes were shown to be an effective method to track the referral source in this study. The strategy that yielded the highest rates of response was to first feature the APT Webstudy in the registry’s newsletter, followed by direct email communication to registrants. Although not tracked with separate UTM codes, the consistent increase of participants demonstrates that sending second and third emails to non-responders produces additional participants. Although central media efforts and social media advertising were piloted in this first stage of recruitment, this strategy has not been fully explored as a potential source for remotely enrolled participants.
The registries used in this study had contact-to-consent rates ranging from 1.8% to 15.9%, despite having very similar compositions of registrants. This raises several questions as to best practices. Was the higher rate of consent from APR compared to BHR due to the fact that APR directly targets individuals interested in clinical trials? Could the observed ratio of consented to contacted participants be influenced by the level of engagement utilized by the respective registries?
It is not surprising that the demographics of the APT Webstudy are similar to those of the registries that referred the majority of participants. However, understanding why such a large majority of participants are women is important. Further research may reveal both barriers to in-person research and preferences for online studies. The low rate of Hispanic/Latino involvement in the APT Webstudy can likely be attributed to two factors: (1) the low rates of Hispanic/Latino participants in the referral registries, and (2) the fact that the APT Webstudy and recruitment materials had not yet been translated into Spanish.
We acknowledge that the APT Webstudy has an inherent selection bias, in that participants must have access to the internet in order to participate. This disproportionately excludes many people from under-represented communities, where, according to recent Pew reports, only 57% of Hispanic and African American adults own a laptop or a tablet (23), compared to 82% of Caucasians. Although those over 65 years of age are more likely to use a desktop or tablet to access the internet, lower income Americans, those with less than a college education, and black and Hispanic populations are all more likely to use a cell phone to access the internet (24). Although the APT Webstudy is mobile-friendly, the cognitive testing at present requires use of a tablet or computer. The study team is considering changes to cognitive testing that will allow for the use of smartphones and expand accessibility to all communities. Other researchers (25) have demonstrated that text messages can be an effective communication channel with research participants. Would people be more responsive to a text message inviting them to return for a study visit?
The Spanish language version of the APT Webstudy was launched early in 2020, with efforts underway to optimize the cultural sensitivity of the Webstudy and all participant-facing content. A key aim of the study is to engage in recruitment initiatives that will improve the diversity of the cohort, towards the goal of clinical trials that better represent the US population. For the African-American community in particular, recruitment campaigns will highlight disparities in Alzheimer’s disease risk and care, and the role research and clinical trials can play in effecting change.
This study has several limitations. The feeder registries differ in numerous ways, including sample sizes, aims or purpose, geographic distribution, time since participants were first engaged, and frequency of participant engagement. The current analyses did not account for these differences. Similarly, varying levels of data were available for participants in feeder registries, preventing combination of data streams for more sophisticated analyses of recruitment efficiency. Recruitment from feeder registries was performed over multiple years, introducing potential confounding by time. Quantification of site-level efforts toward recruitment was minimal, limiting our ability to understand the efficacy of site-level efforts relative to central efforts or these feeder registries.
In conclusion, this study demonstrates the feasibility of recruiting from feeder registries into a common platform for identifying potentially eligible participants for a Trial-ready cohort. A robust sample was assembled in a relatively short period of time that is anticipated to play a key role in the national AD clinical trial agenda.

 

Acknowledgements: From the Alzheimer’s Association, our thanks to Keith Fargo, Stephen Hall, and Martha Tierney. From APR: Jessica Langbaum, Cassandra Kettenhoven, and Nellie High. From Brain Health Registry: Rachel Nosheny and Joseph Eichenbaum. From the University of California, Irvine registry, we’d like to thank Meagan Witbracht. Coordinating Center staff providing support to APT Webstudy participants are Godfrey Coker and Rocio Gonzalez-Beristain. The informatics development team is Stefania Bruschi, Jia-Shing So, and Marian Wong.

Funding: The study was supported by R01AG053798 from NIA/NIH. The sponsors had no role in the design and conduct of the study; in the collection, analysis, and interpretation of data; in the preparation of the manuscript; or in the review or approval of the manuscript.

Ethical standard: Institutional Review Boards (IRBs) approved these studies, and all participants gave informed consent before participating.

Conflict of interest: The authors report grants from National Institute on Aging, during the conduct of the study. None of the authors have additional financial interests, relationships or affiliations relevant to the subject of this manuscript.

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, duplication, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

 

References

1. Fargo KN, Carrillo MC, Weiner MW, Potter WZ, Khachaturian Z. The crisis in recruitment for clinical trials in Alzheimers and dementia: An action plan for solutions. Alzheimers Dement 2016;12(11):1113-1115.
2. Sperling RA, Jack CR Jr, Aisen PS. Testing the right target and the right drug at the right stage. Sci Transl Med. 2011 Nov 30; 3(111):111cm33.
3. Sperling R, Cummings J, Donohue M, Aisen P. Global Alzheimer’s Platform Trial Ready Cohorts for the Prevention of Alzheimer’s Dementia. J Prev Alzheimers Dis 2016;3:185-7.
4. Aisen PS, Sperling R, Cummings J, et al. The Trial-Ready Cohort for Preclinical/Prodromal Alzheimer’s Disease (TRC-PAD) Project: An Overview. J Prev Alz Dis 2020; DOI: 10.14283/jpad.2020.45.
5. Jimenez-Maggiora GA, Bruschi S, Raman R, et al. TRC-PAD: Accelerating Recruitment of AD Clinical Trials through Innovative Information Technology. J Prev Alz Dis 2020; DOI: 10.14283/jpad.2020.48.
6. Langford O, Raman R, Sperling RA, et al. Predicting Amyloid Burden to Accelerate Recruitment of Secondary Prevention Clinical Trials. J Prev Alz Dis 2020; DOI: 10.14283/jpad.2020.44.
7. Mohebati A, Knutson A, Zhou XK, et al. A web-based screening and accrual strategy for a cancer prevention clinical trial in healthy smokers. Contemp Clin Trials 2012;33(5):942-948.
8. Parker G, Fletcher K, Blanch B, Greenfield L. The ‘real world’ utility of a web-based bipolar disorder screening measure. Acta Psychiatr Scand 2013;127(5):373-380.
9. Thibault-Halman G, Rivers CS, Bailey CS, et al. Predicting Recruitment Feasibility for Acute Spinal Cord Injury Clinical Trials in Canada Using National Registry Data. J Neurotrauma 2017;34(3):599-606.
10. Wessel J, Gupta J, Groot MD. Factors Motivating Individuals to Consider Genetic Testing for Type 2 Diabetes Risk Prediction. Plos One 2016;11(1).
11. Russo R, Coultas D, Ashmore J, et al. Chronic obstructive pulmonary disease self-management activation research trial (COPD–SMART): Results of recruitment and baseline patient characteristics. Contemp Clin Trials 2015;41:192-201.
12. Andersen MR, Schroeder T, Gaul M, Moinpour C, Urban N. Using a Population-Based Cancer Registry for Recruitment of Newly Diagnosed Patients With Ovarian Cancer. Am J Clin Oncol 2005;28(1):17-20.
13. Hein A, Gass P, Walter CB, et al. Computerized patient identification for the EMBRACA clinical trial using real-time data from the PRAEGNANT network for metastatic breast cancer patients. Breast Cancer Res Treat 2016;158(1):59-65.
14. Orri M, Lipset CH, Jacobs BP, Costello AJ, Cummings SR. Web-based trial to evaluate the efficacy and safety of tolterodine ER 4mg in participants with overactive bladder: REMOTE trial. Contemp Clin Trials 2014;38(2):190-197.
15. Perry B, Geoghegan C, Lin L, et al. Patient preferences for using mobile technologies in clinical trials. Contemp Clin Trials Commun 2019;15:100399.
16. Baca-Motes K, Edwards AM, Waalen J, et al. Digital recruitment and enrollment in a remote nationwide trial of screening for undiagnosed atrial fibrillation: Lessons from the randomized, controlled mSToPS trial. Contemp Clin Trials Commun 2019;14:100318.
17. Walter S, Langford OG, Clanton TB, et al. The Trial-Ready Cohort for Preclinical and Prodromal Alzheimer’s Disease (TRC-PAD): Experience from the First 3 Years. J Prev Alz Dis 2020; DOI: 10.14283/jpad.2020.47.
18. Langbaum JB, Karlawish J, Roberts JS, et al. GeneMatch: A novel recruitment registry using at-home APOE genotyping to enhance referrals to Alzheimer’s prevention studies. Alzheimers Dement 2019;15:515-24.
19. Langbaum JB, High N, Nichols J, Kettenhoven C, Reiman EM, Tariot PN. The Alzheimer’s Prevention Registry: a large internet-based participant recruitment registry to accelerate referrals to Alzheimer’s-focused studies. J Prev Alz Dis 2020; DOI:10.14283/jpad.2020.31.
20. Weiner MW, Nosheny R, Camacho M, et al. The Brain Health Registry: An internet-based platform for recruitment, assessment, and longitudinal monitoring of participants for neuroscience studies. Alzheimers Dement 2018;14(8):1063-1076.
21. Zhong K, Cummings J. Healthybrains.org: From Registry to Randomization. J Prev Alzheimers Dis 2016;3(3):123-126. doi:10.14283/jpad.2016.100
22. Grill, JD, Hoang, D, Gillen, DL et al. Constructing a Local Potential Participant Registry to Improve Alzheimer’s Disease Clinical Research Recruitment. J Alzheimers Dis 2018;63(3):1055-1063.
23. Perrin A, Turner E. Smartphones help blacks, Hispanics bridge some – but not all – digital gaps with whites. Pew Research Study. 2019 https://www.pewresearch.org/fact-tank/2019/08/20/smartphones-help-blacks-hispanics-bridge-some-but-not-all-digital-gaps-with-whites/ Accessed April 5, 2020.
24. Anderson, M. Mobile Technology and home Broadband 2019. Pew Research Study 2019. https://www.pewresearch.org/internet/2019/06/13/mobile-technology-and-home-broadband-2019/ Accessed May 14, 2020
25. Lincoln KD, Chow TW, Gaines BF. BrainWorks: A Comparative Effectiveness Trial to Examine Alzheimer’s Disease Education for Community-Dwelling African Americans. Am J Geriatr Psychiatry 2019;27(1):53-61.


ASSESSMENT OF INSTRUMENTAL ACTIVITIES OF DAILY LIVING IN OLDER ADULTS WITH SUBJECTIVE COGNITIVE DECLINE USING THE VIRTUAL REALITY FUNCTIONAL CAPACITY ASSESSMENT TOOL (VRFCAT)

 

A.S. Atkins1, A. Khan1,2, D. Ulshen1, A. Vaughan1, D. Balentin1, H. Dickerson1, L.E. Liharska1, B. Plassman3,4, K. Welsh-Bohmer3,4, R.S.E. Keefe1,4

 

1. NeuroCog Trials, Durham, NC, USA; 2. Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; 3. Duke University Bryan ADRC, Durham, NC, USA; 4. Duke University Medical Center, Durham, NC, USA

Corresponding Author: Alexandra S. Atkins, Ph.D., Vice President, Scientific Development, NeuroCog Trials, 3211 Shannon Road, Suite 300, Durham, NC 27707, USA, ph. (919) 401-4642, fx. (919) 401-4644, email: alexandra.atkins@neurocogtrials.com

J Prev Alz Dis 2018; in press
Published online July 5, 2018, http://dx.doi.org/10.14283/jpad.2018.28

 


Abstract

Background: Continuing advances in the understanding of Alzheimer’s disease progression have inspired development of disease-modifying therapeutics intended for use in preclinical populations. However, identification of clinically meaningful cognitive and functional outcomes for individuals who are, by definition, asymptomatic remains a significant challenge. Clinical trials for prevention and early intervention require measures with increased sensitivity to subtle deficits in instrumental activities of daily living (IADL) that comprise the first functional declines in prodromal disease. Validation of potential endpoints is required to ensure measure sensitivity and reliability in the populations of interest.
Objectives: The present research validates use of the Virtual Reality Functional Capacity Assessment Tool (VRFCAT) for performance-based assessment of IADL functioning in older adults (age 55+) with subjective cognitive decline.
Design: Cross-sectional validation study.
Setting: All participants were evaluated on-site at NeuroCog Trials, Durham, NC, USA.
Participants: Participants included 245 healthy younger adults ages 20-54 (131 female), 247 healthy older adults ages 55-91 (151 female) and 61 older adults with subjective cognitive decline (SCD) ages 56-97 (45 female).
Measures: Virtual Reality Functional Capacity Assessment Tool; Brief Assessment of Cognition App; Alzheimer’s Disease Cooperative Study Prevention Instrument Project – Mail-In Cognitive Function Screening Instrument; Alzheimer’s Disease Cooperative Study Instrumental Activities of Daily Living – Prevention Instrument; University of California, San Diego Performance-Based Skills Assessment – Validation of Intermediate Measures; Montreal Cognitive Assessment; Trail Making Test – Part B.
Results: Participants with SCD performed significantly worse than age-matched normative controls on all VRFCAT endpoints, including total completion time, errors, and forced progressions (p≤0.001 for all, after Bonferroni correction). Consistent with prior findings, both groups performed significantly worse than healthy younger adults (age 20-54). Participants with SCD also performed significantly worse than controls on objective cognitive measures. In the SCD group, VRFCAT performance was strongly correlated with cognitive performance across nearly all tests, with significant correlation coefficients ranging from 0.3 to 0.7; VRFCAT summary measures all had correlations greater than r=0.5 with MoCA performance and BAC App Verbal Memory (p<0.01 for all).
Conclusions: Findings suggest the VRFCAT provides a sensitive tool for evaluation of IADL functioning in individuals with subjective cognitive decline. Strong correlations with cognition across groups suggest the VRFCAT may be uniquely suited for clinical trials in preclinical AD, as well as longitudinal investigations of the relationship between cognition and function.

Key words: Functioning, IADL, preclinical, endpoints, assessment.


 

Continuing advances in the understanding of Alzheimer’s disease (AD) progression have ignited widespread interest in the development of disease-modifying therapeutics intended for use in preclinical, largely asymptomatic populations. With at least eight clinical trials of such drugs currently underway (1), industry investment in early intervention and prevention of AD is growing. Nevertheless, reliable detection of clinically meaningful cognitive and functional change in individuals who are, by definition, asymptomatic, remains a significant challenge. The recently revised FDA draft guidance for development of early-stage AD drugs underscores the current lack of clear, acceptable endpoints for preclinical AD, particularly with respect to functioning (2). Since functional impairment remains a key differential diagnostic for dementia, preservation of functioning in preclinical and prodromal individuals is generally assumed. The recent FDA draft guidance adopts this assumption by specifying that the onset of subtle functional decline marks the progression from Stage 2 to Stage 3 prodromal disease (2).
Unfortunately, identifying the onset of functional impairment is non-trivial, and often depends on the type of functioning assessed. Although basic activities of daily living (ADLs) are largely preserved in prodromal AD, there is ample evidence for decline in more complex aspects of functioning in prodromal and even preclinical disease (1, 3-6). Deficits in instrumental ADLs (IADLs) such as shopping, navigating public transportation, and cooking are well-documented in mild cognitive impairment (MCI) (1, 3, 6). Declines in IADLs have also been demonstrated in cognitively unimpaired individuals who later develop MCI or AD (3-4, 6), suggesting that declines in complex functioning may sometimes precede objective cognitive decline. Longitudinal data from the Sydney Memory and Ageing Study supports this notion, demonstrating that subtle but measurable impairment in IADLs requiring high cognitive demand at baseline both predates and predicts the subsequent diagnosis of both MCI and AD dementia (7). These findings highlight that reliable evaluation of both cognitive performance and functioning across the entire AD continuum will be critical to assessment of treatment response at each stage. While several candidate measures have been proposed to increase the sensitivity of cognitive evaluations in preclinical AD (8-9), reliable functional assessment has remained far more elusive (1).
Given the increasing prevalence of clinical trials for early intervention in preclinical AD, there is a growing need for measures of functioning with improved sensitivity to functional deficits in healthier, non-demented individuals (see (6, 10)). Although performance-based tools such as the Virtual Reality Functional Capacity Assessment Tool (VRFCAT) (11) and the University of California, San Diego Performance-Based Skills Assessment (12) have begun to show promise (see (10) for review) (13-14), the most commonly utilized functional measures continue to be informant-reported questionnaires, such as the Alzheimer’s Disease Cooperative Study – Activities of Daily Living (ADCS-ADL), the Functional Activities Questionnaire (FAQ), and the Disability Assessment for Dementia (DAD), among others. These measures, though helpful in later-stage disease, often lack sensitivity to subtle functional deficits in non-demented preclinical and prodromal populations. Furthermore, a 2009 systematic review of all available informant-based IADL scales reported that none of the 12 questionnaires evaluated could be considered an adequate measurement of function due to deficient psychometric properties or lack of sufficient psychometric information (15). An updated 2016 review identified eight additional scales, but again failed to make a recommendation for the use of one or more specific instruments (16).
The Virtual Reality Functional Capacity Assessment Tool (VRFCAT) is a direct performance-based assessment of IADL functioning that uses a computer interface and is appropriate for multiple populations. Utilizing a realistic virtual environment, the VRFCAT assesses a subject’s ability to complete instrumental activities associated with a shopping trip, including searching the pantry at home, making a shopping list, taking the correct bus to the grocery store, shopping in the store, paying for groceries, and returning home. In previous studies, the VRFCAT has demonstrated strong psychometric characteristics including high test-retest reliability, lack of practice effects, and strong correlations with cognition (17-18). The VRFCAT has shown sensitivity to functional impairment in schizophrenia, as well as sensitivity to age differences in the accuracy and efficiency of IADL performance (18).
The primary goal of the present work is to assess the VRFCAT’s sensitivity to potential differences in IADL functioning between older adults (aged 55 and over) with and without subjective cognitive decline (SCD). SCD – defined as the subjective experience of cognitive decline from a previous normal state that is independent of objectively measurable decline in cognitive performance (19) – is gaining increasing attention as a potential preclinical marker for MCI or dementia due to AD (20-21). Multiple epidemiological studies have shown that subjective cognitive decline is a notable risk factor for progression to dementia (22), with one meta-analysis reporting that 25% of unimpaired older adults with SCD developed MCI leading to AD in the next four years (23). As a result, SCD presents a preclinical population uniquely suited for the validation of assessment tools proposed for use in preclinical AD. In order to elucidate the relationship between SCD, objective cognitive performance, and IADL functioning, the current study also examines correlations between cognitive and functional measures in older adults with SCD.

 

Method

Participants

Participants included 245 healthy younger adults (YAs) ages 20-54 (131 female), 247 healthy older adults ages 55-91 (151 female) and 61 older adults with subjective cognitive decline (SCD) ages 56-97 (45 female). All subjects were recruited as a part of an ongoing normative study. Younger participants were recruited through a variety of online and paper advertisements, including Craigslist, paper flyers, and local newspaper ads. Older adults and individuals with SCD were recruited primarily through the Duke University Bryan Alzheimer’s Disease Center Prevention Registry (ADPR), which includes approximately 3,600 individuals ages 55 to 95 who joined the registry to volunteer as participants in research related to cognitive aging. Initial screening was completed by phone. Although individuals with diagnoses of MCI were not specifically recruited for this study, this diagnosis was not exclusionary for individuals in the SCD group. Thus, those reporting a current diagnosis of MCI were admitted to the SCD group if they met all other inclusion criteria for enrollment. Of the 61 participants enrolled in the SCD group, six reported a current MCI diagnosis.
All participants received compensation at a rate of $50 per visit. Participants with SCD were asked to provide an informant to supply collateral information. Informants were compensated $20 for participation. Participants who failed the screening at the first visit were compensated a minimum of $10.

Measures

The Virtual Reality Functional Capacity Assessment Tool (VRFCAT)

The VRFCAT (Figure 1) presents participants with multiple instrumental activities of daily living including: navigating a kitchen, catching a bus to a grocery store, finding/purchasing food in a grocery store, and returning home on a bus. Participants complete all objectives in order. If a given objective cannot be completed (defined as five errors or more than five minutes spent on a given objective), the participant is forced to progress and assigned the worst possible score; the program then offers assistance and continues along to the next objective. The VRFCAT takes approximately 20 minutes to complete for cognitively healthy adults, and up to 35 minutes for those with SCD or MCI. Primary endpoints include Total Adjusted Time (time to complete all objectives, adjusted for instructions and error messages), Total Errors, and Total Forced Progressions. The VRFCAT consists of a tutorial and six alternate versions to allow for repeated assessment and evaluation of change over time.

Figure 1. During the VRFCAT, participants complete 12 objectives associated with multiple instrumental activities of daily living including: navigating a kitchen, catching a bus to a grocery store, finding/purchasing food in a grocery store, and returning home on a bus


 

Alzheimer’s Disease Cooperative Study Prevention Instrument Project – Mail-In Cognitive Function Screening Instrument (ADCS-MCFSI)

Assessment of subjective cognitive decline was completed using the ADCS-MCFSI (24). The ADCS-MCFSI is a 14-item self-administered questionnaire assessing recent changes (over the last year) in cognition and functional activities that are commonly affected in the development of MCI. Scores on the ADCS-MCFSI range from 0 (no decline over the past year) to 14 (definite decline in all areas assessed), with higher scores indicating greater subjective impairment. For the present study, an ADCS-MCFSI total score of 4 or greater was required for individuals assigned to the SCD group, indicating subjective decline in a minimum of four areas.

Brief Assessment of Cognition (BAC App)

The BAC App has been described in detail elsewhere (25). Briefly, the BAC App provides tablet-based administration and scoring of the original pen-and-paper Brief Assessment of Cognition (BACS) (26-27), which includes assessment of the following domains: Verbal Memory (List Learning Task), Working Memory (Digit Sequencing Task), Verbal Fluency (Letter Fluency Task and Semantic Fluency Task), Processing Speed (Symbol Coding Task), Motor Function (Token Motor Task), and Executive Functioning (Tower of London Task). The BAC App has been clinically validated and has demonstrated equivalency to the original measure (25). In the present study, the BAC App was augmented with the addition of the following tasks to assess delayed episodic verbal memory: Delayed Free Recall, Delayed Recognition, and Forced Choice Recognition.

University of California, San Diego Performance-Based Skills Assessment – Validation of Intermediate Measures (UPSA-2-VIM)

The UPSA-2-VIM is a standard rater-administered performance-based measure of functional capacity utilizing physical props (28). The UPSA-2-VIM takes about 40 minutes to administer and measures performance in several domains of everyday living, including counting money, planning an outing, and reading a bus schedule. Raw scores from each subtest are transformed to yield a score ranging from 0 to 20 for each domain and a summary score ranging from 0 to 100. Higher scores reflect better performance.

Other Measures

The Montreal Cognitive Assessment (MoCA) and the Trail Making Test – B (TMT-B) are commonly used standardized assessments, which have been described in detail elsewhere. Briefly, the MoCA (29) is a short (10-minute) cognitive status assessment scored from 0 to 30, with higher scores indicating better performance. The MoCA is commonly used for screening and follow-up evaluation of healthy older adults and those with MCI. The TMT-B (30) is a brief, timed assessment of executive functioning and processing speed. TMT-B scores are reported in seconds (up to 300 seconds), with lower scores indicating better performance.
The Alzheimer’s Disease Cooperative Study Instrumental Activities of Daily Living – Prevention Instrument (ADCS-ADL-PI) (31) is a 15-item informant-reported measure of functioning designed for use in a primary prevention context. The ADCS-ADL-PI was completed by informants of participants with SCD, either in person or via phone interview.

Procedure

Participants provided written informed consent prior to engagement in any study-related activities. Participants completed two study visits. At Visit 1, all participants completed a screening assessment that included administration of the MoCA as well as a general health and demographic questionnaire; participants ages 55 and older were given the MCFSI. Participants with MoCA scores of 22 and greater were eligible for enrollment in the YA and OA groups. This cut-off was based on recent recommendations for suitable cut-off scores that are inclusive of the normative range across diverse demographics (e.g., (32)). Participants ages 55 and older were eligible for the SCD group with MoCA scores of 16 and higher and ADCS-MCFSI scores of 4 and greater, indicating subjective decline in four or more distinct cognitive areas over the past year.
Because recruitment took place within the context of a larger normative study, participants with a history of documented neurologic or psychiatric disorders and/or medical conditions interfering with daily function or cognition were excluded, with two exceptions. These exceptions were: 1) Participants with a history of depression and/or anxiety were included if they were stable (treated or non-treated) for a period of three months or more prior to participation; 2) As noted above, participants with a diagnosis of MCI or currently treated with cholinesterase inhibitors were admitted to the SCD group if they met all other inclusion criteria.
Eligible participants went on to complete the TMT-B, VRFCAT, and BAC App assessments. The TMT-B assessment was added to the study after several YAs had already completed Visit 1, and is therefore only available for a portion of YAs assessed. Participants also completed the WMS-IV Logical Memory subtest (33) and the Test of Premorbid Functioning (34) (both to be discussed in a separate publication). Participants ages 55 and older completed the MCFSI at this first visit. Participants who endorsed four or more items on the MCFSI questionnaire were categorized as having SCD and were asked to provide an informant to complete the ADCS-ADL-PI.
At Visit 2 (7 to 14 days later), participants completed the UPSA-2-VIM as well as alternate versions of the VRFCAT and BAC App (the latter of the two will be presented in a separate publication regarding test-retest reliability).

Analysis

Data were analyzed with IBM® SPSS® Statistics 23.0 (2015) (35), using a 5% significance level and a 95% confidence interval. Demographic and rating scale data are reported as means and standard deviations (SDs). Continuous variables for all rating scales were compared by one-way analysis of variance with the Scheffe post hoc test, which corrects alpha for simple and complex mean comparisons. Raw scores for individual BAC App and VRFCAT measures are reported as means and SDs.
All measures were normally distributed (Kolmogorov-Smirnov normality test), so we chose to employ parametric tests to evaluate group differences. Cohen’s d was used to estimate the effect size for key differences (e.g., differences in cognitive and functional measures) and was computed as the SCD mean minus the YA or OA mean, divided by the pooled standard deviation (Cohen’s d = (x̄SCD – x̄YA or OA) / SDpooled) (36). Cohen’s (1988) criteria were employed for interpreting effect size: d = 0.20 – 0.49 as small, d = 0.50 – 0.79 as medium, and d ≥ 0.80 as large; effect sizes above 1.00 are considered very large, indicating that the difference between the two means is larger than 1 SD.
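The effect-size computation described above can be sketched as follows. This is a minimal illustration of the pooled-SD form of Cohen's d and the Cohen (1988) benchmarks, not the study's actual SPSS procedure; the summary statistics in the example are hypothetical, not values from the paper.

```python
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

def interpret(d):
    """Cohen (1988) benchmarks as applied in the paper (magnitude of d)."""
    d = abs(d)
    if d > 1.00:
        return "very large"   # difference between means exceeds 1 pooled SD
    if d >= 0.80:
        return "large"
    if d >= 0.50:
        return "medium"
    if d >= 0.20:
        return "small"
    return "negligible"

# Hypothetical group statistics (illustrative only): SCD vs. OA on a 0-30 scale
d = cohens_d(mean_a=24.5, sd_a=2.9, n_a=61, mean_b=27.2, sd_b=2.6, n_b=247)
print(round(d, 2), interpret(d))
```

Weighting each group's variance by its degrees of freedom (n − 1) keeps the pooled SD from being dominated by the smaller group, which matters here because the SCD sample (n=61) is much smaller than the OA sample (n=247).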
Bonferroni corrections for multiple comparisons (p-value/number of tests) were made when several dependent or independent statistical tests were performed simultaneously (as specified in Results). To assess the relationships between the BAC App, VRFCAT, and other cognitive and functional scales, a series of Pearson r correlations were computed. Given the large number of computed correlations, Type I error rate was minimized by setting alpha to 0.01.
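The two safeguards described above, dividing alpha across simultaneous tests and computing Pearson product-moment correlations, can be illustrated in a few lines. This is a sketch for clarity under stated assumptions (the toy data are invented), not the SPSS routines actually used in the analysis.

```python
import math

def bonferroni_alpha(alpha, n_tests):
    """Per-test significance threshold when n_tests comparisons are run simultaneously."""
    return alpha / n_tests

def pearson_r(x, y):
    """Pearson product-moment correlation for paired observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A family of 10 simultaneous tests at an overall alpha of 0.05
# requires each individual test to reach p <= 0.05/10 = 0.005:
threshold = bonferroni_alpha(0.05, 10)

# Toy paired data with an exact linear relationship (r approaches 1):
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```

The alternative safeguard mentioned in the text, fixing alpha at 0.01 for the large correlation matrix rather than dividing it further, trades some Type I protection for power, which is a common compromise when the number of correlations is very large.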

 

Results

Demographic data for participants in the younger adult (YA), older adult (OA), and subjective cognitive decline (SCD) groups are displayed in Table 1. Statistically significant differences were not assessed among the three groups as subject recruitment was stratified with respect to age, sex, race, and education. Mean ADCS-ADL-PI score for participants in the SCD group was 38.93 (SD=5.81). Mean ADCS-MCFSI score for participants in the SCD group was 5.62 (SD=1.79), with a range of 4-11.5.  Mean ADCS-MCFSI score for participants in the OA group was 1.23 (SD=1.18), with a range of 0-3.

Table 1. Demographic Characteristics


Young Adults: Age 20 – 54 years; Older Adults: Age > 54 years; SD: Standard Deviation; Data for Race were not reported for one subject (0.41%) in the Young Adult group

 

Mean performance for YA, OA, and SCD groups on each VRFCAT summary measure is displayed in Figure 2. Statistically significant differences were demonstrated among the three groups for all three summary measures, including VRFCAT Total Adjusted Time, Total Errors and Total Forced Progressions (p≤0.001 for all, after Bonferroni correction). Scheffe’s post hoc tests showed significant differences among the YA, OA, and SCD groups, with the OA group performing significantly worse than the YA group and the SCD group performing significantly worse than the OA group on all three measures (p≤0.001 for all comparisons).

Figure 2. Mean VRFCAT Total Adjusted Time (Panel A; Cohen’s d: YA-OA = 0.99, SCD-YA = 1.45, SCD-OA = 0.65), Total Errors (Panel B; Cohen’s d: YA-OA = 0.48, SCD-YA = 1.08, SCD-OA = 0.55) and Total Forced Progressions (Panel C; Cohen’s d: YA-OA = 0.66, SCD-YA = 1.12, SCD-OA = 0.55) for participants in each group


 

Mean Adjusted Time and Errors for each VRFCAT objective are displayed in Figures 3A and 3B, respectively. For Adjusted Time, Bonferroni-corrected (p≤0.002) omnibus ANOVA tests were significant for differences among the groups on all VRFCAT objectives with the exception of Objective 4: Pick up the Billfold on the Counter (p=0.014), and Objective 11: Wait for the Correct Bus to Your Apartment (p=0.013). Scheffe’s post hoc tests performed for objectives with significant omnibus tests showed significant differences between the SCD and OA groups for eight of the remaining objectives; post hoc tests were not significant for Objective 5: Exit the Apartment and Head to the Bus, and Objective 6: Wait for the Correct Bus to the Grocery Store.

Figure 3. Mean Adjusted Time (Panel A) and Errors (Panel B) for VRFCAT objectives 1-12. Error bars depict +/- one standard error


 

For VRFCAT Errors (Figure 3B), Bonferroni-corrected (p≤0.002) omnibus ANOVA tests were significant for differences among the groups on Objective 3: Cross Off Ingredients in Kitchen, Objective 9: Shop for Grocery Items on List, Objective 10: Pay for Groceries with Exact Change, Objective 11: Wait for and Board Correct Bus Home, and Objective 12: Pay for Bus with Exact Change. Scheffe’s post hoc tests performed for objectives with significant omnibus tests showed significant differences between the SCD and OA groups on all but Objective 9: Shop for Grocery Items on List (p>0.1).
Group means and standard deviations for cognitive performance measures are displayed in Table 2, along with effect size (Cohen’s d) calculations for comparisons between groups. The SCD group performed significantly worse than the OA group on objective cognitive tests. On the MoCA, the SCD group performed 0.92 SDs (2.67 points) lower than the OA group. On the TMT-B, the SCD group took an average of 37.62 seconds (0.69 SDs) longer than the OA group to complete the task.

Table 2. Cognitive Performance


BAC-APP: Brief Assessment of Cognition – Application; Bonferroni adjusted p value = 0.005 (0.05 p value/12 cognitive tests); SD: Standard Deviation

 

On the BAC App, statistically significant differences were demonstrated among the three groups for all ten individual BAC App tests (p≤0.001 for all, after Bonferroni correction). Scheffe’s post hoc tests showed significant differences between the SCD and OA groups (p values ranging from 0.04 to ≤0.001) for all BAC App tests except Token Motor (p=0.196). The consistency of findings across all cognitive measures indicates the sensitivity of these cognitive assessments in detecting objective cognitive differences between the OA and SCD groups, which were defined only through subjective reporting of cognitive decline on the ADCS-MCFSI.
Correlational analyses were conducted to examine the relationship between the VRFCAT and the UPSA-2-VIM, as well as the relationship between cognitive and functional measures in the SCD group. UPSA-2-VIM total score was strongly correlated with all VRFCAT summary measures: Total Adjusted Time (r=-0.56, p<0.01), Total Errors (r=-0.47, p<0.01), and Forced Progressions (r=-0.56, p<0.01). In all cases, poorer performance on the VRFCAT was associated with poorer UPSA-2-VIM performance.
VRFCAT measures were also strongly correlated with cognitive performance. Table 3 displays correlations between summary VRFCAT measures, MoCA, TMT-B, and BAC App tests for the YA, OA, and SCD groups. Across all three groups, total completion time on the VRFCAT was associated with processing speed (Symbol Coding r > 0.500). In the SCD group, VRFCAT performance was strongly correlated with cognitive performance across nearly all tests, with significant correlation coefficients ranging from 0.3 to 0.7. Only Forced Choice Recognition was not significant, due in part to ceiling performance across all groups. In the SCD group, VRFCAT summary measures all had correlations greater than r=0.5 with MoCA performance and BAC App Verbal Memory (p<0.01 for all). VRFCAT Total Time was also strongly correlated with measures of processing speed (r=-0.714, p<0.01 for Symbol Coding) and executive functioning (r=-0.439 for Tower of London and r=0.545 for TMT-B; p<0.01 for both).

Table 3. Correlations between VRFCAT Measures and Cognitive Performance


** Correlation is significant at the 0.01 level (2-tailed). * Correlation is significant at the 0.05 level (2-tailed).

 

Discussion

 

The past two decades have brought a series of failed clinical trials for new treatments of Alzheimer’s disease (37-38). The failure of symptomatic treatments, coupled with increased understanding of disease progression, has catalyzed interest in disease-modifying treatments intended for largely asymptomatic, preclinical AD populations. However, selection of appropriate functional endpoints in preclinical AD poses a considerable challenge due to the preservation of those functional activities assessed by traditional informant-based measures.
In order to address this challenge, the present study examined the sensitivity of the VRFCAT – a performance-based measure of IADL functioning – to differences between healthy younger adults, healthy older adults, and older adults with SCD.
Older adults with SCD performed significantly worse than OAs without SCD on nearly all cognitive measures assessed (Forced Choice Recognition was the only exception), suggesting that SCD was associated with objective cognitive decline in the present sample (Table 2). In addition, the VRFCAT was sensitive to differences between all groups, with YAs significantly outperforming OAs and OAs significantly outperforming those with SCD. Importantly, performance on all three summary measures (Total Adjusted Time, Total Errors, and Total Forced Progressions) distinguished between the groups (Figure 2). Findings comparing YAs and OAs replicate earlier work (18), while findings in SCD suggest that the VRFCAT may be sensitive to the earliest detectable declines in IADL functioning in preclinical AD.  Strong correlations between the VRFCAT and the UPSA-2-VIM, a rater-administered performance-based measure of functioning, confirm previous reports (17-18) and offer additional evidence in support of the construct validity of the VRFCAT.

Results also demonstrated strong relationships between VRFCAT outcomes and performance on a diverse set of cognitive measures (Table 3). In the SCD group, performance on tests of verbal memory and executive functioning, both of which demonstrated objective decline, showed remarkably strong correlations (r>0.5) with VRFCAT performance. This finding is particularly striking given that these domains are often among the first to decline and exhibit the most robust changes observed in preclinical AD populations compared to healthy controls who remain stable over time (39-40).
The present study also evaluated the use of the BAC App to assess cognition in individuals with SCD. Like the VRFCAT, the BAC App leverages technology to reduce rater burden by providing automated stimulus presentation and scoring (see (25)). In the present study, the BAC App demonstrated sensitivity to objective cognitive differences between YAs, OAs, and individuals with SCD (Table 2). These findings provide support for the BAC App as a method of cognitive assessment in trials for preclinical AD.
There are a few limitations to the current study. First, because prior work on SCD has applied heterogeneous methods of evaluating subjective decline, there is no guarantee that the current findings would hold if SCD were operationalized in a different manner. A cut-off score of 4 on the ADCS-MCFSI was chosen to ensure moderate subjective impairment, and this may have increased the probability of objective decline in our SCD group relative to other studies. Inclusion of six subjects with a current diagnosis of MCI may also have contributed. However, since the aim of the current study was to assess the use of the VRFCAT in the early portion of the AD continuum, we decided that eliminating those with an MCI diagnosis would not be useful, particularly given the likelihood that more individuals in the SCD group may meet diagnostic criteria for MCI if a full clinical evaluation were available.
The findings from the present study suggest that the VRFCAT provides a sensitive tool for evaluation of IADL functioning in SCD with strong correlations to cognitive performance. Taken together with prior findings regarding the strong psychometric properties of the VRFCAT, including strong test-retest reliability and lack of practice effects, these findings suggest that the VRFCAT may be uniquely suited for use in clinical trials in preclinical AD, as well as in longitudinal investigations of the relationship between cognition and function throughout the AD continuum.

 

Funding: This research was supported by the National Institutes of Health under NIMH 2R44 MH084240 (ASA) and NIA 1R44AG03191 (RSE). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Disclosures: AS Atkins is a full-time employee of NeuroCog Trials, Durham, NC, USA, and has received support from the National Institute of Mental Health and the National Institute on Aging. A Khan is a full-time employee of NeuroCog Trials, Durham, NC, USA, and has received support from the National Institute of Mental Health, Janssen, Celgene, Teva Pharmaceuticals, and the Stanley Medical Research Foundation. D Ulshen, A Vaughn and D Balentin are full-time employees of NeuroCog Trials, Durham, NC, USA; H Dickerson is a NeuroCog Trials employee. LE Liharska is a consultant to NeuroCog Trials. B Plassman currently or in the past 3 years has received funding from the National Institute on Aging and the National Institute of Environmental Health Sciences, and has served as a consultant for Takeda. K Welsh-Bohmer is a consultant to NeuroCog Trials. She currently or in the past 3 years has received funding from the National Institute on Aging and has received honoraria, served as a consultant, or served as an advisory board member for Takeda, Biogen, Roche, T3D Therapeutics, Diffusion Pharmaceuticals, and Merck. RSE Keefe currently or in the past 3 years has received investigator-initiated research funding support from the Department of Veterans Affairs, Feinstein Institute for Medical Research, GlaxoSmithKline, National Institute of Mental Health, National Institute on Aging, Novartis, Psychogenics, Research Foundation for Mental Hygiene, Inc., and the Singapore National Medical Research Council. Dr. Keefe currently or in the past 3 years has received honoraria, or served as a consultant, speaker, or advisory board member for Abbvie, Acadia, Aeglea, Akebia, Akili, Alkermes, ArmaGen, Astellas, Avanir, AviNeuro/ChemRar, Axovant, Biogen, Boehringer-Ingelheim, Cerecor, CoMentis, Critical Path Institute, FORUM, Global Medical Education (GME), GW Pharmaceuticals, Intracellular Therapeutics, Janssen, Lundbeck, Lysogene, MedScape, Mentis Cura, Merck, Minerva Neurosciences Inc., Mitsubishi, Monteris, Moscow Research Institute of Psychiatry, Neuralstem, Neuronix, Novartis, NY State Office of Mental Health, Otsuka, Pfizer, Regenix Bio, Reviva, Roche, Sangamo, Sanofi, Sunovion, Takeda, Targacept, University of Moscow, University of Texas Southwest Medical Center, and WebMD. Dr. Keefe receives royalties from versions of the BAC testing battery, the MATRICS Battery (BACS Symbol Coding), and the Virtual Reality Functional Capacity Assessment Tool (VRFCAT). He is also a shareholder in NeuroCog Trials, Inc. and Sengenix.

Ethical standard: The study protocol was approved by a central Institutional Review Board. All participants provided written informed consent prior to participation.

