
A NOVEL COMPUTERIZED COGNITIVE STRESS TEST TO DETECT MILD COGNITIVE IMPAIRMENT

 
R.E. Curiel Cid1, E.A. Crocco1, M. Kitaigorodsky1, L. Beaufils2, P.A. Peña2, G. Grau1, U. Visser2, D.A. Loewenstein1
 

1. Center for Cognitive Neuroscience and Aging, Department of Psychiatry and Behavioral Sciences, University of Miami Miller School of Medicine, 1695 NW 9th Avenue, Miami, Florida, 33136. U.S.A; 2. Department of Computer Science, University of Miami, 1365 Memorial Drive, Coral Gables, Florida 33146, U.S.A.

Corresponding Author: Rosie E. Curiel, Psy.D., Associate Professor of Psychiatry & Behavioral Sciences, University of Miami Miller School of Medicine, 1695 NW 9th Avenue, Suite 3202, Miami, FL 33136. RCuriel2@miami.edu

J Prev Alz Dis 2021;
Published online January 19, 2021, http://dx.doi.org/10.14283/jpad.2021.1

 


Abstract

BACKGROUND: The Loewenstein Acevedo Scales of Semantic Interference and Learning (LASSI-L) is a novel and increasingly employed instrument that has outperformed widely used cognitive measures as an early correlate of elevated brain amyloid and neurodegeneration in prodromal Alzheimer’s Disease (AD). The LASSI-L has distinguished those with amnestic mild cognitive impairment (aMCI) and high amyloid load from aMCI attributable to other non-AD conditions. The authors designed and implemented a web-based brief computerized version of the instrument, the LASSI-BC, to improve standardized administration, facilitate scoring accuracy and real-time data entry, and increase the accessibility of the measure.
Objective: The psychometric properties and clinical utility of the brief computerized version of the LASSI-L were evaluated, together with its ability to differentiate older adults who are cognitively normal (CN) from those with amnestic Mild Cognitive Impairment (aMCI).
Methods: After undergoing a comprehensive uniform clinical and neuropsychological evaluation using traditional measures, older adults were classified as cognitively normal or diagnosed with aMCI. All participants were administered the LASSI-BC, a computerized version of the LASSI-L. Test-retest reliability and discriminant validity were assessed for each LASSI-BC subscale.
Results: LASSI-BC subscales demonstrated high test-retest reliability and good discriminant validity.
Conclusions: The LASSI-BC, a brief computerized version of the LASSI-L, is a valid and useful cognitive tool for the detection of aMCI among older adults.

Key words: Computerized test, mild cognitive impairment, Alzheimer’s disease, semantic intrusion errors, semantic interference, clinical trials.


 

Introduction

Alzheimer’s disease (AD) is a devastating condition that is expected to significantly impact the rapidly aging population. Important advancements have been made to identify novel candidate biomarkers of AD, and a research framework to stage the disease from its preclinical stage onward has been proposed, with the aim of establishing a biological definition of the disease (1). Despite these formidable advances, neuropsychological assessment remains an essential component of the evaluative process because cognitive impairment is a fundamental defining symptom of AD that emerges early, at a certain point in the transition from the preclinical to clinically symptomatic stages of the disease. Further, cognitive changes are used to detect and track disease progression over time, and a measurable change in cognitive ability represents a potentially meaningful clinical outcome (2). Thus, the identification of cognitive markers that are sensitive to early disease states and converge with biological markers of AD pathology has become increasingly necessary for identifying individuals at risk, monitoring disease progression, and ascertaining treatment efficacy (3).
Traditional paper-and-pencil cognitive measures employed for the detection of AD-related Mild Cognitive Impairment (MCI) are often insensitive to the subtle cognitive changes that occur during preclinical or prodromal disease states (5, 6). A developing body of literature, however, suggests that cognitive stress paradigms can measure subtle deficiencies that are highly implicated in early AD disease states among older adults. One such paradigm that measures semantic interference in memory, the Loewenstein-Acevedo Scales for Semantic Interference and Learning (LASSI-L), was sensitive enough to differentiate older adults who are cognitively unimpaired from those with subjective memory complaints and early amnestic MCI (7, 8). On this memory measure, proactive semantic interference (PSI) deficits, and particularly the inability to recover from PSI (frPSI), were also highly associated with brain amyloid load in older adults with otherwise normal performance on a traditional battery of cognitive tests (9). The LASSI-L has outperformed other widely used memory measures in detecting prodromal AD in both English and Spanish (10, 11), and has been found to be useful in different cultural/language groups (7, 11, 12). In addition to measuring the total number of correct targets recalled on individual LASSI-L subscales, there is evidence that semantic intrusion errors may have specific utility in the assessment of prodromal AD. Loewenstein and colleagues (4) found that semantic intrusion errors sensitive to PSI and frPSI on the LASSI-L could differentiate amyloid positive aMCI groups from amyloid negative aMCI groups with non-AD diagnoses.
While it is recognized that intrusion errors represent early manifestations of neurodegenerative brain disease, a potential limitation of previous approaches is that the number of intrusion errors is often highly dependent on an individual’s total responses on a particular trial. For example, an individual may make a minimal number of intrusion errors on a given trial, which may appear to be clinically insignificant; however, if the number of total responses is low, even that modest number of intrusion errors may indicate impaired inhibitory processes and underlying brain pathology. As a result, we recently developed a novel method to evaluate semantic intrusion errors utilizing the percentage of intrusion errors (PIE) in relation to total correct responses (13). PIE demonstrated high levels of sensitivity and specificity in differentiating CN from amyloid positive persons with preclinical AD, and preliminary work suggests that it is a novel and sensitive index of early memory dysfunction (11, 13).
Traditional paper-and-pencil neuropsychological assessments are lengthy, require a skilled examiner, are vulnerable to human error in administration and scoring, and are associated with practice effects. Moreover, some of these measures have been found to be biased among diverse ethnic/cultural and language groups. To address these concerns, computerized testing batteries have been developed to mitigate some of the abovementioned limitations (14-17). However, these too have limitations for the early detection of AD-associated cognitive impairment. For example, many of these computerized batteries are relatively successful at distinguishing between older adults with normal cognition and those with dementia or late-stage MCI, but lack the predictive power needed to move the field forward: correctly classifying individuals with MCI or earlier on the disease continuum, in a manner validated for use among different ethnic/cultural and language groups. This highlights a major problem with many traditional computerized batteries; they are automated versions of traditional paper-and-pencil cognitive assessment paradigms that lack sensitivity to AD-associated cognitive decline, employing the same paradigms originally developed for the assessment of dementia or traumatic brain injury (17).
Recent work by Curiel and associates (5-12) led to the development of a brief computerized version of the LASSI-L, the LASSI-BC, which incorporates all the elements of this well-established cognitive stress test. The LASSI-BC is currently being studied extensively in a longitudinal study of at-risk aging adults. This novel computerized version of the instrument does not require a skilled examiner, is web-based and can remotely run on most browser-capable devices. Moreover, it is intuitive and appropriate for use among older adults that are either predominantly English or Spanish-speaking and who have varying ethnic/cultural backgrounds including Hispanics and African Americans.
In this first validation study, we examine the psychometric properties of the LASSI-BC. We also evaluate the clinical utility of several LASSI-BC subscales with respect to their ability to differentiate older adults with normal cognition from those with aMCI on measures of: i) proactive semantic interference, ii) the failure to recover from proactive semantic interference, iii) retroactive semantic interference, and iv) the percentage of intrusion errors in relation to total cued recall responses by the participant. These specific subscales were selected a priori because, as noted above, our previous work using the paper-and-pencil LASSI-L has robustly demonstrated that they are the most sensitive to cognitive breakdowns associated with MCI due to preclinical and prodromal AD.

 

Methods

This study included 64 older adults who were evaluated as part of an IRB-approved longitudinal investigation funded by the National Institute on Aging. An experienced clinician administered a standard clinical assessment protocol, which included the Clinical Dementia Rating Scale (CDR) (18) and the Mini-Mental State Examination (MMSE) (19). Subsequently, a neuropsychological battery was independently administered in either Spanish or English depending on the participant’s dominant and preferred language. Spanish language evaluations were completed with equivalent standardized neuropsychological tests and appropriate age, education, and cultural/language normative data (20-23). Proficient bilingual (Spanish/English) psychometricians performed all the testing.
Diagnostic groups were classified using the following criteria:

Amnestic MCI group (aMCI) (n=25)

Participants met Petersen’s criteria (24) for MCI and evidenced all of the following: a) subjective cognitive complaints by the participant and/or collateral informant; b) evidence by clinical evaluation or history of memory or other cognitive decline; c) Global Clinical Dementia Rating scale of 0.5 (18); d) below expected performance on delayed recall of the HVLT-R (23) or delayed paragraph recall from the National Alzheimer’s Coordinating Center-Unified Data Set (NACC-UDS) (25), as measured by a score that is 1.5 SD or more below the mean using age, education, and language-related norms.

Cognitively Normal Group (n=39)

Participants were classified as cognitively normal if all of the following criteria were met: a) no subjective cognitive complaints made by the participant and a collateral informant; b) no evidence by clinical evaluation or history of memory or other cognitive decline after an extensive interview with the participant and an informant; c) Global CDR score of 0; d) performance on all traditional neuropsychological tests (e.g., Category Fluency (26), Trails A and B (27), WAIS-IV Block Design subtest (28)) was not more than 1.0 SD below normal limits for age, education, and language group.

Loewenstein-Acevedo Scales for Semantic Interference and Learning, Brief Computerized Version (LASSI-BC)

The LASSI-BC was not used for diagnostic determination in this study. This novel computerized cognitive stress test is briefer than the paper-and-pencil LASSI-L, taking approximately 10 to 12 minutes to complete. The LASSI-BC contains the elements of the original LASSI-L that demonstrated the greatest differentiation between aMCI, PreMCI, and CN older adults in previous studies. For example, free recall preceding the cued recall trials of the LASSI-L added time to the administration but was never as effective as cued recall in distinguishing among diagnostic groups. Developed in collaboration with the University of Miami Department of Computer Science, the LASSI-BC is a remotely accessible test available in both English and Spanish. As a web application, it can be run on devices that can run Google Chrome, including desktop computers, laptops, tablets, and smartphones. While the LASSI-BC is a fully self-administered test with all verbal responses recorded and scored by the computer, for the purposes of this validation study a trained study team member was present for each administration to systematically record responses, which provided a double check on the accuracy of the data. The LASSI-BC utilizes the Google Cloud Speech API, which has been successfully implemented for use with older adults. The test leverages Google Cloud’s Speech-to-Text software in conjunction with a backup lexicon for understanding participants’ spoken words. The lexicon is designed to account for variations in participants’ pronunciation by allowing words that the computer “mishears” to serve as alternatives to the actual word being spoken. Lexicon entries were chosen based on observations from participants during the test.
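The backup-lexicon idea described above can be sketched as a simple mapping from misheard transcript tokens back to target words. This is a minimal illustration; the word list and alternate spellings below are invented, not the study's actual lexicon entries.

```python
# Hypothetical backup lexicon: each target word maps to a set of
# plausible speech-to-text mishearings observed during piloting.
# All entries here are invented for illustration.
LEXICON = {
    "cello": {"chello", "jello"},
    "grapes": {"crepes", "drapes"},
    "blouse": {"blows", "blaus"},
}

def normalize(token):
    """Map a (possibly misheard) transcript token to a target word.

    Returns the canonical target word, or None if the token matches
    neither a target nor any of its alternates.
    """
    token = token.lower().strip()
    for target, alternates in LEXICON.items():
        if token == target or token in alternates:
            return target
    return None
```

In practice each recognized token from the speech-to-text stream would be passed through such a normalizer before being scored as a correct response or an intrusion.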
Upon initiating the examination, the participant is instructed in both audio and visual formats. They are shown 15 words belonging to one of three semantic categories: fruits, musical instruments, or articles of clothing (five words per category). The words are presented individually, on screen and via audio, for a 6-second interval. This presentation facilitates optimal encoding and storage of the to-be-remembered information, and this instruction style was easily understood and accepted by older adults during pilot studies in the course of developing the LASSI-BC. After the computer presents all 15 words, participants are presented with each category cue (e.g., fruits) and asked to recall the words that belonged to that category. Participants are then presented with the same target stimuli for a second learning trial with subsequent cued recall to strengthen the acquisition and recall of the List A targets. Exposure to the semantically related list (i.e., List B) is then conducted in the same manner as exposure to List A. List B consists of 15 words different from List A, all of which belong to the three categories used in List A (i.e., fruits, musical instruments, and articles of clothing). Following the presentation of the List B words, the person is asked to recall the List B words that belonged to each of the categories. List B words are presented again, followed by a second category-cued recall trial. Finally, to assess retroactive semantic interference, participants are asked to free recall the original List A words. Primary measures used in this study are the second cued recall score for List A (maximum learning), the first cued recall score for List B (susceptibility to proactive semantic interference), the second cued recall of List B (failure to recover from proactive semantic interference), and the third cued recall of List A (retroactive semantic interference).
In addition, we evaluated the novel ratio used with the LASSI-L that takes into account the percentage of intrusion errors (PIE) as a function of total responses on subscales that measure proactive semantic interference and the failure to recover from proactive semantic interference. Specifically, the ratio is denoted as follows: Total Intrusion Errors / (Total Intrusion Errors + Total Correct Responses) for LASSI-BC Cued B1 (a measure of susceptibility to proactive semantic interference) and LASSI-BC Cued B2 recall (a measure of recovery from proactive semantic interference).
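The PIE ratio defined above is straightforward to compute per trial; a minimal sketch:

```python
def pie(intrusions, correct):
    """Percentage of intrusion errors on a cued recall trial:
    intrusions / (intrusions + correct responses)."""
    total = intrusions + correct
    if total == 0:
        raise ValueError("no responses on this trial")
    return intrusions / total

# e.g., 3 intrusions against 9 correct responses gives PIE = 0.25
```

Because the denominator is the participant's own response total, the same raw intrusion count yields a higher PIE when few correct responses are produced, which is exactly the scenario the measure was designed to capture.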

 

Results

The computerized LASSI-BC had psychometric properties that compared favorably to the test-retest reliabilities obtained with the original paper-and-pencil LASSI-L (7). As depicted in Table 1, the CN (n=39) and aMCI (n=25) groups did not differ in terms of age, sex, or language of evaluation. Individuals diagnosed with aMCI, although well educated (Mean = 14.26; SD = 3.5), had less educational attainment than their cognitively normal counterparts (Mean = 16.32; SD = 2.3). As expected, aMCI participants also had lower mean MMSE scores (Mean = 26.04; SD = 2.3).

Table 1. Demographic Characteristics and Computerized LASSI-BC Scores among Participants who are Cognitively Normal and with Amnestic Mild Cognitive Impairment

 

Test-retest reliability

Test-retest reliability data were obtained on a subset of 15 older adults diagnosed with aMCI using Petersen’s criteria (24) for each of the LASSI-BC subscales. The mean age was 73.4 (SD=6.3); mean education 15.4 (SD=3.6); and the mean MMSE score for this group was 26.6 (SD=2.2). These individuals (60% primary English-speakers and 60% female) were administered the LASSI-BC on two occasions, within a 4 to 39-week interval (Mean = 13.9; SD = 10.6 weeks). In our pilot work, we found robust test-retest correlations ranging from 0.55 to 0.721 on the subscales that have been shown to be the most sensitive measures of cognitive decline in the original paper-and-pencil version. In this study, test-retest comparisons were conducted for Cued Recall A2 (measures maximum learning), Cued Recall B1 (measures proactive semantic interference), and Cued Recall B2 (measures the failure to recover from proactive semantic interference). One-tailed Pearson Product Moment Correlation Coefficients were obtained given the directional hypotheses concerning test-retest relationships. High, statistically significant test-retest reliabilities were obtained for Cued A2 Recall (r=.726; p<.001); Cued Recall B1 (r=.529; p=0.021); Cued Recall B2 (r=.555; p=0.016).
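The coefficients reported here are Pearson product-moment correlations between the two administrations of each subscale; computing one takes only a few lines. The scores below are invented for illustration and are not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical Cued A2 scores for five participants at time 1 and time 2
t1 = [12, 9, 14, 7, 11]
t2 = [11, 10, 13, 8, 12]
r = pearson_r(t1, t2)
```

The one-tailed p-values reported in the text would then come from a t-transformation of r with n − 2 degrees of freedom, halving the two-tailed probability given the directional hypothesis.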

Discriminant validity

As depicted in Table 1, LASSI-BC scales sensitive to maximum learning (Cued A2), vulnerability to proactive semantic interference (Cued B1), and the failure to recover from proactive semantic interference (Cued B2) were statistically significant in discriminating between older adults with amnestic MCI and their cognitively normal counterparts. These results were identical when demographic variables such as education were entered into the model as covariates.
We then calculated areas under the Receiver Operating Characteristic (ROC) curve for LASSI-BC correct responses as well as the PIE indices for the Cued B1 and Cued B2 subscales. We selected these measures a priori given that these specific subscales have traditionally been the most discriminant measures on the paper-and-pencil form of the LASSI-L.
As shown in Table 2, Youden’s criterion identified optimal cut-points of 5 correct responses on Cued Recall B1 (sensitivity 84.6%, specificity 86.8%) and 9 correct responses on Cued Recall B2; the corresponding areas under the ROC curve were statistically significant at .868 (SE=.088) and .824 (SE=.051), respectively.
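An optimal cut-point under Youden's criterion simply maximizes sensitivity + specificity − 1 across candidate thresholds. The sketch below uses invented correct-response counts (not the study's data), with lower scores treated as impaired, as is the case for LASSI-BC correct responses.

```python
def youden_cutpoint(impaired, normal, candidates):
    """Return (cut, J, sens, spec) maximizing Youden's J = sens + spec - 1.

    Scores at or below the cut are classified as impaired; `impaired`
    and `normal` are lists of scores for the two diagnostic groups.
    """
    best = None
    for c in candidates:
        sens = sum(s <= c for s in impaired) / len(impaired)
        spec = sum(s > c for s in normal) / len(normal)
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (c, j, sens, spec)
    return best

# Invented correct-response counts for illustration only
amci_scores = [2, 4, 5, 5, 6]
cn_scores = [7, 8, 9, 10, 11]
cut, j, sens, spec = youden_cutpoint(amci_scores, cn_scores, range(0, 12))
```

The area under the ROC curve is then the integral of sensitivity against 1 − specificity over all such thresholds, which standard statistical packages compute directly.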

Table 2. Classification of aMCI versus Cognitively Normal Participants on the LASSI-BC

 

We subsequently examined optimal cut-points for PIE on the Cued Recall B1 and Cued Recall B2 subscales. For PIE on Cued Recall B1, the area under the ROC curve was .879 (SE=.06), with a sensitivity of 92.9% and a specificity of 80% using an optimal cut-point of .2540. For PIE on Cued Recall B2, the area under the ROC curve was .801 (SE=.07) using an optimal cut-point of .2159, which yielded a sensitivity of 78.6% and a specificity of 68.0%.
We subsequently entered the statistically significant LASSI-BC subscales (Cued Recall B1 and Cued Recall B2) into a stepwise logistic regression. As seen in Table 3, the first variable to enter the logistic regression model was PIE on Cued B1 [B=6.86 (SE=1.67), Wald=17.07, p<0.001]. On the second step of the logistic regression model, correct responses on Cued Recall B2 entered the model [B=-.34 (SE=.128), Wald=17.1, p=.008]. Combining PIE on Cued Recall B1 and correct responses on Cued Recall B2 yielded an overall sensitivity of 80% and specificity of 89.7%. It should be noted that logistic regression weights overall classification in a manner that favors the largest diagnostic group (in this case, CN participants). Nonetheless, ROC and stepwise regression models yielded similar results, indicating excellent discriminative ability.
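The final two-predictor model has the usual logistic form. The sketch below plugs in the slope coefficients reported in Table 3; the intercept is not reported in the text, so `b0` is a placeholder that would have to be estimated from data, and the function is illustrative of the model's form rather than a reproduction of the fitted classifier.

```python
import math

def p_amci(pie_b1, cued_b2, b0):
    """Predicted probability of aMCI from the two-predictor logistic model.

    Slopes follow the reported coefficients (PIE Cued B1: +6.86;
    Cued B2 correct responses: -0.34); the intercept b0 is a
    placeholder, not a reported value.
    """
    z = b0 + 6.86 * pie_b1 - 0.34 * cued_b2
    return 1 / (1 + math.exp(-z))
```

The signs make the clinical direction explicit: a higher intrusion-error ratio on Cued B1 raises the predicted risk, while more correct responses on Cued B2 lower it.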
In sum, our findings indicate that the LASSI-BC has psychometric properties equal to or better than the original paper-and-pencil LASSI-L and demonstrate that computerized administration is feasible and well accepted, with excellent discriminant properties.

Table 3. Step-wise Logistic Regression Using Proactive Semantic Interference Measures on the Computerized LASSI-BC

 

Discussion

The present study was designed to examine the psychometric properties of the LASSI-BC, the brief computerized version of the LASSI-L, a cognitive stress test that utilizes a novel cognitive assessment paradigm based on semantic interference in memory. In studies conducted in the United States and abroad, the LASSI-L has shown great utility in detecting cognitive changes among individuals during the preclinical and prodromal stages of AD (4, 29) and has been found to be appropriate for use among diverse ethnic/cultural and language groups (11, 12, 30). The paradigm that this measure employs is unique in that it explicitly, and from the outset, organizes the examinee’s learning around specific semantic categories, which promotes active encoding, reduces the use of individualized learning strategies that can help or hinder performance, increases depth of initial learning, and is designed to tap an individual’s vulnerability to semantic interference.
The current investigation examined all salient subscales of the LASSI-BC, which were selected based on previous work with the paper-and-pencil versions. The computerized version evidenced good test-retest reliability for participants diagnosed with aMCI. Scores on all LASSI-BC subscales were higher for cognitively normal older adults, relative to aMCI participants. In addition, high levels of discriminant validity were obtained in differentiating aMCI from cognitively normal groups based on ROC analyses and logistic regression.
A potential limitation of this first validation study is that we employed modest numbers of participants, who were tested in either English or Spanish on the LASSI-BC. Although our overall findings were highly significant, and the paper-and-pencil LASSI-L has been validated in different languages (e.g., Spanish speakers in Argentina, Spain, and Mexico) and with different ethnic/cultural groups (European Americans, Hispanics, and African Americans), such comparisons remain to be made with the LASSI-BC. Further, additional studies with the LASSI-BC will include evaluating the diagnostic utility of this computerized cognitive stress test to differentiate older adults earlier on the preclinical continuum of AD, relating performance to biomarkers of AD pathology, and comparing it to other traditional and widely used cognitive measures in the field.
There has been an increase in the number of computerized tests developed, including the CogState (31) and the Cognition Battery from the NIH Toolbox (16), but limitations exist. For example, one of the most widely used computerized cognitive batteries for the assessment of MCI is the CogState. As part of the Mayo Clinic Study of Aging, Mielke and associates (32) administered the CogState to eighty-six participants diagnosed with MCI, who performed worse than cognitively healthy individuals. However, the individuals classified as MCI likely ranged from early to late MCI, the latter of which is more cognitively similar to early dementia in terms of neuropsychological test performance, limiting the evidence that this measure is sensitive to preclinical cognitive change. Further, the authors noted that their results are not generalizable to other ethnicities due to the demographic makeup of the region (Minnesota, USA). Another study by Mielke and colleagues (33) examined performance on the CogState in relation to neuroimaging biomarkers (MRI, FDG PET, and amyloid PET) among cognitively normal participants aged 51-71; however, only weak associations were found between CogState subtests and biomarkers of neurodegeneration.
With the rapidly aging population, early detection of cognitive decline in individuals at risk for AD and related disorders has become a global priority. Accurately identifying at-risk individuals through the detection and monitoring of subtle but sensitive cognitive changes that transpire early in the disease course is an important initiative, and computerized cognitive outcome measures have the potential to greatly reduce burden for participants, clinical researchers, and clinicians.
The development of computerized cognitive tests for older adults has significantly increased during the past decade. In fact, available systematic reviews have identified more than a dozen computerized measures designed to detect dementia or MCI (34, 35, 36). Moreover, the use of computerized assessments with older adults has been found to be feasible and reliable (37, 38). A recent meta-analysis has shown relatively good diagnostic accuracy, and its authors further concluded that the performance of these measures in distinguishing individuals with MCI and dementia is comparable to traditional paper-and-pencil neuropsychological measures (35). It is anticipated that as technology advances, clinical trials will include validated computerized testing to sensitively capture cognitive performance, particularly in large-scale secondary prevention efforts (39). Computerized, web-based cognitive testing has the potential to facilitate remote delivery, allow real-time data entry, improve standardization, and reduce administration and scoring errors. Moreover, computerized assessment can more readily monitor longitudinal cognitive changes for each individual, facilitating a precision-based approach. It is critical, however, that emerging cognitive tests move beyond simply computerizing outdated, insensitive cognitive paradigms and instead invest in the development and validation of cognitive paradigms that are sensitive and specific to early cognitive breakdowns that occur during the preclinical stages of AD. These too should exhibit sensitivity to biomarkers of AD (e.g., amyloid load, tau deposition, and neurodegeneration in AD-prone regions). Doing so may address some of the most critical challenges facing clinical trials, including proper selection of at-risk participants and monitoring meaningful cognitive change over time.

Funding: This research was funded by National Institute on Aging Grants 1 R01 AG047649-01A1 (David Loewenstein, PI), 1 R01 AG047649-01A1 (Rosie Curiel Cid, PI), and 5 P50 AG047726602, Florida Alzheimer’s Disease Research Center (Todd Golde, PI). The sponsors had no role in the design and conduct of the study; in the collection, analysis, and interpretation of data; in the preparation of the manuscript; or in the review or approval of the manuscript.

Ethical standard: This research study was conducted in alignment with the Declaration of Helsinki and through the approval of the University of Miami Institutional Review Board.
Conflict of interest: Drs. Curiel and Loewenstein have intellectual property used in this study.

References

1. Jack Jr CR, Bennett DA, Blennow K, et al. NIA-AA research framework: toward a biological definition of Alzheimer’s disease. Alzheimer’s & Dementia. 2018 Apr;14(4):535-62.
2. Harvey PD, Cosentino S, Curiel R, et al. Performance-based and observational assessments in clinical trials across the Alzheimer’s disease spectrum. Innovations in clinical neuroscience. 2017 Jan; 14(1-2):30.
3. Edmonds EC, Delano-Wood L, Galasko DR, et al. Subtle cognitive decline and biomarker staging in preclinical Alzheimer’s disease. Journal of Alzheimer’s disease. 2015 Jan 1;47(1):231-42.
4. Loewenstein DA, Curiel RE, Duara R, et al. Novel cognitive paradigms for the detection of memory impairment in preclinical Alzheimer’s disease. Assessment. 2018 Apr;25(3):348-59.
5. Brooks L, Loewenstein D. Assessing the progression of mild cognitive impairment to Alzheimer’s disease: current trends and future directions. Alzheimer’s Research & Therapy. 2010;2(28):28-28.
6. Crocco E, Curiel RE, Acevedo A, et al. An evaluation of deficits in semantic cueing and proactive and retroactive interference as early features of Alzheimer’s disease. The American Journal of Geriatric Psychiatry. 2014 Sep 1;22(9):889-97.
7. Curiel RE, Crocco E, Acevedo A, et al. A new scale for the evaluation of proactive and retroactive interference in mild cognitive impairment and early Alzheimer’s disease. Aging. 2013;1(1):1000102.
8. Loewenstein DA, Curiel RE, Greig MT, et al. A novel cognitive stress test for the detection of preclinical Alzheimer disease: discriminative properties and relation to amyloid load. The American Journal of Geriatric Psychiatry. 2016 Oct 1;24(10):804-13.
9. Matías-Guiu JA, Curiel RE, Rognoni T, et al. Validation of the Spanish version of the LASSI-L for diagnosing mild cognitive impairment and Alzheimer’s disease. Journal of Alzheimer’s Disease. 2017 Jan 1;56(2):733-42.
10. Rosselli M, Loewenstein DA, Curiel RE, et al. Effects of bilingualism on verbal and nonverbal memory measures in mild cognitive impairment. Journal of the International Neuropsychological Society. 2019 Jan;25(1):15-28.
11. Capp KE, Curiel Cid RE, Crocco EA, et al. Semantic Intrusion Error Ratio Distinguishes Between Cognitively Impaired and Cognitively Intact African American Older Adults. Journal of Alzheimer’s Disease. 2019 Dec 23(Preprint):1-6.
12. Matias-Guiu JA, Cabrera-Martín MN, Curiel RE, et al. Comparison between FCSRT and LASSI-L to detect early stage Alzheimer’s disease. Journal of Alzheimer’s Disease. 2018 Jan 1;61(1):103-11.
13. Crocco E, Curiel RE, Grau G. Percentage of intrusion errors predicts patterns of cognitive change in older adults. Journal of Alzheimer’s Disease. (Under Review).
14. Beaumont JL, Havlik R, Cook KF, et al. Norming plans for the NIH Toolbox. Neurology. 2013;80(11 Suppl 3):S87–S92.
15. Saxton J, Morrow L, Eschman A, et al. Computer assessment of mild cognitive impairment. Postgraduate medicine. 2009 Mar 1;121(2):177-85.
16. Weintraub S, Dikmen SS, Heaton RK, et al. Cognition assessment using the NIH Toolbox. Neurology. 2013 Mar 12;80(11 Supplement 3):S54-64.
17. Parsons TD, Courtney CG, Arizmendi BJ, et al. Virtual reality stroop task for neurocognitive assessment. InMMVR 2011 Feb 16 (pp. 433-439).
18. Morris JC. Clinical dementia rating: a reliable and valid diagnostic and staging measure for dementia of the Alzheimer type. International psychogeriatrics. 1997 Dec;9(S1):173-6.
19. Folstein MF, Folstein SE, McHugh PR. “Mini-mental state”: a practical method for grading the cognitive state of patients for the clinician. Journal of psychiatric research. 1975 Nov 1;12(3):189-98.
20. Arango-Lasprilla JC, Rivera D, Aguayo A, et al. Trail making test: Normative data for the Latin American Spanish speaking adult population. NeuroRehabilitation. 2015 Jan 1;37(4):639-61.
21. Arango-Lasprilla JC, Rivera D, Garza MT, et al. Hopkins verbal learning test–revised: Normative data for the Latin American Spanish speaking adult population. NeuroRehabilitation. 2015 Jan 1;37(4):699-718.
22. Benson G, de Felipe J, Sano M. Performance of Spanish-speaking community-dwelling elders in the United States on the Uniform Data Set. Alzheimer’s & Dementia. 2014 Oct;10:S338-43.
23. Peña-Casanova J, Quinones-Ubeda S, Gramunt-Fombuena N, et al. Spanish Multicenter Normative Studies (NEURONORMA Project): norms for verbal fluency tests. Archives of Clinical Neuropsychology. 2009 Jun 1;24(4):395-411.
24. Brandt J. The Hopkins Verbal Learning Test: Development of a new memory test with six equivalent forms. The Clinical Neuropsychologist. 1991 Apr 1;5(2):125-42.
25. Petersen RC. Mild cognitive impairment as a diagnostic entity. Journal of internal medicine. 2004 Sep;256(3):183-94.
26. Beekly DL, Ramos EM, Lee WW, Deitrich WD, Jacka ME, Wu J, Hubbard JL, Koepsell TD, Morris JC, Kukull WA. The National Alzheimer’s Coordinating Center (NACC) database: the uniform data set. Alzheimer Disease & Associated Disorders. 2007 Jul 1;21(3):249-58.
27. Binetti G, Magni E, Cappa SF, et al. Semantic memory in Alzheimer’s disease: an analysis of category fluency. Journal of Clinical and Experimental Neuropsychology. 1995 Feb 1;17(1):82-9.
28. Reitan RM. Validity of the Trail Making Test as an indicator of organic brain damage. Perceptual and motor skills. 1958 Dec;8(3):271-6.
29. Wechsler D. Wechsler Adult Intelligence Scale–Fourth Edition (WAIS–IV). San Antonio, TX: NCS Pearson. 2008;22(498):816-27.
30. Crocco EA, Loewenstein DA, Curiel RE, et al. A novel cognitive assessment paradigm to detect Pre-mild cognitive impairment (PreMCI) and the relationship to biological markers of Alzheimer’s disease. Journal of psychiatric research. 2018 Jan 1;96:33-8.
31. Curiel Cid RE, Loewenstein DA, Rosselli M, et al. A cognitive stress test for prodromal Alzheimer’s disease: Multiethnic generalizability. Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring. 2019 Dec;11(C):550-9.
32. Darby D, Collie A, McStephen M, Maruff P. Reliable detection of asymptomatic longitudinal cognitive decline in healthy community dwelling volunteers. Journal of the American Geriatrics Society. 2004 Apr;52.
33. Mielke MM, Machulda MM, Hagen CE, Edwards KK, Roberts RO, Pankratz VS, Knopman DS, Jack Jr CR, Petersen RC. Performance of the CogState computerized battery in the Mayo Clinic Study on Aging. Alzheimer’s & Dementia. 2015 Nov 1;11(11):1367-76.
34. Mielke MM, Weigand SD, Wiste HJ, Vemuri P, Machulda MM, Knopman DS, Lowe V, Roberts RO, Kantarci K, Rocca WA, Jack Jr CR. Independent comparison of CogState computerized testing and a standard cognitive battery with neuroimaging. Alzheimer’s & Dementia. 2014 Nov 1;10(6):779-89.
35. Zygouris S, Tsolaki M. Computerized cognitive testing for older adults: a review. American Journal of Alzheimer’s Disease & Other Dementias. 2015 Feb;30(1):13-28.
36. Chan JY, Kwong JS, Wong A, Kwok TC, Tsoi KK. Comparison of computerized and paper-and-pencil memory tests in detection of mild cognitive impairment and dementia: A systematic review and meta-analysis of diagnostic studies. Journal of the American Medical Directors Association. 2018 Sep 1;19(9):748-56.
37. De Roeck EE, De Deyn PP, Dierckx E, Engelborghs S. Brief cognitive screening instruments for early detection of Alzheimer’s disease: a systematic review. Alzheimer’s research & therapy. 2019 Dec 1;11(1):21.
38. Wild K, Howieson D, Webbe F, Seelye A, Kaye J. Status of computerized cognitive testing in aging: a systematic review. Alzheimer’s & Dementia. 2008 Nov 1;4(6):428-37.
39. Pankratz VS, Roberts RO, Mielke MM, Knopman DS, Jack CR, Geda YE, Rocca WA, Petersen RC. Predicting the risk of mild cognitive impairment in the Mayo Clinic Study of Aging. Neurology. 2015 Apr 7;84(14):1433-42.
40. Buckley RF, Sparks KP, Papp KV, Dekhtyar M, Martin C, Burnham S, Sperling RA, Rentz DM. Computerized cognitive testing for use in clinical trials: a comparison of the NIH Toolbox and Cogstate C3 batteries. The journal of prevention of Alzheimer’s disease. 2017;4(1):3.