
1 Introduction

As South Korea rapidly becomes an aging society amid economic growth and advances in medical technology, mild cognitive impairment (MCI), a condition of declining cognitive function, is increasing [1]. MCI refers to a state in which memory is weakened compared with that of others in the same age group while activities of daily living remain intact, so the condition does not amount to dementia. In other words, MCI is an intermediate stage between normal aging and dementia, and previous studies have shown that individuals with MCI constitute a high-risk group for developing Alzheimer's disease. Therefore, it is clinically important to detect MCI as early as possible, before progression to Alzheimer's disease, in order to maximize the effect of treatment.

To identify MCI patients, cognitive function testing must be performed, and it is also essential to provide appropriate cognitive rehabilitation programs for patients. Paper-and-pencil tests based on printed questionnaires remain the most frequently used method of cognitive function testing. However, such tests make it difficult to control the testing conditions, which vary with the severity of a patient's condition, and they are prone to errors in scoring and recording. Moreover, when the test is administered by an inexperienced examiner, the objectivity of the measured data can be compromised. Test records and scoring data are also difficult to store and manage conveniently [2].

To resolve these problems, computer-based cognitive function test methods have been developed since the late 1990s to reduce the influence of the test environment and to ensure objective results. Computer-based methods make it easy to analyze data and to store test records and results efficiently, and many studies have suggested they are useful because of their high reproducibility and staff efficiency [3, 4]. However, most studies of computerized methods have only examined the usefulness of testing in healthy adults, the elderly, or patients with conditions such as schizophrenia, and no computer-based cognitive function test method has come into universal use [5]. Recently in Korea, mobile cognitive function test applications that are easily accessible anytime and anywhere have been increasingly developed [6]. The Google Play Store offers dementia test applications such as Mental Health Test, Smacare, Dementia Check, and White Paper for Dementia. Most are based on the K-MMSE (Korean MMSE), the MMSE-KC (the MMSE included in the Korean version of the CERAD assessment packet), and the MMSE-DS (MMSE for Dementia Screening), which adapt the MMSE (Mini-Mental State Examination) for the Korean population [7,8,9,10]. However, because it is difficult to reproduce the questionnaire items exactly in an application, many items are arbitrarily changed or removed, and the clinical agreement of these applications with the original instruments has not been confirmed.

Therefore, in this paper, a mobile application for cognitive function testing was developed to resemble the MMSE-DS questionnaire items as closely as possible. The MMSE-DS is particularly strong in assessing the reading and writing domains according to the subject's educational background, while showing no difference from the K-MMSE in distinguishing MCI from dementia. In contrast, differences in items and administration methods between the existing K-MMSE and MMSE-KC questionnaires can yield different scores for the same subject [9]. The MMSE-DS was produced by the Korean Ministry of Health and Welfare as a simplified instrument intended to improve reliability and accuracy [10].

This study aimed to make the MMSE-DS-based application easy for test subjects to use while providing higher reliability than existing applications. To verify this, the agreement (intraclass correlation coefficient) between the results of the existing MMSE-DS questionnaire and the developed contents was examined, and the subjects were surveyed about their satisfaction with the contents.

2 Implementation of Contents

2.1 MMSE-DS Item Classification and Normative Score

The MMSE-DS cognitive function test includes items on time orientation (5 questions, 5 points), place orientation (5 questions, 5 points), memory registration (1 question, 3 points), attention (1 question, 4 points), memory recall (1 question, 3 points), naming (2 questions, 2 points), shadowing (1 question, 1 point), order execution (1 question, 1 point), figure copying (1 question, 1 point), and judgment and common sense (2 questions, 2 points), making a total of 19 question items worth 30 points. The appropriate normative cutoff score is determined according to the user's educational background, age, and sex. If a user scores below the cutoff, the user is classified into the cognitive decline group; if the score exceeds it, into the normal group [11].
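As a minimal sketch of this classification rule (the paper does not state the implementation language of the application, so Kotlin is used here purely for illustration, and the normative cutoff table is not reproduced, so the cutoff is treated as a parameter):

```kotlin
// Minimal sketch of the classification rule described above (not the authors' code).
// The normative cutoff table by education, age, and sex is not reproduced in the
// paper, so the cutoff is passed in as a parameter here.
enum class ScreeningResult { COGNITIVE_DECLINE, NORMAL }

fun classify(totalScore: Int, normativeCutoff: Int): ScreeningResult =
    // Scores below the cutoff fall into the cognitive decline group; how a score
    // exactly at the cutoff is handled is not specified in the text.
    if (totalScore < normativeCutoff) ScreeningResult.COGNITIVE_DECLINE
    else ScreeningResult.NORMAL

fun main() {
    // Hypothetical example: 22 of 30 points against a cutoff of 24.
    println(classify(totalScore = 22, normativeCutoff = 24))  // COGNITIVE_DECLINE
}
```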

2.2 Application Composition

In the application developed in this paper, the question for each item is placed in the upper part of the screen and the answer choices in the lower part. For ease of operation on a mobile device, answers are collected through four-choice multiple-choice questions rather than short-answer questions.

As shown in Fig. 1(a), the main page of the application consists of test start, user management, and view results. Before starting a test, users complete user registration as shown in Fig. 1(b) and (c). During registration, the user's name, gender, date of birth, number of years of education, and disease details are entered so that the appropriate normative cutoff score can be determined.

Fig. 1. (a) Main screen (b) User management DB (c) User registration

2.3 Application Contents

First, for the time orientation and place orientation items, the required information (current time, location, etc.) was taken from the values provided by the mobile device. The questions in the application were organized identically to those in the questionnaire. For the memory registration, memory recall, and shadowing questions, words and sentences are presented acoustically for users to remember and then select from among the choices. The attention question, in which 7 is subtracted from 100 repeatedly, and the naming question, in which the correct names for a "watch" and a "pencil" must be given, were implemented as four-choice questions.
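As an illustration of how the serial-subtraction attention item could be realized as four-choice questions (the application's actual choice-generation logic is not described in the paper, so the distractor scheme below is only an assumption):

```kotlin
// Hypothetical sketch: turn the "subtract 7 from 100" attention item into
// four-choice questions. The distractor values are an assumption, since the
// paper does not specify how its answer choices are constructed.
data class ChoiceQuestion(val prompt: String, val choices: List<Int>, val answer: Int)

fun serialSevens(start: Int, steps: Int): List<ChoiceQuestion> {
    var current = start
    val questions = mutableListOf<ChoiceQuestion>()
    repeat(steps) {
        val correct = current - 7
        // Correct value plus three nearby distractors, in random order.
        val choices = listOf(correct, correct - 1, correct + 1, correct + 10).shuffled()
        questions += ChoiceQuestion("Subtract 7 from $current", choices, correct)
        current = correct
    }
    return questions
}

fun main() {
    // Example call: subtraction steps starting from 100; the actual number of
    // steps follows the questionnaire item.
    serialSevens(start = 100, steps = 5).forEach { println("${it.prompt} -> ${it.answer}") }
}
```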

Questions on order execution and figure copying, which evaluate a test subject's active performance of a task, were difficult to transfer to mobile devices in their original form. Consequently, existing applications have changed these question forms arbitrarily according to the developers' interpretation. In order execution questions, for example, subjects are evaluated on their ability to carry out an examiner's instruction, such as folding a sheet of paper in half. Existing contents, however, have replaced this with different tasks such as 'put the garbage in the garbage can' or 'arrange the presented pictures in the correct order'. In this paper, instead of having users fold paper, they are instructed to draw a line on a yellow rectangle as in Fig. 2(a), to keep the execution as similar as possible to that of the questionnaire. As shown in Fig. 2(b), the answer is judged correct when the drawn line lies within the shaded area defined in the program, which divides the rectangle into thirds both horizontally and vertically as in Fig. 2(c).

Fig. 2. Interface for instruction execution
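A minimal sketch of the correctness check described above might look as follows; representing the drawn line as a list of touch points and modelling the shaded area as the middle vertical third of the rectangle are assumptions, since the exact region is defined only in the program and in Fig. 2:

```kotlin
// Hypothetical sketch of the order-execution scoring described above.
// The user's stroke is assumed to arrive as a list of touch points, and the
// shaded area is modelled as the middle vertical third of the yellow rectangle.
data class Point(val x: Float, val y: Float)
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun contains(p: Point) = p.x in left..right && p.y in top..bottom
}

fun middleVerticalThird(r: Rect): Rect {
    val third = (r.right - r.left) / 3f
    return Rect(r.left + third, r.top, r.right - third, r.bottom)
}

// The answer is scored as correct when every point of the drawn line
// falls inside the shaded area.
fun isCorrectFoldLine(stroke: List<Point>, yellowRect: Rect): Boolean {
    val shaded = middleVerticalThird(yellowRect)
    return stroke.isNotEmpty() && stroke.all { shaded.contains(it) }
}
```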

The figure copying question evaluates whether a test subject can copy the two overlapping pentagons presented. Most of the existing contents either excluded this question or changed it into a different form such as 'combine two triangles'. However, the figure copying question is meant to assess whether a subject can draw the figures correctly by looking at the presented figures and whether the figures overlap correctly in the drawing, with a view to assessing organizational ability. The previously produced contents can therefore distort the assessment of this ability.

For this reason, in this study, the same picture as the overlapping pentagons presented in the MMSE-DS questionnaire was placed in the upper part of the screen, as in Fig. 3(a). Test subjects select the matching figures from the choices at the lower left and right of the screen and drag them to the empty area in the center. After the pentagons are placed, correctness is determined by computing the area coordinates of the two pentagons. Figure 3(b) and (c) show the questions on judgment and common sense, which are presented as four-choice questions so that patients unaccustomed to operating a mobile device can conveniently choose the right answer. The question details were made identical to those in the MMSE-DS questionnaire.

Fig. 3. Interface for figure copying, judgment and common sense
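The paper states only that correctness is computed from the area coordinates of the two pentagons. One plausible way to implement such a check, sketched below with a tolerance-based placement test and a bounding-box overlap test (both assumptions, not the authors' actual algorithm), is:

```kotlin
import kotlin.math.abs

// Hypothetical sketch of the figure-copying check. Each dragged pentagon is
// represented by its bounding box; the answer is accepted when both pentagons
// are close to their target positions and the two boxes overlap, as in the
// presented figure. The tolerance and the use of bounding boxes are assumptions.
data class Box(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun overlaps(other: Box) =
        left < other.right && other.left < right && top < other.bottom && other.top < bottom
    fun center() = Pair((left + right) / 2f, (top + bottom) / 2f)
}

fun nearTarget(placed: Box, target: Box, tolerance: Float): Boolean {
    val (px, py) = placed.center()
    val (tx, ty) = target.center()
    return abs(px - tx) <= tolerance && abs(py - ty) <= tolerance
}

fun isCorrectPentagonPlacement(
    placedA: Box, placedB: Box,
    targetA: Box, targetB: Box,
    tolerance: Float = 20f
): Boolean =
    nearTarget(placedA, targetA, tolerance) &&
    nearTarget(placedB, targetB, tolerance) &&
    placedA.overlaps(placedB)
```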

3 Test Method and Results

3.1 Test Subject

The mobile cognitive function test was verified, with the cooperation of hospitals having a rehabilitation medicine department, by testing 20 brain disease patients (9 men, 11 women) aged 19 or older who had consented to participate, selected from among outpatients and hospitalized patients receiving rehabilitation treatment. If a subject or the subject's representative requested to withdraw from the clinical study, or if the visit schedule was not kept or the subject did not cooperate, the subject was replaced with a new subject so that the progress of the clinical study would not be disturbed. The causes of disease among the selected subjects were cerebral infarction in 6 patients, cerebral hemorrhage in 8, brain tumor in 2, traumatic brain injury in 2, and brain damage in 2. The average age of the subjects was 57.9 years, and the average period from disease onset to the day of the cognitive function test was 709 days.

3.2 Test and Survey

The MMSE-DS questionnaire and the mobile cognitive function test application developed in this study were each administered with an interval of at least 7 days between them. Two experienced examiners participated, both of whom had received sufficient training to apply consistent evaluation criteria. Afterwards, to assess the convenience of the contents and user satisfaction, the examiners surveyed the 20 participating patients using five-choice questions on accessibility, convenience of use, satisfaction with the UI, satisfaction with use, usefulness, and so on.

3.3 Agreement Evaluation

To quantify measurement error alongside the intraclass correlation coefficient (ICC), an index commonly used to evaluate repeatability and reproducibility (the reliability coefficient), this paper employed the standard error of measurement (SEM) and the smallest real difference (SRD). Generally, if the reliability coefficient ICC is 0.80 or higher, the level of agreement is regarded as high [12]. The SRD was used to determine whether each patient's change in test score is real at the 95% confidence level [13, 14]. Measurement error is considered small, and the measurement reliable, when the SEM is less than 20% of the highest score among the measured values [15].
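Although the formulas are not restated in this paper, the SEM and SRD referred to above are conventionally computed from the reliability coefficient as SEM = SD × √(1 − ICC), where SD is the standard deviation of the measured scores, and SRD = 1.96 × √2 × SEM, with the factor 1.96 corresponding to the 95% confidence level mentioned above.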

3.4 Test Results

Table 1 shows the results of comparing agreement between the MMSE-DS questionnaire and the contents produced in this paper. The ICC of the test results is 0.938, exceeding the reliability criterion of 0.8. The SRD is 2.664, within 10% of the maximum score, and the SEM is 0.961, within 10% of the average value. Based on these results, the mobile cognitive function test application of this study appears to produce results sufficiently consistent with those of the questionnaire.
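For reference, the reported values are mutually consistent with the conventional relationship noted in Sect. 3.3: 1.96 × √2 × 0.961 ≈ 2.66, which matches the SRD of 2.664.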

Table 1. The degree of agreement between MMSE-DS and the application

Figure 4 shows the results of the survey on convenience and satisfaction. Responses of "Yes" or better were given by 80% of subjects for Question 1, 55% for Question 2, 80% for Question 3, 80% for Question 4, and 70% for Question 5, an average of 73% across the five questions. These survey results indicate that the application developed in this study is appropriate in terms of convenience and satisfaction.

Fig. 4. Result of questionnaire about convenience and satisfaction

4 Conclusions

With the rapidly increasing number of dementia patients in South Korea, research has been under way to detect MCI at an early stage and delay its progression to dementia. Accordingly, computerized contents for cognitive function testing and training are being actively developed, and many related mobile applications have been released. Most of the existing commercial applications were developed based on the K-MMSE and MMSE-KC, but their question items differ considerably from those of the original questionnaires.

This weakness appears in the order execution, figure copying, reading and writing, and judgment items, where some of the original questionnaire questions are excluded or interpreted arbitrarily. Considering the accuracy and reliability of the questionnaire items, this study employed the MMSE-DS questionnaire, which combines the K-MMSE and MMSE-KC, and worked to ensure the highest possible similarity to the original questionnaire questions.

First, the order execution question, in which subjects fold a given sheet of paper in half following the examiner's instruction, was converted into a task in which subjects draw the folding line on a yellow rectangle.

Second, the figure copying question originally requires subjects to copy the two presented overlapping pentagons. In the application of this study, it was changed so that subjects find the appropriate pentagons among the given choices and arrange them.

Third, the judgment question was set up identically to the MMSE-DS questionnaire question. Subjects choose the right answer among the four given choices.

The developed application was tested against the conventional MMSE-DS questionnaire. The resulting intraclass correlation coefficient was ICC = 0.938, higher than the reliability criterion of ICC = 0.80, indicating a sufficient degree of agreement. Based on these results, the mobile application is expected to resolve the problems of the conventional questionnaire, such as errors in scoring and recording, limited objectivity of measured data, and inconvenient storage of test records and scoring data. Moreover, in the survey on convenience and satisfaction, 70% or more of the subjects gave a positive answer for four of the five questions, implying that most subjects, regardless of age, were comfortable using the mobile device. This also appears to be because most of the result pages were visualized for easy understanding. Therefore, with this convenient and easily accessible mobile application, users can respond to changes in cognitive function on their own or with the help of their family through repeated testing, without having to worry about others becoming aware of cognitive decline.

The evaluation of the mobile application developed in this study is based on results from a limited number of test subjects. Verification with a larger number of subjects is needed in future work.