
Effectiveness of m-learning HiSense APP-ID in enhancing knowledge, empathy, and self-efficacy in caregivers of persons with intellectual disabilities: a randomized controlled trial

  • Evelien van Wingerden
  • Mirjam Wouda
  • Paula Sterkenburg
Open Access
Original Paper

Abstract

M-learning is a flexible form of digital education that can benefit professional caregivers. The m-learning intervention ‘HiSense APP-ID’ was developed to support caregivers of persons with intellectual disability (ID). The intervention focuses on improving knowledge about sensitive and responsive caregiving for persons with ID. This randomized controlled trial of 101 professional caregivers of persons with moderate or mild ID evaluated whether m-learning improves practical and theoretical knowledge about secure attachment in persons with ID, and increases empathy and self-efficacy. The ‘HiSense APP-ID’ consists of 120 multiple-choice questions relating to attachment theory and the experience of persons with ID. Participants answer four questions each day for 30 days. In pre-, post-, and follow-up assessments, all participants completed a series of questionnaires concerning social validity, knowledge, self-efficacy, and empathy. Linear mixed effects modeling was then used to assess the effectiveness of the intervention. Users rated the app positively on usefulness, ease of use, design, and development of their own skills. Knowledge improved in the group of participants who followed m-learning. An interaction effect was found for empathic concern, but no significant effect was found for social empathy or self-efficacy. Thus, m-learning is a useful and flexible educational tool for professional caregivers of persons with ID, and the ‘HiSense APP-ID’ was able to improve theoretical knowledge in very short sessions spaced over a longer period of time.

Keywords

M-learning · User experience · Sensitivity · Professional caregivers · Intellectual disabilities · Empathy

1 Introduction

Digital learning has become an increasingly popular way to complement, or even replace, traditional methods of learning. It can be used to accommodate learners who are unable to visit educational facilities. The most well-known form of digital learning is e-learning, which includes web-based and computer-based education and training. E-learning can be a convenient way for staff in health care facilities to follow supplementary courses because it can be done at any place and at any time, and easily fits around irregular work schedules. The present study describes a digital learning intervention that aims to improve basic knowledge about sensitive and responsive interactions in professional caregivers of persons with mild or moderate intellectual disabilities (ID).

Within the field of care for persons with ID, attention to technology has long focused on the application of e-health: practical tools that promote mental or physical health or assist persons with ID in carrying out practical or social activities [1, 2]. Few interventions focus on e-learning for day-to-day caregivers of persons with special needs, even though this may greatly benefit the quality of care. Health care education pays limited attention to this type of specialist knowledge about additional needs, and caregivers rely mostly on practical experience. For example, McShea et al. [3] interviewed a group of caregivers of adults with ID and found they were unaware of the high frequency of hearing loss in that population. They often did not recognize the signs of hearing loss in their clients, and their formal education on this topic was minimal. When asked about their preferences for supplementary training, most saw the benefit of e-learning.

Generally, students are positive about digital education due to its accessibility, affordability, self-paced learning, and options for repetition. Regular evaluation and feedback, small units of material, and variation in activities contribute specifically to the successful implementation of online courses in healthcare education [4]. However, for e-learning to be effective, several benchmarks must be met. Social validity is a key component in the effectiveness of e-learning interventions. First of all, students must be willing to use the technology, which depends on two main aspects: the perceived ease of use of the e-learning and its perceived usefulness [5]. These two factors are influenced by a range of external issues, such as a person’s computer-related experience, their confidence in performing the task on a computer, enjoyment of the task itself, and encouragement from others to use the e-learning system [4]. Second, the individual impact of e-learning can be influenced by the quality of the system itself, the quality and relevance of the information, and the intensity of interaction with others during the learning process [6, 7, 8]. For example, highly interactive designs lead to more time spent on learning, although some students prefer an independent, and therefore more time-efficient, setup. In other words, it is important that the method of instruction matches both the objectives of the course and the preferences of its users [6, 9].

The effects of e-learning have been studied frequently among medical care staff. A majority of review articles in this field have reported that learning outcomes with e-learning are equal to or better than those in control groups using traditional methods [6, 9]. These e-interventions mostly focused on theoretical knowledge, decision making in clinical care situations, or the improvement of interpersonal skills. However, in the case of professional caregivers, the ultimate purpose of e-learning is to influence daily practice. Elgie et al. [10] demonstrated that an increase in theoretical knowledge can indeed improve practical skills. Their study used scenario-based e-learning to improve emergency preparedness among nurses. After following 15 modules of audiovisual demonstrations and text-based tutorials, nurses demonstrated improved theoretical knowledge and also outperformed a control group on a practical skills test. Furthermore, e-learning can improve the confidence of caregivers to fulfill their role. Camden et al. [11] studied an online module for improving the perceived knowledge and skills needed to support children with a developmental coordination disorder. Parents who followed this module reported an increased understanding of their child’s condition and were more confident about their skills in managing the care of their child. In addition, Sterkenburg and Vacaru [12] developed a serious game for enhancing empathy in professional caregivers of persons with ID. They found no direct effect on empathy, but the results indicated a decrease in self-oriented personal distress in response to pain or distress in others, which the authors interpreted as increased confidence in being able to act in challenging situations.

The current study explores the feasibility of using m-learning to improve basic theoretical knowledge among professional caregivers of persons with a mild or moderate ID. Mobile learning (m-learning) refers to a subtype of e-learning that is specifically designed for mobile devices, making it even more flexible than e-learning [13]. The present intervention specifically aimed to improve knowledge with respect to sensitive and responsive caregiving. Persons with ID are vulnerable to stress, which increases their need for support from a caregiver who can notice their communicative signals and respond accordingly. Such a sensitive, responsive caregiver is crucial for the development of a secure attachment. Secure attachment, in turn, supports socio-emotional functioning and the development of coping skills in stressful situations [14, 15, 16]. At the same time, several characteristics of persons with ID can hinder the sensitivity and responsiveness of the parent or caregiver [17, 18]. Improved knowledge about attachment theory and the specific needs of persons with ID may increase awareness in caregivers, thereby enhancing their sensitivity to the needs of their clients. Furthermore, this knowledge may improve their confidence in their role as a caregiver.

The primary research question was whether the HiSense APP-ID improves knowledge about attachment in professional caregivers. In addition, we examined whether the m-learning intervention has practical effects, specifically whether participants experienced an increase in empathy and self-efficacy after participating in the intervention. Finally, we investigated the users’ expectations and experiences with the m-learning intervention.

A research protocol for this study was published in Trials [19]. Despite our initial intentions, we were not able to recruit enough relatives of persons with ID to participate in the study. The present sample therefore consists only of professional caregivers of persons with mild or moderate ID.

2 Methods

2.1 Design

A two-group parallel randomized controlled trial was conducted to assess the effectiveness of the intervention. Participants were randomly assigned to either the experimental group, in which they were asked to follow the m-learning intervention, or the waiting list control group, in which no change occurred in their daily routine during the time of the study (care as usual). Randomization was balanced within each of the three care facilities, and the staff of each group home or activity center were placed in the same condition.

2.2 Participants

Participants were recruited within three large residential care facilities for persons with ID in the Netherlands. Each care center has multiple group homes and activity centers at different locations. Professional caregivers could apply for participation if they cared for one or more adults with mild or moderate ID and if they were able to follow the intervention for the entire duration.

At T0, a total of 110 participants completed the questionnaires, but the data for one participant were lost for this time point. Retention was 101 participants at T1 and 89 participants at T2. Reasons for discontinuation were illness, a change in workplace, or a lack of time to complete the questionnaires. Nine participants who completed only the first set of questionnaires were excluded from further analysis, resulting in 100 participants at T0, 101 at T1, and 89 at T2. The demographics of the participants at T0 are summarized in Table 1.
Table 1

Demographics of the participants at T0 (N = 100)a

                             Group
                             Experimental   Control   Total
                             N = 53         N = 47a
  Organizationb
    1                        14             10        24
    2                        29             28        57
    3                        10             9         19
  Gender
    Male                     14             14        28
    Female                   39             33        72
  Age, years
    18–35                    23             30        53
    Older than 35            30             17        47
  Cultural background
    West European            40             41        81
    European                 6              1         7
    Asian                    2              1         3
    African                  2              0         2
    Latin American           0              1         1
    Other                    2              3         5
    No answer                1              0         1
  First language
    Dutch                    51             45        96
    Other                    2              2         4
  Education
    Lower levels             23             26        49
    Higher levels            30             21        51
  Years of experience
    0–2 y                    5              6         11
    3–5 y                    6              9         15
    6–10 y                   17             16        33
    11–20 y                  10             12        22
    >20 y                    15             4         19

aData were missing for 1 participant

b1 = care facility for persons with visual and intellectual disability, 2 = care facility for persons with intellectual disability, 3 = care facility for persons with intellectual disability

2.3 Procedure

Ethical approval was obtained from the university ethical committee (VCWE-2017-004). The study was registered in the Dutch Trial Register (NTR 6944). Participants were given personal identification numbers to guarantee the anonymity of the completed questionnaires. Where possible, participants’ contact information was kept within their own organizations.

Recruitment was carried out by location managers and research coordinators within each organization. After enrollment, participants were randomly assigned to the control group or the experimental group. To reduce the risk of response bias, participants were informed of their allocation only shortly after the baseline assessment, at which point the test leaders were still blind to group allocation.

Data were collected at three time points. A baseline measurement (T0) was performed at the start of the intervention. The second measurement (T1) occurred 40 days later, and a retention measurement (T2) after another 30 days. Participants in the experimental group received instructions to start the m-learning immediately after T0. Participants in the control group were given access after completing the questionnaires at T2. Participation was optional for this group.

Data collection was performed by three undergraduates and one Master’s student in child and family studies, one scientist-practitioner, an assistant at one of the care facilities, and the first author. Digital questionnaires were made available online and on tablet computers. Test leaders visited the participants at their workplace at T0, T1, and T2 and supervised while the questionnaires were completed. If possible, multiple participants were seen together and completed the questionnaires in parallel. Participants could complete one of the measurements unsupervised if it was not possible to plan an appointment. This occurred 8 times at T0, 13 times at T1, and 14 times at T2.

2.4 Intervention

The m-learning intervention was a web-based app, HiSense APP-ID, that can be accessed online via a smartphone, laptop, or PC. It consists of 120 multiple-choice questions divided into 30 sets of four items. Participants can log in once a day to answer one set of four questions. They then receive feedback on their answers together with a short explanation (Fig. 1). On approximately one-third of the days, the daily feedback was followed by hyperlinks to additional information about one of the topics. This link referred to online sources, such as websites, infographics, or film clips.
Fig. 1

Screenshot showing feedback on one of the items. The wrong answer that was given is marked in red. The correct answer is in green. The textbox at the bottom explains why the third answer is the better option

The content of HiSense APP-ID was based on consultation with an advisory board of healthcare professionals. A first draft was sent back to the advisory board for feedback. Next, parents and professional caregivers of persons with mild and moderate ID were asked to judge subsets of items on three criteria: a) the clarity of the questions, b) the clarity of the explanation, and c) the perceived relevance of the information. A total of 14 professional caregivers individually judged 12 items at random. Their comments were incorporated into the final revision of the content.

The content of the m-learning aimed to provide insight into the development of attachment relationships and the specific needs of persons with ID in this context. The main topics in the intervention were (1) attachment theory in daily practice, (2) socio-emotional functioning in persons with ID, (3) sensitivity and responsiveness to communicative signals, (4) emotion regulation, (5) observation and interpretation of behavior, and (6) basic knowledge about the nature of ID and common co-occurring conditions. Topics (2) through (6) were each covered by 15 questions that were spaced over the 30 days of the intervention [20]. Attachment theory had a larger share, with 45 questions referring to this main topic.

The intervention was divided into three blocks of 10 days. Over these blocks, the complexity of the questions increased, corresponding to the first three categories of Bloom’s taxonomy of educational objectives [21, 22]: remembering (knowledge of basic facts), understanding (interpreting the knowledge), and applying (using the concepts in practical examples). The items for each topic were divided over the three blocks. The first block (questions 1 to 40) focused on improving knowledge. In the second block (questions 41 to 80), theoretical knowledge was explicitly linked to daily practice. In the third block (questions 81 to 120), the items presented dilemmas or short case descriptions (one or two sentences), requiring participants to apply theoretical knowledge to practical situations. At the end of each block, participants could revisit the questions that they had answered incorrectly.

Participants were not required to log in during weekends or days off, but after 2 days of inactivity they received an automatic reminder by e-mail. If more than 5 days had passed after the last login, the participant could no longer continue the intervention.

2.5 Measures

2.5.1 Demographics (T0)

At the start of the study, participants answered categorical questions with regard to their gender, age, cultural background, first language, level of education, and work experience.

2.5.2 Social validity (T0, T1)

A measure of the social validity of the intervention was obtained using a Social Validity Scale [23] as described by Janssen et al. [24] and Jonker et al. [25]. The questionnaire contained 22 items. Participants rated the perceived usefulness, design of learning content, perceived ease of use, expected skill development for themselves and coworkers, and the expected effects on clients on a 5-point Likert scale.

At T0, all participants completed the full questionnaire, which was phrased to ask about their expectations of the HiSense APP-ID. At T1, only the questions about the desirability of the intervention were answered by all participants. The remainder of the questionnaire was answered only by participants in the experimental group, as they rated their experiences with the HiSense APP-ID.

2.5.3 Knowledge (T0, T1, T2)

All three measurements included a different set of 25 multiple-choice questions that were similar to items on the m-learning itself. The knowledge test was developed parallel to the questions in the app. For each of the six main topics, four questions were included on the knowledge test: one question corresponding to the first block (remembering), two questions to the second block (understanding), and one to the third block (application). For the topic of attachment theory, one additional application question was included. The total number of correct responses was used in analyses.

2.5.4 Self-efficacy (T0, T1, T2)

The self-efficacy experienced by the participants was measured at each time point using the translated Self-Efficacy in the Nurturing Role Scale [26, 27]. The questionnaire consisted of 16 items about participants’ feelings of competence as a caregiver and their enjoyment of this role. Participants indicated on a 4-point Likert scale the extent to which they agreed with each statement. The scale has high internal consistency [26].

2.5.5 Empathy (T0, T1, T2)

Two questionnaires were used to measure empathy at the three time points. The abridged version of the Empathy Quotient (EQ) self-report questionnaire contains 40 items concerning sensitivity in social contexts: the ability to attribute mental states to another person and to respond accordingly [28] (Dutch translation by De Corte et al. [29], adapted by Volman [30]). Participants rate their agreement with the statements on a 4-point Likert scale. The EQ has high test–retest reliability (r = 0.97, p < .001) and high internal consistency (α = 0.92) [31].

The Interpersonal Reactivity Index (IRI) [32] contains 28 items divided evenly over four subscales: perspective taking (PT), spontaneously adopting the point of view of other persons; fantasy (F), a tendency to identify with fictitious characters; empathic concern (EC), “other-oriented” feelings of compassion; and personal distress (PD), feelings of unease when witnessing the negative experiences of others. Participants indicated how well each statement described them on a 5-point Likert scale. All subscales have sufficient internal reliability (0.71–0.77) and test–retest reliability (0.62–0.72).

2.6 Data analysis

Chi-squared analyses were used to check for significant baseline differences in demographic variables between the experimental and control groups, as these were recorded as categorical variables. Independent samples t-tests were used to compare differences in social validity between the control group and experimental group at T0, and paired samples t-tests were used to compare social validity at T0 and T1 within each group. Linear mixed effects modeling was used to assess the effect of time and group on the level of knowledge, empathy, and self-efficacy. This method is more flexible than repeated-measures ANOVA in that it is not sensitive to randomly missing data, can include time effects in the model, and can account for variance and correlation between repeated measurements. In addition, it allows the user to explore a range of models for the data based on the research question. The optimal model can then be selected by comparing fit indices [33, 34].
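The analyses themselves were run in SPSS. Purely as an illustration of the modeling approach described here, a linear mixed effects model with group, time, and their interaction as fixed effects and a random intercept per subject can be sketched in Python with statsmodels. The data below are simulated, and all variable names and effect sizes are hypothetical; the sketch does not reproduce the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate 100 caregivers, each measured at T0, T1, and T2 (hypothetical data)
n_subj = 100
subject = np.repeat(np.arange(n_subj), 3)
time = np.tile([0, 1, 2], n_subj)
group = np.repeat(rng.integers(0, 2, size=n_subj), 3)          # 0 = control, 1 = experimental
subj_intercept = np.repeat(rng.normal(0.0, 1.5, size=n_subj), 3)  # between-subject variation
knowledge = (16.5 + 0.7 * time + 0.8 * time * group
             + subj_intercept + rng.normal(0.0, 2.0, size=n_subj * 3))

df = pd.DataFrame({"subject": subject, "time": time,
                   "group": group, "knowledge": knowledge})

# Fixed effects: group, time, and their interaction; random intercept per subject
model = smf.mixedlm("knowledge ~ time * group", df, groups=df["subject"])
result = model.fit(reml=False)  # ML fit keeps likelihood-based fit indices comparable
print(result.summary())
```

Fitting with maximum likelihood (reml=False) keeps likelihood-based fit indices such as the BIC comparable across candidate models, which matches the model-selection strategy used in the Results.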

3 Results

3.1 Demographics

The participant demographics are given in Table 1. A significant difference was found in age: the experimental group included more persons over 35 years of age, and the control group more persons under 35 years of age (χ2 = 4.18, p = .041).
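For illustration, this comparison can be reproduced from the age counts in Table 1 with a standard chi-squared test of independence (without continuity correction); a minimal sketch using scipy:

```python
from scipy.stats import chi2_contingency

# Age counts from Table 1 (rows: experimental, control; columns: 18-35, older than 35)
table = [[23, 30],
         [30, 17]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")  # chi2 = 4.18, df = 1, p = 0.041
```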

3.2 Social validity

3.2.1 Between-group differences in social validity

Table 2 summarizes the expectations of participants regarding the intervention at T0. In both groups, the average score on each subscale was between 3.14 and 4.32 on a scale of 1 to 5. In both groups, the highest score was given for the expected effect on the participants’ own sensitivity and responsivity. Participants in the experimental group were more positive in their expectation of the effect of the intervention on coworkers.
Table 2

Differences between the experimental (N = 53) and control groups (N = 47) at T0 on each subscale of Social Validity

                                    Experimental      Control
                                    M      SD         M      SD        t      p
  Perceived usefulness              3.17   0.46       3.14   0.48      0.40   .692
  Design of learning contents       3.88   0.51       3.76   0.49      1.26   .212
  Perceived ease of use             3.54   0.49       3.41   0.47      1.34   .183
  Skills development - self         4.32   0.64       4.07   0.66      1.93   .056
  Skills development - coworkers    3.53   0.67       3.21   0.75      2.23   .028
  Effect on clients                 3.35   0.72       3.18   0.66      1.21   .231
  Total                             3.53   0.37       3.39   0.38      1.88   .063

Note: Means represent the mean of all items on each subscale. Data were missing for 1 participant

3.2.2 Within-group changes in social validity

At T1, the intervention group was asked to evaluate their experiences with the app after the intervention. The control group only received a set of questions about the perceived usefulness of the intervention. Average scores at T0 and T1 are shown in Table 3. The changes were small but mostly significant. Participants rated the effect of the intervention on their own behavior, as well as on the behavior of clients, lower than they had expected before the start of the intervention. However, the perceived ease of use was rated higher after participation than at T0.
Table 3

Average scores on the subscales of the Social Validity Scale at T0 versus T1

                                             T0              T1
  Group    Subscale                          M      SD       M      SD       t       df   p
  Exp.     Perceived usefulness              3.17   0.46     3.03   0.42     2.32    52   .024
           Design of learning contents       3.89   0.50     3.84   0.65     0.54    48   .593
           Perceived ease of use             3.56   0.48     4.01   0.69     −4.70   48   < .001
           Skills development - self         4.36   0.65     3.82   0.90     3.83    48   < .001
           Skills development - coworkers    3.57   0.68     2.69   0.62     6.33    48   < .001
           Effect on clients                 3.36   0.75     2.69   0.56     5.27    48   < .001
           Total                             3.39   0.37     3.19   0.43     3.37    48   .001
  Control  Perceived usefulness              3.14   0.48     3.03   0.48     1.63    46   .11

Note: Exp, experimental group. Means represent the mean of all items on each subscale

3.2.3 Evaluation of the intervention

At T1, 83.7% of the participants in the experimental group had completed at least 75% of the intervention, and 52.8% had completed all questions. Most participants (77.6%) in the experimental group completed the intervention on their smartphone and some (15.1%) on a PC or laptop. The main reasons for not completing the intervention in time were ‘lack of time’ (5 participants) or ‘I kept forgetting about it’ (12 persons). None of the respondents thought that the questions were irrelevant. Two persons reported technical issues.

Nine participants provided additional comments about the intervention. Five said that the m-learning had been informative, made them more aware, and they were positive about the concept. Two did not notice a difference. One asked for a possibility of re-reading the information from previous days. One commented that the items mainly concerned persons with moderate ID.

3.3 Effects of the intervention

The mean total scores for social validity, knowledge, self-efficacy, and empathy at all three time points are given in Table 4.
Table 4

Sum scores at T0 (N = 100), T1 (N = 101), and T2 (N = 89)

                                 Experimental        Control
                         Time    M        SD         M        SD
  Social validity        0       77.75    8.17       74.64    8.40
                         1       73.45    9.44       –        –
  Knowledge              0       16.45    2.56       16.74    1.96
                         1       19.62    2.97       17.62    3.07
                         2       19.44    1.96       18.15    2.85
  Self-efficacy          0       87.81    8.60       86.72    8.85
                         1       88.77    11.12      88.10    8.32
                         2       89.35    10.77      89.54    8.79
  EQ                     0       47.74    7.81       46.19    7.78
                         1       47.75    8.14       45.71    8.68
                         2       48.80    8.80       43.41    7.94
  IRI total              0       59.51    8.55       62.38    8.93
                         1       60.11    9.69       60.37    9.49
                         2       60.28    11.25      60.93    9.38
   Perspective taking    0       19.19    3.68       19.00    3.65
                         1       19.38    3.34       18.73    3.67
                         2       20.07    3.44       19.02    3.17
   Fantasy               0       12.72    3.71       14.43    4.63
                         1       12.98    5.12       14.42    4.44
                         2       12.63    5.44       14.65    4.94
   Empathic concern      0       18.06    3.85       19.06    2.91
                         1       18.51    3.67       17.56    3.58
                         2       18.81    4.15       17.85    3.06
   Personal distress     0       9.55     3.99       9.89     3.30
                         1       9.25     3.74       9.67     3.45
                         2       8.77     3.93       9.41     3.46

Note: EQ, Empathy Quotient; IRI, Interpersonal Reactivity Index

For each outcome variable, a linear mixed effects model was fitted in SPSS with subject at the highest level. As group, time, and the group x time interaction were the primary interest, they were entered into the model as fixed effects. The model accounted for individual differences by allowing individual slopes and intercepts per subject. In addition, we examined whether additional random effects (care center, age category, and the expected skills development for coworkers) added significantly to the model. Fitting a maximal model that includes all three possible random effects for each outcome measure can lead to a loss in statistical power to detect the significance of fixed effects [34]. Preferred models were selected based on the lowest Schwarz Bayesian Criterion (BIC), a fit measure that takes into account both the complexity of a model and the number of participants in the study.

In our analyses, no convergence could be reached for the maximal models that included the fixed effects and all random effects. We therefore added each random effect in a separate model to examine whether they improved the model fit. To account for the variance-covariance structure of the data, covariance type was set to identity when only group, time, and the group x time interaction were entered. When other variables were entered, the model fit was tested with the covariance type set to either Compound Symmetry or Unstructured, to control for correlation among the random effects.

3.3.1 Knowledge

The primary research question concerned the effect of the intervention on participants’ knowledge. A significant increase in knowledge was observed as a function of time (β = 0.69, SE = 0.23, p = .003). The experimental group improved significantly more than the control group (β = 0.83, SE = 0.32, p = .011). The intraclass correlation coefficient (ICC) was 0.33, indicating that approximately 33% of the total variation was due to interindividual differences.
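For clarity, the ICC here is the between-subject variance divided by the total variance. A minimal sketch of this computation, with hypothetical variance components chosen to illustrate a value of 0.33 (these are not the study’s estimates):

```python
def icc(var_between: float, var_residual: float) -> float:
    """Intraclass correlation: between-subject variance / total variance."""
    return var_between / (var_between + var_residual)

# Hypothetical variance components (not the study's estimates):
print(round(icc(2.0, 4.0), 2))  # → 0.33
```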

3.3.2 Self-efficacy and empathy

Next, several models were tested to explore additional effects on secondary outcome measures (self-efficacy and empathy). Two of the subscales in the IRI questionnaire did not lead to viable models (subscales PT and F). Outcomes of the remaining four analyses are reported below. In these analyses α was set to .01 to correct for the number of exploratory models that were tested on the dataset.

For self-efficacy, an optimal fit was achieved when the model included age as a random variable. However, age was not a significant predictor in the model (β = 0.57, SE = 0.79, p = .474). Only a significant effect of time was found, indicating that the entire group of participants reported higher self-efficacy over time (β = 1.60, SE = 0.60, p = .009). No interaction effect of time and group was found. A large portion of the variation was due to interindividual differences (ICC = 0.63).

For EQ, no additional random variables could be added to the model. None of the predictors was significant at α = .01 level.

For the IRI questionnaire, two models were analyzed (subscales EC and PD). For empathic concern, a significant time x group interaction meant that the slope in the experimental group was more positive than in the control group (β = 0.92, SE = 0.31, p = .003), ICC = 0.66. This interaction is mainly indicative of a decrease over time in the control group (Table 4). For personal distress, none of the predictors was found to be significant after correction.

4 Discussion

The present study explored whether the m-learning intervention HiSense APP-ID can improve knowledge, empathy, and self-efficacy in professional caregivers of persons with mild or moderate ID. Knowledge about the subject improved in the group of participants who followed the m-learning. In addition, an interaction effect was found for empathic concern. However, no significant effects of the intervention were found for EQ or self-efficacy.

User experience ratings for the m-learning were positive, both before and after the intervention. All participants had a positive attitude towards the usefulness, ease of use, and content at the start of the study, indicating that the intervention matched their interests and abilities. After the intervention, the average rating for perceived ease of use was higher than at the start, suggesting that the m-learning was easily accessible and the instructions were clear. The items relating to the effect on skills development were rated lower than at baseline, which may be a sign that the content was less hands-on than participants had hoped [3]. Perceived usefulness was also slightly lower than before the intervention. However, this can be explained by the phrasing of the questions on this scale, which focused on whether participants wanted to improve their knowledge and empathic attitude towards clients; a lower score therefore means ‘less need for improvement’. Looking at individual items on the questionnaire at T1, 27.5% of the experimental group said they noticed improvement in how they approached clients, and 45.1% noticed an improvement in their ability to recognize the communicative signals of their clients. Participants who noticed progress after following the m-learning would be expected to feel less need for improvement in the second questionnaire.

The primary aim of the m-learning was to improve theoretical knowledge about forming secure attachments with persons with ID. The results showed a significant effect of the intervention relative to the control group, who received no additional materials. The learning effect was retained 30 days after the intervention. This result is promising for the use of m-learning by professional caregivers, showing that information with a clear focus, delivered repeatedly in small portions, can lead to a durable increase in knowledge and awareness about sensitive and responsive caregiving for persons with ID [20].

Finally, questionnaires were included to measure the effect of the m-learning on the empathy and self-efficacy of our participants. A significant interaction effect was found for a subscale measuring empathic concern. No significant interaction was found for the EQ questionnaire, which is in line with the primary aim of the intervention to reinforce compassion towards people under their care, rather than to improve empathy and social sensitivity in general. The interaction effect was caused by a decrease in empathic concern in the control group rather than an increase in the experimental group. The reason for this decline cannot be determined with certainty from the available data. There may have been a coincidental fluctuation in the state of mind of the participants. In that case, a recovery would have been visible in the control group at a later follow-up. Another possibility is that other factors, such as external conditions or team dynamics, may have caused the decline in empathic concern over time in the control group, whereas the experimental group was continuously encouraged by the intervention to show compassion towards their clients. If that is the case, the results suggest that participation in the m-learning prevented a decline in empathic concern. The present findings draw attention to the many factors that may influence empathy and the importance of maintaining high levels of empathy in professional caregivers of persons with ID.

The intervention did not improve empathy or self-efficacy in the experimental group. One obvious explanation is that the impact of this 5-min-a-day m-learning was too small to have a substantial effect on daily practice: the content was aimed at practical applications but was still theoretical in nature. For empathy, another explanation is that the questionnaires did not specifically address work situations, but general social situations. It is plausible that participants did become more sensitive towards clients, but not towards others outside the workplace, which would leave no measurable effect on the questionnaires used in the current study.

The HiSense APP-ID has several features that benefit its usability. First, the use of a web app ensures that participants can log in on mobile devices as well as on a personal computer or laptop. Furthermore, only a small time investment was required: 5 min a day, 2.5 h in total. Regarding the content, healthcare professionals were involved from the start of development to ensure relevant topics and examples [3]. Effectiveness was promoted by repeating small portions of content over a longer period of time [20], increasing the complexity of the items over time [22], and providing immediate feedback [4]. However, there is still room for improvement. The content design is linear and purely text-based. Conceivably, learning gains can be improved by using an adaptive design that also allows the user to look back at previous information [35]. In addition, adding illustrations or short videos could make the case descriptions more realistic for the user [3].
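The delivery principle described above (small daily portions, with item complexity increasing over the course of the programme) can be sketched as follows. This is a minimal illustration only; the item structure and difficulty scores are hypothetical assumptions, not the app's actual implementation.

```python
# Hypothetical sketch: 120 items served four per day over 30 days,
# ordered so that item complexity increases as the course progresses.
# The item fields ("id", "difficulty") are illustrative assumptions.

def daily_schedule(items, per_day=4):
    """Split items into daily portions, easiest items first."""
    ordered = sorted(items, key=lambda item: item["difficulty"])
    return [ordered[i:i + per_day] for i in range(0, len(ordered), per_day)]

items = [{"id": n, "difficulty": n % 6} for n in range(120)]
schedule = daily_schedule(items)
print(len(schedule), len(schedule[0]))  # prints "30 4": 30 days, 4 items a day
```

Spacing the material this way, rather than delivering it in one session, is what the cited spacing-effect literature [20] suggests drives durable retention.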

In future research, it would be preferable to have an external source assess effects on behavior, such as observations or peer evaluations, in addition to self-reports. Furthermore, the m-learning could benefit from integration in a larger training program, or from being part of a curriculum in which participants discuss the content and its relationship to daily practice [8]. The web app was used on its own in this study, though its content was connected to the work environment. In our questionnaire, we did not check whether participants discussed the content amongst themselves. It would be relevant to examine how active discussion of the topics contributes to the effectiveness and learning gains of the m-learning [36]. In addition, it would be interesting to investigate its effect on other populations who have had no training on this topic and therefore have more to gain. For example, the m-learning could be aimed at parents of persons with ID, as well as at persons in facilitating professions who work in or around care facilities, such as cleaners, office workers, or drivers. By involving the wider community and emphasizing its role in improving quality of life for persons with ID, the intervention would match current views on social inclusion.

5 Conclusion

Knowledge transfer was achieved through m-learning in very short sessions spaced over a longer period of time. Furthermore, this study indicates that m-learning focusing on sensitive and responsive caregiving may protect against a decline in empathic concern. Given the positive appraisal by participants in the present study, the use of m-learning for professional caregivers of persons with ID and for other caregivers should be examined further in future research.

Notes

Acknowledgements

The authors thank Metropolisfilm for programming the web app. Furthermore, our gratitude goes to Dieuwke Kluvers, Sara Colijn, Ilona Olofsen, Arjan Maasland, and the four students of Vrije Universiteit Amsterdam for their assistance with recruiting participants and with the data collection.

Funding information

This research was funded by The Netherlands Organisation for Health Research and Development (ZonMw), the Netherlands, project number 845004004.

Compliance with ethical standards

Conflict of interest

The authors have no conflicts of interest to declare.

Ethical approval

The research was approved by the Scientific and Ethical Review Board of the Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam (file VCWE-2017-004). The research was conducted in accordance with the Helsinki Declaration of 1975, as revised in 2008.

Informed consent

Electronic informed consent was obtained from all participants prior to data collection.

References

  1. Vázquez A, Jenaro C, Flores N, Bagnato MJ, Pérez MC, Cruz M. E-health interventions for adult and aging population with intellectual disability: a review. Front Psychol. 2018;9:2323.
  2. Den Brok WLJE, Sterkenburg PS. Self-controlled technologies to support skill attainment in persons with an autism spectrum disorder and/or an intellectual disability: a systematic literature review. Disabil Rehabil Assist Technol. 2015;10(1).
  3. McShea L, Fulton J, Hayes C. Paid support workers for adults with intellectual disabilities; their current knowledge of hearing loss and future training needs. J Appl Res Intellect Disabil. 2016;29:422–32.
  4. Abdullah F, Ward R. Developing a general extended technology acceptance model for E-learning (GETAMEL) by analysing commonly used external factors. Comput Human Behav. 2016;56:238–56.
  5. Davis F. A technology acceptance model for empirically testing new end user information systems: theory and results. Doctoral dissertation. Massachusetts Institute of Technology; 1986.
  6. Sinclair PM, Kable A, Levett-Jones T, Booth D. The effectiveness of internet-based e-learning on clinician behaviour and patient outcomes: a systematic review. Int J Nurs Stud. 2016;57:70–81.
  7. Voutilainen A, Saaranen T, Sormunen M. Conventional vs. e-learning in nursing education: a systematic review and meta-analysis. Nurse Educ Today. 2017;50:97–103.
  8. Cidral WA, Oliveira T, Di Felice M, Aparicio M. E-learning success determinants: Brazilian empirical study. Comput Educ. 2018;122:273–90.
  9. McCall M, Spencer E, Owen H, Roberts N, Heneghan C. Characteristics and efficacy of digital health education: an overview of systematic reviews. Health Educ J. 2018;77:497–514.
  10. Elgie R, Sapien R, Fullerton L, Moore B. School nurse online emergency preparedness training. J Sch Nurs. 2010;26:368–76.
  11. Camden C, Foley V, Anaby D, Shikako-Thomas K, Gauthier-Boudreault C, Berbari J, et al. Using an evidence-based online module to improve parents’ ability to support their child with developmental coordination disorder. Disabil Health J. 2016;9:406–15.
  12. Sterkenburg PS, Vacaru VS. The effectiveness of a serious game to enhance empathy for care workers for people with disabilities: a parallel randomized controlled trial. Disabil Health J. 2018;11:576–82.
  13. Khaddage F, Müller W, Flintoff K. Advancing mobile learning in formal and informal settings via mobile app technology: where to from here, and how? Educ Technol Soc. 2016:16–26.
  14. Ainsworth MDS, Blehar MC, Waters E, Wall S. Patterns of attachment: a psychological study of the strange situation. Hillsdale: Lawrence Erlbaum; 1978.
  15. Bowlby J. Attachment and loss: volume 1. Attachment. London: Pimlico; 1969.
  16. Groh AM, Fearon RMP, van IJzendoorn MH, Bakermans-Kranenburg MJ, Roisman GI. Attachment in the early life course: meta-analytic evidence for its role in socioemotional development. Child Dev Perspect. 2017;11:70–6.
  17. Janssen CGC, Schuengel C, Stolk J. Understanding challenging behaviour in people with severe and profound intellectual disability: a stress-attachment model. J Intellect Disabil Res. 2002;46:445–53.
  18. Warren SF, Brady NC. The role of maternal responsivity in the development of children with intellectual disabilities. Ment Retard Dev Disabil Res Rev. 2007;13:330–8.
  19. Van Wingerden E, Sterkenburg PS, Wouda M. Improving empathy and self-efficacy in caregivers of persons with intellectual disabilities, using mlearning (HiSense APP-ID): study protocol for a randomized controlled trial. Trials. 2018;19:400.
  20. Carpenter SK, Cepeda NJ, Rohrer D, Kang SHK, Pashler H. Using spacing to enhance diverse forms of learning: review of recent research and implications for instruction. Educ Psychol Rev. 2012;24:369–78.
  21. Bloom BS, Engelhart MD, Furst EJ, Hill WH, Krathwohl DR. Taxonomy of educational objectives: the classification of educational goals. Handbook I: cognitive domain. New York: Longmans, Green; 1956.
  22. Krathwohl DR. A revision of Bloom’s taxonomy: an overview. Theory Pract. 2002;41:212–8.
  23. Seys DM. Kwaliteit van zorg: zorg voor kwaliteit [Quality of care: care for quality]. Doctoral dissertation. Katholieke Universiteit Nijmegen; 1987.
  24. Janssen MJ, Riksen-Walraven JM, Van Dijk JPM. Enhancing the quality of interaction between deafblind children and their educators. J Dev Phys Disabil. 2002;14:87–109.
  25. Jonker D, Sterkenburg PS, Van Rensburg E. Caregiver-mediated therapy for an adult with visual and intellectual impairment suffering from separation anxiety. Res Dev Disabil. 2015;47:1–13.
  26. Pedersen FA, Bryan Y, Huffman L, Del Carmen R. Constructions of self and offspring in the pregnancy and early infancy periods. In: Meetings of the Society for Research in Child Development (SRCD), Kansas City; 1989.
  27. Oosterman M, Schuengel C. Expectations and experiences in parenthood: the role of stress-regulation. Paper presented at the Society for Research in Child Development, Denver; 2009.
  28. Baron-Cohen S, Wheelwright S. The empathy quotient (EQ): an investigation of adults with Asperger syndrome or high functioning autism and normal sex differences. J Autism Dev Disord. 2004;34:163–75.
  29. De Corte K, Buysse A, Verhofstadt LL, Roeyers H, Ponnet K, Davis MH. Measuring empathic tendencies: reliability and validity of the Dutch version of the interpersonal reactivity index. Psychol Belg. 2007;47:235.
  30. Volman I. Empathy quotient (EQ), Dutch version. Autism Research Centre downloadable tests. https://www.autismresearchcentre.com/arc_tests/. Accessed 23 June 2017.
  31. Billington J, Baron-Cohen S, Wheelwright S. Cognitive style predicts entry into physical sciences and humanities: questionnaire and performance tests of empathy and systemizing. Learn Individ Differ. 2007;17:260–8.
  32. Davis MH. Interpersonal reactivity index. JSAS Cat Sel Doc Psychol. 1980;10:14–5.
  33. Gueorguieva R, Krystal JH. Move over ANOVA: progress in analyzing repeated-measures data and its reflection in papers published in the Archives of General Psychiatry. Arch Gen Psychiatr. 2004;61:310–7.
  34. Matuschek H, Kliegl R, Vasishth S, Baayen H, Bates D. Balancing type I error and power in linear mixed models. J Mem Lang. 2017;94:305–15.
  35. George PP, Papachristou N, Belisario JM, Wang W, P a W, Cotic Z, et al. Offline eLearning for undergraduates in health professions: a systematic review of the impact on knowledge, skills, attitudes and satisfaction. J Glob Health. 2014;4:010406.
  36. Young T, Rohwer A, Volmink J, Clarke M. What are the effects of teaching evidence-based health care (EBHC)? Overview of systematic reviews. PLoS One. 2014;9:e86706.

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Department of Clinical Child and Family Studies, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
  2. Ons Tweede Thuis, Amstelveen, the Netherlands
  3. Bartiméus, Doorn, the Netherlands