Wearable sensors for clinical applications in epilepsy, Parkinson’s disease, and stroke: a mixed-methods systematic review

  • Dongni Johansson
  • Kristina Malmgren
  • Margit Alt Murphy
Open Access



Wearable technology is increasingly used to monitor neurological disorders. The purpose of this systematic review was to synthesize knowledge from quantitative and qualitative clinical research using wearable sensors in epilepsy, Parkinson’s disease (PD), and stroke.


A systematic literature search was conducted in PubMed and Scopus spanning from 1995 to January 2017. A synthesis of the main findings, reported adherence to wearables and missing data from quantitative studies, is provided. Clinimetric properties of measures derived from wearables in laboratory, free activities in hospital, and free-living environment were also evaluated. Qualitative thematic synthesis was conducted to explore user experiences and acceptance of wearables.


In total, 56 studies (50 reporting quantitative and 6 reporting qualitative data) were included for data extraction and synthesis. Among studies reporting quantitative data, 5 were in epilepsy, 21 in PD, and 24 in stroke. In epilepsy, wearables are used to detect and differentiate seizures in hospital settings. In PD, the focus is on quantification of cardinal motor symptoms and medication-evoked adverse symptoms in both laboratory and free-living environments. In stroke, upper extremity activity, walking, and physical activity have been studied in the laboratory and during free activities. Three analytic themes emerged from the thematic synthesis of studies reporting qualitative data: acceptable integration in daily life, lack of confidence in technology, and the need to consider individualization.


Wearables may provide information on clinical features of interest in epilepsy, PD, and stroke, but the clinical utility of these measures for supporting clinical decision making remains to be established.


Keywords: Wearable sensors · Parkinson’s disease · Epilepsy · Stroke · Systematic review


Introduction

Wearables is the common term for devices integrated in garments or designed as wearable accessories. Wearables with built-in sensors such as accelerometers, gyroscopes, and magnetometers allow continuous long-term monitoring of movement patterns or physiological variables. In neurology, wearables offer new possibilities to achieve continuous and objective symptom monitoring in clinical as well as out-of-hospital settings. Parkinson’s disease (PD) and stroke are the two neurological conditions where accelerometry-based technology has been applied most [1]. There is also a growing interest in using wearable devices to detect seizures in epilepsy [2]. Although accelerometry-based devices for measuring physical activity were introduced as early as the 1980s and the necessary data management technology has been available since the 1990s, it is only recently that the use of wearable accelerometry-based devices has started to take hold in clinical applications. With increasing use in different neurological diseases, it is necessary to evaluate the clinical efficacy and usefulness of measures derived from wearables. It is also necessary to identify common barriers and facilitators for clinical applications. The different needs for monitoring in the diseases addressed in this review create specific challenges for the use of wearables, but there are also several general problems where solutions from one disease area might be generalizable and of interest to the others. Individuals with a neurological condition might find it difficult to interact with technology due to physical or cognitive limitations, and visually conspicuous wearables may increase disease stigmatization [3]. A comprehensive understanding and evaluation of technology and end-user preferences is important to further facilitate integration of wearables into clinical practice.

The purpose of this systematic review was to provide an overview and to aggregate both quantitative and qualitative knowledge from clinical research with wearable sensor technology in individuals with epilepsy, PD, and stroke. Clinical application areas, main findings, and clinimetric properties of measures derived from wearables, proportion of reported missing data, and adherence along with perceived experiences and preferences of wearables will be summarized for all three diseases.


Methods

A systematic literature search was performed to identify the most relevant quantitative and qualitative studies. Search strategies were created based on the PICO framework (Population, Intervention, Comparison, and Outcome) [4]. The SPIDER tool (Sample, Phenomenon of Interest, Design, Evaluation, and Research type) was used as an additional search strategy to identify qualitative studies [5]. MeSH terms and free keywords were used for searches in PubMed, Scopus, Ovid SP, CINAHL, and the Cochrane Library databases. The search results from the different databases were largely overlapping, but PubMed showed the best coverage for quantitative and Scopus for qualitative studies in terms of relevance and number of articles. Therefore, quantitative studies were selected from PubMed and qualitative studies from Scopus. The searches were limited to articles in English published between 1995 and 2015, and updated in January 2017 (see search strategies in Supplementary information 1).

The inclusion criteria for studies reporting quantitative data were: (1) peer-reviewed original studies; (2) use of wearable sensors (such as accelerometers, gyroscopes, and magnetic sensors) in people with epilepsy, PD, or stroke; (3) monitoring of movements and physiological signs; and (4) study outcomes related to symptoms or impairments with clinical relevance to epilepsy, PD, or stroke. The exclusion criteria were: (1) less than ten participants; (2) conference proceedings, reviews, case reports, non-human studies, and grey literature (e.g., theses, reports, policy and government documents, and study protocols); and (3) implantable sensors.

The inclusion criteria for studies reporting qualitative data were: (1) peer-reviewed original studies; (2) analysis of primary qualitative data; and (3) studies on patients’ or clinicians’ experiences and/or preferences on acceptability, expectations, feasibility, and/or usability of using wearables. Studies were excluded if the qualitative data analysis was not related to wearables.

Each title and abstract was screened for inclusion by two independent reviewers (DJ, MAM). Discrepancies were resolved by discussions between the two reviewers until a consensus was reached. Relevant literature known to the authors from other sources was also screened for inclusion. Reference lists of all included studies were searched manually to identify additional studies (Fig. 1).
Fig. 1

Flow diagram of the systematic review selection process

Quality assessment

A critical appraisal of the reporting quality of the quantitative studies eligible for the review was performed using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement [6]. The STROBE was developed to improve the reporting quality of observational studies and to facilitate critical appraisal and interpretation of study results [7]. Reporting quality is an essential element of a study and indispensable for proper appraisal of the internal and external validity of its findings [8]. The STROBE checklist was used, with particular attention to its key components, which correspond to the basic requirements for quality assessment of observational studies. All 22 items of the STROBE were assessed, and sufficient reporting quality was assigned to studies that met the following standards: a clear statement of objective(s) (item 3); described eligibility (inclusion and exclusion) criteria (item 6); defined outcome variables (items 7 and 11); described statistical methods (item 12); described number and characteristics of participants (items 13 and 14); reported outcome measures and main results (items 15 and 16); and provided a summary and interpretation of key results in concurrence with the study aims (items 18, 20, and 21). Fulfilment of these 12 STROBE items corresponds to more than 50% of all 22 STROBE items, a cutoff used in several previous studies [9, 10, 11]. All 22 items of the STROBE statement were discussed between the two reviewers (DJ, MAM) before quality assessment to reach a consensus understanding of each item of the checklist. The first 20 articles from an alphabetically sorted list were scored independently by the two reviewers to ensure consensus. The remaining included articles (n = 73) were then scored by one reviewer (DJ), and any uncertainties were discussed and rescreened with the second reviewer (MAM).

The methodological quality of studies that reported qualitative data was assessed with the Critical Appraisal Skills Programme (CASP) [12]. The questions of the CASP targeting aims, methodology, design, recruitment, data collection, data analysis, ethical considerations, and findings needed to be fulfilled (see supplementary information 2).

Data extraction and synthesis

The aim, sample characteristics, main findings, proportion of reported missing data, and adherence were extracted from studies with sufficient reporting quality only. A thematic analysis was used to synthesize all text from the results sections of the studies reporting qualitative data that passed the critical appraisal checklist. Free line-by-line coding was performed using NVivo software (QSR International, Melbourne, Australia, version 11.0) [13]. Descriptive themes and subthemes were then constructed based on the free codes, and analytical themes were generated and developed in relation to the descriptive themes.


Results

The initial PubMed and Scopus literature search retrieved a total of 1012 articles (Fig. 1). From these, 210 studies were included in the full-text review, and 104 studies were eligible for quality assessment. Fifty quantitative studies were assigned sufficient reporting quality, and 6 of the 9 studies reporting qualitative data passed the critical appraisal for methodological quality. Thus, 50 studies reporting quantitative data and 6 studies reporting qualitative data were included for further data extraction and synthesis (Fig. 1). Of the 50 papers reporting quantitative data, 5 (10%) were in epilepsy, 21 (42%) in PD, and 24 (48%) in stroke. All studies in epilepsy were conducted in a hospital environment. In PD, 13 studies were conducted in a laboratory, one in a hospital environment, and 7 in a free-living environment. In stroke, 4 studies were conducted in a laboratory, 6 in a hospital environment, and 14 used wearables in a free-living environment. Qualitative data were reported in one study in epilepsy, three in PD, and two in stroke. A meta-analysis was considered unfeasible for the quantitative studies due to the large variation in study aims and designs.

Studies reporting quantitative data

An overview of clinical application areas, population characteristics along with methods, and the main findings is provided in Supplementary Table 1. In epilepsy, wrist-worn sensors with built-in accelerometers were used for detection and classification of seizures in hospital settings [14, 15, 16, 17, 18]. In PD, wearables were used to detect and quantify cardinal motor symptoms including bradykinesia [19], tremor [20, 21], and postural sway [22, 23] as well as medication-evoked adverse symptoms such as dyskinesia [24, 25, 26, 27] and motor fluctuations [28, 29]. Wearables were also used to quantify sleep disturbances [30], gait measures [31, 32], freezing of gait [33, 34], missteps and fall [35, 36], and physical activity levels [37, 38, 39]. In stroke, upper extremity activity [40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51], walking, and physical activity levels were investigated in several studies using step and activity counts [52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62].

Wearables in laboratory environment

In laboratory settings, different standardized daily activities and functional walking and mobility tasks with more or less constrained protocols were used in studies with PD and stroke. Video observations, clinical scales, and other technologies such as gait analysis were often used as standard reference to validate variables derived from wearables. In PD, both accelerometers and gyroscopes were used, while step counts from accelerometers and energy expenditure during walking were investigated in stroke (Fig. 2 and Table 1).
Fig. 2

Reported outcomes of measures derived from wearables applied in epilepsy, PD, and stroke. GTCS generalized tonic–clonic seizures, PNES psychogenic non-epileptic seizures, PD Parkinson’s disease, Sens sensitivity, Spec specificity, COP center of pressure, ICC intraclass correlation coefficient, PSG polysomnography, OMCS optical motion capture system, ARAT the Action Research Arm Test, MAL the Motor Activity Log, FMA Fugl–Meyer Assessment, NIHSS the National Institutes of Health Stroke Scale, UPDRS Unified Parkinson’s Disease Rating Scale, MiniBEST Mini Balance Evaluation Systems Test, PIGD postural instability and gait disorder, UDysRS Unified Dyskinesia Rating Scale, mAIMS modified Abnormal Involuntary Movement Scale, CDRS Clinical Dyskinesia Rating Scale. *Mean value is presented; §Negative correlation is shown

Table 1

Clinimetric properties of measures derived from wearables in laboratory

| Laboratory | PD: medication-evoked adverse symptoms | PD: gait measures | PD: freezing of gait | PD: postural control | Stroke: step counts |
|---|---|---|---|---|---|
| Discrimination, healthy controls | [24, 26] | | | [22, 23] | [54, 59] |
| Discrimination, disease severity | [24, 26] | | | | |
| Standard reference: video-based ratings | [25, 26, 27]a, [29]b | | [33, 34]c | | |
| Standard reference: clinical assessment | | | | | |
| Standard reference: visual observations | | | | | |
| Standard reference: other technologies (gait analysis, center of pressure) | | [31]b, [32]a | | | |

a 3-axial accelerometer and 3-axial gyroscope or inertial measurement units

b 3-axial accelerometer

c 1- or 2-axial accelerometer

Different measures derived from wearables quantifying tremor, dyskinesia, postural sway, and spatiotemporal gait characteristics discriminated well between individuals with PD and healthy controls [20, 22, 24, 26, 31]; dyskinesia measures also discriminated between patients with and without dyskinesia [24, 26]. Moderate-to-strong correlations were reported between dyskinesia detected from wearables and clinical ratings [24, 25, 26]. In addition, good agreement was found between sway and spatiotemporal gait measures from wearables and other established technologies [23, 31, 32]. Wearables showed good agreement with video-based ratings regarding the number of freezing episodes and the percentage of time with freezing of gait [34]. Postural sway measures derived from wearables have been examined for test–retest reliability (ICC 0.55–0.86) [23], and mediolateral sway and jerk were shown to be sensitive enough to detect progression of postural instability in PD over time [22].

In stroke, good agreement was found between step counts derived from wearables and step counts from 3D gait analysis [53] or video-based counts [54]. One study reported no significant correlation between step counts derived from arm-worn sensors and manual observational step counting, while an inconsistent but moderate-to-strong correlation (r = 0.56–0.85) with indirect calorimetry was noted for measuring energy expenditure [59]. Test–retest reliability for step counts and energy expenditure (ICC = 0.61–0.98) was also reported [59].

Wearables in hospital environment

In hospital environments, patients were free to move and perform their daily activities within the ward or hospital. Only accelerometer data were reported and measurements lasted between 1 and 9 days. No studies investigated the test–retest reliability or responsiveness in free activities at hospital settings. Video electroencephalography (video-EEG), clinical scales, and polysomnography were used as the standard references to validate the variables derived from wearables (Fig. 2 and Table 2).
Table 2

Clinimetric properties of measures derived from wearables in free activities at hospital

| Free activity in hospital | Epilepsy: generalized tonic–clonic seizures | Epilepsy: psychogenic non-epileptic seizures | Epilepsy: motor seizures | PD: sleep disturbance | Stroke: upper extremity activity |
|---|---|---|---|---|---|
| Discrimination, healthy controls | | | | | [41, 42, 44, 50] |
| Discrimination, disease severity | | [17, 18] | | | [40, 41] |
| Standard reference: video electroencephalogram | [15]b, [16]c | [17, 18]b | | | |
| Standard reference: clinical assessment | | | | | [40, 41, 42, 44]c |

a 3-axial accelerometer and 3-axial gyroscope or inertial measurement units

b 3-axial accelerometer

c 1- or 2-axial accelerometer

In epilepsy, stereotypical movement patterns for motor seizures were detected with three-axis accelerometers in 95% of the motor seizures identified with video-EEG [14]. More recent studies demonstrated detection sensitivities of 90–92% for convulsive seizures [15, 16], but the number of false-positive events varied between studies: one study reported 40 false alarms in 16 of 73 patients [15], and another reported 81 false alarms in 17 of 30 patients [16]. Differentiation of psychogenic non-epileptic seizures from epileptic seizures showed a sensitivity of 93–100% with different machine-learning approaches, while the specificity ranged from 75 to 91% [17, 18].
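Sensitivity and false-alarm figures of this kind come from comparing wearable-detected events against the video-EEG reference. A minimal sketch of that comparison, using hypothetical event labels rather than data from any included study:

```python
# Sketch: evaluating a seizure detector against a video-EEG reference.
# Event IDs below are hypothetical, for illustration only.

def detection_metrics(reference, detected):
    """reference, detected: sets of event IDs (confirmed seizures vs. alarms)."""
    true_positives = len(reference & detected)   # detected real seizures
    false_negatives = len(reference - detected)  # missed seizures
    false_alarms = len(detected - reference)     # alarms with no seizure
    sensitivity = true_positives / (true_positives + false_negatives)
    return sensitivity, false_alarms

reference = {"sz1", "sz2", "sz3", "sz4", "sz5"}     # video-EEG-confirmed seizures
detected = {"sz1", "sz2", "sz3", "sz4", "alarm1"}   # events flagged by the wearable

sens, fa = detection_metrics(reference, detected)
print(f"sensitivity = {sens:.0%}, false alarms = {fa}")  # sensitivity = 80%, false alarms = 1
```

Specificity is harder to define for continuous monitoring, since "true negatives" depend on how the seizure-free time is segmented; this is one reason studies report false-alarm counts instead.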

Upper extremity activity measures derived from accelerometers discriminated well between persons with stroke and healthy controls [41, 42, 44, 50] as well as between patients with different impairment levels [40, 41]. Moderate correlations were found between arm activity measures (activity counts) and clinical assessments in individuals with acute stroke [40, 41, 42, 44]. In one study, the walking activity measured with ankle accelerometers in hospital showed low correlation with stroke severity, but interestingly, a greater level of asymmetry was detected for individuals with stroke during their daily walking at hospital compared to laboratory gait analysis [61].

Wearables in a free-living environment

Monitoring of movement-related symptoms and deficits in a free-living environment is challenging: disease-related movement patterns, such as epileptic seizures, must be differentiated from common voluntary movements such as tooth brushing. To overcome this problem, advanced algorithm development is often required to reach sufficient accuracy. The wearing time in studies conducted in the free-living environment varied between 8 h and 7 days, and only data from accelerometers were used. Clinical scales were commonly used to determine relationships between wearables and clinical assessments (Fig. 2 and Table 3).
Table 3

Clinimetric properties of measures derived from wearables in free-living environment

| Free living | PD: medication-evoked adverse symptoms | PD: physical activity | Stroke: upper extremity activity | Stroke: physical activity and sedentary time | Stroke: step counts |
|---|---|---|---|---|---|
| Discrimination, healthy controls | | | | | |
| Discrimination, disease severity | | [35, 36] | [43, 45, 47, 48] | | |
| Standard reference: clinical assessment | | [38]c, [39]b | [43, 48, 49]c, [46, 51]b | | |
| Standard reference: laboratory tests | | | [48, 49] | [56, 57, 58] | [55], [62] |

a 3-axial accelerometer and 3-axial gyroscope or inertial measurement units

b 3-axial accelerometer

c 1- or 2-axial accelerometer

In PD, acceleration-based assessment of bradykinesia in free-living settings was described as early as 1998 [19]. The results showed that acceleration of the extremities and immobility measures were effective in discriminating individuals with PD from controls [19]. A more recent study showed that a commercial proprietary algorithm could discriminate between individuals with and without motor fluctuations, and detect changes in fluctuations before and after deep brain stimulation [28]. Quantification of missteps and risk of falling was shown to discriminate non-fallers from fallers [35, 36]. A poor-to-moderate correlation was reported between measures from accelerometers (e.g., step counts and activity counts) and the Unified Parkinson’s Disease Rating Scale [38, 39]. Over a 1-year period, a decline in physical activity levels was detected using accelerometers in individuals with PD [37].

In stroke, arm activity measures discriminated effectively between individuals with stroke and healthy controls [45], and between different motor impairment levels [45, 47, 48]. Moderate-to-strong correlations were found between accelerometer measures (threshold-based counts per time unit) and clinical upper extremity scales in chronic stroke [46, 49]. Test–retest reliability varied between studies, but moderate agreement (ICC 0.54 and 0.68) was found for 3- and 7-day monitoring of daily activity counts [56]. Gait-based measures (e.g., step counts and step rate) over 1- or 3-day periods showed good test–retest reliability (ICC = 0.83–0.99) [52]. Threshold-based activity counts of arm activity were also shown to be reliable (r = 0.81–0.9) in test–retest [49]. Measures of activity levels (e.g., amount of time spent in an upright position) showed changes over time during both the acute and subacute stages of stroke [55]. The amount of time spent walking and standing and the number of walking bouts were also shown to be sensitive to change over a 12-week period after stroke [60].
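Several of the stroke studies above rely on threshold-based activity counts. A minimal sketch of the idea, with an illustrative epoch length and threshold rather than parameters from any included study:

```python
# Sketch of threshold-based activity counting: within fixed-length epochs,
# count accelerometer samples whose magnitude exceeds a threshold.
# Signal values, epoch length, and threshold are illustrative only.

def activity_counts(accel_magnitude, epoch_len, threshold):
    """Return one count per epoch: samples exceeding the threshold."""
    counts = []
    for start in range(0, len(accel_magnitude), epoch_len):
        epoch = accel_magnitude[start:start + epoch_len]
        counts.append(sum(1 for a in epoch if abs(a) > threshold))
    return counts

# 12 samples, epochs of 4 samples, threshold of 0.5 (arbitrary units)
signal = [0.1, 0.7, 0.9, 0.2, 0.0, 0.1, 0.6, 0.8, 0.3, 0.2, 0.1, 0.0]
print(activity_counts(signal, epoch_len=4, threshold=0.5))  # [2, 2, 0]
```

Summing these per-epoch counts over a wearing period yields the "counts per time unit" that the reviewed studies correlated with clinical upper extremity scales.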

Adherence to wearables

Five studies in stroke and one in PD reported compliance regarding the use of wearables (Fig. 3a). A large study (n = 408) investigating adherence to a step activity monitor over 2 days reported adherence rates between 61 and 68% for separate days, but only 53% of participants wore the sensors for two consecutive days [63]. Older individuals and those with better balance self-efficacy and walking endurance showed better adherence [63]. An intervention study in stroke showed that participants wore accelerometers 76–89% of waking hours during a 3-day measurement [48, 49]. A study evaluating the acceptability of wrist-worn sensors in PD reported that only 2 of 34 participants did not wear the sensors for the full 7-day period, and the overall non-adherence time was 4% [64].
Fig. 3

a Adherence to continuous monitoring using wearables. b Reported missing data due to technical errors and/or insufficient wearing time or person-related reasons. Mean values are presented. #Adherence rate is shown

Missing and incomplete data

Missing data, as reported in 12 studies, were attributed to technical errors and/or human factors. Four studies reported technical errors including device failures, disconnection between sensors, and data storage problems (Fig. 3b). The average percentage of missing data attributable to technical errors in these studies was 10% (range 6–14%). Four studies reported that human factors, such as the device being removed or used incorrectly, were the predominant reasons for incomplete data; missing data attributable to human factors averaged 12% (range 4–24%). Missing data resulting from the combination of human factors and technical errors averaged 19% (range 6–24%).
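The averages above are simple per-study means. As a worked check of the arithmetic, with hypothetical per-study percentages chosen only to fall within the reported ranges (the review does not list the individual study values):

```python
# Hypothetical per-study missing-data percentages, four studies per cause,
# chosen to lie within the reported ranges (technical 6-14%, human 4-24%).
technical = [6, 8, 12, 14]
human = [4, 8, 12, 24]

mean = lambda xs: sum(xs) / len(xs)
print(mean(technical), mean(human))  # 10.0 12.0
```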

Studies reporting qualitative data

Three analytic themes emerged in the qualitative thematic synthesis: acceptable integration in daily life, lack of confidence in technology, and the need to consider individualization (Table 4).
Table 4

Thematic synthesis of patients’ experiences, acceptance, and preferences for use of wearables

Analytic theme: Acceptable integration in daily life
- Descriptive themes: acceptable properties; acceptable different designs; acceptable long-term use; acceptable functions in daily life; easy to don and take off; comfortable in daily activities

Analytic theme: Lack of confidence in technology
- Descriptive themes: psychosocial influence; self-conscious in public; anxious about wearing technology; need for confirmation; need for technical support; need for extra training; need for feedback; difficulties in use; difficulties with correct use; difficulties dealing with technical failure; difficulties managing the battery

Analytic theme: The need to consider individualization
- Descriptive themes: user-friendliness; less obtrusive in appearance; easy to learn and use; user benefits; improvement for disease management

Acceptable integration in daily life

In general, individuals with epilepsy, PD, and stroke were positive towards using wearables, such as small body-worn separate sensor units [64], gloves [65], smart glasses [66], and “intelligent” clothes [67]. A wearing time of 7 days was reported as acceptable by patients with PD [64]. Persons with epilepsy reported that they would agree to use a seizure registration device, and 65% would want to use it permanently [67]. Participants with stroke and PD described that wearables did not impact their daily activities [64, 65, 68, 69]. Participants found wrist-worn sensors easy to put on and take off [64]; in other studies, however, some participants with stroke felt that extra help would be needed to put the wrist sensors on, although the sensors were comfortable to wear during daily activities [65, 68, 69].

Lack of confidence in technology

Participants with PD and stroke were mostly positive and agreed to use wearables both at home and in public environments. Some felt self-conscious when the sensors could be seen by others, especially during summer [64, 69]. Potential embarrassment and stigmatization were anticipated if other people were to ask or question what they were wearing, thereby making their disease more apparent. Feeling “embarrassed” and that the sensors might “look funny” were described by participants with stroke [65]. Some participants also expressed feelings of stress and awkwardness towards the very idea of wearing a technological device [69]. Participants with PD further expressed that it was stressful to fasten the sensors during an off state [64, 69].

Participants worried that the sensors would get wet while washing dishes or showering [64, 69]. Participants with stroke felt a need for clear instructions on how to use the device, including both how to wear and how to operate it [68]. They wanted repeated instructions, confirmation, supervised practice, and external support for technical problems in follow-up sessions to improve their confidence [68, 69].

In addition, participants with PD and stroke reported difficulties in using the device correctly, handling technical errors, and charging the battery. They worried that unpredictable technical errors would lead to confusion about how to handle the wearables [68]. They experienced that keeping and placing the sensors at correct positions were difficult [64, 68, 69], and in PD, this was even more challenging during an off state [64, 69].

The need to consider individualization

Individuals with epilepsy, PD, and stroke reported a wide spectrum of expectations in terms of usability of wearables [65, 66, 67, 68, 69]. Participants with PD and stroke described that wearables should be easy to learn and use [65, 66, 68, 69]. Wearables need to be small and non-obtrusive [64, 65, 68, 69], and some stroke participants suggested that sensors could be worn on the upper arm instead of the wrists to make them less noticeable [65]. Both epilepsy and PD participants further described desirable features of wearables, including the possibility of real-time analysis of data, getting reminders to take drugs and waterproof design [66, 67, 69]. Persons with epilepsy wanted features that would allow improved diagnosis and seizure management [67]. PD participants wanted wearables to assist with physiotherapy training, to improve gait and balance problems [66].


Discussion

This systematic review illustrates how wearables have been used to monitor movement and disease-related signs in epilepsy, PD, and stroke in different environments, including laboratory, hospital, and free-living settings. Despite an increasing number of studies using wearables in clinical applications, only half of the eligible studies identified were of sufficient reporting quality. In epilepsy, wearables were primarily used to detect and differentiate seizures. In PD, the focus was on quantification of dyskinesia, tremor, and bradykinesia, and in stroke, on upper extremity activity, gait, and physical activity. Clinimetric properties were predominantly investigated in studies using discrete outcome variables such as activity counts or other acceleration-derived variables; in studies developing more complex algorithms, it was usually the classification accuracy and precision of the algorithms that were tested instead. The validity of measures derived from wearables was addressed to some extent in several studies, but reliability and responsiveness have only been studied in PD and stroke. For example, postural sway measures in PD have been shown to be reliable and sensitive to longitudinal changes in laboratory settings [22, 23]. In stroke, step counts and measures of upper extremity and physical activity were shown to be reliable [48, 49, 52, 56, 57, 58, 59], and sedentary, upright, and walking behaviour measures have been shown to be sensitive to longitudinal changes [55, 60, 62].

The current review also showed that technical errors and human factors influenced adherence and are important reasons for loss of data. The qualitative thematic analysis of studies which reported users’ experiences and acceptance rendered three main analytic themes: acceptable integration in daily life, lack of confidence in technology, and the need to consider individualization. These themes reflect some challenges that need to be met for wearables to be integrated in the clinical practice.

This review included 22 studies conducted in free-living environments, 16 in laboratory, and 12 in hospital settings. Data collection in a standardized environment such as a laboratory or hospital allows a more detailed evaluation of algorithm and device performance during well-defined movements and tasks in comparison with other established methods such as video, optical motion capture, or EEG. The evaluation is much more challenging in complex and unpredictable free-living conditions. As a reflection of this, we found no studies in epilepsy based on measurements during free-living conditions. Because free-living conditions carry the most promising potential of wearables, it may in some cases be possible to “move the laboratory” into the free-living environment as a transition strategy to confirm device and algorithm performance, e.g., with wearable EEG equipment and video monitoring in predefined areas. In the long run, however, evaluation of the performance of wearables in free-living conditions will have to include interventional studies that address the effect of automatic home monitoring on disease-related endpoints. At some stage, the transition from laboratory to clinical use will therefore involve a leap of faith, where one has to be convinced that the devices and algorithms are good enough to be used in randomized clinical trials. The lack of data on the clinical utility of wearables in free-living conditions is a gap that needs to be filled. Promising results have been reported for capturing motor fluctuations in PD using a single wrist sensor [28] and for correctly classifying individuals with dyskinesia using a single ankle-worn sensor in the home environment [26]. As these phenomena influence quality of life and can be influenced by changes in treatment, improved detection and evaluation with wearables can be expected to improve disease-specific quality of life and other measurements of disease burden.

Several studies in PD and stroke using wearables during free activities or in free-living conditions reported moderate-to-strong correlations between measures derived from wearables and clinical scales. Although the clinical scales may adequately reflect the patients’ symptoms or disabilities, they are often limited by the predefined ordinal scoring levels and lack sensitivity to more detailed and subtle changes in the clinical status [22, 37, 70, 71]. For the detection of seizures, which are relatively rare and brief events, the requirement for accuracy is greater than for detecting symptoms of PD and activity measures in stroke. It is, therefore, a bigger step to move from controlled to uncontrolled environments. One thing common to the three disorders, however, is that during free-living monitoring, the comparison methods are often subjective and retrospective. The challenge of evaluating wearable devices without a reasonably good reference needs to be addressed before they can be applied in regular care.

Acceleration signals were used predominantly in hospital and free-living settings, even though gyroscope signals have shown promise for increasing sensitivity and specificity when measuring dyskinesia and postural instability in laboratory settings [23, 24, 25, 26]. One explanation is that gyroscopes consume more battery power and thereby limit the measurement time. One epilepsy study suggested that combining electrodermal activity with accelerometry might increase sensitivity and specificity in seizure detection, compared to accelerometry alone [72]. The idea of measuring multiple physiological modalities can also be transferred to the detection of non-motor symptoms in PD. The practical problems of processing and storing large volumes of data will, however, increase with the use of multiple sensors and modalities. To improve precision, patient-specific algorithms have recently been suggested in epilepsy [73].

Interestingly, we found that studies with longer monitoring times reported better adherence to wearables. This could indicate that increased confidence with the use of wearables has a positive impact on adherence; lack of confidence in handling the new technology was also one of the main themes that emerged from our thematic synthesis. The optimal wearing time will also vary with the nature of the targeted symptoms. For example, 1 month or more could be needed for monitoring seizures in an epilepsy outpatient, whereas for monitoring motor fluctuations in PD or physical activity levels in stroke, 7 days would be ideal because of expected variation between weekday and weekend activity, although 1–3 days may be more practical.

Human factors contributed to between 4 and 24% of data loss in the included studies. For routine use, data loss has to be in the lower part of this range, and it is, therefore, important to analyse which factors contribute most to non-adherence. Positive acceptance of wearables emerged as one of the main themes from our thematic synthesis, and technical support and feedback were considered important for increasing motivation and confidence in the use of wearables.

This systematic review, like several before it [70, 74, 75, 76, 77, 78], highlights a need to further investigate the clinimetric properties of measures derived from wearables, to improve standardization of data protocols and variable definitions, and to encourage further development of patient-specific algorithms. The possible benefit of using multimodal information also needs further investigation. After devices and algorithms have been validated in controlled environments, efforts should be made to subject wearable technology to randomized clinical trials that can determine whether home-monitoring improves management and treatment results. This review also reveals a need to improve the reporting quality of studies evaluating wearables for clinical applications, which would improve dissemination of results into clinical practice. We identified a wide range of outcome measures, but no studies directly addressed the effect wearables may have on decision making or clinical treatment outcomes. The clinical utility, therefore, remains to be established.


Sources of funding

The study was funded by grants from the Swedish Foundation for Strategic Research (Grant SBE 13-0086), from the Sahlgrenska Academy at the University of Gothenburg through the LUA/ALF agreement with Västra Götalandsregionen (Grant ALFGBG-429901), and from the Swedish government’s Agreement for Medical Training and Research.

Compliance with ethical standards

Conflicts of interest

None of the authors report any conflicts of interest with regard to the present study.

Supplementary material

Supplementary material 1 (PDF 256 kb)
Supplementary material 2 (XLSX 35 kb)
Supplementary material 3 (DOCX 41 kb)


References

1. Steins D, Dawes H, Esser P, Collett J (2014) Wearable accelerometry-based technology capable of assessing functional activities in neurological populations in community settings: a systematic review. J Neuroeng Rehabil 11:36
2. Ramgopal S, Thome-Souza S, Jackson M, Kadish NE, Sanchez Fernandez I, Klehm J, Bosl W, Reinsberger C, Schachter S, Loddenkemper T (2014) Seizure detection, seizure prediction, and closed-loop warning systems in epilepsy. Epilepsy Behav 37:291–307
3. Ozanne A, Johansson D, Hallgren Graneheim U, Malmgren K, Bergquist F, Alt Murphy M (2017) Wearables in epilepsy and Parkinson’s disease—a focus group study. Acta Neurol Scand
4. Richardson WS, Wilson MC, Nishikawa J, Hayward RS (1995) The well-built clinical question: a key to evidence-based decisions. ACP J Club 123(3):A12–A13
5. Cooke A, Smith D, Booth A (2012) Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qual Health Res 22(10):1435–1443
6. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP, STROBE Initiative (2008) The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. J Clin Epidemiol 61(4):344–349
7. Vandenbroucke JP, von Elm E, Altman DG, Gotzsche PC, Mulrow CD, Pocock SJ, Poole C, Schlesselman JJ, Egger M (2007) Strengthening the reporting of observational studies in epidemiology (STROBE): explanation and elaboration. Epidemiology 18(6):805–835
8. da Costa BR, Cevallos M, Altman DG, Rutjes AW, Egger M (2011) Uses and misuses of the STROBE statement: bibliographic study. BMJ Open 1(1):e000048
9. Folletti I, Zock JP, Moscato G, Siracusa A (2014) Asthma and rhinitis in cleaning workers: a systematic review of epidemiological studies. J Asthma 51(1):18–28
10. Cheng HM, Guitera P (2015) Systematic review of optical coherence tomography usage in the diagnosis and management of basal cell carcinoma. Br J Dermatol 173(6):1371–1380
11. Soares NM, Leao AS, Santos JR, Monteiro GR, dos Santos JR, Thomazzi SM, Silva RJ (2014) Systematic review shows only few reliable studies of physical activity intervention in adolescents. Sci World J 2014:206478
12. Critical Appraisal Skills Programme (2017) CASP qualitative checklist. Accessed 29 Dec 2017
13. Thomas J, Harden A (2008) Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol 8:45
14. Nijsen TM, Arends JB, Griep PA, Cluitmans PJ (2005) The potential value of three-dimensional accelerometry for detection of motor seizures in severe epilepsy. Epilepsy Behav 7(1):74–84
15. Beniczky S, Polster T, Kjaer TW, Hjalgrim H (2013) Detection of generalized tonic-clonic seizures by a wireless wrist accelerometer: a prospective, multicenter study. Epilepsia 54(4):e58–e61
16. Velez M, Fisher RS, Bartlett V, Le S (2016) Tracking generalized tonic-clonic seizures with a wrist accelerometer linked to an online database. Seizure 39:13–18
17. Bayly J, Carino J, Petrovski S, Smit M, Fernando DA, Vinton A, Yan B, Gubbi JR, Palaniswami MS, O’Brien TJ (2013) Time-frequency mapping of the rhythmic limb movements distinguishes convulsive epileptic from psychogenic nonepileptic seizures. Epilepsia 54(8):1402–1408
18. Gubbi J, Kusmakar S, Rao A, Yan B, O’Brien T, Palaniswami M (2015) Automatic detection and classification of convulsive psychogenic non-epileptic seizures using a wearable device. IEEE J Biomed Health Inform
19. Dunnewold RJ, Hoff JI, van Pelt HC, Fredrikze PQ, Wagemans EA, van Hilten BJ (1998) Ambulatory quantitative assessment of body position, bradykinesia, and hypokinesia in Parkinson’s disease. J Clin Neurophysiol 15(3):235–242
20. Scanlon BK, Levin BE, Nation DA, Katzen HL, Guevara-Salcedo A, Singer C, Papapetropoulos S (2013) An accelerometry-based study of lower and upper limb tremor in Parkinson’s disease. J Clin Neurosci 20(6):827–830
21. Heldman DA, Espay AJ, LeWitt PA, Giuffrida JP (2014) Clinician versus machine: reliability and responsiveness of motor endpoints in Parkinson’s disease. Parkinsonism Relat Disord 20(6):590–595
22. Mancini M, Carlson-Kuhta P, Zampieri C, Nutt JG, Chiari L, Horak FB (2012) Postural sway as a marker of progression in Parkinson’s disease: a pilot longitudinal study. Gait Posture 36(3):471–476
23. Mancini M, Salarian A, Carlson-Kuhta P, Zampieri C, King L, Chiari L, Horak FB (2012) ISway: a sensitive, valid and reliable measure of postural control. J Neuroeng Rehabil 9:59
24. Lopane G, Mellone S, Chiari L, Cortelli P, Calandra-Buonaura G, Contin M (2015) Dyskinesia detection and monitoring by a single sensor in patients with Parkinson’s disease. Mov Disord
25. Pulliam CL, Burack MA, Heldman DA, Giuffrida JP, Mera TO (2014) Motion sensor dyskinesia assessment during activities of daily living. J Parkinsons Dis 4(4):609–615
26. Ramsperger R, Meckler S, Heger T, van Uem J, Hucker S, Braatz U, Graessner H, Berg D, Manoli Y, Serrano JA, Ferreira JJ, Hobert MA, Maetzler W, SENSE-PARK study team (2016) Continuous leg dyskinesia assessment in Parkinson’s disease—clinical validity and ecological effect. Parkinsonism Relat Disord 26:41–46
27. Mera TO, Burack MA, Giuffrida JP (2013) Objective motion sensor assessment highly correlated with scores of global levodopa-induced dyskinesia in Parkinson’s disease. J Parkinsons Dis 3(3):399–407
28. Horne MK, McGregor S, Bergquist F (2015) An objective fluctuation score for Parkinson’s disease. PLoS One 10(4):e0124522
29. Rodriguez-Molinero A, Sama A, Perez-Martinez DA, Perez Lopez C, Romagosa J, Bayes A, Sanz P, Calopa M, Galvez-Barron C, de Mingo E, Rodriguez Martin D, Gonzalo N, Formiga F, Cabestany J, Catala A (2015) Validation of a portable device for mapping motor and gait disturbances in Parkinson’s disease. JMIR mHealth uHealth 3(1):e9
30. Maglione JE, Liu L, Neikrug AB, Poon T, Natarajan L, Calderon J, Avanzino JA, Corey-Bloom J, Palmer BW, Loredo JS, Ancoli-Israel S (2013) Actigraphy for the assessment of sleep measures in Parkinson’s disease. Sleep 36(8):1209–1217
31. Lord S, Rochester L, Baker K, Nieuwboer A (2008) Concurrent validity of accelerometry to measure gait in Parkinson’s disease. Gait Posture 27(2):357–359
32. Esser P, Dawes H, Collett J, Feltham MG, Howells K (2012) Validity and inter-rater reliability of inertial gait measurements in Parkinson’s disease: a pilot study. J Neurosci Methods 205(1):177–181
33. Yungher DA, Morris TR, Dilda V, Shine JM, Naismith SL, Lewis SJ, Moore ST (2014) Temporal characteristics of high-frequency lower-limb oscillation during freezing of gait in Parkinson’s disease. Parkinsons Dis 2014:606427
34. Morris TR, Cho C, Dilda V, Shine JM, Naismith SL, Lewis SJ, Moore ST (2012) A comparison of clinical and objective measures of freezing of gait in Parkinson’s disease. Parkinsonism Relat Disord 18(5):572–577
35. Weiss A, Herman T, Giladi N, Hausdorff JM (2014) Objective assessment of fall risk in Parkinson’s disease using a body-fixed sensor worn for 3 days. PLoS One 9(5):e96675
36. Iluz T, Gazit E, Herman T, Sprecher E, Brozgol M, Giladi N, Mirelman A, Hausdorff JM (2014) Automated detection of missteps during community ambulation in patients with Parkinson’s disease: a new approach for quantifying fall risk in the community setting. J Neuroeng Rehabil 11:48
37. Cavanaugh JT, Ellis TD, Earhart GM, Ford MP, Foreman KB, Dibble LE (2012) Capturing ambulatory activity decline in Parkinson’s disease. J Neurol Phys Ther 36(2):51–57
38. Skidmore FM, Mackman CA, Pav B, Shulman LM, Garvan C, Macko RF, Heilman KM (2008) Daily ambulatory activity levels in idiopathic Parkinson disease. J Rehabil Res Dev 45(9):1343–1348
39. Nero H, Benka Wallen M, Franzen E, Conradsson D, Stahle A, Hagstromer M (2016) Objectively assessed physical activity and its association with balance, physical function and dyskinesia in Parkinson’s disease. J Parkinsons Dis 6(4):833–840
40. Gebruers N, Truijen S, Engelborghs S, Nagels G, Brouns R, De Deyn PP (2008) Actigraphic measurement of motor deficits in acute ischemic stroke. Cerebrovasc Dis 26(5):533–540
41. Gebruers N, Truijen S, Engelborghs S, De Deyn PP (2013) Predictive value of upper-limb accelerometry in acute stroke with hemiparesis. J Rehabil Res Dev 50(8):1099–1106
42. Le Heron C, Fang K, Gubbi J, Churilov L, Palaniswami M, Davis S, Yan B (2014) Wireless accelerometry is feasible in acute monitoring of upper limb motor recovery after ischemic stroke. Cerebrovasc Dis 37(5):336–341
43. Thrane G, Emaus N, Askim T, Anke A (2011) Arm use in patients with subacute stroke monitored by accelerometry: association with motor impairment and influence on self-dependence. J Rehabil Med 43(4):299–304
44. Lang CE, Wagner JM, Edwards DF, Dromerick AW (2007) Upper extremity use in people with hemiparesis in the first few weeks after stroke. J Neurol Phys Ther 31(2):56–63
45. de Niet M, Bussmann JB, Ribbers GM, Stam HJ (2007) The stroke upper-limb activity monitor: its sensitivity to measure hemiplegic upper-limb activity during daily life. Arch Phys Med Rehabil 88(9):1121–1126
46. van der Pas SC, Verbunt JA, Breukelaar DE, van Woerden R, Seelen HA (2011) Assessment of arm activity using triaxial accelerometry in patients with a stroke. Arch Phys Med Rehabil 92(9):1437–1442
47. Shim S, Kim H, Jung J (2014) Comparison of upper extremity motor recovery of stroke patients with actual physical activity in their daily lives measured with accelerometers. J Phys Ther Sci 26(7):1009–1011
48. Uswatte G, Foo WL, Olmstead H, Lopez K, Holand A, Simms LB (2005) Ambulatory monitoring of arm movement using accelerometry: an objective measure of upper-extremity rehabilitation in persons with chronic stroke. Arch Phys Med Rehabil 86(7):1498–1501
49. Uswatte G, Giuliani C, Winstein C, Zeringue A, Hobbs L, Wolf SL (2006) Validity of accelerometry for monitoring real-world arm activity in patients with subacute stroke: evidence from the extremity constraint-induced therapy evaluation trial. Arch Phys Med Rehabil 87(10):1340–1345
50. Michielsen ME, Selles RW, Stam HJ, Ribbers GM, Bussmann JB (2012) Quantifying nonuse in chronic stroke patients: a study into paretic, nonparetic, and bimanual upper-limb use in daily life. Arch Phys Med Rehabil 93(11):1975–1981
51. Urbin MA, Waddell KJ, Lang CE (2015) Acceleration metrics are responsive to change in upper extremity function of stroke survivors. Arch Phys Med Rehabil 96(5):854–861
52. Mudge S, Stott NS (2008) Test–retest reliability of the StepWatch activity monitor outputs in individuals with chronic stroke. Clin Rehabil 22(10–11):871–877
53. Mudge S, Stott NS, Walt SE (2007) Criterion validity of the StepWatch activity monitor as a measure of walking activity in patients after stroke. Arch Phys Med Rehabil 88(12):1710–1715
54. Fulk GD, Combs SA, Danks KA, Nirider CD, Raja B, Reisman DS (2014) Accuracy of 2 activity monitors in detecting steps in people with stroke and traumatic brain injury. Phys Ther 94(2):222–229
55. Askim T, Bernhardt J, Churilov L, Fredriksen KR, Indredavik B (2013) Changes in physical activity and related functional and disability levels in the first 6 months after stroke: a longitudinal follow-up study. J Rehabil Med 45(5):423–428
56. Hale LA, Pal J, Becker I (2008) Measuring free-living physical activity in adults with and without neurologic dysfunction with a triaxial accelerometer. Arch Phys Med Rehabil 89(9):1765–1771
57. Rand D, Eng JJ, Tang PF, Jeng JS, Hung C (2009) How active are people with stroke? Use of accelerometers to assess physical activity. Stroke 40(1):163–168
58. Haeuber E, Shaughnessy M, Forrester LW, Coleman KL, Macko RF (2004) Accelerometer monitoring of home- and community-based ambulatory activity after stroke. Arch Phys Med Rehabil 85(12):1997–2001
59. Vanroy C, Vissers D, Cras P, Beyne S, Feys H, Vanlandewijck Y, Truijen S (2014) Physical activity monitoring in stroke: SenseWear Pro2 activity accelerometer versus Yamax Digi-Walker SW-200 pedometer. Disabil Rehabil 36(20):1695–1703
60. Sanchez MC, Bussmann J, Janssen W, Horemans H, Chastin S, Heijenbrok M, Stam H (2015) Accelerometric assessment of different dimensions of natural walking during the first year after stroke: recovery of amount, distribution, quality and speed of walking. J Rehabil Med
61. Prajapati SK, Gage WH, Brooks D, Black SE, McIlroy WE (2011) A novel approach to ambulatory monitoring: investigation into the quantity and control of everyday walking in patients with subacute stroke. Neurorehabil Neural Repair 25(1):6–14
62. Tieges Z, Mead G, Allerhand M, Duncan F, van Wijck F, Fitzsimons C, Greig C, Chastin S (2015) Sedentary behavior in the first year after stroke: a longitudinal cohort study with objective measures. Arch Phys Med Rehabil 96(1):15–23
63. Barak S, Wu SS, Dai Y, Duncan PW, Behrman AL (2014) Adherence to accelerometry measurement of community ambulation poststroke. Phys Ther 94(1):101–110
64. Fisher JM, Hammerla NY, Rochester L, Andras P, Walker RW (2016) Body-worn sensors in Parkinson’s disease: evaluating their acceptability to patients. Telemed J E Health 22(1):63–69
65. Simone LK, Sundarrajan N, Luo X, Jia Y, Kamper DG (2007) A low cost instrumented glove for extended monitoring and functional hand assessment. J Neurosci Methods 160(2):335–348
66. Zhao Y, Heida T, van Wegen EE, Bloem BR, van Wezel RJ (2015) E-health support in people with Parkinson’s disease with smart glasses: a survey of user requirements and expectations in the Netherlands. J Parkinsons Dis
67. Hoppe C, Feldmann M, Blachut B, Surges R, Elger CE, Helmstaedter C (2015) Novel techniques for automated seizure registration: patients’ wants and needs. Epilepsy Behav 52(Pt A):1–7
68. Mountain G, Wilson S, Eccleston C, Mawson S, Hammerton J, Ware T, Zheng H, Davies R, Black N, Harris N, Stone T, Hu H (2010) Developing and testing a telerehabilitation system for people following stroke: issues of usability. J Eng Des 21(2):223–236
69. Cancela J, Pastorino M, Tzallas AT, Tsipouras MG, Rigas G, Arredondo MT, Fotiadis DI (2014) Wearability assessment of a wearable system for Parkinson’s disease remote monitoring based on a body area network of sensors. Sensors (Basel) 14(9):17235–17255
70. Kubota KJ, Chen JA, Little MA (2016) Machine learning for large-scale wearable sensor data in Parkinson’s disease: concepts, promises, pitfalls, and futures. Mov Disord 31(9):1314–1326
71. Bustren EL, Sunnerhagen KS, Alt Murphy M (2017) Movement kinematics of the ipsilesional upper extremity in persons with moderate or mild stroke. Neurorehabil Neural Repair 31(4):376–386
72. Poh MZ, Loddenkemper T, Reinsberger C, Swenson NC, Goyal S, Sabtala MC, Madsen JR, Picard RW (2012) Convulsive seizure detection using a wrist-worn electrodermal activity and accelerometry biosensor. Epilepsia 53(5):e93–e97
73. Cuppens K, Karsmakers P, Van de Vel A, Bonroy B, Milosevic M, Luca S, Croonenborghs T, Ceulemans B, Lagae L, Van Huffel S, Vanrumste B (2014) Accelerometry-based home monitoring for detection of nocturnal hypermotor seizures based on novelty detection. IEEE J Biomed Health Inform 18(3):1026–1033
74. Jory C, Shankar R, Coker D, McLean B, Hanna J, Newman C (2016) Safe and sound? A systematic literature review of seizure detection methods for personal use. Seizure 36:4–15
75. Godinho C, Domingos J, Cunha G, Santos AT, Fernandes RM, Abreu D, Goncalves N, Matthews H, Isaacs T, Duffen J, Al-Jawad A, Larsen F, Serrano A, Weber P, Thoms A, Sollinger S, Graessner H, Maetzler W, Ferreira JJ (2016) A systematic review of the characteristics and validity of monitoring technologies to assess Parkinson’s disease. J Neuroeng Rehabil 13:24
76. Noorkoiv M, Rodgers H, Price CI (2014) Accelerometer measurement of upper extremity movement after stroke: a systematic review of clinical studies. J Neuroeng Rehabil 11:144
77. Espay AJ, Bonato P, Nahab FB, Maetzler W, Dean JM, Klucken J, Eskofier BM, Merola A, Horak F, Lang AE, Reilmann R, Giuffrida J, Nieuwboer A, Horne M, Little MA, Litvan I, Simuni T, Dorsey ER, Burack MA, Kubota K, Kamondi A, Godinho C, Daneault JF, Mitsi G, Krinke L, Hausdorff JM, Bloem BR, Papapetropoulos S, Movement Disorders Society Task Force on Technology (2016) Technology in Parkinson’s disease: challenges and opportunities. Mov Disord 31(9):1272–1282
78. Silva de Lima AL, Evers LJW, Hahn T, Bataille L, Hamilton JL, Little MA, Okuma Y, Bloem BR, Faber MJ (2017) Freezing of gait and fall detection in Parkinson’s disease using wearable sensors: a systematic review. J Neurol 264(8):1642–1654

Copyright information

© The Author(s) 2018

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

1. Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
