1 Introduction

Pervasive and ubiquitous computing environments have been spreading through our society, with a particular focus on tracking users' location and activities by monitoring physiological signals, motion and orientation.

There are two main cyber-physical approaches in this field, using two different sensing components. The first is based on computer-vision devices for human motion capture, such as a single camera or a network of cameras, infrared cameras or other optical devices [1, 2]. Human movement analysis has also been addressed with inertial sensors, such as the accelerometers and gyroscopes present in common smartphones, or with dedicated sensors connected to microcontrollers (which can also communicate with smartphones or other computing devices) [2, 3].

The remote analysis of the user's position and body posture can provide major benefits in the healthcare and manufacturing industries. Our studies focus on the real-time analysis of two main problems. The first analyses the position of each sensor to enable whole-body gait analysis, which can in turn be used in low-cost, home-based rehabilitation systems. The second uses pattern recognition techniques to analyse static working postures, taking into account ergonomic risk factors for musculoskeletal disorders caused by poor working conditions. From these open problems, a main research question arises:

“How to design a low-cost system that is capable of accurate detection in real-time of the user’s whole-body?”

In this paper we propose a system that answers this question (the relation of this work to Cyber-Physical Systems is presented in Sect. 2). Section 3 gives a more detailed overview of the state of the art and the literature related to inertial measurement units and their applications. Section 4 provides a summarized description of our proposed physical system and the associated algorithms for signal processing, 9-degrees-of-freedom data fusion and the implementation of a real-time data transmission protocol. The initial laboratory results are presented in Sect. 5, while Sect. 6 gives a summarized conclusion of our work in progress and a preview of our future work plan.

2 Research Connection with Cyber-Physical Systems

Cyber-Physical Systems (CPS) have been emerging as the next computing revolution because they integrate the available communication and computational capacities to create new interactions between cyber and physical components [4]. These systems are merging the physical and virtual world and several applications have been identified in communication, transportation, infrastructure, energy, robotics, manufacturing and healthcare [4, 5].

Cyber-physical frameworks have already been proposed for use in rehabilitation systems [6]. Such systems can support home-based physiotherapy services, such as the monitoring of users in senior care, patients with reduced mobility (for example after hip or knee replacement surgery), stroke and heart attack patients, and other subjects who require occupational therapy to regain day-to-day skills.

CPS applications are already present in the healthcare system, as has been extensively reviewed [7], and require a special architecture that can protect the privacy of the data and handle communication between the hospital or rehabilitation centre, the storage unit and the sensors. Finally, such systems also need to manage the computational resources required by a feedback system that receives real-time data from various patients over a large network of sensors and must return a real-time classification or response to the users [8].

In the manufacturing area, there have been several advances in hazard risk management, and the approach to preserving workers' safety and health has been greatly improved through the development of new work equipment features and the definition of more secure tasks [9].

A major trend emerging in this area is the definition of a sensing, smart factory, which uses information from a network of physical sensors and virtual databases to make real-time risk assessments [9]; this has been extensively reviewed for the construction industry [10]. These sensors can also be used to monitor and track the assembly line, using real-time information to guide the manual assembly process [11]. At present, most studies in this area use vision-based motion capture frameworks to identify workers' ergonomic risk factors [12], even adapting systems such as the KinectTM sensor for body kinematic measurement in the workplace [13]. Most of these studies lack real-time management capability and will be hard to implement in a real setting, due to inherent problems in these systems: the background must be adapted (other objects can affect the field of view, and the lighting can affect the measurements) and the user must remain completely inside the field of view of the cameras. To solve this problem, some groups have been fusing data from physiological sensors (e.g. ECG, EEG, EMG), location sensors (e.g. GPS or Ultra-Wideband technology) and conventional cameras to monitor tasks in the workplace and analyse ergonomic risks in real-time [14].

3 State of the Art

Motion capture sensor systems are usually divided into two categories: optical sensors and non-optical sensors [2]. Optical systems use cameras to acquire motion information about the subject being studied. Due to their scalability and high sensitivity to motion, they are regarded as the standard in several fields, such as gait analysis, film production and video game animation. However, they have some disadvantages: they are usually very expensive, do not work in every lighting environment and require special software for different tasks. On the other hand, there are the non-optical systems, which comprise every other type, e.g. magnetic arrays and microelectromechanical systems (MEMS)-based systems [2].

MEMS-based inertial motion capture systems use inertial sensors (e.g. gyroscopes and accelerometers) to acquire motion information. They have been the subject of great interest due to the constant miniaturization of the sensors, their ability to work in any environment and their relatively low cost compared with other systems. They do, however, also have several drawbacks, namely the lack of information about the full 3D motion and the inherent noise in some sensors.

MEMS-based sensors have more recently been implemented for real-time human motion tracking in various physical activities, such as boxing, golf swinging and table tennis [15], and have even been able to identify some simple activities using fuzzy logic, such as walking, sitting or stair climbing [16]. These human activity measurements can be useful for identifying bad postures and ergonomic hazard risks in the workplace.

Both issues have been tackled simultaneously on two fronts. The first is using combinations of sensors to obtain the most information (e.g. combining 3D accelerometers with 3D gyroscopes to obtain the acceleration and angular velocity in three dimensions). The second is using data fusion algorithms to mitigate the effect of measurement noise, for example using Kalman filtering in a limited frequency band [17] or other methodologies to filter the inherent noise over the whole frequency band of interest [18].
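As a minimal illustration of this fusion idea (not the filter used later in this paper), a single-axis complementary filter high-passes the integrated gyroscope angle and low-passes the accelerometer-derived tilt; the function name and the blending constant `alpha` are illustrative choices:

```python
import math

def complementary_tilt(gyro_rate, acc_y, acc_z, angle_prev, dt, alpha=0.98):
    """Fuse one axis of gyroscope and accelerometer data.

    gyro_rate  -- angular rate around the axis (rad/s)
    acc_y/acc_z -- accelerometer components used to derive the tilt
    angle_prev -- previous fused angle estimate (rad)
    The gyro term tracks fast motion; the accelerometer term removes
    the slow drift of pure gyro integration.
    """
    acc_angle = math.atan2(acc_y, acc_z)  # tilt seen by the accelerometer
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * acc_angle
```

With a stationary sensor (zero rate) the estimate converges to the accelerometer tilt, removing the drift that a pure gyro integration would accumulate.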

A final approach is to fuse visual sensors (with or without optical markers) and inertial sensors, in order to solve the intrinsic problems of each system [19]. Concerning the rehabilitation of gait disorders, all of these approaches, i.e. using only inertial sensors, only optical sensors, or fusing both, have been tested and studied [20–23].

4 Materials and Methods

In this section we provide a representation of the physical sensor architecture, with the implemented sensor elements and a description of the fusion and signal processing algorithms.

Our current prototype uses the LSM9DS1, a 9-degrees-of-freedom (gyroscope, accelerometer, magnetometer) digital sensor unit from STMicroelectronics. The sensor output is communicated to a Raspberry Pi acting as signal processor, data logger and gateway to a local network, which allows not only the sensor to be configured remotely but also the sensor output data to be accessed by any computer on the network, using software that lets the user watch both the data in text format and a 3D model that mimics the sensor orientation in real-time. In Fig. 1 we show a simple schematic of the prototype and its experimental assembly.
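For illustration (this is a sketch, not our exact acquisition code), the LSM9DS1 delivers each X/Y/Z sample as three 16-bit little-endian two's-complement words, which must be scaled by the sensitivity of the selected full-scale range; the factors below are the datasheet values for the ±2 g and ±245 dps default ranges:

```python
import struct

# LSM9DS1 datasheet sensitivities for the default full-scale ranges;
# other ranges (e.g. +/-4 g, +/-500 dps) use different factors.
ACC_SENS_G = 0.000061     # g per LSB at +/-2 g
GYRO_SENS_DPS = 0.00875   # dps per LSB at +/-245 dps

def decode_sample(raw6, sensitivity):
    """Decode one 6-byte X/Y/Z burst read (16-bit little-endian,
    two's complement) into scaled physical units."""
    x, y, z = struct.unpack('<hhh', raw6)
    return (x * sensitivity, y * sensitivity, z * sensitivity)
```

On the Raspberry Pi, the raw bytes would come from an I2C or SPI burst read of the sensor's output registers before being passed to this decoder.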

Fig. 1.

(A) Prototype schematic (B) Experimental assembly of the prototype.

To address the noise issue described in Sect. 3, we implemented Mahony's filter in Direction Cosine Matrix form in both Python and MATLAB, following the algorithm in [24]; the problem and the filtering algorithms are discussed in detail in [24, 25], respectively.
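A minimal sketch of the Mahony filter in DCM form (the gains and the SVD re-orthonormalization step are our illustrative choices, not necessarily those of [24]): the accelerometer's gravity direction is compared with the one predicted by the current rotation matrix, and the cross-product error, with proportional and integral feedback, corrects the gyroscope rates before integration.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

class MahonyDCM:
    def __init__(self, kp=2.0, ki=0.5):
        self.R = np.eye(3)          # body-to-world rotation estimate
        self.kp, self.ki = kp, ki
        self.e_int = np.zeros(3)    # integral of the correction error

    def update(self, gyro, acc, dt):
        """gyro in rad/s, acc in any consistent units, dt in seconds."""
        a = acc / np.linalg.norm(acc)            # measured gravity direction
        v = self.R.T @ np.array([0.0, 0.0, 1.0])  # predicted gravity direction
        e = np.cross(a, v)                        # drift-correction error
        self.e_int += e * dt
        w = gyro + self.kp * e + self.ki * self.e_int
        # Integrate R_dot = R * skew(w), then re-orthonormalize via SVD
        self.R = self.R @ (np.eye(3) + skew(w) * dt)
        u, _, vt = np.linalg.svd(self.R)
        self.R = u @ vt
        return self.R
```

The integral term is what absorbs a constant gyroscope bias: for a stationary sensor it converges to the negative of the bias, which is exactly the drift behaviour corrected in the static test below.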

5 Initial Laboratory Results and Discussion

In Fig. 2 we present the results of the static test, in which the sensor was idle for 15 min with a sampling rate of 119 Hz on both the gyroscope and accelerometer and 20 Hz on the magnetometer. The figure shows the difference between the uncorrected signal (A) and the corrected signal (B); the initial behaviour is the filter response. Alpha, beta and gamma correspond to the rotation (relative to the initial orientation) around the x, y and z axes, respectively. Although there is an offset after the drift correction, the values tend to be stable after 15 min. This offset is a consequence of the filter's signal response and can easily be removed after a few minutes of stabilization. It is important to note that this drift issue remains a problem even in a recent study on rehabilitation systems for gait disorders, which solved it by fusing the inertial sensors with information from an infrared camera [26].
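The offset removal mentioned above amounts to estimating the residual value over a window recorded after the filter has settled and subtracting it from the recording; a minimal sketch (the settling and window durations are illustrative, not the ones used in our tests):

```python
import numpy as np

def remove_settled_offset(angles, fs, settle_s=120.0, window_s=10.0):
    """Subtract the post-settling offset from an orientation recording.

    angles   -- (N, 3) array of alpha/beta/gamma samples
    fs       -- sampling rate in Hz
    settle_s -- time allowed for the filter response to die out
    The offset is the per-axis mean over a short window taken after
    the settling period.
    """
    start = int(settle_s * fs)
    stop = start + int(window_s * fs)
    offset = angles[start:stop].mean(axis=0)
    return angles - offset
```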

Fig. 2.

Difference between the uncorrected signal (A) and the corrected signal (B)

For the next experiment, we tested the sensor under rapid and slow movements to check the filter response, using a goniometer to rotate the sensor around the z axis (gamma) while trying to keep the other axes stable. The rotation sequence was the following: at 2.5 s, we moved from 0° (all angles are relative to this starting point) to 24°, and then to 165° with 2 additional steps of 5°. Afterwards we moved to –100° (8 s), followed by a fast rotation to 90° (11.5 s), then to –45°, then to 150°. At 15 s, we rotated back to 0°, then performed a fast rotation to 160°, a slow rotation back to 90°, again to 160°, and a gradual slow rotation to –25°. At 36 s, we rotated to 45°, back to –110°, then to 120°, followed by a slow rotation to –125°, and finally a rotation to 110°.

Figure 3 shows the respective test around the z axis (gamma), with a sampling rate of 60 Hz on both the gyroscope and accelerometer and 20 Hz on the magnetometer; (A) represents the uncorrected signal and (B) the corrected one. Although the differences are barely noticeable in the gamma plot, the alpha and beta plots show a significant drift effect, demonstrating that we can also correct this problem while the sensor is moving.

Fig. 3.

Difference between the uncorrected signal (A) and the corrected signal (B)

6 Conclusions and Future Work Plan

In this work, we implemented a low-cost measurement unit developed for accurate motion capture, based on a dedicated circuit with 9-degrees-of-freedom sensors (accelerometer, gyroscope and magnetometer) connected to a Raspberry Pi, which transmits the data to a local network, allowing access by authorized computers. We implemented signal processing and data fusion algorithms to correct the inherent drift that occurs even when the sensor is stationary, as shown in Fig. 2. As future work, we may adopt an absolute position coordinate system (e.g. the north-east-down system) instead of the existing coordinate system.

We are planning to use this system in two specific biomedical applications. The first is linked to the real-time analysis of working postures and the identification of ergonomic risk factors for musculoskeletal disorders. For this, we plan to start with a network of at least 3 sensors to test the analysis on the upper limbs. We will build a feedback system that uses the ISO 11226 recommendations for the holding time of upper-arm elevation in a specific posture and for the acceptable shoulder, forearm and hand positions in a static posture [27]. To analyse dynamic working postures, we will use a framework similar to the FAST ERGO_X project [28], which is based on fuzzy rules. Instead of camera sensors, we will use inertial sensors, which will make real-time analysis possible.
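The planned feedback rule can be sketched as a simple state machine that flags a static posture once an elevation angle has been held above a limit for too long; the angle and holding-time thresholds below are placeholders for illustration, NOT the actual ISO 11226 values:

```python
def flag_static_elevation(elevations_deg, fs, angle_limit=20.0, max_hold_s=60.0):
    """Flag each sample where the arm elevation has stayed above
    angle_limit for longer than max_hold_s.

    elevations_deg -- sequence of elevation angles (degrees)
    fs             -- sampling rate in Hz
    angle_limit / max_hold_s are illustrative placeholder thresholds.
    """
    flags = []
    held = 0                                  # consecutive samples above limit
    for e in elevations_deg:
        held = held + 1 if e > angle_limit else 0
        flags.append(held / fs > max_hold_s)  # True -> warn the worker
    return flags
```

In the real system, the per-posture holding-time limits would be taken from the ISO 11226 tables, and the crisp threshold would be replaced by the fuzzy rules of the FAST ERGO_X-style framework.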

The second application is the development of a low-cost system for whole-body gait analysis, which can be used in home-based rehabilitation systems. We will use the same network of sensors as for the upper limbs and validate them against the gold-standard sensors used in rehabilitation centres.