1 Introduction

Along with improvements in human sensing and AI technology, studies of emotion recognition are being performed more than ever. A study by Ekman that proposed recognizing six basic emotions from facial expressions is one of the most representative in this research field. In that approach, feature values of facial expressions are calculated from images or videos captured by cameras to recognize emotions.

Body movements are another key to estimating or recognizing emotions, and many studies have addressed them. However, in contrast to recognition based on facial expressions, the targeted emotions differ from study to study [1]. Variations in sensing methods and target situations make it difficult to draw general conclusions from these studies [2]. The feature values, which depend on the sensors and situations, also differ in each study. The small number of open datasets reflects this difficulty.

We have focused on Laban movement analysis (LMA) to recognize emotions in a more general way [3]. LMA was originally proposed as a general method for interpreting human movements and has been used by dancers, actors, etc. [4]. LMA is composed of several characteristic elements, such as space (direct/indirect), weight (light/strong), and time (sudden/sustained). Space refers to the directional bias of body movements, weight to the strength of motions, and time to the haste of changes in body movements. However, these elements are not defined numerically.

In our previous study, we defined original feature values based on LMA and enabled the estimation of eight emotions with 60% or higher accuracy [5, 6]. However, we needed to capture all motions during the task and then ask participants to report the emotions they felt during the task. We also proposed a classification method of emotional expression by body movements [7]. There, we showed that types of emotional expression by body movements can be classified and defined, which makes semi-supervised learning for emotion estimation possible. For example, when we classified the movements into three types, the estimation accuracy was about 68%. In both studies, we selected personal fabrication as the target situation, in which various motions and actions occur.

In this study, we apply our LMA feature values and the classification method of emotional expression by body movements to a design creation task, a new kind of digital fabrication task that uses 3D printers, laser cutters, etc. [8]. We first reveal what kinds of emotions are evoked and what characteristics are observed during the design creation task. Next, we perform measurement of body movements and the emotion interview during the design creation task. Then, we evaluate the results and clarify that our proposed method is applicable to the design creation task.

2 Classification Method of Emotional Expression Type

2.1 The Feature Values Based on Laban Movement Analysis

To estimate emotions from body movements, we focused on Laban theory [4], which is known worldwide as the mainstream method for interpreting human movements such as dance. We proposed an original classification method of emotional expression type based on LMA, applying the theory to action analysis and emotion recognition. Laban suggested a notation method called “Effort-Shape description” for body movements [5]. Within this notation, we focused on Effort, which expresses movements with respect to inner intention and is deeply related to emotion. Effort has four factors: Space, Weight, Time, and Flow. Among these, we selected three factors and defined our original feature values, as shown in Fig. 1. Space refers to biases in the directions (Direct/Indirect) of body parts in LMA and is defined as the area of the triangle formed by the head position and the positions of the wrists. Weight refers to the strength of motions (Strong/Light) in LMA and is defined as the vertical position of the head. Time refers to the haste of change (Sudden/Sustained) in body movements in LMA and is defined as the maximum of the moving averages of the speeds of the head and the wrists over 60 s.

Fig. 1. The feature values based on Laban Movement Analysis.
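As a concrete illustration of these definitions, the following R sketch computes the three feature values from motion-capture position data. It is a minimal sketch under our own assumptions: the 100 Hz sampling rate, the z-up coordinate convention, the matrix layout, and the way the three marker speeds are combined are not details of the original implementation.

# head_pos, lwrist_pos, rwrist_pos: n x 3 matrices of x/y/z positions [m],
# assumed to be sampled at fs = 100 Hz with z as the vertical axis.

# Space: area of the triangle formed by the head and the two wrists.
lma_space <- function(head_pos, lwrist_pos, rwrist_pos) {
  a <- lwrist_pos - head_pos
  b <- rwrist_pos - head_pos
  cross <- cbind(a[, 2] * b[, 3] - a[, 3] * b[, 2],
                 a[, 3] * b[, 1] - a[, 1] * b[, 3],
                 a[, 1] * b[, 2] - a[, 2] * b[, 1])
  0.5 * sqrt(rowSums(cross^2))          # triangle area per frame
}

# Weight: vertical position of the head.
lma_weight <- function(head_pos) head_pos[, 3]

# Time: maximum of the 60 s moving average of head/wrist speed.
lma_time <- function(head_pos, lwrist_pos, rwrist_pos, fs = 100) {
  speed <- function(p) c(0, sqrt(rowSums(diff(p)^2)) * fs)   # m/s per frame
  s <- pmax(speed(head_pos), speed(lwrist_pos), speed(rwrist_pos))
  w <- 60 * fs                          # 60 s window in frames
  max(stats::filter(s, rep(1 / w, w), sides = 1), na.rm = TRUE)
}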

2.2 Emotion Interview

To measure emotions, we also needed to decide on target emotions. We therefore focused on the core affect model [9]. In this model, various emotions are arranged on a circle composed of a horizontal axis representing pleasant/unpleasant and a vertical axis representing arousal/calm. According to this model, we chose eight target emotions, as shown in Fig. 2. Emotions during fabrication are then measured by an emotion interview: after fabrication, the fabricator looks back on a video of the task and selects the suitable emotions and their intensity (5 grades).

Fig. 2. Emotions used in the emotion interview.

2.3 Classification Method of Emotional Expression

We assumed that emotions are expressed via body movements. This was partially confirmed by our previous study [5, 6], in which we could estimate emotions by using the LMA feature values and simple decision trees. However, there remains the question of whether the expressed movements have typical types, just as facial expressions do. If such types exist, it becomes possible to estimate emotions by using semi-supervised learning.

For this purpose, we first proposed a method to classify expression types by movements alone. Here, we focused on sensitivity analysis, which can analyze complex systems by observing their inputs and outputs [10]. Based on this, we defined a parameter called expression sensitivity (ES) for each LMA value of Space, Weight, and Time. We then introduced Ward’s method for hierarchical clustering using the normalized ES parameters [7].
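Assuming the ES parameters have already been computed (one row per participant, one column per LMA value; the column names below are hypothetical), the clustering step can be sketched in R as follows:

# es: data frame with columns es_space, es_weight, es_time (hypothetical names)
es_scaled <- scale(es)                              # normalize each ES parameter
hc <- hclust(dist(es_scaled), method = "ward.D2")   # Ward's method
plot(hc)                                            # dendrogram of expression types
types <- cutree(hc, k = 3)                          # e.g., cut into three types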

As the next step, we analyzed each type of movement. If there were differences in the feature values of the movements, we could conclude how internal emotions are expressed as external movements.

After these steps, we calculated the estimation rate to evaluate the method. If such types exist, emotions can be estimated at a high rate.

2.4 Evaluation Experiment

To verify the effectiveness of our method, we performed an experiment that included measurement of body movements and an emotion interview in a fabrication task. The participants were twelve male and eight female Japanese students.

First, we asked the participants to build an original synthesizer in pairs by using electronic building blocks (KORG, littleBits). During fabrication, the participants’ body motions were recorded using a motion capture system (Vicon, Bonita 10) and video cameras, as shown in Fig. 3, and we calculated the LMA values from these data. Next, the participants underwent the emotion interview, and we extracted their emotions during fabrication.

Fig. 3. LittleBits and experimental scene.

We applied our proposed method to the measured datasets and extracted three emotional expression types, as shown in Fig. 4. In the figure, ** means p < 0.01 and * means p < 0.05. We then calculated the estimation accuracy for each type’s dataset and for randomly chosen datasets by tenfold cross-validation using a discriminator constructed with a support vector machine (SVM). As a result, the average accuracy was 67.97% for the per-type datasets and 51.02% for the randomly chosen datasets. Therefore, our proposed method showed its validity in fabrication.

Fig. 4. Distribution of the classification features.

3 Design Creation Task for FES Watch U

In this study, we selected a design creation task for digital fabrication using a display watch (SONY, FES Watch U), for which both the face and the belt parts can be designed. We prepared tools for the design creation: a Microsoft Surface Pro 4, the Surface Pen, and a mouse. In the experiment, participants designed a template image for the FES Watch U using Adobe Illustrator or Photoshop, as shown in Fig. 5. Then, participants transferred the design image to the FES Watch U via an Apple iPad Pro and an application called FES Closet. It was possible to design in full color, but the design was displayed in monochrome on the FES Watch U because its face is made of electronic paper.

Fig. 5. FES Watch U and example of a design.

4 Analysis of Evoked Emotions in Design Creation

4.1 Interview for Extraction of Evoked Emotions

We performed an experiment in which participants performed the design creation task described in Sect. 3. Afterward, we conducted an interview to extract evoked emotions based on the evaluation grid method [11]. In the interview, we instructed the participants to comment on as many positive and negative points in the design creation as possible. Then, for each commented point, we asked laddering (ladder-up/ladder-down) questions and extracted related evaluation items. The participants were six male Japanese students.

We used E-Grid, a visual analytics system for evaluation structures [12], to analyze the extracted evaluation items. All the evaluation items were input into E-Grid, and we merged categories that had the same meaning. We also modified inverse structures. These modifications were made after discussion between two of the authors.

To make the numbers of positive and negative emotions the same, we set the thresholds of evaluation items shown in E-Grid to 0.75 and 0.64, respectively. Figures 6 and 7 show the positive and negative evaluation structures. The left side of each evaluation structure indicates superordinate concepts (emotions), which were extracted by ladder-up questions, and the right side indicates subordinate concepts (conditions), which were extracted by ladder-down questions. As a result of the analysis, we extracted fourteen positive emotions and seventeen negative emotions from the structures. The evoked emotions are shown in Table 1.

Fig. 6. Positive evaluation structure in design creation.

Fig. 7. Negative evaluation structure in design creation.

Table 1. Evoked emotions extracted by the evaluation grid method in design creation.

4.2 Construction of Core Affect Model

In our evaluation experiment, we analyzed the characteristics of the thirty-one extracted emotions by using a two-dimensional plane consisting of a pleasant/unpleasant axis and an arousal/calm axis, known as Russell’s core affect model [9]. First, we asked participants to rate each emotion on a five-point Likert scale. Next, we converted the rated data to values ranging from −2 to +2 and calculated the average for each emotion. We then arranged each emotion on the core affect model based on these data. The participants were six male and three female Japanese students who used the FES Watch U.
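A minimal sketch of this conversion and placement in R, assuming the ratings are stored with hypothetical column names emotion, pleasant, and arousal (each rated 1-5):

# Map the 1..5 Likert ratings to -2..+2 and average per emotion.
ratings$pleasant <- ratings$pleasant - 3
ratings$arousal  <- ratings$arousal  - 3
coords <- aggregate(cbind(pleasant, arousal) ~ emotion, data = ratings, FUN = mean)
# Place each emotion on the core affect plane.
plot(coords$pleasant, coords$arousal, xlim = c(-2, 2), ylim = c(-2, 2),
     xlab = "unpleasant - pleasant", ylab = "calm - arousal")
text(coords$pleasant, coords$arousal, labels = coords$emotion, pos = 3)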

We constructed a design creation version of the core affect model, as shown in Fig. 8. The emotions showed strong positive correlations between the pleasant/unpleasant axis and the arousal/calm axis. Therefore, the emotions evoked in design creation were strongly connected with pleasant-arousal on one side and unpleasant-calm on the other.

Fig. 8. Core affect model of design creation version (in Japanese).

5 Application of Proposed Method to Design Creation

5.1 Experiment

We performed an experiment to measure body movements and conduct emotion interviews in the design creation task described in Sect. 3. The experimental scene and setup are shown in Fig. 9. During the experiment, we attached plates of motion capture markers to the participants’ heads, backs, arms, and wrists, and we recorded their body movements using a motion capture system and video cameras. When answering the emotion interview, participants used the core affect model shown in Fig. 10. This model is based on Fig. 2, with words added to describe details based on the results of Sect. 4. The participants were eight male and fifteen female Japanese students.

Fig. 9. Experimental scene and environment.

Fig. 10. Emotions used in the emotion interview of design creation.

5.2 Application of Classification Method of Emotional Expression Type

We applied the proposed method described in Sect. 2 to design creation, performing the same analysis procedure using SAS JMP 13. First, we converted the data measured in Sect. 5.1 into LMA values and calculated the ES values from them. Afterward, we classified the emotional expression types by hierarchical clustering with Ward’s method. As a result of the clustering, we extracted four types (Fig. 11).

Fig. 11. Results of hierarchical clustering.

Next, we analyzed the characteristics of each type. We performed an analysis of variance (ANOVA) on each ES to determine what characteristics each type had. As a result, there were significant differences at the 5% significance level for all types. Furthermore, we performed multiple comparisons using the Tukey-Kramer HSD test to reveal the relations among the ES values within each type. Figure 12 shows the results; in the figure, ** means p < 0.01, * means p < 0.05, and † means p < 0.10. Each type had different characteristics. In Type 1, the elements other than Time were related to emotional expression. In Type 2, Space was the important factor. In Type 3, Time contributed the most. In Type 4, Weight contributed to describing emotional expression.

Fig. 12. Distribution of the classification features.
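The per-type analysis can be sketched in R as follows; d is assumed to hold, for one expression type, the ES samples in a column es and the corresponding LMA factor (Space/Weight/Time) in a column lma. TukeyHSD implements the Tukey-Kramer procedure for possibly unequal group sizes.

# One-way ANOVA on the ES values of a single type.
fit <- aov(es ~ lma, data = d)
summary(fit)                        # significance of differences among ES values
# Tukey-Kramer multiple comparisons between Space, Weight, and Time.
TukeyHSD(fit, conf.level = 0.95)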

5.3 Evaluation by Using SVM

We verified whether our proposed method could be applied to design creation in terms of estimation accuracy. We calculated the accuracy using a discriminator constructed with an SVM: we trained the SVM-based discriminator with a radial basis function kernel using the e1071 package in R, performed tenfold cross-validation, and optimized the tuning parameters (gamma and C). We trained separate discriminators for each type’s dataset and for randomly chosen datasets; the randomly chosen datasets consisted of 5076 samples drawn from all participants. Table 2 shows the results of the estimation. The estimation accuracy was about 20% higher for each type’s dataset than for the randomly chosen datasets, showing that our proposed method is valid in design creation. We also compared the accuracy between the two tasks and obtained equivalent results. Therefore, our proposed method is applicable to design creation.

Table 2. Result of estimation.
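A sketch of this training and evaluation procedure is shown below; the column names (space, weight, time, emotion) and the tuning grid are our assumptions, not the original settings.

library(e1071)

# d: LMA feature values with the interviewed emotion label (a factor),
# for one expression type or for a random split.
set.seed(1)
tuned <- tune.svm(emotion ~ space + weight + time, data = d,
                  kernel = "radial",
                  gamma = 10^(-3:1), cost = 10^(-1:3),
                  tunecontrol = tune.control(cross = 10))    # tenfold CV
# Refit with the best gamma/C and estimate accuracy by tenfold CV.
fit <- svm(emotion ~ space + weight + time, data = d, kernel = "radial",
           gamma = tuned$best.parameters$gamma,
           cost  = tuned$best.parameters$cost, cross = 10)
fit$tot.accuracy    # percent accuracy, compared per type vs. random splits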

6 Discussion

In this study, we showed that the classification method of emotional expression type proposed in our previous study is applicable to a different task with different body motions. This suggests that the classification method is effective across various fabrication tasks. We also conclude that this method is potentially applicable to extracting emotions via body movements in various scenes of PC use, such as work, school, and private settings.

In this study, our method achieved an average estimation accuracy of 80% for each type. To achieve higher accuracy, it will be necessary to analyze body movements more precisely and include them as parameters. For example, detecting the body movements particular to each task, especially at the timing of emotional changes, would be effective.

We proposed an emotion extraction method via body movements using a motion capture system, but the scenarios in which such equipment can be used are limited. A novel parameter that replaces our LMA values is needed to use our method in a wider range of situations. We therefore plan to use values that can be measured with general PC devices, such as mouse movements and head movements captured by a PC camera.

7 Summary

In this study, we clarified that the classification method of emotional expression type is applicable to design creation by evaluating its estimation accuracy. First, we extracted the emotions evoked in a design creation task and revealed their characteristics by introducing the evaluation grid method. Next, we performed an experiment to measure body movements and emotions in design creation using the FES Watch U. Then, we applied the method to the measured datasets and extracted four types with different characteristics. Finally, we compared the estimation accuracy between a fabrication task and the design task, and the results showed that our proposed method is applicable to design creation as well.