
1 Introduction

Many beloved characters appear in visual works such as films. One method for presenting these characters is animatronics, a technology that combines special-effects modeling with mechatronics and is used in settings ranging from the presidents at Disney World to the creatures in Star Wars. In recent years, animatronics have also been combined with projection mapping in live spectacles to give spectators a lifelike experience. In one approach, a performer gets inside an animatronic suit and controls it directly. This approach reflects the performer’s body movements and allows direct interaction with other actors and guests, creating a true sense of presence without any CGI. However, suit-based animatronics often require multiple operators to move parts of the character by remote control in time with the performer’s movements, and it is difficult to keep those motions synchronized with the performer playing the character. Detailed acting, such as the character’s facial expressions, also depends largely on the experience and intuition of the operators.

This study proposes a combined system that expresses the performer’s facial expressions on the character in real time. The system uses multiple photo reflectors together with electrooculography (EOG) to recognize expression and gaze. It fits into small spaces and can be used inside an animatronic without the need for a camera. Because the system synchronizes the performer’s body movements and facial expressions, it allows the performer to act out a character intuitively, making a high level of interaction possible between the character and other actors and guests.

2 System Overview

This study aims to develop a new space-saving, lightweight sensor unit that can be used inside an animatronic suit to recognize and express a performer’s facial expressions in real time. To this end, we develop a sensor unit that recognizes facial expressions and an animatronic that expresses the recognized expression information.

2.1 Development of a Sensor Unit that Recognizes Expressions

The human face has 22 types of facial muscles, which move in complex combinations to create a person’s various facial expressions. The shape of the facial skin surface changes accordingly as the expression changes.

This study detects these changes using multiple photo reflectors and estimates the facial expression of the performer inside the suit. The system also aims to recognize blinking and gaze by combining the photo reflector readings with the obtained EOG. As shown in Fig. 1, to obtain the EOG, the biological signals from an off-the-shelf myoelectric sensor were passed through a filter, and their moving average was computed on a microcontroller (Arduino Mega 2560) and treated as the input signal. The myoelectric sensor used was the Grove EMG Detector from Seeed Studio.

Fig. 1. System chart and filter configuration diagram.
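As a rough illustration of this signal path, the following Arduino sketch smooths the analog output of such a myoelectric sensor with a moving average before it is used as the input signal. The pin assignment, window length, and sampling interval are illustrative assumptions, not values taken from this study.

```cpp
// Minimal sketch: moving-average smoothing of an analog EOG/EMG signal.
// Assumptions: sensor output on pin A0, 16-sample window, ~200 Hz sampling.

const int EOG_PIN = A0;   // analog output of the myoelectric front-end (assumed)
const int WINDOW  = 16;   // moving-average window length (assumed)

int  samples[WINDOW];     // ring buffer of recent ADC readings
long sum = 0;             // running sum over the window
int  idx = 0;             // write position in the ring buffer

void setup() {
  Serial.begin(115200);
  for (int i = 0; i < WINDOW; i++) samples[i] = 0;
}

void loop() {
  int raw = analogRead(EOG_PIN);   // 0..1023 on the Mega 2560's 10-bit ADC
  sum -= samples[idx];             // drop the oldest sample from the sum
  samples[idx] = raw;              // overwrite it with the newest sample
  sum += raw;
  idx = (idx + 1) % WINDOW;

  int smoothed = sum / WINDOW;     // moving average used as the input signal
  Serial.println(smoothed);
  delay(5);                        // ~200 Hz sampling interval (assumed)
}
```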

2.2 Development of Animatronic that Shows Expressions

We developed an animatronic that outputs the facial-expression changes recognized by the sensor unit as actual facial expressions. Small arms corresponding to each expression-forming area of the face are controlled using the variation amounts estimated by the sensor unit as target values. The skin of the animatronic is made of a silicone (Gel-10) used in prosthetic makeup; Gel-10 is an extremely flexible material that allows for rich expression.
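As a minimal sketch of this control scheme, the following Arduino code maps normalized variation amounts onto hobby-servo targets for the small arms. The pin numbers, number of arms, angle range, and example pose are assumptions for illustration, not the actual configuration of our animatronic.

```cpp
#include <Servo.h>

// Hypothetical mapping from estimated variation amounts (0.0..1.0) to servo
// targets for the facial arms; pins and angle ranges are assumed values.

const int NUM_ARMS = 4;
const int SERVO_PINS[NUM_ARMS] = {2, 3, 4, 5};   // assumed wiring
Servo arms[NUM_ARMS];

void setup() {
  for (int i = 0; i < NUM_ARMS; i++) {
    arms[i].attach(SERVO_PINS[i]);
  }
}

// Drive each arm toward the target position estimated by the sensor unit.
void applyExpression(const float variation[NUM_ARMS]) {
  for (int i = 0; i < NUM_ARMS; i++) {
    // Map the normalized variation amount onto an assumed 60..120 degree range.
    int angle = 60 + (int)(variation[i] * 60.0f);
    arms[i].write(constrain(angle, 60, 120));
  }
}

void loop() {
  // Example target: a "Joy"-like pose (illustrative values only).
  float joy[NUM_ARMS] = {0.8f, 0.8f, 0.2f, 0.6f};
  applyExpression(joy);
  delay(20);   // update at ~50 Hz, matching a typical servo frame rate
}
```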

3 Experiments

Figure 2 shows the experimental results of the proposed method. Facial-expression changes are recognized in real time and reflected in a CG avatar standing in for the animatronic. At present, the range of gaze movement is recognized in three discrete steps, and the five expressions “Joy,” “Anger,” “Sadness,” “Surprise,” and “Normal” are recognized. However, complicated movements such as sudden, diagonal, or circular eye movements are not supported, and although changes among the five broad expressions are recognized, misrecognition tends to occur between expressions.

The EOG obtained with the myoelectric sensor contains noise, which makes quick and complicated eye movements especially difficult to capture. Although multiple filters and amplifiers are used for noise reduction and amplification, the EOG is a biological signal measured through skin-contact electrodes and is therefore prone to various kinds of interference, such as the influence of the ambient air or the condition of the subject’s skin; it is difficult to remove this noise with just an LPF or HPF. In addition, EOG exhibits a characteristic drift: when the electrodes remain attached for a long period, the baseline voltage gradually rises [1]. The reference voltage in this experiment may therefore have shifted due to drift. In the future it will be necessary to revise the noise-processing method and introduce machine learning to design a system that is less susceptible to noise.

The main cause of expression misrecognition in this system was that only four photo reflectors were used to recognize the five broad expression changes, including the neutral state; partial transitions from “Sadness” to “Anger” and from “Sadness” to “Joy” were misrecognized. The current sensor placement is presumably unable to capture the skin-shape changes of each expression correctly. In the future we will review the sensor positions, increase the number of sensors, and aim to recognize the detailed expression changes that occur between expressions.
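One common mitigation for such drift, shown below as a sketch rather than as the method used in this study, is to subtract a slowly updated baseline from the smoothed EOG so that only the comparatively fast deflections caused by eye movements remain. The smoothing constant is an assumed value.

```cpp
// Sketch of baseline-drift compensation for EOG (assumed constants).
// A very slow exponential average tracks the drifting baseline; subtracting
// it leaves the faster deflections caused by eye movements.

float baseline = 512.0f;              // slow estimate of the drifting offset
const float BASELINE_ALPHA = 0.001f;  // small value -> baseline adapts slowly (assumed)

// Returns the drift-compensated EOG value for one smoothed sample.
float compensateDrift(int smoothedEog) {
  baseline += BASELINE_ALPHA * (smoothedEog - baseline);  // track slow drift
  return smoothedEog - baseline;                          // keep the fast component
}
```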

Fig. 2. Experimental results.
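To make the recognition step concrete, the sketch below classifies the four photo-reflector readings by nearest-neighbor matching against one template per expression and thresholds the smoothed EOG into three gaze steps. The template values and thresholds are illustrative assumptions that would need per-performer calibration; this is one plausible realization, not the classifier actually used in the experiment.

```cpp
// Illustrative classifier: nearest-neighbor matching of four photo-reflector
// readings against per-expression templates (all numbers are assumed).

const int NUM_SENSORS = 4;
const int NUM_EXPR    = 5;
const char* EXPR_NAMES[NUM_EXPR] = {"Normal", "Joy", "Anger", "Sadness", "Surprise"};

// Assumed template readings for each expression (ADC counts).
const int TEMPLATES[NUM_EXPR][NUM_SENSORS] = {
  {500, 500, 500, 500},   // Normal
  {650, 620, 480, 520},   // Joy
  {420, 450, 630, 600},   // Anger
  {460, 470, 560, 440},   // Sadness
  {700, 690, 660, 650},   // Surprise
};

// Return the index of the template closest (squared Euclidean distance)
// to the current reading; EXPR_NAMES[result] gives the recognized label.
int classifyExpression(const int reading[NUM_SENSORS]) {
  long best = -1;
  int bestIdx = 0;
  for (int e = 0; e < NUM_EXPR; e++) {
    long dist = 0;
    for (int s = 0; s < NUM_SENSORS; s++) {
      long d = reading[s] - TEMPLATES[e][s];
      dist += d * d;
    }
    if (best < 0 || dist < best) { best = dist; bestIdx = e; }
  }
  return bestIdx;
}

// Split the smoothed EOG value into three gaze steps (thresholds assumed).
int classifyGaze(int smoothedEog) {
  if (smoothedEog < 450) return -1;   // left
  if (smoothedEog > 570) return  1;   // right
  return 0;                           // center
}
```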

4 Conclusion

This paper developed a space-saving, lightweight sensor unit that can be used inside animatronic suits, aimed to recognize and express a performer’s facial expressions in real time, and conducted experiments. Using multiple photo reflectors and EOG signals, we were able to recognize simple (one-dimensional) eye movements and five broad facial-expression changes.

In the future we plan to improve the accuracy with which the sensors recognize facial expressions and eye movements, and to proceed with the production of an animatronic mask that reflects the performer’s facial expressions (Fig. 3).

Fig. 3. Experimental animatronic mask and sensor unit.