1 Introduction

It is expected that artifacts such as robots will increasingly be developed to cooperate with us and assume social roles in human society. We can communicate with strangers because we know that other people possess the intellectual ability to form a relationship with us; with an unknown artifact, however, it is unclear whether it can communicate with us at all or how it will behave. As humans, we attribute agency on the basis of actual interaction. For example, we do not regard the people we pass in a crowd as communication partners, yet we often treat computers as agents [1]. It is therefore necessary to clarify how people attribute agency to an object through interaction. Humans can perceive properties of an object or agent, such as its animacy and intention, by observing moving geometric figures [2–4]. It has also been shown that humans can recognize animacy and intention through interaction with an abstractly shaped robot [5]. These studies indicate that actual behavior during interaction alone can encourage the development of communicative relationships between humans and artifacts. However, in almost all previous studies, the experimental task presupposed that the participants and the objects would interact. In real situations, people must first realize, in the early stages of an interaction, that an artifact is capable of interacting with them and reacting to their actions. We hypothesize that this process is carried out subconsciously and that, once it is complete, the interaction shifts to a conscious stage in which a relationship is created. We define this subconscious process as the “stage of subconscious interaction”. Assuming that this process exists, we conducted an experiment in which participants interacted with an unknown entity.

2 Stage of Subconscious Interaction

In human communication, we immediately recognize a partner as human. The human brain possesses areas specifically tuned to detecting the human body [6, 7]. In addition, as the phenomenon of “biological motion” [8] shows, we have specialized perception for recognizing human body movement. On the basis of these abilities, other people are regarded as agents with whom we can communicate. Artifacts, on the other hand, must either have a physical appearance that can be recognized as that of an agent or, through interaction, encourage humans to perceive them as such. We focus on interaction rather than physical appearance, because interaction can be used in designing the form of an artifact. Previous research on interaction with artifacts has observed participant behavior under the premise that the participants and artifacts can interact. However, in real situations, people must first realize, in the early stages of an interaction, that an artifact is capable of interacting with them and reacting to their actions (Fig. 1). We hypothesize that this process is carried out subconsciously and that, once it is complete, the interaction shifts to a conscious stage in which a relationship is created. We define this subconscious process as the “stage of subconscious interaction” and attempt to clarify it. Until recently, many studies on interaction have used upper-limb motion. Whole-body movements, such as gait, differ from upper-limb actions [9]. While walking, humans unconsciously adjust their direction and automatically avoid obstacles underfoot, so unconscious motions occur more frequently than in interaction using only the upper limbs. By using an abstractly shaped robot whose only function is to move on the floor, we can observe such lower-limb-driven interaction.

Fig. 1. Stage of subconscious interaction

3 Experiment 1

3.1 Method

As shown in Fig. 2, we used two rooms as the experimental environment. Both rooms were equipped identically. The positions of the participants were mirrored by the robots: the position of the robot in one room mirrored the position of the participant in the other room. In this way, each participant was able to interact with the other participant without the two seeing each other (Fig. 3). We used a Roomba vacuum robot controlled via Bluetooth. Wheel encoders measured the robot’s position, and the participant’s position was measured with a laser rangefinder. We assigned 10 pairs (20 university students) to the unknown and known conditions. All participants were instructed to move freely within a three-meter-square field. In the unknown condition, the participants were told nothing about the robot’s behavior. In the known condition, one member of each pair was instructed, “The robot moves as an entity that you want to relate to,” and the other, “The robot moves as an entity that you do not want to relate to.” Each participant was left alone in the room, and the interaction between the participants and the robot was observed for three minutes. The participants then responded to questionnaires.
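The mirroring behavior described above can be sketched as a simple control loop. This is an illustrative reconstruction, not the original apparatus code: the function names, gains, and coordinate convention are assumptions.

```python
# Sketch of the position-mirroring logic: the robot in one room is driven
# toward the position that the participant in the other (identically laid
# out) room currently occupies. Gains and speed cap are assumed values.

def mirror_target(participant_pos):
    """The robot's target is simply the participant's position,
    expressed in the other room's shared coordinate frame."""
    x, y = participant_pos
    return (x, y)

def drive_command(robot_pos, target, gain=0.5, max_speed=0.3):
    """Proportional drive toward the target; speed capped (m/s)."""
    dx = target[0] - robot_pos[0]
    dy = target[1] - robot_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < 1e-6:
        return (0.0, 0.0)
    speed = min(gain * dist, max_speed)
    return (speed * dx / dist, speed * dy / dist)
```

In each control cycle, the measured participant position (from the laser rangefinder) would be fed to `mirror_target`, and `drive_command` would convert the robot's encoder-estimated position into a velocity command.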

Fig. 2. Experimental setup

Fig. 3. An interaction scene

3.2 Results and Considerations

Figure 4 shows the proportion of time during which the participant and the robot were moving simultaneously. Under the known condition, the participants interacted with the robot throughout the entire experiment. Under the unknown condition, by contrast, the interaction was difficult to sustain. When an interaction was carried out continuously, the inter-body distance under the known condition changed more frequently than under the unknown condition. These results show that the pattern of interaction with a known entity differs from that with an unknown entity, and that interactions with unknown entities easily become deadlocked. This points to the importance of the primary stage of interaction between humans and artifacts. In this experiment, however, it was difficult to identify the point at which the unknown-condition participants realized that the robot could interact with them. To find that point, we carried out a second experiment using a think-aloud method and tried to clarify the relation between behavior and cognitive states.
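The interaction-rate measure above can be computed from the position logs roughly as follows. The 125 ms frame interval matches the logging rate reported for Experiment 2; the speed threshold separating "moving" from "stationary" is an assumption for illustration, not a value from the paper.

```python
# Fraction of frames in which both the participant and the robot are
# moving simultaneously ("interaction rate"). MOVE_THRESH is an assumed
# cutoff below which a body is treated as stationary.

DT = 0.125          # s between logged frames
MOVE_THRESH = 0.05  # m/s, illustrative threshold

def speeds(positions):
    """Per-frame speeds (m/s) from a list of (x, y) positions."""
    out = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        out.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / DT)
    return out

def interaction_rate(participant, robot):
    """Fraction of frames where both bodies move at the same time."""
    sp, sr = speeds(participant), speeds(robot)
    both = sum(1 for a, b in zip(sp, sr)
               if a > MOVE_THRESH and b > MOVE_THRESH)
    return both / len(sp) if sp else 0.0
```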

Fig. 4. Interaction rate

4 Experiment 2

4.1 Method

Apparatus. As shown in Fig. 5, we used two rooms as the experimental environment. Both rooms were equipped identically. The participants’ movement was restricted to a circle three meters in diameter. The positions of the participants were mirrored by the robots: the position of the robot in one room mirrored the position of the participant in the other room. In this way, each participant was able to interact with the other participant without the two seeing each other. We used a Roomba controlled via Bluetooth. Wheel encoders measured the robot’s position, which was further adjusted using a video camera, and the participant’s position was measured with a laser rangefinder. The participants wore a headset and an audio recorder to record their thinking aloud during the interaction. In this way, we tried to specify how participants regarded the robot’s behavior (as action or reaction) and why they moved.

Fig. 5. Experimental setup

Participants and Experimental Task. Nine pairs (18 university students) participated in this experiment. The members of each pair were guided into the rooms separately, without knowledge of their partners. The participants were required to say whatever they were thinking or feeling. To practice thinking aloud, they worked on a tangram puzzle for five minutes before the interaction. After the practice, the participants were instructed to move freely within the interaction field while thinking aloud. Their speech was recorded by a voice recorder through the headset they wore. The participants were given no information about the robot’s behavior. Each participant was left alone in the room, and the interaction between the participants and the robot was observed for three minutes. The participants then responded to questionnaires.

4.2 Observed Data

We observed and analyzed the following data:

  • Behavioral data

    • Log data of participant position (every 125 ms)

    • Log data of robot position (every 125 ms)

    • Interaction video

  • Speaking data

  • Questionnaires

    • Free descriptions of the participant’s and the robot’s behavior.

4.3 Results and Considerations

We assume that humans often realize, through unconscious interaction, that a partner has the ability to form a relationship with them. Physical interaction thus seems to trigger the recognition of others with whom interpersonal interaction is possible. By analyzing the behavioral and speaking data, we try to model the relation between awareness of agency and physical interaction. The think-aloud method lets us investigate how the participants recognized the behavior of the robot. We transcribed the participants’ utterances and labeled these data. By focusing on utterances about robot behavior, such as “It’s moving.”, “It stopped.”, or “The robot is coming here.”, and about the participants themselves, such as “I will stay here.”, “I keep my distance.”, or “I will approach it.”, we explored the relation between speaking and behavioral data. In labeling utterances about robot behavior, robot action and robot movement were distinguished: utterances about behavior directed at the participant, such as approaching or departing, were labeled robot action, and utterances about movement as such, such as moving or stopping, were labeled robot movement. The physical interaction was analyzed using a Bayesian network (BN), because a BN can express causal relationships among parameters.
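The labeling scheme above can be illustrated with a simple keyword-based classifier. The actual study labeled utterances by hand, and the keyword lists here are assumed English stand-ins for the categories described in the text, not the study's coding scheme.

```python
# Illustrative keyword labeler for think-aloud transcripts. Keyword
# lists are assumptions chosen to match the example utterances quoted
# in the text; real coding was done manually by the authors.

ROBOT_ACTION = ("coming", "approaching", "following", "avoiding", "leaving")
ROBOT_MOVEMENT = ("moving", "stopped", "stopping", "turning")
SELF_ACTION = ("i will", "i'll", "i keep", "i stay", "i approach")

def label_utterance(text):
    """Map an utterance to one of the speaking-label categories."""
    t = text.lower()
    if any(k in t for k in SELF_ACTION):
        return "self_action"
    if any(k in t for k in ROBOT_ACTION):
        return "robot_action"    # robot behavior directed at the participant
    if any(k in t for k in ROBOT_MOVEMENT):
        return "robot_movement"  # movement with no directedness implied
    return "other"
```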

The participants’ main actions were to approach the robot or to move away from it. Their actions seem to be decided on the basis of the distance from the robot, their own previous actions, and the robot’s previous actions. Therefore, the following values are used as parameters of the physical interaction.

  • Dist: distance between participant and robot

  • \(\varDelta Dist\): Dist change from previous frame

  • \(V_P\): component of the participant’s velocity perpendicular to the direction toward the robot

  • \(V_R\): component of the robot’s velocity perpendicular to the direction toward the participant

  • \(V_P^+\): \(V_P\) of next frame

  • \(V_R^+\): \(V_R\) of next frame.

However, the following limits were given beforehand:

  • \(V_P^+\) and \(V_R^+\) do not become parents

  • Dist does not become a parent of \(\varDelta Dist\).

  • \(V_P\) and \(V_R\) do not have any parents.
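The parameters listed above can be derived from the synchronized position logs along the following lines. This is a sketch: the perpendicular-component definition follows the parameter list above, but the exact sign convention and frame handling are assumptions.

```python
# Derive per-frame BN variables from synchronized (x, y) logs.
# V_P and V_R are taken as the velocity component perpendicular to the
# line joining participant and robot, per the parameter definitions;
# the sign convention is an assumption. Dist+ variants (next frame)
# are obtained by shifting the sequence by one frame.

import math

DT = 0.125  # s per logged frame

def frame_params(p0, p1, r0, r1):
    """Parameters for one frame transition: (Dist, dDist, V_P, V_R).
    p0/p1: participant positions, r0/r1: robot positions."""
    dist0 = math.dist(p0, r0)
    dist1 = math.dist(p1, r1)
    # Unit vector from participant to robot, then its perpendicular.
    dx, dy = r0[0] - p0[0], r0[1] - p0[1]
    n = math.hypot(dx, dy) or 1.0
    perp = (-dy / n, dx / n)
    vp = ((p1[0] - p0[0]) * perp[0] + (p1[1] - p0[1]) * perp[1]) / DT
    vr = ((r1[0] - r0[0]) * perp[0] + (r1[1] - r0[1]) * perp[1]) / DT
    return dist1, dist1 - dist0, vp, vr
```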

The BN structure was learned every second from the behavioral data. Each learning window consists of 120 frames (15 s of data), treated as continuous values. The arcs of each learned BN and the speaking labels for every second were accumulated, and a chi-square test was carried out.

Table 1. Relationship between arcs from Dist and Speaking
Table 2. Relationship between arcs from \(\varDelta Dist\) and Speaking
Table 3. Relationship between arcs from \(V_P\) and Speaking

Tables 1–4 show the results of the chi-square test between arc and speaking-label frequencies. Significant relationships were found between the arcs from \(\varDelta Dist\) and the speaking labels. Residual analysis shows that when the arc from \(\varDelta Dist\) to \(V_P^+\) exists, speaking about robot action and self action increases, and when the arc from \(\varDelta Dist\) to \(V_R^+\) exists, the robot action label increases. A significant relationship was also found between the arcs from \(V_R\) and the speaking labels: by residual analysis, when arcs from \(V_R\) point to Dist, \(\varDelta Dist\), or \(V_P^+\), speaking labeled as robot action increases. The arcs from \(V_P\) to Dist indicate the same result as those from \(V_R\), but the relationships to \(\varDelta Dist\) and \(V_R^+\) are not clear because some expected frequencies are less than 5. At this stage, the analysis is restricted to the existence of single arcs because of insufficient data.
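The arc-presence versus speaking-label test can be sketched as a standard 2×2 chi-square computation. The table layout and the critical value are standard; any concrete counts used with this function would be invented, so none are shown as the study's data.

```python
# 2x2 chi-square statistic for testing independence between arc
# presence (rows) and speaking-label occurrence (columns) per second.
# Pure-stdlib sketch; compare the result to the df=1 critical value.

def chi_square_2x2(table):
    """table = [[a, b], [c, d]]: rows = arc present / absent,
    columns = label spoken / not spoken in that second."""
    (a, b), (c, d) = table
    n = a + b + c + d
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = sum(table[i]) * (table[0][j] + table[1][j]) / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# chi2 > 3.841 rejects independence at the 5% level (df = 1); the
# adjusted residuals of each cell then indicate the direction of the
# association, as in the residual analysis reported above.
```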

Table 4. Relationship between arcs from \(V_R\) and Speaking
Fig. 6. Conception of modeling subconscious interaction

5 Discussion

In this study, we aim to model the primary stage of interaction, in which a person realizes that a robot has the ability to form a communicative relationship. As a first step, it is important to clarify the relationship between physical interaction and cognitive state. In this experiment, we tried to model the physical interaction with a Bayesian network and to explore the cognitive state with the think-aloud method. As a result, the existence of certain arcs was found to relate to the speaking data. Given the relation between the arcs from \(V_R\) and the speaking data, the existence of arcs from the robot’s velocity to the distance or to the participant’s velocity seems to reflect perceived robot action or reaction. The learned BN structure thus seems able to express the process of becoming aware that the partner has the ability to interact. Figure 6 shows our conception of modeling subconscious interaction. However, the effects of combinations of arcs were not investigated because of insufficient data, and the labeling of the speaking data should be classified in more detail to model the process. A further experiment will be necessary to solve these problems.

6 Conclusion

To extract the subconscious interaction, we observed interactions with known and unknown robots whose positions mirrored those of other participants. The participants under the unknown condition interacted less frequently, and their interactions broke down more easily, than those of the known-condition group. These results suggest that a stage of subconscious interaction exists in coming to regard objects as interaction partners. To model this interaction, a second experiment was carried out, in which we tried to clarify the relation between behavior and cognitive state using the think-aloud method. As a result, the Bayesian network structure learned from the behavioral data was found to relate to the speaking data. By modeling this process, artifacts able to form relationships with humans can be designed, contributing to the promotion of communication and interaction between humans and robots.