1 Introduction

Interactive surfaces such as tabletops and large interactive displays provide a collaborative focal point, allowing people to gather around them and to engage in various tasks, individually or as a group. By dynamically changing their locations around interactive surfaces people can engage in tasks, change their role in a collaborative setting, protect their privacy, and disengage from a task by simply stepping away from the surface [8, 13]. These benefits of interactive surfaces are founded on the ways people have used furniture and tables for thousands of years, augmenting the classic table with the additional advantages of computation and interactive visualization [13, 21]. More than a decade ago ConnecTables presented interactive displays that could be merged manually in order to affect the workspace and the interaction around it [20]. The premise is that changing the physicality of the interactive spaces we provide users will impact their interactive experience and affect the quality of their collaboration. We pursue this vision by proposing interactive surfaces that can automatically deform, move, connect and disconnect, changing their size, location and shape according to users’ needs. Recently, TransformTable explored interactive surfaces that can deform in order to address different tasks and collaborative needs [18]. In this paper we propose interactive surfaces that can automatically change their positions in order to address different tasks and collaborative settings, and present a design exploration of this vision.

Our exploration is based on a set of practical prototypes we call MovemenTables (Fig. 1). MovemenTables (MT) can move, rearrange, connect and disconnect in various forms according to tasks and needs. MT’s movements can follow users’ requests, be mediated by collaborative or social needs, or be initiated autonomously.

Fig. 1. MovemenTables

Realizing MT required us to tackle three main challenges.

(1) Technical implementation of MovemenTable. We implemented two MT prototypes, MovemenTable Senior and MovemenTable Junior, following a human-robot interaction (HRI) design approach. In the technical sense MT can be viewed as a robotic interactive tabletop that moves autonomously, can be tracked, and benefits from some level of situational awareness of its environment [4].

(2) Designing motion cues with tabletop content. In order to be more socially acceptable to the people interacting with them, MTs should provide simple cues regarding their movements, clearly expressing their intent to start moving, to turn, or to stop. Previous research shows that people can infer goal and intent from non-verbal motion cues [3, 6, 7]. This tendency to relate social intent to even abstract motion cues was later explored in HCI and HRI (e.g., [11, 19]). MTs employ a similar approach, communicating their physical locomotion intentions using motion stylization of the interactive visual content on the display. In essence, MT uses its tabletop visual content as if it were an animated cartoon character, styling, squashing, stretching and augmenting the visual content in order to communicate its locomotion intent.

(3) MT’s user experience. Ultimately, MT’s goal is to impact users’ interactive social experiences. Provided with sufficient social awareness, MTs can attempt to support, and even guide, people’s social interactions by moving, connecting or disconnecting. For example, two or more MTs may connect in order to provide a wide interactive display for a large group of users, or break away when users are engaged in different tasks or require privacy. MTs can take advantage of their spatial relationships and proxemics [5, 8] with people in order to socially guide the interaction, for example by approaching a reluctant user, or by providing privacy to one group of people by avoiding others. While our work stops short of tackling these challenges in practice and in naturalistic settings, we report the results of an extensive experimental study of MT’s fundamental attributes, confirming that MT’s motion stylization cues help users infer its movement intention (study 1), that MT’s basic spatial movements are recognizable and socially acceptable to users (study 2), and that the basic movements of a single MT or of multiple MTs impact users’ spatial behaviors and perspectives around the MTs in interactive and collaborative settings (studies 3 and 4).

This paper reports how we pursued the aforementioned three MT research threads, the insights gained on the concept of automatically moving tabletop interfaces, and the remaining challenges.

2 Related Work

2.1 Social Interactions and Tabletops

The physicality of tabletop interfaces was shown to affect the social interaction between users, for example by influencing people’s personal spaces and their spatial arrangements around the surface (e.g., territoriality [16], group coupling [21]). The design of better collaborative tabletop workspaces was shown to be affected by the table and group size [13]. While these effects were observed and reflected upon, static tabletop interfaces are not capable of dynamically changing their physicality and cannot physically affect the spatial behaviors of people interacting with them.

Dynamic connections between collaborative workspaces have been well researched with personal tabletops [20] and personal tablets [9]. ConnecTable [20] is a manually movable personal tabletop display, motivated by the simple social observation that people move closer to each other when engaged in discussions. Two ConnecTable displays can be seamlessly coupled in order to create a larger display workspace, or can be detached to provide two separate interaction spaces. TransformTable is a self-actuated shape-changing tabletop display enabling basic deformations of its interactive surface according to task and interaction settings, but with no locomotion capabilities [18]. MT extends these past efforts by integrating locomotion capabilities into interactive tabletops, and by examining how their intentional movement affects people’s spatial behaviors and experience during collaborative interaction. MT’s dynamic spatial relationships with users relate closely to research on spatial relationships between displays, interfaces and people, for example proxemics interactions [8] and F-Formation [9]. The concept of space-aware interactive tabletops that detect and sense their users was explored in static tabletop interfaces (e.g., [1]).

2.2 Human-Robot Interaction and Robotic Tables

MT is, in practice, a robotic tabletop interface. Interaction between people and robots has been shown to follow behavioral and social interpersonal spatial principles, such as proxemics [14], and the design of MT and its movements is informed by these themes. MT’s augmentation of its physical movements with classic cartoon art and motion stylization techniques [7, 10] allows it to express its locomotion intentions. Animation and cartoon art techniques were previously introduced to HRI (e.g., [19, 22]), though MT’s adaptation of these principles to interactive tabletops is, as far as we know, new in its transformation of the tabletop visual content into an implicit animated character providing motion cues to the user.

Past research in robotic interfaces proposed the concept of robotic tables. For example, [15] implemented multiple table robots that could autonomously change their positions and arrangement according to different tasks. That project focused on actuating classic table furniture, and did not address interactive tabletops, motion cues, or the collaborative tasks that tabletop interfaces afford.

3 MovemenTable

We designed two types of MovemenTable prototypes: MovemenTable Senior (MTSr) and MovemenTable Junior (MTJr). MTSr is more robust, provides a larger interactive space for small-group collaboration, and is implemented using an internal rear projector, while the smaller MTJr is designed for personal use and is implemented using a commercial touch display and a Roomba for locomotion.

The MT prototypes are controlled by an off-board server that communicates with the MTs and handles their movements. The server synchronizes the MTs’ coordinated actions, for example the connection or separation of two MTs. The server also simplifies administration of the Wizard of Oz (WoZ [12]) procedures we employed when evaluating the tabletops.
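To make the server role concrete, the sketch below illustrates one possible way an off-board server could dispatch synchronized movement commands to two MT clients. This is our own minimal illustration, not the authors’ implementation: the network addresses, command vocabulary and transport are assumptions.

```python
# Illustrative sketch (not the authors' implementation): an off-board server
# dispatching synchronized movement commands to two MT clients over TCP.
# Host/port values and the command vocabulary are assumptions.
import json
import socket
import threading

MT_ENDPOINTS = {                      # hypothetical addresses of the two tabletops
    "MT-A": ("192.168.0.11", 9000),
    "MT-B": ("192.168.0.12", 9000),
}

def send_command(name, command):
    """Send one JSON-encoded command to a single MT and wait for an ack."""
    host, port = MT_ENDPOINTS[name]
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((json.dumps(command) + "\n").encode())
        return sock.recv(1024).decode().strip()

def synchronized(commands):
    """Issue per-MT commands in parallel so coordinated actions
    (e.g. two MTs connecting) start at the same moment."""
    threads = [threading.Thread(target=send_command, args=(n, c))
               for n, c in commands.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# A WoZ operator triggering a 'connecting' movement of two MTs.
synchronized({
    "MT-A": {"action": "move_to", "x": 0.0, "y": -0.45, "cues": True},
    "MT-B": {"action": "move_to", "x": 0.0, "y": +0.45, "cues": True},
})
```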

3.1 MovemenTable Senior

MTSr is a wheeled robotic interactive tabletop display (Fig. 2a), 96 × 96 × 100 cm (W × D × H) in size. The physical dimensions of MTSr were chosen to allow four adults to comfortably interact with the tabletop. The interactive surface is a typical FTIR tabletop with rear projection, using a 40″, 850 × 850 mm screen, with a maximum projection area of about 800 × 600 mm and a 4:3 aspect ratio. The projector is fed by an external PC over a wireless HDMI connection. MTSr requires an external power supply to drive the projector and carries a power extension cord as it moves about. MTSr’s locomotion is controlled by an onboard PIC microcontroller, which communicates with an external PC over Bluetooth and manages the motor driver. MTSr translates and rotates using a traditional differential two-wheeled drive at the bottom of the table enclosure, with translation speeds of around 0.3 m/s and rotation speeds of around π/4 rad/s. The location and orientation of MTSr and its users are tracked using a motion capture system.
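As a rough illustration of the differential-drive locomotion described above (a sketch under our own assumptions about the wheel geometry, not the actual MTSr firmware), the desired translation and rotation speeds can be converted into left/right wheel velocities as follows:

```python
# Minimal differential-drive kinematics sketch. The wheel-base value is an
# assumption for illustration; the actual MTSr geometry is not reported.
import math

WHEEL_BASE_M = 0.6  # distance between the two drive wheels (assumed)

def wheel_speeds(v, omega, wheel_base=WHEEL_BASE_M):
    """Convert body translation v (m/s) and rotation omega (rad/s)
    into left/right wheel linear speeds (m/s)."""
    v_left = v - omega * wheel_base / 2.0
    v_right = v + omega * wheel_base / 2.0
    return v_left, v_right

# Pure translation at MTSr's reported ~0.3 m/s ...
print(wheel_speeds(0.3, 0.0))           # (0.3, 0.3)
# ... and pure rotation at ~pi/4 rad/s.
print(wheel_speeds(0.0, math.pi / 4))   # approx (-0.24, 0.24)
```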

Fig. 2. MovemenTable prototypes

3.2 MovemenTable Junior

MTJr is a smaller robotic table designed for personal use. It carries a commercial touch display (27 inch full HD, 1920 × 1080 pixel display, iiyama ProLite T27), controlled by a laptop and battery within the MTJr enclosure. MTJr is completely wireless and carries its own standalone battery power supply. For locomotion, MTJr uses a Roomba with a Bluetooth receiver (Roomba SCI), mounted below MTJr’s wheeled aluminum frame box enclosure (67 × 42 × 90 cm, W × D × H, see Fig. 2b), enabling movement capabilities identical to those of MTSr. MTJr’s maximum speeds are 0.5 m/s in translation and π/3 rad/s in rotation. In our fourth study (described in Sect. 6.2, below) we explored MT’s fundamental connecting and separating movements using two MTJrs.
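As an illustration of how a locomotion command could be issued to a Roomba base such as MTJr’s, the sketch below sends a Drive command over a Bluetooth serial link using the Roomba Serial Command Interface; the port name, speeds, and overall control flow are our assumptions, not the authors’ code.

```python
# Hedged sketch: driving a Roomba base via its Serial Command Interface (SCI)
# over a Bluetooth serial port. Port name and speeds are illustrative only.
import struct
import serial  # pyserial

PORT = "/dev/rfcomm0"          # assumed Bluetooth serial device
STRAIGHT_RADIUS = 0x8000       # SCI sentinel radius value for "drive straight"

def drive(conn, velocity_mm_s, radius_mm=STRAIGHT_RADIUS):
    """SCI Drive command: opcode 137 followed by 16-bit velocity and radius,
    big-endian. Radius is packed unsigned here to allow the 0x8000 sentinel."""
    conn.write(bytes([137]) + struct.pack(">hH", velocity_mm_s, radius_mm))

with serial.Serial(PORT, baudrate=57600, timeout=1) as conn:
    conn.write(bytes([128, 130]))   # Start, then enter Control mode
    drive(conn, 300)                # move straight at 0.3 m/s
```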

3.3 Visual Motion Cues

To provide users with motion cues about the MTs’ movements and spatial intentions we augment MT’s physical movements with visual motion cues (Fig. 3). While other motion cue modalities could be implemented, the current prototype of MT was designed to remain as true as possible to its tabletop metaphor, rather than becoming, for example, a speaking or arm-waving robot. MT uses its current visual tabletop content to create a set of implicit animated characters that convey its locomotion intentions via non-verbal motion cues. In principle MT’s motion cues can incorporate a rich variety of motion stylization and animated cartoon art techniques; for example, MT could use different motion cues to express different emotive motions such as hesitation, determination, shyness, or submission [23]. Our current MT prototype provides motion cues only for basic straight movement, using five movement phases: ready, set, go, stop and relax. When MT is about to move it generates motion cues for the five movement phases by capturing its current visual tabletop content and using it as an implicit animated character. The visual content is then used to generate the different animated motion stylizations, which are informed by animation techniques such as speedlines [10] and squash-and-stretch [7]. Below we briefly discuss the implementation of the five movement phases based on the cartoon squash-and-stretch [7] steps (Fig. 4).

Fig. 3. Motion cues by animated tabletop content

Fig. 4. Squash-and-stretch motion stylization

Ready.

MT disables the touch input on its interactive surface and freezes the screen content. The screen content is captured as an image and mapped onto a 20 × 15 point grid (Fig. 3a). Following that, the image dynamically deforms via the point grid, contracting onto itself and shrinking slightly over two seconds, in a sequence that suggests concentration and preparation for oncoming events. A side effect of the Ready shrinking is that it frees screen space, which subsequent movement phases use for their motion stylizations (e.g., drawing speedlines).
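A minimal sketch of the kind of grid-based contraction described above (our own illustrative formulation, not the authors’ implementation): each point of the 20 × 15 grid is pulled toward the screen centre by a time-dependent shrink factor, and the captured screen image would then be rendered as a textured mesh over the deformed grid.

```python
# Illustrative grid deformation for the Ready cue: points of a 20 x 15 grid are
# scaled toward the screen centre over a 2-second animation. Rendering the
# captured screenshot over this mesh is left to the graphics layer.
import numpy as np

GRID_W, GRID_H = 20, 15
SCREEN_W, SCREEN_H = 1920, 1080   # assumed pixel resolution

def base_grid():
    xs = np.linspace(0, SCREEN_W, GRID_W)
    ys = np.linspace(0, SCREEN_H, GRID_H)
    return np.stack(np.meshgrid(xs, ys), axis=-1)    # shape (GRID_H, GRID_W, 2)

def ready_grid(t, duration=2.0, min_scale=0.85):
    """Grid positions at time t of the Ready contraction (0 <= t <= duration)."""
    progress = min(max(t / duration, 0.0), 1.0)
    scale = 1.0 - (1.0 - min_scale) * progress       # 1.0 -> min_scale
    centre = np.array([SCREEN_W / 2, SCREEN_H / 2])
    return centre + (base_grid() - centre) * scale

print(ready_grid(0.0)[0, 0], ready_grid(2.0)[0, 0])  # corner point before/after
```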

Set.

This motion cue was designed to generate anticipation of movement towards a specific direction. The squash cue [7] and its level of exaggeration indicate the intended movement direction, as well as giving a sense of the expected distance (Fig. 3b). The duration of the compression animation is 500 ms, and the compressed form is kept for another 200 ms before moving to the next motion cue.

Go.

This phase initiates and augments MT’s actual physical movement towards its intended direction, as indicated in the Set phase. As MT starts to move, the tabletop image is quickly stretched and deformed into an arrow-like shape pointing towards the physical movement direction (Fig. 3c). Go also includes dynamically moving speedlines and shadows [10], which are overlaid behind the arrowed image to further augment MT’s physical movement, stopping only when MT reaches its destination.

Stop.

As soon as MT reaches its destination it physically stops, with its visual cues showing a squashed animation of the visual content, communicating exaggerated deceleration along the previous movement trajectory followed by recovery; the entire Stop animation sequence is about 400 ms long (Fig. 3d).

Relax.

Following the exaggerated Stop deceleration, MT presents a slow recovery cue, with the screen leisurely stretching back to its full-screen version in a 2 s animation. During Relax the touch input on the screen is re-enabled. The Relax cue is designed to clearly show that MT’s physical movements are complete, prompting users to re-engage with the tabletop interactive content (Fig. 3e).
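Putting the five phases together, a minimal sequencing sketch (our own illustration using the durations reported above; the hooks disable_touch, enable_touch, animate and drive_to are hypothetical) might look like this:

```python
# Illustrative five-phase motion-cue sequencer using the durations reported in
# the text. The hook functions are hypothetical stand-ins for MT's display and
# locomotion layers, not part of the authors' system.
import time

def run_motion_cues(target, disable_touch, enable_touch, animate, drive_to):
    disable_touch()                 # Ready: freeze content, capture screenshot
    animate("ready", duration=2.0)  # contract the captured image onto itself
    animate("set", duration=0.5)    # squash toward the movement direction
    time.sleep(0.2)                 # hold the compressed form briefly
    animate("go")                   # arrow-like stretch + speedlines...
    drive_to(target)                # ...while the table physically moves
    animate("stop", duration=0.4)   # exaggerated deceleration squash
    animate("relax", duration=2.0)  # leisurely stretch back to full screen
    enable_touch()                  # interaction resumes after Relax

# Example invocation with no-op stand-ins for the real hooks:
run_motion_cues((1.0, 0.0),
                disable_touch=lambda: None, enable_touch=lambda: None,
                animate=lambda name, duration=0.0: time.sleep(duration),
                drive_to=lambda target: None)
```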

4 Interaction Scenarios with MovemenTables

In this section we assume that MT is provided with sufficient social and situational awareness of the users surrounding it, e.g., their position, orientation, group size, and some low-level insight into their overall interactive goal and social engagement. While these assumptions are not yet realized in our MT prototypes, we believe they can become feasible, and they are useful in allowing us to consider the design of MT’s interaction scenarios. Figure 5 shows some examples of MT’s fundamental potential applications. As shown in Fig. 5’s upper row, MT’s approaching movement can invite a specific person (e.g., a shy or reluctant user) and provide her with access to a digital workspace at her position. MT can also keep the workspace close to a moving person by following her movements, or move out of her way with avoiding movements when the interaction is over.

Fig. 5. Interaction scenarios of MovemenTables

Figure 5’s middle row shows basic usage examples of a single MT working with a group of people. Similarly to the single-user case, MT’s approaching and avoiding movements affect the group’s physical workspace, and may change their task-flow and group dynamics. For example, MT’s centering movement between two people can physically emphasize their work or conversational space, providing a shared physical and interactive surface, while MT clearing the shared space can indicate the end of a task. Thus, MT’s movements, if supported by sufficient situational awareness of the group (e.g., [4, 8, 9]), can assist and augment face-to-face collaboration.

Figure 5’s lower row presents a simple example of how a synchronized group of MTs can support users’ collaboration. For example, two or more MTs may connect in order to provide a wide and shareable interactive display for a large group of users, or break away to provide individual workspaces when users are engaged in different tasks or require privacy. We envision that these MTs’ movements can be actuated by users’ explicit commands, by implicit inputs such as proxemics and F-formations, or autonomously based on task phases.

In summary, MT’s autonomous movements, given sufficient situational awareness, can physically create and change its users’ workspaces, adapting them according to the task-flow, potentially helping users improve task efficiency and allowing them to feel more comfortable during different phases of a dynamic task. The following sections describe our exploratory evaluations of MT’s fundamental attributes, of how MT’s movements are recognized by users based on its motion cues, and of how users’ spatial behaviors and space awareness are impacted by MT’s basic movements.

5 Understanding of MT’s Motion Cues

We conducted two observation studies to investigate whether tabletop users can infer, understand and socially relate to MT’s basic movements based on its motion cues. The first study focused on MT’s straight movement and examined whether observers could anticipate MT’s locomotion intention based on the motion cues it displays. The second study investigated whether observers can infer the social essence of MT’s movements. We used a single MTSr for both observation studies, allowing participants to perceive MT’s movements along with their associated motion cues.

5.1 Study 1: Linear Movement

Goal.

Our first study set out to examine whether MT’s motion cues are comprehensible, and whether they can help people anticipate MT’s locomotion intent prior to its actual movement.

Method.

This study was conducted in a 5 m × 5 m experimental room. We recruited fourteen participants from the local university (six male, eight female, average age: 21.9) who were not informed of MT’s locomotion capabilities. We used two MT conditions in a within-subject design: animated motion stylization (AMS), and a baseline without motion stylization in which MT presents a static image while it moves (SI). The order of the conditions was counterbalanced. For each condition, MT displayed the visual content shown in Fig. 3, and moved back and forth twelve times along a 3 m straight line. Each participant observed the movements from approximately one meter away from the line of movement. Afterwards each answered a questionnaire about MT’s movements using a 5-point Likert scale (1: disagree, 5: agree), and was interviewed.

Result and Discussion.

We confirmed that the obtained Likert-scale data approximately followed a normal distribution and analyzed it with a one-way ANOVA. Unsurprisingly, MT’s motion cues made sense to participants, who reported understanding the coupling of the animated visual content with the table’s physical movement and movement intentions in the AMS condition, compared to reporting an inability to infer MT’s movements in the SI condition. More interestingly, participants reported higher awareness of the screen’s visual content in the AMS condition (4.57 when answering “I paid attention to the screen”) than in the SI condition (3.43) (p < .05). They reported surprise when MT moved for the first time in both conditions, with average ratings of 2.71 and 3.42 for AMS and SI respectively (p > .05), meaning that AMS did little to diminish the surprise of an interactive tabletop moving for the first time. Overall, the results show that MT’s animated motion stylization significantly prompted users to look at the screen content, and helped them infer the table’s movement and its direction prior to the actual movement.
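As an illustration of the kind of analysis reported here (a sketch with placeholder numbers, not the study’s actual ratings), a one-way ANOVA over the per-condition Likert ratings can be computed as follows:

```python
# Illustrative one-way ANOVA on per-participant Likert ratings for the AMS and
# SI conditions. The numbers below are placeholders, not the study data.
from scipy import stats

ams = [5, 4, 5, 4, 5, 4, 5, 5, 4, 5, 4, 5, 5, 4]   # "I paid attention to the screen"
si  = [3, 4, 3, 3, 4, 3, 4, 3, 3, 4, 3, 4, 3, 4]

f_stat, p_value = stats.f_oneway(ams, si)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")       # significant if p < .05
```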

When asked to reflect on the AMS motion cues, most participants remembered the first three phases (Fig. 3a, b and c), and could map them to the metaphor of crouching to get set before starting to move. In particular, participants liked the arrow-shaped Go stylization (Fig. 3c), which received a relatively high score of 3.58 when answering “I felt that the animated screen content supported the table’s physical movement”. While the Stop phase (Fig. 3d) was successful in reflecting that MT had reached the end of its movement (4.07 for AMS), the Relax phase (Fig. 3e) was less effective in expressing that MT relaxes, with an AMS score of 1.86 for “I felt the table becomes relaxed when it stopped” versus 1.36 for SI (p > .10).

During the interviews many participants reiterated that the integration of physical and visual elements on MT provided effective cues for the table’s movement intentions. The interviews also yielded many positive comments on MT and its use of AMS, e.g., “funny”, “interesting”, or “useful device that extends classic furniture”.

5.2 Study 2: “Social” Movements

Goal.

Our second observation study examined how observers recognize and understand MT’s basic social behaviors when it was moving by itself or in relation to actors. This study was conducted in the same experimental setting as study 1, with MT providing AMS motion cues in all conditions.

Method.

Ten participants (four males and six females, average age: 22.5) took part in a within-subject design with six MT movement conditions. Participants were not informed of MT’s locomotion capabilities. In every movement condition, participants were asked to observe MT’s movement for 30 s from 1 meter away from the movement range. Participants were then asked to reflect on their experience via the Godspeed questionnaire [2], covering five perceived HRI aspects of a robot: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. We used Godspeed, a standard qualitative HRI evaluation tool, in order to evaluate users’ perceptions and social acceptance of MT’s social movements, and to reflect on how these can scale to the perception of MT as a social agent. Each dimension was evaluated using a 5-point scale. Some of MT’s movements were performed in relation to actors; the six conditions were presented in a customized counterbalanced order among participants. Below we outline the different MT movements examined in the study:

Stand – MT was located at the center of the room, not displaying any visual content on its surface and not moving.

Move – MT displayed a static image on its surface and moved randomly within the room.

MT Move – MT randomly moved within the room while displaying motion cues.

MT Following – MT automatically followed an actor who was walking freely in the room (MoCap tracked). It was demonstrated several times, each for a duration of around 30 s (Fig. 6a).

Fig. 6. MT social movements in study 2

MT Centering – MT located itself, with motion cues, at the central position between two actors, simulating a basic social behavior of MT suggesting its physical workspace to collaborating users. It was demonstrated several times, each for a duration of around 30 s (Fig. 6b).

MT Avoiding – MT continuously kept away from the moving actor, as much as possible within the room (Fig. 6c).

In all six conditions MT’s initial location was at the center of the room, and its movements were initiated by the actor’s movements.
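To make the three social movements concrete, here is a minimal sketch (our own illustration; the actual WoZ/MoCap control logic is not described at this level of detail) of how target positions for following, centering and avoiding could be derived from tracked actor positions:

```python
# Illustrative computation of MT target positions from MoCap-tracked actor
# positions for the Following, Centering and Avoiding behaviors. Room size,
# offsets and clamping are assumptions for illustration.
import numpy as np

ROOM_HALF = 2.5        # half-width of the assumed 5 m x 5 m room (m)
FOLLOW_OFFSET = 0.8    # keep this distance from the followed actor (m)

def following_target(mt_pos, actor_pos):
    """Move toward the actor but stop FOLLOW_OFFSET short of them."""
    direction = actor_pos - mt_pos
    dist = np.linalg.norm(direction)
    if dist <= FOLLOW_OFFSET:
        return mt_pos                      # already close enough
    return actor_pos - direction / dist * FOLLOW_OFFSET

def centering_target(actor_a, actor_b):
    """Place MT at the midpoint between two actors."""
    return (actor_a + actor_b) / 2.0

def avoiding_target(mt_pos, actor_pos):
    """Retreat directly away from the actor, clamped to the room bounds."""
    direction = mt_pos - actor_pos
    norm = np.linalg.norm(direction)
    step = direction / norm if norm > 1e-6 else np.array([1.0, 0.0])
    return np.clip(mt_pos + step * 1.0, -ROOM_HALF, ROOM_HALF)

print(centering_target(np.array([1.0, 1.0]), np.array([-1.0, 0.0])))  # [0.  0.5]
```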

Result and Discussion.

In addition to the Godspeed HRI questionnaire, we also asked participants to reflect on their understanding of MT’s movements by selecting descriptors from six candidates. Overall, as we expected, participants correctly understood the meaning and intentions of MT’s social movements. For example, the MT Following condition was accurately judged as “Table is following a person” and the MT Centering condition was correctly recognized as either “Table is following” or “Table is approaching to be used”. On the other hand, the MT Avoiding condition was not always (5/10 participants) recognized correctly. The reason could be that participants were less inclined to infer an unhelpful movement of a tabletop interface escaping its user, or the lack of agility in MTSr’s movements when trying to avoid the approaching actor.

We confirmed the normal distribution of the obtained 5-point scale data and used a one-way ANOVA with Bonferroni tests to compare the average scores of the six conditions for each Godspeed dimension. For anthropomorphism, the scores for all conditions except MT Following were below neutral (e.g., 2.1 for Stand, 1.7 for Move, 2.5 for Animation and MT Move, 2.7 for MT Avoiding), with MT Following producing a higher score of 3.4 (significantly different from Stand and Move). This result reflects an overall weak perception of MT as having human-like attributes. Animacy is the property of being a living agent; the three social movements, MT Following, MT Centering, and MT Avoiding, gave significantly higher scores of 3.7, 3.6, and 3.5 respectively than the others. Similar perceptions were found in the interviews, for example, “like a creature” and “the table was suggesting its workspace”. For likeability, MT Following (3.7) and MT Centering (4.0) received substantially higher scores; MT Centering was significantly different from the Move condition. For perceived intelligence, we obtained a tendency similar to likeability, in which the scores of MT Following and MT Centering were larger (significantly different from Stand). The perceived safety dimension offered mixed results; in brief, the static conditions were judged as stable, calm and quiet, while only MT Avoiding, a social movement, was perceived negatively as rough and surprising. This can be explained by the clumsy movements of MT in this condition, which can be fixed with minor technical improvements.

In summary, MT’s basic social movements were recognizable by participants. Furthermore, based on the Godspeed results, MT Following and MT Centering were perceived as socially acceptable, giving the impression of an intelligent, likeable and safe agent. This is an interesting finding, suggesting that MT can be adapted to various social settings.

6 Impact of MT on Interaction and Collaboration

The two observation studies established an understanding of two important aspects of how people perceive MT’s movements. We were also interested in exploring how users behave and change their interaction flow in order to adapt to the dynamically changing workspace afforded by MTs. Therefore, we conducted two interactive studies (studies 3 and 4) that asked participants to perform a collaborative task with a partner using MT, allowing us to probe some of the social and collaborative aspects of interaction with a moving interactive tabletop. MTJrs were used for studies 3 and 4 because they incorporate a more sensitive multi-touch display, and their compact enclosures were more convenient, stable and safe when investigating MT’s movements during user interaction.

Our two interactive studies were designed as an early reflection on the interaction scenarios described in Sect. 4, taking a simplified approach to reflect on some of the fundamental aspects of group interaction with automatically moving tabletop interfaces. While the participants were asked to perform valid tasks on the tabletop, MT’s movements were basic, and initiated by a simple WoZ [12] algorithm.

6.1 Study 3: Single MT Single User

Goal.

This study examined how the movement of a single MT can impact interaction. MT performed a set of basic movements, but this time with a participant being part of the interaction scene, not merely observing from the sidelines.

Method.

Twelve participants (8 males and 4 females, average age: 21.8) from local universities participated in this study, which took place in the same experimental room as studies 1 and 2. Participants were provided with a simple MT-based tool and a generic picture-browsing task: MT displayed an interactive picture browser that allowed participants to browse pictures presented on the tabletop surface. The browser (Fig. 2b) consisted of two views: picture thumbnails were shown on the left half of the screen, while the right half presented a preview window showing a larger version of any one of the thumbnail pictures. The interface was populated with twenty contemporary pictures relating to popular topics such as sports, politics, and hobbies. Participants could select any of the thumbnails, which would in turn populate the preview window.
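For illustration, a minimal sketch of the two-pane browser layout described above (our own stand-in with placeholder file names and a text-only preview, not the tool used in the study) could look like this:

```python
# Minimal sketch of a thumbnails-left / preview-right picture browser layout.
# Picture names are placeholders; the original tool's code is not described.
import tkinter as tk
from tkinter import ttk

PICTURES = [f"picture_{i:02d}.png" for i in range(1, 21)]   # 20 placeholder names

root = tk.Tk()
root.title("MT picture browser (sketch)")

# Left half: scrollable list standing in for the thumbnail grid.
thumbs = tk.Listbox(root, width=30)
for name in PICTURES:
    thumbs.insert(tk.END, name)
thumbs.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)

# Right half: preview area showing the currently selected picture.
preview = ttk.Label(root, text="Select a picture", anchor="center", width=40)
preview.pack(side=tk.RIGHT, fill=tk.BOTH, expand=True)

def on_select(event):
    selection = event.widget.curselection()
    if selection:
        preview.config(text=f"Preview: {PICTURES[selection[0]]}")

thumbs.bind("<<ListboxSelect>>", on_select)
root.mainloop()
```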

Participants were not provided with any specific instructions on how to use the tool or MT, and were MoCap tracked as they moved through the experiment room. Each task was short, around 40 s long, allowing the participant to engage with the tabletop task and with a different movement by MT. This study was conducted with a within-subject design that examined the impact of the following MT movement conditions: static, approaching and avoiding (Fig. 7). The order of the conditions was counterbalanced among participants. All the MT movements in this study included motion cues on the display, and the conditions were run by the study administrator using a simple WoZ algorithm. The conditions and method of this study were based on the basic movements of MT shown in Fig. 5’s upper row, where MT approaches a person in order to engage her in interaction, or avoids the participant, disengaging from interaction.

Fig. 7. Single MT interactive study

Static – A reference condition; MT was located around the center of the room, and remained static, running the picture browser tool on its surface (Fig. 7a).

Approaching – MT was initially located around 2 meters off the room center. Ten seconds after the task started, MT initiated an approaching movement towards the center of the room and the participant (Fig. 7b).

Avoiding – MT was initially located at the center of the room. Ten seconds after the task started, MT initiated a movement away from the center of the room and the user (Fig. 7c).

Results and Discussion.

The qualitative analyses were based on a questionnaire asking participants to rate their experience on a 5-point Likert scale. We also conducted post-interviews and a grounded theory [17] video-coding behavioral analysis. Participants in general accepted the automatic movements of MT and had positive reactions to the motion cues on the MT tabletop. Participants reflected on the importance of timing when triggering MT’s movements during interaction.

We coded the participants’ spatial actions and behaviors across the three movement conditions and analyzed the coded events with a comparative analysis relating to our control factors. We compared the numbers of coded events in relation to the movement condition in which they occurred and the timing of each event relative to the movement (i.e., before, during and after table movement). Below we briefly highlight our main findings regarding participants’ spatial behaviors:

  • All participants approached MT. About 30 % of the participants followed MT in its avoiding movement.

  • Following MT’s movement, participants frequently moved and looked around.

  • All participants touched MT’s surface and interacted with the tabletop picture browser significantly more after MT’s movement compared to before its movement (in touch/min).

  • Some participants were visibly surprised with MT’s first movement, but this effect disappeared in the following movements.

Generally, and surprisingly to us, the statistical analysis of the questionnaire results was overall flat: participants remained relatively neutral, or just below neutral, regarding their MT experience, and there were no meaningful differences in ratings between the conditions. From the interviews a possible explanation emerged: participants who were left alone with the moving MT were not sure about the purpose of the task, which by design lacked social context. Several potential applications were suggested in the interviews, for example a “seller robot”, a “domestic robot for elder persons”, or that MT would be “useful in hospital, classroom, and office meeting”.

Summary.

The video coding showed that participants interacted with the visual content on MT’s surface, and that their interactions with MT were significantly altered after MT’s movement occurred, causing people to follow MT spatially and to interact more with its tabletop interface. Although all participants accepted that MT had potential, the questionnaire results did not indicate clear differences between MT’s movement conditions. Our assumption is that when evaluated by a single user in the current study setting, and thus taken out of the collaborative context, MT becomes ‘just’ a robot, a perception which diverges from the social role it was designed to serve: helping a group of people as a dynamically moving and changing interactive surface. While we do not argue that the results of a single MT with a single user are not valid, they do highlight MT’s limitations, and point to its true potential in serving a group of collaborating users, which we explore in the following study.

6.2 Study 4: Multiple MTs Multiple Users

Goal.

Our fourth study probed a more holistic MovemenTable vision: what is the impact of a group of MTs dynamically changing the users’ interactive collaborative workspace? For simplicity, we studied synchronized physical connections and separations of two MTs and investigated how these movements affect two users’ spatial behaviors, workspace awareness and interactions during collaborative tasks over the MTs. The purpose of this study was also to understand the basic effects of MTs using a simplified WoZ study design with timed movements of the MTs during the users’ collaborative task, offering fundamental findings on how people react to a workspace changed by MTs.

Method.

Participants were the same group of 12 university students recruited for study 3, paired into six teams and located in the same MT experimental setting (Fig. 8). The team members knew each other. Based on their experience in study 3, team members were aware of MT’s locomotion abilities and knew how to use the interactive picture browser on its surface. This allowed us to observe more realistic social experiences with multiple MTs’ movements in study 4, rather than fundamental, first-time reactions to the MTs’ automatic movements, which were already observed in study 3. We conducted a within-subject study with the following four conditions, using a customized counterbalanced order among teams.

Fig. 8. Multiple MTs for multiple users study

Connected – Two MTs were arranged side-by-side around the center of the room (Fig. 8a). Each display surface ran a standalone picture browser. The MTs remained static throughout the condition.

Separated – Two MTs were placed 1.8 meters apart from each other (Fig. 8b). The distance was sufficient for participants to maintain privacy as they interacted with MT. The MTs remained static throughout the condition.

Connecting – Two MTs were placed 1.8 meters apart (Fig. 8c) and started to move towards each other 30 s into the interactive task.

Separating – Two MTs were connected to each other at the center of the room (Fig. 8d) and departed in different directions 30 s into the interactive task, stopping when they were 1.8 meters apart from each other.

The connecting and separating conditions allowed us to test how MT’s intentional dynamic changes to the workspace, moving from a unified large interactive space to two distributed smaller ones, affect the collaborating users, reflecting on basic collaboration themes such as individual vs. collaborative work, or private vs. shared data access. For each condition, participants were instructed to collaboratively create a story on the MT based on pictures they selected individually. The pictures followed a different theme for each of the four conditions (e.g., pictures relating to sports, hobbies, etc.), presented in a balanced order to the different teams. The task was completed once participants reported that their collaboratively created story was ready. This simple task was designed because we were interested in how users’ interactions and behaviors are affected by MT’s movement during a typical collaboration that requires both touch interaction and some discussion with a partner. The two dynamic conditions, connecting and separating, included motion cues in all MT movements.

Results and Discussion.

We probed the impact of the four movement conditions on participants’ behaviors and interactions using a questionnaire, video coding, and post-interviews. For the questionnaire analysis, we used ANOVA with Bonferroni tests for normally distributed data, and the Friedman test when the data were not normally distributed. Table 1 details how the different conditions were rated on the questionnaire, showing only questions (designated by Q below) that reflected a significant difference between the movement conditions. The results demonstrate that most participants thought of following the MTs’ movements (Q1), and were affected by the changes to their collaborative workspace due to MT movements. For example, the MTs connecting guided the two participants to get spatially closer to each other, a spatial behavior that could trigger discussion of the task within the team (Q2). On the other hand, the MTs moving away from each other would cause participants to finish their chat (Q3). Participants collaborated closely on their team tasks in the connected and connecting conditions (Q4), while separation from their partner caused them to pursue a separate task, for example picture browsing on the tabletop instead of the requested story composition (Q5, weak effect). Q6 showed an interruption effect by the MTs: the separating and connecting conditions disturbed team collaboration; however, this negative effect was arguably not large, with ratings close to neutral.

Table 1. Study 4 questionnaire results
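As a sketch of the analysis-selection logic described above (with placeholder ratings, not the study data), the choice between a parametric and a non-parametric test could be made as follows:

```python
# Illustrative analysis selection: check normality per condition with a
# Shapiro-Wilk test, then use a one-way ANOVA (normal) or a Friedman test
# (non-normal). The rating lists are placeholders, not the study data.
from scipy import stats

conditions = {                       # per-participant ratings for one question
    "connected":  [4, 5, 4, 4, 5, 4, 5, 4, 4, 5, 4, 4],
    "separated":  [3, 3, 2, 3, 3, 2, 3, 3, 2, 3, 3, 3],
    "connecting": [4, 4, 5, 4, 4, 5, 4, 4, 5, 4, 4, 4],
    "separating": [3, 2, 3, 3, 2, 3, 3, 2, 3, 3, 3, 2],
}

normal = all(stats.shapiro(v)[1] > 0.05 for v in conditions.values())
if normal:
    # follow up with Bonferroni-corrected pairwise comparisons
    result = stats.f_oneway(*conditions.values())
else:
    result = stats.friedmanchisquare(*conditions.values())
print("normal" if normal else "non-normal", result)
```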

In the video coding, we compared the number of participants’ spatial actions to the movement condition in which they occurred, and to the timing of each event relative to the movement (i.e., before, during and after MT movement). Our main findings are:

  • All participants approached MTs and touched their surfaces during collaboration.

  • About 70 % of participants followed the MTs’ movements and changed their positions accordingly. This is encouraging since we did not provide any instructions on the MTs’ movements or on how to react to them. The participants who did not follow “their” MT when it separated remained with their teammate’s MT, sharing it after it stopped moving.

  • While there were no accidental collisions between participants and MTs, 16 % of the participants at some point had to visibly back out of an MT’s trajectory.

  • Participants pointed at their partner’s table. This effect was more evident in the connected and connecting conditions, and the frequency of interactions (touch/s) grew especially after the connecting movement.

  • Frequencies of gestures such as nodding, hand waving, pointing and looking at the partner significantly increased after the MTs separated.

Summary.

The collaborative workspace was significantly influenced by the MTs’ physical connecting and separating movements, strongly impacting the collaborative and spatial behavior of participants. Our findings suggest that the MTs’ connecting can lead to a more focused collaboration in a shared space, allowing teams to work closely together on a task, possibly more so than in the connected condition with its preexisting larger surface. The MTs’ separating, on the other hand, led to a more individual work style, with teams keeping apart and focusing less on their collaborative task. While this influence was generally perceived positively, it is important to note that our study was not conducted in naturalistic settings, and it remains to be seen how MT’s movements will affect users in actual workspaces.

A technical limitation emerged from MT’s physical bezels, which limited the ability of two MTs to connect into a single unified and unobscured surface. A study design limitation was the recruitment of participants who were acquainted with each other. On the one hand, this is a clear bias; on the other, it did allow us to observe more natural and dynamic collaboration in the study teams. Given the preliminary nature of our study, being the first examination of multiple moving interactive tabletops with multiple users, we preferred this approach, but it leaves the question of MT’s potential social impact on users who do not know each other unanswered.

7 Discussion

Both of our prototypes, MTSr and MTJr, functioned well in our four WoZ studies, moving around the environment, tracked and controlled with the aid of a MoCap system, continuously providing the interactive surface function and the motion cue visualizations augmenting their movements. However, more work needs to be done to improve the technical aspects of MT: using a bezel-less tabletop to allow seamless merging of several MTs, improving MT’s motion agility to support more accurate movements during multiple-MT synchronization, and developing algorithms for autonomous and safe MT behaviors.

We are quite satisfied with MT’s motion cue technique, which uses the tabletop visual content as an implicit animated character reflecting MT’s movement. People found MT’s non-verbal motion cues helpful and successfully inferred MT’s movement intention, direction and distance based on its squash-and-stretch and speedline motion cues. Using the tabletop content to create animated motion cues maintains continuity of content on the tabletop surface, and does not stray far from the metaphor of an interactive tabletop, which might not be the case if the tabletop provided motion cues using voice or waving physical robotic arms. We plan to explore the abundance of other expressive motion cues, inspired by the rich possibilities afforded by cartoon art and animation.

Our interactive studies demonstrated that MTs’ movements had a strong impact on users’ spatial behaviors and interactions. This effect was most striking in synchronized movements of multiple MTs in collaborative scenarios. These findings are encouraging and point to the potential contribution MTs can make to shaping and guiding social and collaborative interactive settings. While our current findings are limited to WoZ experimental settings, the potential shown through our studies suggests several directions for future work. It would be interesting to explore the practical performance of MTs in a targeted context by comparing them with manually movable tabletops (e.g., [20]). We also plan to explore autonomous MT control by leveraging situational-awareness tracking technologies instead of WoZ control. Based on the study 4 findings we can pursue larger and more complex MT spatial arrangements, for example moving beyond simple connections to L- and U-shaped table arrangements, deploying MTs in large spaces and social functions, applying more advanced algorithms, such as flocking, for grouping/ungrouping of a large number of MTs, and providing MTs with tools for better situational group awareness of their environment and users.

Finally, we hope to assess MTs’ effects increasingly in the wild, starting with a few simple scenarios. Grabbing the attention of passersby and enticing them to interact is a fundamental challenge for interactive public displays. While our work still stops short of introducing automatically moving interactive tabletops into realistic public settings, we think that MT has the potential to provide a novel solution to this basic public display challenge.

8 Conclusion

We presented an exploration of MovemenTables (MTs), moving interactive tabletops designed to affect their workspace and collaborative settings through changes to their spatial position and arrangement. We implemented two types of MT prototypes and designed animated tabletop content as motion cues that help MT users infer the tabletop’s locomotion intentions. We evaluated the prototypes via a set of user studies based on several simple interaction scenarios. Our findings demonstrate that moving interactive tabletops can be accepted by users and can influence their spatial behaviors. Our studies of a group of MTs also suggest that MT’s fundamental movements, such as approaching a user, centering between two collaborating users, or connecting and separating, have a substantial impact on users’ spatial behaviors and on their workspace awareness. The current findings motivate our future MT efforts towards designing and testing more realistic collaborative scenarios. As part of our future work we will design proof-of-concept MTs that will guide users’ interactions and will probe the impact that dynamically changing interactive surfaces’ locations and sizes have on the collaborative task and the quality of the interaction.