
1 Introduction

The term haptic relates to, or proceeds from, the sense of touch, and with developments in technology, haptic interfaces have been created that allow a user to receive tactile feedback via movement of a limb or the head [1]. There are two primary types of haptic devices: force feedback devices and tactile displays, where the former provides reacting forces to our movements in space and the latter targets the skin [2]. In force feedback haptic devices, the user’s movements are measured and a force is generated in response as a virtual representation of the displayed physical environment. These devices have degrees of freedom corresponding to the axes along which the touched end of the haptic device can move, ranging from one to many (e.g., three degrees of freedom match graphical three-dimensional space) [2]. Sensable Devices’ PHANToM is noteworthy as the first commercially available force feedback display with 3 degrees of freedom [2]. As haptic technologies have advanced, haptic interfaces have become more affordable and accessible. Sensory stimuli have become more believable [3], enabling users to interact with computers in a more “realistic” way through the feedback provided, which guides the user through a task in a non-visual way. Haptic devices are now used in many areas, from professional fields such as surgery simulation and animation to more consumer-focused markets such as gaming [3].

One way to utilize haptic devices is as input devices for manipulating desktop GUIs on computers. The mouse is certainly the most widespread input device, along with the keyboard, and there have been attempts to enhance it with haptic feedback to improve user performance. Studies exploring the effects of a haptically enhanced mouse, described later in this paper, found reduced error rates but similar speed.

There are also studies that incorporate haptic devices to control desktop-like GUIs. One study introduced a technique in which a haptic device, used as a secondary input device providing only haptic feedback, controlled a palette-like toolbox of the kind found in graphics software such as Adobe Photoshop, while the mouse in the dominant hand applied the selected tool; performance was similar to a traditional interface, but the traditional interface was faster and more preferred [4]. Another study used a haptic device to control a desktop GUI to address multi-target haptic interaction problems; the results showed that although the haptic enhancements reduced error rates, speed was not improved [5].

In this paper, we show that it is possible to manipulate desktop-like GUIs with haptic devices, taking the task-switching paradigm into consideration both for switching between tasks and for switching between the input devices, namely the mouse and the haptic device.

In the following sections, we present the background of the study, covering the mouse as an input device, haptic interfaces, and previous work on utilizing haptic enhancements and interfaces for the manipulation of desktop-like GUIs, as well as the task-switching paradigm; this is followed by the experimental details and tasks. We then present the analysis of the results and, finally, the conclusions and discussion of this study.

2 Background of Study

2.1 Mouse as Input Device

As an input device, the mouse has some advantages over other input devices: its widespread availability and acceptance, and the fact that it does not obstruct the user’s view, although it does require hand-eye coordination, since the user’s movements must be mapped onto the movements of the cursor [6].

A mouse can be used to point at an object, to click it, or to move it. Pointing can be said to consist of three stages: moving towards the target, reducing speed, and homing in on the target. Clicking adds a fourth stage in which the mouse button is pressed and released while the mouse is kept stationary. Finally, moving an object requires these four stages to be performed twice: hovering the cursor over the target, pressing the mouse button to select it and holding it down during the movement, and releasing the button when the object has been positioned at the desired spot [6].
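As a hedged illustration of this decomposition, the stages can be written down as a small state sequence; the names below are ours, not terminology from [6]:

```python
from enum import Enum, auto

class PointingStage(Enum):
    """Stages of a pointing action as described above (names are ours)."""
    MOVE_TOWARDS_TARGET = auto()   # fast initial movement towards the target
    REDUCE_SPEED = auto()          # deceleration as the cursor nears the target
    AIM_AT_TARGET = auto()         # fine homing onto the target
    PRESS_RELEASE_BUTTON = auto()  # click while the mouse is held stationary

# Clicking traverses all four stages once; moving an object (drag & drop)
# traverses them twice: once to grab the object, once to release it.
CLICK_SEQUENCE = list(PointingStage)
MOVE_SEQUENCE = CLICK_SEQUENCE * 2
```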

Fitts’ law predicts that the time to acquire a target is logarithmically related to the distance over the target size [7]. It has been used prevalently to study and compare input devices such as the mouse and trackball [8–10], in addition to predicting user performance in tasks such as point-select and point-drag using those input devices.
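For concreteness, here is a minimal sketch of the widely used Shannon formulation of Fitts’ law; the regression constants are illustrative assumptions, not values fitted to any device from the cited studies:

```python
import math

def fitts_movement_time(distance, width, a=0.2, b=0.1):
    """Predicted movement time in seconds under the Shannon formulation
    of Fitts' law: MT = a + b * log2(D / W + 1).
    a and b are device-specific regression constants; the defaults here
    are purely illustrative."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Doubling the distance to a same-sized target adds roughly one bit:
print(fitts_movement_time(256, 32))  # ID = log2(9)  ~ 3.17 bits
print(fitts_movement_time(512, 32))  # ID = log2(17) ~ 4.09 bits
```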

Although Fitts’ law can be used to compare mouse movements on the basis of target variables and movement speed, it only applies to error-free movement [6]. Moreover, there are cases where Fitts’ law produces incorrect predictions, such as when the input device is not suited to it, for example isometric joysticks, which are force sensing and involve negligible human limb motion [11]. Fitts’ law also has limitations such as not covering the performance difference between preferred and non-preferred hands [12], and not accounting for the observation that subjects were uniformly more accurate when arm motions were towards the body than when they were away from it [7]. In addition, Fitts’ law does not work for trajectory-based activities such as drawing, writing and steering [13]. Lastly, it does not address parameters such as system response time, the user’s mental preparation time, homing time, etc. [11].

2.2 Haptic Feedback with Current Interaction Techniques

Haptic feedback has been added to interaction techniques before, especially in mice that use haptic feedback to signal certain events to the user, such as the cursor reaching a certain point or entering a target, as in the study by Akamatsu, MacKenzie and Hasbroucq [14]. However, the results did not prove significant: although error rates were reduced, overall pointing time was not improved [4]. Dennerlein, Martin and Hasser [15] did achieve improved performance in a task where the cursor was moved down a “tunnel” to a target, but it must be noted that this path was more restricted than general pointing.

There have also been attempts to add haptic effects to GUI features such as window borders, buttons and checkboxes, with forces used to pull the pointer towards a target or to keep it on the target once reached [16]; however, it is suggested in [4] that neither of those studies reports an empirical evaluation of its design.

Haptic devices have been around for a while, and as the technology has advanced, their costs have dropped drastically, making them widely available and more accessible. In addition to studies using a force-feedback mouse to provide haptic interaction, there have been attempts to utilize a haptic device such as the PHANToM to manipulate desktop-like GUIs. An interaction technique called Pokespace [4] uses a Sensable PHANToM device operated with the non-dominant hand while the mouse is used with the dominant hand. In that technique, the haptic device is used to select a tool and alter its parameters (e.g., font style), while the mouse is used to point at where the selected tool is to be applied. The technique features a haptic wall acting as a backstop to indicate that the cursor has moved far enough to select the desired command (out of 8 possible commands). The results indicated that although haptics can provide feedback strong enough to perform selection without visual feedback, users were faster with the traditional interface. Nevertheless, Pokespace is an important technique in that it showed haptic interfaces can be used without visual attention, letting users focus on their primary goals.

Another study utilized a haptic interface, the PHANToM, for cursor control in a menu system similar to the Microsoft Windows Start Menu, where three conditions were tested: Visual, Haptic and Adjusted. The visual condition featured no haptic enhancement; the other two incorporated haptic feedback for menu items, lining them with walls to produce tunnel-like feedback, with the adjusted condition using reduced forces, providing weak forces opposing the user’s motion and strong forces supporting it [5]. Users were asked to click a start button and select a menu item. Results showed that the adjusted condition produced the “best of both worlds”: fewer target selection errors, as in the haptic condition, while maintaining the speed of the visual condition [5].

Based on the results of the attempts mentioned above, it is suggested in [4] that new interaction techniques must be designed from scratch, taking the strengths and weaknesses of the haptic and motor systems into consideration; the previously described techniques, it is argued, were simply haptic decorations of existing interaction techniques.

2.3 Task Switching

Task switching occurs when one has to switch between different tasks, although the generic definition of the term “task” is rather debatable [18]. In practice, the tasks performed in such experiments need to involve some specified mental operation or action as a response to a stimulus input [18]. Switch cost, preparation effect, residual cost and mixing cost are the four phenomena directly associated with task switching.

Switch Cost.

Response initiation usually takes longer on a switch trial than on a non-switch trial, and the error rate is also usually higher after a task switch.

Preparation Effect.

The average switch cost is often reduced when advance knowledge about the upcoming task is given.

Residual Cost.

Although preparation reduces the switch cost, it does not eliminate it completely. The reduction in switch cost appears to reach an asymptote, and substantial residual costs have been reported even when more than 5 s of preparation is allowed [19].

Mixing Cost.

Although performance recovers quickly after a switch, responses remain slower than when only one task has to be performed.
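To make these definitions concrete, the following minimal sketch computes switch and mixing costs from a trial log; the data layout and function names are our assumptions, not part of the cited work:

```python
def mean(values):
    return sum(values) / len(values)

def switch_and_mixing_costs(mixed_trials, single_task_rts):
    """Compute switch cost and mixing cost from reaction times (RTs).

    mixed_trials: list of (task_label, rt) pairs from a mixed-task block
    single_task_rts: RTs from a pure block containing only one task
    """
    switch_rts, repeat_rts = [], []
    for prev, cur in zip(mixed_trials, mixed_trials[1:]):
        (repeat_rts if cur[0] == prev[0] else switch_rts).append(cur[1])
    # Switch cost: switch trials are slower than repetition trials.
    switch_cost = mean(switch_rts) - mean(repeat_rts)
    # Mixing cost: even repetition trials in a mixed block are slower
    # than trials in a single-task block.
    mixing_cost = mean(repeat_rts) - mean(single_task_rts)
    return switch_cost, mixing_cost
```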

To change tasks, a process or processes of task-set reconfiguration (TSR) must occur. This can consist of shifting attention between stimulus attributes or elements, or between conceptual criteria; retrieving what to do and how to do it into procedural working memory; and enabling a different response set [19].

There are different experimental paradigms for task switching, such as predictable task switching, task cueing, intermittent instructions, voluntary task selection, and comparing mixed-task blocks with single-task blocks, although the last is rarely used due to the criticism it has received [18].

In predictable task sequences, also known as the alternating-runs paradigm, tasks switch in a regular manner after a fixed number of trials (a run) involving the same task; for instance, task switches occur on every second trial in an AABBAABB sequence [16]. This paradigm revealed that switch trials had increased reaction times and error rates compared to repetition trials [18].

The task-cueing paradigm with unpredictable sequences was developed as an alternative to predictable sequences. In this paradigm, the order of the tasks is random, and hence so is the order of task switches and repetitions. Performance is usually worse on switch trials than on repetition trials, as in the predictable-runs paradigm, but this paradigm differs in that response times are further reduced when the same task is repeated several times [18]. Performance also depends on the type of cue given, transparent (e.g., word cues) or non-transparent (cues that participants need to learn). Several studies have shown that switch costs are smaller with transparent cues than with non-transparent cues [18].

In intermittent-instruction paradigms, participants perform a sequence of trials of the same task. The sequence is usually interrupted by a cue informing participants what to do in the following run of trials; the order of the task cues is also random, so that tasks either repeat or switch across sequential runs [18].

In voluntary task selection, participants decide on each trial which of two tasks to perform. Responses for the two tasks are given on separate, non-overlapping sets of keys, allowing the experimenter to deduce the chosen task. Despite the free choice, robust switch costs emerge in this paradigm [18].
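As an illustration of how the trial sequences in these paradigms differ, the following sketch generates an alternating-runs sequence and a random cued sequence and labels switch trials; the function names are ours:

```python
import random

def alternating_runs(tasks=("A", "B"), run_length=2, n_trials=16):
    """Predictable sequence, e.g. AABBAABB... for run_length = 2."""
    return [tasks[(i // run_length) % len(tasks)] for i in range(n_trials)]

def cued_random(tasks=("A", "B"), n_trials=16):
    """Task-cueing paradigm: the task on each trial is drawn at random,
    so switches and repetitions occur unpredictably."""
    return [random.choice(tasks) for _ in range(n_trials)]

def mark_switches(sequence):
    """Label every trial after the first as a switch (True) or repetition."""
    return [cur != prev for prev, cur in zip(sequence, sequence[1:])]

print(alternating_runs())                 # ['A', 'A', 'B', 'B', 'A', ...]
print(mark_switches(alternating_runs()))  # a switch on every second trial
```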

An important diary study analyzed the task switches and interruptions participants encountered over a week and discussed designs to support task switching and recovery [20]. The study focused on the multitasking of information workers and the causes of task interruptions, and proposed design prototypes to support multitasking. It is suggested in [20] that “methods for capturing and remembering representations of tasks may be valuable in both reminding users about suspended tasks, and in assisting users to switch among the tasks”. Although the study is important, it focuses on interruptions caused by systems, and on tools that help workers remember pending tasks by providing ways to organize and group them, in addition to visual cues.

3 Research Procedure

For this study, a user interface was developed to better understand the behavior of participants while performing tasks with a mouse or a haptic interface. As shown in Fig. 1, participants start the experiment by entering their first and last names (the fields are labeled “isim” and “soyisim”, Turkish for first name and surname). However, participants’ names are not stored; instead, an ID is assigned to each participant for the records.

Fig. 1. Entering the system

The experiment is organized into two groups of tasks. The first group was designed to better understand the process of switching between the mouse and the haptic interface. Accordingly, as shown in Table 1, participants clicked different buttons shown at different locations on the screen.

Table 1. Task switching between haptic device and mouse

As shown in Fig. 2, a button first had to be clicked twice using the haptic device (H). When the haptic cursor is over the button, the button turns green to guide the participant to click it. The buttons on the haptic device were used for clicking.

Fig. 2. Clicking buttons with the haptic device
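The hover highlighting shown in Fig. 2 amounts to a simple hit test. A minimal sketch follows; the coordinate convention and the default color are our assumptions, not details taken from the experiment software:

```python
def cursor_on_button(cx, cy, bx, by, bw, bh):
    """Axis-aligned hit test: is the cursor at (cx, cy) inside the
    button rectangle with top-left corner (bx, by) and size (bw, bh)?"""
    return bx <= cx <= bx + bw and by <= cy <= by + bh

def button_color(cursor, button):
    """Return 'green' while the haptic cursor hovers over the button,
    as in Fig. 2; 'gray' is an assumed default color."""
    cx, cy = cursor
    bx, by, bw, bh = button
    return "green" if cursor_on_button(cx, cy, bx, by, bw, bh) else "gray"
```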

After the participant clicks two buttons on the screen with the haptic device, two further buttons must be clicked with the mouse, as shown in Fig. 3.

Fig. 3. Clicking buttons with the haptic device and the mouse

The second group of tasks was designed to better understand the process of switching between the “drag & drop” and “click on the button” tasks using the haptic device and the mouse. In this group of experiments, as shown in Table 2, participants first used the haptic interface to perform the Click on Button (B) task or the Drag & Drop (D) task.

Table 2. Task switching between haptic device and mouse

As shown in Fig. 4, in the Drag & Drop task participants were asked to drag the circle into the dashed circle area, using either the mouse or the haptic device. The input device to be used for the task was indicated at the top of the screen.

Fig. 4. Drag & drop task using the haptic device and the mouse

As shown in Fig. 5, when the haptic or mouse cursor is over the circle to be dragged and dropped, the color of the circle changes to blue.

Fig. 5. Drag & drop using the haptic device
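A plausible completion check for the drag & drop task is sketched below; the center-distance criterion and the tolerance parameter are our assumptions, and the actual experiment software may use a different rule:

```python
import math

def drop_completed(circle, target, tolerance):
    """The trial succeeds when the dragged circle's center lies within
    `tolerance` of the dashed target circle's center; both arguments
    are (x, y) positions."""
    (cx, cy), (tx, ty) = circle, target
    return math.hypot(cx - tx, cy - ty) <= tolerance
```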

However, while tasks were being performed with the haptic interface, the haptic operation occasionally became stuck due to the limitations of haptic movement, as the device requires calibration at certain points. This happened on average 5 times out of 48 haptic task attempts. In these cases the application was stopped and restarted from the two preceding tasks. Owing to these technical limitations, the calibration problem could not be solved in this experiment.

4 Results

A two-way repeated measures analysis of variance (ANOVA) was conducted to examine the effect of task type and task switching on reaction time. The within-subjects variables were task type, with two levels (click, drag-and-drop), and task switching, with two levels (no switching, switching). There was a significant main effect of task type, F(1, 29) = 96.46, p < .001, partial η2 = .77, a very large effect size. The main effect of task switching was also significant, F(1, 29) = 418.41, p < .001, partial η2 = .94, a very large effect size. The interaction between task type and task switching was significant as well, F(1, 29) = 12.81, p = .001, partial η2 = .31, a large effect size. Planned comparisons were carried out between the switching and no-switching trials for each task type. Separate paired-samples t tests showed that participants spent more time in the switching condition than in the no-switching condition in both the click task, t(29) = 18.01, p < .001, and the drag-and-drop task, t(29) = 20.32, p < .001 (see Fig. 6).

Fig. 6. Reaction times in the click and drag-and-drop tasks for switching and no-switching trials

A separate two-way repeated measures ANOVA was run to examine the effect of input device and task switching on reaction time. The within-subjects variables were input device, with two levels (mouse, haptic), and task switching, with two levels (no switching, switching). There was a significant main effect of input device, F(1, 29) = 92.35, p < .001, partial η2 = .76, a very large effect size. The main effect of task switching was also significant, F(1, 29) = 13.96, p = .001, partial η2 = .33, a large effect size. The interaction between input device and task switching was significant, F(1, 29) = 77.52, p < .001, partial η2 = .73, also a very large effect size. Planned comparisons were carried out between the switching and no-switching trials for each input device. Separate paired-samples t tests indicated that participants spent more time in the switching condition than in the no-switching condition when the input device was the mouse, t(29) = 12.16, p < .001 (see Fig. 7). However, the effect of task switching only approached significance when the input device was the haptic device, t(29) = -1.96, p = .06. Contrary to expectations, there was a tendency towards higher reaction times in the no-switching trials than in the switching trials (see Fig. 7).

Fig. 7. Reaction times with the mouse and the haptic device for switching and no-switching trials
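The analyses above could be reproduced with standard statistical tooling. The following is a minimal sketch assuming a long-format CSV with hypothetical column names; we do not know which software was actually used for the original analysis:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Assumed layout: one mean RT per participant per condition, with
# columns subject, device ('mouse'/'haptic'), switching
# ('switch'/'no_switch') and rt. The file name is hypothetical.
df = pd.read_csv("reaction_times.csv")

# Two-way repeated measures ANOVA: input device x task switching.
# (The task type x task switching ANOVA is run analogously.)
anova = AnovaRM(df, depvar="rt", subject="subject",
                within=["device", "switching"]).fit()
print(anova)

# Planned comparison: paired-samples t test within one device.
mouse = df[df["device"] == "mouse"].sort_values("subject")
t, p = stats.ttest_rel(mouse[mouse["switching"] == "switch"]["rt"],
                       mouse[mouse["switching"] == "no_switch"]["rt"])
print(t, p)
```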

5 Discussions and Conclusion

A variety of input devices can be used to manipulate desktop GUIs, from mice to joysticks and haptic devices such as the PHANToM. Although the mouse is a widely used input device and desktop GUIs are designed to be used with it, the interaction techniques available in desktop GUIs, such as drag-and-drop, point, click and move, can also be used with other devices such as a stylus and tablet, and there are studies showing that those devices can achieve performance similar to the mouse [10].

Although there have been attempts to incorporate haptic interfaces into the manipulation of GUIs, reducing visual cues did not improve performance, and using the traditional interface was observed to be quicker than haptic-only interaction [4]. Nevertheless, that study showed it is possible to achieve similar performance without visual attention. Another attempt at operating desktop GUIs with haptic devices showed that haptic enhancements did reduce error rates, but there was no significant improvement in speed [5]. That study is noteworthy, but limited in scope, as it aimed to provide an alternative solution to multi-target menu interaction problems by designing a haptically enhanced menu.

It is important to note that haptic interfaces have not been accepted as input devices in the same way as the computer mouse. Although there are studies, some covered in this paper, on manipulating GUIs with haptic interfaces, the haptic devices are either not used as the primary input device or are applied only to limited areas of the GUI.

This study compares the mouse and a haptic interface, using each as the sole input device in the tests and operating a GUI in both cases, rather than relying solely on haptic feedback when using the haptic interface to manipulate GUIs. Our experiment showed a significant effect of task switching for both devices, with reaction times varying across task types; switch conditions required more time. We showed that acceptable performance can be obtained when using the GUI with a haptic device, but further study and experimentation is necessary, including more participants with greater experience of haptic devices. Another important point is that desktop-like GUIs, including the GUI used in this study, are 2-D; we believe that the development of 3-D GUIs for haptic devices will provide much better performance, creating the need for further studies on the topic.

It must be taken into consideration that this study was inspired by a surgical education system in which both devices need to be used in different parts of the GUI; we believe that demonstrating this possibility is important for eliminating the split attention that occurs when switching between these input devices.