1 Introduction

Recent technological advances in computer graphics and head-mounted displays have enabled the active development of high-quality virtual reality (VR) systems, which are increasingly being adopted for new applications. Such systems allow users to view, move through, and even physically interact with objects in fully immersive virtual environments. Realistically simulating the experience of touching or grasping a virtual object requires new technology designed to present force feedback sensations that communicate the stiffness of the object. Additionally, for use in free-space virtual environments, devices that present force feedback must be wearable.

Stiffness is an important material property that is generally sensed when grasping an object. To reproduce this sensation, a haptic device must produce a force that simulates backward extension of the fingertip. Various studies have used mechanical actuators to provide grasping feedback in which the fingertips are pushed into backward extension [1, 2]. Some techniques involve grounded actuators, while others use wearable robotic mechanisms [3, 4]. However, devices that apply physical force are often large and heavy and rely on complicated mechanisms.

To overcome these issues, we developed a technique to deliver a pseudo-force sensation to the fingertips. We previously reported that a DC motor can produce an illusionary rotational force sensation when the input voltage is asymmetric (i.e., a sawtooth waveform) [5]. We also mounted DC motors on the backs of the thumb and index finger and confirmed that the pseudo-force sensation occurs in multiple fingers simultaneously [6]. Here, we applied this technique to the presentation of feedback during grasping of a virtual object. Our haptic feedback device, shown in Fig. 1, is mechanically simple, compact, and lightweight. We also developed a 3D virtual reality system that enables users to perceive stiffness via both haptic and visual feedback.

Fig. 1. A 3D virtual reality system that uses pseudo-force perception produced by DC motor rotational acceleration to present stiffness feedback sensations to the tips of the thumb and index finger during grasping of a virtual tube. We modulated the visual feedback by changing the cross-sectional shape of the tube from a circle to an ellipse.

In this paper, we describe the algorithms used to produce haptic feedback via our VR glove and visual feedback via deformation of the shape of a virtual object. We also conducted an experiment to investigate whether participants could interpret the material stiffness of a virtual object during grasping. We modulated the initial vibration amplitude, which represents the reaction force when the thumb and index finger first contact the surface of the object, and asked participants to match each haptic feedback condition to a visual feedback condition. Our results showed that a stronger initial vibration represented harder, more rigid materials.

2 Related Work

2.1 Exploring Virtual Objects via Haptic Feedback Delivered to the Fingertip

Several studies have examined the efficacy of vibration feedback during exploration of the surface or shape of a virtual object [7,8,9]. However, vibration actuators cannot deliver force sensations and thus cannot communicate the stiffness properties of an object. Other haptic technologies have used grounded actuators to produce force feedback [3, 4, 10]. The stiffness of a virtual object, from very soft to hard, can be communicated to the user by outputting a force through an end effector. However, such devices are tool-based and limit the movement of the user’s fingers. Further, while some grounded actuators allow finger movements with more degrees of freedom, these devices are relatively large [11, 12].

Several researchers have developed wearable devices that can deliver grasping-force feedback to the fingertips without imposing workspace limitations [1, 2]. Although these devices, which typically include an exoskeleton, can deliver a reaction force to the fingertips, they are often large and heavy because they contain robotic mechanisms. Other devices provide tactile feedback to the finger pads via skin deformation [13,14,15]. Although skin deformation has been reported to enhance the sensation of stiffness, cutaneous stimulation devices cannot present the sensation of a force that extends the finger backward. Reproduction of the sensation of grasping an object in virtual reality is therefore limited.

2.2 Delivering Pseudo-force to a Fingertip via a Vibration Actuator

Previous studies have used physical force to present grasping feedback sensations to the fingertips. However, because such methods generally require an actuator with high power output and relatively large size and weight, we sought to deliver an illusionary sensation instead of a physical force. Previous studies on human perception have reported that reciprocating asymmetric vibrations with different accelerations can elicit the sense of being pulled in a particular direction [16,17,18,19,20]. This is called a pseudo-force sensation. Linear vibration actuators that have been found to produce an illusionary force sensation include the voice coil [21], Haptuator (Tactile Labs Inc.) [22], and Force Reactor (Alps Electric Co.) [23].

Similarly, we previously reported that when driven by a sawtooth waveform, a DC motor rotates asymmetrically, producing a rotational pseudo-force sensation [5]. In addition, we developed a fingertip glove in which a DC motor is mounted on the tip of the index finger. We found that an illusionary force could be felt in the finger even when the user was not gripping the vibration actuator. This illusionary force can induce fingertip forward flexion or backward extension, so we considered it useful for the development of a VR glove. Our previous experiment showed that the illusionary force is comparable to the reaction force of grasping a real object, corresponding to a weight of 10 to 30 g [6].

3 System

Figure 1 shows the system used in our study, which consisted of (1) two fingertip gloves that present a pseudo-force sensation to the fingertips, (2) a virtual environment that provides visual feedback, and (3) a motion capture device (Leap Motion, Inc.) that measures finger movements.

3.1 Fingertip Glove and Hardware

The fingertip glove comprises a DC motor (Maxon 118396) and an attachment for fixing the motor to the fingertip, which was made of titanium using a three-dimensional printer. The attachment mounts the motor such that its shaft rotates in the pitch direction with respect to the fingertip, and the glove can be adjusted to the size of the user’s finger. The pseudo-force sensation was presented along the pitch axis, so the user felt a force in the finger directed from the finger pad toward the fingernail (Fig. 2).

Fig. 2. A fingertip glove without the DC motor (left) and two gloves attached to the tips of the thumb and index finger (right).

The vibration waveform for driving the DC motors was generated by a microcontroller (mbed LPC1768, NXP Semiconductors) and amplified with a power operational amplifier (OPA2544T, Texas Instruments Inc.). Figure 3 shows the control hardware and the asymmetric waveform used to produce the pseudo-force sensation. The microcontroller communicated with a computer via a serial connection, and the vibration amplitude could be adjusted from the computer. The outgoing data was updated every 20 ms, matching the frame rate of the visual feedback.

Fig. 3. Hardware for driving the DC motors (left) and the asymmetric vibration waveform (right).
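
To make the drive signal concrete, the following is a minimal sketch of how an asymmetric (sawtooth-like) voltage waveform of the kind shown in Fig. 3 could be generated. The frequency, asymmetry ratio, and sample rate are illustrative assumptions; the paper does not report the exact drive parameters, and this is not the authors' firmware.

```python
import numpy as np

def asymmetric_sawtooth(amplitude, frequency=75.0, rise_ratio=0.9,
                        sample_rate=10_000):
    """Generate one period of an asymmetric drive voltage.

    A slow ramp followed by a fast reset yields the asymmetric
    acceleration profile used to elicit the pseudo-force sensation.
    frequency, rise_ratio, and sample_rate are illustrative values,
    not parameters taken from the paper.
    """
    period = 1.0 / frequency
    n = int(sample_rate * period)
    n_rise = int(n * rise_ratio)            # slow segment (low acceleration)
    n_fall = n - n_rise                     # fast segment (high acceleration)
    rise = np.linspace(-amplitude, amplitude, n_rise, endpoint=False)
    fall = np.linspace(amplitude, -amplitude, n_fall, endpoint=False)
    return np.concatenate([rise, fall])

# Example: one period at an assumed maximum amplitude of 5 V.
waveform = asymmetric_sawtooth(amplitude=5.0)
```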

3.2 Visual Feedback and Algorithm

The movement of the user’s virtual fingers in the virtual environment should follow the position of their real fingers, as measured by the motion capture device. However, because no physical force resists the fingers, the virtual fingers can easily move inside the virtual rigid body of an object when the user attempts to touch or grasp it (Fig. 4 [left]). This is a common issue for wearable haptic devices. In the current study, we developed an algorithm to address this issue as follows. When the user contacts a virtual object with one finger, this is classified as a pushing state, and the object moves in the direction of the pushing movement (Fig. 4 [right]). When the user contacts the object with two fingers (i.e., the thumb and index finger), this is classified as a grasping state, and the object follows the movement of the palm (Fig. 5 [left]). We made the virtual fingers invisible when they moved inside the virtual rigid body of the object and showed a copy of the fingers grasping the surface of the object (Fig. 5 [right]).

Fig. 4. Representation of a common issue in which the virtual finger moves inside the virtual rigid body (left), and the pushing state prior to grasping the object (right).

Fig. 5. The proposed algorithm for maintaining thumb and index finger contact with the surface of the virtual object (left) and deformation of the virtual object when grasping (right).
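
As an illustration of the contact-state logic described above, the following minimal sketch classifies each frame as free, pushing, or grasping and accumulates the penetration distance that the feedback laws use. The data structures and example values are hypothetical and do not reproduce the authors' actual implementation.

```python
from dataclasses import dataclass
from enum import Enum

class ContactState(Enum):
    FREE = 0      # no finger touches the object
    PUSHING = 1   # one finger in contact: the object moves with the push
    GRASPING = 2  # thumb and index both in contact: the object follows the palm

@dataclass
class FingerContact:
    in_contact: bool      # fingertip has reached or penetrated the object surface
    penetration: float    # penetration depth into the object (m)

def classify_contact(thumb: FingerContact, index: FingerContact):
    """Return the contact state and the total penetration distance.

    The total penetration (summed over both fingers) is the quantity
    fed into the haptic law (Eq. 1) and the visual deformation law (Eq. 2).
    In the grasping state, the penetrating fingers are hidden and replaced
    by visible copies clamped to the object surface.
    """
    contacts = [f for f in (thumb, index) if f.in_contact]
    delta_x = sum(f.penetration for f in contacts)
    if len(contacts) == 2:
        return ContactState.GRASPING, delta_x
    if len(contacts) == 1:
        return ContactState.PUSHING, delta_x
    return ContactState.FREE, 0.0

# Example: thumb 3 mm and index 2 mm inside the object -> grasping state.
state, dx = classify_contact(FingerContact(True, 0.003), FingerContact(True, 0.002))
```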

We used the following equation to determine the strength of the force feedback on the fingertips, expressed as the vibration amplitude of the input voltage.

$$ V = \begin{cases} k_{haptic}\,\Delta x + V_{0} & (\text{if } V < V_{max}) \\ V_{max} & (\text{if } V \ge V_{max}) \end{cases} $$
(1)

where \( V \) is the asymmetric vibration amplitude of the input voltage, \( k_{haptic} \) is a constant spring coefficient for the haptic feedback, and \( V_{0} \) is the initial vibration amplitude that presents force feedback when the thumb and index finger first contact the surface of the object. \( \Delta x \) is the total distance that the thumb and index finger move inside the object when grasping it (Fig. 5), and \( V_{max} \) is a constant that limits the vibration amplitude.
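
Read as code, Eq. (1) is a linear spring term plus an initial offset, clamped at \( V_{max} \). The sketch below is our illustration of that mapping; the numerical values in the example are placeholders, not parameters reported in the paper.

```python
def haptic_amplitude(delta_x, k_haptic, v0, v_max):
    """Eq. (1): vibration amplitude of the asymmetric drive voltage.

    delta_x  -- total distance the thumb and index finger move inside
                the object while grasping (m)
    k_haptic -- spring coefficient for the haptic feedback (V/m)
    v0       -- initial amplitude presented at first contact (V)
    v_max    -- upper limit on the vibration amplitude (V)
    """
    return min(k_haptic * delta_x + v0, v_max)

# Example with placeholder values: V_0 = 0.5 * V_max and V_max = 5 V.
v = haptic_amplitude(delta_x=0.004, k_haptic=300.0, v0=2.5, v_max=5.0)
```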

The surface of the virtual object starts to deform when the thumb and index finger apply a grasping force. The amount of deformation is proportional to the total distance that the thumb and index finger move inside the object and is expressed by the following equation.

$$ \Delta d = d_{visual}\,\Delta x $$
(2)

where \( \Delta d \) is the total deformation distance between the two contact points on the skin of the thumb and index finger, and \( d_{visual} \) is a factor that represents the deformation property of the visual feedback. The virtual object becomes a non-deformable rigid body when \( d_{visual} \) is zero. Figure 6 shows the invisible fingers and the finger copies when the deformable tube is pressed in our virtual environment.

Fig. 6. When the user’s virtual fingers entered the rigid virtual object in the virtual environment (left), we made them invisible and showed copy fingers grasping the surface of the object (right).
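
Equation (2) can be read the same way: a minimal sketch in which the penetration distance is scaled by \( d_{visual} \), and \( d_{visual} = 0 \) renders the object as rigid. The example value matches one of the deformation conditions used in task 2 below; the penetration value itself is a placeholder.

```python
def visual_deformation(delta_x, d_visual):
    """Eq. (2): visual deformation between the two contact points.

    delta_x  -- total penetration distance of thumb and index finger (m)
    d_visual -- deformation factor of the visual feedback
                (0.0 means the object is rendered as a rigid body)
    """
    return d_visual * delta_x

# Example: d_visual = 0.4, the most deformable condition in task 2.
delta_d = visual_deformation(delta_x=0.004, d_visual=0.4)
```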

4 Experiment

We conducted an experiment in which participants completed a matching task, pairing a haptic feedback condition with a visual feedback condition. Our goal was to determine whether participants could interpret the material of a virtual object from the corresponding haptic feedback. There were two tasks. In the first task, we investigated whether the pseudo-force provided sufficient information about the material of the object, such as whether it was made of rubber or metal. In the second task, we investigated how participants matched the degree of virtual object deformation to the initial vibration amplitude \( V_{0} \).

4.1 Design

In the first task, there were two initial vibration amplitudes (\( V_{0} = 0 \), \( V_{0} = V_{max} \)) and virtual cylindrical objects made of three types of material (rubber, wood, aluminum) (Fig. 7). In the second task, there were three initial vibration amplitudes (\( V_{0} = 0 \), \( V_{0} = 0.5V_{max} \), \( V_{0} = V_{max} \)) and three deformation factors for a virtual tube (\( d_{visual} = 0.0 \), \( d_{visual} = 0.2 \), \( d_{visual} = 0.4 \)) (Fig. 8). \( d_{visual} = 0.0 \) corresponds to a non-deformable tube. The value of \( k_{haptic} \) was fixed across all haptic feedback conditions.

Fig. 7. Task 1: two asymmetric vibration amplitudes (left) and cylindrical virtual objects made of three types of material (right).

Fig. 8. Task 2: three asymmetric vibration amplitudes (left) and three deformation factors for a virtual tube (right).

4.2 Participants and Procedure

Six participants took part in this experiment: five males and one female, ranging in age from 22 to 25 years. All participants were right-handed.

At the beginning of the experiment, participants were seated in a chair and the two fingertip gloves were attached to the thumb and index finger of their right hand. As shown in Fig. 1, each participant moved their right hand within range of the motion capture system (Leap Motion) to grasp a virtual object shown on a monitor. Participants were instructed to grasp the virtual object with the tips of their thumb and index finger. We asked them to grasp the object once or twice in each haptic feedback condition and then in each visual feedback condition, so that they had experienced all of the haptic and visual feedback conditions in each task before matching them. For the matching, we asked them to choose the visual feedback condition that they considered the most appropriate match for each haptic feedback condition presented. We explained that there was no correct answer and asked them simply to follow their perception. We allowed them to adjust their answers if they felt they had made a mistake.

4.3 Results

Figures 9 and 10 show the results for tasks 1 and 2, respectively. The horizontal axis in each figure shows the haptic feedback conditions in terms of initial vibration amplitude. The vertical axis in Fig. 9 shows the participant responses in terms of the three materials (rubber = 1, wood = 2, and aluminum = 3). The vertical axis in Fig. 10 shows the participant responses in terms of the three deformation factors \( d_{visual} \).

Fig. 9. Results for task 1: matching each virtual material to each haptic feedback condition.

Fig. 10. Results for task 2: matching each deformation factor to each haptic feedback condition.

5 Discussion

The results of task 1 show that when the initial vibration amplitude \( V_{0} \) was zero, most participants matched this haptic feedback condition to the rubber or wood object. In contrast, when \( V_{0} \) was equal to \( V_{max} \), most participants matched this condition to the wood or aluminum object. In this task, the virtual object did not deform when touched, so the participants received visual feedback only about the color of the material. Our data indicate that the participants interpreted the stimulation as touching a harder material when the initial vibration amplitude was higher. Thus, even without visual feedback about object deformation, the participants clearly interpreted the aluminum object as harder than the wood or rubber objects.

In task 2, we presented a virtual object that deformed when grasped with the thumb and index finger: a tube whose cross-section changed from a circle to an ellipse. When the initial vibration amplitude \( V_{0} \) was zero, most participants paired this condition with the largest deformation factor. When \( V_{0} \) was equal to \( V_{max} \), most participants matched it with the non-deformable object. Thus, a lower initial vibration amplitude elicited the perception of a more deformable, softer material.

The results of tasks 1 and 2 showed that, when the initial vibration amplitude was low (or even zero), participants chose the softer materials (rubber or wood) in the material-matching task and the more deformable object in the shape-deformation task. Therefore, the same haptic feedback stimulus may support multiple stiffness perceptions, depending on the visual feedback presented.

In this experiment, all participants interpreted a higher initial vibration amplitude to represent harder material. This initial vibration amplitude provides important information about the strength of the pseudo-force sensation when the thumb and index finger initially contact the surface of the object. Thus, the reaction force during the initial contact is important for presenting the stiffness (softness or hardness) property of the material.

6 Conclusion

We developed a system in which two fingertip gloves with DC motors deliver a pseudo-force sensation to the tips of the thumb and index finger, and we examined its utility for presenting grasping feedback and the sensation of material stiffness. We also developed a 3D virtual reality system that allows users to grasp objects in the virtual environment. To address the common issue of a virtual finger entering a virtual rigid body, we proposed an algorithm in which the finger inside the object is rendered invisible and replaced by a copy that moves on the surface of the object.

We tested our system using two tasks. When the initial vibration was weak, most participants in task 1 interpreted the object as being made of rubber or wood, and most participants in task 2 interpreted the material as highly deformable. When the initial vibration was strong, they interpreted the material as wood or aluminum (task 1) and as non-deformable (task 2). In future work, we plan to examine the cross-modal relationship between haptic and visual feedback.