1 Introduction

The percentage of people over 65 years of age worldwide is projected to increase from eight to twelve percent by the year 2030 [1], which will further increase the already high demand for elderly care. To address this issue, the United Nations held the World Assembly on Aging in 2002, which had, as one of its main topics, the objective of providing enabling and supportive environments for the elderly [2].

To address the workforce shortage in elderly care, the governments of countries such as Japan, the US, and Germany have been encouraging the introduction of robots into nursing homes. Recent efforts toward this goal can be seen in the work of Nagai et al. [3], which analyzes the challenges of introducing robots into these environments. Pineau et al. introduced a robot assistant that autonomously guided the elderly and also reminded them of their schedules [4]. In Germany, the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA) developed the Care-O-bot 3® with the objective of assisting elderly people in domestic environments [5]. The need for such domestic service robots can also be seen in the RoboCup@Home competition [6], especially in the “Emergency situation” scenario, where robots deal with an accident in a home environment.

Despite the introduction of robots into domestic environments, their physical interaction with humans remains limited, even though this skill is crucial for robots to eventually become reliable caregivers. Robots have to operate safely in these highly dynamic and uncertain environments. Manipulation under such conditions, while extremely complicated for robots, is performed effortlessly by humans. This proficiency depends highly on the tactile sensing abilities humans employ while executing manipulation tasks [7]. Based on this insight, and aided by improvements in tactile sensing technology, robotics researchers have produced algorithms inspired by human tactile sensing [8] and have used tactile feedback to reactively adjust grasps [9, 10]. Although most of these approaches allow robots to interact physically in a domestic environment, their main concern is the manipulation of objects.

With the long-term goal of enabling a robot, namely a Care-O-bot 3®, to safely interact with humans (e.g., guiding people with vision impairment in a nursing home), we develop a grasping approach that considers the pressure exerted during manipulation in order to prevent the application of excessive grasping forces. The pressure information provided by the tactile sensors of the SDH-2 hand is used as a feedback signal to control the fingers’ motion and react to contacts. In addition to the tactile information, the force-torque sensors of the manipulator are used to detect contacts between the robot’s arm and its environment. Furthermore, the high-level control of our implementation is based on the phases observed in human manipulation.

To validate our work, we recorded empirical data of grasps on a set of objects with distinct features such as hardness, shape, and size. Our approach reduced the force exerted on the grasped objects by at least half; in one particular case, the applied force was reduced by a factor of 20 while the grasp was still executed successfully. We also analyze the limitations of this approach and compare its performance to an open-loop grasping approach. An early version of this work was demonstrated during the RoboCup@Home German Open 2013 competition in Magdeburg and the RoboCup@Home World Championship 2013 in Eindhoven.

The remainder of the paper is organized as follows. Section 2 provides a brief description of human grasping and the tactile information involved, as well as current applications of tactile sensors in robotics. Section 3 describes our approach and the hardware it uses. In Sect. 4 the evaluation method is detailed and the obtained results are reported. A summary of the paper is presented in Sect. 5.

2 Related Work

2.1 Human Manipulation

Johansson and Flanagan pointed out the importance of tactile signals during human manipulation [7]. Johansson denotes these tactile signals as tactile afferents. They can terminate at skin level (type I) or deeper, at the dermis (type II), and they can have fast or slow frequency responses. Thus, the tactile afferents used by the hand are: fast-adapting type I (FA-I), slow-adapting type I (SA-I), fast-adapting type II (FA-II), and slow-adapting type II (SA-II). Besides studying these tactile signals, they analyzed the phases involved in a manipulation task. The phases of a simple pick-and-place task, as described by Johansson in [7], are:

  1. Reach: Fingers make contact with the object and FA-I afferents are activated.

  2. Load: Enough force is applied to the object to obtain a firm grip. During this phase the SA-I and SA-II afferents are triggered.

  3. Lift: The object is lifted off the support surface and the FA-II afferents are activated.

  4. Hold: Forces are applied to the object to prevent its slippage. SA-I and SA-II afferents are activated in this phase.

  5. Replace: The object makes contact with the support surface and the FA-II afferents are triggered in this phase.

  6. Unload: The fingers release the object and FA-I afferents are activated.

2.2 Tactile Sensing in Robotics

Robots with tactile sensors have recently been used in object recognition [11], evaluation of grasp stability [12], and grasp adjustment. Our review of related work focuses on the latter application.

Hsiao et al. [9] apply corrective actions, using the tactile information of a PR2 gripper, to improve the location of the contacts. They define corrective actions such as opening the PR2 gripper when a contact is sensed and moving the wrist in the direction of the sensed contact. This approach is able to compensate for position errors and thereby yields better grasp stability. Prats [10] also improved the performance of a robotic control system by adding tactile information as feedback to an approach that previously considered only visual and force signals. The tactile feedback drives a controller that moves three degrees of freedom of a robotic arm to open a sliding door. Romano et al. [8] developed an approach, also inspired by human manipulation, that uses tactile sensors to design low-level signals and control loops mimicking the FA-I, SA-I, and FA-II afferents. However, their implementation is specific to the PR2 gripper, a parallel-jaw gripper with only one actuator. We therefore seek to extend their work to control a gripper with more than one degree of freedom, e.g., an SDH-2. Compliant grasps have also been achieved without the use of tactile sensing [13].

3 Approach

3.1 Hardware

The SDH-2 is a servo-electric 3-finger gripping hand with seven degrees of freedom (DoF). The three fingers are actuated by two joints each: one rooted at the hand’s palm and the other in the middle of the finger. Both of these joints have a range of motion of -90\(^\circ \) to +90\(^\circ \) and enable the extension and flexion of the fingers. The seventh actuator rotates two fingers simultaneously in opposite directions, generating an abduction or adduction movement; its range of motion is 0\(^\circ \) to +90\(^\circ \). Figure 1 depicts how these motions are executed by both a human finger and an SDH-2 finger. Moreover, each finger has two phalanges: a proximal phalanx, closer to the palm, and a distal phalanx, further away from the palm. Each phalanx is equipped with a tactile sensor matrix.
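For concreteness, the kinematic layout described above can be summarized as follows; the joint names in this sketch are illustrative placeholders, while the ranges are taken from the description.

```python
# Kinematic summary of the SDH-2 (angles in degrees). Joint names are
# illustrative; the ranges follow the description above.
SDH2_JOINT_RANGES_DEG = {
    'finger1_proximal': (-90, 90), 'finger1_distal': (-90, 90),
    'finger2_proximal': (-90, 90), 'finger2_distal': (-90, 90),
    'finger3_proximal': (-90, 90), 'finger3_distal': (-90, 90),
    'rotation': (0, 90),  # simultaneous abduction/adduction of two fingers
}
```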

Fig. 1. Flexion/extension and abduction/adduction motions of (a) the human hand [14] and (b) the SDH-2.

The six tactile sensors from Weiss Robotics [15] provide the contact information. This information is represented as a matrix that contains either \(6 \times 14\) tactile elements (tactels), for the proximal phalanges, or \(6 \times 13\) tactels, for the distal phalanges. Each tactel produces an integer value between \(0\), when there is no pressure, and \(4095\), the maximum value, which represents \(250\) kPa. The tactels in the proximal phalanges have identical sizes of \(3.4 \times 3.4\) mm. However, the sizes of the tactels in the distal phalanges vary slightly because the tactile arrays are curved. To simplify the calculations, the size of all tactels is assumed to be the same. Figure 2 shows a diagram of a tactile sensor together with a visualization of a contact sample.
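As an illustration, the raw readings can be converted to physical pressure as sketched below; the constants follow the sensor description above, while the function name and the uniform tactel-area assumption are ours.

```python
import numpy as np

MAX_RAW = 4095                   # maximum integer reading of a tactel
MAX_PRESSURE_KPA = 250.0         # pressure represented by MAX_RAW
TACTEL_AREA_M2 = (3.4e-3) ** 2   # 3.4 x 3.4 mm, assumed for all tactels

def raw_to_kpa(raw_matrix: np.ndarray) -> np.ndarray:
    """Convert a 6x14 (proximal) or 6x13 (distal) matrix of raw
    tactel values into pressure values in kPa."""
    return raw_matrix.astype(float) * (MAX_PRESSURE_KPA / MAX_RAW)
```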

Fig. 2. Left: a diagram of a tactile sensor; right: the pressure profile of a contact.

The SDH-2 is mounted on a KUKA Lightweight Robot with seven DoF, which provides torque signals for each of its joints [16].

3.2 Tactile Signal Processing

Following the idea, proposed in [12], of treating the information produced by the tactile sensors as grayscale images, we process the tactile data both online and offline. The online signal processing is used to monitor the pressure applied to a grasped object, while the offline signal processing is used to calculate the force exerted by the grasp. Both are detailed next.

Online Processing. The online signal processing uses the following algorithms:

  • detect_contacts: Given a number of tactile arrays, with their respective threshold values, it returns a Boolean array with one element per tactile array. A \(0\) is assigned when there is no contact, and a \(1\) indicates that at least one tactel of the corresponding array exceeds its contact threshold value.

  • detect_thresholds: This inverts the result of detect_contacts, i.e., a \(0\) in the returned array indicates a contact, while a \(1\) represents no contact.

Each element in the Boolean arrays controls the motion of a single phalanx. The output of detect_contacts selects which phalanges to move, so that only those phalanges that are in contact with the object are actuated (i.e., it is used while the phalanges are not moving). The output of detect_thresholds is used to stop the movement of the phalanges that have reached the desired contact value.
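The following NumPy sketch illustrates the two algorithms, assuming each tactile array is a 2-D integer matrix as described in Sect. 3.1; the signatures are illustrative, not our actual implementation.

```python
from typing import Sequence
import numpy as np

def detect_contacts(arrays: Sequence[np.ndarray],
                    thresholds: Sequence[float]) -> np.ndarray:
    """Return 1 for every tactile array in which at least one tactel
    exceeds its contact threshold, and 0 otherwise."""
    return np.array([int((a > t).any())
                     for a, t in zip(arrays, thresholds)])

def detect_thresholds(arrays: Sequence[np.ndarray],
                      thresholds: Sequence[float]) -> np.ndarray:
    """Inverse of detect_contacts: 0 indicates a contact, 1 indicates
    no contact (used to stop phalanges that reached the threshold)."""
    return 1 - detect_contacts(arrays, thresholds)
```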

Offline Processing. The offline processing is used after the hand has stopped moving. We follow the steps of Li et al. [17], namely:

  • Threshold: For each tactile array and its respective pressure threshold, this step sets the tactels below the threshold to zero. This thresholding is optional and removes low contact values, which may be caused by pressure applied to an adjacent tactel; due to the rubber layer covering the sensor, pressure applied to a single tactel also activates its neighbors [18].

  • Label: Using the connected-component labeling algorithm with a 4-connectivity criterion [19], this step labels the contact regions in each tactile array. Its purpose is to segment areas of contact for further classification (e.g., determining the largest or the strongest contact area).

  • Extract: This step differs from the one described in [17] by extracting the strongest, instead of the largest, contact region. The strongest region is defined by its normal force. The normal force of each contact region is calculated as \(F = P \cdot A\), where \(P\) is the normalized pressure of the contact region and \(A\) is its area. The region with the highest normal force is selected as the strongest. The normalized pressure \(P\) is calculated by multiplying the average value of the active tactels (i.e., tactels with a contact value greater than zero) by the ratio of the maximum pressure range (250 kPa) to the maximum sensor reading (4095). The area \(A\) is calculated by multiplying the area of an individual tactel by the number of active tactels. As noted in Sect. 3.1, the size of the tactels on the distal phalanges is assumed to be the same as on the proximal phalanges. A sketch of the complete offline pipeline is given after this list.

  • Locate: For each tactile array, this step calculates the centroid of the contact regions. The centroids are calculated, as suggested in [18], using the raw moment formula:

    $$\begin{aligned} M_{pq} = \sum _{x} \sum _{y} x^p y^q I(x, y) \end{aligned}$$
    (1)

    where \(x\) and \(y\) represent the coordinates of a tactel in a tactile array and \(I(x,y)\) is the intensity (i.e., pressure value) at tactel \((x,y)\). The orders of \(x\) and \(y\) are determined by \(p\) and \(q\), respectively. A centroid can then be calculated using:

    $$\begin{aligned} \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} = \frac{1}{M_{00}} \begin{bmatrix} M_{10} \\ M_{01} \end{bmatrix} \end{aligned}$$
    (2)
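A compact sketch of the complete offline pipeline (threshold, label, extract, locate) is shown below. It assumes 3.4 mm square tactels throughout; scipy.ndimage.label defaults to 4-connectivity for 2-D arrays, matching [19], and the helper name strongest_region is ours.

```python
import numpy as np
from scipy import ndimage

MAX_RAW, MAX_KPA = 4095, 250.0
TACTEL_AREA_M2 = (3.4e-3) ** 2   # assumed identical for all tactels

def strongest_region(tactile: np.ndarray, threshold: int = 0):
    """Return the normal force (N) and centroid (x0, y0) of the
    strongest contact region in a single tactile array."""
    img = np.where(tactile > threshold, tactile, 0)   # Threshold
    labels, n = ndimage.label(img)                    # Label (4-connectivity)
    best_force, best_region = 0.0, None
    for k in range(1, n + 1):                         # Extract: F = P * A
        region = np.where(labels == k, img, 0)
        active = region > 0
        p_pa = region[active].mean() * (MAX_KPA / MAX_RAW) * 1e3  # Pa
        area = active.sum() * TACTEL_AREA_M2                      # m^2
        force = p_pa * area
        if force > best_force:
            best_force, best_region = force, region
    if best_region is None:
        return 0.0, None
    m00 = best_region.sum()                           # Locate: Eqs. (1), (2)
    ys, xs = np.indices(best_region.shape)
    x0 = (xs * best_region).sum() / m00
    y0 = (ys * best_region).sum() / m00
    return best_force, (x0, y0)
```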

3.3 Architecture

Our architecture follows the human manipulation phases as described by Johansson [7]; Fig. 3 illustrates it. Note that this architecture is based on a pick-and-place task; when grasping a human, the lift/hold and place phases will differ.
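For illustration, the sequencing of these phases can be expressed as a simple ordered list of states; the phase names follow Fig. 3, while the driver function is an assumption of this sketch.

```python
from enum import Enum, auto

class Phase(Enum):
    MAKE_CONTACT = auto()
    LOAD = auto()
    LIFT_HOLD = auto()
    PLACE = auto()
    UNLOAD = auto()

PICK_AND_PLACE = [Phase.MAKE_CONTACT, Phase.LOAD, Phase.LIFT_HOLD,
                  Phase.PLACE, Phase.UNLOAD]

def run(phases, execute):
    """Run each phase to completion; `execute` is assumed to block
    until the phase's termination condition holds (e.g., all joint
    velocities set to zero)."""
    for phase in phases:
        execute(phase)
```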

Fig. 3. Phase-based architecture, inspired by human manipulation. The colored boxes indicate the phases implemented in this work.

The make_contact phases move the phalanges (first the proximal phalanges, followed by the distal phalanges) from their initial, open configuration to a desired closed configuration. Each phalanx is controlled by the loop shown in Fig. 4, where \(P\) is the pressure of the highest-valued tactel in the tactile sensor, \(P_{ref}\) is the pressure threshold, and \(\dot{\theta }\) is the desired joint velocity. The controller is a simple bang-bang controller that sets \(\dot{\theta } = 0\) when \(P_{err} \le 0\). Once all joint velocities have been set to zero, the make_contact phase ends and the load phase starts. During the load phase, each joint, except for the one generating abduction/adduction movements, is also actuated using the pressure control loop shown in Fig. 4. In short, the make_contact phase shapes the hand around the object, while the load phase regulates the pressure to achieve a stable grasp.
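Below is a minimal sketch of this bang-bang law for one phalanx joint; the constant closing velocity is an assumed value, not a tuned parameter of our implementation.

```python
CLOSE_VELOCITY = 0.1  # rad/s; assumed closing speed

def joint_velocity(p: float, p_ref: float) -> float:
    """Bang-bang control: close the joint at a constant velocity until
    the measured pressure p reaches the reference p_ref, then stop."""
    p_err = p_ref - p
    return CLOSE_VELOCITY if p_err > 0 else 0.0
```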

When the load phase has finished, the object is grasped; the lift/hold phase then raises the arm to lift the object from the surface and holds it during transportation to the placement pose. Next, the place phase moves the arm downward while using a force monitor to detect an abrupt change in the force exerted on the hand, which indicates a contact between the object and a surface. This allows the object to be placed safely on the surface. Finally, the unload phase opens the hand to release the object.
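The force monitor of the place phase can be sketched as follows; flagging contact via an abrupt change in the force along the approach axis follows the description above, while the threshold value is an illustrative assumption.

```python
FORCE_JUMP_THRESHOLD_N = 2.0  # N; illustrative value

def detect_placement(prev_force_z: float, force_z: float) -> bool:
    """Return True when an abrupt change between consecutive
    force-torque readings indicates contact with the surface."""
    return abs(force_z - prev_force_z) > FORCE_JUMP_THRESHOLD_N
```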

Fig. 4. Control loop using tactile feedback.

Each of these phases is composed of simpler components, which can be replaced without modifying the overall structure of a phase, thus separating concerns as described in [20]. These components implement algorithms that perform computations and communicate their outputs by publishing messages on ROS topics [21].
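For illustration, a component in this architecture could be structured as the following rospy sketch; the node name, topic, and message type are hypothetical choices rather than our actual interfaces.

```python
import rospy
from std_msgs.msg import Int32MultiArray

def main():
    # Hypothetical contact-detection component publishing one Boolean
    # flag per tactile sensor (six sensors on the SDH-2).
    rospy.init_node('contact_detector')
    pub = rospy.Publisher('contacts', Int32MultiArray, queue_size=10)
    rate = rospy.Rate(100)  # assumed publishing rate in Hz
    while not rospy.is_shutdown():
        flags = [0] * 6  # placeholder: fill with detect_contacts output
        pub.publish(Int32MultiArray(data=flags))
        rate.sleep()

if __name__ == '__main__':
    main()
```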

4 Experimental Evaluation

To evaluate the performance of our reactive grasp, we compared it to the previously used approach, an open-loop grasp, which does not consider the grasp force as feedback. First, we describe the materials involved in the experiments; then, the procedure is detailed. Finally, we present the results obtained from the experiments.

4.1 Materials

The platform used to carry out the experimental evaluation was the Care-O-bot 3 [5] with a KUKA Lightweight Robot (LWR4) [16]. The end-effector mounted at the end of the LWR4 is an SDH-2. Furthermore, 18 objects were selected to represent the following three features:

  • Hardness: The objects were regarded as deformable (D) when the open-loop grasp would either leave a mark on the object or change its shape or size. If no mark or modification was observed, the object was labeled as non-deformable (N).

  • Shape: The shape of an object was considered to be one of the following: prismatic (Pr), spherical (Sp) and cylindrical (Cy).

  • Size: An object was classified as small (S), medium (M), or large (L).

A sample of the selected objects can be seen in Fig. 5, and their classification is shown in Fig. 6.

Fig. 5. A sample of the test objects. Objects missing from the figure: dictionary, orange, melon, and soda can (full).

Fig. 6. Categorization of the test objects, according to their hardness (D/N), shape (Pr/Sp/Cy), and size (S/M/L).

4.2 Procedure

For each test object, the robot’s arm started in a predefined pose (see Fig. 7a). Both approaches were executed three times for each of the six locations shown in Fig. 7b. Spherical and cylindrical objects were centered on the marked locations, while prismatic objects were placed on the marked locations along their edges. These locations were chosen to cover a range of positions within the grasp, e.g., close to or away from the wrist and close to the fingers or thumb.

Fig. 7. Experimental procedure.

4.3 Results

The results of the experiments conducted on the 18 objects, using both the open-loop grasp and the reactive grasp, are summarized next. Table 1 shows the success rate of both approaches. Based on the performance of the grasps, two aspects were further analyzed: the grasp force applied by each approach, and the cause of each failed grasp. The grasp forces for the objects that had a 100 % success rate with both approaches are displayed in Table 2. The force reported for each object is the average over all trials (i.e., 18 trials), where the force of a single trial is the sum of the forces on all tactile sensors.

To conclude this section, the 84 failed grasps along with their causes are presented in Table 3. The majority of the reactive approach failures (i.e., no grasp) were caused by the phalanges pushing the objects out of the grasp (27 failed grasps) and by the phalanges not receiving their required stop commands, either because the specified joint limits or the desired grasp force could not be reached (15 failed grasps). The overall success rate was 78.64 % for the reactive grasp approach and 94.17 % for the open-loop grasp approach.

Table 1. Success rate of both approaches, the open-loop grasp (OLG) and the reactive grasp (RG).
Table 2. Grasp forces applied by both approaches, in Newtons.
Table 3. Categorization of failures.

5 Conclusions and Future Work

This paper presented a software architecture that emulates the human manipulation phases, together with an approach that significantly reduces the grasp force through the use of tactile feedback. The approach was specifically tuned for the SDH-2. The pressure information of all experiments was recorded and made available at https://github.com/jsanch2s/tactile_info. However, the success rate of our reactive grasp approach was not as high as that of the open-loop grasp approach (78.64 % vs. 94.17 %), mainly due to the following limitations:

  • The tactile sensors do not completely cover the fingers, which can prevent the reactive grasp from reaching the desired contact values.

  • The low sensitivity of the tactile sensors hinders the detection of light contacts (this accounts for 40 % of the failures). Integrating the signals of a force-torque sensor, as demonstrated in [22], could improve contact detection.

  • Insufficient grasp force, resulting from low contact threshold values, caused objects to slip or rotate within the grasp.

Future work will focus on implementing the offline signal processing and on improving individual components: detecting contacts that the tactile sensors cannot (e.g., using force-torque sensors), improving the location of contact points using arm motions as Hsiao et al. demonstrated in [9], and detecting slippage by analyzing temporal readings from the tactile sensors. A video showing the capabilities of our reactive grasp is available at https://www.youtube.com/watch?v=fJoSDVKSdm0; the robot in the video operates at reduced speed for safety reasons.