1 Introduction

Accessibility evaluation to enhance accessibility and safety for the elderly and disabled is increasing in importance. Accessibility must be assessed not only against the general standards specified by the International Organization for Standardization [1] but also in terms of physical and cognitive friendliness for users of different ages, genders, and abilities. Meanwhile, human behavior simulation has been progressing in crowd behavior analysis and emergency evacuation planning [2], and such simulation could well be applied to accessibility evaluation. In human behavior simulation, the activities of a pedestrian model in indoor and outdoor environments are predicted. Moreover, recent advances in computer performance have made human behavior simulation using kinematic digital human models (DHMs) possible even in a three-dimensional (3D) environmental model [3]. However, in previous human behavior simulations [2, 3], the environmental model represents only “as-planned” situations, in contrast to “as-built” or “as-is” environments. In addition, the pedestrian models cannot generate the detailed articulated movements of specific groups of people such as the elderly, children, males, and females.

Our research aims to develop a virtual accessibility evaluation by combining realistic human behavior simulation using DHMs with as-built environmental models. As shown in Fig. 1, our research group has developed a prototype system of environmental modeling and walking simulation for a kinematic DHM to achieve this goal [4]. In our system, the kinematic DHM can walk autonomously with different strides and turning angles adapted to walking environments such as stairs and slopes. However, the kinematic DHM cannot necessarily recreate the human-like articulated walking motion of various types of people. Motion-capture (MoCap) data of an individual person’s walk enables the DHM to generate human-like walking motion. Yet it is still difficult to acquire a large collection of MoCap data from many people with various strides and turning angles in different walking environments such as stairs and slopes.

Fig. 1.
figure 1

Overview of the research

Therefore, in this research, we developed a new algorithm that adaptively recreates human-like DHM walking motion in different flat indoor environments using only one set of reference MoCap data. As the first stage of the research, we propose a walking simulation algorithm that enables a DHM to walk autonomously, adapting its stride and turning angle to different as-built flat indoor environments, by combining inverse kinematics (IK) with interpolation of reference MoCap data of flat walking. Modeling and simulation efficiency and accuracy are also evaluated.

2 Related Work

This research relates primarily to three areas: human behavior simulation, 3D environmental modeling from laser-scanned point clouds, and digital human modeling for walking simulation.

In human behavior simulation, studies have been conducted on crowd behavior analysis and emergency evacuation planning [2]. For example, Helbing [5] proposed a two-dimensional (2D) human behavior simulation based on a social force model. However, the simulation remains limited to 2D as-planned environments. Recently, Kakizaki [6] proposed an evacuation-planning simulator that enables kinematics-based walking simulation in a 3D environmental model. However, this work used “as-planned” 3D computer-aided design (CAD) data of a building as the environmental model. In a simulation by Pettre [7], a navigation graph representing free space and environmental connectivity can be generated automatically from a 3D mesh model. However, Pettre used a simple-shaped 3D mesh model that was too coarse to capture the details of as-built environments.

3D as-built environmental modeling from massive laser-scanned point clouds has been actively studied. Algorithms can automatically extract floors and walls [8], household goods [9], and a constructive solid geometry model of environments [10] from massive laser-scanned point clouds. However, these studies focus only on general object recognition, without necessarily aiming at human walking navigation in the environmental models. Moreover, the algorithms cannot model small barriers on a floor, which are an important factor for accessibility evaluation.

Many algorithms have been developed in digital human modeling for walking simulation. Among them, a variety of human walking patterns can be synthesized using principal component analysis (PCA) [11]. Although such simulation systems can generate human-like walking motion, they still require a large collection of MoCap data in advance. Therefore, many researchers focus on estimating and generating arbitrary human motion from only a small number of existing MoCap data. Motion synthesis and editing [12], motion rings [13], and machine learning [14] are typical examples. However, they still require a small number of MoCap data to adapt DHM strides and turning angles to walking environments. In addition, it is generally difficult to persuade elderly persons, the main targets of accessibility evaluation, to join prolonged MoCap data collection sessions. Conversely, human-like walking motions have been generated in recent physics-based walking simulations using motion controllers and game engines [15, 16]. However, the naturalness of the walking motion is sensitive to the motion controller parameters. Recently, Rami [17] proposed a motion-retargeting algorithm in which joint positions relative to the walking surface can be adapted to changes in the walking environment. However, the resultant motion was not evaluated. In addition, there is no guarantee that human-like articulated movements can be generated on slopes and stairs, because the algorithm does not use real motion data for the adaptation.

In contrast to previous research, our walking simulation algorithm

  • generates human-like walking motion using only one MoCap data reference,

  • adapts the DHM stride and turning angle to different as-built and flat indoor environments,

  • provides fast and autonomous walking simulation directly in the point-cloud-based environmental models,

  • provides a sufficiently small difference in joint angles between the DHM and MoCap data.

3 3D Environmental Modeling from Laser-Scanned Point Clouds

Figure 2 shows an overview of the 3D environmental modeling and the walking simulation of the DHM. For the DHM walking simulation, the 3D environmental models are first constructed automatically from the 3D laser-scanned point clouds of the as-built environments [4]. As shown in Fig. 2, a 3D environmental model consists of two point clouds (the down-sampled points with normal vectors and the set of walk surface points) and the navigation graph. Each processing step is outlined in the following subsections.

Fig. 2.
figure 2

Overview of 3D environmental modeling and DHM walking simulation

(A1) Down-sampling and normal vector estimation.

Multiple laser-scanned point clouds are first merged to make one registered point cloud. This registered point cloud contains many points; hence, it is down-sampled using a voxel grid. In this study, a density of around 1,000 points per square meter is sufficient for modeling and simulation.

Then, normal vectors at the down-sampled points are estimated using PCA on the locally neighboring points [18]. The resulting point cloud with normal vectors \( Q^{D} = \left\{ {\left( {\varvec{q}_{i} , \varvec{n}_{i} } \right)} \right\} \) is used as part of the 3D environmental model to express the geometry of the entire environment.
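Step (A1) can be sketched as follows. This is a minimal NumPy illustration, not the authors' PCL-based implementation; the voxel size, the neighbor count `k`, and the upward orientation of normals are illustrative assumptions.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep the centroid of the points falling in each voxel (step A1)."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

def estimate_normals(points, k=8):
    """Estimate a unit normal at each point via PCA on its k nearest neighbors."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        idx = np.argsort(np.linalg.norm(points - p, axis=1))[:k]
        nbrs = points[idx] - points[idx].mean(axis=0)
        # Eigenvector of the smallest eigenvalue of the covariance = surface normal.
        _, vecs = np.linalg.eigh(nbrs.T @ nbrs)
        n = vecs[:, 0]
        normals[i] = n if n[2] >= 0 else -n  # orient consistently upward
    return normals
```

In practice a spatial index (k-d tree) would replace the brute-force neighbor search.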

(A2) Walk surface points extraction.

The walk surface points, representing walkable surfaces such as floors and stair-steps, are automatically extracted from the down-sampled points \( Q^{D} \).

First, as shown in Fig. 3(a), if the angle between the normal vector \( \varvec{n}_{i} \) at a point \( \varvec{q}_{i} \) and the vector \( \varvec{v} = [0,0, + 1] \) is smaller than a threshold \( \varepsilon \) (we set \( \varepsilon = 20 \) deg.), the point is added to the set of horizontal points \( Q^{H} = \left\{ {\varvec{q}_{j}^{H} } \right\} \) lying on a horizontal plane.
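The angle test above reduces to a dot-product threshold for unit normals, as in this minimal sketch (NumPy is an assumed dependency):

```python
import numpy as np

def extract_horizontal(points, normals, eps_deg=20.0):
    """Keep points whose unit normal is within eps_deg of the up vector [0, 0, +1]."""
    up = np.array([0.0, 0.0, 1.0])
    cos_eps = np.cos(np.radians(eps_deg))
    # angle(n, up) < eps  <=>  dot(n, up) > cos(eps) for unit normals
    mask = normals @ up > cos_eps
    return points[mask]
```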

Fig. 3.
figure 3

3D environmental modeling from 3D laser-scanned point clouds

Then, as shown in Fig. 3(b), the horizontal points are clustered into a set of walk surface points \( W = \left\{ {Q_{k}^{W} } \right\} \) using a region growing algorithm based on k-nearest search [4].
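The clustering step can be illustrated by the following sketch. It flood-fills over neighbors within a fixed radius as a stand-in for the paper's k-nearest-search-based region growing [4]; the `radius` and `min_size` parameters are illustrative assumptions.

```python
import numpy as np
from collections import deque

def region_grow(points, radius=0.3, min_size=5):
    """Cluster horizontal points into walk surfaces by flood-filling
    over neighbors closer than `radius`."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue                      # already assigned to a cluster
        queue = deque([seed])
        labels[seed] = current
        while queue:
            i = queue.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.nonzero((d < radius) & (labels == -1))[0]:
                labels[j] = current
                queue.append(j)
        current += 1
    clusters = [points[labels == c] for c in range(current)]
    return [c for c in clusters if len(c) >= min_size]
```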

(A3) Navigation graph construction.

Finally, we generate a navigation graph representing the environmental pathways for navigating the DHM during walking simulation. To generate a navigation graph from laser-scanned point clouds, we extend the algorithm of Pettre [7], which generates navigation graphs from a simple-shaped 3D mesh model.

As shown in Fig. 3(c), the navigation graph \( G_{N} = \langle V,E,c,\varvec{t}\rangle \) comprises a set of graph nodes \( V \) and a set of edges \( E \). Each node \( v_{k} \in V \) represents free space in the environment. It has a position vector \( \varvec{t}\left( {v_{k} } \right) \) and a cylinder attribute \( c\left( {v_{k} } \right) \), whose radius \( r\left( {v_{k} } \right) \) and height \( h \) represent the distance to the wall and the walkable step height, respectively. Each edge \( e_{k} \) represents connectivity of the environment and is generated between two nodes with a common region [4].
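A possible data structure for this graph is sketched below. The class and field names are hypothetical, and the common-region test is simplified to a 2D circle-overlap check between node cylinders, which is one plausible reading of the construction in [4].

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class NavNode:
    """Free-space node: position t(v), cylinder radius r(v) (distance to
    the nearest wall), and walkable step height h."""
    position: np.ndarray
    radius: float
    height: float

@dataclass
class NavGraph:
    nodes: list = field(default_factory=list)
    edges: set = field(default_factory=set)   # pairs of node indices

    def add_edge_if_overlapping(self, i, j):
        """Connect two nodes whose cylinders share a common region
        (horizontal distance between centers < sum of radii)."""
        a, b = self.nodes[i], self.nodes[j]
        if np.linalg.norm(a.position[:2] - b.position[:2]) < a.radius + b.radius:
            self.edges.add((min(i, j), max(i, j)))
```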

4 MoCap-Based Adaptive Walking Simulation in as-Built Environments

After environmental modeling, a walking simulation of the DHM is performed directly in the point-cloud-based environmental model. To generate human-like walking motion, we use only one set of reference MoCap data of flat walking from a gait database (DB) containing the data of 139 subjects, provided by the National Institute of Advanced Industrial Science and Technology (AIST) [19].

As shown in Fig. 2, our DHM has 41 degrees of freedom (DOF) in total and the same body dimensions as the subjects in the gait DB. The walking simulation consists of macro- and micro-level simulations. Details are described in the following subsections.

4.1 Macro-Level Simulation

As shown in Fig. 4(a), in the macro-level simulation, first, a set of DHM walking paths \( V^{W} = \left\{ {V^{i} } \right\} \) is found automatically using depth-first search over the navigation graph \( G_{N} \). Each path \( V^{i} \) consists of a set of graph nodes and edges from the user-defined start node \( v_{s} \) to goal node \( v_{g} \). Next, a suitable walking path \( V^{P} \in V^{W} \) is selected automatically from \( V^{W} \) based on the path cost and user’s path preference [4].
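The path enumeration can be sketched as a depth-first search over the graph's adjacency structure. This is a generic illustration; the subsequent cost- and preference-based selection of \( V^{P} \) described in [4] is omitted.

```python
def enumerate_paths(adjacency, start, goal):
    """Depth-first enumeration of all simple paths from start to goal.
    `adjacency` maps a node id to its neighbor ids (the edges E of the graph)."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == goal:
            paths.append(path)
            continue
        for nxt in adjacency.get(node, []):
            if nxt not in path:          # keep paths simple (no revisits)
                stack.append((nxt, path + [nxt]))
    return paths
```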

Fig. 4.
figure 4

Overview of the macro-level simulation

Then, as shown in Fig. 4(b), the DHM walking trajectory \( S^{O} = \left\{ {\varvec{s}_{i} } \right\} \), tracing a time sequence of the DHM pelvis position \( \varvec{s}_{i} \), is generated automatically. An optimization algorithm is applied to make the trajectory \( S^{O} \) more natural and smooth, while avoiding contact with the walls [4].

4.2 Micro-Level Simulation Based on MoCap Data

After the walking trajectory \( S^{O} \) is determined, one-step DHM walking motion along the trajectory \( S^{O} \) is generated in the micro-level simulation. Variable stride and turning angle during the walk can be achieved while estimating footprint positions automatically using reference MoCap data of flat walking selected from the gait DB [19]. As shown in Fig. 2 (A6–A9), one-step walking motion is generated according to the following processes.

4.2.1 Updating Locomotion Vector (A6)

As shown in Fig. 5, when the DHM passes through the trajectory point \( \varvec{s}_{k} \in S^{O} \), the system determines a sub goal position \( \varvec{x}_{t} \) as \( \varvec{x}_{t} = \varvec{s}_{k + 2} \), which serves as a temporary target position during the simulation. The sub goal position \( \varvec{x}_{t} \) is continuously updated as the DHM walks along the trajectory \( S^{O} \). Then, the next locomotion vector \( \varvec{v} \) is determined using \( \varvec{ v} = \left( {\varvec{x}_{t} - \varvec{x}_{c} } \right)/\parallel \varvec{x}_{t} - \varvec{x}_{c} \parallel \), where \( \varvec{x}_{c} \) represents the current DHM pelvis position.
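The sub-goal and locomotion-vector update reduces to a few lines, sketched below with NumPy; clamping the index near the end of the trajectory is an added guard not stated in the text.

```python
import numpy as np

def next_locomotion_vector(trajectory, k, x_c):
    """Sub-goal x_t = s_{k+2}; locomotion vector v = (x_t - x_c) / ||x_t - x_c||."""
    x_t = trajectory[min(k + 2, len(trajectory) - 1)]  # clamp near the goal
    d = x_t - x_c
    return x_t, d / np.linalg.norm(d)
```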

Fig. 5.
figure 5

Updating next locomotion vector

Then, as shown in Fig. 6, the next footprint point \( \varvec{x}_{f} \) is determined on the walk surface points \( W \). To this end, a cylindrical search space \( C_{F} \) is generated centered at a point \( \varvec{p}_{f} \) placed ahead of the current heel position \( \varvec{x}_{hs} \) by the stride length \( w \). Next, the subset of walk surface points \( W_{S} \) containing the maximum number of points inside \( C_{F} \) is extracted from \( W \). Finally, the next footprint point \( \varvec{x}_{f} \) is determined as the centroid of \( W_{S} \). Note that any user-defined stride length \( w \), different from the original stride length of the reference MoCap data in the gait DB, can be used.
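The footprint search can be sketched as follows. The cylinder dimensions `r_cyl` and `h_cyl` are illustrative assumptions, as the paper does not give numeric values.

```python
import numpy as np

def next_footprint(walk_surfaces, x_hs, v, stride, r_cyl=0.15, h_cyl=0.3):
    """Pick the next footprint x_f: center a cylinder C_F at p_f = x_hs + stride * v,
    take the surface cluster with the most points inside C_F, and return the
    centroid of those points."""
    p_f = x_hs + stride * v
    best, best_count = None, 0
    for surf in walk_surfaces:                      # surf: (N, 3) point array
        horiz = np.linalg.norm(surf[:, :2] - p_f[:2], axis=1) < r_cyl
        vert = np.abs(surf[:, 2] - p_f[2]) < h_cyl
        inside = surf[horiz & vert]
        if len(inside) > best_count:
            best, best_count = inside, len(inside)
    return None if best is None else best.mean(axis=0)
```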

Fig. 6.
figure 6

Overview of the micro-level simulation

4.2.2 Generating Target Posture (A7)

As shown in Fig. 6, to achieve the next footprint point \( \varvec{x}_{f} \), a DHM target posture \( F_{t} \) is generated using cyclic-coordinate-descent inverse kinematics (CCDIK) [20]. First, a key-frame posture \( F_{I} \), representing the full-body posture at the initial-contact frame of the next walking step, is obtained from among the frames of the reference MoCap data selected from the gait DB. Then, we apply the CCDIK method, an iterative inverse kinematics (IK) solver for redundant link mechanisms, to the key-frame posture \( F_{I} \) to determine the 14-DOF leg posture of the DHM. To obtain a plausible target posture \( F_{t} \), we introduce a range of motion (ROM) and symmetric hip joint angles as CCDIK constraints.
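The CCDIK idea can be illustrated on a planar chain. This sketch rotates one joint at a time toward the target and clamps each angle to a generic ROM interval; the paper's solver instead operates on the 14-DOF leg with symmetric-hip constraints, so the chain, ROM values, and iteration count here are all illustrative.

```python
import numpy as np

def fk(lengths, angles):
    """Planar forward kinematics: joint positions for given link lengths/angles."""
    pts, pos, heading = [np.zeros(2)], np.zeros(2), 0.0
    for L, a in zip(lengths, angles):
        heading += a
        pos = pos + L * np.array([np.cos(heading), np.sin(heading)])
        pts.append(pos)
    return pts

def ccd_ik(lengths, angles, target, rom=(-2.5, 2.5), iters=200):
    """Cyclic-coordinate-descent IK: rotate one joint at a time so the end
    effector approaches `target`, clamping each angle to a range of motion."""
    angles = list(angles)
    for _ in range(iters):
        for i in reversed(range(len(angles))):
            pts = fk(lengths, angles)
            to_end = pts[-1] - pts[i]
            to_tgt = target - pts[i]
            delta = (np.arctan2(to_tgt[1], to_tgt[0])
                     - np.arctan2(to_end[1], to_end[0]))
            angles[i] = float(np.clip(angles[i] + delta, rom[0], rom[1]))
        if np.linalg.norm(fk(lengths, angles)[-1] - target) < 1e-4:
            break
    return angles
```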

4.2.3 Interpolating Stance Leg Motion (A8)

As shown in Fig. 6, once a target posture \( F_{t} \) is obtained, as described in 4.2.2, the stance leg motion is interpolated so that the motion finally satisfies \( F_{t} \) at the end of the interpolation. Figure 7 shows the interpolation algorithm. In Fig. 7, \( \theta_{i}^{DHM} \left( \phi \right),\,\theta_{i}^{DB} \left( \phi \right),\,\phi \in [0,1] \) represent the \( i^{\text{th}} \) joint angle of the DHM’s stance leg, the corresponding joint angle obtained from the reference MoCap data, and the normalized walking phase, respectively.

Fig. 7.
figure 7

Interpolating stance leg motion

First, the specific key-frame angles \( \theta_{i}^{DB} \left( {\phi_{j} } \right) \) of the \( i^{\text{th}} \) joint at \( \phi_{j} \) are loaded from the reference MoCap data. We assume that the stance-leg angles around the mid-stance phase change little with stride, and we select the angles \( \theta_{i}^{DB} \left( {\phi_{j} } \right) \) at \( \phi_{j} \in \left\{ {0.4, 0.5, 0.6} \right\} \) as key-frame angles to be left unchanged. Then, the stance leg angles \( \theta_{i}^{DHM} \left( \phi \right) \) are interpolated using a cubic spline curve so that they coincide with the key-frame angles \( \theta_{i}^{DB} \left( {\phi_{j} } \right) \) at \( \phi_{j} \in \left\{ {0.4, 0.5, 0.6} \right\} \). The angles \( \theta_{i}^{DHM} \left( 0 \right) \) and \( \theta_{i}^{DHM} \left( 1 \right) \) and the angular velocities \( \dot{\theta }_{i}^{DHM} \left( 0 \right) \) and \( \dot{\theta }_{i}^{DHM} \left( 1 \right) \) at the current and target postures \( F_{s} \) and \( F_{t} \), respectively, are used as boundary conditions. The angular velocity \( \dot{\theta }_{i}^{DHM} \left( 0 \right) \) is estimated by fitting a cubic polynomial curve locally to the DHM joint angles, and \( \dot{\theta }_{i}^{DHM} \left( 1 \right) \) is estimated similarly by fitting a cubic polynomial curve locally to the reference MoCap data.
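A spline with these interior key frames and first-derivative end conditions can be built with SciPy, as in the following sketch (function name and sample values are illustrative; the paper does not prescribe a particular library):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_stance_angle(theta0, theta1, dtheta0, dtheta1,
                             key_phases, key_angles):
    """Cubic-spline joint-angle profile theta(phi), phi in [0, 1]:
    passes through the mid-stance key-frame angles from the MoCap data
    and meets position/velocity boundary conditions at phi = 0 and 1."""
    phis = np.concatenate([[0.0], key_phases, [1.0]])
    vals = np.concatenate([[theta0], key_angles, [theta1]])
    # First-derivative (angular velocity) boundary conditions at both ends.
    return CubicSpline(phis, vals, bc_type=((1, dtheta0), (1, dtheta1)))
```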

If a turning or steering motion is required on the floor, the internal or external angle of the stance leg hip joint is increased gradually in each frame during one-step walking until the rotation angle \( \theta^{rot} \) is reached. \( \theta^{rot} \) represents the angle between the current locomotion vector \( \varvec{v} \) and the locomotion vector \( \varvec{v}^{pre} \) at the previous one-step walking.

4.2.4 Generating Swing Leg Motion (A9)

Applying the generated stance leg angles \( \theta_{i}^{DHM} (\phi ) \) to the DHM automatically determines the DHM pelvis position. As shown in Fig. 8, the swing ankle position trajectory \( \varvec{f}_{a} \left( \phi \right) \) is then interpolated using a cubic spline curve so that it coincides with the key-frame ankle positions \( \varvec{f}_{a} \left( {\phi_{j} } \right) \) at \( \phi_{j} \in \left\{ {0.2, 0.4, 0.6, 0.8, 0.9} \right\} \). The positions \( \varvec{f}_{a} \left( 0 \right) \) and \( \varvec{f}_{a} \left( 1 \right) \) and the velocities \( \dot{\varvec{f}}_{a} \left( 0 \right) \) and \( \dot{\varvec{f}}_{a} \left( 1 \right) \) at the current and target postures \( F_{s} \) and \( F_{t} \), respectively, are also used as boundary conditions. As in the stance-leg interpolation (Sect. 4.2.3), the velocities \( \dot{\varvec{f}}_{a} \left( 0 \right) \) and \( \dot{\varvec{f}}_{a} \left( 1 \right) \) are estimated by fitting a cubic polynomial curve locally to the reference MoCap data.

Fig. 8.
figure 8

Generating swing leg motion

Finally, all of the joint angles of the swing leg are determined by solving IK analytically to achieve the interpolated ankle position trajectory \( \varvec{f}_{a} \left( \phi \right) \) in each frame. In this simulation, walking phase \( \phi \) is increased by 0.01 in each frame, and the one-step DHM motion is interpolated by 100 frames.
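For a leg reduced to a planar thigh-shank chain, the analytic IK has the familiar closed form below. This is a simplified illustration of the idea, not the paper's full swing-leg solver; the link lengths and the backward-knee convention are assumptions.

```python
import numpy as np

def two_link_ik(l1, l2, x, y):
    """Analytic IK for a planar two-link leg (thigh l1, shank l2): returns
    hip and knee angles that place the ankle at (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the knee (elbow) angle.
    c_knee = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c_knee = np.clip(c_knee, -1.0, 1.0)          # guard unreachable targets
    knee = np.arccos(c_knee)
    hip = (np.arctan2(y, x)
           - np.arctan2(l2 * np.sin(knee), l1 + l2 * np.cos(knee)))
    return hip, knee
```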

5 Simulation Results

We validated our modeling and simulation algorithm in a laser-scanned indoor environment. The 3D point clouds were acquired using a terrestrial laser scanner (FARO Focus3D S120) and contain approximately seven million points. Figures 9(a), (b) and (c) show the entire point clouds, walk surface points, and navigation graph, respectively. As shown in Fig. 9(b), the walk surface points represent walkable surfaces such as floors and stairs. Moreover, as shown in Fig. 9(c), the navigation graph represents the free space and its connectivity even on stairs.

Fig. 9.
figure 9

3D environmental model construction

5.1 Walking Simulation Results in the 3D Environmental Models

Figure 10 shows a walking simulation result in a corridor with corners. The DHM was able to walk along the pathway automatically step by step. In addition, the proposed simulation algorithm could plausibly recreate the oscillation of the pelvis that has been observed as a feature of human walking [21] without any direct specification or interpolation of the pelvis movements. This shows the effectiveness of generating human-like walking motion while respecting human walking features.

Fig. 10.
figure 10

Walking simulation in a corridor for a male subject, age 22

In addition, Fig. 11 shows a comparison of DHM walking motions with those of human subjects of different ages and genders. The DHM motions were estimated using a limited number of MoCap data frames from corresponding subjects. As shown in Fig. 11, the DHM could generate joint angle patterns similar to those of human subjects. The maximum angle differences between the simulation and reference MoCap data are approximately 10 and 5 deg in the knee and hip joints, respectively.

Fig. 11.
figure 11

Comparisons of DHM walking motions with those of human subjects

5.2 Modeling and Simulation Performance

Table 1 shows the elapsed time of the 3D environmental model construction from the original point clouds and of the walking simulation. The Point Cloud Library (PCL) [22] was partly used for point cloud processing, and OpenGL was used for rendering. The 3D environmental model construction required approximately 6 s for approximately seven million points, significantly faster than manual modeling.

Table 1. Elapsed time of modeling and simulation (CPU: Intel(R) Core(TM) i7 3.30 GHz, Memory: 32 GB, GPU: GeForce GTX 560 Ti)

The elapsed time for the macro-level simulation, a preparation step for the micro-level simulation, was less than 0.1 s, and one-step walking motion generation with 100-frame interpolation in the micro-level simulation required approximately 0.3 s, shorter than the duration of an actual human walking step. This shows the possibility of real-time walking simulation for one DHM.

6 Conclusions

In this study, we developed a new algorithm for generating human-like walking motion for a DHM using one MoCap data reference of flat walking selected from a gait DB. The DHM could walk autonomously, while adapting its stride and turning angle to as-built environmental models automatically constructed from laser-scanned point clouds. The simulation results were validated through comparison with original MoCap data. It was confirmed that the DHM could walk autonomously in the environmental model, while respecting the walking motions of various types of people of different ages and genders. The elapsed time for modeling and simulation was suitable for practical application.

As future work, we aim to extend the walking simulation to more complex environments. An algorithm for stair and slope walking would significantly advance the field of virtual accessibility evaluation.