Human Motion Simulation Computer Science Essay

Human motion simulation is an ill-posed problem and has never been fully modeled, owing to the redundancy of the human musculoskeletal system: the same task can be accomplished through many different motions (Kawato, 1996; Faraway et al., 1999). Many approaches have been proposed to solve the ill-posed human motion simulation problem; some of them are reviewed in the following sections.

A simple lifting motion from point A to point B may have infinitely many possible trajectories, so it is important to be able to predict human motion accurately. Human CAD systems require accurate simulation of human postures and motions based on a description of the task and the performer as input data (Chaffin, 2001; Hsiang and Ayoub, 1994; Jung et al., 1995; Nelson, 2001; Ianni, 2001; Bowman, 2001; Thompson, 2001; Jimmerson, 2001).

Human motion data can be gathered using two methods (Robertson et al., 2004): experiment-based and simulation-based. In the experiment-based method, human subjects are asked to perform specified lifting tasks in a real or replicated work environment while their actual motion behaviors and trajectories are captured. This method yields more accurate results because it uses real humans, but the results are restricted to the subjects who performed the tasks and may not generalize to the entire population. Subject safety is also a major concern: subjects must be well informed and must sign a consent form prior to the experiment.


In the simulation-based method, humans and the work environment are modeled by mathematical algorithms, and motions are predicted by computing these algorithms given input data describing the tasks and the human participants. The main advantage of this method is that, if accurate, it provides motion data efficiently, can be applied to predict various lifting scenarios, and can cater to a much wider range of individuals. This dissertation primarily focuses on developing a new human motion simulation model to accurately simulate and predict human lifting motions.

2.2 GENERALIZED MOTOR PROGRAM THEORY

The human simulation model is based on generalized motor program (GMP) theory. According to GMP theory, the human neural system stores images, templates, or patterns of motion paths in memory. The brain can reuse and manipulate these stored patterns to plan motion trajectories for various lifting tasks, which makes it possible to predict a variety of human motions accurately.

To accomplish this task, a memory-based motion simulation (MBMS) model was first developed. The model consists of a memory (motion database), a motion finder, and a motion modification (MoM) algorithm for motion pattern modification.

A generalized motor program consists of a sequence of preplanned or previously acquired motor commands structured in memory, which is used as a template to plan a class of motions. Specific parameters adapt the template to a given task, while invariant features provide guidance. The parameters are variables that can be changed, for example the overall duration and overall magnitude of the motion; the invariant features are constants and cannot be changed.

The symbolic motion structure representation (SMSR) algorithm is inspired by generalized motor program theory. The SMSR algorithm recognizes a basic temporal structure in body motion: a multi-joint motion, represented by the angle-time trajectory of each joint, is decomposed into a sequence of elemental motion segments, and symbols are used to denote the shape of each segment.
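The segmentation idea can be sketched in a few lines of Python. This is a hypothetical illustration, not the published SMSR algorithm: it labels each change in a joint angle-time trajectory as rising, falling, or stationary and collapses runs of identical labels into elemental segments.

```python
# Hypothetical sketch of SMSR-style segmentation (not the published algorithm):
# label each sample-to-sample change of a joint-angle trajectory and collapse
# runs of identical labels into elemental motion segments.

def segment_trajectory(angles, tol=1e-6):
    """Return a symbol string: '+' rising, '-' falling, '0' stationary."""
    symbols = []
    for a, b in zip(angles, angles[1:]):
        d = b - a
        s = '+' if d > tol else '-' if d < -tol else '0'
        if not symbols or symbols[-1] != s:   # collapse repeated labels
            symbols.append(s)
    return ''.join(symbols)

# A raise-then-lower elbow trajectory yields two elemental segments: '+-'
print(segment_trajectory([0.0, 0.3, 0.7, 1.0, 0.6, 0.2]))
```

Each symbol string then stands for one class of motion shapes that a template can be fitted to.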

2.3 INVERSE KINEMATICS

Inverse kinematics (IK) is used for modeling more complex mechanisms. It uses mathematical approximations to determine the degrees of freedom of a kinematic chain given the desired final position of the end-effector. IK deals only with the geometry of motion: it does not consider the forces and moments that cause the motion, and it can yield an infinite number of solutions. This concept of obtaining a set of segmental angles from a matrix describing the movement, the basic premise of inverse kinematics, has been proposed as a modeling technique for both robots and humans (Craig, 1989; McCarthy, 1990). For example, IK can guide a multi-link robotic arm to grasp an object.
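As a concrete illustration of the geometry involved, the sketch below solves the closed-form IK of a planar two-link arm. The link lengths and the branch choice are illustrative assumptions; the mirror-image elbow configuration is an equally valid second solution, which is the simplest case of IK's multiplicity of answers.

```python
import math

# Illustrative closed-form inverse kinematics for a planar two-link arm.
# Link lengths l1, l2 are example values, not taken from the text.
def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Return one (shoulder, elbow) angle pair, in radians, reaching (x, y)."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)          # one branch; -elbow is the mirror solution
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1=1.0, l2=1.0):
    """Forward kinematics, used here to verify the IK answer."""
    return (l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow),
            l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow))

s, e = two_link_ik(1.2, 0.8)
print(forward(s, e))  # recovers approximately (1.2, 0.8)
```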


Inverse kinematics has been applied successfully in robots, but less so in humans. The main reason is the difference in degree-of-freedom properties between robots and humans: in robots, degrees of freedom can be modified quite easily in type and number, whereas in humans they are far more complex and difficult to modify. Predicting a motion with IK becomes harder as the redundancy of degrees of freedom in the kinematic chain increases; in such cases approximations can be used to obtain the desired output (McCarthy, 1990). Since humans may have "preferences" for certain postures, and inverse kinematics algorithms by themselves have no way to specify what the preferred postures are (Perez, 2005), IK is usually complemented by other tools that select the preferred posture from a feasible solution set (Guez and Ahmad, 1990). The most commonly used such tool in recent times is optimization.

2.4 OPTIMIZATION BASED

Optimization is now used directly as a prediction method, and also as a selection tool within IK-based motion prediction algorithms. In this type of method, an objective function is maximized or minimized subject to a set of constraints, and the output is a time series of predicted joint angles. Optimization techniques offer either a distinct optimal solution or a multitude of equally optimal solutions, and the feasibility of the solutions is assured by the constraints imposed as part of the problem. The limitation of the optimization approach lies in its requirement that some goal be minimized or maximized. As with control models, it must be assumed that the central nervous system and the musculoskeletal system act according to one or two simplifying principles. Although it has been the focus of considerable research, an objective function that consistently explains postural behavior while lifting has remained elusive. Furthermore, convergence of the algorithm to a solution in a finite number of steps is not always assured, and situations can arise where the optimization problem is ill-posed.
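A minimal sketch of this idea follows, with an assumed "discomfort" objective (squared deviation from a neutral posture) and a quadratic penalty enforcing the reach constraint on a planar two-link model. All values, including the neutral posture and the penalty weight, are illustrative assumptions, not the objective functions from the literature discussed above.

```python
import math

# Hypothetical optimization-based posture prediction: among the many two-link
# postures whose fingertip reaches a target, pick the one minimizing a
# discomfort-like objective, using a quadratic penalty for the reach constraint.

L1, L2 = 1.0, 1.0
NEUTRAL = (0.4, 0.6)          # assumed 'comfortable' shoulder/elbow angles (rad)
TARGET = (1.2, 0.8)

def fingertip(q):
    s, e = q
    return (L1 * math.cos(s) + L2 * math.cos(s + e),
            L1 * math.sin(s) + L2 * math.sin(s + e))

def cost(q, penalty=100.0):
    x, y = fingertip(q)
    reach_err = (x - TARGET[0]) ** 2 + (y - TARGET[1]) ** 2
    discomfort = sum((qi - ni) ** 2 for qi, ni in zip(q, NEUTRAL))
    return discomfort + penalty * reach_err

def minimize(q, step=5e-4, iters=40000, h=1e-6):
    """Plain gradient descent with a finite-difference gradient."""
    q = list(q)
    for _ in range(iters):
        grad = []
        for i in range(len(q)):
            qp = q[:]
            qp[i] += h
            grad.append((cost(qp) - cost(q)) / h)
        q = [qi - step * gi for qi, gi in zip(q, grad)]
    return q

q = minimize(NEUTRAL)
print(fingertip(q))  # close to TARGET, at the least-'uncomfortable' posture
```

Changing the objective changes the predicted posture, which is exactly the difficulty noted above: the prediction is only as good as the assumed goal.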

2.5 ARTIFICIAL NEURAL NETWORKS

Rather than modeling how the human motor control system works, some researchers have approached the motion prediction problem with artificial neural networks: virtual neural structures that function similarly to real brain structures. These virtual structures, called artificial neural networks (ANNs), are trained to learn from existing observed kinematic patterns. If the training is effective, the network can control movement in a way similar to how the human brain does with its own structures.

An ANN, as the name suggests, is an artificial representation of a biological network of neurons: a data processing system with some performance characteristics similar to those of biological neural networks. ANNs were designed as generalizations of mathematical models of human neural biology, based on the assumptions that:

Information processing occurs at many simple elements known as neurons;

Signals are passed between neurons over connection links;

Each connection link has an associated weight, which multiplies the signal transmitted over it;

Each neuron applies an activation function to its net input to determine its output signal.

Variables are given as inputs to an input layer, and signals from this layer are propagated through the rest of the network until the output layer is reached and an output signal is produced (Figure 1). The ability of ANNs to simulate complex patterns through highly interconnected elements has motivated their use as motion modeling algorithms.

Figure 1: A sample structure of an ANN
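The propagation just described can be sketched as a forward pass through a tiny fully connected network; the weights, biases, and layer sizes below are arbitrary illustrative values.

```python
import math

# Minimal sketch of the forward pass: signals enter the input layer, are
# multiplied by connection weights, summed, and squashed by an activation
# function at each neuron. All weights here are arbitrary examples.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum plus sigmoid per neuron."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                              # input layer just distributes values
hidden = layer(x, [[0.8, -0.2], [0.3, 0.9]], [0.1, -0.1])
output = layer(hidden, [[1.5, -1.1]], [0.2])
print(output)  # a single output signal in (0, 1)
```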

In the evaluation done by Massone and Bizzi (1989), the performance of an ANN was impressive in representing and generating unconstrained aiming movements of a limb. The training of the network was performed via simulated bell-shaped velocity trajectory profiles. Their results showed that the aiming task could be learned by a three-layer sequential network, which performed successful generalization and velocity profile adaptation.

Jung and Park (1994) evaluated the applicability of ANNs to the prediction of human reach motions. A feedforward back-propagation neural network was used to predict the three-dimensional positions of the shoulder, elbow, and wrist given the three-dimensional position of the target. The starting position of the movement was implicitly included in the model, since all movements originated from the same position. While the units of the prediction errors are not provided, the figures in the paper suggest they are small, and statistical tests found no difference between model simulation results and empirical observations. This approach, however, neglects the intermediate postures between the beginning and end points of the movement and is not appropriate for real-time motion simulation. While the hand coordinates could arguably be altered slightly to produce a set of intermediate postures between the beginning and end of a movement, the network was not trained on these postures and would most likely perform poorly in predicting the motion between the starting and ending postures.


Artificial neural networks have both advantages and disadvantages. The main advantage of the ANN approach is that no assumptions are made about functions the body attempts to minimize or maximize, or about the control principles used. The network simply determines, based on its training, the most appropriate output for the inputs presented. The physical feasibility of solutions can be assured by manipulating the neuronal activation functions, and a trained network will always provide a solution.

The main disadvantage of an ANN is the training process required to make the network learn the correct input/output associations. To provide useful results, the network must be exposed to a series of input-output sets that allow it to modify its internal connections and adapt itself to represent the data presented; the training set must therefore be chosen carefully. This supervised training approach can also limit the extent to which the network is considered predictive, since its outputs are usually based on previously obtained empirical information.

In the real world, the inputs fed to such a network are the home and target locations, and a feedforward back-propagation neural network was used for this prediction. The key idea is that the errors at the output layer are propagated backward to determine the errors of the units in earlier layers, hence the name back-propagation learning. Back-propagation can also be viewed as a generalization of the delta rule to nonlinear activation functions and multilayer networks. In general, an artificial neural network is used as a pattern classifier or a pattern associator [7]. As a pattern associator, the network consists of a large number of simple neuron-like processing units organized into layers. Each unit has adjustable, weighted connections to the units of the previous and next layers. The network is "trained" to associate a set of input vectors with a set of target vectors by presenting pairs of such vectors to the network; a "learning" algorithm modifies the connection weights so that, upon convergence, the network performs the correct association.

Back-propagation is a systematic method for training multilayer artificial neural networks. It is built on a solid mathematical foundation and has very good application potential; despite its limitations, it has been applied successfully to a wide range of practical problems. An artificial neuron is developed to mimic the characteristics and functions of a biological neuron: it receives many inputs representing the outputs of other neurons, each input is multiplied by a corresponding weight analogous to a synaptic strength, and the weighted inputs are summed and passed through an activation function to determine the neuron's output. The activation function is chosen to be nonlinear to emulate the nonlinear conduction-current mechanism in a biological neuron. Because the artificial neuron is not intended to be an exact copy of the biological one, many forms of nonlinear function have been suggested and used in various engineering applications. With sigmoidal functions, the output of a neuron varies continuously but not linearly with the input; neurons with sigmoidal activation therefore bear a greater resemblance to biological neurons than neurons with other activation functions.
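The claim about sigmoidal neurons can be checked directly: equal steps in the input produce unequal steps in the output, so the response is continuous but nonlinear.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Equal input increments of 1.0 applied at different operating points:
steps = [sigmoid(x + 1.0) - sigmoid(x) for x in (-3.0, -1.0, 1.0)]
print(steps)  # unequal output increments -> continuous but nonlinear response
```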

The adapted perceptrons are arranged in layers, so the model is termed a multilayer perceptron. This model has three layers: an input layer, an output layer, and a layer in between that is not connected directly to the input or the output and is therefore called the hidden layer. The perceptrons in the input layer use a linear transfer function, while those in the hidden and output layers use sigmoidal functions. The input layer simply distributes the values it receives to the next layer; it performs no weighted sum or thresholding.

In the three-layer network, the activities of the neurons in the input layer represent the raw information fed into the network. The activity of each hidden neuron is determined by the activities of the input neurons and the weights on the connections between the input and hidden units. Similarly, the activity of the output units depends on the activity of the hidden neurons and the weights between the hidden and output layers. This structure is interesting because the hidden neurons are free to construct their own representations of the input.

The learning rate coefficient determines the size of the weight adjustment made at each iteration and hence influences the rate of convergence; a poor choice of coefficient can cause convergence to fail. If the learning rate is too large, the search path oscillates and converges slowly; if it is too small, the descent proceeds in very small steps, significantly increasing the time to converge. Better results are obtained with an adaptive coefficient, where the learning rate is a function of the error derivative on successive updates.

Another way to improve the rate of convergence is to add an inertial or momentum term to the gradient expression. This is accomplished by adding a fraction of the previous weight change to the current weight change. Such a term smooths the descent path by preventing extreme changes in the gradient due to local anomalies. The momentum coefficient should be positive but less than 1.
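The momentum rule just described can be sketched as follows; the learning rate and momentum coefficient are illustrative values.

```python
def momentum_updates(grads, lr=0.1, m=0.9):
    """Weight changes for a stream of gradients, with momentum coefficient m."""
    updates, prev = [], 0.0
    for g in grads:
        delta = -lr * g + m * prev   # current step plus a fraction of the last
        updates.append(delta)
        prev = delta
    return updates

# A sign-flipping gradient (a local anomaly) yields damped, smoother updates:
print(momentum_updates([1.0, -1.0, 1.0, -1.0]))
```

With `m = 0` each update is just `-lr * g`; with momentum, successive opposite-sign gradients partially cancel instead of producing a zig-zag path.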

Back-propagation is a supervised learning method. The algorithm commonly used to train networks updates the weights using a gradient search to minimize the mean square of the differences between the desired and actual output vectors. A two-phase propagate-adapt cycle is applied to input-output example pairs drawn from a predefined set. The input pattern, the stimulus, is applied to the first layer of network units and propagated forward to generate an output, which is compared with the desired output to determine the error signal. The error signals are then transmitted backward from the output layer to the nodes in the intermediate layer that contribute directly to the output. Each unit in the intermediate layer receives only the portion of the total error signal corresponding to its relative contribution to the original output. The process is repeated layer by layer until each node in the network has received an error signal proportional to its contribution to the total error.

The connection weights are updated by each unit, based on the error signal received, such that the network converges toward a specific state that permits all the training patterns to be encoded [2].
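A hypothetical minimal implementation of this propagate-adapt cycle is sketched below. The network size (2-2-1), learning rate, deterministic initial weights, and training task (logical AND) are illustrative choices, not taken from the text.

```python
import math

# Toy back-propagation: forward-propagate, compare with the target, propagate
# error signals backward layer by layer, and update the weights.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2-2-1 network with small, deliberately asymmetric initial weights
w_h = [[0.1, -0.2], [0.2, 0.1]]   # input -> hidden weights
b_h = [0.0, 0.0]
w_o = [0.1, -0.1]                 # hidden -> output weights
b_o = 0.0

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND
lr = 1.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w_h, b_h)]
    o = sigmoid(sum(w * hi for w, hi in zip(w_o, h)) + b_o)
    return h, o

for epoch in range(8000):
    for x, t in data:
        h, o = forward(x)
        # output error signal uses the sigmoid derivative o*(1-o) (delta rule)
        d_o = (t - o) * o * (1 - o)
        # each hidden unit receives its share of the error via its weight
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        # adapt phase: update weights and biases layer by layer
        for i in range(2):
            w_o[i] += lr * d_o * h[i]
            for j in range(2):
                w_h[i][j] += lr * d_h[i] * x[j]
            b_h[i] += lr * d_h[i]
        b_o += lr * d_o

print([round(forward(x)[1], 2) for x, _ in data])  # approaches [0, 0, 0, 1]
```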

5.1 ELMAN NETWORK ARCHITECTURE

Joint angles are predicted using Elman networks, which comprise two layers with a feedback connection from the output of the first layer back to its input. This recurrent connection allows the Elman network to detect and generate time-varying patterns. The hidden (recurrent) layer consists of tansig neurons and the output layer of purelin neurons. This combination is special in that a two-layer network with these transfer functions can approximate any function (with a finite number of discontinuities) to arbitrary accuracy, provided the hidden layer has enough neurons; as the function's complexity increases, more hidden neurons are needed to fit it well.

As suggested above, what distinguishes Elman networks from conventional two-layer networks is the recurrent connection in the first layer. The delay in this connection stores values from the previous time step for use in the current one. Hence, even if two Elman networks with exactly the same weights and biases are given identical inputs at a given time step, they can still produce different outputs because their feedback states differ.
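This state dependence can be demonstrated with a toy single-neuron Elman cell. The weights are arbitrary illustrative values, and `tanh` stands in for MATLAB's tansig transfer function.

```python
import math

class Elman:
    """Single-neuron Elman cell: tanh hidden unit with a one-step delay."""
    def __init__(self, w_in, w_rec, w_out):
        self.w_in, self.w_rec, self.w_out = w_in, w_rec, w_out
        self.state = 0.0                  # delayed copy of the hidden output

    def step(self, x):
        h = math.tanh(self.w_in * x + self.w_rec * self.state)
        self.state = h                    # stored for the next time step
        return self.w_out * h             # purelin (linear) output layer

a, b = Elman(0.8, 0.5, 1.2), Elman(0.8, 0.5, 1.2)
a.step(1.0)                               # advance only network `a`
print(a.step(0.3), b.step(0.3))  # identical weights and input, different outputs
```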

The network is useful because it stores information for future reference and can learn both spatial and temporal patterns; once trained, the Elman network can generate both kinds of pattern in its responses. The joint angles of the five-segment human model were measured and used as the predicted values for further study and successive simulations.

5.2 TRAINING

Elman networks can be trained with either the train or the adapt function.

In this work the "train" function was used, following this sequence of steps.

At each epoch,

1. The entire input sequence is presented to the network, the outputs are computed, and an error sequence is generated by comparison with the target sequence.

2. For each time step, the error is back-propagated to find the gradient of the error with respect to each weight and bias. The gradient is only an approximation, because the contribution of the delayed recurrent connection to the errors is ignored.

3. The weights are then updated with the obtained gradient using the chosen back-propagation training function; here, the function "traingdx" is used.