Computer animation is being pushed to the limit. Studios producing computer-animated films are releasing at least one movie per year, while digital characters are becoming commonplace in traditional films. Animation is an important ingredient in computer games as well, and cinematic shorts are becoming more common. The public is enjoying it: seven of the ten top-grossing films in the United States (2009-2010) have scenes with some form of computer animation, and the computer gaming industry is one of the fastest-growing industries of recent times.
This high demand has created a need to improve the efficiency of the computer animation process. Although one should not truncate the time spent in the initial artistic process, such as the design and construction of appealing characters, several technical steps in the animation pipeline offer opportunities for automation.
Research Background and Related Work
There has been a great deal of work in the field of rigging. Initially, a rigger creates a joint setup for a character, but joints are hard to animate directly, so riggers create controls that drive the joints. Over time these control setups became more complex and time-consuming to build, so studios scripted them to automate the whole process. Auto-rigging and skinning scripts are very useful in production studios: they help generate good-quality rigs in a very short period of time. They are especially useful for secondary and crowd characters, and can be used to rig bipeds, quadrupeds, imaginary creatures, and props. They also automatically name joints and controls, which helps transfer animation from one character to another.
One such step is the rigging step. Rigging is the process of defining and implementing the possible motions of a virtual character and providing controls to execute them, much like tying strings onto a marionette. This is a complex and time-consuming procedure requiring both artistic and technical savvy. However, when we restrict ourselves to a particular type of character, say human beings, then every new character will have the same basic motions. This correspondence provides an opportunity for automating the rigging process. Such an automated system must be flexible enough to account for variations in the characters' physical characteristics, such as height and weight.
In practice, this problem is more complex than it seems, especially when dealing with the human face, the most expressive part of the body, communicating both language and emotion. Human beings have developed an acute sensitivity to facial expressions, and artifacts in facial movement are immediately noticeable. Additionally, although humans all share the same underlying facial structure, the range of face types is immense.
Current facial animation techniques, discussed in the following chapters, attempt to overcome some of these issues. For example, parameterized models provide ready-made, configurable, and animatable faces, but the number of characters that can be represented is limited by the number of conformal parameters.
Morphing systems, on the other hand, can represent virtually any character, but they depend on facial expression libraries which must be built for each character.
Additionally, these popular morphing algorithms do not simulate the motions in the face correctly.
In this work, we present a system for automating the rigging process of human faces, allowing the artist to concentrate immediately on the animation phase.
Bone: the fundamental structure of beings
Bones are an important part of both living beings and animated creatures. Humans, for example, have 206 separate bones, which enable their locomotive activities. Bones are connective tissue organs that form part of the endoskeleton of vertebrates.
The major part of a vertebrate's skeleton is formed by calcified connective tissues called bones, which articulate with each other at many joints. In the same way, bones and bone setups are central to 3D animation.
Bones of living vertebrates serve three functions: protecting internal organs (the brain, lungs, and heart), supporting the body mass, and enabling locomotion. In the living world, only vertebrates have this kind of skeleton with a multiple-bone system, but in the 3D world most movable things have their own bone setup (trees, vehicles, etc.).
Bone setup and skeletal system
The skeletal system is an important anatomical structure. It consists of both fused and individual bones supported and supplemented by ligaments, tendons, muscles, and cartilage. In an adult living being, the skeleton comprises about 14% of the total body weight; this is the bone mass of the organism. Organic creatures have two types of skeletal system: the endoskeletal system and the exoskeletal system (a harder outer structure which provides structure, protection, and support to the soft inner body).
Skeletal functions of living organisms
The skeletal system maintains the body's shape as a framework.
The joints between bones allow movement, which is powered by skeletal muscle.
Muscles, bones, and joints provide the principal mechanics of movement, all coordinated by the central nervous system.
The skull protects the brain, eyes, and inner ears; the vertebrae protect the spinal cord.
The rib cage protects the heart, lungs, and main blood vessels; the ilium and spine protect the digestive and urogenital systems and the hip.
A Generic Face Model and Rig
The goal of this research is to speed up the rigging process of a human face by applying a predefined rig to new geometric face models. I accomplish this by deforming a reference model and rig to match models generated by a facial digitizer. The reference model is expressive, capable of a large number of facial movements and emotions, and flexible, working in a large number of geometric configurations. I have chosen muscle contractions as the driving parameters for the reference rig. Additionally, other, more intuitive interfaces can be constructed on top of these low-level controls. To implement the rig, we have developed a novel skin/muscle model based on wire deformers and point constraints. The wire deformers intuitively act as skin attachment points for the muscles, and the point constraints average out the skin displacement due to the various muscle contractions. Furthermore, the point constraint weights allow the artist to fine-tune the shape of the deformation, providing more control over the appearance of the face.
This module describes the generic model and rig in detail. I first cover the basic concepts required to understand the deformation chain, then introduce the skin/muscle model developed for the rig, and conclude with a detailed description of the reference model in Section 4.3.
Computerized surface models specify the position of their vertices with respect to an arbitrary origin, also known as the pivot point. By changing the position and orientation of the pivot coordinate system, the entire surface is repositioned within the scene. A system of hierarchical transforms works by applying the transform chain to the pivot point, thus altering the locations of surfaces.
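As a rough illustration, repositioning a surface through its pivot can be sketched in two dimensions. The function names and the (rotation, translation) representation below are ours, for illustration only, and not the representation used by any particular animation package:

```python
import math

def compose(chain):
    """Compose a chain of (rotation_deg, (tx, ty)) transforms into a single
    2-D rigid transform, represented as (rotation_rad, (tx, ty))."""
    theta, ox, oy = 0.0, 0.0, 0.0
    for rot_deg, (tx, ty) in chain:
        a = math.radians(rot_deg)
        # Each new transform acts on the result of the accumulated chain.
        ox, oy = (math.cos(a) * ox - math.sin(a) * oy + tx,
                  math.sin(a) * ox + math.cos(a) * oy + ty)
        theta += a
    return theta, (ox, oy)

def place_surface(pivot_local_vertices, chain):
    """Position a surface by applying the transform chain to its pivot frame;
    every vertex, stored relative to the pivot, moves rigidly with it."""
    theta, (ox, oy) = compose(chain)
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + ox, s * x + c * y + oy)
            for x, y in pivot_local_vertices]
```

Because the vertices are stored relative to the pivot, moving the pivot frame is enough to move the whole surface, which is exactly why constraining the pivot (as described next) constrains the surface.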
However, it is sometimes desirable to constrain the behaviour of the pivot point's coordinate system. For example, a rig can require the animator to specify the orientation of an object while constraining both eyes to match this orientation. Another example constrains the orientation of the eyes to point toward the control object. These control systems are illustrated in Figures 1 and 2. In these cases, the eye shapes are referred to as constrained objects and the controlling object is called the target object.
Our generic rig uses three types of constraints.
Point Constraint: Forces the position of the constrained object's pivot to match the position of the target object's pivot. The constrained object's orientation remains unaffected.
Orientation Constraint: Matches the orientation of the constrained object's coordinate system to the target object's coordinate system. The position remains unaffected.
Figure 1: Example of an aim constraint. In the above images, the character's eyes are forced to look at, or aim toward, the red diamond on the control figure.
Figure 2: Example of an aim constraint. In the above images, the character's eyes are forced to look at, or aim toward, the red diamond on the control figure.
Aim Constraint: Forces the constrained object's coordinate system to orient itself so that it looks at, or aims toward, the pivot point of the target object. The position of the constrained object remains unaffected.
Note that these constraints support more than one target object. In this case, the constrained object's final position or orientation is calculated as if each target object were the only target, and the results are blended using user-provided weights. These weights are normalized to ensure that the final position and orientation are bounded by the target positions and orientations. A point constraint with multiple targets is illustrated in Figure 3.
Figure 3: An example of a weighted point constraint. The red cone is point-constrained to the orange cones. In the first image, the weights for cones 1, 2, and 3 are all equal to 1.0, whereas in the second image the weights are 1.0, 2.0, and 6.0, respectively. By normalizing the weights, the pivot point of the red cone is constrained to the triangle defined by the orange cones.
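The weight normalization described above amounts to a convex combination of the target pivots. The following small sketch, with hypothetical names, shows the arithmetic; it is an illustration of the blending rule, not the actual constraint solver:

```python
def point_constraint(targets, weights):
    """Weighted point constraint: the constrained pivot is the weighted
    average of the target pivots. Weights are normalized, so the result
    always lies inside the region bounded by the targets (for three
    targets, the triangle they define)."""
    total = float(sum(weights))
    if total == 0.0:
        raise ValueError("at least one weight must be non-zero")
    norm = [w / total for w in weights]
    # Blend each coordinate independently with the normalized weights.
    return tuple(sum(w * t[i] for w, t in zip(norm, targets))
                 for i in range(len(targets[0])))
```

For the cone example in Figure 3, weights of 1.0, 2.0, and 6.0 normalize to 1/9, 2/9, and 6/9, pulling the constrained pivot strongly toward the third target.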
Deformers are procedural manipulations which change the positions of a surface's vertices or control points, thus deforming it. Note that a deformer need not change the positions of all of a surface's vertices. In fact, it is possible to define a deformation set to specify those points which a certain deformer should act upon. The benefits are two-fold. First, the artists are given greater control over the extent of the deformation. Second, the number of computations in a high-resolution model is greatly reduced. This is analogous to the definition of regions in Pasquariello and Pelachaud's Greta model [PP01].
Cluster deformers provide a way to attach a second pivot point to a selection of vertices or control points of a curve or surface. This new pivot point is generally different from the original pivot which defines the curve or surface's position in space. Transformations applied to the cluster handle (representing the new pivot point) are also applied to the vertices in the cluster's deformation set. This is useful when the artist wants direct vertex manipulation during animation. Clusters are also used for constraining regions of a surface. Since constraints affect surface pivot points only, the artist can use a cluster to provide an alternate pivot for the region to be constrained.
Figure 4: An example of how cluster deformers can be used to constrain individual vertices of a surface. In the images above, a cluster has been applied to a single vertex of the polygonal plane. The cluster handle, shown by a black letter "C," has been point-constrained to the pivot point of the green sphere. Therefore, as the sphere is moved, the cluster handle follows, bringing along the vertex. The cluster handle, therefore, acts as a second pivot point for the vertex, independent of the surface pivot point.
Another degree of freedom is provided by cluster weights. These user-defined weights control the degree to which the cluster transform is applied to the elements of the deformer set. Figure 5 shows the results of changing cluster weights on an object.
Figure 5: The images above show the effects of cluster weights on a deformed object. The left image shows the polygon surface before deformation. The orange vertices represent the elements of the cluster deformer set. At centre, the cluster has been displaced upward, moving the vertices in the deformer set along with it. The image on the right shows the deformed surface after changing the cluster weight on the outer rows of vertices to 0.5 (the default weight is 1.0).
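The effect of cluster weights can be sketched as a weighted translation of the vertices in the deformer set. apply_cluster below is an illustrative stand-in for this behaviour, not Maya's actual cluster implementation:

```python
def apply_cluster(vertices, deformer_set, weights, translation):
    """Apply a cluster deformer: vertices in the deformation set move by
    the cluster handle's translation, scaled by a per-vertex weight
    (1.0 = full effect, 0.5 = half effect, 0.0 = unaffected).
    Vertices outside the set are never touched."""
    out = list(vertices)
    tx, ty, tz = translation
    for idx in deformer_set:
        w = weights.get(idx, 1.0)   # the default cluster weight is 1.0
        x, y, z = out[idx]
        out[idx] = (x + w * tx, y + w * ty, z + w * tz)
    return out
```

This mirrors Figure 5: lowering the weight on the outer rows to 0.5 moves them only half as far as the fully weighted vertices.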
The generic rig used in our animation system was developed using a mixture of structural and ad hoc components. Since the rig will animate a variety of geometrically different faces, we have based the animation parameters on the common human musculoskeletal structure. However, there is no formal structural basis for the deformations employed by the skin/muscle model used in the rig; these simply provide good results. Nevertheless, a few intuitive principles led to the development of this model. Several muscles may attach to the same patch of skin; therefore, simultaneous contractions of these muscles will pull the same patch of skin in more than one direction. Additionally, the displacement of skin due to muscle contraction is strongest near the muscle's insertion point, dropping off as one moves away from this region. Our model takes these properties into account.
Figure 6: An example of the push-to-surface deformer. The top image shows the reference surface in a purple color, before the deformation is applied to the blue surface. The middle image shows the effects of the deformation, and the bottom image shows the effects of altering the reference direction.
Figure 7: Joints were chosen to model muscle contractions due to their visual feedback. The images, from left to right, show a relaxed and contracted muscle joint, respectively.
The basic elements of the model are muscles and skin. Muscles are defined by an origin and an insertion point, and the skin is simply the deformable surface. These elements are coupled together by a deformation chain, which is triggered by muscle contractions. To simplify the description of this deformation chain, we will describe its components as we piece it together. The muscle contraction is modelled as a scaling transformation about the muscle's origin in the direction of the insertion point. Although any object can be used to represent this muscle transformation, we have chosen to use joints due to their intuitive visual feedback, shown in Figure 7.
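The scaling-about-the-origin view of a contraction can be sketched as follows, assuming a simple linear muscle. This is an illustration of the idea, not the joint-based implementation used in the rig:

```python
import math

def contract_muscle(origin, insertion, skin_points, contraction):
    """Model a muscle contraction as a scaling about the muscle's origin
    in the direction of the insertion point. contraction < 1.0 shortens
    the muscle, pulling attached points toward the origin along its axis;
    components perpendicular to the axis are left unchanged."""
    axis = [i - o for i, o in zip(insertion, origin)]
    length = math.sqrt(sum(a * a for a in axis))
    axis = [a / length for a in axis]            # unit muscle direction
    moved = []
    for p in skin_points:
        rel = [c - o for c, o in zip(p, origin)]
        along = sum(r * a for r, a in zip(rel, axis))
        # Scale only the along-axis component of the point's offset.
        moved.append(tuple(o + r + (contraction - 1.0) * along * a
                           for o, r, a in zip(origin, rel, axis)))
    return moved
```

With contraction = 0.5, a point at the insertion moves halfway toward the origin, matching the contracted joint shown in Figure 7.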
The muscle contraction affects the skin surface indirectly using wire deformers. As the muscles contract, the wires change shape, thereby affecting the skin. This procedure is shown in the simple diagram of Figure 4.11. You can think of the wire as skin tissue being pulled by the muscle. Figure 4.12 shows a sample setup with two joints affecting a wire, which we will use to illustrate the construction of the deformation chain.
As stated above, a muscle contraction does not affect all points on the skin.
Figure 4.11: Simple version of the skin/muscle deformation chain.
The facial rig provides the controls that bring Leo to life. The controls are based on human facial structure and make use of the skin/muscle model introduced above. The rig itself was implemented in Alias Maya 5.0 and can be decomposed into three components: the facial mask rig, the eye rig, and the tongue rig.
Figure 8: Features of Leo's topology are shown in this image. The area highlighted in light blue shows the radial topology used around the mouth and eye regions. The edges highlighted in yellow are additional edges included for creating creases.
Figure 9: Models for the eyes, teeth, and tongue. Note that the tongue shape, which is attached to the lower teeth, is hidden by the upper teeth.
The facial mask rig accounts for 95% of all movement in the face. It is responsible for simulating all muscle contractions, skin creases, and neck and jaw rotations. Therefore, it is the most complex component of Leo's rig.
Figure 10: Deformation chain for the facial rig.
A simplified deformation chain diagram for the facial mask rig is shown in Figure 10. Conceptually, the chain is divided into four components. Initially, the deformations representing muscle contractions are performed using the skin/muscle model. Since this model does not account for skin sliding over bone, a skull no-penetration deformation follows. Next, a skin-creasing deformation, triggered by certain muscle contractions, further enhances the expressions of the forehead and eye regions. Finally, deformations controlling head and jaw rotations (termed bone movements) and eyelid opening/closing are applied. The reasons for placing the gross motor movements of the head and jaw at the end of the chain are discussed below.
Figure 11: Muscles used in Leo's rig.
Figure 11 shows the muscles implemented in the rig. Vertices on Leo's polygonal mesh were selected to act as origin and insertion points for each muscle, thereby parameterizing the muscle construction to the model topology. Additionally, the muscle origin position was scaled uniformly by 0.95 about the centre of Leo's bounding box before creation of the joint to simulate the subcutaneous attachment to the skull, as illustrated in Figure 11. With the exception of the procerus and mentalis, all muscles have symmetric joints on the left- and right-hand sides of the face. The frontalis muscle has two components per face half, allowing for contractions of the inner and outer portions of the muscle. Another feature of note on the mentalis muscle is that the origin and insertion points are switched, thereby simulating a muscle contraction by scaling the joint outward. This orientation worked better in practice than the standard treatment since, in contrast to the other muscles of the mouth, the mentalis pushes on the lips instead of pulling.
This view of Leo's left eye shows how the muscle joints are constructed from the vertices in the topology of the face mask. Note that the ends of the joints are positioned at the vertex locations; the roots of the joints are displaced to simulate attachment to the skull (see text).
Observe that the sphincter muscles are notably absent from this muscle system. In the case of the orbicularis oculi, two joints have been used to simulate the skin rising in the cheek region due to contractions (see Figure 11). Only the incisive fibres of the orbicularis oris have been included in this rig. The levator palpebrae muscle is also excluded from this system, as it is implemented in the final deformation component of the rig. A comparison of the muscle joints to the human musculature is shown in Figure 12.
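The vertex-based joint construction and the 0.95 origin scaling described above can be sketched as follows. build_muscle_joint is a hypothetical helper written for illustration, not part of the actual Maya implementation:

```python
def build_muscle_joint(origin_vertex, insertion_vertex, bbox_center, scale=0.95):
    """Construct a muscle joint from two mesh vertices. The origin is
    scaled uniformly about the bounding-box centre (by 0.95, as in the
    text) so the joint root sits slightly beneath the skin surface,
    simulating subcutaneous attachment to the skull. The insertion end
    stays on the skin surface at the chosen vertex."""
    root = tuple(c + scale * (v - c)
                 for v, c in zip(origin_vertex, bbox_center))
    end = tuple(insertion_vertex)
    return {"root": root, "end": end}
```

Because both endpoints come from mesh vertices, rebuilding the joint on any deformed copy of the reference mesh reproduces the muscle automatically.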
The rig muscles attach to sixteen wires on the skin. Intuitively, these wires represent the areas of the skin that are most affected by muscle contractions. Once again, with the exception of the upper and lower central wires around the mouth, each wire has a symmetric twin on the opposite side of the face. The wires are constructed by interpolating specific vertex positions on the face mesh, maintaining the topological parameterization.
Figure 13: Comparison of Murphy's rig muscles to the human musculature. Note that the risorius muscle, which appears to be missing, is actually hidden by the triangularis muscle, as shown in Figure 11.
The wires used in Leo's rig.
Although we initially used cubic NURBS curves for the wires, experiments showed that linear curves provided similar results with lower computational costs. Table 4.1 shows which muscles are connected to each wire using the skin/muscle deformation chain. The weights on the cluster point constraints in each wire were adjusted manually until the desired facial contraction effects were obtained. Due to the proximity of the upper and lower lips in the generic face mesh, vertices in the lower lip are within the drop-off distance of the upper-lip wires and vice versa, which is a source of undesirable deformations. To correct this problem, the region around the mouth was carefully partitioned into deformation sets specific to each wire. These deformation sets are illustrated in Figure 4.26.
An additional benefit of this partitioning is the reduction of computation in each wire deformer. The deformation sets for the wires around the eyes and cheek regions were also specified, as shown in Figure 4.27.
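The interplay of drop-off distance and per-wire deformation sets can be sketched as follows. The linear falloff and the function names are simplifying assumptions of ours; the actual wire deformer uses its own falloff:

```python
import math

def wire_influence(vertex, wire_points, dropoff):
    """Influence of a wire on a vertex: 1.0 on the wire itself, falling
    linearly to 0.0 at the drop-off distance (a simplification of the
    actual falloff curve used by wire deformers)."""
    d = min(math.dist(vertex, p) for p in wire_points)
    return max(0.0, 1.0 - d / dropoff)

def deform_with_sets(vertices, wires, dropoff):
    """Apply several wire displacements. Each wire is restricted to its
    own deformation set, so nearby wires (e.g. the upper- and lower-lip
    wires) cannot pull on each other's vertices even when those vertices
    fall within the drop-off distance."""
    out = list(vertices)
    for wire_points, displacement, deformer_set in wires:
        for i in deformer_set:
            w = wire_influence(out[i], wire_points, dropoff)
            out[i] = tuple(c + w * d for c, d in zip(out[i], displacement))
    return out
```

Excluding a vertex from a wire's deformation set both prevents the cross-talk described above and skips the distance test entirely, which is the computational saving mentioned in the text.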
As briefly stated previously, the skin/muscle model has no provisions for simulating skin sliding over a skull. This is not a problem in the mouth region, since the face geometry does not follow the skull lines closely there. However, it does create a problem in the forehead region, especially when the eyebrows are raised (contraction of the frontalis muscle), as shown in Figure 4.28. As the muscle joints pull on the skin, the region of skin representing the eyebrow ridge rises with the wire, giving the face an unnatural, stretched-out look. The second stage of the facial deformation chain fixes this using a push-to-surface deformer, which pushes out the skin in the skull region and restores the natural look of raised eyebrows.
Figure 4.27: Deformation sets corresponding to the wire deformers around the forehead and cheek regions.
Figure 4.26: Deformation sets corresponding to the wire deformers around the mouth region.
Figure 4.29 shows the procedure we use to generate the skull surface directly from Leo's topology. First, we generate a set of cubic NURBS curves over the forehead surface. Each curve is constrained to pass through a series of face mask vertices contained within a vertical section of the forehead. The curves are then lofted to create a bicubic NURBS surface used by the push-to-surface deformer. We use the z-axis vector ([0 0 1]^T) to define the forward direction of the deformer, which is the same direction Leo looks toward.
The push-to-surface deformer is the bottleneck of the facial deformation chain. The slowdown comes from the number of ray/surface intersection tests performed against the bicubic NURBS skull surface. Therefore, we have attempted to trim down the skull deformation set as much as possible.
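Under the simplifying assumption that the skull can be treated as a height field along the +z forward direction, the deformer's effect can be sketched like this; skull_z stands in for the expensive ray/NURBS-surface intersection test:

```python
def push_to_surface(vertices, skull_z):
    """Sketch of a push-to-surface deformer with a +z forward direction:
    each skin vertex is tested against the reference (skull) surface
    along z; a vertex behind the surface is pushed forward onto it, and
    a vertex already in front is left untouched. skull_z(x, y) stands in
    for the ray/surface intersection against the bicubic NURBS skull."""
    return [(x, y, max(z, skull_z(x, y))) for x, y, z in vertices]
```

Since one intersection test is paid per vertex in the deformation set, shrinking that set (as described above) directly reduces the cost of this stage.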
Rigging and Animating Fitted Scans
In the previous chapters we developed a reference model and rig for a generic human face. The rig has been parameterized to the topology of the reference head surface in order to rebuild or reattach it easily. We have also developed a method to deform the reference surface mesh to conform to digitized facial data produced by a Cyberware 3D scan. By combining these two systems, we are able to work on the animation of a particular person's face quickly and effectively.
In this module, we merge the work described previously into a functional facial animation system. The system provides interfaces which allow the user to load a fitted scan, attach the reference rig, and animate easily and effectively. These interfaces have been implemented in Alias Maya 5.0, since the reference rig was implemented using the same software. The animation interfaces provide a convenient mechanism for manipulating the values of rig parameters. However, we have no guarantee that the rig developed for Murphy will deform fitted facial scans in the correct manner; for example, we do not know whether setting the rig parameter values in a smiling configuration will make a fitted mesh smile. We show in this chapter that the muscle-based rig does, in general, deform fitted faces correctly. Additionally, if the artist is not entirely satisfied with the resulting expressions, he or she is free to modify them. Therefore, our animation system jumpstarts the artist into the final "tweaking phase" of the modelling and rigging steps in the animation pipeline. The remainder of this chapter is organized as follows. First, we present an interface for loading and rigging fitted scans. We then describe a second interface which provides a versatile and flexible animation environment. We close this chapter with a discussion of the results obtained from our system.
Rigging the Fitted Face
Before a fitted scan can be animated, it must be loaded into the animation software and rigged. This procedure is simplified by the user interface we have developed, which provides a simple and convenient method for applying the reference rig to a fitted scan. The actual procedure consists of two steps: a loading stage and a rigging stage. The interface is likewise divided into two components, but only one of them is enabled at a particular time.
Loading the Models
The purpose of the first step of the rig setup procedure is to load all of the surface model information defining the face. The user interface for this task is pictured in Figure 6.1. Using this interface, the user selects a name for the head model and a fitted surface to load. The surface is produced from the fitting procedure described previously.
I have presented a system for the automated rigging of human face models. Our system takes advantage of previous work which deforms the surface of a reference face model to match digitized face models generated by a Cyberware facial scanner. To automate the rigging procedure, we have parameterized the construction of the reference rig on the vertex topology of the reference model. The rig can then be procedurally reconstructed onto deformed copies of the reference model. The animation parameters in our rig are structural in nature, based on the expressive muscles of the face. A novel skin/muscle model, based on wire deformers and point constraints, is used to implement this parameterization. The model supports any number of muscle contractions about a particular skin patch (for example, the corner of the mouth) while allowing the artist to control the shape of the deformations.
Finally, we presented a user interface for quickly loading and rigging new models and a flexible environment for animation. The models and resulting facial deformations produced by our system are realistic and expressive, although we have not provided any objective measure of the "realism" of our generated models. However, our main goal has been to reduce the amount of time required to rig new face models. Thus, our system provides a good starting point for the animator to tweak the shapes or deformations which are not to his or her liking. Considering the amount of time the same artist would need to create and rig a head model from scratch, the time savings provided by our system are significant. In the future, systems of this type will greatly reduce the amount of time required for generating realistic facial animations of different
characters. Our system offers other advantages over popular modelling and rigging methods such as parameterized models and morphing systems. For instance, the number of characters a parameterized face model can represent is limited by the range of values of its conformal parameters. In our system, the number of characters is limited only by the number of configurations achievable by displacing the vertices of the reference model, providing a much wider range. Also, it is well known that morphing systems can be used to animate any character's face. However, the only achievable expressions in these systems are combinations of target shapes which have been previously modelled or captured. This extra modelling work is not necessary in our system; only the default, neutral expression is required. This can be modelled or captured using the reference model surface as a template, and the reference rig will be capable of generating a wide range of expressions. Our approach, therefore, saves time and storage space by reducing the number of surface models needed to create an animatable face. Additionally, if a morphing system is required due to design constraints, our system can be used to generate the target shapes or shape templates. Although effective at generating believable facial animation, the current implementation of our system can be improved. As mentioned in Chapter 5, the scan fitting procedure implemented in this work provides no guarantee that the vertices in the reference and fitted models will be located in corresponding regions of the face. Since our system implicitly assumes that this correspondence exists, we recommend that future implementations use either feature-based or constraint-based fitting algorithms. Another option is to limit the fitted region to the facial mask only, since the rest of the head is not involved in facial motion. Enhancements should be made to the reference rig as well.
A significant deficiency is the lack of sphincter muscles, which should be used in the eye and mouth regions. Such muscles could be implemented by a scaling transformation about the centre of each muscle, and control over the final shape of this deformation should be given to the animator. Complications can also arise due to the interaction of skin with the underlying facial structure. This issue must be addressed to avoid self-intersections and intersections with the teeth and the eyes. Although sculpt and push-to-surface deformers can help, their computational expense might prohibit interactive responses. How to prevent these intersections procedurally is an area for future research. Our system currently makes no attempt at modelling or rendering hair or skin accurately, relying on basic texturing routines. Future developments should incorporate more advanced procedures for added realism. Although we have formulated our rig for animation by a trained artist, current industry trends have focused on driving animation using motion-capture data. Additionally, MPEG-4 Facial Animation Parameter (FAP) streams provide pre-recorded motion data for driving face models in real time. Whether topologically parameterized rigs can be driven by these technologies should be explored. Finally, although we have focused entirely on the human face in this work, we believe the concepts presented are flexible enough to be used on non-human characters.