Chapter 3: Skin

Skinning is the process of attaching a renderable skin to an underlying articulated skeleton. There are several approaches to skinning with varying degrees of realism and complexity. Our main focus will be on the smooth skinning algorithm, which is both fast and reasonably effective, and has been used extensively in real time and pre-rendered animation. The smooth skinning algorithm goes by many other names in the literature, such as blended skinning, multi-matrix skinning, linear blend skinning, skeletal subspace deformation (SSD), and sometimes just skinning.


This chapter will explain the smooth skinning algorithm in detail and provide additional information about the offline creation and binding process. Binding refers to the initial attachment of the skin to the underlying skeleton and the assignment of any necessary information to the vertices.


Smooth skinning, while fast and straightforward, does have its limitations, and so alternative techniques are also briefly introduced and referenced, including a variety of deformation techniques and some more elaborate anatomically based approaches that simulate muscle and skin deformations.

3.1 Smooth Skinning

3.1.1 Rigid Parts and Simple Skin

Before looking at the actual smooth skinning algorithm, we will quickly consider two very simple approaches: rendering characters as rigid components, and the simple skin algorithm.


Rendering Characters with Rigid Components

Robots and simple characters made up from a collection of rigid components can be rendered through classical hierarchical rendering approaches. Each component mesh is simply transformed into world space by the appropriate joint world matrix. This results in every vertex in the final rendered character being transformed by exactly one matrix.


Expressed mathematically, we can say that for every vertex, we compute the world space position v' by transforming the local space position v by the appropriate joint world matrix W:


    v' = v · W                                        (3.1)


Remember that W is a 4x4 matrix and v is a 1x4 homogeneous position vector:

    v = [vx  vy  vz  1]


Every vertex in each mesh is transformed from the joint local space where it is defined into world space, where it can be used for further processing such as lighting and rendering.


[Image 3.1: rigid parts]


Rendering with rigid components works just fine for robots, mechanical characters, and vehicles, but it is clearly not appropriate for organic characters with continuous skin.


Simple Skinning

With the simple skinning approach, the character's skin is modeled as a single continuous mesh. Every vertex in the mesh is attached to exactly one joint in the skeleton, and when the skeleton is posed, the vertices are transformed by their joint's world space matrix. As with the rigid component method, every vertex is transformed by exactly one matrix using the identical equation v' = v · W. This implies that simple skinning should run at about the same speed as rendering a character as rigid parts, and in practice, the two techniques often perform similarly with equal sized meshes.


Limitations of Simple Skinning

The simple skinning technique is adequate for low detail models, but is clearly not sufficient for higher quality characters. In practice, it can be made to work for characters with perhaps 500 or even as many as 1000 triangles, as long as care is taken in vertex placement and bone attachment. Beyond that, it is too limited and a better solution is required.


[Image 3.2: simple skinned joint]

3.1.2 Smooth Skinning Algorithm

Smooth Skin

Smooth skin extends the concepts used in simple skin. With smooth skinning, each vertex in the mesh can be attached to more than one joint, each attachment affecting the vertex with a different strength or weight. The final transformed vertex position is a weighted average of the initial position transformed by each of the attached joints. For example, the vertices in a character's knee could be partially weighted to both the hip joint (controlling the upper thigh) and knee joint (controlling the calf). Many vertices will only need to attach to one or two joints and rarely is it necessary to attach a vertex to more than four.


[Image 3.3: smooth skinning on a cylinder]


Let us say that a particular vertex is attached to N different joints. Each attachment is assigned a weight w_i, which represents how much influence the joint will have on the vertex. To ensure that no undesired scaling will occur, we enforce the constraint that all of the weights for a vertex must add up to exactly 1:


    Σ_{i=1..N} w_i = 1                                (3.2)


To compute the world space position of the vertex, we transform it by each joint that it is attached to, and compute a weighted sum of the results:


    v' = Σ_{i=1..N} w_i · v · B[i]^-1 · W[i]          (3.3)


where v is the untransformed vertex in skin local space, the space in which the skin mesh was originally modeled. The matrix W[i] is the world matrix of the joint for attachment i. It is the result of the forward kinematics computations described in [chapter 2]. We use the indexing notation [i] to indicate that we don't want the matrix of the ith joint in the skeleton (which would be written W_i), but instead we want the world matrix of attachment i's joint. For example, if a particular vertex is weighted 60% to joint #37, 30% to joint #6, and 10% to joint #14, then we have:


    [1] = 37, [2] = 6, and [3] = 14

    w_1 = 0.6, w_2 = 0.3, and w_3 = 0.1


The matrix B[i] is called the binding matrix for joint [i]. This matrix is a transformation from joint local space to skin local space, and so the inverse of this matrix, B[i]^-1, represents the opposite transformation from skin local space to joint local space. The combined transformation B[i]^-1 · W[i] in [equation 3.3] therefore first transforms v from skin local space to joint local space, then from joint local space to world space. As the number of joints is likely to be small compared to the total number of vertices that need to be skinned, it is more efficient to compute B[i]^-1 · W[i] once for each joint before looping through all of the vertices. We will call this the skinning matrix M, defined by


    M[i] = B[i]^-1 · W[i]                             (3.4)


The skinning equation that must be computed for each vertex then simplifies to


    v' = Σ_{i=1..N} w_i · v · M[i]                    (3.5)
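As a concrete illustration, the blend in equation (3.5) can be written in a few lines of code. The Python sketch below uses plain row-major 4x4 matrices and 1x4 row vectors (v' = v · M), matching the conventions in the text; the function names and example matrices are illustrative only.

```python
def transform(v, m):
    """Multiply a 1x4 row vector by a 4x4 row-major matrix: v' = v * M."""
    return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]

def skin_vertex(v, attachments):
    """Blend v (a 1x4 homogeneous position) over its attachments,
    each a (weight, skinning_matrix) pair, per equation (3.5)."""
    out = [0.0, 0.0, 0.0, 0.0]
    for w, m in attachments:
        t = transform(v, m)
        for j in range(4):
            out[j] += w * t[j]
    return out

# Example: a vertex weighted 50/50 between two joints, one of which has
# translated 2 units along x (translation sits in the bottom row when
# using row vectors).
IDENTITY = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
MOVED    = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [2, 0, 0, 1]]
v = [1.0, 0.0, 0.0, 1.0]
print(skin_vertex(v, [(0.5, IDENTITY), (0.5, MOVED)]))  # [2.0, 0.0, 0.0, 1.0]
```

The vertex lands halfway between its two transformed positions, which is exactly the weighted average the equation describes.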


Binding Matrices

In order to understand the role of the binding matrix B in equations [3.3] and [3.4], it is necessary to make some clear definitions about the different spaces and transformations involved. We start with a definition of the untransformed vertex position v and the concept of skin local space.


The original untransformed skin mesh is modeled in a comfortable neutral pose relative to a coordinate system we will call skin local space. In keeping with our conventions specified in [chapter 2], we provide skin local space with a right handed coordinate system with the x-axis pointing to the right, the y-axis pointing up, and the z-axis pointing backward. The untransformed mesh is defined by convention with the character standing on the y=0 plane, centered left to right at x=0, and facing in the -z direction. The untransformed vertex v is any point defined in this space.


[Image 3.4: character mesh in skin local space]


During the offline modeling process, the skeleton is created along with the skin, and at some point, they must be attached to each other in a process called skin binding. The binding process attaches every vertex to one or more joints and sets the weights for each attachment.

In order to bind the skeleton to the skin, the skeleton must first be posed so that it matches the pose of the skin mesh defined in skin local space. We will refer to this pose as the binding pose. This is often different from the zero pose, which is the pose of the skeleton when all of the DOFs are set to zero. The skeleton binding pose is set to match the skin mesh and then the world space matrix W of each joint is recorded and stored as the binding matrix B for that joint.


[Image 3.5: character skeleton in zero pose and in binding pose (inside skin)]


The B matrix for each joint is constant and set at the time when the skin is first attached to the skeleton in the modeling process. This implies that the B^-1 matrices are also constant and can be pre-computed and stored ahead of time. Usually, an array of inverse binding matrices would be stored right along with the skin mesh data, as they are closely related. Also, even though it might seem natural to store the binding (or inverse binding) matrix directly in the joints, it may make more sense to store them with the other skin information. One reason is that this allows several different skins to be bound to a single skeleton, each with a different binding pose if necessary.


Rendering Skin

In interactive character animation applications, the actual rendering of the skin is usually handled by some hardware graphics chip supporting a real time rendering API such as [OpenGL] or [Direct3D]. In fact, many modern APIs contain direct support for the smooth skinning algorithm. After transformation to world space by the skinning algorithm, triangles (defined by their vertices) are passed to the renderer where they may be shaded, projected for viewing, texture mapped, or further processed using user programmed vertex and pixel shaders [ref]. The details of the rendering process are outside the scope of this book and the reader should consult [MOLL99] for more information on real time rendering.

3.1.3 Handling Normals

In addition to transforming the vertex positions into world space, the skinning algorithm must also transform normals that the renderer will need to perform lighting calculations. We will assume that in the initial untransformed mesh, a normal n is specified for every vertex v. This normal is usually specified offline through an interactive modeling tool. To compute the world space normals, we use exactly the same skinning weights as the vertices, and so the normals are treated in very much the same way:


    n'' = Σ_{i=1..N} w_i · n · M[i]                   (3.6)


Even if the untransformed normal n is unit length, the weighted averaging in the equation can cause the length of the intermediate blended normal n'' to vary. To account for this change in length, the blended normal will most likely need to be normalized after the skinning is applied in order for lighting calculations to work properly:


    n' = n'' / |n''|                                  (3.7)


It should be noted that in extreme cases, it would be possible for the intermediate normal n'' to shrink down to zero length. This would make its direction undefined and cause a divide by zero error in the normalization. In practice, it is very uncommon for a normal to disappear entirely, and so this case can usually be ignored. If necessary though, a test for zero and special case handling can be implemented.
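If the guard is desired, it folds naturally into the normalization step. The Python sketch below is one possible version; the epsilon threshold and fallback direction are arbitrary illustrative choices.

```python
import math

def normalize(n, fallback=(0.0, 1.0, 0.0), eps=1e-8):
    """Return n scaled to unit length, guarding the degenerate case."""
    length = math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2])
    if length < eps:
        # Direction is undefined; substitute a fallback rather than
        # dividing by zero.
        return list(fallback)
    return [n[0] / length, n[1] / length, n[2] / length]

print(normalize([3.0, 4.0, 0.0]))  # [0.6, 0.8, 0.0]
print(normalize([0.0, 0.0, 0.0]))  # falls back to [0.0, 1.0, 0.0]
```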


It is also important to recall that the normal vector represents a direction, rather than a position. This implies that its expansion to 4D homogeneous coordinates should have a 0 in the w coordinate, instead of a 1 as in the vertex position:

    n = [nx  ny  nz  0]


This ensures that the normal will only be affected by the upper 3x3 portion of the matrix, and not affected by any translation [see appendix [A]].


Normals & Non-Rigid Transformations

Most character skeletons are constructed using rigid (orthonormal) joint transformations such as rotations and translations. If non-rigid transformations such as scales and shears are used, the normals will require some additional special handling.


If the vertices of a mesh are transformed by a non-rigid matrix M, then the normals should be transformed by (M^-1)^T, the inverse transpose of the matrix. This modifies the equation for skinning normals to:


    n'' = Σ_{i=1..N} w_i · n · (M[i]^-1)^T            (3.8)


For rigid matrices, the inverse transpose (M^-1)^T simply equals M, so this reduces to [equation 3.6].
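A small numeric example makes the need for the inverse transpose concrete. The Python sketch below uses a hypothetical diagonal scale matrix and shows that transforming the normal by M itself breaks perpendicularity with the surface, while the inverse transpose preserves it.

```python
def diag(d):
    """Build a 3x3 diagonal matrix from a list of three values."""
    return [[d[i] if i == j else 0.0 for j in range(3)] for i in range(3)]

def xform(v, m):
    """Row vector times 3x3 matrix."""
    return [sum(v[k] * m[k][j] for k in range(3)) for j in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

M       = diag([2.0, 1.0, 1.0])   # non-rigid: scale x by 2
M_inv_T = diag([0.5, 1.0, 1.0])   # (M^-1)^T; trivial for a diagonal matrix

tangent = [1.0, 1.0, 0.0]         # a direction lying in the surface
normal  = [-1.0, 1.0, 0.0]        # perpendicular to the tangent

t2 = xform(tangent, M)                   # tangents always use M
print(dot(xform(normal, M), t2))         # -3.0: no longer perpendicular
print(dot(xform(normal, M_inv_T), t2))   # 0.0: perpendicular again
```

The dot products confirm the rule: only the inverse transpose keeps the transformed normal perpendicular to the scaled surface.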


It should be noted that for improved quality, it would be a good idea to perform a vector normalization on every transformed normal before multiplying by the weight w_i, and then to renormalize once again for the final result. This produces a more accurate blending of the normals in situations where large scales or other deformations are applied. It does, however, require considerably more vector normalization, which can impact performance.


Tangent Vectors

In addition to positions and normals, some applications may require tangent vectors to be transformed with the vertices. Tangents are sometimes needed for more advanced shading techniques such as anisotropic lighting, and bump or displacement mapping [MOLL99]. Like the normal n, a tangent vector t represents a direction, and so its expansion to 4D homogeneous coordinates is:

    t = [tx  ty  tz  0]



Unlike normals, however, tangents do not require any special handling when the skeleton uses non-rigid matrices. In other words, we can always use the M matrices to transform tangents, instead of (M^-1)^T as with normals.


Depending on the application, tangent vectors may need to be normalized as well, as the blending will not generally produce unit length results. Assuming normalization is required, we can use the following formulas to compute the blended tangent vector t':


    t'' = Σ_{i=1..N} w_i · t · M[i]                   (3.9)


    t' = t'' / |t''|                                  (3.10)


For more information on normals, tangents, and non-rigid transformations, see [BARR84] and [TURK90].

3.1.4 Optimizing Smooth Skin


Many modern real time rendering architectures contain both hardware and software support for skinning, and many of those systems use the smooth skinning algorithm presented above or some variation of it. Taking advantage of the rendering system's native skinning algorithms will most likely have the biggest payoff in terms of performance gains, although this could potentially sacrifice some flexibility. [OPENGL], [DIRECTX]


More and more real time graphics systems support user-programmable vertex microprograms or vertex shaders that allow custom code to be applied to every vertex in a mesh. Most architectures supporting vertex shaders are able to allow a complete implementation of smooth skinning within the shader, while still being able to do additional processing such as lighting. [NVIDIA][ATI]


For software implementations without the advantage of hardware vector units, one can almost always use 3x4 matrices instead of 4x4 matrices, as described earlier in [section 2.1.5].


Algorithm Organization and Memory Caching

For best performance, the runtime data needed by the smooth skinning algorithm should be laid out in a cache-friendly manner. To implement the smooth skinning algorithm, one could loop through bone by bone (bone-major), or vertex by vertex (vertex-major).


Looping through bone by bone requires first clearing the array of transformed vertices, and then looping through each bone adding a weighted transformation to each vertex attached. This can be bad for memory caching behavior because the vertex array may be large and access to it may be scattered.


Looping vertex by vertex involves computing the fully transformed position and normal of each vertex all at once. This requires scattered access to the matrix array, but this array will generally be much smaller than the vertex array and therefore should be more cache friendly. Caching behavior also benefits from the fact that the vertex array does not need to be cleared initially, which cuts the amount of memory writing down significantly. Vertex read access is linear, which should be efficient for caching as well. In practice, the vertex-major approach outperforms the bone-major technique by at least two to one, and is the approach used in most hardware and vertex program implementations. [WEBE02]


Skin Partitioning

An approach to optimizing skinning that can be combined with effective use of hardware and caching is to partition the mesh into separate rigid and skinned components through an offline preprocessing phase. Triangles in the mesh whose vertices are all 100% weighted to the same joint can be rendered as rigid parts which may be able to provide a performance improvement on some architectures.


[Image: skin partitioning]


Some character models may be particularly well suited to this approach, namely those with large rigid components in the mesh. A knight in a suit of armor or a half-human, half-robot creature would be two possible examples. In these cases, it is possible to simply use the smooth skinning algorithm as is, but on most real time graphics hardware, it is likely that one would get better performance by separating out the rigid objects and rendering them with single matrix transformations.


The actual performance of the partitioning method compared to the standard smooth skin algorithm is difficult to predict, as there are some tradeoffs involved. The smooth skin algorithm does more math per vertex, but is very simple and the entire mesh can be processed at once. Partitioning will allow for fewer math computations, but can potentially add more graphics state changes and interfere with the quality of triangle strip generation, or tri-stripping, which can adversely affect rendering performance. Most of the effort in partitioning is offline however and its runtime performance will vary from mesh to mesh and from machine to machine.


Speeding up Normals

One of the slowest operations in the smooth skinning algorithm is the renormalization that must be applied to the normals. This can add a significant cost per vertex, as renormalization requires an inverse square root (1/sqrt(x)) operation, which can be expensive on some architectures. It is possible to use approximation algorithms or table lookups for this, but fortunately, more and more graphics chips are supporting fast normalization directly in hardware. Whichever approach is chosen, the issue of normal renormalization will probably need to be confronted if smooth skinning of the normals is desired.


In some situations, it may be acceptable to simply transform the normal by the matrix with the largest weight and not bother blending with the other matrices. Transforming by only the largest weighted attachment will preserve the length of the normal and eliminate the need for renormalization. It should have a relatively small impact on visual quality in most cases and may be an acceptable optimization.


Number of Bone Attachments

As mentioned earlier, it is common for skinning systems to limit N, the maximum number of bones a vertex is allowed to attach to. In practice, a common choice for this limit is 4, which is a reasonable compromise and is actually more than sufficient for most applications. Examination of several character models of approximately 2000 triangles used in production applications showed that on average, around 60% of vertices were attached to a single bone, around 35% were attached to two bones, and about 5% were attached to three bones. Vertices were rarely if ever attached to four bones, and when dealing with skin for average human or animal characters, these numbers are understandable. Keep in mind that these numbers represent only a small sampling and will vary from case to case.


Putting an upper limit on the number of attachments simplifies the data storage requirements per vertex and allows for straightforward implementation of the algorithms in either software or as vertex microprograms.



The skinning information for a mesh requires storage of additional data for every vertex. Each vertex needs to store N, the number of joint attachments, and then a weight w_i and joint index [i] for each attachment. Fortunately, this information can be compressed effectively.


The skin weights themselves can be compressed down to 8 bits or even fewer instead of using full 32-bit floating point weights. Using an 8-bit value will allow the weights to be specified to within 0.2% error (0.5 / 256), which should be more than sufficient. In addition, the weight value for the last vertex attachment does not need to be stored, as we can always compute it, assuming that all the weights add up to exactly 1.


The joint indices should also be easy to compress. Characters will rarely have more than 256 bones and so an 8-bit bone index will be sufficient in most cases.


If each attachment requires an 8-bit weight and an 8-bit index, we have a total of 16 bits per attachment. If we limit the maximum number of attachments per vertex to 4, then we should be fine with 64 bits of data total. Remember that we can also save 8 more bits if we don't store the final weight, so even 56 bits would be enough. The value of N can be compressed into 2 bits, or left out entirely and simply implied by setting any unused weights to 0.


The per-vertex cost of storing the skinning information is small compared to the other information that typically needs to be stored for a vertex, such as the 3D position, normal, and possibly colors, texture coordinates, and other data.


Pseudocode Algorithm

A pseudocode implementation of the smooth skin algorithm is presented below. It includes the initial step of computing the skin matrices based on the binding and world space joint matrices.


    // Generate a temporary array of skinning matrices from
    // the constant binding matrices and the world matrices
    // produced from the skeleton forward kinematics
    for each joint j {
        Compute M_j = B_j^-1 · W_j
    }

    // Loop through every vertex and compute blended position
    // and normal
    for each vertex {
        Create temp v'' = [0 0 0 0]
        Create temp n'' = [0 0 0 0]

        // Loop through all of the attachments for this vertex
        // and add a weighted contribution from each
        for each joint attachment i of current vertex {
            v'' += w_i · v · M[i]
            n'' += w_i · n · M[i]
        }

        v' = v''                  // Store position
        n' = n'' / |n''|          // Normalize and store normal
    }
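For reference, the same algorithm can also be written out as runnable code. The Python sketch below mirrors the pseudocode in this section (a production version would live in C++ or a vertex shader); the matrix layout is row-major with row vectors, and all names are illustrative.

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 row-major matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def vec_mat(v, m):
    """Multiply a 1x4 row vector by a 4x4 matrix."""
    return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]

def skin_mesh(vertices, normals, attachments, inv_bind, world):
    """vertices/normals: 4D row vectors (w=1 for positions, w=0 for normals).
    attachments: per vertex, a list of (weight, joint_index) pairs.
    inv_bind/world: per joint, the B^-1 and W matrices."""
    # Skinning matrices M = B^-1 * W, computed once per joint
    skin = [mat_mul(b_inv, w) for b_inv, w in zip(inv_bind, world)]

    out_v, out_n = [], []
    for v, n, att in zip(vertices, normals, attachments):
        vv = [0.0] * 4
        nn = [0.0] * 4
        for weight, joint in att:      # weighted sums, eq. (3.5) and (3.6)
            tv = vec_mat(v, skin[joint])
            tn = vec_mat(n, skin[joint])
            for j in range(4):
                vv[j] += weight * tv[j]
                nn[j] += weight * tn[j]
        length = math.sqrt(nn[0] ** 2 + nn[1] ** 2 + nn[2] ** 2)
        out_v.append(vv)               # store position
        out_n.append([nn[0] / length, nn[1] / length, nn[2] / length, 0.0])
    return out_v, out_n

# Example: one joint bound at the origin, then translated up 2 units.
IDENTITY = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
WORLD = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 2, 0, 1]]
vs, ns = skin_mesh([[1.0, 0.0, 0.0, 1.0]], [[0.0, 1.0, 0.0, 0.0]],
                   [[(1.0, 0)]], [IDENTITY], [WORLD])
print(vs[0], ns[0])  # [1.0, 2.0, 0.0, 1.0] [0.0, 1.0, 0.0, 0.0]
```

Note that this is the vertex-major organization discussed earlier: the matrix array is built once, and each vertex is finished completely before moving to the next.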

3.2 Skin Creation and Binding

In order to achieve good results with smooth skin, care must be taken in the offline processes of modeling the skin mesh, building the skeleton, and attaching or binding the skin to the skeleton. Usually, this is done in a 3D modeling and animation package using a variety of interactive tools. The smooth skin algorithm is supported by many commercial animation software packages, and character skins suitable for interactive applications can usually be prepared with off-the-shelf tools.

3.2.1 Skin Mesh Creation

Most real time character skins are modeled as a mesh of triangles and the smooth skinning algorithm is used to transform the vertices and normals of the mesh on the fly. It would also be possible to model characters with more complex primitives such as NURBS surfaces, progressive meshes, or subdivision surfaces, as is done in pre-rendered animation [ref]. The skinning algorithm can be used to manipulate the control vertices of these surfaces as well. For simplicity, we will assume that the mesh is built from simple triangles, but many of the techniques presented here apply equally well to the more complex surface types.


The triangle mesh can be built interactively in a modeling tool or can be scanned, procedurally modeled, or created by a combination of means. There are very few restrictions on the construction of the mesh, but one to be aware of is that T-intersections can cause havoc when skinning is applied, and they should be avoided. The vertex at the T-intersection does not necessarily transform to a position on the line segment after skinning and so a visible crack may appear.


[Image: T-intersections]

3.2.2 Skin Binding

Once the mesh is constructed, it can be attached to the underlying skeleton through a process called skin binding. This binding process involves assigning each vertex to one or more bones and setting the appropriate blending weights. Several different approaches exist for automatically binding a skin to an underlying skeleton, but because they are all based on heuristic rules, no technique is going to provide perfect results. Automated algorithms can still provide a reasonable first guess, which can then be refined by hand through an interactive editor.


Three categories of binding algorithms are briefly reviewed here: containment binding, point-to-line mapping, and Delaunay tetrahedralization.


Containment binding algorithms work by comparing each vertex to an approximation volume of each bone. The volume could be cylindrical, ellipsoidal, or some custom shape, and could either be created automatically or adjusted by hand. Comparing the vertex to each volume generates a score, possibly refined by additional geometric rules, and the bone with the best score is selected for binding. Some containment algorithms always go with the single best fitting bone and then use smoothing algorithms to smooth out the weights, while other variations attempt to apply appropriate weights directly to multiple bones based on the containment scores. Many variations on this approach exist, with different approximation volumes and scoring rules. [WEBE00] [MAYA] [3DS]


[Image: containment binding algorithm]


Sun et al. used a point-to-line mapping to attach skins to an underlying skeleton of line segments [SUN99]. A plane was computed for each joint that split the space around the joint into subspaces, which could then be used to relate the vertices in each subspace to the appropriate joint.


[Image: point to line mapping]


Shin and Shin present a technique for skin attachment based on Delaunay tetrahedralization. [SHIN00] The space containing the skin vertices and joint pivot points is broken up into tetrahedrons using an automated computational geometry technique [SHEW00]. The topological properties of this Delaunay tetrahedral mesh allow for a heuristic algorithm to make reasonable guesses as to which vertices should be attached to which joints by traversing the edges of the mesh and finding the closest topological neighbors.


[Image: Delaunay tetrahedralization]

3.2.3 Weight Adjustment

Automated binding algorithms are not perfect but they do provide a good start on assigning vertex to bone attachments and weights. Usually an automated binding algorithm is used and then the weights are adjusted by an artist manually. There are several additional tools that can be helpful in the weight adjustment process.


Rogue Removal

Automated skin binding algorithms can sometimes result in rogue vertices. These are vertices that have been mistakenly assigned to a bone far away from the correct one and can cause noticeable stretching when the character is moved. Weber proposes a rogue removal algorithm designed to detect and correct this condition [WEBE00].


First, each bone determines which of its vertices is most likely to be correctly attached. The vertex closest to the actual bone, or the vertex that received the highest score in the containment binding algorithm, would make a good choice. Any vertices that are contiguously connected to this best vertex are marked as validly attached to that particular bone. If a vertex is not marked as valid for a bone that it's attached to, then that attachment must be changed. Weber recommends a simple correction: find the nearest valid vertex and attach the invalid vertex to that vertex's bone [WEBE00].


[more: present algorithm]

[image: rogue vertices]


Weight Smoothing

[more: present algorithm]


Weight Painting

Some interactive character modeling and animation systems provide a weight painting interface for setting skinning weights. Vertex weights are visualized as colors defined at the vertices and interpolated with Gouraud shading, and an interactive process allows the user to paint and manipulate these colors with a variety of tools. This gives an artist a way to visualize the attachments directly and the ability to discover and fix problems.


[Image: weight painting]


Direct Manipulation of Skin Weights

Mohr, Tokheim, and Gleicher propose a direct manipulation approach to setting skinning weights that allows users to directly manipulate the deformed vertex positions [MOHR02]. They compute and display the subspace of possible deformed vertex positions, and then allow the user to place the vertex within this subspace. The algorithm then computes the proper weights automatically.

3.2.4 Additional Issues and Effects

Limitations of Smooth Skinning

The smooth skinning algorithm is simple and has found widespread use throughout computer animation, but it does suffer from a few shortcomings worth noting. Overall, it performs very well for small joint rotations, but tends to decrease in quality for larger rotations. The averaging process applied to the vertices tends to cause the limb to shrink and lose volume with both bending and twisting motions, and severe bend and twist angles can cause total collapse of the joint. Also, the overall appearance of the bend can tend to be a bit too smooth, and complex shape deformations, such as those found at the human elbow, are difficult to achieve convincingly.


[Image: severe bend & twist]


Bone Links

The smooth skinning algorithm works reasonably well for joints with up to 60 degrees of rotation, but can fail badly as the angle approaches 180 degrees. One method of improving the behavior of joints undergoing large rotations is to use bone links [WEBE00], [WEBE02]. With this method, a joint like the elbow that may undergo a large range of motion is replaced with two or more joints, each making a smaller motion. Bone links are additional joints inserted at the joint so that no individual link rotates more than some adjustable threshold, 60 degrees for example. These additional bones are controlled automatically by the algorithm and can be ignored by higher level code.


[Image: bone links]


The number of links created should be related to the total range of movement of the joint (the difference between its joint limits). Normally, joints in creatures will not bend more than 180 degrees and so three links is often enough. Ultimately, the number and placement of the links can be left up to the user in an interactive process to allow maximum control. Weber presents a variety of tools to assist in creating and controlling the additional bone links [WEBE00][WEBE02].
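The link-count rule described above can be sketched in a few lines. In the Python fragment below, the 60 degree threshold and the even distribution of the joint angle across the links are just the simple choices discussed in the text; the function names are hypothetical.

```python
import math

def num_links(range_degrees, max_per_link=60.0):
    """Number of links needed to cover a joint's range of motion so that
    no single link ever rotates more than max_per_link degrees."""
    return max(1, math.ceil(range_degrees / max_per_link))

def link_angles(angle_degrees, links):
    """Distribute the current joint angle evenly across the links."""
    return [angle_degrees / links] * links

print(num_links(180.0))       # 3 links for a 180 degree range
print(link_angles(150.0, 3))  # [50.0, 50.0, 50.0]
```

With a 180 degree range and three links, each link never exceeds 60 degrees, matching the rule of thumb that three links is often enough.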


Difficult Body Parts

Attaching and weighting a skin properly can be a difficult and time consuming task, even with the help of good tools. Every joint presents its own challenges. In human characters, the shoulder joint is perhaps the most notoriously difficult to skin properly, as it undergoes a large range of motion in all of its DOFs. Often in games and real time applications, the entire shoulder is oversimplified and treated as a single 3-DOF ball joint. This is bad for both skinning and animation purposes as the human shoulder is more complex than that. The addition of a 2-DOF clavicle joint before the 3-DOF shoulder joint is very helpful for both skinning and character expression. Adding a scapula bone can assist the skinning even further, and modeling the correct motion of the scapula as the arm moves around is an important step in making this successful. The level of detail put into the skeleton and skinning will of course vary based on the application, but where more quality is needed, accurate anatomical modeling can help.


[Image: shoulder joint]


The forearm can present its own share of problems as well. A common approach to modeling the forearm is to treat the elbow as a 1-DOF hinge joint and the wrist as a 3-DOF ball joint, making a total of 4 DOFs. This can lead to unrealistic motion of the skin in the forearm, particularly when the wrist is twisted. One approach to improving this behavior is to take the twist DOF out of the wrist and treat it as an individual joint located near the elbow. The forearm is instead modeled as two 1-DOF joints (elbow and wrist twist) and one 2-DOF joint (wrist bending forward/backward and side to side), but still has a total of 4 DOFs. This technique is conceptually closer to the actual behavior of the forearm, but is still a simplification. Bone links applied to the twist joint can also help prevent the wrist from shrinking as it goes through its range of motion.
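One way this setup is commonly exploited (sketched here under assumed names; the linear ramp is just one plausible weighting scheme, not a prescribed one) is to blend each forearm vertex between the elbow bone and the twist joint according to how far down the forearm it sits, so the twist is distributed gradually rather than concentrated at the wrist:

```python
def forearm_twist_weight(t):
    """Blend weight for the wrist-twist joint at parameter t,
    where t = 0 at the elbow and t = 1 at the wrist.

    Vertices near the elbow barely twist, while vertices near the
    wrist follow the twist joint almost completely, so the skin
    twists smoothly along the forearm instead of pinching.
    """
    return min(max(t, 0.0), 1.0)

# Each vertex splits its weight between the elbow bone and the
# twist joint: w_twist = forearm_twist_weight(t), w_elbow = 1 - w_twist.
weights = [forearm_twist_weight(t) for t in (0.0, 0.25, 0.5, 1.0)]
print(weights)  # -> [0.0, 0.25, 0.5, 1.0]
```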


[Image: forearm skinning]


The study of human and animal anatomy is invaluable to the proper setup of skeletons and skinning for animated characters. Programmers and artists interested in character animation should always have a good anatomy reference nearby. For more information on the human skeleton, see [CALA93], [FEHE96], [HBD].


Muscles and Other Effects

Simulating the deformations caused by muscle action can be an important step in increasing the realism of the skin. One way of doing this is to place additional joints in the skeleton that exist purely to affect the skin. For example, the bulge in a bicep that occurs when a character flexes its elbow can be simulated with a single-axis translational or scale joint parented under the shoulder bone and weighted to the vertices in the bicep region of the skin.
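As an illustration of how such an auxiliary joint might be driven (the linear scale law and the constants are assumptions for the sketch, not values from the text), an expression can map the elbow's flex angle to the scale of the bulge joint:

```python
def bicep_bulge_scale(elbow_angle_deg, max_bulge=0.3, max_flex_deg=150.0):
    """Scale factor for an auxiliary 'bulge' joint weighted to the
    bicep vertices: 1.0 when the arm is straight, ramping up to
    1 + max_bulge at full flex. A linear ramp is used purely for
    illustration; any monotonic curve could be substituted.
    """
    t = min(max(elbow_angle_deg / max_flex_deg, 0.0), 1.0)
    return 1.0 + max_bulge * t

print(bicep_bulge_scale(0.0))    # arm straight -> 1.0
print(bicep_bulge_scale(150.0))  # fully flexed -> 1.3
```

Because the bulge joint is an ordinary skeleton joint driven by an expression, the skinning pipeline itself needs no changes; the bicep vertices simply carry a weight for this extra joint.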


[Image: bicep example]


This same general approach can be used for various soft-tissue parts of a body, including muscles and fat. The expansion of the chest during breathing can be modeled with a scaling joint in the torso, and even the effect of simple clothing can be achieved through clever use of skinning and auxiliary bones. By creating these additional bones and controlling them with automated expressions, the smooth skinning algorithm can remain general purpose and effective for a variety of applications. [Chapter 5] discusses a framework for expressions within the skeleton and rigging system.


[Image: fat, simple cloth, breathing]

3.3 Free Form Deformations

Non-Linear Deformations

Free Form Deformations


3.3.1 Deformation Techniques

Bend and Twist Deformations

Lattice FFDs

Arbitrary Topology FFDs

Axial Deformations


3.3.2 Surface Oriented FFDs (SOFFDs)

Control Mesh

Deformation Equations


3.4 Anatomical Modeling

3.4.1 Anatomy of Skin and Muscle

3.4.2 Anatomical Simulation

3.5 Summary

In this chapter, we introduced the smooth skinning algorithm, in which a digital character's skin mesh is smoothly deformed by the motion of the underlying skeleton. Each vertex of the mesh is attached to one or more joints in the skeleton, each attachment with an associated weight wᵢ. To compute the vertex position in world space after the skeleton is posed, we compute a weighted average of the vertex position as if it had been transformed by each of the attached joints:


            v′ = Σᵢ wᵢ (v · Mᵢ)                                                            (3.5)


where the skinning matrix Mᵢ for each joint can be computed at the beginning of the skinning process using


            Mᵢ = Bᵢ⁻¹ · Wᵢ                                                                 (3.4)


where Bᵢ⁻¹ is the joint's inverse binding matrix, a transformation from skin local space to joint local space, and Wᵢ is the joint's world matrix, transforming from joint local space to world space. Mesh normals can be smoothly skinned using [equation 3.6], treating the normal as a direction so that the translation in Mᵢ is ignored:


            n′ = Σᵢ wᵢ (n · Mᵢ)                                                            (3.6)


or alternatively using [equation 3.8] if the skeleton contains joints with non-rigid transformations such as shears or non-uniform scaling:


            n′ = Σᵢ wᵢ (n · (Mᵢ⁻¹)ᵀ)                                                       (3.8)


In either case, the normal should be re-normalized after blending using:


            n″ = n′ / |n′|                                                                 (3.7)
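The smooth skinning computation summarized above can be collected into a short runnable sketch. This minimal Python version uses the row-vector convention (v′ = v · M, with translation in the bottom row); the helper names and matrix layout are choices of the sketch, not prescribed by the algorithm:

```python
def mat_mul(A, B):
    """4x4 matrix product A . B (row-major lists of lists).

    Used to form each skinning matrix M_i = B_i^-1 . W_i (equation 3.4).
    """
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform_point(v, M):
    """Row-vector transform v' = v . M, with v treated as (x, y, z, 1)."""
    row = [v[0], v[1], v[2], 1.0]
    return [sum(row[k] * M[k][j] for k in range(4)) for j in range(3)]

def skin_vertex(v, attachments, skinning_matrices):
    """Smooth skinning of one vertex (equation 3.5).

    attachments: list of (joint_index, weight) pairs; weights sum to 1.
    skinning_matrices: per-joint M_i, already combined as B_i^-1 . W_i.
    """
    out = [0.0, 0.0, 0.0]
    for joint, w in attachments:
        p = transform_point(v, skinning_matrices[joint])
        for axis in range(3):
            out[axis] += w * p[axis]
    return out

# Toy example: blend 50/50 between an identity joint and a joint
# translated 2 units in y; the vertex moves halfway.
I = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
T = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 2.0, 0.0, 1.0]]  # translation lives in the bottom row
print(skin_vertex([1.0, 0.0, 0.0], [(0, 0.5), (1, 0.5)], [I, T]))
# -> [1.0, 1.0, 0.0]
```

Normals can be blended the same way by transforming them as directions (equation 3.6) and renormalizing the result (equation 3.7).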


For skinning to work effectively, care must be taken in the creation of the skin mesh and in binding it to the skeleton. In the binding process, vertices are assigned to bones and the weights are set. This process can involve a variety of automated and interactive tools such as containment binding, point-to-line mapping, rogue removal, weight smoothing, and other techniques.


In addition to controlling skin deformations caused by bone motions, the smooth skinning algorithm can be used for additional effects including muscles, facial expressions, and simple clothing.


The smooth skin algorithm is very simple, fast, and supported by a wide range of tools. It achieves good results at a relatively low cost in terms of both performance and memory. It does, however, suffer from some limitations and, by itself, does not offer the level of control required for high-quality, close-up detail. In [chapter 4], we will see how shape interpolation techniques can improve the skin deformations and also achieve control over facial expressions.