[MUSIC] Animating the face is very different from animating the body. Body animation is based on the skeleton: you move the body by rotating the joints of its skeleton. But the face has only one movable joint, the jaw. Most of the movement that creates facial expressions is due to the movements of small facial muscles.

Now, you could just use the same basic technique. Just as you create virtual bones to represent the real bones in the skeleton, you can create virtual bones in the face. The difference is that these no longer represent any real bones. Instead, they either represent facial muscles or are simply placed wherever is most convenient. This method is called facial bones. It's efficient, because it uses the same technology as body animation, and that often makes it popular with game engines. But it can be harder for animators, because the bones no longer represent real things, so it's harder to understand what they do. It's pretty easy to know what rotating an elbow should do and how to rotate it, but it can be much harder with a facial bone.

So animators commonly use a different technique called blend shapes, or morph targets. A blend shape is a complete copy of the face mesh with a different facial expression. Animations are created by blending these different facial expressions together to form new expressions.

To prepare a character for blend shape animation, you need to create basic facial expressions. What kind of expressions do you use? You can have complete facial expressions, maybe emotional expressions like smiling. But often we want little bits of expression that can be combined together. For example, you might have a blend shape for a raised eyebrow or a closed eyelid, and you can combine several of these to create a full expression. This approach gives you a lot more flexibility. These small expressions are commonly based on the Facial Action Coding System, abbreviated to FACS, which was developed by Paul Ekman.
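The combining idea can be sketched in a few lines of Python. This is a hypothetical illustration, not any particular engine's API: the tiny three-vertex "mesh" and the two FACS-style shapes are made-up data, and each shape is stored as per-vertex offsets (deltas) from the neutral face.

```python
# Hypothetical minimal sketch: a "mesh" is just a list of (x, y, z) vertices.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

# Each blend shape stored as per-vertex deltas from the neutral face
# (made-up numbers for two small FACS-style shapes).
deltas = {
    "browRaise":   [(0.0, 0.2, 0.0), (0.0, 0.0, 0.0), (0.0, 0.1, 0.0)],
    "eyelidClose": [(0.0, 0.0, 0.0), (0.0, -0.1, 0.0), (0.0, 0.0, 0.0)],
}

def blend(neutral, deltas, weights):
    """Combine blend shapes: each vertex = neutral + sum of weighted deltas."""
    result = []
    for i, (x, y, z) in enumerate(neutral):
        for name, w in weights.items():
            dx, dy, dz = deltas[name][i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        result.append((x, y, z))
    return result

# A full brow raise combined with a half-closed eyelid.
expr = blend(neutral, deltas, {"browRaise": 1.0, "eyelidClose": 0.5})
```

Because each shape only contributes offsets where it moves the face, small shapes like these add together cleanly into one combined expression.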
FACS is a set of minimal facial movements we can make, called action units. These are really good bases for creating blend shapes that can be easily combined. Another common set of blend shapes is used for lip sync: creating mouth movements for speech. For this, you need blend shapes corresponding to the mouth shapes we make when we produce certain sounds, like ooo or aaa. These shapes are called visemes, because they are the visual version of the basic sounds, which are called phonemes. A well-rigged character will typically come with a large set of blend shapes corresponding to all of these expressions, so you have a lot of freedom for animation.

You animate with blend shapes by setting keyframes for the weights of each blend shape. The weight is how much of the blend shape is included in the character's expression, from 0 to 100%. For full expressions, you might just want to turn a shape on or off, 0 or 100%. But with action units, you can create complex expressions by combining different blend shapes with different weights.

Keyframe animation is a good way of creating facial animation, but you can also use motion capture. Facial motion capture works differently from body motion capture. One of the big problems with body mocap is occlusion: one part of the body can hide another part. Since the face is almost flat, this doesn't happen. Another benefit of the face is that it has a lot of easy-to-recognize features, like the eyes, nose, and mouth. That means it's easier to use markerless mocap, recording the movement from a single camera without having to put marker points on the face. This can make facial motion capture a lot cheaper than body mocap, so it can be within the reach of small independent developers.

So there are a lot of options for facial animation. Creating very realistic animation is a lot of work, but you can still create basic movements, which will add a lot of expression to your characters. [MUSIC]
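The keyframes-on-weights idea described above can be sketched like this. It's a hypothetical example, not any tool's API: `sample_weight` and the keyframe data are made up, and it uses simple linear interpolation between keys, where real animation tools typically also offer spline curves.

```python
# Hypothetical sketch: animating one blend shape weight with keyframes.
# Keyframes map time (seconds) to a weight in 0..1; between keys we
# interpolate linearly.

def sample_weight(keys, t):
    """keys: time-sorted list of (time, weight). Return the weight at time t."""
    if t <= keys[0][0]:
        return keys[0][1]          # before the first key: hold its value
    if t >= keys[-1][0]:
        return keys[-1][1]         # after the last key: hold its value
    for (t0, w0), (t1, w1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return w0 + u * (w1 - w0)

# "browRaise" ramps up over half a second, holds, then relaxes.
brow_keys = [(0.0, 0.0), (0.5, 1.0), (1.0, 1.0), (1.5, 0.0)]
print(sample_weight(brow_keys, 0.25))  # halfway through the ramp -> 0.5
```

At each frame, the animation system samples a curve like this for every blend shape and feeds the resulting weights into the blend, which is how a handful of keyframes turns into continuous facial movement.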