Expressions: Happy, sad, angry, laugh, curious, scared, confused, embarrassed, excited, stubborn, bemused, guilty, hopeful, nervous, confident, disgusted, bored, tired, compassionate, grief, irritated, shocked, WTF?, my bad…
Expressions: Happy, sad, angry, laugh, curious, scared, confused, embarrassed, excited, stubborn, bemused and guilty.
● Root, neck, head, eyes, mouth
● Basic cumulative rotations
● Basic skinning (no additional deformers or influence objects)
● Static hair/fur
● Ear rotation on head control
Goal: Face UI exercise demonstration
● Adjustable eye rotation skin influence
● Independent eye rotation controls
● Iris expansion/contraction on eye controls
● Independent eyebrow translation controls
● Independent open/closed eyes for upper and lower eyelids
● Independent translation eyelid controls
● Puffed cheeks
● Outer and inner cheek controls
● Nose controls
● Jaw rotation control
● Chin control
● Lower lip control
● Mouth control
Cartoon rigging inspired by cgAnt applied to a photorealistic character
● Additional jaw rotation control
● Corner position controls
● Independent lip position controls
● Lips deformation compensation
● Mouth corners trigger sculpted deformations
● U/O mouth deformations
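Corner-triggered correctives like these are usually wired with set-driven keys, and the mapping itself is simple. A minimal sketch in plain Python; the control range and the shape names "smileFix"/"frownFix" are illustrative assumptions, not taken from the actual rig:

```python
# Hypothetical sketch: a mouth-corner control drives sculpted correctives.
# The control value is assumed to run from -1.0 (corner down) to +1.0 (up);
# the shape names are illustrative, not from the actual rig.

def corner_driven_weights(corner):
    """Map a corner control value to corrective blend-shape weights."""
    corner = max(-1.0, min(1.0, corner))   # clamp to the control's range
    return {
        "smileFix": max(0.0, corner),      # fades in as the corner rises
        "frownFix": max(0.0, -corner),     # fades in as the corner drops
    }

print(corner_driven_weights(0.5))   # {'smileFix': 0.5, 'frownFix': 0.0}
```

In Maya the same curve would typically live on a set-driven key or a remapValue node rather than in code.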
Fictional portrait made using a 3D pipeline:
- Micropolygonal modeling, retopology, UVW layout, sculpting relief and exporting it as normal and displacement maps.
- Rigging to close the mouth and open the eyes.
- Texturing using layers for specific render engine shaders: diffuse, roughness, subsurface scattering, transmission.
- Rendering and compositing using AOVs: diffuse, specular and SSS passes (split into direct and indirect lighting), plus a Z-depth pass.
Modeling for Character: Both projects in this video require high-polygon subdivisions and sculpting, although only the first one needs a thorough retopology, with working edge loops for anatomically correct deformations. The second character is a robot made of metal, so it doesn't need any. I am resolving the silhouettes with displacement maps, which proved more efficient than using subdivisions, and the relief detail with normal maps.
Texturing for Character: Both projects need a photo-realistic render for video, which I am providing using PBR textures. The second project doesn't need any subsurface scattering because the character is made of metal.
Rigging & Skinning for Character: Both projects require functionality such as switches between forward and inverse kinematics, plus some skinning. The first project is several times more complex than the second. I skin the first character using Maya Muscle and the second using nCloth.
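The inverse-kinematics half of such an FK/IK switch usually comes down to an analytic two-bone solve via the law of cosines. A minimal planar sketch, independent of Maya; the bone lengths and angle conventions here are illustrative:

```python
import math

def two_bone_ik(tx, ty, l1=1.0, l2=1.0):
    """Solve shoulder and elbow angles (radians) for a planar two-bone chain
    whose end effector should reach the target (tx, ty)."""
    d = min(math.hypot(tx, ty), l1 + l2 - 1e-9)  # clamp unreachable targets
    # Interior elbow angle from the law of cosines, converted to bend-from-straight.
    cos_int = (l1 * l1 + l2 * l2 - d * d) / (2.0 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_int)))
    # Shoulder: aim at the target, then subtract the offset caused by the bend.
    cos_off = (l1 * l1 + d * d - l2 * l2) / (2.0 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_off)))
    return shoulder, elbow

# A target straight ahead at full reach gives a straight arm (both angles ~0):
s, e = two_bone_ik(2.0, 0.0)
```

The FK side of the switch is just direct keyed rotations; the switch attribute blends between the two results.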
In the first example (facades), my employer wanted to make a real-time version of Barcelona's famous Barri Gòtic. We needed to model every visible building using at most 700 triangles each. I am building them into a real-time scene and animating a camera, using low-polygon modeling techniques, i.e. no subdivisions, with curved surfaces based only on soft edges.
For the second study, I am using elevations and floor plans, high-polygon subdivisions and sculpting (watch my Modeling and Concept Demo for more details). In both projects, I am using PBR materials for the textures (Texturing Demo) and lighting the scenes with HDRI skydomes and directional lights. I am adding visual effects with nParticles and volumetric lights (watch my Visual Effects Demo): Maya Fluid for mist, paintEffects for trees and flowers, Maya Fur for grooming and grass, animated displacements for water, plus volumetric lights and flares. I am rendering in layers and passes and compositing in The Foundry Nuke (take a look at my Compositing Demo).
I am using the following tools for these specific simulations:
- Molten Metal: polygon-rendered Maya Fluid.
- Smoke and mist: Maya Fluid.
- Sparks and airborne dust: nParticles.
- Flexible cables, dress and wings: nCloth.
- Female character’s hair: nHair.
- Flowers, leaves, pampas grass, trees and grass: paintEffects.
- Grooming and grass: Maya Fur.
- Glows, flares and volumetric lights.
I create custom shaders for most of the particles and fluids, and animate their emitters' surfaces as well as intricate interacting force fields.
Rendering in layers and passes allows me to add elements such as particle reflections later in post-production.
Organic and Hard Surface Modeling examples: Edge loops, anatomically correct deformations. Displacement and normal maps. Low polygon modeling, high polygon subdivision and sculpting.
My general workflow for modeling characters, environments and props consists of:
- Drawing orthographic views and elevations.
- Modeling the big silhouette first, no details.
- Importing into ZBrush and sculpting silhouette details using DynaMesh.
- Retopologizing the mesh.
- Opening UVWs and applying finer details.
- Exporting the retopologized model together with displacement and normal maps.
- In some specific cases, projecting details from the finished sculpt onto the retopologized model.
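At render time, the exported maps reverse the last step: the displacement map pushes each subdivided vertex back out along its normal, while the normal map only perturbs shading. The core displacement operation, sketched in plain Python (assuming a scalar, non-vector displacement map, which is my simplification):

```python
def displace(position, normal, height, scale=1.0):
    """Push a vertex along its (unit) normal by a sampled displacement height."""
    return tuple(p + n * height * scale for p, n in zip(position, normal))

# A vertex at the origin with an up-facing normal and a sampled height of 0.2:
print(displace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.2))  # (0.0, 0.2, 0.0)
```

This is why displacement can recover a silhouette that a normal map cannot: it moves actual geometry instead of only bending the shading normal.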
For the first character, I am using the following functionalities:
- Torso controls with advanced twist, and a stretchy spine using X-axis joint scaling.
- Head constraints that allow rotations to pivot anywhere (neck, torso, shoulders, etc.).
- Neck locks and shoulder constraints, with one control for rotations and another for translations.
- Arm and shoulder, forward and inverse kinematics for rotations and translations.
- Elbow locking, with stretchy forearm centered on lock position.
- Independent hand controls.
- Leg forward and inverse kinematics with knee lock and reverse foot.
- Skinning the body using Maya Muscle, following anatomy and reinforcing edge loops.
- Facial expressions with Maya blendShapes.
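The blendShape deformer in the last bullet is, at its core, a per-vertex linear combination of target deltas over the neutral face. A minimal sketch; the mesh data here is a toy illustration, not the actual rig:

```python
def blend(base, targets, weights):
    """Combine blend-shape targets as weighted deltas from the base mesh.

    base:    list of (x, y, z) neutral vertex positions.
    targets: dict of shape name -> list of (x, y, z) sculpted positions.
    weights: dict of shape name -> weight, typically in [0, 1].
    """
    out = []
    for i, v in enumerate(base):
        delta = [0.0, 0.0, 0.0]
        for name, w in weights.items():
            for k in range(3):
                delta[k] += w * (targets[name][i][k] - v[k])
        out.append(tuple(v[k] + delta[k] for k in range(3)))
    return out

# One toy vertex: a half-weight "smile" moves it halfway to the sculpted target.
print(blend([(0.0, 0.0, 0.0)],
            {"smile": [(1.0, 0.0, 0.0)]},
            {"smile": 0.5}))  # [(0.5, 0.0, 0.0)]
```

Because the deltas add linearly, independently sculpted shapes (brows, lids, mouth) can be layered freely on one face.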
In these examples, you can see how the textures applied to shaders show up again later in compositing, once rendered as passes, allowing a second, more accurate grading pass and interaction with masks and visual effects.
Basing the renders on the concept art, I render in layers using passes (e.g. diffuse, ambient occlusion, reflection, subsurface scattering, etc.) and export my scenes' pixel information in 32-bit EXR format. This dynamic range allows me to composite the scenes in a photo-realistic fashion, and rendering vector passes lets me apply nodes and filters such as motion blur or depth of field in a fraction of the time it would take to render them directly on each frame.
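In compositing, those passes are recombined into the beauty image. A sketch of that step, assuming purely additive AOVs (utility passes such as Z-depth or motion vectors are consumed differently, by filters rather than by summation):

```python
def composite_beauty(passes):
    """Rebuild the beauty image by summing additive AOVs per pixel channel."""
    names = list(passes)
    return [
        tuple(sum(passes[n][i][c] for n in names) for c in range(3))
        for i in range(len(passes[names[0]]))
    ]

# A single pixel split into three additive passes (toy values):
passes = {
    "diffuse":  [(0.30, 0.20, 0.10)],
    "specular": [(0.05, 0.05, 0.05)],
    "sss":      [(0.10, 0.02, 0.01)],
}
beauty = composite_beauty(passes)
print([tuple(round(c, 4) for c in px) for px in beauty])  # [(0.45, 0.27, 0.16)]
```

Grading any one pass (say, dimming the specular) before the sum is what makes per-pass correction in Nuke possible without re-rendering.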
Effective animation and visual effects do not happen out of the blue; they are the result of careful character and environment development, where art and technical directors make decisions before the production phase.
- Narrative script, telling the story before drawing or modeling.
- Color Script, making decisions on value and color before lighting or texturing.
- Storyboard, composition, animation and cameras in a 2D visualization process.
- Animatic, animating storyboard panels in 2D and 3D space to provide templates for animators and compositors.
- Character sheets, including views of the characters from multiple angles, recognizable silhouettes, orthographic views for the modelers, facial expressions for modelers and animators, clothes and gear sheets for prop modelers and visual effects artists, etc.
- Environment sheets, including maps of where characters can move in game levels, defining which actions are possible and where they may take place.
- Concept art, sometimes using paintovers on rough 3D models, showing general looks and fine details of the future production, providing a thorough template for models, textures, light sets and post-production.
In an animation project, you make every major decision during the pre-production phase, before production, in order to keep the project manageable. It is easier (and more efficient) to fix problems in 2D before they happen in 3D.
Character prep and execution. Concept art, modeling, retopology, human rigging, skinning, muscle, hair and cloth simulation, blend shapes sculpting for facial expressions, animation, vegetation simulation, compositing. 1:05 min.