3D Artist, Jan 10

Photoscan and clean up a model for a cinematic

Tips & Tutorials
by Santhosh Koneru

This tutorial will help you approach character modelling and texturing for CG cinematics


This tutorial was written by the amazing Santhosh Koneru and appeared in issue 112 of 3D Artist. Subscribe today and never miss an issue!

Tools used

Agisoft PhotoScan, Mari, Maya, Mudbox, Ornatrix, ZBrush

In this tutorial I will be walking you through how I approach character modelling and texturing for CG cinematics. We will cover photo-scanning techniques for scanning actors with just a single digital camera, using the data in Agisoft PhotoScan to generate the digital version. We then cover the scan cleanup in ZBrush and go through the process of sculpting tertiary details in Mudbox, making good use of the layering system the software offers. After that we will explore the export settings in Mudbox for exporting displacement maps, and how to apply them to a low-resolution mesh in Maya using V-Ray.

Once the model is ready we will use Mari to create highly detailed textures. We will cover pipeline techniques for organising the layers and making efficient use of layer instancing, which lets you edit a layer once and have it update instantly across channels, with no need to edit each channel individually.

For skin shading in V-Ray we will be using the alSurface shader. Here we will discuss subsurface scattering (SSS) and how the subdermal, epidermal and fat layers under the skin can help bring realism to the render. We will use the subdermal map to create the epidermal and fat maps within Maya.

Finally, we will generate hair with Ornatrix. In this process you will learn effective grooming techniques using curves and the interactive comb tools that are provided with Ornatrix. You don't have to be a grooming expert to use Ornatrix, as it's a very straightforward tool to work with.
 

Step 01 – Photoscan the actor


This process helps eliminate hours of work spent trying to accurately sculpt an actor's likeness. For this technique you need a digital camera with good pixel density, and for the shoot you need a location with neutral lighting and no harsh shadows. Based on the location lighting, set the ISO, shutter speed and exposure manually in order to get enough contrast in the images, but not so much that it washes out all of the details. Make sure that you leave the focus on auto. Once everything is set up, start by shooting the actor in a full 360 from a top angle, at eye level and from a lower angle. Once the full head shoot is complete, move in closer to the face and shoot the actor's eyes, nose, lips and ears. More images give Agisoft PhotoScan more data to work with, helping it reconstruct every detail.
 

Step 02 – Generate scan in Agisoft PhotoScan


Import all of the images into Agisoft PhotoScan, then go to the Workflow tab and select Align Photos. Change the Accuracy to Medium if you are using hi-res images and have 90 images or more, and leave the rest at default. Once the point cloud is generated, clean up the unwanted points, leaving only the point cloud of the face for calculating the dense cloud. Open the Build Dense Cloud option and set the Quality to High and Depth Filtering to Moderate. Once the dense cloud is generated, open Build Mesh from the Workflow tab, change the Face Count to High and leave the rest at default.
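
If you find yourself repeating this stage, for example when scanning several actors, the same workflow can be driven from the Professional edition's built-in Python console. The following is only a minimal sketch, assuming a PhotoScan 1.2/1.3-style Python API (later Metashape releases rename some of these calls) and a hypothetical folder of photos, so adjust the path and parameters to match the settings above.

import glob
import PhotoScan

# Hypothetical folder containing the head shoot
photos = glob.glob("D:/scans/actor_head/*.JPG")

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(photos)

# Align Photos: Medium accuracy for 90+ hi-res images, rest at default
chunk.matchPhotos(accuracy=PhotoScan.MediumAccuracy,
                  preselection=PhotoScan.GenericPreselection)
chunk.alignCameras()

# Build Dense Cloud: Quality High, Depth Filtering Moderate
chunk.buildDenseCloud(quality=PhotoScan.HighQuality,
                      filter=PhotoScan.ModerateFiltering)

# Build Mesh: Face Count High, rest at default
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 face_count=PhotoScan.HighFaceCount)

doc.save("actor_head.psz")

Point-cloud cleanup is still easiest to do by hand in the viewport before the dense cloud step, as described above.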
 

Step 03 – Clean up the mesh


Import the mesh generated from Agisoft PhotoScan into ZBrush, then clean up the beard and other irregular surfaces using the TrimDynamic brush. Using a combination of the Clay Tubes, Dam_Standard and Inflate brushes, sculpt any extra details and secondary forms to add photorealism to the sculpt. Next, export the decimated mesh into Maya and retopologise the head, ready for animation, along with UVs. Import the new mesh into ZBrush, then subdivide the mesh and project the details we previously sculpted onto the new mesh.
 

Step 04 – Sculpt tertiary details


This next step can be performed in either ZBrush or Mudbox, depending on which software you prefer for sculpting tertiary details. I opt to use Mudbox for this task because it has a better layering system, which can be very handy in this process. Import the hi-res mesh from ZBrush into Mudbox and rebuild the subdivisions we had in ZBrush via Mesh>Rebuild Subdivisions. Then import the hi-res displacement maps as Stencils. While projecting the stencil detail onto the model, make sure to reduce the intensity of the brush; the displacement level can be adjusted later on the layer itself.
 

Step 05 – Export displacement maps


Before exporting the displacement maps, export the level 1 or 2 subD mesh of the face. Then go to UVs and Maps>Extract Texture Maps>New Operation. Under Target Models select the level we just exported, as we will be baking the displacement to that mesh. For the Source Models option make sure that the highest level is selected. For Method, select Subdivision from the drop-down menu. In Image Properties select the resolution you want the maps to be. Finally, in the Output options open Base File Name – this is where we set the export destination and name the map. Before saving, make sure the format is set to OpenEXR [32 bit Floating Point, RGBA].
 

Step 06 – Plug in displacement maps


Import the low-res mesh into Maya and add a V-Ray Displacement node from Create>V-Ray>Create Single Displacement node. Go to Attributes>V-Ray and check Subdivision, Displacement Control and Subdivision and Displacement Quality. Open the extra V-Ray attributes located at the end of the Attribute Editor list, check Keep Continuity and change the Displacement bounds from Automatic to Explicit. Open the colour chooser for the Min value and change the colour space from HSV to RGB, 0 to 1.0. Change the RGB values to -10, and for the Max value change the RGB values to 10, as this helps with reading the displacement map. Create a File texture node, check the 'Allow negative colors' option in Attributes>V-Ray, change the Filter Type to Off and plug in the displacement map. Since we exported our displacement maps from Mudbox, change the UV Tiling mode to Mudbox. This only has to be done in the case of multiple UDIMs.
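
For repeat setups the same attributes can be added and wired with a few lines of Maya Python. This is only a rough sketch, assuming V-Ray for Maya is loaded: the mesh name, file path and node names are placeholders, and the attribute names used for Keep Continuity and the explicit bounds are assumptions that may differ between V-Ray versions, so verify them against the Attribute Editor labels described above.

import maya.cmds as cmds
import maya.mel as mel

shape = "headShape"  # placeholder: the low-res head mesh shape

# Add the V-Ray attribute groups (Subdivision, Subdivision and
# Displacement Quality, Displacement Control)
for group in ("vray_subdivision", "vray_subquality", "vray_displacement"):
    mel.eval('vray addAttributesFromGroup "{0}" "{1}" 1;'.format(shape, group))

# Assumed attribute names for Keep Continuity and the explicit -10/10 bounds
cmds.setAttr(shape + ".vrayDisplacementKeepContinuity", 1)
cmds.setAttr(shape + ".vrayDisplacementUseBounds", 1)  # Explicit bounds
cmds.setAttr(shape + ".vrayDisplacementMinValue", -10, -10, -10, type="double3")
cmds.setAttr(shape + ".vrayDisplacementMaxValue", 10, 10, 10, type="double3")

# File node for the Mudbox displacement map (placeholder path)
disp_file = cmds.shadingNode("file", asTexture=True, name="head_disp")
cmds.setAttr(disp_file + ".fileTextureName",
             "sourceimages/head_disp.1001.exr", type="string")
cmds.setAttr(disp_file + ".filterType", 0)    # Filter Type: Off
cmds.setAttr(disp_file + ".uvTilingMode", 2)  # 2 = Mudbox-style UV tiling

# VRayDisplacement is a set node: add the mesh and plug in the map
disp_set = cmds.createNode("VRayDisplacement", name="head_vrayDisp")
cmds.sets(shape, add=disp_set)
cmds.connectAttr(disp_file + ".outColor", disp_set + ".displacement")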
 

Step 07 – Skin texturing


For this we could either use the photos we captured for the scan or hi-res textures like those found on texturing.xyz. Import the decimated mesh from ZBrush or Mudbox into Mari, then create a new layer in the WorkChannel and paint the Albedo. Paint up close without downsizing the texture map, so as not to lose the crisp texture detail. Painting the texture while keeping the stamp true to its resolution will help take full advantage of the detail in the texture map. Repeat the same process for painting the subdermal map. Next create two new channels, Diffuse_1 and Subdermal, then open their layer palette and instance (Shift+left click, drag and drop) the Albedo and subdermal layers into their respective channels.
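
Channel housekeeping like this can also be scripted from Mari's Python console. The snippet below is only a small sketch, assuming a Mari 3-style Python API and 4K, 16-bit channels; the channel and layer names match the ones used above, and the layer instancing itself (Shift+drag) is still done in the Layers palette as described.

import mari

geo = mari.geo.current()

# Paintable Albedo layer in the working channel
work = next(c for c in geo.channelList() if c.name() == "WorkChannel")
if not any(l.name() == "Albedo" for l in work.layerList()):
    work.createPaintableLayer("Albedo")

# Create the destination channels if they do not exist yet
existing = [c.name() for c in geo.channelList()]
for name in ("Diffuse_1", "Subdermal"):
    if name not in existing:
        geo.createChannel(name, 4096, 4096, 16)  # width, height, bit depth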
 

Step 08 – Texture and shade in Maya


Assign the alSurface shader to the geo, connect the Albedo to the diffuse and plug the spec map into Reflection-1 through a ramp. The ramp is used to control the intensity of the reflection on the skin. Also set the Reflection-1 distribution to GGX. Now connect the subdermal map to sss1 color, create a Gamma Correct node to push the subdermal map towards a yellow shade, then connect that node into sss3 color. To create the map for the fat layer underneath the skin, create a Remap Color node to enhance the reds and yellows in the map. Then plug the Remap Color and Gamma Correct nodes into a Multiply Divide node and set its Operation to Multiply. You can plug the Remap and Gamma nodes into input 1 or input 2, depending on which gives the better result. Finally, paint the SSS mix map in Mari to control the SSS on the skin.
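
The SSS part of that network can be wired up quickly with Maya Python. The following is a sketch only: the node type name "alSurface" and the attribute names sss1Color and sss3Color are assumptions taken from the UI labels above (your V-Ray build may register the shader under a different name), and the file path is a placeholder.

import maya.cmds as cmds

# Assumed node type name; check how your V-Ray build registers the shader
skin = cmds.shadingNode("alSurface", asShader=True, name="skin_alSurface")

subdermal = cmds.shadingNode("file", asTexture=True, name="subdermal_map")
cmds.setAttr(subdermal + ".fileTextureName",
             "sourceimages/head_subdermal.1001.tif", type="string")

# Gamma Correct pushes the subdermal map towards yellow for the third layer
gamma = cmds.shadingNode("gammaCorrect", asUtility=True, name="subdermal_gamma")
cmds.connectAttr(subdermal + ".outColor", gamma + ".value")

# Remap Color boosts the reds and yellows for the fat layer
remap = cmds.shadingNode("remapColor", asUtility=True, name="fat_remap")
cmds.connectAttr(subdermal + ".outColor", remap + ".color")

# Multiply the two results together to build the fat map
mult = cmds.shadingNode("multiplyDivide", asUtility=True, name="fat_mult")
cmds.setAttr(mult + ".operation", 1)  # 1 = Multiply
cmds.connectAttr(remap + ".outColor", mult + ".input1")
cmds.connectAttr(gamma + ".outValue", mult + ".input2")

# Assumed attribute names, after the sss1/sss3 colour labels above
cmds.connectAttr(subdermal + ".outColor", skin + ".sss1Color")
cmds.connectAttr(gamma + ".outValue", skin + ".sss3Color")

Swap input 1 and input 2 on the Multiply Divide node, or retarget the final connections, to match whichever combination reads best in your render.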
 

Step 09 – Generate hair with Ornatrix


Grooming can be done using the interactive tools that come with Ornatrix, but I found that having custom guide curves helps in getting the desired look faster. So, spend some time drawing the guide curves using the Curves tool. Then group the curves by region: beard, scalp hair, moustache and so on. To generate the hair, select the guide curves, open 'Add hair to selection' and select Hair from Curves. In the Operator stack select GroundStrandsNode, click on Set Distribution Mesh and select the mesh, then click on Ground Strands. Create a distribution map in Mari and plug it into Distribution Multiplier to dictate where and how much you want the hair to grow – the same mask can be used to groom hair length. Then use the other groom operators to give the hair a more realistic look. The advantage of Ornatrix is that it detects UDIMs, so you are no longer restricted to a single UDIM.
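
Keeping the guide groups tidy makes the 'Add hair to selection' step easier to repeat per region. The helper below is optional and entirely hypothetical in its naming: it assumes the guide curves were drawn with a per-region prefix (beard_curve_01, scalp_curve_01 and so on) and simply gathers them into groups in Maya, so adjust the prefixes to match your own scene.

import maya.cmds as cmds

# Hypothetical naming convention: guide curves prefixed per region
for region in ("beard", "scalp", "moustache"):
    curves = cmds.ls(region + "_curve_*", type="transform")
    if curves:
        cmds.group(curves, name=region + "_grp")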