Face Off: a cleanup case study


I finally got to try out a new technique for digital makeup. And while the technique itself isn't conceptually new, it was definitely interesting to reproduce it solely in Blender, working on a real project

(well, a few final steps were done in AE after all, due to lack of time).

So here's the case. This is what I got as input:


Here you can see that, due to the complexity of the practical makeup, some unrealistic skin folds occurred in the places where pieces of prosthetic makeup connect with each other. My job was to clean this mess up while retaining the skin texture and dealing with lighting and shadows moving across the actor's face. On top of that, the shot is pretty long (you can see just a fragment of it here; it's actually 34 seconds long!).

Dealing with that using traditional point or planar tracking could easily become a tedious process with an unsatisfying result and an inflexible setup.

So, skipping pointless experiments, I decided to use good ol' trusty Blender object tracking to track the actor's face, then project the original footage onto proxy geometry and bake the unfolded face texture to an image sequence, which is much easier to clean up… I know, this might sound complicated right now, but let's go step by step.

The first step, camera and object tracking, went smoothly and without any surprises, even with no set measurements or camera data provided from the film set. Blender once again proved itself a worthy production tool in this field. Before moving on to the next step, I made a little test of the head-tracking fidelity 🙂

The next step was to create and set up proxy geometry matching the actor's face as closely as possible, plus a simple armature rig to match the natural face deformations throughout the shot. The base geometry was created with the help of the Manuel Bastioni Human Lab v1.6 addon. Then an armature was added to deform all the "key" areas of the face. Oh, and of course, a proper UV map is a must, since we're going to bake the projected footage onto it.

So the final setup looks like this. The armature is parented to the object that holds the head-tracking data. The proxy face is then parented to the armature and deformed by it. There is also a linked duplicate of the proxy face, precisely in place; it will be used to bake the projected footage.
We use a UVProject modifier on our Face Proxy object, with the solved camera set as the projector. Before baking, we create a new square image and assign it as the active face texture of our Face To Bake object, and we are ready to hit "Bake" with the bake type set to "Textures" and the "Selected to Active" checkbox enabled.
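Conceptually, what the UVProject modifier does is replace each vertex's UV coordinates with its normalized screen position as seen from the projector camera. Here's a minimal pure-Python sketch of that idea with a simplified pinhole camera looking down its local -Z axis (hypothetical helper, not Blender's actual implementation, which uses the camera's full transform and lens settings):

```python
def project_to_uv(vertex, cam_pos, focal=1.0):
    """Project a 3D point to normalized UV coordinates, pinhole-style.

    Assumes an axis-aligned camera at cam_pos looking down -Z.
    """
    # Transform the vertex into camera space.
    x = vertex[0] - cam_pos[0]
    y = vertex[1] - cam_pos[1]
    z = vertex[2] - cam_pos[2]
    if z >= 0:
        raise ValueError("point is behind the camera")
    # Perspective divide: screen offset scales with focal length / depth,
    # centered on (0.5, 0.5) so the image fills the 0..1 UV square.
    u = 0.5 + focal * x / -z
    v = 0.5 + focal * y / -z
    return (u, v)

# A vertex straight ahead of the camera lands at the center of the UV map.
print(project_to_uv((0.0, 0.0, -2.0), (0.0, 0.0, 0.0)))  # → (0.5, 0.5)
```

Because the proxy face has its own sensible UV unwrap, baking this projection "Selected to Active" onto the duplicate effectively resamples the footage from screen space into that unwrapped layout.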
But standard baking in Blender only bakes one image, while we need to bake a sequence of projections, one for each of the 853 frames. Luckily, there is an Animated Render Baker addon bundled with Blender. Just set the frame range and hit the "Animated bake" button.
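What the Animated Render Baker essentially automates is a loop: step the scene to a frame, run a single bake, save the result under a frame-numbered name. A language-agnostic sketch with hypothetical callbacks (inside Blender the callbacks would be the frame-change and bake operators):

```python
def bake_sequence(frame_start, frame_end, set_frame, bake_frame, name_pattern):
    """Bake one image per frame and return the generated file names."""
    names = []
    for frame in range(frame_start, frame_end + 1):
        set_frame(frame)   # in Blender: step the scene to this frame
        bake_frame()       # in Blender: run a single texture bake
        names.append(name_pattern % frame)
    return names

# With our 853-frame shot this yields bake_0001.png … bake_0853.png.
frames = bake_sequence(1, 853, lambda f: None, lambda: None, "bake_%04d.png")
print(len(frames), frames[0], frames[-1])
```

The resulting numbered images load straight back into Blender (or AE) as an image sequence.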

Proxy geometry on top of the footage

And this is how the baked, unfolded projection of the face looks.

A little creepy, isn't it? But since all the facial features are now relatively still throughout the footage, it's much easier to clean up the tricky areas without worrying about perspective and lighting. A few clone strokes in AE and it's done.

Now the cleaned-up texture sequence is ready to be applied back to the geometry. I could have saved it out to another sequence and used it as a diffuse map back in Blender, then rendered it with a shadeless material from the same camera it was projected from, then composited it into the original footage using nodes… but as I said at the beginning, I was running short on time, so for the final step I used AE as well.
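The node setup I skipped would have boiled down to a straight alpha-over: render the shadeless proxy from the solved camera with a transparent background, then lay it over the original plate. A minimal per-pixel sketch of that "over" operation (plain Python, not actual compositor nodes):

```python
def over(fg, bg):
    """Composite a foreground RGBA pixel over a background RGB pixel."""
    r, g, b, a = fg
    # Standard alpha blend: foreground weighted by alpha,
    # background by what's left.
    return tuple(a * f + (1.0 - a) * k for f, k in zip((r, g, b), bg))

# An opaque foreground pixel replaces the plate entirely...
print(over((1.0, 0.0, 0.0, 1.0), (0.2, 0.2, 0.2)))  # → (1.0, 0.0, 0.0)
# ...while alpha 0 (outside the proxy face) leaves the plate untouched.
print(over((1.0, 0.0, 0.0, 0.0), (0.2, 0.2, 0.2)))  # → (0.2, 0.2, 0.2)
```

Since the proxy only covers the face, everything outside its silhouette renders with zero alpha and the original footage shows through unchanged.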

The solved camera and the animated face proxy were exported from Blender to AE, and the geometry was handled with the Element 3D plugin. The cleaned texture sequence was set as a Custom Texture layer and applied to the model. That made for a flexible and fast solution in case of possible revisions.

The whole thing worked like a charm in the end. Yeah, maybe still not as fast as Nuke with FaceTracker, but sure as hell ~8k US dollars cheaper 🙂

Oh, and here's the final result. The difference might be subtle (chin, cheek, forehead, eye), but it's important nonetheless.

Of course, this case is relatively simple. But the technique can be used not only for retouching, but also to add realistic VFX such as open wounds, animated textures, sub-skin effects, etc.

I hope you found it interesting and useful! Share your thoughts, and let me know in the comment section if you're interested in this kind of post.

Happy Blending!

