Natalis is a diploma film created at the Institute of Animation, Visual Effects and Digital Postproduction at the Filmakademie Baden-Württemberg by Daniel Brkovic and Jan-Marcel Kuehn. It tells the story of Ea, half woman, half machine, who walks through a mysterious forest. Upon encountering a cocoon-like plant inside which floats a human embryo, Ea experiences a terrifying vision of world-threatening destruction.
We recently spoke to Brkovic about the short, and exactly what went into the process of its creation.
Natalis is the graduation project of four students at the Filmakademie Baden-Württemberg in Ludwigsburg, Germany. We are a core team of six students and one producer, and we have been working on the project for about one and a half years. The original idea was developed by Felix Mertikat. Although Felix decided to pursue another project for his graduation, we really liked his ideas and he allowed us to use them as a base for Natalis. Since the initial idea was too complex for us to realise, we condensed it and developed the characters so they would still work for this story.
Natalis tells the story of the robot Ea, who is haunted by visions of a devastating future. To avert it, she has to decide upon the life or death of the newborn baby creature, Enki.
Our inspirations were great films like Blade Runner with its humanoid Replicants and Avatar with its great creatures. We were also inspired by various images from a few digital artists. We liked the early designs and sketches of the creatures and the world that were created by Felix. It was a pleasure to work with that base and develop it further.
One of the main goals for us was to learn as much as possible. Everybody, in his own area, tried to create something new – artistically as well as technologically. The result of this process is, for us, an interesting film which will entertain but is certainly thought-provoking as well.
The process began in 2010. We started with rough storyboard sketches, trying to consolidate the initially rather complex story. These sketches were then used to create a first animatic, which was gradually replaced by previz. During this stage we prepared our mo-cap shoot, which we shot within two days at the facilities at Filmakademie. We cleaned the data and then enhanced the previz with our mo-cap takes. As we lit and rendered the shots, the characters were constantly replaced with the newest features (deformation rigs, displacement maps, textures) and finally rendered. All steps were highly parallel, so we were modelling, rigging, animating and lighting at the same time and gradually pushing the shots to their final stage.
At the time none of us had much experience with mo-cap or working with actors, and it was also the actress’s first experience with mo-cap. We all learned together.
The mo-cap shoot took place in the local studio, which is dark and cold; however, our actress, Evi, had to feel like she was in a jungle, so the day before the shoot we went to a local botanical garden to rehearse and give her the right feeling for the role.
We captured on a Vicon stage and the performance was solved in Vicon IQ before handing over the data to Motionbuilder where we did all the re-targeting and motion editing. The final animation was then brought over into Maya where it drove our deformation rigs. Due to the tight Maya/Motionbuilder integration and good pipeline scripting we were able to go seamlessly back and forth between Maya (deformation rigs) and Motionbuilder (mo-cap/animation rigs) to add and update the animation as we went.
We also did facial capture, but separate from the body mo-cap and in a different studio. We collaborated with EPFL in Lausanne, where they were developing a technique called Face/Off: Live Facial Puppetry. It was published as a Siggraph paper in 2009 and was the predecessor of Faceshift. Basically it captures 3D geometry with structured-light scanning in real time and then tries to fit a predefined geometry to those scans. This way we got a consistently animated geometry, which then formed the base for our blendshape animation system.
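At its core, a blendshape system like the one described is just a weighted sum of per-vertex offsets (deltas) applied on top of a neutral mesh. The sketch below is a minimal illustration of that idea with made-up data; it is not the actual Natalis rig, and the target names are hypothetical:

```python
# Minimal blendshape sketch: each target shape stores per-vertex
# offsets (deltas) from the neutral mesh; the animated face is the
# neutral mesh plus the weighted sum of those deltas.

def blend(neutral, targets, weights):
    """neutral: list of (x, y, z) vertices; targets: {name: list of
    per-vertex deltas}; weights: {name: float, typically in [0, 1]}."""
    result = [list(v) for v in neutral]
    for name, deltas in targets.items():
        w = weights.get(name, 0.0)
        if w == 0.0:
            continue
        for i, (dx, dy, dz) in enumerate(deltas):
            result[i][0] += w * dx
            result[i][1] += w * dy
            result[i][2] += w * dz
    return [tuple(v) for v in result]

# A two-vertex "mesh" with a single hypothetical "smile" target
# applied at half strength.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
targets = {"smile": [(0.0, 0.2, 0.0), (0.0, 0.4, 0.0)]}
print(blend(neutral, targets, {"smile": 0.5}))
```

In a production setup the tracked geometry from the scans would drive the per-frame weights; here the weight is simply set by hand.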
Our core software was Autodesk Maya, which was heavily customized by our TDs to maintain a good workflow. This incorporated mel and python scripts for data/scene management as well as C++ plug-ins for specialised deformers and other nodes.
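The data-management side of such a pipeline often comes down to enforcing a naming and versioning convention so artists never resolve file paths by hand. The helper below is a hypothetical sketch of that idea in plain Python; the `<asset>_<step>_v###.ma` convention is invented for illustration and is not the actual Filmakademie pipeline:

```python
import os
import re
import tempfile

# Hypothetical versioned-scene convention: <asset>_<step>_v###.ma
# (assumes asset and step names contain no underscores themselves).
SCENE_RE = re.compile(r"^(?P<asset>\w+)_(?P<step>\w+)_v(?P<ver>\d{3})\.ma$")

def next_version(directory, asset, step):
    """Scan a directory for files matching the convention and return
    the filename of the next version for the given asset/step."""
    latest = 0
    for name in os.listdir(directory):
        m = SCENE_RE.match(name)
        if m and m.group("asset") == asset and m.group("step") == step:
            latest = max(latest, int(m.group("ver")))
    return "%s_%s_v%03d.ma" % (asset, step, latest + 1)

# Demo with an empty temp directory and a few dummy scene files.
tmp = tempfile.mkdtemp()
for f in ("ea_rig_v001.ma", "ea_rig_v002.ma", "ea_model_v001.ma"):
    open(os.path.join(tmp, f), "w").close()
print(next_version(tmp, "ea", "rig"))  # ea_rig_v003.ma
```

In a real Maya pipeline the same logic would sit behind `maya.cmds`-based save/open wrappers so every scene increment lands in the right place automatically.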
For modeling we used various packages: Maya, 3DS Max, Sculptris, 3DCoat, Mudbox and ZBrush. For most of the assets we made sketches in Sculptris or ZBrush, which were retopologised and finally sculpted in Mudbox or ZBrush. Texturing was done almost entirely in Mudbox, with only minor tweaking in Photoshop. This workflow allowed us to create assets in a very artistic way, spending only a little time on the technical aspects.
The mo-cap was captured and solved with Vicon IQ and the animation was done in Maya and Motionbuilder. Rigging and rendering were then done entirely in Maya. Most of the effects were built in Houdini, with a few in Maya. Final compositing was done in Nuke.
The realistic look of the film created challenges in almost every aspect of production. This started with the level of detail required for the hundreds of assets and the very limited time allotted to each of them. Tom (Ferstl, rigging TD and animation pipeline developer) had to deliver a fully fledged, anatomically correct muscle system with skin sliding, which featured heavily in the film and required a lot of development time and anatomical study beforehand. The rendering process also turned out to be quite challenging: Karsten (Wagenknecht, shading TD and rendering pipeline developer) had to do a lot of testing and tweaking until we were able to render our images while maintaining acceptable render times.
Our main rendering package was V-Ray, which is always a good choice for photorealistic lighting and shading. For the shading we used the standard V-Ray material and its subsurface scattering material. V-Ray achieves convincing shading simply by connecting the texture maps; in many cases we only had to tweak a few shader settings.
In terms of lighting, we used area lights exclusively, plus a dome light with HDRI images, to light the characters. For the forest shots it was impossible to render the characters and the location together in one pass, so we split the characters and the location into separate render passes.
Because of the render times we were not able to use global illumination for the forest shots either. However, for the desert shots we rendered the characters as well as the set in one pass so we were able to use raytracing for primary and secondary bounces during GI calculation. This helped significantly to create a realistic look and feel.
To fake the GI and integrate the characters into the forest, we rendered quite a few additional image passes, such as diffuse and specular passes for every light and ID passes for every material. These additional passes helped to create the final look in compositing.
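Rebuilding a beauty image from per-light diffuse and specular passes is, at its core, a per-pixel, per-channel sum of the contributions. The toy sketch below illustrates that with made-up one-pixel "images"; in a real Nuke comp each pass would also be graded individually before the merge, which is exactly what gives compositing its control over the look:

```python
# Toy recombination of per-light render passes in compositing:
# beauty is approximately the sum over all lights of their
# diffuse and specular contributions.

def add_images(a, b):
    """Per-pixel, per-channel addition of two equally sized images
    (an image here is a list of (r, g, b) tuples)."""
    return [tuple(x + y for x, y in zip(pa, pb)) for pa, pb in zip(a, b)]

def rebuild_beauty(passes):
    """passes: list of images, e.g. diffuse/specular per light."""
    beauty = passes[0]
    for p in passes[1:]:
        beauty = add_images(beauty, p)
    return beauty

# One-pixel example with hypothetical key and fill light passes.
key_diff  = [(0.30, 0.20, 0.10)]
key_spec  = [(0.05, 0.05, 0.05)]
fill_diff = [(0.10, 0.10, 0.10)]
print(rebuild_beauty([key_diff, key_spec, fill_diff]))
```

The ID passes mentioned above serve a different purpose: they act as per-material mattes, so individual materials can be isolated and graded within this same merge tree.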
One of the most important aspects for the success of the project was our structured pipeline, developed by Karsten and Tom. This ensured we were able to focus on the artistic side without having to worry about file management and technical aspects.
For people using a similar approach, where everything happens in parallel, it can be a bit daunting when initially there doesn't seem to be much progress. But don't be discouraged; the reward is that things go really fast at the end of the project, as a lot can be finished in a very short time whilst really pushing the quality.
If you want to read more interviews just like this, then get your hands on issue 51 of 3D Artist, out now. You can buy issue 51 through the Imagine Shop or digitally on a broad range of devices. Alternatively, why not make big savings and buy yourself a year’s subscription to the mag?