Stages of How To Make Your Own VFX-Heavy Music Video


A goal of many visual effects artists is to be able to use skills learned from working on complex feature films, TV shows or video games to make their own content, say, a short film or perhaps even a feature. Certainly, the path to making any content can be tough, particularly because of time and budget. However, with tools becoming more accessible, and projects able to be realized across borders, there is now more and more scope for realizing personal projects.

That’s one reason why Vancouver-based Denys Shchukin, who has worked at many studios including Framestore and Image Engine, decided to take on an ambitious personal project. He scouted around for a suitable match for his desire to create imagery, and came across the song ‘Wasted’ by Ukrainian singer Ivan Dorn. Shchukin would ultimately successfully pitch the singer on a music video featuring a fully CG human character, who performed a number of magic-power-like moves.

Here’s how Shchukin came to direct, produce and write the music video, and contribute heavily to the VFX, along with other key collaborators. Step by step, from initial concept to the final frames.

  1. The origins of the idea

I decided that I wanted to create a ‘masterpiece’ on my own. Originally I thought of making some kind of FX/CG mini reel/story, but then it very quickly transformed into the idea of making a full CG music video. Funnily enough, my first few years in the VFX industry were in music video production/post-production.

An evolution of Dorn’s character in the music video.

Usually, artists might make a project and then look for suitable music. I decided to do it in reverse order. Because of my past experience in dancing and choreography, I decided to find a proper music composition first, and then have the music itself inspire me and trigger visuals and rhythm in my imagination.

At the time, the new album from Ivan Dorn (one of my favourite musicians), OTD, was released. Three tracks really stuck in my head – ‘Wasted’, ‘Collaba’, and ‘Such a Bad Surprise’.

Cloth sims would become a major part of realizing the character.

After I found contacts for Ivan, we arranged a chat and I expressed to him how much I liked his song, what ideas I had in mind, and that it would be full CG. Ivan liked my style and creative way of thinking and shared my excitement about this project, so we decided to start the ball rolling. A few days later I presented the script for the music video. Ivan accepted it without any comments or changes and we moved on to imagery.

  2. Planning it out
    Considering that the whole project was supposed to be 3D from the start, I decided to skip traditional concepts, sketches and storyboards. Instead I made stills of all 140 shots in 3D.
A final still from the music video.

Later on we edited it to the music and it was completely clear which direction we were going and how it would look in terms of editing, rhythm, camera angles, layout, and logistics inside the scene. After the base concept was established, we started to do some animation, render and shading tests, and project scheduling and, on the side, we started to assemble a ‘modular’ team.

  3. Motion capture
    Valentine Ushakova from Digital Cinema Ukraine assisted us with the mocap session. They used a T160 Vicon set-up and 52 markers for the video. Ivan did every move himself and that made the motion capture much more lifelike.
A still from the mocap session.

Unfortunately I wasn’t able to attend the mocap—it was happening on the other side of the globe—but I was controlling it remotely with video calls and constant updates from the b-roll camera. We booked two shifts of 8 hours each and I was online the whole time. At the end of the day I was reviewing all of it and making a wish list of comments and improvements for the second day.

I was very lucky to have our project animation supervisor Slava Lisovsky at this mocap session. We were speaking 24/7 to make sure that we had an identical idea of what we were trying to achieve and how it was going to look.

Ivan Dorn covered in tracking dots for a facial capture session.

We had planned the whole sequence before the mocap shoot and had two types of animation clips at the end of the day: story-driven animation and a dancing library. Crazy shots like flying, falling and underwater swimming were animated in a classic keyframe way—frame by frame.

  4. Digital human build
    The first stage of the build was done using hundreds of reference photos of Ivan in a T-pose, other essential poses, and detailed close-up photos of Ivan’s forehead, ears, arms, hands and so on.
Previs head modeling.

We also used a separate big set of static photos of Ivan in different poses showing different emotions, and then two video recordings of Ivan singing without markers, and performing emotions and expressions without markers. Then we did the same thing again with markers on.

The build approach was more or less traditional, except that we had no option to use a full-body 3D scanner. We did do a very rough scan using photogrammetry, and then finalized the model in ZBrush.

Dorn’s CG model takes shape.

The head model for previs purposes was made by Andrey Ryzhov. The final versions of the head and body were made by Alexey Rodenko from KB Studio. Facial rigging and animation was done by Slava Lisovsky.

  5. Effects sims
    For the CG hair and clothes, Igor Velichko did the tailoring in Marvelous Designer, and then we used two versions of the clothes: one for rendering and one for simulation (with lower resolution and without thickness). For the simulation we used Vellum in Houdini.
Dorn’s hair and cloth sims set-up.

After that, the motion was transferred to a denser high-res geo with thickness for rendering, using a custom asset. Grooming was created in Houdini using the Hair and Fur system. Vellum was also used to simulate hair. We used Python for automation since we had a lot of shots. This made it possible to simulate each shot in one go just by picking a shot number.
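The article only says Python was used to automate per-shot simulation; a minimal sketch of that idea might look like the following, where the shot table, file names and node paths are all hypothetical stand-ins rather than the project’s actual setup:

```python
# Hypothetical per-shot sim automation: look up a shot's frame range
# and build a Houdini batch ("hython") command that cooks its cache ROP.
# SHOT_RANGES, the .hip name and the ROP path are illustrative only.
SHOT_RANGES = {
    101: (1001, 1120),  # shot number -> (start frame, end frame)
    102: (1001, 1085),
}

def build_sim_command(shot, hip="wasted_cloth.hip", rop="/out/vellum_cache"):
    """Return a hython command list that simulates one shot's Vellum cache."""
    start, end = SHOT_RANGES[shot]
    return [
        "hython", "-c",
        f"import hou; hou.hipFile.load('{hip}'); "
        f"hou.node('{rop}').render(frame_range=({start}, {end}))",
    ]

cmd = build_sim_command(101)
```

With a wrapper like this, “simulate shot 101” becomes a single farm submission instead of a hand-configured Houdini session per shot.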

For the more magical effects such as the power rays, power hits and teleportations, we tried to make them attractive and engaging not through the complexity and quantity of the elements, but rather through light/colour/shape/motion integration into the shots.

Lightning effects were a big part of the FX sims.

I had had experience with fire simulations before, so I decided to do those effects myself. Volumetrics were always the trickiest part. I used a parallel simulation technique (not distributed). The difference was that our final fire was split into 22 independent containers which could run on the farm in parallel or in sequential order. The benefits of that approach were that each container simulation didn’t take much time, and you could then adjust separate parts individually.
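The core of that approach is partitioning one big simulation region into many small, independently cookable boxes. As a rough sketch, under the assumption of a simple equal split along one axis (the project’s actual splitting scheme is not described):

```python
# Illustrative only: split one large axis-aligned sim region into n
# independent containers that can each become a separate farm job.
def split_region(bounds, n):
    """Split (xmin, xmax, ymin, ymax, zmin, zmax) into n equal slabs along X.

    The returned sub-regions tile the input exactly, so their sims can
    run in parallel or sequentially and be adjusted individually.
    """
    xmin, xmax, ymin, ymax, zmin, zmax = bounds
    width = (xmax - xmin) / n
    return [
        (xmin + i * width, xmin + (i + 1) * width, ymin, ymax, zmin, zmax)
        for i in range(n)
    ]

containers = split_region((-11.0, 11.0, 0.0, 6.0, -2.0, 2.0), 22)
```

The payoff is exactly what the article describes: each container cooks quickly, and a single bad container can be re-simulated without touching the other 21.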

  6. Rendering
    We chose Redshift for rendering. One of the reasons is that it has very good integration with Houdini. I thought, if I wanted to keep my micro-pipeline as small as possible, then I needed to be able to do the job of multiple departments in the same DCC, so Houdini for me was a no-brainer. The Redshift integration is close to perfect. It meant I could simply work in a traditional way with a procedural node-based approach.
Rendering was handled in Redshift.

Another big and important factor that led me towards Redshift was its scalability. Redshift is a GPU renderer, and in order to multiply render power, I could simply buy more video cards, plug them into my system, and add 100-plus per cent render speed with each additional GPU. With CPU-based renderers, I would need to assemble more workstations, which is much more expensive.

There are great camera settings in Redshift, with custom DOF, bokeh, lens distortion and photographic exposure settings with PostFX. Later on, I polished my settings and was able to render frames at final quality in about 30 to 45 minutes.

  7. No compositing
    The whole project ran in a compact pipeline with a limited number of artists. Rendering in AOVs, managing versions, dealing with multiple interdependent layers—these are fine and proper ways to go on big feature film projects, especially when you’re mixing practical 2D plates with numerous 3D layers from multiple departments.
A final electric shot.

In our case, however, which had to be fully 3D/CG, all the shots were delivered from the same DCC with the same render engine. Considering the speed of the Redshift renderer, and the dynamic nature of changes, it was easier to re-render a shot or sequence directly to beauty without an extra step.

Another reason for rendering directly to beauty: I was using Redshift PostFX, which you obviously can apply later in the post process, but it was much easier to be able to see the final picture right in the viewport.

  8. Working modularly
    Overall, the modular or distributed approach worked just fine for me, considering all the pros and cons. In the initial stage, while we were doing head and body modeling, layout, etc., there was no need for 24/7 communication. A few emails, group chats and group video calls a few times a week were just enough.
A screenshot from the animation stage of production.

Later on after we began to do intense blocking of all 142 shots with animation supervisor Slava Lisovsky, we jumped into Shotgun and managed the center a part of the project over there. Then after animation was nearly carried out—when the project was already in the midst of shading, lighting and lookdev—I used to be taking good care of this stage on my own, so no a lot external communication was needed.

I used an external render farm called ForRender, with Ruslan Imanov and Roman Rudiuk, and they provided 24/7 support over Skype. There was also the ability to log into the farm machine to see what was happening or to fix it myself.

Dorn’s character in CG form.

And at the very end of the project, the Abracadabra FX team joined us for the last few weeks to help with FX creation, generation and population; they had their own project management system. I was given my own personal credentials in it, and all versions were sent to me in a Telegram bot channel as well. Which is very handy; you wake up in the morning and you have a ready-to-go playlist right on your phone.
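The article doesn’t describe how that Telegram bot was wired up, but the Bot API makes this kind of version notification very light. A minimal sketch, assuming a hypothetical message format and helper names (only the `sendMessage` endpoint shape is the real API):

```python
# Hypothetical version-notification helper using the Telegram Bot API's
# sendMessage method. Channel name, message format and helper names are
# made up for illustration.
import json
from urllib import request

API = "https://api.telegram.org/bot{token}/sendMessage"

def version_message(shot, version, url):
    """Format a review notification for one delivered version."""
    return f"Shot {shot} v{version:03d} ready for review: {url}"

def notify(token, chat_id, text, send=True):
    """POST the message to a chat/channel; send=False just builds the payload."""
    payload = {"chat_id": chat_id, "text": text}
    if not send:
        return payload
    req = request.Request(
        API.format(token=token),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

payload = notify(
    "TOKEN", "@wasted_dailies",
    version_message("sh142", 7, "https://example.com/sh142_v007.mov"),
    send=False,
)
```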

Considering that I needed to ramp up and down extremely quickly, the modular/distributed approach was the only way to go. Shotgun is also perfect for this kind of collaboration, when artists and management are not necessarily in the same place.

Director / Producer / Scriptwriter – Denys Shchukin
VFX / CG / FX Supervisor – Denys Shchukin
Animation Supervisor – Slava Lisovsky
Camera Animation – Slava Lisovsky
Digital Grooming and Tailoring – Igor Velichko
Cloth and Hair Simulation – Igor Velichko

Head and Body created by “KB Studio”:
Kate Bekasova
Alexey Rodenko
Render Farm “FORRENDER”:
Ruslan Imanov
Roman Rudiuk

Motion Capture “Digital Cinema Ukraine”
Valentine Ushakova

Petr Kuznetsov
Nikolay Prudov
Roman Bazhura
Ilya Lindberg
Andrey Shvetsov
Leonid Panov

Additional FX: Alexander Kratinov

Previz Head Modeling: Andrey Ryzhov

Additional CG-Artists:
Oleksandr Nepomniashchy
Denys Leontyev