With its spacious, cinematically arresting environments, detailed characters and quirky story, Falling Sky quickly became one of NFTS’s most ambitious projects of recent years. Building on their previous experience with facial-capture and tracking technology, the Falling Sky team went a step further, securing and pulling off a motion-capture shoot for this ultra-low-budget student game. In this interview, the project’s producer, Nikolay Savov, details the process, hurdles and lessons of this outstanding achievement in the history of NFTS games.
Q: Why did you choose to do motion capture on this project in particular?
Both of us wanted to make a narrative-led game, which lends itself naturally to technology like MoCap. It was an opportunity to learn a process heavily used on big productions and a chance to experiment for our own game.
A big driver for this was Jonathan himself, who is always researching the most innovative and advanced ways of bringing his games to life. In our first collaboration, ReTreat, we used facial-capture technology and saw its benefit in animation: it made the characters’ facial expressions much more realistic and sped up the animation process itself. With Falling Sky, Jonathan and I decided to take this a step further and try motion capture.
Q: How did you set out to do MoCap on a student project?
The biggest challenge for us was affording it. It is a technology heavily used by big studio projects, which have the resources and manpower to process the data. Renting a sizeable motion-capture facility can easily exceed 2-3k per day even at a student discount rate, which gives an idea of how much big studios spend on their hangars full of cameras… This meant we needed to find the right size of studio space, and number of cameras, for the resources we had. For those new to the technology: the more cameras a motion-capture studio has within a space, the more accurate the capture is and the freer you are to use different ranges of motion. Bigger companies have facilities with more cameras, but that also means a higher rate.
Understanding our limitations well, we started by asking the big industry leaders directly. For anyone doing a low-budget student project, it is always better to try your luck with the top companies and work downwards, as chances are they can help out in ways smaller companies cannot afford to. Our first port of call was Centroid3D, because of their good relationship with the school. A piece of advice I would give anyone entering this industry is to leverage the personal connections you have access to. It goes much further than coming into a conversation as an outsider.
Q: How did you go about convincing them? What was the value for them in taking on this project?
Centroid3D were too busy at first and could not offer us their facilities. This was well within our expectations, and I had gotten used to hearing “No”, so we did not give up. A viable backup approach, we thought, was to search for what are known as overflow facilities connected to Centroid3D: these take on the projects that Centroid3D can’t accommodate at the main space and redirect there. Luckily, we found one such facility at the neighbouring Amersham and Wycombe College [AWC] (always try to keep your shoots close to your base!). We found great support in Simon Clayden and Neil Bedecker, who managed the facilities, and we built a relationship which eventually led to our success with the motion-capture shoot.
Q: Tell us about the initial steps and the difficulties you faced!
NFTS was very supportive of this relationship, which made things smoother. Previous projects had leveraged these connections and used the space at AWC, but nobody had done so in a while.
Truth is, no other project at the school has had this amount of motion capture shot and processed. This is not only because of the difficulty of securing the space, but also because of the manpower needed to process the data. A common pitfall is thinking that once you’ve done the shoot, you’ve made it… that’s what we originally thought, as well. What we didn’t foresee is that somebody would have to process the data and make it usable within the game engine. This meant taking the captured data, which was essentially a cloud of dots in 3D space, fixing any anomalies, and making it Maya-compatible. After Maya, Jonathan finalised the models and imported them into Unreal Engine.
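For readers curious what “fixing data anomalies” can involve in practice: a common case is a marker dropping out for a few frames, leaving gaps in its recorded 3D positions. The sketch below is a minimal, hypothetical illustration of filling such gaps by linear interpolation; the function name and data layout are invented for this example and are not Centroid3D’s or the team’s actual pipeline.

```python
# Hypothetical sketch: a lost marker leaves None entries in its frame list.
# We fill each gap by interpolating linearly between the surrounding
# good frames. Illustrative only, not a real MoCap toolchain.

def fill_marker_gaps(frames):
    """frames: list of (x, y, z) tuples, or None where the marker was lost."""
    filled = list(frames)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            start = i - 1                       # last good frame before the gap
            end = i
            while end < len(filled) and filled[end] is None:
                end += 1                        # first good frame after the gap
            if start >= 0 and end < len(filled):
                a, b = filled[start], filled[end]
                span = end - start
                for j in range(i, end):         # interpolate each missing frame
                    t = (j - start) / span
                    filled[j] = tuple(a[k] + t * (b[k] - a[k]) for k in range(3))
            i = end
        else:
            i += 1
    return filled

frames = [(0.0, 0.0, 0.0), None, None, (3.0, 3.0, 3.0)]
print(fill_marker_gaps(frames))
```

Real cleanup tools do far more (marker relabelling, filtering, rigid-body constraints), but the principle of reconstructing plausible motion across dropouts is the same.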
Q: How long did the motion-capture take?
Because this was the first time Jonathan and I had done this, and we were aware of our gaps in knowledge, we spent approximately 2-3 weeks making regular visits to AWC. I needed to become sufficiently acquainted with the logistical requirements of the shoot, and Jonathan needed to understand the software and its workings. We recorded several test motions with the characters and ran a pipeline test to ensure we could confidently go through the whole process. We asked friends and colleagues to help us out. At Amersham it was the patience and guidance of Neil Bedecker and Simon Clayden, who taught us how to do everything, that made this possible.
Q: What was the biggest thing you’ve learned during the test that saved a lot of time for you later down the line?
Nothing in particular about saving time; it’s more that the test gave us perspective.
It’s one thing to make a test with a simple walk cycle (we recorded one person walking) and a completely different thing to record a four-minute scene with 2-3 actors performing.
The excitement of our tests being successful, and knowing that the pipeline worked, blinded us a bit to how big our task would be in processing the data after the shoot.
Q: How did you choose the actors for the project?
When you are doing motion capture, you are not just recording voice. We had to treat the MoCap shoot as a fiction shoot, not as a voiceover recording session like our previous projects required. That is a crucial difference, and it dictated where we got our actors from. Normally for voice-over you go to agencies that represent talent with those capabilities, but approaching it as a fiction shoot meant we could search for our actors via casting systems like Spotlight or talent management agencies.
This was the case for Josh, Lucien and Christy, but not for Stephan, with whom we had worked before on ReTreat and had a personal relationship.
Q: Walk us through the shoot. How many days and hours, and what were the challenges?
The shoot lasted two days in total, and the biggest difficulty we faced was gauging how much material we would be able to shoot in that span. We had to work around actors’ availability and restrictions, child-employment working hours, and AWC’s opening and closing hours. This effectively meant we had 5-6 hours of pure shooting time each day. We started prepping at 7.30am every day and were there until 5.30pm; four to five hours went on setting up the cameras, re-setting, calibrating, the lunch break and rehearsing.
A big part of the process was making sure all of the recording devices worked in sync. We used head-mounted camera [HMC] units provided by Centroid3D, who were kind enough to lend them to us for the shoot. We had to make sure the motion-capture system, the HMC units and the sound all recorded accurately and in sync to avoid difficulties afterwards. If anything had gone wrong, Jonathan would have had to synchronise the body performance and the facial performance manually, which would have taken much more time.
We were caught off guard during the shoot by how quickly we covered the material we had planned. Normally, on a single-camera fiction shoot you get through about 3 pages of dialogue per day. On the first day of MoCap, however, we managed to shoot 20 pages of simple movements and dialogue, and that difference in volume was staggering. We had to write material overnight after the first day and feed it to the actors the next day, as they hadn’t had time to prepare for it. Once we shot the new material, we went on to record additional voice-over, which we had never intended to do in those two days.
After the shoot itself, we kept going back to Amersham and Wycombe College for a week to process the data we had captured. Jonathan, Simon, Neil and a team of volunteers (Amersham students) processed the data from 9am to 5pm. Not as fun as the main shoot, but equally important, as without it none of the work we had done so far would have mattered!