Terminator Genisys Shoot Story

DoP Kramer Morgenthau is one of the lucky few who bestride the twin disciplines of movies and television. He has shot two episodes of the television juggernaut Game Of Thrones, he shot the VFX-heavy popcorn favourite Thor: The Dark World and now Terminator Genisys with its time-scapes, pivotal VFX and a more than 80-day shooting schedule: "I've definitely gone back and forth quite a bit between film and TV. I follow the material and the writing more than anything else. Television right now has some of the best material. The best writing gives you opportunities to make the best visuals. Of course as a cinematographer it’s a lot of fun to work with light and all the things that go along with cinematography, but the most exciting thing is what you’re actually shooting. Television shows like Game Of Thrones are just so much fun to work on; I wish more movies were as good as that show."

This golden age of television production also has benefits for image makers like Kramer: he gets to use similar equipment, or as he sees it, similar sensors. "TV and film have similar kit but it’s different. I would say I’m using the same sensor, but I’ve chosen to shoot different aspect ratios and resolutions. I never use ArriRaw in television, which is down to time and budgets; time is budget in a way. You just don’t need that much data and you don’t need to blow it up that much, although you do a lot of VFX with a show like Game Of Thrones. Maybe they might mandate that now, but certainly when I worked on it they didn’t.

"Big screens need anamorphic, a 2.40:1 aspect ratio. Also the way you compose shots for film is different; you’re thinking about the larger canvas. Other than that, filmmaking is filmmaking.

"I feel like I know the Alexa sensor pretty well now. I can walk into a room and know how it’s going to react to the light in the room, and it’s a very comforting thing. Just as when I was shooting film I would get to know certain film stocks and pretty much knew how it was going to look on film, as much as possible. The Alexa has the most dynamic range and the most filmic-looking images, and you don’t have to work to get it to work; it settles in very nicely. I’m pretty familiar with it and dreading the day when they upgrade to another sensor, but it’s inevitable. I’m sure it’ll be the same thing but much better. The basic philosophy behind it is fewer, bigger ‘pixel buckets’: larger photosites allow for more dynamic range, as opposed to cramming photosites onto a sensor so you can say it’s 4K or 6K or 8K. You can get more resolution that way, but it’s harder to get dynamic range."
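Kramer's 'pixel buckets' point is easy to see with some rough arithmetic. The sketch below assumes a round 25 mm active sensor width and illustrative pixel counts (these are not Arri's actual specifications) to show how photosite area, and with it light-gathering capacity, shrinks as horizontal resolution rises:

```python
# Illustrative only: photosite pitch and area on a fixed-width sensor
# at different horizontal resolutions. The 25 mm width and the pixel
# counts below are round example numbers, not real camera specs.

def photosite_pitch_um(sensor_width_mm, horizontal_pixels):
    """Centre-to-centre photosite pitch in micrometres."""
    return sensor_width_mm * 1000 / horizontal_pixels

width_mm = 25.0  # assumed Super 35-ish active width
for label, px in [("~3K", 2880), ("4K", 4096), ("6K", 6144)]:
    pitch = photosite_pitch_um(width_mm, px)
    area = pitch ** 2
    print(f"{label}: pitch {pitch:.2f} um, area {area:.1f} um^2")
```

Halving the pitch quarters the photosite area, so each site collects roughly a quarter of the photons; that is why chasing pixel counts on a fixed-size sensor tends to cost dynamic range.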

Film Prep

"When I was prepping the film, a lot of it was around the visual effects, but it’s really a visualisation of the world building that you’re doing. What is this world going to look like? How much of it is going to look like the older versions? How much of it are we going to put our own touch on? What does a time displacement device look like? What does time travel look like? What does a future world look like? What do 2017 and 2027 look like? The mixture of 1984 against 1973, and designing looks for the time periods. You’re thinking colour palettes and the whole visualisation of the movie, and then you get into how you can technically achieve some of these things. You work very closely with the visual effects supervisor and that team on previs and illustrations from the art department, working out how the sets look. Then you get into how you’re going to do it. That’s months and months of work. What locations? How much are we going to build on stage? How do you flip a bus? How do you have Arnold fight himself? All that cool stuff.

"My way of working is to work at a certain stop. I shot the entire Terminator Genisys at a T4, at least the interiors; there wasn’t much day exterior on Terminator. That was just where the lens looks the best. If you’re too wide open with the anamorphics it’s hard to get anything to resolve, as everything looks a little ‘mushy’. Spherical lenses you can shoot at a T1.4 if you want; they don’t look their best there either, but that’s a look in itself, a softer look. So at T4 the lens looks great but you still have enough shallow depth of field, because anamorphic is much shallower anyway. When you’ve been lighting for 80 days on Terminator your eyes get used to lighting for a T4; you don’t even need a meter anymore. I know I need a 20k to get a certain level at a certain distance. So you’ve tuned your eye to the sensor, to that stop, to that ASA. You can just start working and not worry about things like the camera not being up yet, that kind of thing.

"I work with a light meter anyway because I’m used to that, but you can dial it all in and once you see the image on a monitor you can start fine tuning it." 
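The meter arithmetic Kramer has internalised can be sketched with the standard incident-light exposure equation. The calibration constant of roughly 25 (for footcandles; about 250 for lux), the EI of 800 and the 1/48 s shutter below are common working assumptions for this kind of setup, not figures from the production:

```python
# A sketch of the equation an incident light meter solves:
#   footcandles = C * T^2 / (EI * t)
# C ~= 25 is the usual footcandle calibration constant; EI 800 and a
# 1/48 s shutter (180-degree at 24 fps) are assumed, not quoted values.

def required_footcandles(t_stop, exposure_index=800, shutter_s=1/48, c=25.0):
    """Approximate incident light needed to expose at the given T-stop."""
    return c * t_stop ** 2 / (exposure_index * shutter_s)

print(required_footcandles(4.0))   # light needed at T4
print(required_footcandles(2.8))   # one stop wider: roughly half
```

At T4 this comes out around 24 footcandles, and opening up to T2.8 roughly halves the light needed, which is exactly the kind of consistency that lets a DoP judge a 20k's level by eye after 80 days at the same stop.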

Kramer equates his 'comfort zone' of working mostly at a certain stop to the composer Irving Berlin, who only wrote music in the key of C. "I like to play in the key of C like Irving Berlin did, where I can really not think about flats, sharps, things like that. I dial in and get the best melody, and you can transpose that to anything. It’s like working in one comfort zone."

"I’m one of these people who digs out old glass and fine-tunes it with the lens tech at Panavision, and we use older C Series lenses, anamorphic glass, even B Series. We only used C Series for Terminator. You fine-tune it to the project you’re working on, at the optimal stop that you like to work at, at the optimal distance and range. But what I like about the older glass is it’s softer and takes the electronic edge off; the anamorphic has a bokeh that is just magical and brings back some of the magic you had with film. I think older glass, or fine-tuning glass, is here to stay."

Technicolor handled all the data management once shooting wrapped. The on-set capture was backed up two or three times on LTOs, sent off to the archives and converted into DPX or EXR files, which is also what the VFX houses worked from. Everything stayed in the 2K space, as working in 4K is still too expensive for VFX work; very few movies do that.
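The cost gap behind that 2K decision is easy to rough out. The sketch below assumes 10-bit DPX packing (three 10-bit channels per 32-bit word, so 4 bytes per pixel) and uncompressed half-float RGB EXR (6 bytes per pixel), ignoring headers and real-world compression:

```python
# Back-of-envelope frame-data rates behind the 2K vs 4K cost gap.
# Assumes 4 bytes/pixel for 10-bit packed DPX and 6 bytes/pixel for
# uncompressed half-float RGB EXR; headers and compression ignored.

def gb_per_hour(width, height, bytes_per_pixel, fps=24):
    frame_bytes = width * height * bytes_per_pixel
    return frame_bytes * fps * 3600 / 1e9

print(f"2K DPX: {gb_per_hour(2048, 1080, 4):.0f} GB/hour")
print(f"4K EXR: {gb_per_hour(4096, 2160, 6):.0f} GB/hour")
```

On those assumptions an hour of 24 fps footage is roughly 0.76 TB at 2K DPX against about 4.6 TB at 4K half-float EXR, around a sixfold difference before VFX iterations multiply it.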

DI and Post 3D

"You also tune your eye to the DI suite and avoid, for instance, spending half a day hanging blacks and truss on a bright wall when you can fix it later. Of course, it could also take you half a day of rotoscoping in the DI, so sometimes you get those shots in camera. You want to get your digital negative in range so you can still bring it back; if it’s clipping, white or black, you can never bring it back. If you don’t go in with a visual design you’re not going to discover it in post; if it’s not there, it’s not there.

"The creative side of it in post is something I spend many hours on once the effects are finalised. Even before that we do passes of plates to send out to visual effects so they have an idea what it should look like. While shooting I also create a look bible, which is a set of frame grabs from every scene with a look applied, and I create looks, CDLs (Colour Decision Lists), that live in the metadata. I stamp it out as much as possible as I go, and then once it’s all cut together you see how it’s working. It can be completely different or it can really flow, and you fine tune and fine tune and shape, finesse. It’s like printing a great photograph.
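The CDLs Kramer bakes into the metadata follow the published ASC CDL transfer function: per-channel slope, offset and power, then a saturation blend against Rec. 709 luma. A minimal sketch, with made-up grade values:

```python
# A minimal sketch of the ASC CDL transfer function: per-channel
# slope/offset/power (clamped to [0, 1] between stages), followed by
# a saturation blend using Rec. 709 luma weights. Grade values are
# made-up examples, not CDLs from the production.

def apply_cdl(rgb, slope, offset, power, saturation):
    out = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = min(max(v * s + o, 0.0), 1.0)  # slope, offset, clamp
        out.append(v ** p)                 # power
    # Saturation: blend each channel toward Rec. 709 luma.
    luma = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]
    return [luma + saturation * (c - luma) for c in out]

graded = apply_cdl([0.5, 0.4, 0.3],
                   slope=[1.1, 1.0, 0.9],
                   offset=[0.01, 0.0, -0.01],
                   power=[1.0, 1.0, 1.2],
                   saturation=0.9)
print(graded)
```

With slope 1, offset 0, power 1 and saturation 1 the transform is an identity, which is why a CDL can travel harmlessly in metadata until a grading tool chooses to apply it.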

"I thought they did quite a good job with the post 3D conversion; I thought it was quite beautiful. There are a couple of different philosophies on how you should shoot for a post 3D process. I think the best way is to think about how the shots might play in 3D, and then there’s the ‘boots on the ground’ version where you just make the film and let them figure out how to dimensionalise it. There are certain shots that don’t work as well, like things with lots of foreground, over-the-shoulder shots, shots with things protruding to the edges, but we more or less made the 2D film we wanted to make. We didn’t have a great amount of time to re-compose shots. There are other philosophies, like actually having a stereographer on set advising you. We didn’t do that, but I think it worked out."

Saving Cinema

99% of the movie was shot on the Alexa, but there were other cameras that Kramer used. "There were some GoPros in there. Sometimes we shot with them but didn’t use the footage in the movie, though I remember seeing a GoPro shot in the ‘Bus flip’ scene which looked great. There was some RED Dragon footage in there for certain shots because of the size of the camera, before the Alexa Mini came along (now the Alexa Mini looks like a RED). A Phantom was used for some very slow motion. There might have been some Canon 5D and film shots as well, but I’m not sure whether they made it into the movie. You just find the best tool for the job, and sometimes that is a smaller, lighter camera that you can put somewhere.

"As far as camera design goes, I don’t think miniaturisation is the answer to the cinema world’s problems, but certainly having an Alexa Mini, or another tiny camera, will be fantastic. It’s not always needed though; for example, you could use the Alexa 65, which is a behemoth, well, not quite that, but it looks like a ‘fat’ Alexa. I think larger sensors or medium format sensors are definitely where cinema is going, not television, as I don’t think you need it for TV. But giant sensors for theatrical projection are where it’s all going, and it will be fantastic. It’s going to be more immersive; it’s going to be so high resolution you’re going to feel that you could just walk into the screen."

Posted on July 31, 2015 and filed under cinematography, case study, sensor.