Posts filed under comment

Peter Jackson Welcomes the End of Film and the Coming of 4K as a 70mm Substitute

Peter Jackson was sending a message to the RED event at the recent NAB Convention from his New Zealand base © RED

In a message from his New Zealand base, director Peter Jackson congratulated RED on the introduction of a revolutionary camera product and heralded the end of film as a medium. He went on to champion 4K (you could argue that RED isn't true 4K, as it uses a Bayer-pattern sensor) and compared it to the use of 70mm film in movies like Doctor Zhivago. His point was that directors should have an optimum resolution available for 'epic' movies.

Perhaps of more interest, Mr Jackson mentions that while shooting The Lovely Bones he would take his own RED camera out in the morning to shoot his own background plates. More enabling from digital cinematography.

See his beautifully shot message HERE

Posted on April 23, 2010 and filed under cinematography, comment.

Bad Skills In The Face Of Mother Nature

Shooting in Hurricane Gustav © Medical Student Flickr.com

One of the pleasant by-products of the HD evolution is the benefit brought to standard definition broadcasts. Well-lit, well-shot standard definition material looks far better on a large HD screen than it would on a small screen. Similarly, HD productions broadcast in SD look better than if they had originated with SD cameras and lenses.
Posted on April 21, 2010 and filed under comment, mike brennan, technique.

Save The DIT (Digital Image Technician)

 

DoP George C. Palmer argues that there are no official qualifications for the role of the DIT, and that this opens the door to abuse

I know this will sound harsh to the non-technical, non-DIT-backgrounded folks, so I apologise in advance. But as someone to whom procedures like 'modulated white shading' and 'back focus' are simply basic knowledge and normal practice, I must say that when such procedures need to be discovered on the job from the Internet or chat forums, it seems to indicate that there was no qualified DIT on the set.

If that is true, and it really is, too often, then given that EVERY DIT training program teaches that all one has to do is 'learn the menus', it is a sad statement on the state of our craft.

To be clear, 'menus' are just software interfaces that have replaced all of those 'pots' and other controls that formerly existed on the electronic boards in camera heads. Those controls were too daunting for creative-side folks (DPs, cinematographers, operators, ACs, colourists, etc.) to undertake, so camera 'engineers' were always employed to adjust and set up cameras using those 'pots', because they understood not only what the labels on those 'pots' meant, but precisely what their effect was on the 'look' of the camera.

DIT Or Not?

The current corps of DITs includes precious few of those qualified camera engineers; most DITs in the pool do not have that background and come from the creative side (listed above). This is not a condemnation of any individual or work category, nor an intended insult, nor an attack on or defence of any job, just my view of the personnel landscape of our craft.

So, having said all of that, I do wonder why our craft seems to believe that, just because a simplified software interface has replaced the myriad physical pots and other controls, the qualifications of the person making the adjustments have also, somehow, been simplified. The truth is exactly the opposite. Even good video camera engineers have had to expand what they knew from TV video engineering to accommodate the more complex colourist picture requirements of digital film, and film makers' look requirements. So I constantly wonder why everyone, including our craft unions and the DIT training programs, seems to suggest that folks clearly untrained in the 'science of imaging' and camera processing that must underlie any creative tool could replace an existing pool of technically trained engineer/colourists.
If we truly want the best outcome for our digital film, I believe that anyone who is in a position to affect the optimum performance of any camera used on that film, including HD and Digital Cinema, should have the proper background to actually assure that outcome.

Instant DIT

Concerning the spate of instant DIT-dom that has erupted since the advent of HD and 24P, I think a look at the genesis of work categories in the camera craft might be useful in establishing a context for the creation of any craft position. Aside from the legacy issues associated with union 'jobs', i.e. family, friendship, favours, being a good person, and the myriad 'old boy club' promotions, there actually is a basis for differentiating between jobs within a craft based on qualification, mostly to guide those who need our services to the 'right' person for their job.

DPs, for instance, usually are defined by some unique combination of knowledge, experience, and leadership skill sets that imply 'time-in-grade' and time spent as an operator, AC and/or in lighting/electric, above average visualisation abilities that lend themselves to translating a script into a visual story, and an understanding of the elements of colour and light.

Does that mean that they are required to understand and know how to do such things as operate a film printer, drive a DaVinci, build, drive, and maintain a computer that runs Scratch or Speed Grade, calibrate a viewing room projector, calibrate a modern day film transfer device like a Spirit?

Some, of course, may, but most will not. So they depend on other professionals to do those things. Most reasonable film DPs communicate their vision to their timers, colourists and processing professionals with, perhaps, on-set photographs, industry shorthand that implies a visual direction, such as 'bleach bypass', and descriptions of desired saturation, density, over- and under-exposure, brightness and darkness. Some of the most renowned movies ever made came from this process. With the advent of electronic cameras, many of those professional functions have been consolidated into the hands of the person driving the menu-driven in-camera processing controls of modern HD and 24P cameras: the Video Controller or the DIT.

Role Playing

Would the typical DP simply reach in front of their colourist during a grading session and start driving the controls to get their look, or demand to physically drive the printer light console on a film printer, or insist on setting up the film processor and the chemical combinations at the lab to get just the perfect 'bleach bypass'? Do they even assume they have the technical knowledge of all of those specific technologies to enable them to do so? Probably not. But do many current DPs, 1st and 2nd ACs (and other, even less technically qualified, folks) aspire to do the exact equivalent of those things using the 'menus' of 24P HD cameras? You bet they do, and many are darn proud of it.

If that doesn’t challenge your sensibilities, let me offer you a somewhat hyperbolic, high tech analogy for parallel comparison. If a brilliant Physicist with broad understanding of the principles of propulsion, lift, atmospheric conditions, and even the design of air foils were to be handed a list of the controls in a Boeing 777 airliner, would anyone trust him or her, after a one or two day familiarisation session in the cockpit with a pilot trainer, to take one up with a load of passengers on the third day?

Not, of course, that DPs or DITs are, or need to be, rocket scientists or brilliant physicists, or even that we have life-or-death responsibilities, but we sure treat our jobs and our films with similar motivations.

DIT Qualified

The DIT work category was established, mostly by craft unions, to fill the one-stop-shop, on-set HD/electronic requirement: a professional who performs many of the formerly post-processing functions (listed before) for the digital image creation process. The term DIT and the job have been adopted in both union and non-union settings. But the problem is that no one, to the shameful discredit of our professions, has ever established ANY qualifications for the position, even though the DIT performs many of the functions of a camera engineer (VC), colour timer, colourist, HD technologist and film processor, functions that were formerly accomplished with the knowledge, experience and vision of folks who were required to actually demonstrate those professional attributes. There is no stated list of professional requirements, nor any requirement for proof of skill in performing any of those jobs.

And the result is that, based on my observation of current, normal practice on HD/24P/digital film productions, everyone in the camera department (and by extension the entire production community) seems to believe that simply memorising the menus, plus some pre-production experimentation (sometimes referred to as camera tests), is adequate preparation for the routine on-set manipulation of those menus on a multi-million dollar movie, many times to the actual exclusion of a DIT from the crew, and many times as a means of marginalising the DIT into the functional role of a 24-frame playback person, or even a digital cable utility. The latter practice may even be the reason that so many under-skilled DITs are included and welcomed on the camera crew at all.

So maybe these are just the ramblings of an old fart who has actually endeavoured to achieve professional status through the demonstration of professional practice, the acquisition of knowledge, experience and time-in-grade. Many might characterise me, and other traditionalists like me, as simply envious of the 'new, young, aggressive' breed of DITs. Or it may actually be a sad commentary on the state of our craft.

Many young, inexperienced, lightly qualified DITs (and Operators and ACs) are, to their credit, eager and flexible folks who want to be part of a leading-edge movement within our industry. Some actually want to learn all of the things that will eventually make them the professionals they hope to be viewed as. But that takes time, experience and training. While many new, wannabe DITs (and Operators and ACs) pay good money to attend menu-manipulation seminars, my experience, and many informal polls of those folks, tells me that most would NEVER pay pennies to be trained in the actual technologies of the equipment that contains those menus. Simply assuming the job title, attending one seminar or doing some pre-production experimentation does not instantly or automatically confer the professional requirements of the job onto the 'assumer'.

 

Posted on April 20, 2010 and filed under cameras, cinematography, comment.

Hiro Quibbler: Loving Judder

Learning to Love Judder

HD is not film, and there are endless missives on the differences, discussing everything from gamma curves to grain. But there are other differences – why does HD exhibit motion judder so much more than film? And why does HD appear to judder more when viewed on a CRT monitor than when recorded to film and viewed at the cinema?

First, let's look at the history of cinema to establish why we have arrived at today's frame rates and perception of flicker in the cinema.

Initially the frame rate of motion picture cameras and projectors was around 15 frames per second; this was increased to 24fps when the talkies came along. The number of frames per second needed to capture fluid movement is one question; the number needed to display it is another.

A film projector must have a blanking phase to allow each frame of film to be transported into the gate. Then the stationary frame is projected. Our eyes are very sensitive to flicker, especially in a dark environment, so 24 black flashes per second is very disturbing.

The solution to reducing the duration of the black flashes is to show the same stationary frame two times within 1/24th of a second. So instead of one long black flash between frames we get two short black flashes between frames. This makes the black flashes less objectionable. (In the days of 15 frames per second projection a three bladed rotary shutter was used to create 45 images per second).

So today at the cinema we are seeing the same frame flashed twice, creating a 48-image-per-second display (some projectors have three-bladed shutters, but most have two). For the best part of a century, the industry and audience standard for exposure duration has been 1/48th sec per frame (for features shot at 24fps). This shutter speed, combined with the frame rate, has a characteristic sharpness and motion judder that is, in my view, the strongest characteristic of what we call the film look.
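The arithmetic above is simple enough to sketch. This is an illustrative back-of-envelope calculation, not any particular projector's specification; the 180-degree shutter angle is the conventional assumption behind the 1/48th figure:

```python
# Display flash rate: each film frame is shown once per pass of a shutter blade.
def flashes_per_second(fps, blades):
    return fps * blades

# 24 fps through a two-bladed shutter gives 48 images per second,
# while early 15 fps projection with a three-bladed shutter gave 45.
print(flashes_per_second(24, 2))
print(flashes_per_second(15, 3))

# Camera exposure per frame, assuming the conventional 180-degree shutter:
# half of each frame interval, i.e. 1/48th sec at 24 fps.
def exposure_seconds(fps, shutter_angle=180.0):
    return (shutter_angle / 360.0) / fps
```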

So is 24 frames per second okay for actually capturing the motion on set? Actually it isn't ideal. It copes fine with talking heads and wide shots without much movement, but when the subject starts to move, the cinematographer has to be very aware of how far the subject can move between frames. If the subject moves too much, it will appear to judder.

Page one of “how to be a cinematographer” circa 1890s says that a “subject must remain stationary in frame or move slowly through frame”! Feature films spend a good deal of time and money making sure that the subject does not move through the frame too quickly. Cranes and dollies help the camera track a subject through a set, but do it so that the subject itself does not move through frame too quickly. Extraordinary devices have been created to accommodate the relatively low frame rate of film.

However, look at the background on such a crane or dolly move: if it is sufficiently in focus, it will appear to judder. If the audience around you is looking at a juddery background and not the subject, then the shot is poorly composed or framed; the idea is to look at the subject, not the background.

The sharper the image is, the more the perception of judder. One of the tools the cinematographer has on 35mm is a narrow depth of field that pulls the background out of focus. So, for instance, a subject can be running down a corridor being tracked by Steadicam, preventing it from moving within the frame too much, yet the fast moving background will not judder if the camera crew keep it out of focus.

This brings us to the main reason HD appears to judder more than 35mm film: depth of field. Current HD cameras, with their small sensors (by film standards), have a large minimum depth of field, a law of optics, as discussed in last issue's Qi. Put simply, it's hard to keep your background out of focus with a 2/3in sensor, so motion judder will be more apparent. Larger-sensor digital cameras reduce the minimum DoF, solving some of these problems.
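To put a rough number on that, here is a thin-lens depth-of-field sketch. The focal lengths, circle-of-confusion values and distances are illustrative assumptions chosen to give roughly matched fields of view, not measurements from any real camera:

```python
# Thin-lens depth of field: near/far limits of acceptable sharpness.
def dof_limits(f_mm, N, s_mm, coc_mm):
    H = f_mm ** 2 / (N * coc_mm) + f_mm            # hyperfocal distance
    near = H * s_mm / (H + (s_mm - f_mm))
    far = H * s_mm / (H - (s_mm - f_mm)) if s_mm < H else float("inf")
    return near, far

subject = 3000.0  # subject at 3 m, all distances in mm

# Super 35-ish frame (CoC ~0.025 mm), 50 mm lens at f/2.8
n35, f35 = dof_limits(50.0, 2.8, subject, 0.025)
# 2/3in sensor (CoC ~0.011 mm): roughly 19 mm gives a similar field of view
n23, f23 = dof_limits(19.0, 2.8, subject, 0.011)

print(f"S35 depth of field:   {(f35 - n35) / 1000:.2f} m")
print(f"2/3in depth of field: {(f23 - n23) / 1000:.2f} m")
```

With these assumed numbers the 2/3in camera carries roughly three times the depth of field, which is exactly why its backgrounds stay sharp enough to judder.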

New directors, in particular, can feel that the judder of HD is unacceptable, a problem related to the HD working environment. When working with HD we see a high quality, live, progressive picture from the camera whilst a shot is being set up, re-framed or focus checked. This involves a lot of swishing and panning around, so the director sees lots of judder during this phase of the shoot and can become overly sensitive to it, often to the point of watching the background judder in a rehearsal rather than the subject!

This background judder can also be more prominent when watching on a small location monitor from a distance than on a larger screen. The dark black border of the HD monitor forms a strong contrast if the background subject is bright, and due to the size of the monitor the viewer may not be fully engaged with the subject by not being close enough to the screen.

The upshot of this is that your background should be carefully considered when planning a shot on HD. High-contrast sharp verticals, such as white door frames against dark walls, should be avoided if they can't be thrown sufficiently out of focus. Reducing the contrast of verticals in frame is a smart move. Even more for you to think about as a DoP, but then that's why they pay you the big bucks, right?

Posted on March 29, 2010 and filed under comment, educational.

Book Reviews



Digital Film Making
By Mike Figgis
Faber and Faber - Available on Amazon UK for £8.99
156PP 18cm high by 11cm wide paperback format

Mike Figgis is a noteworthy director of movies like Leaving Las Vegas, Stormy Monday, Timecode and Internal Affairs, but he is definitely not a Hollywood man. His book is presented in a small paperback format designed for your gear bag and for quick access, not so much because the book is separated into chiselled sections but because Mike's wisdom is. His words aren't a cynical grumble about the Hollywood machine but a shove against stupidity in film making. It's clear that Mike loves the process of making films, and especially cameras, increasingly digital ones. One story recounts how, on a major Hollywood film on a bright sunny day, he was totally confused as to why the lighting crew put up the biggest light he had ever seen when God's light was so bright. It was that British sensibility against the madness of Hollywood that held the entertainment factor for me, but there is plenty of interesting gear talk too.



Direct Your Own Damn Movie!
By Lloyd Kaufman with Sara Antil and Kurly Tlapoyawa.
Focal Press - Available from Focal for £10.99
201PP with Filmography and Index

Where to start in describing the Lloyd Kaufman 'schtick'? Maybe take a quick look at www.troma.com (his film company's site) and you will understand where he's coming from as a film maker – Toxic Avenger and the like! This book is his advice on how to make your first movie, but above and beyond that it's hilarious and comes off the page like a stand-up performance, not unlike a paperback Mel Brooks. One example is Footnote Man, yes, an imaginary voice in the footnotes. There are loads of interviews with his peers, with sometimes helpful advice but mostly just conversations. He takes you chronologically through the process with often scurrilous interjections from his life as a film maker. Thoroughly entertaining, and one to dip in and out of when your own project is getting you down.

 





The DV Rebel’s Guide
An all-digital approach to making killer action movies on the cheap.
By Stu Maschwitz
Peachpit Press - Available on Amazon US for £31.99
320PP with DVD including The Last Birthday Card short and ‘making of’ with his commentary and VFX breakdowns

Stu Maschwitz is a former employee of Industrial Light and Magic who left, soon after working on The Phantom Menace, to form a new post house called The Orphanage. Well, that's his day job; in his spare time he champions the guerilla or rebel film maker. In The DV Rebel's Guide, Stu, as I feel I can now call him, pours out everything he knows about making movies in the best and cheapest possible way. Everyone who has had even a small notion of making some kind of movie production should read it, because it is a film school course in the form of a book, written by someone who breathes film making and is quite willing to pass on his secrets for the greater good.

You would be wrong to label him as just a post guy, even though that's where he makes his living; he is also an obvious fan of the acquisition end of things. The main theme of the book is rebellion, as the title suggests, with Stu always backing the small guy against the Hollywood elite. He goes on about production value and how certain ways of working can keep your production value strong enough to fool people into thinking they are watching a movie with a much bigger budget. He admits that he wanted to make his own movies because the shorts he had seen at festivals didn't seem to have 'entertainment' as their watchword. Entertainment for Stu is based on lots of guns, car chases, visual effects and breathless storytelling.

The book is linear in its contents as far as making a movie is concerned and covers not just the production but also storyboarding, scheduling and, of course, script writing. Stu is also not precious about his art and concedes that sound is more than 50 percent of the effect of a movie – a well-known Lucasfilm mantra.

Chapters include extensive advice on how to plan your shoot, which seems to be where the most effective cost cutting can be planned for, but Stu always advises you to be prepared to run out of the door if a shot presents itself. For instance, he needed a shot of some fire trucks outside the apartment in his first short, The Last Birthday Card. The apartment in the short was on fire and the trucks had to be arriving to deal with it. He soon realised that the fire station on the same street practised every Friday, and he was able to get the right angle of a truck driving slowly past the apartment. He later composited fire and smoke coming out of the apartment's window.

You'll find plenty of references to movies in the book and lots of analysis of certain scenes, which he picks apart so you can recreate similar ones with his advice. There are also frequent jargon-busting paragraphs, again to bridge the gap between those who are in the business and those who want to be.

You can tell when the VFX chapters hit that this is where Stu is most comfortable, but he is careful not to alienate the audience and walks you through much of the process. The accompanying DVD was a disappointment, as it didn't seem to have a lot of the files promised on it, but maybe that's a territory thing. However, it does include The Last Birthday Card with a commentary track and VFX breakdowns.

A thoroughly recommended reference book for all content creators.

Posted on March 29, 2010 and filed under cinematography, comment, reviews.

Hiro Quibbler: CMOS v CCD

Which is better, CMOS or CCD?

Film and television are, perhaps, two of the few places where C.P. Snow's 'Two Cultures' actually do communicate. An extraordinary amount of science and technology is required to produce these defining art forms of the 20th Century. Albert Einstein is best known, of course, for a couple of theories of relativity, but this work was far too controversial at the beginning of the last century for a physician called Allvar Gullstrand, an influential member of the Nobel Committee. So, despite 60 or so nominations for relativity, in 1922 the Royal Swedish Academy of Sciences awarded Einstein the 1921 Nobel Prize for his discovery of the law of the photoelectric effect, kind of as a consolation prize – we can thank a virtually unknown physicist called Carl Wilhelm Oseen for brokering that particular deal. Einstein gave the considerable sum of money he received to his ex-wife.


Albert’s Nobel prize winning theory gets to help the modern day image-maker too. Imaging sensors use a variation of the photoelectric effect he described and here, too, there are developing two cultures – those who follow the Charge Coupled Device (or CCD) and those true believers in CMOS.


The first step in either type of imager is to convert the light (photons) falling onto the chip into an electric charge (electrons). This requires a lot of difficult quantum mechanical calculations, so you can thank Einstein for starting the ball rolling with the sums. Fortunately, the photons don't care – they hit some silicon and raise the energy of some of its electrons enough to free them, and we have our charge. The more light that hits the silicon, the more electrons we get. A bigger bit of silicon (i.e. a bigger light sensitive area) will give us more charge for a given intensity of light, as there are more photons hitting its surface – like the difference between catching rain in a sail or in a handkerchief. In principle, both CCDs and CMOS sensors convert photons to electrons the same way – the differences come in what happens to the electrons afterwards, and in the implications that has for the way the chips are made.
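The sail-versus-handkerchief point can be sketched numerically. The photon flux, quantum efficiency and photosite sizes below are made-up illustrative values, not figures for any real sensor:

```python
# Electrons collected scale with photon flux, photosite area, quantum
# efficiency and exposure time (all values here are illustrative).
def electrons_collected(photon_flux, area_um2, qe, exposure_s):
    # photon_flux in photons per square micron per second
    return photon_flux * area_um2 * qe * exposure_s

flux = 1000.0    # photons / um^2 / s
qe = 0.5         # assume half the incident photons free an electron
t = 1.0 / 48.0   # a film-style exposure

sail = electrons_collected(flux, 8.0 ** 2, qe, t)          # 8 um photosite
handkerchief = electrons_collected(flux, 4.0 ** 2, qe, t)  # 4 um photosite
print(sail / handkerchief)  # four times the charge for four times the area
```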

The little light sensitive pixels (photosites) in our sensor will continue to convert photons to charge as long as light falls on them, which is why digital stills cameras still tend to have mechanical shutters. The CCD’s electronic shutter works by transferring the charge in each pixel to a storage bin next to the light sensitive area. The chips are arranged so that charge can be passed from bin to bin until it gets to an amplifier at the edge of the chip where the charge is converted to a voltage.

CMOS chips transfer the photosite’s charge to a capacitor and then convert the charge on the capacitor to a voltage by having amplifiers right next to each pixel. Their ‘shutter’ is a switch between the photosite and the capacitor.

But which is better?

CCD image sensors have been around a lot longer than CMOS ones, so the technology is better developed. However, the techniques used to make CCD chips don't lend themselves to making all the other digital gubbins that constitute a modern image sensor. As a result, the analogue to digital converters, clock generators and so on are separate chips, which puts the overall system cost up. CMOS, on the other hand, is the technology used for most silicon chips, so a CMOS sensor chip includes everything you need to get a digital image data stream, reducing cost. They also use less power, so your batteries last longer. There is a common myth that, because CMOS chips use the same technology as, say, the microprocessor in the computer I'm typing this on, they can be made on the same production line, further reducing their manufacturing costs. In fact, that is rarely true – the large size of sensors and some of their manufacturing requirements tend to restrict them to dedicated production lines.

Talking of myths, CMOS sensors all exhibit 'rolling shutter' artefacts, right? These artefacts are caused when a sensor doesn't grab the charge state of all its pixels simultaneously, but instead transfers the charge line by line, so the pixels you are seeing at the bottom of the image were captured later than those at the top. In fact, there is nothing inherent in CMOS technology that precludes the simultaneous capture of the whole frame; however, this requires more transistors in the 'switch' between the photosite and the capacitor. That means that, for each pixel on the sensor, the photosite must be smaller (to make room for those extra transistors). A smaller photosite is less sensitive to light than a larger one, so a CMOS sensor with a rolling shutter has better low light performance than a similar sensor with a full frame shutter. A stills camera has that mechanical shutter, so the designers will go for a rolling electronic shutter to improve the sensor's ISO rating. The decision is harder with the CMOS sensor in a video camera. Most manufacturers choose low light performance and use a rolling shutter, but before you condemn the decision, remember that the shutter in a film camera also 'rolls' across the frame. In fact, at high shutter speeds (nowadays above about 1/250th), the same is true of a stills camera, as shown by Jacques-Henri Lartigue's famous 1913 photograph 'Car Trip, Papa at 80 kilometres per hour'.
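A toy model shows where the skew comes from: if rows are sampled one after another while the subject moves, a vertical edge comes out slanted. The row count and edge speed below are arbitrary illustrative values:

```python
# x-position of a moving vertical edge at the moment each row is sampled.
# With a global shutter every row is sampled at the same instant; with a
# rolling shutter each row is sampled a little later than the one above it.
def captured_edge_positions(rows, edge_speed_px_per_row, start_x=0):
    return [start_x + r * edge_speed_px_per_row for r in range(rows)]

global_shutter = captured_edge_positions(4, 0)  # no motion between row reads
rolling = captured_edge_positions(4, 2)         # edge moves 2 px per row read

print(global_shutter)  # a straight vertical edge
print(rolling)         # the edge leans: the classic rolling-shutter skew
```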

For a given photosite size, CMOS and CCD sensors are now of a very similar quality in terms of dynamic range (the range of light intensity, from black to white, that the sensor can capture) and signal to noise ratio, but CMOS sensors suffer from another image distortion artefact. Remember that each pixel has its own amplifier? Unfortunately, manufacturing tolerances mean that the gain of each amplifier is slightly different, so exposing the sensor to a uniform grey won't produce a uniform grey output. CCDs aren't immune from exposure artefacts either, of course. In a CCD sensor, if too much charge accumulates in a photosite it can leak into the adjacent pixels; because of the way the chips are designed, this usually appears as vertical or horizontal 'blooms' around highlights.
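The effect of that per-amplifier gain spread, and the standard flat-field fix for it, can be sketched like this; the 2% spread is an illustrative figure, not a measured one:

```python
import random

# Per-pixel amplifier gains differ slightly from pixel to pixel, so a flat
# grey field reads back non-uniform; a recorded flat field lets you correct it.
random.seed(1)
gains = [1.0 + random.uniform(-0.02, 0.02) for _ in range(8)]  # ~2% spread

grey = 100.0
raw = [grey * g for g in gains]        # non-uniform output from a uniform scene
flat = [200.0 * g for g in gains]      # calibration shot of a known flat field
corrected = [r / (f / 200.0) for r, f in zip(raw, flat)]  # gains cancel out

print(raw)        # values scattered around 100: fixed-pattern noise
print(corrected)  # back to a uniform grey (within float rounding)
```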

In both CMOS sensors and CCDs, some of the silicon area of an image sensor is taken up by electronics that aren't photosensitive: the transfer channels and anti-blooming gates in a CCD, and the switching transistors and amplifiers in CMOS. Manufacturers often place an array of micro-lenses over the pixels to focus more of the available light onto the light sensitive part of the chip, in an attempt to increase the overall sensitivity of their sensors. These work well when light hits the sensor at right angles, but towards the edge of the sensor they are less effective, resulting in vignetting.

CCD image sensors have been around for 40 years or so, but in only about 10 years, CMOS sensors have nearly caught up – the general consensus is that CCD still has a tiny edge in image quality, but each new generation of CMOS brings massive improvements. In the past the choice was clear – for quality imaging you use CCDs. At the current state of the art the choice is less clear and you would expect the next few years to edge CMOS ahead of CCD.

Either way, the ex Mrs. Einstein was very happy with Albert’s Nobel Prize, though she never really got the credit she deserved. It turns out Al was terrible at maths and got his (then) wife to give him a hand. Relativity in action, perhaps?

Posted on March 29, 2010 and filed under comment, dslr, educational.

There's More To 3D Than Not Having To Say Sorry!

3D Can Be Fun!

I was quoted, in a recent issue of this magazine, criticizing the methods of live-action stereo capture advocated by Messrs. Pace and Cameron with Avatar, specifically their choice of slaving convergence to focus. Whilst I maintain the view that stereo anomalies will occur, and are in evidence in some of Avatar's live action sequences, as a consequence of shooting this way, I felt a little shocked and embarrassed by the sheer venom my opinion was laced with by the time it was expressed in print. True, I personally wouldn't choose to shoot this way, but in the grand scheme of things, who am I to appoint myself grand inquisitor and condemn it as 3D heresy? There is no one single right and correct way to shoot stereo, there are only preferences, and I'm clarifying this because the whole point of this article is to promote an understanding of 'stereography' as a creative discipline rather than a solely technical one, which is kind of untenable if I am known to have thrown all subjectivity out the window. The glorious and beautiful reality is that 3D production is very much dependent upon what your stereo objectives are from the outset, and upon what the audience's 3D expectations are of your film, which of course is tied into genre and a myriad of other creative and practical factors that will all duly colour your stereo choices and their application.

Of course, there are some absolute rules and conditions you would be well advised not to violate, that is, unless in the spirit of Dali and Buñuel perhaps, your objectives are so avant-garde that you actually wish for the viewer to throw up. But good 3D must surely be measured by more than creating the perception of depth whilst successfully keeping everybody’s food down?

Yes, it is generally and rightly accepted that good 3D is, first and foremost, comfortable to watch! It immerses us in the story world and helps reinforce the suspension of disbelief, both within a narrative context and in the context of attendance, such as the sensation of actually being at a 3D concert or sporting event. Good 3D therefore draws no attention to itself. If the viewer is forced to look away because of discomfort, or is equally disengaged by stereo show-boating, then you have in both instances betrayed the requirements of your story and made the viewer conscious of its processes. Just because you can hurl a javelin into the viewer's forehead doesn't mean that you necessarily should. Nor does it mean that you necessarily shouldn't explore and exploit negative-Z depth (out-of-screen functionality), provided it is not clichéd, but more on this later.

3D is no different from any of the other technical disciplines that, in concert and with creative application, make motion pictures, so do let’s get the science of it into realistic perspective. We would all generally accept that extensive training and a relatively acute mind are required of a Director of Photography or a Visual Effects Supervisor, and if those skill-sets were as scant and as little understood as stereography is, then they too would probably be scrutinized with similar measures of both reverence and skepticism. Despite the technical knowledge required of a DoP, we still tend to speak of them in artistic terms, distinguishing the good from the bad by their levels of invention and ability to paint with shades of light and colour. A stereographer should be defined similarly, by their ability to manipulate and control depth for dramatic effect, because just as the art of lighting is so much more than ensuring everything is correctly exposed, so too the art of 3D is so much more than just adding dimension and comfort to a shot.

Is the 3D take-up unstoppable, and how do you shoot this 3D anyway? 3D simply doesn’t operate on a one-size-fits-all setting. Sure, you will most likely set and fall back on a global operating level, but this should dial up and dial down dynamically throughout your story, and even within an individual shot if there’s a dramatic requirement for it. Let’s say I was shooting a prison break, for example. I might very well want to dial down my stereo in the beginning, compressing the space because I want a heightened sense of claustrophobia and oppression inside the prison cell or escape tunnel. Once my jailbird takes flight and is out in the open, drinking in the sun and clean air, I might open up my depth budget and swallow him up in an expanse of space to accentuate a sense of freedom, or even agoraphobia if it was relevant, and thus potentially increase the dramatic payoff of the sequence.

Okay, this is a very basic example, but a pretty effective one for illustrating how the manipulation of depth can serve a dramatic purpose, aside from just endowing your story world with dimensionality.

And there are many ways to manipulate or personalize the perception of depth. Setting interaxial and/or convergence is not, nor should it be, your first consideration. Setting your scene and your action should always be the priority, and this will in turn inform how you set up your camera system. Your internal action and camera moves will in themselves have a bearing on the perception of depth. How is the space utilized in shot? Might it best serve your stereo expectations and support the action to dolly through space, i.e., along the z-axis, rather than across the scene?
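The interplay between interaxial, convergence and apparent depth can be sketched numerically. The formula and every number below are my own illustrative assumptions, not anything prescribed here: one common textbook approximation for a converged rig puts on-sensor parallax at p = f·t·(1/C − 1/Z), with focal length f, interaxial t, convergence distance C and subject distance Z.

```python
# Hypothetical sketch: how interaxial and convergence distance set
# horizontal parallax on the sensor for a converged stereo rig.
# Assumed approximation: p = f * t * (1/C - 1/Z), all in millimetres.
# Under this sign convention, p > 0 means the subject sits behind the
# screen plane, p < 0 in front of it (audience space).

def sensor_parallax(focal_mm, interaxial_mm, convergence_mm, subject_mm):
    """Approximate horizontal on-sensor parallax, in mm."""
    return focal_mm * interaxial_mm * (1.0 / convergence_mm - 1.0 / subject_mm)

# A subject at the convergence distance lands exactly on the screen plane:
on_screen = sensor_parallax(35.0, 65.0, 3000.0, 3000.0)  # 0.0 mm

# Narrowing the interaxial compresses depth (the 'prison cell' feel);
# a wider interaxial stretches it back out (the 'open landscape' feel):
cramped = sensor_parallax(35.0, 30.0, 3000.0, 6000.0)
open_air = sensor_parallax(35.0, 65.0, 3000.0, 6000.0)  # larger than cramped
```

The point the sketch makes is simply that depth is a dial, not a fixed property of the rig: halving the interaxial roughly halves the parallax of everything off the screen plane.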

Consider that stereopsis, our ability to see the world three-dimensionally, is only one of many ways in which we reference and register depth in our daily lives. We also use motion parallax, perspective and scale, to list but a few, not forgetting atmospherics (the lunar astronauts reported great difficulty judging distance on the moon because zero atmosphere meant that a mountain range on the horizon had the same visual sharpness and clarity as a boulder only metres away), and we should remember that sound informs our perception of depth too, for example through the Doppler effect. All these alternative and additional depth cues can be added to the filmmaker’s stereo palette and employed in varying degrees to enhance our immersion and perception of volume and space. Conversely, they might also be consciously omitted to collapse it.

Avoid unwatchable 3D

3D therefore has some very specific requirements of composition and framing, which in turn impact the blocking of action, which of course affects camera moves and shot duration. There is a tendency, certainly with film action sequences, music videos and commercials, to significantly exceed the shot count actually required to deliver the intended message or exposition. This is done for the most part to maintain visual interest and keep modern audiences engaged. But any rapid shot assembly of this nature, particularly coupled with agitated camera moves, will most definitely create unwatchable 3D; certainly so for the big screen, to a lesser extent on television, but the ‘keep it comfortable’ rule will nevertheless be violated. When you watch a 2D movie your eyes are more or less set on a single point of focus and convergence, which is the actual screen surface on which the movie is projected. Your eyes are only ever likely to shift focus and convergence when you look down having spilled your popcorn, or turn around to scowl at the person who left their mobile on. When you’re watching stereo content, however, your eyes are literally interfacing with the film world the same way that they would interface with the real world. In the real world, our eyes of course adjust focus and convergence as we shift from one point of interest to another. It is a physical muscular activity conducted over time, and regardless of how imperceptible this action and its duration may seem, our eyes simply do not snap from one state of rest to another instantaneously, which is precisely what a rapid shot assembly would be asking them to do. By bombarding the viewer with visual stimulus and information in such a way, you literally overload their physical capacity to view and process that same material. Ironic, hey!
This is also why it is good practice, even at a more languid pace, to balance your stereo across the cut, i.e., shift your entire depth budget along the z-axis so that the stereo from one shot to the next meets at a convenient and comfortable middle ground during the transition. That is infinitely easier to do if you have plotted your stereo properly from the outset. The implication, certainly for 3D production, is that film grammar is going to have to shift from a reliance on this visual shorthand, or patois, to a singularly more concise vernacular. In other words, longer shots individually stacked with more information. The good news, though, is that dimensionality inherently adds visual interest to a shot, and audiences tend to want a little more time to explore and savour the eye candy you pack into it. Additionally, it is my experience that the same shot in 3D generally feels faster and more dynamic than when viewed in 2D, and I suspect, although admittedly I am no neurosurgeon, that this is down to a certain degree of latency within the brain while handling stereo tasks – it simply requires more processing power.
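Balancing stereo across the cut can be sketched as a small calculation. Everything below is an invented illustration, not a documented workflow: each shot’s depth budget is reduced to a near/far parallax pair, and an equal-and-opposite horizontal shift is computed for each side so the two shots’ depth midpoints meet at a shared middle ground on the transition.

```python
# Hypothetical sketch of 'balancing stereo across the cut': slide each
# shot's whole depth budget along the z-axis (a horizontal-image-
# translation-style shift, expressed in screen-parallax units) so the
# outgoing and incoming shots meet at a common midpoint.

def depth_midpoint(near_parallax, far_parallax):
    """Centre of a shot's depth budget, in parallax units."""
    return (near_parallax + far_parallax) / 2.0

def balance_cut(shot_a, shot_b):
    """Return (offset_a, offset_b): the shifts that move both shots'
    depth midpoints to the middle ground between them."""
    mid_a = depth_midpoint(*shot_a)
    mid_b = depth_midpoint(*shot_b)
    target = (mid_a + mid_b) / 2.0
    return target - mid_a, target - mid_b

# Outgoing shot sits mostly behind the screen, incoming mostly in front;
# the computed offsets are equal and opposite, splitting the difference:
offset_a, offset_b = balance_cut((-0.2, 1.0), (-1.0, 0.2))
```

In practice the shift would be ramped over a handful of frames around the cut rather than applied as a step, but the arithmetic of finding the middle ground is the same.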

3D on a budget? Anyway, with all this in mind, producers should be encouraged to couple their director with a stereographer at the earliest stage of pre-production, so that between them they can identify where and how variations in depth might serve the narrative, flag situations that are likely to create edge violations, and reveal where compositing might even be required to meet stereo expectations. Ideally you should be able to graphically chart your depth budget throughout your entire project, be it a movie or music video, and follow its undulations, peaks and troughs as they correspond to variations in the action, drama, suspense, melody and choreography. The point is, your 3D considerations and awareness should not be starting on the first day of principal photography, based on the assumption that the shaman with the beam-splitter will just cast his stereo voodoo over everything and that’ll be the end of it. If you want your stereo bespoke and fashioned in your own image, I would even advise you to go and learn the basic principles for yourself, because although you can hire the technical crew and shoot in 3D tomorrow, until you are working from a wholly informed position, until you know how to exploit the rules and their loop-holes for yourself, you are likely to miss a trick and nobble your full creative potential.

3D is a wholly subjective process! If you need further evidence of this, consider how fashionable it is right now to stigmatize the employment of off-screen effects. Apparently they have no relevance to modern 3D cinema and all instances of them should be tarred with the same gimmick brush. I’m not about to endorse a cheap poke in the eye with a long stick, don’t get me wrong, but, as I have been trying to qualify from the outset, stereography is the art of impacting or underscoring the viewer’s emotional engagement with the story through variations in perceptible depth, geometry and, yes, even proximity. So with this I fire my parting shot and very unfashionably maintain that if it is narrative-driven and heightens the audience’s emotional engagement with the scene without pulling them out of the story, then absolutely, why shouldn’t you have the occasional and very cool off-screen effect? Do you really think all those people sitting in the auditorium with dark glasses on have absolutely no expectation of this at all? Choosing to sneer at this simply as a backlash against old corny gimmicks is to voluntarily hamstring your production’s full 3D potential and inadvertently admit you lack the invention to make such effects anything but cheap. Well, that’s my humble and subjective take on the matter anyway.

Posted on March 26, 2010 and filed under 3d, broadcast, comment.

HDV – The Democratisation of High Definition

For most video professionals, high definition has long been synonymous with high cost. Hence, despite the plethora of cameras, VTRs and other tools now available for production in various HD formats, HD has not been adopted for everyday use by the legions of hard-working professionals. The good news is that the advent of a new low-bit-rate HD format, called HDV, may change all of that by offering many of the benefits of HD at the cost of DV.
Posted on November 12, 2003 and filed under comment, compression, educational.

Re-claiming High Definition

The definition of the term high definition is becoming increasingly slippery. In fact, I’d say that for practical purposes the term can be as misleading as “low fat” or “British beach”.

For example, this month I’ve been working on a 3D 3.5K x 1K shoot and also fielding calls from a producer who wants to shoot “high def 24p”. Both shoots have been described as high definition but, really, they are not.

The “24p” producer was referring to what was in fact mini DV 24p, not HD. On discussion, it emerged he should have been asking for an HD camera in ENG mode. He had been sidetracked into conflating digital with 24p DV, and 24p with high definition. In his mind there was not much difference between 24p and high definition; he hadn’t bothered to see the difference for himself.

And what to call a 3.5K x 1K 3D digital image? Tri-D?

Kodak is now using the term High Definition. Their recent European advertising campaign claimed that Kodak film produces “high definition” images. For those of us regularly working in HD, this claim was as incongruous as HD camera manufacturers promoting HD as having “grain”.

While Kodak is using the relatively vintage term “High Definition”, the electronics manufacturers are doing their best at predicting the future. Quantel has recently invented the term HDRGB, Sony “compressed 4:4:4”, and Thomson was early off the mark with the brilliantly conceived term FilmStream.

Who knows what term in the future will aptly describe the basket of technologies and formats. My 1985 Oxford concise dictionary has no reference at all to the word internet, which back then was more likely to be considered a tennis expression than anything futuristic.

We shouldn’t be distracted by the terminology. Fortunately it is relatively simple, even for the uninitiated, to judge a standard of image for themselves. We need only trust our eyes.

If anyone doubts that the concept of high-quality digital images has entered the public arena, then the Kodak campaign is confirmation that the HD train has departed the station, albeit with each carriage travelling in a different direction to as yet unknown destinations.

This magazine will continue to chart where we have come from and where the movers and shakers consider we are going.

We are all embarking on a high definition journey of one kind or another, so unless you want to end up in Malawi instead of Monaco, pack this magazine in your digital rucksack.

Bon voyage!

Michael Brennan

Posted on September 1, 2003 and filed under comment, mike brennan.