Patrick Inhofer: Color Correction 07/01/12
Every month, a daily progression of fundamentals on a topic.
Our ‘Workflow Whisperer’ for July is Patrick Inhofer of TaoOfColor.com, a color correction training site. He will explain color correction fundamentals from theory to application. As a working colorist at Fini.tv, and as someone who has interviewed many of the top colorists in the world, Patrick’s perspective is one of a kind. If you like this series, check out his free Newsletter, The Tao Colorist, which links out to articles all over the web. It focuses on the art, craft and business of color correction and delivers most Sundays – just in time for your morning coffee or evening glass of wine (depending on where in the world you live). Did we mention it’s free?
We’ll be tweeting links to these tips out daily, so make sure to follow us if you don’t already.
TABLE OF CONTENTS:
WEEK 1: Color Grading Fundamentals
MON: Complexity of Seeing
TUE: The Mind’s Eye
WED: The 60-Second Rule
THU: Color Correction Begins Sooner Than You Think
FRI: Bypass Your Brain
WEEK 2: Setting Up Your Color Correction Workspace
TUE: Selecting a Monitor
WED: Grading In Your NLE?
WEEK 3: Color Grading Fundamentals
WEEK 4: Putting It All Together
TUE: Creating a Look
WED: Control Surfaces
WEEK 1: Color Grading Fundamentals
Monday – 07/02/12
The Complexity Of Seeing
Have you (or one of your clients) ever walked into an edit room the day after being on a shoot, looked at the footage on a professionally calibrated monitor and declared, “Hey, my footage didn’t look like that?!”
Except, of course, the footage looks exactly – precisely – like that. It looks like that today. It looked like that yesterday. And it’ll look like that tomorrow.
Over the next 3 posts, that’s exactly the question we’ll be answering, and in the process discover what the answer means to anyone who practices the craft of color grading moving images.
The first step in understanding “what happened” is understanding that we don’t see the way we think we see.
Seeing Seems Easy
Open your eyes, turn on the lights, and the reflected light washes your retina with visual data. Similar to a CCD in a camera, photons hit a variety of specialized nerve structures, igniting electrochemical responses that allow our brains to recreate the scene in front of us.
Assuming our physical eyeballs are functional, seeing is simple. It’s natural.
And what we see is real. Isn’t it?
Seeing Is Believing. But Is It Real?
Well… not exactly.
R. Beau Lotto, a noted vision scientist, often speaks of the raw data that our retina sends our brain and how that data is, in itself, meaningless. Any given color of any given brightness can be created under many wildly different circumstances. Raw retinal data on its own is nothing.
Retinal data must be manipulated by our brains. Our brains take the raw data and superimpose meaning onto that data. Meaning is derived from context. And for our purposes, context is just another word for memory.
Visual meaning is completely derived from our past experiences. From our memories.
Seeing Isn’t Passive
Building from Beau Lotto’s introduction: we usually think of seeing as a mostly passive activity.
Nothing can be farther from the truth. We’ll use this optical illusion from LottoLabs as an example (be sure to hover your mouse over the mask to reveal the colors that we’re comparing).
If you’re on a Mac, dig out the Digital Color Meter app from your Utilities folder. Now go back and sample those colors from the ‘unmasked’ image.
When I sampled them, the RGB values of the gray ramp on the right are 118/127/127 and the values on the left are 118/126/128. Essentially, identical shades of gray with identical brightness values.
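A quick sketch confirms just how close those two samples are. The values below are the ones quoted above; the Rec. 709 luma weights are a standard way to estimate perceived brightness from RGB:

```python
# The two grays sampled from the illusion (values quoted above).
left = (118, 126, 128)
right = (118, 127, 127)

# Rec. 709 luma weights -- a standard estimate of perceived brightness.
def luma(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

diff = abs(luma(left) - luma(right))
print(f"luma difference: {diff:.2f} code values")  # well under one 8-bit step
```

The difference is a fraction of a single 8-bit code value – far below anything the eye could resolve as a brightness difference.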
So why do we perceive them as being different? I’ve broken down the image below.
Points 1 & 2 have identical RGB values – yet they seem to vary intensely in overall brightness. Why?
1. Look at the lower shadow cues in this image. Shadows provide context. In this case, they’re informing us that the main illuminant in this scene is coming from screen right. And the illuminant is fairly low on the horizon. AND both points are being simultaneously illuminated by the same single source of light.
2. The subtler shadow cues at the top of the two surfaces make clear that Point 2 is directly illuminated by the light and Point 1 must be in shadow. We know this by our accumulated experience in which we’ve seen physical objects interact with light and shadow. And there is no reason to believe that this moment in time is any different than our past direct experiences.
Here’s where it gets interesting…
3. Our brain has enough data to put Points 1 & 2 in context. If Point 2 is in direct light and has the exact same RGB value as Point 1 (which is in shadow), then Point 2 must be a darker surface than Point 1. Therefore, knowing that the values are the same but that they are each under two different lighting conditions, our brain reinterprets the raw retinal data and puts it in context.
If Points 1 & 2 have simultaneously identical RGB values while under differing lighting conditions, the nature of their surfaces must be completely different! Therefore, comparatively, Point 1’s surface is much lighter than Point 2’s surface – and that’s how our brain shows it to us.
Re-stating The Result
Our brain took two identical RGB values, placed them in the context of the light source, took into account the nature of the reflected light and remapped those RGB values making them look completely different so they’d conform with the overall context of the scene.
Paradoxically, our brain forced those RGB values to LOOK different precisely because they’re the same!
The Big Hairy Truth
Our raw retinal data is actively re-visualized by our brain. It presents to us an image which, as an objective truth, is sometimes wrong!
As professional colorists, this fundamental truth about the human visual system is nothing short of – if you’ll excuse my language – a pain in the ass!
The Tao of Professional Color Grading: Seeing the Present, Accurately
Armed with this basic insight, a professional colorist’s first task is to take the image in front of them and see that image for what it is – not what it should be based on past experience. Not what our brain actively manipulates in an effort to help us see the world in proper context.
We need to find ways to bypass our brain’s well-meaning instinct.
As professional colorists, we must judge an image for what it actually looks like if we are to have any hope of revising it and conforming it to our creative goals.
Luckily, there are tools and methodologies that professional colorists use to help them “see the present, accurately.” These include:
- SPEED: Execute initial judgements quickly
- WAVEFORM MONITORS, VECTORSCOPES AND HISTOGRAMS: Objectively analyze brightness and color values
- WORKING ENVIRONMENT: Design our workspace to help visual acuity and color perception
- ACCURATE MONITORING: Viewing images on neutral displays that don’t have their own built-in biases
SpliceVine’s “Tao of Color” Series
Over the next two weeks we’ll be covering all of these topics – much of the information comes from researching the training series I’ve developed for the TaoOfColor, and from seminars I’ve given to User Groups and at NAB. The first two weeks lay the necessary groundwork for us to apply these concepts as working professionals.
In the last two weeks of this blog series we’ll delve into actual color grading workflows and practices. The concepts we cover are platform agnostic. No matter what nonlinear editing system or color grading application you’re using, you’ll find these techniques useful.
Please ask questions via SpliceVine’s contact form. Email Eric by the end of Week 3; he’ll gather your questions and we’ll discuss them in an audio podcast.
Tuesday – 07/03/12
The Mind’s Eye: Tripping Up Colorists Daily
Yesterday we looked at a simple optical illusion. Its purpose was to demonstrate how our brain adds meaning to what we see, often at the expense of objective reality.
Hopefully you walked away with an understanding of how this behavior creates a problem for professional colorists. Today, we’ll show some practical applications of how this mind’s eye phenomenon can trip us up.
The Colorist’s Dilemma
Every professional colorist needs to confront the same dilemma:
“Is what I’m seeing actually what the image looks like?”
Before we can devise a plan to confront the Colorist’s Dilemma, we need to step back into the salty primordial goo of our brains and re-examine how we see.
To do that, let’s revisit our trusty vision scientist, R. Beau Lotto. This time I’ll direct you to this shortish article he wrote for BBC Online.
Once you’ve read his article and checked out the optical illusions, continue reading…
Optical Illusions Are A Roadmap
The optical illusions in R. Beau Lotto’s article are very simple. In their simplicity they demonstrate the power of our brain over our retina.
These illusions are a roadmap showing us the pitfalls we must avoid. And we’re not side-stepping these pitfalls through some theoretical Zen-like “still your mind to see the world” advice – but through some very practical actions to keep us from making basic “mind’s eye” mistakes in our day-to-day work.
Practical Illusion: Brightness Contrast
The first illusion in that article is the grand-daddy of optical illusions. With striking simplicity it shows the power of surround fields. So how can we, as professional colorists, put that information to practical use?
Start by imagining a three-day color grading session for a 22-minute show:
- Day 1: You color grade with the overhead lights turned on. Light levels are fairly high. You leisurely grade the first 11 minutes of the show.
- Day 2: You wake up in a foul mood and decide you want a more solemn atmosphere today and turn off the overhead lights, working in near pitch black. You grade the last 11 minutes of the show.
- Day 3: Today is Client Review. You’ve got a backlight behind your monitor; since it looks more impressive, you turn it on (with the overheads off) and review the grade with your client.
Now, let’s consider these two questions:
- What are the chances that your decisions about brightness and contrast will be consistent across the entire show?
- What are the chances that your images on Day 3 will look anything like what you did on Days 1 and 2?
If you go back and review the Brightness Contrast Illusion and frame your answers in the context of that illusion, you’ll see there’s little chance that you made consistent decisions, and almost zero chance of those decisions looking as expected on Day 3.
The Brightness Contrast Illusion teaches us that the same set of RGB values will look brighter or darker depending on what’s also in our field of view. The human mind doesn’t see in absolute values, but relative values. When color grading we must control for the variable of the surround field.
Practical Illusion: Cube Illusion
The third of Beau Lotto’s illusions is a personal favorite. Whenever I demonstrate this live, it bowls people over. It doesn’t matter that I’ve been showing that cube for years. It doesn’t matter that I know precisely the outcome of the Illusion – those two center cubes never ever look identical to me. And when I use a Digital Color Meter to measure the values of those cubes, I’m always amazed that they’re an exact match. Even knowing the outcome, this illusion gets me EVERY TIME. It’s unavoidable.
Now let’s go back to our three-day color grading session and add complexity to our viewing environment, keeping in mind the Cube Illusion:
- Day 1: The overhead light is fluorescent, with a strong green bias.
- Day 2: No light is turned on, so the predominant light is coming from our displays, set to a cooler 5000K – which is a bluer tint than the fluorescent bulb.
- Day 3: The backlight has an industry standard 6500K bulb – which is significantly bluer than the fluorescent bulb.
Questions, considering what we now know about the Cube Illusion:
- What are the chances that skin tones in your images are going to be consistent between Days 1 & 2?
- And do you think any of the images during the Day 3 client review will look like they did on the days you were color grading?
That would be a no to both questions. And now we’re in an even worse situation, because not only are our brightness and contrast levels going to be set differently between Days 1 & 2, but most of our color decisions will also be different between those two days – and neither will be correct for Day 3’s client review.
What’s a colorist to do?
Colorist’s Dilemma: Lessons Learned
After digesting everything from the first two posts in this series you should be getting a sense of how important the viewing environment is to the colorist. There’s a reason I started this series by talking about these optical illusions. As a colorist you need to understand this fundamental point:
- Our brain is actively re-interpreting what our eyes see – every moment of every day. The objective truth of our raw retinal data is twisted and morphed by our brain at a very low level. And it’s as hard to control as the rate of our heartbeat.
Therefore, we need to control as many variables as possible when color grading. Here are a few preliminary rules:
- Surround field is everything
- Controlling the light in your room is paramount – including light levels and color temperatures of those lights
- You must use a variety of tools to verify what your eyes are telling you
These rules are NOT arbitrary. These rules are not a cynical ploy by established professionals to raise the barrier of entry to becoming a professional colorist. They arise from the deepest recesses of human biology. These optical illusions exist precisely because our mind’s eye imposes a lifetime of visual observations onto raw retinal data.
Always remember: Your mind’s eye forces the current reality to conform to your past experiences. You always see the present through the eyes of your past.
Additional Resources
- LottoLab.org: A good resource exploring a variety of optical illusions.
- TED Video: A must-see TED Talk by R. Beau Lotto with active stage demonstrations of many of the concepts we’ve talked about, plus a whole lot more.
- Color Correction Handbook: Professional Techniques for Video and Cinema: A fantastic book for anyone learning the craft of color correction, which includes a chapter on room setup (some of which we’ll cover later this week). It’s practically an encyclopedia of facts, tools and techniques that’ll benefit everyone reading this blog series – from novices to seasoned pros.
- The Art and Technique of Digital Color Correction, Second Edition: Just updated, another fantastic book that – like the Color Correction Handbook – will take you much deeper through many of the concepts we’ll be talking about in this blog series. It also includes extensive interviews with the United States’ top colorists, breaking down their workflows and habits to help you understand, develop and refine your own workflows. A real gem of a book.
Wednesday – 07/04/12
How To Improve Your Color Corrections Using “The 60-Second Rule”
In the first post of this series we examined how the brain takes raw retinal input, finds visual clues and then superimposes our past experiences onto that visual data. Our brain puts out a picture that can be quite different from the raw input.
Yesterday, I introduced the notion of the Colorist’s Dilemma. It’s the first question that every colorist must answer:
“Am I seeing what the image actually looks like?”
In that post we learned how our viewing environment can play havoc with our perceptions of color and contrast – and how two colors with identical RGB values can simultaneously look completely different, precisely because they’re identical! K-razy, right?
Personally, I’m convinced that this innate ability of ours to force retinal data to conform to past experiences explains why a client can walk into our room the day after a shoot and proclaim, “That’s not what I shot. It didn’t look like that yesterday on set!”
This can be true even if our client is completely controlling the lighting in the video village so that it perfectly matches our room, and even if the client was using our exact same monitor in the field. The problem is that, yesterday, our client had the benefit (curse?) of something she doesn’t have the very next day…
The Immediacy of Reality
What did our client have yesterday? – the actual, real scene in front of her.
Think about it: at one moment our client is on set, looking at the actual scene with full access to every detail of the image. She turns away and in 15 seconds is in the video village looking at the camera tap. But she’s doing so with the full benefit (curse) of knowing exactly what the scene actually looks like. It doesn’t matter that the viewing environment is identical both on set and later in the edit room; her immediate access to the original scene is what her brain knows to be true. When she looks at it on the monitor – if a few details are wrong, if the camera isn’t capturing the scene precisely as she desires – her brain reinterprets the data and presents the image as being identical to what she saw 15 seconds ago.
Even worse, the more she looks at the camera tap, the more time her brain has to reinterpret the image, matching it more closely to the reality that’s only a few steps away. What’s a director to do to keep from getting lost while trying to evaluate the image?
All Is Not Lost
Our director, just like colorists and DPs, could use the tools we talked about in the first post of this series to objectively analyze the image. Or, she could rely on a methodology I call “Work Fast.” What does Work Fast mean?
When is her (and our) best chance of seeing the image in its raw state? Not surprisingly, it’s in the moments before our active brain can fully reinterpret the raw retinal data.
This phenomenon – I’ll call it the “first best glimpse effect” – is well known to colorists, and is extensively documented in the book The Art and Technique of Digital Color Correction by Steve Hullfish. In this book he calls this The 60-Second Rule.
The 60-Second Rule
Professional colorists all agree that when you first look at an image, you’ve got 45-60 seconds to make your initial judgement about how the shot looks and what’s wrong with it, and then apply your initial corrections. After those first 60 seconds pass, your brain takes over and re-works the image to balance out the inconsistencies, dulling the quality of our work.
60 seconds is not a lot of time for evaluating and fixing a shot. It can be extremely challenging. And once done, we need an objective way to evaluate if our work is accurate. It’s logical to ask ourselves: Can we extend that 60 seconds?
More precisely: Is there a way to bypass the brain and its highly interpretive visual system?
Brain Bypass, Anyone?
If you’re a cameraman, you already have a tool to do this. It’s called The Light Meter. The value of a light meter is that it outputs raw numbers. Visually, the raw numbers are meaningless. If the light falling on two different surfaces is equal, the light meter will tell us – with a numerical output. Numbers are a logic mechanism; they have nothing to do with our visual system. Our brain won’t easily transpose those numbers just because the two surfaces look different.
By using ratios, a cameraman can examine different elements of the image, plot them against each other and, even if his eyes tell him something different, the raw data gets him closer to the truth and he’ll set his lights accordingly.
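The ratio math itself is simple. As a sketch (the meter readings below are hypothetical, not from the post): f-numbers advance one stop per factor of √2, and each full stop doubles the light, so a key light read at f/5.6 against a fill read at f/2.8 works out to a 4:1 lighting ratio:

```python
import math

# Hypothetical incident readings from a key and a fill light, as f-numbers.
key_fstop = 5.6
fill_fstop = 2.8

# f-numbers advance one stop per factor of sqrt(2), and each stop doubles
# the light, so the key-to-fill ratio is 2^(stops apart).
stops_apart = 2 * math.log2(key_fstop / fill_fstop)
ratio = 2 ** stops_apart
print(f"{stops_apart:.0f} stops apart -> {ratio:.0f}:1 key-to-fill ratio")
```

The meter’s eyes never lie: even if the fill side *looks* brighter because of its surround, the numbers still say 4:1.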
When it comes to color correction, we need our own version of the light meter.
The Colorist’s Light Meter
Luckily, every color grading software and every non-linear system has some version of this tool built-in. Here’s a quick overview (later this week we’ll take a closer look at each of these):
- WAVEFORM MONITOR: Measures brightness (some versions overlay color information)
- RGB PARADE: A variant of the Waveform Monitor that displays the red, green and blue components separately but simultaneously
- VECTORSCOPE: Measures color as both hue and saturation (the intensity of color)
- HISTOGRAM: Plots the number of pixels in the image at each level, from black to white. Some variants mimic the RGB Parade.
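Under the hood, every one of these scopes is just a per-frame statistic. A histogram, for example, only counts how many pixels land in each brightness band. Here’s a toy sketch over a made-up 4×4 frame of 8-bit luma values:

```python
# A made-up 4x4 "frame" of 8-bit luma values -- purely illustrative.
frame = [
    [16, 32, 64, 128],
    [16, 48, 96, 235],
    [20, 40, 80, 160],
    [16, 64, 128, 255],
]

# Bucket the 0-255 range into 8 bins of 32 levels each, like a very
# coarse Histogram scope.
bins = [0] * 8
for row in frame:
    for value in row:
        bins[min(value // 32, 7)] += 1

print(bins)  # most pixels sit in the first few (shadow) bins
```

No picture, no faces, no memories for the brain to latch onto – just counts. That abstraction is exactly what makes a scope a “brain bypass.”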
We can also go out and purchase external hardware that’s designed specifically for measuring images. These often have their own proprietary displays (and are beyond the scope of this series). Here’s a series of demos using a Tektronix scope.
All of the above “light meters” serve one purpose: they allow us to look at an image without looking at the image. These light meters present the image to us abstractly. Our visual memory mechanism is never invoked, which allows us to act on raw retinal data without having to study the image itself. Practically speaking, these tools allow us to extend The 60-Second Rule.
From 60 to 360 Seconds
If we use tools like the RGB Parades and Vectorscopes to bypass our visual system, we’ve taken those initial 60 Seconds and put them in reserve.
In other words: We can spend 60 seconds manipulating our image while looking at the RGB Parade. In doing so we’ve completely bypassed our visual memory mechanism and our quick glimpses at the image itself have only barely eaten away at the “60-Second Clock.”
THIS is why, if you want to get good at color correction, you MUST use scopes. Scopes show us the raw image data. And since they’re abstract, our brain won’t start reinterpreting the image. We put our visual system on standby and only glance at the image for confirmation of our actions.
The end result: we’ve significantly extended that initial 60 seconds. This allows us to slow down and not rush it, especially as we are first learning the craft of digital color grading. This is the key to becoming good at color correction.
Color Grading Can Take Longer Than 60 Seconds
One thing to keep in mind about The 60-Second Rule – it really only applies to the initial fixing of the shot: setting brightness and contrast, removing color casts, neutralizing blacks. These are tasks that are frequently called “primary corrections.”
The next step of color grading – corrections we apply that help direct the viewer’s eye or manipulate specific regions of the image – frequently takes a few more minutes to execute, as artistic choices are more important than the humdrum of fixing problems. The 60-Second Rule doesn’t really apply to these secondary tasks (often called “secondary corrections”).
But you have to be careful. There comes a time when you’ve looked at a shot so long that you no longer know what to do with it. When that happens, notate the timecode of that shot and move on. You can come back to it later when your eyes see it freshly again – and having done the primary corrections already, you’ve got a new 60-Second Clock and this time you’ll find you can dial it in much more quickly.
Thursday – 07/05/12
Why Color Correction Begins (much) Sooner Than You Think
The Color Correction Workflow
(Note: If you need it, here’s the answer to the question: why should we color correct?)
When exactly does color correction begin? Most post professionals will answer with a few typical workflow options:
- DURING EDITORIAL: The offline editor can add color correction filters to do an initial balance on the images. This often makes sense if cameras are extremely mismatched and clients can’t get past how different the images look.
- WAIT UNTIL AFTER PICTURE LOCK: This is the most common workflow. It’s efficient (you’re only grading the shots that make the final edit) and practical (you’re leaving the process to someone who grades all day every day, and they’ll be much faster at it).
- GRADE THE DAILIES, THEN GRADE AFTER PICTURE LOCK: This is a hybrid of the first two methods. In this workflow a colorist does a very quick grade on all the footage (often called a “one light”) and renders the footage out for editorial. The editors then have nicely balanced images while putting the story together. After picture lock the colorist pulls in the timeline, re-links back to the ungraded footage, loads up the one light corrections and begins color grading from the camera originals.
Scripted features, commercials and smaller corporate projects can choose to use any of the above workflows. From a colorist’s perspective, the third workflow is a charm. You’re involved early. You get to know all the footage. You’re doing a quick primary grade on the footage which later speeds up your workflow for final grading.
The third workflow is also a charm for editors since they can focus solely on storytelling. They don’t have to worry about silly client comments concerning shots not visually matching each other. It’s one less thing for the editor to worry about.
Documentaries and reality shows tend to generate much too much raw footage for the third workflow to be practical. Those types of projects tend to ignore color correction until after picture lock. And if they do color correct during editorial, it’s usually to fix really big problems with extreme color casts or faulty cameras – often just to see if a shot can be saved.
But I haven’t really answered the initial question, have I?
When Does Color Correction REALLY Begin?
Like the optical illusions we spent most of this week talking about, it doesn’t matter whether you’ve thought about the process or not – there is one fundamental truth…
Color correction begins before picture lock or before a rough cut has been edited. It begins before the dailies are generated and before the director has ever yelled, “CUT!” It even begins before a single light has been hung.
Color correction ALWAYS begins in pre-production.
Color correction starts (but doesn’t end) with the script
It begins when the director and producer are developing the script – and continues when the actors are doing their first reading. Color correction happens during location scouts and conversations with the DP about lighting. How’s the set being dressed? How are the actors being costumed? What filters are being used on the camera?
Each of those scenarios is an opportunity to discuss the role color plays in the story.
What if you have NOT spent a single moment discussing or even thinking about color?
Not talking about color is as much a choice (intentional or not) as is having those discussions. When the project later lands in your colorist’s lap, they can only work with what you’ve given them! And those early choices have an exponential impact.
When you’ve talked about color to all the department heads before recording a single frame, the final color grade is far more likely to have a much richer image and the colorist will find the image far easier to manipulate. But when you don’t consider the role of color (what colors each character would wear, how they’d decorate their space, what gels to use on lights to infuse a mood) – that’s a choice too. A choice that will limit color correction and keep the project from ever reaching its full potential.
Whether we (or our clients) have these discussions or not, pre-production has a huge impact on the color grading process. And that includes if no color choices are ever made.
Yet sometimes, we don’t have a choice.
Sometimes color correction has to wait until post-production
Documentaries are a classic example where color choices are limited. It’s not really in the filmmaker’s hands (except in the case of interviews, which is the perfect time to use color and light to add drama and mood).
In this case, real people are making these choices for us. What they wear, how they live, the cars they drive. They make the color choices.
This is when a cameraperson becomes worth their weight in gold. The biggest obstacle to a good color correction in documentaries is the quality of the recorded image. The more accurately exposed the image, the better the lens, the higher quality the recording codec – the more opportunity the colorist has to deliver an outstanding-looking show.
And how do reality shows compare to documentaries?
Reality shows are more scripted than not, but the same rules apply. The earlier that department heads are thinking about color and using it to help reveal personalities and mood, the more opportunity a colorist has to add a tremendous amount of production value.
Both for reality shows and documentaries – with the exception of opportunities presented by sit-down interviews – practical considerations force color correction to happen after picture lock.
Now, there is one circumstance where color correction (of some sort) MUST begin after shooting but before editorial. In this circumstance, waiting can cost money.
Shooting LOG? Don’t wait.
The biggest new trend in the past five years has been digital cameras that record in LOG space (what’s LOG? Here’s a good explanation.). These cameras are designed to pre-process the image as little as possible – preserving as much brightness and color latitude as is electronically feasible. But this latitude comes at an expense. The raw footage looks terrible.
LOG camera originals are designed to be processed before they look correct. They shouldn’t be edited without a correction applied. Their biggest problem on a display is the lack of contrast. We call this footage flat for a reason. We need to “unflatten” this footage for the editorial team.
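To make “flat” concrete: a log encode compresses a wide linear range of light into a narrow, low-contrast signal, and unflattening inverts that curve. The pair below is purely illustrative – it is NOT any real camera’s transfer function (Log-C, S-Log, C-Log and friends each define their own math):

```python
import math

# Illustrative only: a generic log encode/decode pair, NOT any real
# camera's transfer function.
def log_encode(linear):
    # Compress linear light (0..1) into a low-contrast "flat" signal.
    return 0.5 * math.log10(1.0 + 9.0 * linear)

def log_decode(code):
    # The inverse: expand the flat signal back toward linear light.
    return (10.0 ** (code / 0.5) - 1.0) / 9.0

mid_gray = 0.18
flat = log_encode(mid_gray)      # the dull value the editor would see
restored = log_decode(flat)      # round-trips back to the original
```

Mid-gray lands noticeably higher on the flat curve than in linear light – which is exactly why ungraded LOG footage looks washed out on a standard monitor.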
What’s the problem with editing LOG (flat) footage?
LOG footage is recorded differently than how the human eye sees. There’s a ton of detail in this footage but it’s much tougher to see that detail than you’d think – even after you’ve been looking at the footage for weeks. I’ve had clients come to me after editing entire films looking at flat LOG footage. They’re unbelievably thrilled at how great the footage looks after color grading – but they’re back in my room 3 weeks later with a bunch of re-edits and new shots to color grade!
Suddenly they notice details they hadn’t seen before. They discover medium shots where eye lines are all wrong or there’s a C-Stand smack behind a person in close-up. There can be a ton of other problems they never saw because they couldn’t see any detail while watching the LOG images.
On these types of projects we need to have our clients either do a digital dailies workflow or use a plug-in in the NLE to “unflatten” the LOG footage.
If you need to work with LOG footage in an NLE, at minimum you want to apply a Look Up Table (LUT) to your footage (to unflatten it). Here’s a terrific post by Andy Shipsides of AbelCine that rounds-up your options in three NLEs (plus DaVinci Resolve) to accomplish this task. I urge you to read it.
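Conceptually, a 1D LUT is nothing more than a sampled curve with interpolation between entries. This toy sketch uses five made-up entries; production LUTs carry hundreds or thousands of entries per channel, but the lookup logic is the same:

```python
# A toy 1D LUT: five made-up entries mapping flat LOG code values (0..1)
# to display values. Real LUTs have far more entries; the math is the same.
lut = [0.0, 0.1, 0.35, 0.75, 1.0]

def apply_lut(code):
    # Linearly interpolate between the two nearest LUT entries.
    pos = code * (len(lut) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1.0 - frac) + lut[hi] * frac

print(apply_lut(0.625))  # halfway between the two middle-upper entries
```

Apply that per pixel (and per channel, for a 3×1D LUT) and the flat image snaps back to normal contrast for the edit bay – without touching the camera originals.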
This leaves one question that I often get from editors, producers and directors. It concerns saving time in the color grading session.
Should you “pre-grade” to save time for the colorist?
If the only reason you want to grade the footage before handing it off to the colorist is to save time (and money) in the grading session…on average, my advice is: don’t bother.
Personally, I mass delete any color corrections my software will import from the NLE. And even if I’m forced to grade in an NLE – I still delete those filters.
The reason makes sense…
If the person who created those color corrections was good enough at color grading that I could build off their work – then I’d ask (somewhat sarcastically) why the heck I’m being hired! They’re more than capable of finishing the job.
But if the correction delivered to the colorist isn’t accurate, it can take far too long to figure out how to “fix the fix.” It’s easier (and faster) to delete the grade, look at the raw image and start correcting from scratch.
Of course, there is an exception to this rule – it involves the colorist trusting an assistant (or editor) who is knowledgeable about color grading. This person can do the initial correction and hand the colorist an accurate primary grade. The colorist steps in to handle all the Secondary Corrections, putting the final shine on the project.
Friday – 07/06/12
Bypass Your Brain to Improve Your Color Corrections
Most of this week we’ve been talking about our brain: how our visual system is NOT passive, and how what we see is often different from the objective reality in front of us. We applied that knowledge to several scenarios showing how it can affect us while color correcting, and we ended on the importance of bypassing your brain.
One of the best ways of achieving a brain bypass? Make our images usefully abstract to help us engage entirely different mental processes.
And what’s the best tool to accomplish that goal? It’s what post-production pros call “scopes.”
Today is all about Scopes
By the time you get to the end of this post, you should understand how to read scopes (we’ll get into using them in Week 3). But here’s the deal: there are a TON of great tutorials on the Internet about how to read scopes, and I’m going to leverage that information. Rather than rewriting what has been so extensively documented, I’ll share my favorite links on each of these topics.
At the end of this post we’ll get back together for some final thoughts.
Waveform Monitor Fundamentals
- Wikipedia: Waveform Monitor – What a waveform monitor is, plus links out to waveform monitor manufacturers. Short read.
- [Video] The Waveform Monitor – How to read a waveform monitor. Uses Premiere Pro. Andrew Davis points out an inconsistency between the waveform in Premiere and the waveform in After Effects on PAL footage.
- [Video] Understanding the RGB Parade Waveform – Andrew Davis introduces us to the RGB Parade. Premiere Pro.
- [Video] The Vectorscope – A video tutorial using Premiere Pro to explain what a Vectorscope is and how to use it. Plus, a few tips related specifically to Premiere. (Note: Andrew puts a lot of emphasis on the I-Line for flesh tones. Be aware that this line is a guide, NOT an absolute. Don’t be a slave to this line, but don’t ignore it either.)
- What’s In A Name? – A blog post by author Alexis Van Hurkman on the whole I-Line thing and his decision to call it the “flesh tone line” in The Color Correction Handbook – a decision that has proven controversial. (Yes, the field of color grading has controversies. Yay!)
- [Video] Using the Histogram to Keep Detail in the Highlights – I like this video because it clearly shows the relationship between the histogram and image detail getting clipped out. Also, Dave (the host) is a great guy.
Learning to use Scopes
- Measuring Video Levels – From the FCP 7 online documentation. A nice walkthrough of using all the above scopes to “read” the image.
- [Webinar] Taming Luminance in a Digital Dailies workflow – A terrific webinar via Vimeo co-presented by Assimilate and Off Hollywood. The first 10 minutes are a great explanation of light, how it’s measured and the scales used to record it. If you can, watch the whole 50 minute video. Not a video on how to read scopes but on how to manage the information that they present to us.
Which Scopes should you use?
The core scopes that most colorists use: RGB Parade and Vectorscope.
The RGB Parade is terrifically informative when it comes to neutralizing color casts or cleaning up the blacks and whites in an image. In areas of a picture where no color cast should exist, the RGB Parade should be even across all three color channels in that area. Confused? Don’t worry, in Week 3 of this series we’ll dig deeper into this topic.
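To make “even across all three color channels” concrete, here’s a minimal Python sketch (the pixel values are hypothetical, and NumPy is assumed) that does numerically what the RGB Parade shows visually: compare the channel averages of an area that should be neutral.

```python
import numpy as np

# Hypothetical 8-bit pixel samples from an area that *should* be neutral
# (say, a gray card in the shot). Rows are pixels; columns are R, G, B.
patch = np.array([
    [190, 178, 160],
    [188, 176, 158],
    [192, 180, 162],
], dtype=float)

means = patch.mean(axis=0)       # average level per channel
cast = means - means.mean()      # deviation from a neutral (equal) balance

for name, value in zip("RGB", cast):
    print(f"{name}: {value:+.1f}")   # positive = that channel runs hot
```

In this made-up sample, red sits high and blue sits low – the numeric equivalent of a warm cast, which the Parade would show as a raised red trace over a depressed blue trace in that region.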
I use the Vectorscope mostly as confirmation that certain elements of the image aren’t hyper-saturated. If you’re grading for broadcast, you’ll need to be vigilant and watch that Vectorscope to keep elements from “going illegal.”
I also like displaying the Composite Waveform, mostly to judge black levels. If the RGB Parades have a downfall, it’s figuring out – after the RGBs are combined – where the blacks are sitting. Are they clipped? Are they lifted? The Parades don’t give you that information. The Composite Waveform does (it’s also useful, for the same reasons, for seeing where your combined highlights are sitting).
What about the Histogram?
Good question. I’m still figuring that one out. Still image artists love their Histogram – but I suspect that’s because it’s the most useful of all the abstract evaluation tools that ship with Photoshop.
I find that the histogram can be very useful in showing highlight and black clipping (whereas a waveform can keep you guessing). And by looking at the thickness of a histogram you can get a sense of whether one color channel is overpowering the others. I’ve tried grading exclusively with the histogram. It can be done – but I never found an A-HA moment where it seemed to excel or to improve my grading noticeably. The Histogram has its place – I’m just not convinced that it’s nearly as useful as the RGB Parade.
That said – some visualizations of the Histogram are very interesting. Maybe one day someone will innovate with a Histogram display that this digital video colorist will use as his primary evaluation tool.
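As a sketch of why the histogram shows clipping so clearly, here’s a toy example using hypothetical 8-bit luma samples (NumPy assumed): clipped pixels all pile into the very last bin, producing an unmistakable spike at the maximum code value.

```python
import numpy as np

# Hypothetical 8-bit luma values. A pile-up at 255 is the histogram
# signature of clipped highlights; a pile-up at 0 means crushed blacks.
luma = np.array([12, 40, 128, 200, 240, 255, 255, 255, 255, 255])

hist, _ = np.histogram(luma, bins=256, range=(0, 256))

clipped_white = hist[255] / luma.size   # fraction of pixels at max code value
clipped_black = hist[0] / luma.size     # fraction crushed to zero

print(f"{clipped_white:.0%} of pixels clipped at white")
```

Half the (made-up) pixels here sit at code value 255 – a spike no waveform trace makes quite this obvious.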
Outboard (External) Scopes
All of this brings us to the topic of outboard scopes. What are outboard scopes? Rather than looking at the scopes that ship with your software, you’re taking the signal that feeds your external monitor (usually through a Decklink, Kona, or Matrox card) and also feeding it to external scopes that display the Waveform, Vectorscope and Histogram. There are some downsides and many upsides to investing in external scopes:
Downside of External Scopes:
- They cost money: Software scopes need to run on a second computer, plus you need to buy an I/O card to get the signal into that computer. Hardware scopes have the I/O built in and don’t require a computer, but they are MUCH more expensive (think: $1,000–$4,000 US for software scopes, including the computer and I/O card, vs. $6,000 for a bare-bones hardware scope).
- They eat desktop real estate: Both hardware and software scopes need to display their information to you. This means you’ve got to put them on your desktop, taking up valuable real-estate. One solution I’ve used with software scopes – put them on the B-side of a dual monitor setup. Then, when you’re grading, switch to the B-side. For all other tasks, switch to the A-side for a dual-monitor desktop.
The Upside of External Scopes:
- They don’t slow your system down: The sad truth about running scopes that are included with your NLE or color grading app: they eat CPU cycles – lots of them, often bogging your system down. In comparison, external scopes have their own dedicated processors leaving your main rig 100% dedicated to working with your images.
- They don’t under-sample the image: Because scopes are so taxing on the CPU, most NLE and grading apps under-sample the image, leaving gaps in the data that’s being presented. External scopes show you Every. Single. Pixel.
- They update in real-time: See the above point about why built-in scopes rarely update after you press play. Or, if they do update, they drop their resolution to maintain playback speed. External scopes are dedicated to doing one thing: Updating the waveforms in real time and full resolution. Most external scopes (software and hardware) do this just fine.
- They reveal hardware defects of the computer output: If you use a 3rd party box or card to output your video to a tape deck or external monitors – then external scopes are a must-have. They’re your final check against a hardware failure of your output card. Internal software scopes can show you a lot – but the one thing they can’t show you is the image actually being fed into your monitors and tape decks. External scopes can tap into these feeds and help you find and troubleshoot hardware failures of the video being output.
Popular Software Scopes
If you think it’s time to invest in external scopes, here are some links for you to check out. I’m sticking to software scopes (requires Mac or PC to run) since those are the most affordable:
- – A series of video interviews I did at NAB 2012 with the three major players of software scopes (all listed below).
- ScopeBox 3.0 – An amazingly affordable and versatile set of software scopes. It actually does much more than display waveforms, which may be a bonus for you… or not. Mac only.
- – The most feature-rich software scopes on the market. Also the most expensive. They’re the ones I’ve been using for the past five years. PC only.
- Decklink Ultrascopes – The best-looking software scopes on the market. And trust me, when you spend 45 hours a week looking at scopes, being easy on the eyes counts for a lot. But what they have in beauty they lack in versatility: you had better like how these scopes are laid out, because there are almost no options for changing them. PC (with a Mac version in perpetual public beta).
Frankly, you can’t go wrong with any of the above. And they are all superior to relying on the internal scopes of whatever software you’re running. But no matter what, if you don’t know how to read your scopes and aren’t using them, then you’re just guessing at what your image looks like. You don’t have to be a slave to them – but you do need to regularly consult them.
WEEK 2: Setting Up Your Color Correction Workspace
Monday – 07/09/12
How to Optimize Your Room for Accurate Color Correction
Last week we talked about the “Mind’s Eye” and how it uses environmental cues to manipulate raw retinal data. The end result? What we’re seeing may not be an accurate reflection of reality. In an example, I asked you to imagine a three-day color grading session with different lighting scenarios and how it would mess with your color correction decisions. Today, we’ll examine the core elements you need to control if you want to optimize your room to allow you to make accurate (and consistent) choices. At the end of this article, you’ll get specific recommendations on how to set up your room.
If you address these core considerations, not only will your decisions improve – those decisions will also be accurately displayed in any room that follows these design guidelines. In other words, you’ll be able to stand by the quality of your work.
What is the first core consideration you absolutely must control? Your room’s ambient lighting.
Why Do You Need to Care About Ambient Lighting?
I’ll answer by asking you to imagine that you’re grading in a room with windows with all the curtains raised, and beautiful natural sunlight bathing the room in an early-morning glow. As the day progresses and passes into high noon, the light gets harsh and loses its color. Around dinner time, the room is bathed in the rich, diffused colors of sunset. And throughout the day clouds pass overhead, variably dropping light levels and giving a blue quality to the light.
Now imagine: You walk in the next morning and the client joins you. You close the shades, turn on your monitor’s backlight and review the previous day’s work. How much time do you think you’ll spend revising shots? How do you think the decisions you made at high noon will be compared to the decisions you made bathed in the light of a warm sunset?
I’d say you’d have a whole bunch of revisions you’d be making. Why? Because as we learned last week, our visual system doesn’t see in absolutes. It sees via comparison. The quality of light hitting your monitor changes the colors and contrast of the images you’re looking at. And as that light changes, so do our decisions.
If our corrections are going to be consistent and well-considered, then we need to control every element of our viewing environment. First and foremost, that means we need to control the lighting in our room.
What Room Elements Are We Controlling?
You need to control ALL of these elements in your physical space if you want to have confidence in your decisions:
- Time of day: Got windows? What a shame! Because they have to be blocked out, as explained above. Ideally, think: complete blackout. In my experience, this means – at minimum – using blackout shades. I like to add thick black foam board that can be exactly cut to the dimensions of the window (which has the benefit of being non-destructive, should you move your room to a new location).
- Color Temperature: Why control color temperature? Read this 2007 post from my company’s blog which explains why I implemented industry-standard lighting for my color correction suite. The rationale stands to this day. Back on that blog, I posted a few days later about a much more succinct reason why industry-standard lighting is so important. And if you want a chuckle, read how my lighting decision had some unintended consequences.
- Ambient Light Level: Not much has been blogged about the influence of room brightness on the perception of shadow detail. But if you hang out in color grading forums and email lists long enough, the subject will arise. And for good reason. The brighter the room, the less detail we see in the blacks. And that forces us to increase the overall brightness of our images – resulting in overly bright pictures that look nuclear when watched in more subdued viewing situations. We need to control the overall light levels in our room.
- Reflections and Light Leaks: We need to control for reflections on our reference monitor, which wash out portions of our image and cause us to make bad decisions. Even worse, if these reflections and light leaks come from sources that aren’t at the proper color temperature, then both contrast and color choices become compromised. Typical sources you need to control:
- Door cracks
- Overhead lights
- Client task lighting
- Computer screens
If you actively take steps to control all of these elements, your confidence level in your color corrections will rise dramatically (as it should!). We must control our ambient environment because it contains all sorts of clues that our brain uses to re-interpret our raw retinal data. If you don’t actively control all these elements, then everything else I advise in this post will be defeated.
And we can’t stop here. There’s one more physical element that MUST be controlled once the overall room is tamed. We need to control what we see in our field of vision when looking at our reference monitor. In our business that’s called the Surround Field.
How Your Monitor’s Surround Field Can Make or Break Your Color Corrections
Do you remember the Brightness Contrast Effect from the second post in this series? If not, go back and read it because it’s an essential element you must understand. Here’s how it applies to our business: if the wall surrounding your monitor and within your field of view is very bright, you’ll make very different contrast decisions than if that wall were very dark. The Brightness Contrast illusion explains that our brain doesn’t make objective visual decisions. Rather, it takes in everything it sees and shifts the details around depending on the overall image.
Since we never sit close enough to our monitors to have them fill 100% of our field of view, we have to control everything else that we see surrounding our monitors. And that (usually) means the wall behind the monitor.
But we’re not only controlling the Surround Field’s brightness
Do you remember the Cube Illusion? That concept applies here, too. Even if our back wall is lit to the proper brightness level (discussed at the end of this post), that effort will be completely defeated if the wall is red. Or baby blue. Or sunshine fuchsia.
The color of our Surround Field will adjust our eyes’ sensitivity to that color, radically altering how we color grade. As the day progresses, our eyes will desensitize to the surround color and we’ll keep wanting to add more and more of that color into our images. This can become a vicious cycle as each day our eyes start fresh early in the morning but are completely desensitized by the end of the day. Eventually we’ll declare ourselves terrible colorists, when in reality we just have a poorly controlled room.
Our mission is clear: we want a nice, neutral backing in our Surround Field. It shouldn’t be too bright or too dark. And it should not have a color bias.
But before I reveal the specifics on how to manage our Surround Field, there’s one other element in our physical space we need to control – our Reference Monitor.
Does Your Reference Monitor Match Your Surround Field?
In a moment, when I lay out specifics for setting up your room, you’ll come to realize the single brightest element in the room is the reference monitor!
All monitors are a source of light. Those sources of light can be set to different color temperatures. They can be very warm or super-cold. Or anywhere in between. And this color-temperature – or white point – determines what white looks like.
(Note: If you’re unsure what I mean by color temperature and how it relates to white point here’s a terrific article explaining this concept.)
You’ll want the color temperature of the light shining on your surround field (often called a bias light) and the white point of your monitor to match each other. Otherwise, you’ll be working with mixed color temperatures which will – you guessed it – throw off your color grading decisions.
And since the white point of professional monitors is determined by groups like EBU and SMPTE, we should set the white point of our monitors to those recommendations. This then determines the color temperature that we use for the bias light in our room.
Now that we’ve got our terminology explained, let’s look at the settings you should use to properly set up a room for color correcting.
Recommended Room Settings (for HD)
These settings are for working in HD on a direct view monitor (as opposed to a projector) in most regions of the world:
- Ambient Light Level: According to the Color Correction Handbook, “the SMPTE recommended practice . . . [is] just enough [light] to see your controls. No light should spill onto the monitor.” Yes, that’s a subdued light level. But NOT total darkness.
- Room Color Temperature: The light bulbs in your room should match the monitor color temperature. For most people reading this, that will be 6500 Kelvin. You’ll also want light bulbs that have a high CRI rating (the higher the number, the more accurate the color rendition of the light bulb). Above 85 CRI is good, but above 90 is much better. A website that many of us use to purchase our 6500K lights is CinemaQuest’s Ideal Lume brand of bias lights. They’re relatively inexpensive and work as advertised. (While you’re at their site, check out this page on bias light basics; it explains why we use surround lights behind our monitors.)
- Surround Field: The color of the wall behind your monitor should be a neutral gray. A standard practice is to purchase a photographic 18% gray card, bring that to a paint store and ask them to match that color. Tell them you want to use a white base and ask if there’s any way they can use just black pigments to create that color. If not, the gray will push a little warm or a little cool – but that’ll be far superior to whatever your wall is painted at this very moment. After the wall is painted, you’ll then need to splash light onto it (the bias light) from behind the monitor. You want the light to cast a gradient, brighter at the bottom of the monitor and darker towards the top. Basically, the light is positioned slightly lower than the monitor and points upwards and back. The really important thing: no one in the room should be able to see the light itself. According to the Color Correction Handbook, the bias light should be “no more than 10-25% of the peak light level of your monitor displaying 100% white.” Here’s a good discussion of painting the surround wall and some recommended formulations. I agree with Walter Biscardi in this thread – the wall will look bluish as a result of the light splashing onto it.
- Monitor Color Temperature: 6500 Kelvin. This setting is selectable on your monitor. If it isn’t, you probably need to get a better monitor (unless you know for a fact it’s set for 6500K).
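That 10-25% bias-light rule of thumb translates directly into numbers. A quick sketch, assuming a monitor calibrated to a hypothetical 100-nit peak white (your actual peak depends on your calibration target):

```python
# Bias light target: 10-25% of the monitor's peak white level
# (per the Color Correction Handbook recommendation quoted above).
# 100 nits peak white is assumed here purely for illustration.
peak_white_nits = 100.0

bias_low = 0.10 * peak_white_nits    # lower bound of the target range
bias_high = 0.25 * peak_white_nits   # upper bound of the target range

print(f"Aim the bias light between {bias_low:.0f} and {bias_high:.0f} nits")
```

So for a 100-nit display, the light splashing the surround wall should land somewhere between roughly 10 and 25 nits. Brighter peak white means proportionally brighter bias light.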
Learn More About Optimizing Your Room for Color Correction
You shouldn’t be surprised when I say that we’ve only just covered the basics when it comes to setting up your room. I strongly recommend purchasing the Color Correction Handbook, which covers this topic in far more exhaustive detail, much more concisely and in a greater variety of situations. Room setup is an extremely important topic and the more you know about it, the more confident you’ll become – and the more your work will benefit.
Tuesday – 07/10/12
Selecting A Monitor for Color Correction
I’m about to step in it, aren’t I? On the internet, more electrons have been discharged on this topic than on any other concerning color grading. In the end, those discussions boil down to one question: what’s the least expensive monitor I can buy for color grading?
What a loaded question!
You: I can download a full-blown color grading app for free – DaVinci Resolve (used on Avatar) – but I have to buy a $5,000 piece of gear to properly see the images it creates???
Me: Well, yeah . . . kinda.
You: But not everyone is working on the next Avatar – what about the rest of us?
Me: Okay, okay – I hear you. Here’s my 2,000-word blog post to answer your question.
I feel your pain. But I can’t directly answer your question until I know we’re both talking about the same thing.
So let’s lay down some basic terminology and look at all the elements we want in a solid professional monitor. A monitor that allows us to do our work confidently. The kind of work that we can stand behind no matter where else our clients take their footage to be viewed. If you’re reading this we’ll assume you’re either a professional or soon hope to become one. If you want to color correct for paying clients you need to get used to the idea of having an external monitor.
Let’s begin at the beginning.
What is an external monitor?
An external monitor is NOT your computer monitor. It’s not the small window in Final Cut or Avid. If you can adjust the settings of your monitor from the control panel that ships with the Operating System of your computer… it’s not external monitoring.
An external monitor has one job and one job only: to take the video output of your computer and display it – full time, at full size.
Now, your app may display overlays and on-screen widgets in the external monitor, but you can’t actually mouse over that image. You control those widgets from the Graphical User Interface (GUI) of your computer’s display.
The point of an external monitor is simple.
It bypasses your computer’s color management system; that job is off-loaded to dedicated video hardware. The monitor is fed from a specialized board or box that’s hooked up to your computer with special software drivers installed. You enable this board or box in your NLE or color grading app, and it routes the video signal through this specialized hardware and into your external monitor.
The external monitor is often called the God Monitor. In other words, if there’s any doubt which monitor in the edit room is correct, The God Monitor has final say. It is the final reference. Or reference monitor.
The external monitor should be the most accurate, truthful display in our room. Heck, the whole point of controlling our room’s lighting (see: yesterday) is for our brain to get the most accurate rendition of the image as possible. And unlike consumer displays, we want an accurate monitor – not a pretty monitor. That’s why we want a monitor with as little processing as possible. We don’t color correct with our display set to ‘Cinema Mode’ or ‘Afternoon Mode’ (typical settings found in consumer LCDs and Plasmas). We use reference monitors that are calibrated to industry standards.
We begin by properly routing our pictures into this monitor.
Hooking up your external monitor to your computer
I’m not going to go too deeply into these options. I’ll just guide you in the right direction. Generally, you’ve got two choices for getting the image out of your computer and into an external monitor:
- PCI Card – Desktop computers that have slots for cards usually go this route. The two most popular choices are Decklink and Kona cards. I’ve used both. I like both. My decision of which to buy is based on the software I use and which cards support it. Generally, AJA cards have more sophisticated software controls and Decklink cards are more affordable with a wider range of price points.
- External Box – These solutions are perfect for laptops and also work with towers and iMacs. Matrox has specialized in these for years. BlackMagic has gotten into the game with their UltraStudio product. AJA has their Io family of boxes. But what if your computer only has a Thunderbolt output? No more excuses, thanks to AJA’s T-Tap – which will convert a Thunderbolt signal to 10-bit SDI that can feed your reference monitor.
Again – the point of these cards and boxes is to off-load color management from the CPU and onto these devices. Can you make do using the output of the graphics card feeding your computer screen? Yes. And if you search the internet you’ll find a variety of these DIY solutions. But for every person who gets it right, dozens more get it wrong. And if it were that easy, AJA, BlackMagic and Matrox would have gone out of business years ago.
Me? I stick with the proven solutions. I suggest the same for you.
Once we decide how we’re going to hook up our monitor to our CPU we then need to decide: which monitor? For that, let’s consider our minimum feature set.
Essential elements of a professional monitor
If we want our monitor to accurately display the pixels being fed into it, there are some core features we need to expect:
- Saves calibrations internally – This has been true since the days of the CRT. We want our adjustments to be saved – not to the computer, but on the monitor itself. It eliminates the possibility of the Operating System overriding our settings and adds a level of confidence to our work. This is one of the things that made the DreamColor such a great LCD when it first shipped. It saved its settings internally and massively simplified calibration. (Plus, it was 10-bit – all for the low low price of $2,500.)
- Saves calibrations per input – You may have noticed that I haven’t advocated one input format over another. There are a ton of different ways to send images from our computers into our external displays. A professional display will allow each of the different inputs to be calibrated separately. There will be different inputs for HDMI, SDI, Component, Composite, Dual-Link, etc. Since each of those technologies can potentially affect the signal differently, we need to calibrate each one separately – and save those calibrations on the display itself.
- User Selectable White Point: At the very least, your monitor needs to have a white point of D65. If it’s set for anything else then you need to be able to select and change the white point to D65. (If you missed it yesterday, a good discussion on color temperature and white point.)
- Multiple Gamma Settings: Until now we haven’t talked much about gamma. It’s one of those catch-all terms whose meaning changes depending on what you’re talking about. (Get started on understanding gamma as it relates to images and displays.) For our purposes today, just know that we need to be able to adjust the gamma settings on our displays to be between 2.2 and 2.6, depending on whether we’re creating pictures for the internet, television or cinema. The exact setting you use is a bit controversial (except for Digital Cinema, which is specified for a gamma setting of 2.6, and Europe, which has a standardized setting of 2.35). Assuming you’re going to be working on a range of projects, your monitor needs to be able to handle a range of gamma settings.
- Proper Color Gamut – This is a fancy term for “the colors a monitor can accurately display.” Professional monitors have specialized processors designed to carve out a precise color gamut. For us, that usually means the HD color gamut (Rec. 709) or the Digital Cinema color gamut (DCI-P3). Consumer displays will often tout their huge color gamuts – but without the proper image processing, their color reproduction can actually suffer. Those wide-color gamuts need to be tuned to very precise specifications, part of the reason why professional monitors cost so much extra money. Every manufacturer of pro monitors has their own “secret sauce” of how they do this.
- 10-Bit Resolution – Most consumer monitors display 8-bit images. But many of our cameras and post-production pipelines record at 10-bit or higher depths. If possible, we want to monitor at the highest bit depth we can, so we know what the heck kind of pictures we’re producing. For most of us, that means using a 10-bit display – or an 8-bit display using advanced imaging algorithms to tweak it to look like a 10-bit display. (Confused by 8- and 10-bit imaging? Read this on color processing.)
This list should start to explain why professionals don’t go out and buy the cheapest monitor they can find. Accurate monitoring isn’t simple. Once we set our white point, gamma and gamut, we then need to calibrate the input we’re feeding into the monitor and save that calibration. If we change any one of these variables, we must recalibrate and save.
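The gamma and bit-depth points above are easy to put numbers on. A short sketch showing how the same 50% signal displays darker under a higher display gamma, and how many code values per channel 8-bit and 10-bit pipelines give you:

```python
# Display gamma: light output is (roughly) signal raised to the gamma power.
# The same 50% signal renders noticeably darker at gamma 2.4 than at 2.2 --
# which is why a mismatched gamma setting skews every contrast decision.
signal = 0.5
for gamma in (2.2, 2.4):
    light_out = signal ** gamma        # relative light output, 0..1
    print(f"gamma {gamma}: {light_out:.3f}")

# Bit depth: distinct code values per channel. 10-bit gives 4x the steps
# of 8-bit, which is what reduces visible banding in smooth gradients.
for bits in (8, 10):
    print(f"{bits}-bit: {2 ** bits} levels per channel")
```

Roughly 0.218 vs 0.189 relative light output for the same signal, and 256 vs 1,024 levels per channel. Neither calculation is the whole story of display calibration, but both show why these settings matter.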
Consumer vs Pro Displays
This brings us to the argument of our time: can a consumer display do the same job as a pro display? Can we get accurate pictures from consumer displays at significantly lower price points?
The answer is: maybe.
But there are hidden costs. The calibration requirements for a consumer display are where it’ll hit your pocketbook. To accurately profile a plasma or large LCD is difficult. At minimum you’ll want to hire a certified calibrator to profile your display and dig into the service menu to tweak the tracking of the RGB channels from black through gray and into white. For people who want to save money on their displays, the thought of investing in an $800 calibration for a $1,500 display defeats the purpose.
But my answer isn’t very helpful if you want specific advice. And since I don’t have those answers (except, maybe, to look only at THX-certified displays), I direct you to this thread on the color grading forum LiftGammaGain.com. The high-level discussion is full of useful recommendations. Also, Alexis Van Hurkman offers up some specifics in this 2011 post on three displays he thinks fit the bill.
The Display I Use
In 2011, I finally retired my aging Sony CRT and replaced it with a Flanders Scientific (FSI) 24″ LCD reference monitor. (Full disclosure: FSI is a fiscal sponsor of my website and Newsletter – but I’m NOT on commission!) I chose them for several reasons:
- They’re a boutique – Like me, they’re a family-run operation. After spending decades working for the “big boys,” the FSI founders decided to build their own monitors – several that are reference-quality. And they sell direct, cutting out the middleman and passing on the cost savings to the consumer.
- Professional features – Every display has all of the input options that a post-production professional expects. They have all the features I’ve outlined above, and then they go overboard with all sorts of overlays, scopes, audio monitoring and much more (scroll down here to see the full feature set and see what I mean).
- Free factory recalibration – This is a huge selling point. Whenever you want, how often you want – you pay for roundtrip shipping and FSI will recalibrate your monitor for free. Usually it’s same-day turnaround. And these guys are serious about calibration – using multiple devices to ensure that your monitor is not just accurate, but that the device doing the calibration itself hasn’t started to drift!
- They’re teachers – Got a question? Just call. They love talking to their customers (and potential customers) and will get you the answers you need.
Yeah, I’m a huge supporter of FSI. And not simply because they’re a huge supporter of TaoOfColor.com. They have integrity and I’m happy to give them my business. I’m also happy to direct you to this webpage I put together about FSI and a variety of resources to learn more about them.
Choosing your reference monitor can feel overwhelming. And the less money you have to spend, the more overwhelming it becomes. For good reason: below $2,500 there are no good choices, just decent compromises. If you’re cash poor and looking for specific recommendations, then read that thread on LiftGammaGain and Alexis Van Hurkman’s blog post. There’s a ton of great advice between those two sources, almost all of which I agree with.
Wednesday – 07/11/12
To-Grade or Not-To-Grade In Your NLE?
The theme of this week is setting up your room. And once you’ve worked out all the physical elements of your color correction space, next you’ve got to decide HOW you’re going to color correct. As in: what software are you going to use?
In your non-linear editor (NLE), or in a dedicated color grading app?
As a full-time professional colorist, I’ve clearly got my preference. But I was once an editor-colorist before I became a colorist-editor before I (finally) lost the hyphenate and specialized as a colorist. So I’ve been around this block and know the neighborhood. I can’t dictate what’s right for you, your career, or your clients. But I can share some questions to ask yourself, to help you decide what your best course of action should be.
As usual, I like starting with The Really Big Question…
DO you want to color correct?
This is a basic question but it has to be asked. Do you personally want to bother with color correction at all? When we work our way down into weeks 3 and 4 of this series you’ll realize that color correction is its own skill set. Yes, we’re telling stories and it’s an extension of both the DP’s and the editor’s work – but color correction engages a different part of your psyche. I know editors who would rather poke out their eyes than spend two days doing a careful color correction on their 25-minute show.
As you travel this road I advise that you pause from time to time. Ask yourself, “Do I really want to do this?” If the answer is yes, then the next question is, “What kind of grading do I want to do?”
Simple or sophisticated?
Assuming no deadlines and no budget constraints, any modern NLE can do sophisticated color grading (and we’ll get to what “sophisticated” means later in this post). But that’s not most of us. Most of us have constraints – including the ones we put on ourselves, such as, “I want to edit features.” Or, “I’m going to specialize in editing on corporate projects.” Or, “I want to be an editor who can save shots and make them match.”
Do you see what I mean?
Think of color grading as a two-step process:
- Fix problems and make the image look good (color correction)
- Enhance the image and control the viewer’s eye (color grading)
Your mission is to determine how deep you want to get into the process. Doing so will help you determine if you want to do your color grading in the NLE or if you want to invest your time and energy into learning dedicated color grading software. As a general rule, solid color correction (step 1) can be done in almost any app. But the more time you want to spend delving into color grading (step 2), the more you’ll want to consider jumping out of the NLE and into more specialized software.
But before you fully commit to one direction or the other (and you ARE allowed to change direction any time you want), let’s take a broad overview of the general pros and cons of color grading in your NLE versus using color grading software. At least you’ll make an informed choice.
Color correcting in your NLE
In this category I’m talking about grading in FCP 7, FCP X, Avid and Premiere Pro.
The pros:
- Simple: There’s nothing simpler than finishing the edit at the end of the day and starting color correction the next morning. Easy-peasy.
- Fast to learn: Most NLEs have simple interfaces for making color decisions. And most have a “hidden” filter or two that are vastly under-used but very powerful. Essentially, NLEs limit your choices in how you’re going to color correct, which, in turn, makes them easy to understand and master.
- Plug-ins: Most NLEs have a rich eco-system of plug-ins that can radically expand your color correction options. You can add these to your toolbox as you get more comfortable color correcting.
- Stronger “auto” tools: Auto-balance, auto-match and auto-remove are all features most NLEs implement. Some do a very good job, others not so much. But all usually do a better job than the auto tools built into dedicated color grading apps (which assume most auto corrections won’t give you the result you’re looking for).
The cons:
- Simple: Easy-peasy means it can be a real chore when you work on a project that you’d like to take to the next level. NLE interfaces are optimized for moving clips around – not for giving you a ton of sophisticated color and contrast controls.
- Clumsy: Plug-in interfaces can be really annoying. Lots of scrolling fields of sliders and radio controls all mashed together.
- Distractions: You’re in an NLE, right? Moving between color correcting then fixing a lower third then tweaking an edit then finessing an audio transition – it’s all simply a shift in your focus. And shifting focus can be very distracting while color grading. Ours is very much a momentum-based, get-in-the-groove activity. If your client is constantly bullying you into switching between tasks, it’s unbelievably distracting and can hurt your final color grade.
- Control surface support sucks: Most NLEs support some brand of colorist control surface to help manipulate their color correction plug-ins. Most of this support (excuse my language) blows chunks. The only one I’ve found that is decent is the Avid Artist Color with Media Composer and Symphony. It’s not perfect, but for color grading in an NLE it stands tall above other surfaces controlling other NLEs. (If you want to learn more about control surfaces check out this post on deciding if you might want one.)
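As an aside, the “auto” tools mentioned in the pros above typically rely on something like a gray-world assumption: average each color channel across the frame, then scale the channels toward a common neutral. This toy Python sketch (hypothetical function, normalized 0.0 to 1.0 RGB values) shows the idea; real implementations are far more sophisticated:

```python
def gray_world_balance(pixels):
    """Auto-balance via the gray-world assumption: scale each channel
    so that R, G and B average out to the same neutral gray.
    `pixels` is a list of (r, g, b) tuples, values in 0.0 to 1.0."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3.0  # the neutral target
    gains = [gray / a if a > 0 else 1.0 for a in avg]
    return [tuple(min(1.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]

# A warm (red-heavy) cast gets pulled back toward neutral:
warm = [(0.8, 0.5, 0.3), (0.6, 0.4, 0.2)]
balanced = gray_world_balance(warm)
```

Notice there is no aesthetic judgment in there at all, which is exactly why auto tools break down on shots that should be warm or cool on purpose.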
Color correcting in a dedicated app
In this category I’m talking about grading in Apple Color, Adobe SpeedGrade and DaVinci Resolve.
The pros:
- Multiple plug-ins at once: I often describe working in these apps as having six different color correction plug-ins available to me simultaneously. I touch them, they’re activated. I reset them, they’re disabled. But I never have to hunt for them. They’re waiting to jump into action at a moment’s notice.
- More sophisticated grading: It’s much easier to work on smaller portions of the image. For example: I need to tone down a practical light in the background, plus isolate an actor’s face and adjust their skin tone, plus make the sky more moody, plus back off the intensity of the grass and finally add a very soft vignette to help focus our eye on the actor… no problemo. This type of image segmentation (as my buddy and color grading author Alexis Van Hurkman likes to call it) is a breeze. In fact, it’s one of the main reasons for color grading in these types of apps. And if this kind of control appeals to you, you’ll be hard-pressed to do what I just described in most NLEs. Not without suffering some pain.
- Multiple instances of multiple plug-ins at once: How do apps like Apple Color, Resolve, and SpeedGrade allow us to pull off these sophisticated grades? Remember the notion of having many plug-ins immediately available to us? We can also add multiple instances of them to a single shot with a simple button push. In this sense, color grading is working in layers – or as I like to say, we’re “grading in passes.”
- Special features: Here’s a list of typical features in a color grading app you won’t find in most NLEs:
- Still Store – Snapshots of each shot that can be pulled up for reference without hunting in the timeline
- Trackers – Track windows (which have color corrections applied to them) onto moving objects
- “Snippets” – This is my term for saving and quickly adding little bits of corrections for common problems – or full-blown Looks that you might want to access in later projects with different clients
- Fast: This is a different kind of speed than we saw in NLEs. This is a getting-more-done-in-less-time kind of speed. It comes from the multiple plug-ins in multiple instances I described earlier. It comes from having those little snippets ready to go. It comes from an interface designed to do one task and one task only.
- Faster: If you go ahead and add a Colorist Control Surface, then you can leverage even more speed out of your software. No longer are you relegated to color correcting with the tip of a fingernail. Now you’re grading with two hands and ten fingers on a physical object. Now you can build muscle memory. Here’s a post of mine from 2007 – a diary of my transition off a mouse and onto a colorist control surface. It documents my initial productivity gains when moving off the mouse.
- Fastest: Many color grading apps have their own custom control surfaces. If you’re thinking of color grading full time then you should be thinking about owning one of these. Dozens of controls are at your fingertips. Literally. Once you build the muscle memory, getting through 700 shots in a day (while giving each shot its due consideration) is entirely possible.
The cons:
- The pain of round-tripping: Moving your timeline out of your NLE and into the color grading app can be a seriously painful maneuver. The #1 rule is: Do No Harm to the Timeline. But when you think about how many different representations of a timeline exist between the four main NLEs being used today, it’s easy to understand how a color grading app can seriously mangle a timeline. Extra time needs to be taken when moving the timeline around. This is one objection to working outside the NLE that must always be seriously considered.
- Requires learning new software: Yup. Another big reason not to move outside your NLE. Not only do you now have to learn the art and craft of color grading, you’ve got a new interface to learn AND you have to become a workflow specialist to move your timeline around. If you’re lucky, some of these chores can be handed to an assistant and you can focus just on doing the grading.
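The “grading in passes” idea above can be thought of as simple function composition: each pass takes the image handed to it by the pass before. A toy Python sketch (hypothetical names, with a single normalized pixel value standing in for a whole image):

```python
# Each "pass" is one correction stacked on top of the last, like adding
# another instance of a color correction to the same shot.
def apply_passes(value, passes):
    for correction in passes:
        value = correction(value)
    return value

primary   = lambda v: min(1.0, v * 1.2)   # pass 1: overall exposure
secondary = lambda v: v ** 0.9            # pass 2: lift the midtones
vignette  = lambda v: v * 0.95            # pass 3: gentle darkening

graded = apply_passes(0.5, [primary, secondary, vignette])
```

Resetting a pass is just removing one function from the list; nothing else in the chain has to be rebuilt. That is the workflow advantage in a nutshell.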
Bringing it together
So what do you think: To-grade or not-to-grade in your NLE?
The decision is a mix of the types of projects you work on (and their constraints), the expectations of your clients and your own personal career goals. Being a full-time colorist is not for everyone. Being an editor who can handle 90% of the color grading opportunities that come your way might – at minimum – mean a few extra days of bookings per month, and it could be the difference in getting a job (assuming you market this skill).
Personally, wherever you decide to grade your project, I’m just psyched that you’re grading at all!
There’s way too much ugly video out there. And it doesn’t have to be. So wherever you are in your understanding of color grading – stick with us for the rest of this series and join the revolution in delivering high-quality images for ALL your projects!
Thursday – 07/12/12
Prepping a Timeline for Color Grading
After we’ve decided in which piece of software we’re going to do our color grading (see yesterday’s post), it’s time to do a bit of prep work. We need to organize ourselves and optimize our timeline. I call this: timeline prep.
This is true even if you’re grading inside your NLE. This is especially true if the timeline is being sent to another piece of software. And it’s absolutely, positively true if someone other than the editor is going to be handling the color grading session.
Whether you’re grading in an NLE or in something like DaVinci Resolve, there’s some basic tidying you’re going to want to do. And once again, yes, this includes you, editor-colorist who knows the timeline inside out!
Grade Prep: Basic Chores
This is basic stuff, but I strongly suggest you follow these guidelines. Once you enter the color correction stage you’ve (mostly) exited editorial and have entered the finishing stage of your project. Do you really need every single one of your dailies, all your stock footage and every sound effect you tested out at your fingertips? No. You don’t.
The first thing I advise is cleaning up your workspace and saving the approved locked picture:
- Duplicate the timeline: While you’re at it, this is a great time to start a brand new project with only the locked picture timeline inside it. Especially if your system has been getting sluggish or acting funky. But no matter what, we need a duplicate of the timeline as a backup since we’re about to start cleaning it up for color correction.
- Simplify the timeline: Are you an editor who likes to “build up” your timeline? You know you are, don’t deny it. Slowly scroll through your timeline and delete any clips that aren’t actually visible in the final picture. This includes alternate angles, on-camera interviews that are covered by B-Roll, alternate B-Roll choices, etc., etc…
- If you need to get back to those shots, you can always open up the timeline that you duplicated. The point here is to keep you from wasting your time color correcting shots that aren’t visible.
- This step also reduces your stress level, as your subconscious mind doesn’t keep thinking that every one of those “underneath” shots needs to be corrected.
- If you’re handing off the color correction to someone else, you absolutely must complete this step! No one knows this timeline like you – why waste their precious time by forcing them to decide, shot-by-shot, for hours on end, which shot needs color correcting and which doesn’t?
- Turn off (or remove) all titles and lower thirds: If you’re sending the timeline to another piece of software, titles and graphics may need to be completely removed, or they may be able to stay; it depends on the software.
- Neaten the timeline: As you simplify the timeline, drop all the video down to the lowest video track possible. If you have multiple shots – say, used in a split-screen or some other special effect or transition – then it’s okay to build up on multiple video tracks. It lets the colorist see the images in context. (This is also a good time to put all your graphics on one video track, text on its own track, lower thirds on their own tracks, etc.)
- Remove all color correction filters: Consult with whoever is handling the color correction. If that person is anything like me – and unless you’re the editor-colorist – they’ll want to work from clean footage. They don’t want to spend time figuring out if problems they’re seeing are with your work or the underlying shot.
Yes, when it comes to finishing and color correcting, cleanliness counts. This may seem like a tedious chore (it is) but it’s necessary. And by turning this stage of the process into a habit, when the day comes that someone needs to pick up where you left off, you’ll be a friggin’ hero!
Be courteous to yourself and your peers. Clean up your timeline at picture lock!
Decision Time: What to do next?
What you do next depends on where you’re going to color grade. If you’re grading in your NLE, then prep time is done. Start grading.
If you’re grading in an external app there are two basic approaches; this should be decided on in consultation with the colorist. Sometimes the software will dictate the decision. Sometimes project constraints will dictate the decision (To-Grade or Not-To-Grade In Your NLE). No matter what, bring the colorist into this conversation.
Generally, it boils down to two workflows:
- 1: Export a flat, textless version of the timeline (with an EDL)
- 2: Provide all the individual camera originals on a hard drive (with an XML or AAF)
Exporting a single file (with EDL)
The first workflow is the simplest, fastest, easiest. The colorist takes one long movie, cuts it up into individual shots using the EDL as a guide – and away they go. They then render out a single file, and it gets imported back into the NLE. All the graphics and text get placed back on top, and the color correction process is complete.
The drawback: There’s no room for editorial flexibility after color grading. If you need to tweak an edit, the editor has to find the original shot, cut it back in and send that shot to the colorist to be re-graded. That said, it’s a super fast workflow and has been used by elite color grading shops for a decade.
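To see why an EDL alone is enough for this workflow, it helps to know that each EDL event line carries the timecodes marking where every cut falls in the flat movie. Here is an illustrative parser for one CMX3600-style event line; this is a sketch, and real EDLs contain comments, transitions and variations it doesn’t handle:

```python
def parse_edl_event(line):
    """Split a CMX3600-style event line into its fields.
    Layout: event number, reel, track, transition, then four
    timecodes (source in/out, record in/out)."""
    fields = line.split()
    return {
        "event": fields[0],
        "reel": fields[1],
        "track": fields[2],
        "transition": fields[3],
        "src_in": fields[4], "src_out": fields[5],
        "rec_in": fields[6], "rec_out": fields[7],
    }

event = parse_edl_event(
    "001  TAPE1 V  C  01:00:10:00 01:00:12:00 00:00:00:00 00:00:02:00")
# The record in/out pair marks where this shot sits in the flat export,
# which is all the colorist needs to slice the single movie into shots.
```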
Color grading from camera originals (with XML or AAF)
The second workflow has the advantage of offering much more flexibility after color grading. You’re providing the colorist the raw camera footage. The color correction software will recreate the timeline (as best it can). When completed, the colorist will render out each shot individually, with whatever handles (frames before and after the start and end of each shot) the editorial team specifies. Using this workflow I’ve had clients re-edit entire scenes, and with just two seconds of handles they had what they needed for the re-edit – they didn’t need to go back to the original shot to pick up the few extra frames they wanted.
Also, in workflows using codecs like RED’s r3d, this workflow is the only way to color correct off the raw camera originals.
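Handles are easiest to reason about in frames: each shot’s render range is just its source in/out padded on both sides. A quick sketch, assuming a constant frame rate and ignoring drop-frame timecode:

```python
FPS = 24  # assumed project frame rate for this example

def tc_to_frames(tc, fps=FPS):
    """Convert an HH:MM:SS:FF timecode string to a frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def render_range_with_handles(src_in, src_out, handle_seconds=2, fps=FPS):
    """Pad the shot's source range by `handle_seconds` on each side,
    so the editor can trim after the grade without a re-render."""
    pad = handle_seconds * fps
    return (max(0, tc_to_frames(src_in, fps) - pad),
            tc_to_frames(src_out, fps) + pad)

# Two seconds of handles on each side of a ten-second shot:
start, end = render_range_with_handles("01:00:10:00", "01:00:20:00")
```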
In my next post we’ll continue discussing how to prep a timeline – this time for color grading from the camera originals in a dedicated app like DaVinci Resolve, Adobe SpeedGrade or Apple Color. This workflow is generally called Round-tripping. It can get a tad bit complicated…
Friday – 07/13/12
Prepping A Timeline for Color Grading, Part 2: Round-tripping
When I talk about “round-tripping,” I’m talking about a very specific process that looks like this:
- The editorial team media manages their “locked picture” onto a hard drive.
- The hard drive is handed to the colorist.
- The colorist recreates the timeline, shot by shot, inside their color grading software (using XML or AAF).
- Each shot is graded and then rendered out individually, with handles. (What are handles?)
- The newly graded shots are handed back to the editorial team.
- The editorial team creates a new timeline that’s identical to the original timeline, but all the shots are replaced with our beautiful graded footage.
This is the workflow I advocate in my business for all my indie film clients (and most other projects as well). The notion of “locked picture” is a bit antiquated for most clients and I don’t want to be the Grinch that keeps them from completing their last-minute editorial changes. But the preparation to enable this workflow can be a bit daunting. And that’s what this post is about.
I’m not going to outline the precise workflow for each combination of color grading app and NLE – that would take another full month of blogging (and I’d still miss some variations). Besides, every time one of these apps gets an update, the workflow details change. I’d never be able to keep up!
Instead, I’ll focus on ground rules that will let you prep a timeline for just about ANY color grading software coming from about any NLE. Be aware – depending on the specific combination of NLE and color grading software, some of these steps can be skipped. Consult with your colorist to hammer out the precise details for your job.
Before we dig into how I go about prepping for round-tripping, let’s discuss why we have to go through this process in the first place. And to do that, we have to think like color correction software.
Psychoanalysis: How color grading apps are savants
You need to understand that color grading apps have a seriously deep, deep understanding of one area of video post-production: manipulating and isolating color and contrast. But much like savants, once outside of their area of expertise, they’re seriously shallow apps. This is especially true when it comes to understanding the timelines we send them.
Take, for instance, the notion of mixed frame rates. Color grading apps are completely flummoxed by them. I mean, a timeline can only have one frame rate – right? Why the heck would the footage have a frame rate that’s any different than the timeline?
Right… poor babies… if they only knew.
Like I said: savants. They’re deep in one specific area, but generally clueless.
The best we can hope for is the way Apple Color handles it, which is to not attempt to conform the mismatched frame rates to the timeline’s frame rate. Apple Color punts by rendering the footage at its original frame rate, and lets that crazy kid of an NLE figure out what to do! This is just one example of a workflow gotcha common to round-tripping.
My round-tripping blueprint
Now that you have a sense of why we need to go through all the nonsense I’m about to outline – here’s my blueprint that’ll get you prepped for most color grading round-trips.
- Basic Chores outlined in Part 1: When you’re round-tripping, those aren’t minimum requirements – they are absolute requirements! You do those first. Then you go ahead with what I’m about to lay out in the rest of this post! (If you need to, go back and read yesterday’s post.)
- Export a self-contained reference movie of locked picture: Do this before you do anything else and deliver it on the hard drive you send the colorist. Your colorist will want a reference to check against. This is that reference.
Once those Basic Chores are handled, we can move on to the really tedious stuff:
- Deal with freeze frames: Grading apps get super confused by shots that have a frame rate of zero! I mean, is that even a shot? And how can a shot with a frame rate of zero sit in the timeline for two seconds? We need to help our savant by fixing this timeline paradox and eliminating the freeze frame from the timeline. This can be accomplished in one of two ways:
- Delete it: If the freeze frame is merely a freeze on a frame of the shot before/after it, you can delete the freeze and then recreate it after the color grade. Me? I’m not that organized. I use this next method…
- “Bake in” the freeze frame: By “baking it in,” I mean export out the freeze frame as a self-contained movie. Then re-import it and cut it back into the timeline, replacing the freeze frame. And if there are any effects, resizes, or keyframes on that freeze, you’ll either bake them in when exporting out or you’ll remove those elements and recreate them after the self-contained movie is cut back in (I advocate the latter).
- “Bake in” variable-rate speed changes: Criminy! More frame rate nonsense! Luckily, constant-rate speed changes have been with us in post-production for over 30 years, and today’s color grading apps seem to handle them pretty well. No worries there. But variable-rate speed changes? Grading apps explode at the thought of a single shot changing its frame rate throughout the shot. Even worse, different NLEs have completely different ways of “describing” how the variable change actually happens. These types of speed effects must be dealt with ruthlessly. Personally, I’d prefer to delete them. Since we can’t, bake them in: see the freeze frame method above. It’s the exact same concept.
- Remove color correction filters: Some color grading apps can import the color correction filters that are applied to a shot. Often those imports look like crud. The raw numbers applied to the color correction may be the same but the final look can be completely different! Then there’s the question: Do I, the colorist, want those corrections? Or is it faster to just delete all those corrections, look at the raw footage, and grade onward? I fall into the latter camp. Consult your colorist.
- Identify non-standard “shot containers”: By this I mean nested clips, collapsed clips, nested timelines, multi cam, secondary story lines – all examples of containers that look like a single shot but contain multiple shots. They are not, themselves, shots. The upshot: look for them, notate the different types of containers that you have and then call your colorist. In some cases, you’ll be just fine. Others are showstoppers. There’s no one rule of thumb on this.
- Deal with mixed frame rates: Have you noticed that three out of the five items we need to fix have something to do with frame rates? So what do you do if you have a timeline that has shots with multiple frame rates? For instance, a 23.98 timeline that has both 23.98 and 29.97 footage. First, consult your colorist. This answer changes depending on your color grading app.
- Apple Color: I know for a fact that Apple Color has no problem with these kinds of timelines.
- Other color grading apps: Consult your colorist. IF your color grading app can’t handle these kinds of timelines, you’ve got two choices:
- Abandon this type of round-trip workflow: Instead, go back to Part 1 of this post and follow the Exporting a Single File (with EDL) workflow detailed there. Or…
- Split the project into multiple timelines: In the example above, I’d put all the 23.98 footage in a 23.98 timeline. Then I’d create a 29.97 timeline with only the 29.97 footage. Each gets sent to the color grading app separately and graded in separate projects. This is an annoying work-around but it’s the only solution if you want to maintain editorial flexibility after round-tripping.
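Conceptually, that split is nothing more than partitioning the clip list by frame rate, one bucket per timeline. A hypothetical sketch:

```python
def split_by_frame_rate(clips):
    """Group clips into per-frame-rate buckets so each bucket can go
    into its own timeline (and its own grading project)."""
    buckets = {}
    for clip in clips:
        buckets.setdefault(clip["fps"], []).append(clip)
    return buckets

# A 23.98 show with some stray 29.97 B-Roll mixed in:
timeline = [
    {"name": "interview_A", "fps": 23.98},
    {"name": "broll_city",  "fps": 29.97},
    {"name": "interview_B", "fps": 23.98},
]
projects = split_by_frame_rate(timeline)  # one grading project per rate
```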
The round-tripping blues
I never said round-tripping was easy. But I can say that, as a colorist, I’ve won some jobs because I’m willing to go through this pain to offer my clients the flexibility they want after the color grade. Once you’ve been through it a couple of times it’s easier than it looks. But yes, the tedium never goes away.
Remember: Consult your colorist on these details. Their specific color grading app may have its own little quirks. But the list outlined above will solve 95% of the workflow issues that destroy most round-tripping attempts.
I love the flexibility of round-tripping and the power offered by dedicated color grading software… even if those apps do act like spoiled little savants.
Got questions about color grading? Get them answered by me, literally! Either use the comment section below or this contact form to leave your question with SpliceVine. You can also shout out your question to me @patInhofer on Twitter with the hashtag #whisperer. On July 20, 2012 we’ll collect them and I’ll do an audio podcast with Eric of SpliceVine – which he’ll publish during Week 4.
Like what you’ve been reading in this series?
I invite you to check out my awesome (and FREE) weekly newsletter: The Tao Colorist. It focuses exclusively on the art, craft and business of color grading – linking out to the best blogs, forum discussions and news articles (plus some funnies) of the previous week.
WEEK 3: Color Grading Fundamentals
Monday – 07/16/12
Color Grading 101: Laying a Solid Foundation
Sheesh, it took us awhile to get here, didn’t it? What with understanding all that how-the-human-eye-works and setting-up-your-room stuff? But much of what I’m going to explain from here on out would be academic if we hadn’t nailed down those previous details first. Otherwise your work would only look as good as it does in your particular setup. And trust me, that’s no fun. What do I mean by that?
We want our color grades to be a center point around which the rest of the world’s monitors will vary: some too warm or too cool, some too bright or too dark. But in most cases, they’ll look generally like what you created.
If you begin with a room setup that is already pushing your color corrections too cool, for instance, then anyone whose monitor is set up on the cool side will look at your work and think it’s freaking arctic! We don’t want that to happen.
So if you don’t understand how to read scopes (or why they’re necessary), or if you don’t know how to set up your viewing conditions for proper color grading, then you should take some time, jump to the table of contents for this blog series and brush up on some fundamentals. When you’re ready, come on back – we’ll be waiting.
First things first. What’s first?
You open up a project. You line up your first shot. And then you do… what?
Luckily, color correction doesn’t have to be an experiment. Or a shot in the dark. As personal finance guru Dave Ramsey says, “Children do what feels good. Adults make a plan and then execute that plan.” Adults in the world of color correction have a plan and execute that plan.
Generally speaking, color grading is a three-step plan:
- Step 1: Lay a solid foundation for each shot in your sequence.
- Step 2: Get the shots to match each other.
- Step 3: Build on that foundation to control the viewer’s experience of the scene.
Each step can (and should) be further atomized. Each step has a few ground rules to guide our movement throughout the color grading process. Not every ground rule needs to be followed, nor does every shot need to go through all the steps. This is a process – like baking bread, where all you need is flour, yeast and water; after that everything else is for taste and texture.
Today and tomorrow we’re going to focus on Step 1: Lay a solid foundation.
But what about the Bleach Bypass Look?
I know, I know. Every time I teach this process, someone wants to jump straight to creating cool looks! I’m here to tell you, this ain’t finger painting! If you want to apply an interesting look to your project, across multiple shots and multiple scenes, then you must lay a strong foundation on your images. Otherwise the look you’re applying will fall apart, be inconsistent, or impossible to manage. Then you’ll start cursing your cameraman, the codec, wardrobe – anyone but the actual culprit: poor color grading technique.
What are the elements of Step 1: Lay a solid foundation?
Atomizing Step 1, we want to (usually in this order):
- Evaluate the image: This helps us create a plan to execute the next three actions.
- Set the overall contrast of the image: Set your black point, white point and midtone gamma. In the process you’re setting the overall mood of the image.
- Fix problems with the image: Identify and fix any combination of – incorrect white balance, weak color channels or heavy color casts from the environment.
- Balance the colors of the image: Final tweaks include – neutralize blacks, set the whites, adjust saturation, adjust the overall color for the significant part of the image.
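The contrast moves in step 2 map onto the classic lift/gamma/gain controls. Grading apps differ in their exact math; this sketch uses one common, simplified formulation on normalized 0.0 to 1.0 values:

```python
def lift_gamma_gain(v, lift=0.0, gamma=1.0, gain=1.0):
    """One common (simplified) formulation: gain scales the whites,
    lift raises or lowers the blacks, gamma bends the midtones."""
    v = v * gain                  # set the white point
    v = v + lift * (1.0 - v)     # set the black point
    v = max(0.0, min(1.0, v))    # clamp to legal range
    return v ** (1.0 / gamma)    # shape the midtones

# A gentle grade: slightly crushed blacks, brighter mids, hotter whites.
shadow, mid, white = (lift_gamma_gain(v, lift=-0.02, gamma=1.2, gain=1.05)
                      for v in (0.05, 0.5, 0.95))
```

Note how the three controls overlap through the midtones, which is exactly why the order of adjustments discussed below matters.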
Creating our color correction game plan
The first thing we MUST do: we need to evaluate the shot. Every shot. And for each shot we’ll develop a game plan. If we’re baking bread, then this step is measuring out the flour and yeast.
- [Video] Brightness & Contrast: Analyzing clips for tonality issues. This tutorial from Jeff Sengstack on Lynda.com does a great job of showing how we use scopes to analyze an image and form a game plan for the adjustments we’ll be making.
- Gray Matter: Evaluating Contrast – How an artist evaluates the tonal range of an image to achieve his goals. Not necessarily the easiest read for non-painters, but there are many interesting points that make it worth reading – especially for experienced colorists.
Executing our game plan
After we’ve analyzed the image, it’s time to start doing some color correcting.
Where do we begin? We start by setting our initial black and white points, then set our midtone gamma – then reset the blacks and whites as necessary. Don’t jump to saturation and hue at this point. That comes later.
And I recommend that your first moves be setting the black and white points before setting the midtone gamma.
On most apps, shadow and highlight adjustments overlap heavily – through the midtones – so I like to start with those two adjustments. Also, on many images it’s not clear what details exist in the shadows and highlights, so I like to pull them together first, see what’s there, and then expand them back out. Once those initial adjustments are set, I then move to the midtone gammas.
Here are several good tutorials on this process that I’ve selected because the concepts taught can be applied to any NLE:
- [Video] Manipulating Contrast: Good tutorial. One point: I don’t agree that our goal is to use the entire range from 0 IRE to 100 IRE. But otherwise, this is a solid explanation of using the Setup / Gamma / Gain controls as well as using Curves. It’s based on Avid Media Composer but good for everyone.
- [Video] Brightness & Contrast: Oh NO!: Knowing what NOT to do is as important as knowing what you should be doing. This tutorial presents the reasons for avoiding the “Brightness & Contrast” filter found in almost every NLE on the market. This tutorial is also a good example of how to use a test image to figure out what a color correction filter is doing to your images.
- : If you’re working in a tool that has a Curves interface, this Photoshop tutorial is a terrific starter on understanding how to use Curves. As video colorists, the concepts are exactly the same. In the case of using Curves for manipulating contrast, we’d be limiting these corrections to the “Master Curve” or “Luma Curve” – whatever your app calls it.
- Understanding LOG Grading: This is an advanced topic that hasn’t been covered in this series. But this article does a good job of using Curves to explain what they’re doing to an image. And when I start talking Curves, there’s always someone who wants to talk about LOG. If that’s you – or if you just want to understand why I’ve jumped the rails and included this topic – start first with this article on What Log Is… and Isn’t.
Why so much emphasis on luma values?
We begin by setting our luminance values for several good reasons. It starts with how the human eye works. It is structurally far more sensitive to subtle changes in brightness than subtle changes in color. When it comes to matching shots, luminance is the key. Counterintuitive, I know. Especially since it’s called “color correcting.” But we’re talking about building a foundation – and the foundation for managing your colors is to first manage the luminance, or tonality, of your images.
Another reason: depending on how your software processes video, there are some apps in which luminance adjustments make drastic differences in saturation, while saturation adjustments have no impact on luminance values. In those apps you must adjust luminance values first or you’re forever chasing your tail. So, to keep things simple, start with luminance and you’ll never be disappointed.
Luminance is also the key to sharpness. Flat images have no depth, they lack “pop” and clarity. As you expand out contrast, the perceived sharpness of the image is increased – causing the image to have much more pop, clarity and depth.
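To put a number on that sensitivity imbalance: the standard Rec. 709 luma formula weights the three channels very unevenly, with green dominating and blue barely registering. A quick Python illustration:

```python
# Rec. 709 luma coefficients: note how little blue contributes to
# perceived brightness compared to green.

def luma_709(r, g, b):
    """Rec. 709 luma from normalized RGB (0.0-1.0)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# A pure-blue pixel reads far darker than a pure-green one of equal code value
print(luma_709(0, 1, 0))  # green contribution
print(luma_709(0, 0, 1))  # blue contribution
```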
Tuesday – 07/17/12
‘Color Constancy’ and Why Colorists Control White Balance
Yesterday we broke down the color grading process into three steps. To quickly recap:
- Step 1: Lay a solid foundation for each shot in your sequence.
- Step 2: Get the shots to match each other.
- Step 3: Build on that foundation to control the viewer’s experience of the scene.
Today we’ll continue working through Step 1, which we’re taking slowly since it’s the most important step in the color correction process. If we get Step 1 right, then Step 2 and Step 3 are much easier to execute.
After evaluating the image and setting contrast: White Balance
And by white balance I really mean: fixing color problems. If you’ll remember, I broke Step 1 into three smaller tasks:
- Set the overall contrast of the image: Set your black, white and gamma (midtones)
- Fix problems with the image: Identify and fix bad white balance, weak color channels, strong color casts
- Balancing the colors of the image: Neutralize blacks, set the whites, set color of the significant part of the image
Since we’ve already dealt with setting contrast, the next step is fixing problems with the image. Specifically, color problems. There are all sorts of issues that we might need to take corrective action upon, including:
- Incorrect white balance
- Mixed color temperatures (such as practical lights mixed with daylight)
- Reflections casting colors (such as painted walls, colorful clothing, floor or ceiling reflections)
- Defective camera
Generally, we fix all these problems using the same basic methods. So let’s talk about fixing white balance issues – a common color correction task – since corrective actions taken to fix white balance can be applied to any of the problems I’ve mentioned.
As I’ve done before in this blog series, let’s start with a really basic question: “Why do we need to bother with white balance?”
‘Color Constancy’ and white balance
Loosely defined, white balance is what we do (whether in-camera or in post-production) to adjust an image so that white looks white, compensating for the color cast of the lights in a scene. If you’re a cameraman, it’s the first lesson you learn: white balance your camera. In color correction almost all NLEs have some sort of white balance function and newbies are taught to use eyedroppers on a neutral black / gray / white portion of the image to correct out any color casts.
Yet, why so much emphasis on white balance?
The answer echoes back to our early discussion of the Simultaneous Contrast Effect. But this time, the phenomenon at work is what vision scientists call Color Constancy.
Color Constancy is our memory imposing itself on our vision
Color Constancy refers to our ability to see a banana as yellow, even under blue or orange or red light. In the excellent BBC documentary, Do You See What I See?, they demonstrate how a banana tends to always look yellow under different colored lights. Yet a yellow color swatch, perfectly matched to the banana and under the same changing light, doesn’t. The color swatch takes on the colors of the light, constantly changing. Surprised?
If you had read the first week of this blog series you wouldn’t be. Context is everything. A lifetime of experiences teaches us that a banana doesn’t change its color simply because the color of the main illuminant is changing. Our brain intercepts the raw retinal data and imposes our memory of the banana, canceling out the effect of the color washing over it. And that perfectly matched color swatch? It has no meaning to us, no context – and our perception of that color swatch changes dramatically with the changing light. With no meaning, that color swatch never has a chance – its color keeps changing even as the banana tends to hold steady.
Color Constancy also explains why our lunch partner’s white shirt looks white in the dimly lit orange glow of a restaurant – and still looks white immediately when we walk into the bright blue light of a slightly overcast sky. For whatever reason, our brain has a biological need to present to us a world in which colors don’t randomly change on us. In other words…
Our brains are wired to automatically white balance.
This also brings us back to Steve Hullfish’s ’60-Second Rule’ – that the more we look at an image, the more correct it looks. Color Constancy plays a huge role in that process.
Colorists control for white balance because our brains insist on it.
And that’s why so much emphasis is placed on white balance. If we don’t do it in-camera or in post-production, then – by golly – our brains will do it for us as we watch our televisions. Let’s not impose that effort on the viewer. Let’s remove the auto-white balance task from their brains so they can more completely immerse themselves in our stories.
In a few moments I’m going to fine-tune how we apply the notion of Color Constancy to our work. I’ve oversimplified it and there are some nuances we need to sort out. But before we get there, here are some great tutorials on how to color correct for color casts or incorrect white balance:
- [Video] Manipulating Color Casts with Curves: This shows my preferred way of fixing color channel imbalances. I like this tutorial at Avid Screencasts because it includes a nice primer on using the RGB Parades with the Curves channels. Note: I would have used the tennis shoes as my white reference point since it’s also the brightest element in the image, and his final gamma tweak in the red channel probably would have held up better (with less of a blue push in the image). But really, that’s a nit-pick. It’s a solid tutorial.
- [Video] Adjusting Color Channels: If you don’t have Curves in your NLE, then the first part of this tutorial shows how to manipulate the individual color channels using an RGB filter with sliders. Almost all NLEs have some variation of this tool.
- [Video] Color Balancing in FCP X: The Color Board in FCP X is a unique beast. Here’s a great tutorial from Denver Riddle on using the Color Board – along with RGB Parades – to color balance an image. He also makes a terrific point about what to do if a shot doesn’t have a white or black reference point for you to neutralize against.
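If you’re curious what those eyedropper tools are doing under the hood, here’s a simplified Python sketch: sample a patch that should be neutral, then compute per-channel gains that force it to gray. Real implementations are more sophisticated, so treat this as illustration only:

```python
def white_balance_gains(sample_rgb):
    """Per-channel gains that make a sampled 'should-be-neutral' patch truly neutral."""
    r, g, b = sample_rgb
    target = (r + g + b) / 3.0   # aim each channel at the sample's average level
    return (target / r, target / g, target / b)

def apply_gains(pixel, gains):
    return tuple(c * k for c, k in zip(pixel, gains))

# A warm (orange-ish) cast on something that should be gray:
gains = white_balance_gains((0.6, 0.5, 0.4))
balanced = apply_gains((0.6, 0.5, 0.4), gains)
```

Applying the computed gains to the whole frame removes the cast everywhere, not just in the sampled patch – which is exactly why a truly neutral reference point matters so much.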
‘Color Constancy’ isn’t just white balance
The effect of Color Constancy isn’t so much that ‘white is always white’ but that our brains want to set the white point and keep it there. In other words, ‘Constancy’ is what you need to focus on. Wherever you set your white balance, you’ll then strive to keep it there throughout the scene. I’m thinking particularly of scenes where a DP lights with colored gels.
One of the mistakes colorists will make is to ‘grade out’ the color casts specifically introduced by a DP. Maybe it’s a blue gel on the key light or a purposeful white balance using a warm card. Either way, we shouldn’t feel compelled to fix what’s essentially an in-camera effect. (Of course, it’s up to the DP, director or producer to communicate this to you and not waste your time having you fix something that’s perfectly okay.)
The other lesson taught to us by Color Constancy is that shots in the same scene should match each other. The wide shot shouldn’t have a green tinge while the close-up is warm and the reverse angle is blue. We’re forcing our audience to work much too hard as they’ll try to maintain the Color Constancy using their brains. We’ll pick up the topic of shot matching in tomorrow’s post.
Wednesday – 07/18/12
The ‘Key Color’ for Matching Shots in a Scene
We’re about to move into Step 2 of the three-step color grading process I outlined in the previous two posts. Since we went through Step 1 a bit slowly, let’s recap our actions:
- Evaluate the image: Create a game plan for what we need to do to a shot
- Set tonality: Set the black- and white- points and adjust the midtone gamma
- Fix color problems: Including – improper white balance, heavy color casts
- Perform final color and luminance tweaks: Set the saturation, add / remove warmth
- Do NOT: Balance out in-camera effects
Once you’ve gone through this process you should have a nicely balanced image. The next step is to move through this same process with any other shots in the scene. But this time, with an eye toward getting all the shots to intercut seamlessly.
If you have plans to impose a Look upon the scene, I generally advise you get all the shots to match each other before moving on to creating the Look. There are several reasons for this, which we’ll discuss when we talk about creating a Look. Suffice it to say: balancing and shot matching makes creating and implementing Looks much much easier.
What is shot-matching?
Shot-matching is a fundamental skill of color correction. Its goal is to achieve seamless continuity between shots in a scene and scenes in a project. Unless there is a specific creative mandate to do otherwise, our goal is to never let mismatching shots pull an audience out of the story.
Remember yesterday’s discussion of Color Constancy? That’s one of the goals of shot-matching – to off-load the audience’s burden of maintaining Color Constancy. This enhances the viewer’s suspension of disbelief and increases the impact of the story being told.
Color-matching is not shot-matching
While researching this article I kept finding tutorials about how to ‘match colors’ between two shots. Or how to use the ‘hue match’ function of a particular NLE. Or some variation of this theme. While they all tended to get us to the same place – shots that matched each other – I want to make something very clear (on the assumption that words matter):
We’re not paid to match colors. We’re paid to match shots!
This is a huge distinction.
Color-matching is one small part of shot-matching. In fact, when you consider the poor color acuity of the average person, color-matching is one of the least important skills of shot-matching. In color-matching games like this, matching brightness is the main component of getting a perfect score. Yes, hue matters, but matching tonality is the key when practicing a game like that.
Selecting the master shot
When shot-matching, the first order of business is to decide which shot to match against. This shot becomes the reference shot toward which all other shots are balanced. Ideally, if each shot in a scene matches the reference, then they’ll all match each other.
Different colorists have different preferences for selecting a reference shot. Some colorists select a wider shot that encompasses as many of the main elements of a scene as possible. Other colorists start with the shot that needs the least work or is the best-looking in a scene.
In most cases, when starting to grade a scene, it’s a helpful practice to quickly skim through the scene, select a reference image, balance it and then match the other shots to it.
When matching the other shots, first start on Step 1 to get each shot in a good balanced state – then start matching to the reference shot.
Tonality before color
When you watch the tutorials at the end of this article, notice how they always start with the brightness / tonality of the image. This is partly because of the heavy influence luma controls have on colors, but also because the human visual system will catch mismatches in brightness values before it catches mismatches in color values.
As we’re matching our shots to the reference shot, the waveform monitor is a great tool for guiding us. Try bouncing between the reference shot and the shot you’re currently color correcting while keeping your eye on the waveform. You’ll want to match the overall positioning of the three main luminance ranges: shadows, midtones and highlights.
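That waveform comparison can be roughed out numerically. This Python sketch computes the average level in each of the three luma ranges for two shots; the range thresholds (0.25 and 0.7, normalized) are arbitrary assumptions, not a standard:

```python
import numpy as np

def range_means(luma):
    """Mean level of shadows, midtones, and highlights in a normalized luma image."""
    shadows = luma[luma < 0.25]
    mids = luma[(luma >= 0.25) & (luma <= 0.7)]
    highs = luma[luma > 0.7]
    return tuple(float(r.mean()) if r.size else 0.0 for r in (shadows, mids, highs))

# Synthetic "reference" shot and a dimmer take of the same scene:
reference = np.clip(np.random.default_rng(1).normal(0.5, 0.2, 10000), 0, 1)
current = np.clip(reference * 0.8 + 0.05, 0, 1)

print(range_means(reference))  # where the reference sits
print(range_means(current))    # the mismatch we'd correct toward the reference
</n>```

Comparing the two tuples tells you which tonal range needs the biggest move – the same judgment you make bouncing between shots on the waveform.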
Color is both saturation and hue
Once you’ve dialed in the tonal range of your shot, then it’s time to match saturation and hue. It’s somewhat counterintuitive, but if you find you’re having trouble getting the colors to match between two shots, sometimes the problem is really a luminance mismatch that needs to get solved.
As Alexis Van Hurkman says in the Color Correction Handbook – we are not looking for a precise color match in every element of the image. We are looking for a perceptual match. In fact, we’re really only looking for a perceptual match in the focal point of the image. The audience will generally accept mismatches in other areas of the image if the main elements match as the shots play down.
This is not an excuse for you to create shoddy work. It’s just a reality which can work to your advantage when on a deadline (and we all have deadlines).
Shot-matching isn’t about precision
It’s easy to think that because the same element in two different shots has a precise match in their RGB values, that we’ve got the two shots matching. But as the Simultaneous Contrast Effect informs us, we don’t see RGB values objectively. Context is everything.
If two elements in two shots have an objective match (their RGB values match) but they look different – then the shots don’t match!
This is the toughest part of shot-matching – brightness and color are heavily influenced by the overall image. If the wide shot has lots of color and is very bright, but the close-up has very little color and is dim, the raw RGB values to get skin tones to match between those shots are likely to be quite different.
Shot matching is all about context
Yes, this is an annoying aspect to the task and craft of color grading. It’s what makes the job difficult. And it’s why – after using tools such as the Vectorscope to dial in shots – we need to actually look at the image and play the surrounding shots.
Our eyes are essential in this process. The tough part is learning to trust your eyes – and that’ll come only with experience. In the meantime, rely heavily on your scopes and use your eyes to confirm or deny what the scopes are telling you.
The ‘key color’ in shot-matching
There is one insight about shot-matching that I can share with you. It has to do with the most important of the tonal ranges when grading a shot: the deep shadows.
I’m talking about the values in the 0 − 20 IRE range.
In my experience this tonal range is often the key to shot matching (as well as skin tones). If you’ve got some sort of color bias that’s preventing your shots from matching, the blacks are a great place to take a moment and re-evaluate.
Remember: black is, by definition, the lack of light. No light, no color. If your blacks have a slight tint to them, it’ll immediately start throwing off the rest of the image. And if the tint of your blacks starts bouncing around from shot to shot, you’re doomed.
Neutral blacks are the foundation for creating either strong, aggressive images or light, airy images. Nail your blacks and you’re 70% of the way to matching your shots.
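Here’s a quick Python sanity check in that spirit: grab the near-black slice of the frame (roughly the 0–20 IRE band, i.e. normalized values below 0.2) and see whether its channel averages agree. The threshold and tolerance values are illustrative assumptions:

```python
import numpy as np

def black_tint(rgb, threshold=0.2):
    """Mean R, G, B of the near-black pixels in an (H, W, 3) float image."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    dark = rgb[luma < threshold]
    return dark.mean(axis=0)

def blacks_are_neutral(rgb, tolerance=0.01):
    r, g, b = black_tint(rgb)
    return max(r, g, b) - min(r, g, b) < tolerance

# A frame with a blue push in its shadows:
frame = np.full((4, 4, 3), 0.1)
frame[..., 2] += 0.05
print(blacks_are_neutral(frame))  # the blue tint trips the check
```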
- Using FCP X’s ‘Match Color’ Feature – A good example of the overall shot-matching process.
- [Video] Scene Matching with Steve Hullfish – A terrific tutorial on using scopes to match shots and set flesh tones. Plus a nice look at the proprietary displays of Tektronix hardware scopes.
- [Video] How to Match and Balance Multiple Cameras – I LOVE this tutorial on matching cameras. This is how a live truck color shader matches cameras. And the workflow is how we do it in post-production (though usually without the benefit of a chip chart). This tutorial is also the best explanation of Tektronix’s proprietary double diamond display I’ve ever seen. I strongly suggest watching this video.
Thursday – 07/19/12
Classes of Primary Color Correction Filters
The past three posts have all focused on what colorists call ‘Primary Color Correction.’ Evaluating, fixing, balancing and then matching shots all fall under this category of corrections. Our tasks have been general in nature. We haven’t been isolating specific locations of the image. We’ve been painting with broad strokes.
Generally speaking, the specific tools we use to perform these corrections can be broken into four types of filters. Most NLEs and color grading applications implement some combination of two or three of these. Today I’ll guide you through these types of filters. I’m naming them how you’d (usually) find them classified and including how useful I think they tend to be. My hope is to help you figure out which filters you have available to use and to broaden your horizons if you find yourself using the same tool over and over again.
RGB Sliders
Almost every app has some form of this filter. The image below is a screenshot of RGB controls from three different apps. This class of filter generally has sliders for each of the RGB color channels, some filters have a Master color channel and others have a Luma-only channel. They all usually have three sliders: Shadows (or Setup) / Midtones (or Gamma) / Highlights (Gain).
Frankly, I think this class of filter controls is massively underused. RGB Slider filters can perform extremely precise, targeted controls that can be difficult with traditional 3-way color wheels but are broad enough that they’re not as likely to introduce posterization (as can happen with RGB Curves). When teaching, I urge students to grade for a week or two using nothing but RGB sliders. Once you become proficient with this tool it will get you out of a ton of tough corrections for the rest of your career.
Levels
Most Level filters are a kind of interactive histogram. They tend to give you three control points and are used for adjusting the tonal range of an image, allowing you to raise or lower black- and white- levels plus the midpoint gamma. Some filters will give you separate controls for Master / Red / Green / Blue channels, which is handy. Most Adobe apps also include an Input / Output Histogram adjustment (as does Avid’s Symphony).
As big a fan as I am of RGB Sliders, I’m at the other end of the enthusiasm spectrum for Level filters. Conceptually I like this kind of control. But in practice, Level filters always seem to feel clunky and slow. But, if you don’t have an RGB Slider filter at your disposal, Levels can do the job of that filter – depending on how the filter is implemented.
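Whatever the implementation, most Levels filters share the same underlying math: normalize by the input black and white points, apply the midpoint gamma, then rescale to the output range. Here’s a sketch – not any particular app’s exact formula:

```python
def levels(v, in_black, in_white, gamma, out_black, out_white):
    """Photoshop-style Levels on a normalized 0.0-1.0 value."""
    v = (v - in_black) / (in_white - in_black)      # input levels
    v = min(max(v, 0.0), 1.0)                       # clip to 0-1
    v = v ** (1.0 / gamma)                          # midpoint gamma
    return out_black + v * (out_white - out_black)  # output levels

# Example: map full-range video into a narrower broadcast-safe output range
print(levels(0.02, in_black=0.0, in_white=1.0, gamma=1.0,
             out_black=0.0625, out_white=0.92))
```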
Here are a few good tutorials I found dealing with the Levels filter:
- [Video] – This tutorial does a nice job of showing how most Level effects work. Though the Premiere implementation has serious drawbacks that don’t apply in other apps.
- [Video] – A good example of one implementation of this type of control.
- – Yes this is a Photoshop tutorial. But this is also how most pieces of software try to implement the Levels control.
3-Way Color Corrector
This filter was made famous in the NLE market by Final Cut Pro 3, when Apple marketed the heck out of it. You already know this control: three color wheels, each controlling a different tonal range – Shadows / Midtones / Highlights. Some apps add power to this filter by also including brightness and saturation controls for each wheel. Others also add a Master wheel for overall tinting through the entire tonal range.
For a huge swath of professional colorists, this is their work-a-day color correction control. It’s intuitive to use and all the color theory you ever need to know is clearly displayed on the tool itself.
Surprisingly, I found very few good free tutorials on using the 3-Way Color Corrector, but these cover the bases:
- Grading with Color Wheels – An older article that looks at a variety of color wheel filters in a variety of apps.
Curves
If the 3-Way Color Corrector is the most ubiquitous of filters, Curves is the most versatile. Unlike the other types of filters mentioned in this post, Curves can make very broad corrections across the entire image and it can make extremely targeted corrections. In this sense, Curves is both a tool for Primary corrections and Secondary corrections (discussed tomorrow). How the tool is used at any given moment defines how we’d classify its usage.
The defining characteristic of RGB Curves is its versatility.
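Under the hood, a curve is just a mapping from input level to output level defined by a handful of control points. This Python sketch uses piecewise-linear interpolation (real apps use smooth splines) and illustrative S-curve points that add contrast:

```python
import numpy as np

# A classic contrast-adding "S-curve": shadows pushed down, highlights
# pushed up, endpoints pinned. The control points are illustrative.
control_in = [0.0, 0.25, 0.75, 1.0]
control_out = [0.0, 0.18, 0.82, 1.0]

def apply_curve(luma):
    return np.interp(luma, control_in, control_out)

print(apply_curve(0.25))  # shadows darkened
print(apply_curve(0.75))  # highlights brightened
```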
A few other tools and filters are also used to handle Primary corrections:
- Channels: These types of filters allow you to modify one of the RGB color channels by mixing in (or out) the other RGB color channels. For instance, rebuilding a very weak Blue channel by adding the Red and Green channels to it. Most apps have some form of this under-used tool.
- FCP X Color Board: The Color Board is a hue wheel bisected and strung out as a line. Everything about it is unique to FCP X. In an earlier post I linked to a few tutorials using the Color Board. I found another:
- Understanding Color Grading in FCP X – Good tutorial from Oliver Peters on getting oriented in FCP X and its grading workflow.
[Image: The unique FCP X Color Board tool.]
- Magic Bullet Colorista II: No – this is not a class of filter – but it is a third party filter that stands in a class by itself. If you’re looking for a powerful 3-Way Color Correction filter that has a ton of bells and whistles, this is it. Highly recommended if you do most of your color grading in an NLE.
- Magic Bullet Colorista II – A well-written overview on this plug-in.
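The channel-mixing idea above can be sketched in a few lines of Python. The mix weights are illustrative; in practice you’d tune them while watching the RGB Parade:

```python
import numpy as np

def rebuild_blue(rgb, from_red=0.3, from_green=0.3, keep_blue=0.4):
    """Rebuild a weak Blue channel by blending in the healthy Red and Green."""
    out = rgb.copy()
    out[..., 2] = (from_red * rgb[..., 0]
                   + from_green * rgb[..., 1]
                   + keep_blue * rgb[..., 2])
    return np.clip(out, 0.0, 1.0)

# A frame whose blue channel collapsed to near-noise values:
frame = np.dstack([np.full((2, 2), 0.6),   # red
                   np.full((2, 2), 0.5),   # green
                   np.full((2, 2), 0.05)]) # weak blue
fixed = rebuild_blue(frame)
```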
There you have it, my general classification of common tools to perform Primary corrections. Primary corrections encompass the first two steps in the color correction process I’ve outlined previously:
- Step 1: Lay a solid foundation for each shot in your sequence
- Step 2: Get the shots to match each other
- Step 3: Build on that foundation to control the viewer’s experience of the scene
Friday – 07/20/12
Color Grading for Content
Congratulations! Today we’re at the last day of Week 3. If you’ve followed along this far then you’ve learned a ton – from how the human eye impacts our work during color correction to how we manage that impact. We’ve seen the well-worn progression of steps colorists use to attack an individual shot, as well as a sequence of shots.
Now it’s time to pick up our game. Now that we’ve done the hard work of balancing those images, we get to play a bit and see what we can do to enhance the pictures in front of us. We’re now entering the world of Secondary Color Correction.
Secondary color-correction is about story-telling
Until now we’ve been color correcting to stay out of the audience’s way. You could say we’ve been playing defense. We’re fixing big problems. We’re working in broad strokes. Managing Color Constancy is the name of the game. But now – now it’s time to move into Secondary Color Correction and think about the story.
Secondary Color Correction is the final stage of color grading and this is where we, as creative artists, get to play offense. Here we get to refine each shot, tease out its emotion – suggest its meaning. We get to use the raw materials captured by the DP and arranged by the editor to explore additional ways to add serious production value to the project. We work with the director and producer to achieve their overall creative vision.
In this stage, I generally suggest four tasks we’re trying to accomplish:
- Focus the eye: The first moment you see a shot, where does your eye travel? Is it looking where you want? If not, then you’ve got to help your audience know where to look.
- Add emotion: Do the colors – their hue, intensity, or lack thereof – reinforce the message of the moment? How about contrast? Is it heavy or light? Too much so? Or maybe we need to add more punch (or take it away)?
- Add depth: A well-graded shot usually feels like there are layers to the image. If we’ve expanded contrast, that already helps add depth… can we go any further? Are there more opportunities to make this happen?
- Fix additional problems: This usually relates to the first tactic, “focus the eye.” After doing some or all of the above you may find there’s this one annoying little thing that’s distracting. Now’s the time to solve it.
Isolation is the name of the game
Putting those four tasks into play is achieved through isolation. We isolate some part of the image, either to emphasize or de-emphasize it. Some examples:
- Face grade: You can isolate faces in a shot to refine the grade on them. Maybe it’s skin smoothing, slight color tweaks or a general re-balancing of the contrast in the midtones.
- Practical lights: Especially in the era of low-light DSLR photography, practicals can sometimes overwhelm an image. Isolating, tracking and de-emphasizing practical lights is a common task.
- Add punch to an interview: Documentary or reality projects are more concerned with looking ‘real’ than looking cinematic – but interview segments are perfect opportunities to add a ton of visual punch to a project. Carving out shadows, playing with color contrast – as long as the people look good, it’s a great opportunity to go nuts with your creativity.
There are two basic ways to create these isolations. Today we’ll focus on what colorists typically call HSL Isolations.
A key by any other name…
HSL stands for Hue, Saturation, Luminance. An HSL isolation is simply pulling a chroma key. Or a luma key. Or even a Hue key. Or any combination of the three.
It’s really that simple.
Do you want to control the overly saturated green grass that distracts from the people having a conversation in the driveway? Pull a key based on the Saturation of the grass. Or Hue. Or both. Then, desaturate.
Do you think you can get more contrast out of the sky to help pop out the clouds? And then maybe you want to grade more blue into the sky? Try pulling a Luma-only key grabbing the brightest parts of the image – which includes the sky. Then do your thing.
Sometimes pulling a key can be a bit more complicated. Sometimes that Luma-only key pulls more than the sky. But you notice the sky is lacking in saturation. You can further refine that key by including the bright parts of the image that only have a little bit of saturation. Now we’re pulling a Luma- and Saturation-qualified key.
And if that still isn’t refined enough? You notice there is some blue in the sky, so let’s activate the ‘H’ in HSL and only select bright parts of the image that have tiny bits of saturation – but only if they have blue in them.
What we’ve just done in the paragraphs above is “build an HSL qualification” to help us isolate just the sky. And once it’s been isolated, we can expand the contrast (which pops out the clouds), add saturation and then push some more blue into the sky (with maybe a nudge toward orange in the highlights to suggest a late-day sky).
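The qualification we just built up can be expressed directly in code. This Python sketch uses the standard-library colorsys module; all the ranges are illustrative assumptions:

```python
import colorsys

def hsl_qualify(r, g, b,
                hue_range=None,   # e.g. (0.5, 0.7) ~ the blues
                sat_range=None,   # e.g. (0.05, 0.8) "a little bit" of saturation
                lum_range=None):  # e.g. (0.7, 1.0) "the bright parts"
    """A pixel is in the key only if it passes every enabled H/S/L test."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    if hue_range and not (hue_range[0] <= h <= hue_range[1]):
        return False
    if sat_range and not (sat_range[0] <= s <= sat_range[1]):
        return False
    if lum_range and not (lum_range[0] <= l <= lum_range[1]):
        return False
    return True

# The sky example: bright, somewhat saturated, and blue
sky = (0.7, 0.8, 0.95)
print(hsl_qualify(*sky, hue_range=(0.5, 0.7),
                  sat_range=(0.05, 0.8), lum_range=(0.7, 1.0)))
```

Run this test per pixel and you get a matte – exactly the key your grading app hands to the correction that follows.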
Now that I’ve explained it to you, here are some tutorials I’ve found on the topic of pulling HSL keys and working with Secondaries. Have fun! See you in Week 4.
- [Video] Tao of Color’s Masterclass Training: Colorista II Excerpt – Hey! Yup, this is an excerpt from my training for Colorista II. In this example I use Colorista’s Secondary Keyer to isolate a color and desaturate it. You can read about my MasterClass Color Grading training here.
- [Video] Tao of Color’s Masterclass Training: Apple Color Excerpt – Another excerpt, this one is a good example of what happens when you try to immediately match two shots, except the reference shot already has a Secondary color correction applied to it. DOH! Remember: it’s much easier if you first match shots using the Primary Grade then you can start working on Secondaries (and your Look).
- [Video] – Once you’re on this page, scroll down to the bottom and select the video ‘HSL Qualifications’ (there’s no way to directly link to this tutorial). It’s a short excerpt from Alexis Van Hurkman’s tutorials over at Ripple Training. Another example of how pulling an HSL qualification is really just pulling a key.
- [Video] Apply Secondary Color Correction in Avid Media Composer – If the point still isn’t hitting home for you – this is a classic example of isolating grass, allowing you to control it independent of the rest of the image. Since Media Composer doesn’t have a dedicated Secondaries tool, this tutorial shows two ways of doing this: Using a 3rd party plug-in and using a key effect.
Monday – 07/23/12
Secondaries Part 2: Focusing the Eye
In the last post we talked about using HSL qualifications to isolate areas of an image and manipulate them in service of the story. Today we’ll talk about the second way colorists serve the story: by controlling where the viewer looks in the frame.
A colorist can be overt about this or subtle. Sometimes these techniques can be bypassed completely. And sometimes these techniques are used simply to help us pull a better HSL qualification.
I’m talking about another way of isolating parts of an image…
Isolation by using shapes
Circles, squares, polygons, and user-drawn shapes are all tools that many NLEs and all dedicated color correction apps use to isolate an area of a shot. The concept is simple: create a shape and then start color correcting either inside or outside of that shape. Probably the single most used technique in color grading is the vignette.
Vignettes focus the eye
The concept of the vignette is simple: create a shape and either darken the edges around the outside of the shape or brighten the inside of the shape (or both). It’s based on a principle of human perception that we see details only in a focused area of our field of vision. Outside this area of focus, detail is lost. Here’s a nice visual showing this effect.
Vignettes are a colorist’s way of mimicking the natural visual field of the human eye. In most day-to-day color grading, vignettes should be subtle. They should only be noticeable when you toggle them on and off. In fact, I always know I’m getting them right when I’m watching a scene I graded two days ago and I can’t tell if I’ve added a vignette or not, so I stop, and sure enough – the vignette is there and the difference is dramatic. But when playing down, it’s barely noticeable.
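For the curious, here’s a minimal radial vignette in Python: build a soft mask that falls off from the center, then multiply the image by it. The strength and falloff numbers are illustrative – and per the advice above, keep them subtle:

```python
import numpy as np

def vignette(img, strength=0.25, falloff=2.0):
    """Darken the edges of an (H, W) or (H, W, 3) float image."""
    h, w = img.shape[:2]
    ys = np.linspace(-1, 1, h)[:, None]
    xs = np.linspace(-1, 1, w)[None, :]
    dist = np.sqrt(xs ** 2 + ys ** 2) / np.sqrt(2)  # 0 at center, 1 in corners
    mask = 1.0 - strength * dist ** falloff
    if img.ndim == 3:
        mask = mask[..., None]
    return img * mask

frame = np.full((9, 9), 0.8)
out = vignette(frame)
# Center stays at 0.8; corners are darkened toward 0.6
```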
Vignettes can get very dramatic and help establish a Look, such as in flashbacks or dream sequences. Used this way, they lend a sense of artificiality to the image which can help the overall narrative by separating out those sequences from the rest of the story. Here are some good articles and videos to give you ideas on creating vignettes and how they’re used.
- Just When You Think You’re Being Original – Alexis Van Hurkman comes upon a 150-year-old stained glass window and sees how vignettes (in this case, vignetting with color) have been employed for a very, very long time indeed. Cool post.
- Vignettes in Apple Color – An extensive walkthrough of different ways to create and use vignettes. These specific concepts can be applied to just about any software.
- [Video] Vignettes in Avid DS – Avid DS tutorials are rare beasts. Here’s a good one showing how to create a realistic vignette using a shape plus color correction tools.
- [Video] Vignette Techniques – If your NLE doesn’t have a built-in vignette in its color correction tools, here’s a time-honored method for creating them using a second video layer and a black slug. This tutorial is for Premiere but it will work in any NLE.
- [Video] DaVinci Resolve: Matte vs Mask? – Sometimes you want to create a broad vignette but isolate one area and exempt it from the vignette. Warren Eagles walks us through doing just that using the Matte / Mask button in DaVinci Resolve.
Combining shapes with HSL qualifications
Shapes aren’t JUST used for creating vignettes. They can be combined with HSL qualifications to make those qualifications easier to isolate. In these cases the shapes are used like garbage masks to make our lives easier.
- Using Garbage Masks in FCPx – A good tutorial that explains the concept of a garbage mask. Now imagine pulling an HSL qualification within that garbage mask to isolate the face and grade it.
- Combining Mattes In DaVinci Resolve Part 2 – Alexis Van Hurkman walks us through what I mentioned above, pulling an HSL on faces and using a custom shape to help isolate the qualification. Good stuff.
A word of caution about pulling HSL qualifications
Once you get hooked on pulling HSLs – especially when you combine them with shapes to help you really nail your isolation – there’s a temptation to start doing VFX work. In other words, you’re trying to pull perfect keys as if you’re working in an Ultimatte plug-in. Don’t. During color grading we’re not compositing with colors. We’re not pulling perfect keys. We’re pulling broader keys that are sometimes a little muddy. A little soft.
We do have to watch out for noisy keys. And we need to be careful, after pulling a key, that our subsequent corrections don’t result in an unnatural-looking image. If you need rotoscope-quality edges and compositing, use the appropriate tool to create those masks.
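To make the “broad, slightly soft key” idea concrete, here’s a rough sketch of what an HSL-style hue qualifier computes: a matte that’s solid inside a hue range and then ramps off gently, rather than a hard binary cutout. This is an illustrative approximation, not any app’s actual keyer – the function names and the simple linear falloff are my own assumptions:

```python
import numpy as np

def rgb_to_hue(frame):
    """Vectorised hue (0..1) for a float RGB frame with values in 0..1."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    mx = frame.max(axis=-1)
    mn = frame.min(axis=-1)
    delta = np.where(mx - mn == 0, 1.0, mx - mn)  # avoid divide-by-zero on greys
    hue = np.select(
        [mx == r, mx == g],
        [((g - b) / delta) % 6.0, (b - r) / delta + 2.0],
        default=(r - g) / delta + 4.0,
    )
    return (hue / 6.0) % 1.0

def soft_hue_matte(frame, hue_center, hue_width, softness=0.05):
    """Matte that is 1.0 within hue_center +/- hue_width, then ramps to 0.

    The linear falloff over `softness` keeps the key a little soft and
    muddy at the edges instead of a hard, composite-style cutout.
    """
    d = np.abs(rgb_to_hue(frame) - hue_center)
    d = np.minimum(d, 1.0 - d)  # hue wraps around, so take circular distance
    return np.clip((hue_width + softness - d) / softness, 0.0, 1.0)
```

A real qualifier would also gate on saturation and luma (so grey pixels, whose hue is meaningless, don’t leak into the matte) – but the soft ramp is the part that matters for grading.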
Tuesday – 07/24/12
Keep Your ‘Looks’ Looking Real with the ‘Singularity Rule’
If you’ve gotten to this post directly from a Google search… stop right here. I’ve got no easy answers on how to create an amazing look that can be quickly and easily applied to your entire project with no effort on your part. I don’t know how to do that. Instead, start at the top of this series and when you finally make it back here, the rest of this post will make sense.
I believe that great Looks are built on top of a strong foundation. The foundation starts with a solid primary color correction, which matches the rest of the shots in the scene and often has another level of secondary corrections applied to it. It’s at this point that a great Look can be applied to your footage.
In my experience, there are three ways of creating and applying a Look:
- 1. Enhance what’s already there: In this case, much of the Look is already built into the image. From lighting design, costuming, and art direction through directing and camera placement – all the raw elements are in place and our job in color correcting is to enhance what’s already there. This is the ideal situation and can be a huge ton o’ fun for you and your clients. But they have to do a lot of up-front work.
- 2. Impose your Will on the footage: This is the classic, “Can you make my film look like (name your trendy movie)?” Of course, the makers of that movie decided upon that Look in post-production, and none of the raw elements for creating it are evident in the camera originals. Not ideal. Pretty soon you’re chasing rabbits down their holes and you’ve blown your day rate with nothing to show for it.
- 3. Riff off what the footage gives you: When a client asks for a Look that requires that we Impose our Will (#2), this is where I usually take them. We start with the general look and feel of the film they want to emulate – but where we go is up to the footage. I push, but then I let the footage guide me when I’ve gone too far or if I should maybe focus less on color and more on contrast. If the image starts breaking down, I try something different. Eventually, we find ourselves a Look that fits the desired emotion based on what was actually recorded.
In all three cases, the core elements to create a Look are the same. You can play with contrast, color and special effects. The more elements you manipulate in creating your Look, the more important your initial grading and shot-matching will be. Looks can be big, hairy beasts to tame. Black balance, white levels and overall gamma settings can radically change how well a single Look is applied to a sequence of shots. The more tightly your shots match, the easier (or less painful) it is to successfully apply a Look.
Creating a Look using contrast
Whether you’re flattening out the contrast (for a faded film feel) or massively thickening the blacks (for a dark, gothic feel) or lightening the midtones (for a happy feel) – the relationship between your blacks and whites is the first way to create a Look. Often you don’t even need to go any further than this. The effect can be so dramatic that there’s nothing left to do.
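For the technically curious, the contrast moves described above map roughly onto the lift/gamma/gain controls found in most grading tools. Here’s a simplified sketch of one common convention – tools differ in exactly how lift and gain interact and how gamma is defined, so treat the formula as illustrative rather than any app’s actual math:

```python
import numpy as np

def lift_gamma_gain(frame, lift=0.0, gamma=1.0, gain=1.0):
    """Three-way contrast sketch on a float RGB frame in 0..1.

    lift raises the blacks (it affects shadows most and whites not at all),
    gain scales the whites, and gamma bends the midtones
    (gamma > 1 brightens them in this convention).
    """
    out = frame * gain + lift * (1.0 - frame)
    return np.clip(out, 0.0, 1.0) ** (1.0 / gamma)
```

A “faded film feel” is then roughly `lift_gamma_gain(frame, lift=0.1, gain=0.9)` – milky blacks and pulled-down whites – while lifting gamma alone gives the happy, bright-midtone feel.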
Creating a Look using color
You can push the image to a mucky brown like The Book of Eli or go the teal-and-orange route of Transformers. You can completely desaturate the image but keep a single color as in Pleasantville. Just name your favorite movie and mimic its color palette. Manipulating through saturation and hue is another way to create a Look.
Creating a Look using visual effects
Highlight glows, vignettes with blurs, layering with transfer modes (or blend effects), blending colored gradients – these are all examples of using special effects processing to go beyond simple Luma and Color controls to give your footage a Look. Some of these techniques are well-supported in NLE or color grading apps. Other techniques require plug-ins to make them happen.
Tao of Color’s ‘Singularity Rule’ for creating Looks
When it comes to Looks, I really only have one bit of advice: I call it the ‘Singularity Rule.’ A Singularity is just a fancy term for ‘Black Hole.’ A Singularity is a force of nature, a small point in space that generates an enormous gravitational field from which nothing escapes – not even light. Astronomers can only see a Singularity by looking next to it. Or around it. But since no light escapes, nothing reflects (or radiates out) to alert us to its presence. It’s black. And,
Black is black, period.
I mentioned this in the post on shot-matching and it’s doubly true in creating convincing, manageable Looks. Black is the absence of light. No light = no color. A Singularity. A lifetime of experience confirms this fundamental fact for every human being likely to watch your work. And, since we don’t objectively see the world around us – since our mind imposes what it expects to see on our vision (see Week 1 of this series) – if your blacks have color in them, the audience will notice and you’re going to have a project that feels worse than unnatural. It will feel like it was digitally manipulated.
How black is black?
For our purposes I consider the 0 − 10 IRE range to be black. At around 10 − 15 IRE, it’s okay to start letting color imbalances into the image, since deep shadows do often have some coloration to them. But they shouldn’t have too much saturation or they start to, again, feel digitally manipulated.
There’s another advantage to maintaining solid blacks: almost anything else goes after that. It provides a firm sense of reality that our minds can latch onto – and that’s why the green-and-orange skies of The Book of Eli work; with something real in the image, we can accept the rest.
Of course, knowing the rule means you can break it. In the US version of the television series Life on Mars, that’s exactly what they did. The blacks were, in fact, somewhat lifted and had a brown, Mars-colored tinge to them – which served the purpose of giving the show a faded 1980s feel. But to give the audience something to grab onto, the colorist kept the whites clean and crisp. A sort of inverse of the Singularity Rule.
Quickly exploring Looks
One of my favorite tools for exploring Looks is the terrific plug-in, Magic Bullet Looks (MBL). For me, its greatest advantage is the Preset Pane. What’s so terrific about it? The Pane displays presets applied to the image in your timeline! Not some silly Golden Gate Bridge (as in Apple Color), which tells you nothing about how that preset will work with your footage. MBL’s Preset Pane lets me immediately throw away dozens of different potential Looks in a matter of seconds. I just load up a representative frame, apply MBL, open up the Preset Pane – and I get a look at dozens of possibilities in a single glance.
Once I’ve isolated a few Presets that are in the ballpark, I start exploring. The point is not to use a preset and move on; the point is to revise, tear down or build it up – using it as inspiration. (These days I rarely use MBL for actually applying my final Look. Instead it becomes a reference image for me to re-build in my color grading app, using the tools at my disposal.)
Final thoughts on creating Looks
In addition to the Singularity Rule, another area to focus on is skin tones. While you don’t need to keep them locked onto the i-line (or skin tone line) in the Vectorscope, you do want to keep them nearby and manage them. This will also help sell your final Look.
When it comes to deciding on a Look, keep in mind the three methods for getting to a Look. In my experience, riffing off what the footage will give you is almost always far more successful than imposing your Will, which can get messy and noisy. And don’t fight physics: black is black. It always has been. It always will be. Even if you’re going for a faded film effect and lift your blacks – the effect is created because you’ve bypassed the physical world around us, in which the only time black isn’t black is on old faded film stock.
Wednesday – 07/25/12
How to pick the perfect colorist control surface (for you)
This is a colorist control surface:
It’s a piece of hardware that interfaces with color grading software. It allows the colorist to directly manipulate the image without having to use the mouse. The basic concepts at play here:
- Work faster: You’re no longer restricted to working with a mouse, making one change at a time. With two hands (and up to ten fingers) you can make multiple changes to the image simultaneously. Color grading becomes a much more interactive experience. A surface also allows you to color grade without looking at the computer display and instead focus on the reference monitor and scopes – again, speeding up your work.
- Grade better: The quality of your work improves since the speed gains allow you to either explore more options or make more refined adjustments in the same amount of time.
- Improve earnings: Clients like gear. They like the color grading process to move more quickly. And they’ll often pay more money on an hourly basis if you can cut a day (or two or three) off the color grading budget simply because you’re using dedicated hardware and speeding up your workflow.
I’ve written quite a bit on this subject, as have a few other colorists. Rather than repeat what’s been written I’ll share a few articles with you. After you’ve explored, come back here and I’ll share some basic guidelines on how to select a control surface for your needs.
Understanding control surfaces
- Moving from a Mouse to a Control Surface – This is a journal I wrote while I did my first color grading session using a control surface. I tried to answer two things: Did it actually double my speed? Is color grading software better with a control surface? This article also reviews the JL Cooper Eclipse working with Apple Color.
- How to decide if a Colorist Control Surface is in your future – Not sure if a control surface is worth the investment? This blog post should help you decide.
- Three Color Grading Panels Reviewed – Pros and cons of the Avid Artist Color, JL Cooper Eclipse, and Tangent Wave.
- [Video] First Look: ‘Tangent Element’ – A video review and demonstration of the Tangent Element. See it in action driving DaVinci Resolve.
- ‘Tangent Element’ One Month Review – Beyond the First Look.
- [Video] Avid Artist Color Review – Using DaVinci Resolve.
- [Video] How to turn your trackpad into a control surface using gestures.
Exploring and buying colorist control surfaces
(Note: B&H Photo and Amazon.com are affiliate links. Your price when clicking on these links stays the same but I get a very small commission on sales generated if you purchase after clicking.)
Below is a list of all the major control surfaces that 95% of the people reading this article might consider for their setups. I’ve linked to their home pages as well as two buying sites so you can get an idea of their costs.
Which control surface should you buy?
Good question. Once you decide to buy a control surface, figuring out which model to get can be daunting; prices range from $1,500 to $30,000. Having answered this question many times, I’ve come up with a few general guidelines to help you narrow your choice.
Start with your software
All these control surfaces require the software to recognize them. Either the software has to natively support the control surface or the hardware manufacturer must provide drivers for you to install. Whatever the case may be, start by deciding which software you want to use with the control surface. In many cases that’ll narrow down your choices considerably.
But what if your software can work with all the control surfaces?
Now you’ve got to think carefully about how you’ll use the control surface and how much time you’ll spend on it.
The first question to ask: Does it need to be mobile or will the surface be installed and used in a single room?
For mobile purposes the first choice has to be the Tangent Wave. It requires no external power supplies or USB hubs. It’s plug and play in the best sense of the term. A single USB port is all that you need to get that surface up and running. If you need something smaller, then just buying the Tangent Element’s TK Panel will save you some money and give you the essential control of the trackballs and rings.
The Tangent Element (with multiple panels), Avid Artist Color and JL Cooper Eclipse all need some sort of power (either for the control surface or for a USB hub) to work – making them less useful for road warrior colorists.
How many days a month are you color grading?
That’s the next big question to ask. The fewer days a month you’re using this gear, the less you’ll want to spend on it and the more self-explanatory you’ll want all the buttons to be. Here is my list of recommendations for each control surface:
- Tangent Wave – If you grade less than 7 days per month, this is a panel for you. Almost every button is labelled. It’s tough to get lost on this thing even if you haven’t used it in a few weeks.
- Avid Artist Color – You’ll want to be grading 7 − 15 days per month to get the benefits of this panel. Depending on the software you use, there are quite a few unlabeled buttons and several different ‘shift’ states per button to remember. If you’re not on this panel semi-regularly you probably won’t get fully up to speed on it. The exception is using this on an Avid. Avid has kept it simple and I’d recommend it if you’re grading for 5 or more days a month. The downside on using this with Avid: at the time of this writing (July 2012) it’s extremely laggy and I can’t really recommend it for Media Composer or Symphony. Not right now, not for color grading.
- JL Cooper Eclipse – When I bought this control surface 5 years ago, it was the cheapest on the market. It’s my favorite of all the sub-$8,000 control surfaces. I’m faster on it than on the others. But it’s a bear to memorize all the buttons and all the shifted states. I’d only recommend this to someone grading more than 14 days a month – otherwise you’ll never remember all the buttons. And even then, I think it’s overpriced at the current MSRP of $7,000. If you can buy it used for under $5,500, go for it! B&H is currently selling it for $6,300 which is a much more tempting price.
- Tangent Element – This is the most versatile of all the control surfaces. Every button is labelled, making it ideal for low-usage colorists. And it requires much less button pushing than the Tangent Wave or the Avid Artist Color – making it useful for high-volume grading suites. It’s tough to go wrong with this panel. Note: DaVinci Resolve doesn’t leverage the power of this panel nearly as much as it could, which is the only disappointment I’ve seen with it.
- DaVinci Resolve Panel – This is the custom panel sold by Blackmagic Design. At $29,000 you had best be very, very busy or living off a trust fund. But grading on this puppy is unlike grading on any of the others. Productivity skyrockets. Clients drool. The colorist feels like a magician. It’s good all around. Although you will be surprised at how much menu-stepping there still is, even after having over 60 clearly labeled buttons and knobs at your disposal.
Ethernet vs USB
One deal-breaker for some shops on a shared network: do you have an available ethernet port? If not, then you’ll want to limit your search to the USB panels. Of the panels listed, the JL Cooper Eclipse and the Avid Artist Color both use an ethernet port for communication.
Thursday – 07/26/12
Running a productive color grading session
For my final post in this series (tomorrow is the podcast Q&A), I want to step back from theory and technique to talk about how we interact with our clients. I think it’s fair to say that the most successful colorists (and editors who are skillful at color grading) aren’t the ones who know how to add 10 secondaries to every shot in under three minutes – or who know how to create every Look from every movie ever made. The most successful at color grading are the ones who know how to work with clients and manage their color grading sessions.
Managing a color grading session is an act of communication
And the most important communication is between you and your client. It doesn’t matter how creative or brilliant your work is – if you’re not giving your client what they want, you’re not doing your job.
Read that last bit over again – if you’re not giving your client what they want, you’re not doing your job.
Now, don’t interpret that as, “You must be a button pusher, blindly executing whatever comes out of the client’s mouth.” No. Hardly. I’m saying you must listen to what your client is asking for and then give them what they want.
Here’s the twist: Often what they’re asking for isn’t what they want. But to discern the difference, you’ve got to listen.
The key communication skill of a colorist (or editor) is listening
When a client asks me to pull the green out of the midtones…and I don’t see green in the midtones…I start listening much more closely by asking: “Why are you saying that?” This usually gets them to specify the real problem they’re having – which often has nothing to do with green in the midtones. They’re trying to solve a different problem and offered up an immediate solution, having no idea that they may be making things worse.
Listening isn’t just sitting back passively and executing instructions – it’s making sure you’re solving the root problems the client is seeing. Active listening means hearing when the client is handing you a solution rather than stating their problem. Focusing on their problems will help you move your sessions along much more quickly. It also helps build trust between you and the client.
When you execute their solutions without understanding the problem, you may be making the problem worse and you won’t know it. Since they don’t know what else to do and you’re not offering a solution, they think you’re not good at your job. Bad news all around. Knowing how to listen for root problems that you can solve is a key skill you need to develop. And for that to happen you need to develop a common language with your client.
How to talk Color
I’m not going to spend a lot of time on this subject, since there’s a great blog post on it. I just want to preface that post by saying: very few people are comfortable talking about color, contrast, hue values and saturation. Most of us aren’t wired with that vocabulary.
But most clients are comfortable talking about feelings and emotion. When clients get tongue-tied trying to talk like a colorist, I just ask them to talk about what they’re feeling. If the image feels wrong, what’s that feeling like? If the skin tones are off but they can’t tell you in which direction… does the person look sickly or irradiated? Pale or suntan-in-a-bottle?
- How To Talk to a Colorist – This is a great blog post by Alexis Van Hurkman. Must read. Oh, and if you haven’t yet – . Seriously. If you’ve read this far in this blog series then you are his target market….
- Grading ‘Short Beach’ – Colorist and trainer Warren Eagles talks about his process grading a film over the course of several months. There are many great insights in this post about client relations.
Stay on task.
I’ve written about this earlier in the series but I want to emphasize it again. If you’re grading in an NLE, don’t get thrown off track. Don’t let your client have you start looking for some new B-Roll or start adjusting the edits or re-mixing the audio. Color grading is a visually intensive activity. Constantly switching between grading, mixing, editing and hunting for B-Roll ensures you’ll do a poor job at all those activities.
Instead, work like a professional promo editor who first cuts for audio then fills B-Roll then adds sound effects then adds crazy flashes and film effects and finally adds titles and lower thirds. Sweep through your project focusing on one task at a time. It will greatly improve your finished product and get you (and your client) out the door on time.
Back in my days as editor/colorist, when it was time to color grade, I’d turn, point out where to find the current timecode number and simply state, “We’re grading now. If you’ve got notes for editing or sound – mark the timecode number and we’ll get to it after the grade.”
Setting the ground rules at the start, rather than waiting for them to ask, keeps you from sounding arbitrary. Having a plan and sticking to it is a sign of professionalism. Most clients will appreciate that you have a process.
How many shots can you color correct in a day? Don’t know? That’s a problem – for you, your client and your career. From now on keep detailed notes on how many shots you got through in X number of hours. It won’t be long before you figure out how much time you need to block out for the projects you work on. And then revisit this calculation any time you switch software or upgrade to a colorist control surface or change control surfaces. All these variables change how long you take to grade 100 shots. Keep notes and you’ll be able to accurately estimate how long you need for your sessions.
Leave time for revisions.
At the end of most jobs you need to have time for revisions. Somewhere around 10% – 15% of the time booked for a grading session is usually enough. Just make sure it’s built into the process. Even if all you’re doing is double-timing through the timeline – at minimum, making sure all the renders are good and nothing stands out like a sore thumb is, well, a good rule of thumb.
Got stuck? Move on.
We all get stuck on a shot at some point. It’s that one shot you paused on to spend a few additional minutes to ‘jazz up’ but which slowly turns into a horrible ‘blues ballad’ as your brain starts confusing you. The moment you sense that happening, jot down the timecode number and move on. Come back to it later in the day. I’m frequently amazed at how easily a shot is ‘solved’ if I put an hour or two between me and it.
The conversation continues…
Thanks for joining me here in July on Splice Vine. Blogging for a month about a topic I love turned out to be easier than I thought. I hope you enjoyed it. But just because July is over doesn’t mean you can’t keep learning about color grading.
First, let me point you to two of my resources focused on the professional development of your color grading skills:
- The Tao Colorist Newsletter – I call this the best damn weekly newsletter on the internet. And I’m not the only one. Click through to read about it and what others say about it, including a funny video testimonial from author Steve Hullfish. Did I mention? It’s free!
- The Tao of Color MasterClass Series – Develop your color grading skills (and then sell it to new and existing clients) in this unique hybrid training series. Learn how to use your software while grading a 16-minute short film shot on a Canon 5D MkII. There’s nothing else like it out there for turning color grading into a skill you can sell.
Here are three books that belong on everyone’s color grading bookshelf:
- Color Correction Handbook: Professional Techniques for Video and Cinema by Alexis Van Hurkman: A fantastic book for anyone learning the craft of color correction. Everything I’ve blogged about this month is covered in this book, but with more detail and covering more color grading systems. It’s practically an encyclopedia of facts, tools and techniques that’ll benefit everyone reading this blog series – from novices to seasoned pros.
- The Art and Technique of Digital Color Correction, Second Edition by Steve Hullfish: Just updated, another fantastic book that – like the Color Correction Handbook – will take you much deeper through many of the concepts we’ve talked about in this series. It also includes extensive interviews with the United States’ top colorists, breaking down their workflows and habits to help you understand, develop and refine your own workflows. Also, lots of great stuff on working with clients. A real gem of a book.
- by Dan Margulis: Now in its fifth edition, this is the book that taught me many of the fundamentals of color correction back before Steve Hullfish and Alexis Van Hurkman started writing on this subject. The first four chapters easily translate to anyone doing color grading for still or moving images.
Friday – 07/27/12
Q&A with Patrick – Part 1
This is part 1 of my conversation with Patrick, in which he answered reader-submitted questions about color correction and color grading. Check back on Wednesday for part 2, when I ask him broader questions on the craft and the challenges facing colorists and video editors as we work on projects. –Eric from Splice Vine
You’ve been a writing machine, man, and everyone’s been so supportive and into the series. How do you feel about it as we enter the last phase?
I feel good about it. One of the hooks that sold me on it was realizing that we could turn this into an e-book. Now all of a sudden 20 blog posts isn’t such a bad idea because I can actually flesh out a topic and run through it the way I want to run through it.
And that’s kind of what I’ve done. The entire series, I’ve tried to write with the thought that eventually these would become a compilation and every article would work with every other one, working through the topic.
Right. In a kind of linear way that makes sense, that’s concise and intuitive to people.
Exactly. But I would’ve liked to have done more pictures and screenshots, but I didn’t have the time.
Well, the hope is that the e-book will have more stuff like that, which will make it more attractive to people. At the end of the day, the content – your words – are going to be the same but the idea with the e-book is that there will be a little more content there in terms of the images and links and maybe even some video to play around with. We’ll just have to see how that goes. But I just love the fact that this is this permanent reservoir of information that people can keep coming back to.
Exactly. And I always wanted to try blogging everyday for a month. (laughs) So I’ve now done it. Or every working day for a month. Or I will have almost done it by the time the week is almost out.
What’s that phrase, ‘A page a day, a book a year’?
That’s exactly right. Right? Isn’t that the truth?
How have you been? Have you been happy with what I’ve done this month?
Oh yeah, absolutely! Like I said early on within days of this series starting, I want to go back and redo all of my portfolio.
I just consider my portfolio now like B.T. – ‘Before Tao’.
I just have to tell that to all of my clients. Just give me a few months to re-color everything. And it sucks now because my finished renders are just QuickTime movies that I uploaded to Vimeo. I don’t even have the original project files anymore per se. You know, you do what you can do with what you know, when you know it. But it’s great that moving forward I have a better grasp on fundamentals than I ever had before.
Well, here’s the dirty little secret. Even this past week, I went to a screening of a film I graded last summer. It was the cast and crew screening and I’m watching the picture and I’m like, ‘Oh, really? I did that? Oh shoot. I should have matched that a little closer.’
You know, it’s like – it never ends. The day you look at a project and say it was perfect, is probably the day when you’re on the downside of your career.
Well, I sent you some of the reader questions. It was only a handful or so but I have a few questions for you that will hopefully get to the heart of some things that people did not send us. So I don’t know if it makes more sense to kinda launch into the questions and maybe at the end of you answering these, we can do a bit of a recap or parting notes even though I know we have a couple of days left in the series. But what do you think?
Yeah, that sounds great. Let’s do it.
Okay, so Eugene writes: “Great read. I’d love to read information on how to nail black and white areas and how to get a proper gamma range. I always struggle with skin tones and making the image look as it does in real life.” And your response to him in the comments was, “The problem is that skin tones always live in context of what surrounds them. Skin tones may be technically correct when sampled but can still look wrong. It’s the challenge of the craft. In the end, what matters is how the skin tones look to our eyes.”
So is there anything else you want to expand on with that answer?
Yeah sure. You know, skin tones is a really tough subject. There is this whole thing if you go do a search on memory colors, Stu Maschwitz about a year and a half ago had a series of blog posts on this.
Memory colors are just basically colors that the human animal knows instinctually are right or wrong. The most obvious of which are skin tones. But it’s also with things like grass, coffee beans, the sky.
These are things that we’ve seen so many times that we instinctually know what they look like. So if you really want to get something right, the saying goes, ‘it’s important to get those memory colors right’.
Yeah, I think it sounds a little bit like what you were saying when you were talking about color constancy – the BBC documentary Do You See What I See?, and how a banana tends to always look yellow even under different colored lights.
That's right. Because we imbue meaning to these objects. These are things we see every day. We interact with them. So even though we could be looking at grass just as the sun is setting, or at high noon, or under heavy cloud cover – where the color temperature has moved from red to really blue – the grass always looks green to us.
And that’s because it’s the experience in our brain. I spent the first week on the blog series talking about how we don’t really see the way we think we see.
So skin tones really fall into that. In addition to the notion of memory color, there's this notion you'll often read about, which is the I-line on the vectorscope – the line your skin tones are supposed to cluster around.
And in researching this I found a lot of tutorials talking about how to get proper skin tones, and they always go back to the I-line. I've seen a couple of tutorials where people grade the skin tones right onto the I-line. I think that's a big mistake.
Skin tones are variable. In one of the blog posts I linked to this thing that was also in the Tao of Color newsletter last week: a person doing a Pantone skin colors project. She's taking headshots of people, assigning them Pantone colors, and coming up with this universe of human skin color.
They're all over the place, man. They go from red to pink to this beautiful kind of tan to orange. They are within a relatively defined vector on the vectorscope, but they don't all sit on the I-line.
They’re clustered in this group around it. So the thing about skin tones is there is a general variability between human beings so you don’t want to line them up so that everybody in your show has the exact same skin hue.
It's okay if somebody looks a little red or a little pink and somebody else looks a little more yellow. That's okay. So I tend not to obsess over skin tones. There's this great book – a fantastic starter book, because you're just working with still images – and the first 4 chapters can be applied to any piece of software where you're manipulating real images.
In the first chapter he presents an image that he shows to all of his audiences when he's teaching. It's one picture with 4 different sets of skin tones, and he asks people to raise their hands: A, B, C or D – which one looks correct to you?
Two of them, everyone rejects as not looking natural. One of them, 80% of the audience looks at and says, 'oh yeah, that's a natural-looking skin tone'. And the fourth one, 20% of the audience looks at and says, 'oh yeah, that's also a natural-looking skin tone'.
So you've got this 80/20 thing going on. I'm one of those people who falls into the 20%: that fourth picture tends to be red, and when I look at it, to me it looks like a natural skin tone.
And it took me 5 or 6 years to get to the point where I look at that picture and even though it looks good to me, I recognize that it looks too red, and I need to back off of that.
When I get really tired, what’ll happen is that I’ll look at a skin tone and I’ll say, ‘yeah, that looks really good,’ and then I’m like, ‘wait a second. If it looks really good to me, it’s not right.’
So I back it off. Getting skin tones right can be a struggle. It took me a couple of years to understand my bias. And the only way I learned my bias was:
- 1) Reading a book like that where he points it out and I realize immediately, I’ve got a little bit of a problem; and
- 2) Just working on jobs with clients and letting my clients help guide me.
I know in the back of my mind that I've got a problem with skin tones – it's not that I'm wrong, it's just that the majority of people will see it differently than I do. So for many years, I just let my clients guide me and I'd re-evaluate what I thought looked right against what they thought looked right.
What do you do when your client comes in and says, ‘Hey, I look too pale – can you give me a little bit more color?’ What do you do when you start to get the backseat driver in regards to skin tones?
I don’t mind the backseat driver, I like the interaction.
I want my clients to let me know what they're seeing. If they make that specific request – that the skin is looking pale – my first instinct is to add saturation before trying to push color into it through the hue wheels.
I want to see what's naturally in the shot first. Even if by pushing saturation I blow out, let's say, the grass – it gets really hyper green – but I get the skin tone looking nice and rich, I'll let that happen and then go back and isolate the grass to desaturate it, or isolate the skin tone so that it doesn't affect the grass. So I'm not against letting something blow out, because I know that I've got the tools to pull it back in.
But yeah, you want to get your skin tones right. If a client says that the skin tones don’t look right to them, that’s a big flag, and that’s something you’ve gotta deal with.
Yeah, it’s good to know. The whole I-line on the vectorscope thing has never really quite sat right with me. There is this array of skin tones. It’s kind of like the body mass index (BMI), right?
There can’t just be one, people are different genetically and what have you.
If you’re really interested in this subject another place to check out is the Color Correction Handbook by Alexis Van Hurkman, which I recommend to everyone.
He’s got a picture in here of a collage that he put together from magazines. The collage is of about 20 different models of different skin tones.
He put them all together into a single image, looked at it on a vectorscope and showed you that it's a bulge. There's no line – the line is just the center of a balloon that wraps around it. This bulge in which skin tones can live ranges from somewhat saturated to relatively desaturated, and sits on both sides of that line as well.
So, you just want to be in the general area – you don’t need to obsess. I’ve seen people where they start to obsess over hitting that I-line and that’s completely missing the point.
Okay, next question. This is from Elieser Jairo. He wrote: “I would like you to answer some questions about ACES color space. Why and what was it created for? In which situations should I use it?” Patrick, your response was, “The technology is new and not worked out for most of us. That said – Eric and I will talk about it in the podcast at the end of the blog series.”
Yeah, and I need to put a reply on there with some links for him. Mike Most at mikemost.com is a great colorist and a great technologist, and he's been playing with ACES for a while. He did a series of posts about a year and a half ago on what ACES is and why it's important. That's a great place to start. He'll explain it way better than me.
The whole concept of ACES is that it’s pretty much the color space of human vision. We have all these different color models, HD, sRGB and all these things and finally someone said, ‘why don’t we have a color space that’s as big as humans can see?’ ACES has a bunch of different profiles for acquisition technology like cameras.
How do they map themselves into this big ACES color space? They define what their color space is within ACES, and that gets mapped into a generic ACES color space. And then if we're going into, say, DaVinci Resolve, that app will look at that and say, 'alright, if we need to get into HD, I know how to get from ACES into HD.'
Therefore, if the camera can go from whatever its recording color space is into ACES, and Resolve knows how to get from ACES to HD, it can then map that specific camera to the HD color space.
So it’s kind of like getting to neutral ground. And depending on where you are in the chain, if an inkjet printer is trying to print out an image taken on a camera, it can take a look at that ACES color space and say, ‘okay, this is what I need to do to create proper reds, greens and blues from ACES. This image gives me an ACES profile so now I can reliably print out that image, that will print out in such a way that matches your TV.’
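To make that "neutral ground" idea concrete, here's a toy sketch in Python. The two matrices are invented placeholders, not real ACES input/output transforms – the point is only the shape of the pipeline: a device-specific transform into the common space, then a display-specific transform out of it.

```python
# Sketch of the ACES "neutral ground" idea: every camera defines a
# transform into ACES, and every display defines a transform out of it.
# These matrices are made-up placeholders, NOT real ACES transforms.

def mat_mul_vec(m, v):
    """Multiply a 3x3 matrix by an RGB triple."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# Hypothetical Input Device Transform: camera-native RGB -> ACES
CAMERA_TO_ACES = [[0.90, 0.10, 0.00],
                  [0.05, 0.90, 0.05],
                  [0.00, 0.10, 0.90]]

# Hypothetical Output Device Transform: ACES -> HD (Rec.709-style)
ACES_TO_HD = [[ 1.10, -0.05, -0.05],
              [-0.02,  1.05, -0.03],
              [-0.05, -0.05,  1.10]]

def camera_to_hd(rgb):
    """Camera RGB -> ACES -> HD: the grading app only needs ACES."""
    aces = mat_mul_vec(CAMERA_TO_ACES, rgb)
    return mat_mul_vec(ACES_TO_HD, aces)
```

The payoff is the one Patrick describes: the camera vendor only has to publish its transform into ACES, and the display side only has to know how to get out of ACES – neither needs to know about the other.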
So that’s the big hairy ideal of ACES. Not all of the pieces are there. Depending on your pipeline, ACES is doable or not. There’s no reason why you can’t play with it.
I know guys who are trying to work with ACES just as a challenge, with varying degrees of success. That's why I didn't really include it. It's not at the point yet where, for the type of blog series this was, it would add anything but unnecessary confusion.
It’s not a mature technology or system yet?
Yeah, they’re still working it out. I think they’re finally defining late this year some of the key protocols and transforms and math behind it all.
So as this stuff slowly gets developed, what happens is they develop another piece of the puzzle but then they have to wait a year as people try it out and they get responses back and they make tweaks and they see what will work and what will not.
DaVinci Resolve, Baselight – all these guys implement ACES and they're part of that overall discussion, so you can play with it if you want to. But you don't really need to at this point. And if you don't understand the underlying technology at a relatively deep level, you can actually get into trouble.
Well let’s move on to the next question. This is from Chris Culp. He asks: “How long do you give yourself to grade projects? With SO MANY things to consider in a given shot, I feel like I can easily spend 5-10 minutes on each one! However in an NLE it’s feasible to typically color correct (as opposed to grade) a 30-minute show in under a day. Also, do you give yourself more time for details on higher quality productions?”
They all kind of come back to the same thing, which is: time. (chuckles) How much time is in the budget and how much work do you do?
Part of it is that you have to know it yourself. So my number for how long it takes for me to go through a show may be very different from your number. And it can be impacted by not just how fast I work but the type of gear I’m using or the type of software.
My number changes depending on if I'm color grading in FCP X with a mouse, compared to if I'm color grading in DaVinci Resolve with a big DaVinci control surface, compared to if I'm grading with a Tangent Wave panel. The numbers change depending on the tools, but I'm still the same constant.
So it’s important that you understand what you can get through in a day. For instance, yesterday I graded a 22-minute show with 620 shots and that took me 9 hours.
Every interview setup, I was doing vignettes and I was doing some face grades by pulling out the face separately from the rest of the image and tracking to it. I was doing that as well as fixing all sorts of problems: white balance problems, color temperature issues.
So I'm giving every shot its due. Sometimes you get a client that says, 'You're too expensive for me. You say it'll take you 2 days – so can you do this in 1 day and just do less?'
And my answer to that is I don’t know how to do less.
What I say is that I can work quicker, and I can work within a budget. But I can't start out by thinking that I'm not going to do a vignette or not do a nice interview setup. I can work pretty darn quick when I want to.
It’s not that I’m dragging my feet when I say it will take 2 days to grade something, it’s just that I’m doing more detailed work. So, yeah – I adjust what I do based on the requirements of the project.
You know it’s the common thing where the client says, ‘I’ve got a 30-second spot, how long will it take you to grade?’ Well, I know some guys at commercial houses that will spend 3 days grading 20 shots. And then there are other projects where we’ve got 9 hours to grade 600 shots.
It depends on what’s going on with the shots.
Yeah. And on a commercial thing, the moment you start handing me Pantone color swatches that I need to match, where the brand colors are gone over with amazing meticulous detail, just managing that itself can add a half day. Once you get the basic grade down it’s like, ‘all right, now these colors are slightly shifting all over the place.’
Plus, there’s the whole notion of how we see, which, depending on the background and the shot, it may change our perception of what that color looks like. So now even though the Pantone may match, they may look wrong. So now I’ve gotta adjust for perception.
So there’s all of this stuff going on. There is no one right answer as to how long it should take to do a job. It’s how long can it take you to get it done and budgeting that correctly.
The question of time really comes down to budget. The last thing you want to do is budget a day and then it takes 2 days and you have to ‘eat’ a day.
That’s the worst case scenario.
The way to get to the point where you don't do that is to understand how long it takes you to do something. One of the things you've got to nail down, as an editor or a full-time colorist, is an understanding of yourself and how long it takes you to grade a shot.
And that’ll change depending on the hardware and software that you’re using.
And I recommend to people, in terms of how long it takes them to do just about any kind of creative procedure, to break out a spreadsheet.
Have a kind of general template in terms of like, ‘I know if I’m going to do this kind of Look or this kind of composite it typically takes me this long.’ It really does help you accurately spec out the budget and how long it’s going to take you to do something.
Because the last thing you want to do is, like you said, underestimate how long it's going to take. You don't want your client to feel like you're charging them overages because you couldn't figure out how long it would take to do what they need you to do.
Yeah and oftentimes when it comes to overages, the client will say, ‘Well, what did we do wrong?’ And if you can’t answer specifically, like say, ‘Well, I budgeted this amount of time but I wasn’t aware that we had to do all these other things.’ If you can’t answer them that specifically – you’re eating that overage.
If it’s specifically because you didn’t calculate properly the amount of time, chances are, that’s on you.
Yep, that’s on you.
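That spreadsheet-template idea can be sketched as a few lines of code. This is a hypothetical sketch – the task names and per-shot minutes below are made up; you'd substitute averages measured from your own past jobs:

```python
# Per-task averages you've measured on past jobs (placeholder numbers).
MINUTES_PER_SHOT = {
    "basic_balance": 1.5,  # contrast + color balance only
    "shot_match": 3.0,     # matching to surrounding shots
    "secondary": 5.0,      # vignettes, isolated skin/grass fixes
}

def estimate_hours(shot_counts, padding=1.2):
    """Total estimated hours for a job, padded so overages stay rare."""
    minutes = sum(MINUTES_PER_SHOT[task] * n
                  for task, n in shot_counts.items())
    return round(minutes * padding / 60, 1)

# e.g. a show with 400 balances, 100 matches, 20 secondaries:
# estimate_hours({"basic_balance": 400, "shot_match": 100, "secondary": 20})
```

The padding factor is the part that keeps you from 'eating' a day: you quote the padded number, and your history of real jobs tells you whether your per-shot averages are honest.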
And the other thing, too, is this tendency from clients who are watching a scene down for color.
All of a sudden they hear something in the edit or they see something on the screen and they say, 'You know what? There was some B-Roll I wanted to put there,' or 'I don't like that sound effect – can we try sound effect B from that sound effects library?'
The initial inclination, especially the first time this happens to you, is to say, 'Yes, let's go do that.' What I tell my clients is – 'No.' (chuckles)
We’re not going to do that. Here’s the timecode number, take notes.
Because we’re color grading now.
I’m using different senses and I’ve got a different focus when I’m color grading than when I’m editing for audio or when I’m editing for timing or when I’m looking for B-Roll.
I’ve got different processes that I’m using in my brain, and one way to keep yourself on time is to keep yourself on task. So I tell clients in those situations when I’m grading in the NLE – which isn’t very much anymore – I’d start out the day, start out the session and say, ‘Look, if you’ve got audio notes or picture notes that aren’t related to color you need to write down the timecode, write down the note and we’ll get to it at either the end of the day or at the end of this process.’
That’s how you bypass it. That way you don’t look like a total ass the first time they ask you and you say no. If you warn them upfront before you even start they will totally appreciate that and you’re being professional about it and not being contrary at that point.
So the last reader-submitted question is from Victor Lei. He asks: “What color correction tool in AE could I use to make the subject’s skin color look lighter?” And he is referring to an image that he sent us, which is a screenshot of talent on a virtual set. He wants to isolate her skin and just make her lighter, and not bring up the whole luminance value or lightness of the scene surrounding her.
I don’t do a lot of color-correcting in AE simply because it’s slow.
It’s great for the kind of self-directed indie filmmaker who’s not on a deadline. The DV Rebel is great and I love that approach and I love that philosophy, but the DV Rebel doesn’t have a client sitting behind him and a looming deadline that he has to meet.
So AE isn’t the best tool for client-directed workflows but to actually answer the question, as opposed to saying, ‘you’re in the wrong tool’:
- 1) AE comes with Color Finesse, which is a full-blown color grading app, so that would be the first place I'd look. Just before Final Touch was bought by Apple and re-released as Color, I was looking at using Color Finesse as my main grading tool, because they'd come out with a standalone version of it. So it's completely capable of handling most of these color-grading tasks. But it does have its own interface, which is a bit of a unique approach to color grading.
- 2) You could go the 3rd-party plug-in route, which is a great option if you do a lot of this stuff. There's a lite version – it's free. It's not quite as powerful, but it should do a lot of this stuff.
- 3) There's a couple of plug-ins that deal with skin tones. They know where skin tones populate on the vectorscope. They'll help you isolate those skin tones and do smoothing and softening, as well as some lightening and brightening.
But really, when it comes down to manipulating and controlling skin tones, what we’re talking about is secondary color grading.
In the blog I talk about how secondaries are nothing more than isolation – all you're doing is isolating. With skin tones, you're isolating by hue and saturation. You're working within that long balloon around the I-line on the vectorscope – from mostly unsaturated to a rich-looking skin tone, on either side of that line.
That’s the area that you’re looking to isolate and once you isolate that area then you can manipulate it with your color grading tools to brighten or whatever you might want to do.
Colorista has built-in stuff for that and the skin plug-ins all are essentially keyers that are tuned for skin tones and allow you to manipulate the skin tones once you’ve isolated them.
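As a rough illustration of what such a keyer does under the hood, here's a toy per-pixel version in Python. The hue and saturation windows are illustrative guesses, not a calibrated skin-tone key like the commercial plug-ins ship with:

```python
import colorsys

# Toy secondary: isolate by hue + saturation (a keyer "tuned" for skin
# tones), then lighten only what qualifies. The windows below are
# illustrative guesses, not calibrated values.
SKIN_HUE = (0.02, 0.11)  # roughly the red-orange band, hue in 0-1
SKIN_SAT = (0.15, 0.75)  # mostly unsaturated through rich

def lighten_skin(pixel, gain=1.2):
    """Return the RGB pixel, with value boosted only if it keys as skin."""
    r, g, b = pixel
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if SKIN_HUE[0] <= h <= SKIN_HUE[1] and SKIN_SAT[0] <= s <= SKIN_SAT[1]:
        v = min(1.0, v * gain)  # brighten inside the key only
    return colorsys.hsv_to_rgb(h, s, v)
```

A skin-colored pixel gets brighter while a blue background pixel passes through untouched – which is exactly the "isolate, then manipulate" idea: the key does the isolating, and any grading operation can follow.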
Q&A with Patrick – Part 2
So I have a few more general questions – things that I've been wondering about. I'm glad I have the opportunity to talk to a colorist on these matters. So first question: if you're in a corporate or indie environment, how do you make the case to clients for the critical need to build time in the budget for at least a primary color pass? Specifically, how do you put it in terms that they'll understand? Like, it will save them money, or it'll make the project less stressful, or it'll give the video more impact. What can you say to them so that they can wrap their head around the importance of color correction?
Yes, it comes down to educating your clients. The first thing you’ve got to do is listen to them. What is their big objection? So you have to start the discussion in the first place.
Sometimes it’s time. Sometimes it’s like: ‘well, we don’t have time to get it into DaVinci Resolve, because it’ll take too long to round trip and we’ve gotta prep the timeline.’ Sometimes the objection will be money. Like: ‘we just don’t have it in the budget to add another two days to the color grade.’
Well, even if I’m just in Final Cut 7 using the 3-way color corrector, I’m like, ‘Can you at least give me an afternoon to do rudimentary color correction, as opposed to 30 minutes?’ (laughs)
I think my approach on that is production value. It’s like, ‘look, you’ve already spent all this money. Give me a half day – we’ve been working on this for x number of days, you spent x number of time shooting this – let’s at least get all the production value. All the time and effort you spent on this – that’s what color grading is all about; it’s production value.’
To some producers, that really resonates with them. It’s about getting everything to look right. It’s about getting everything to match. Do we want this to look like basic cable or like network primetime? And no one in their right mind chooses basic cable when presented with that question.
That's what we're talking about. Color grading is going from basic cable or YouTube to network primetime. Whatever amount you can afford to do is better than nothing. If you've got a 10-minute project that has 300 shots, and the best you can do is add a 3-way and spend literally 30 seconds on each shot, it's better than nothing. It's all about production value.
It's about eking whatever last bit of quality you can out of a project without blowing the budget. And it'll make it look like they spent more money than they did. It makes them look good.
Your calling card, in the end, is, you know – what’s it look like? What’s it sound like?
That's one of the discussions I find helps producers wrap their minds around it. Oftentimes it's not even them, it's their boss who needs to be convinced. So sometimes the thing you have to do is prepare the producer so that they have the argument in their head and can then go to their boss and say, 'Hey listen, let's release a couple hundred extra bucks for this guy to spend another five hours just on the color grading.'
For me, production value is a big one. I wouldn’t get into color constancy – you don’t want to throw any big words around – but you do want to emphasize that this is all about evening everything out so that the audience is only focused on the message.
The only thing you want the audience focused on is the message. Especially corporate communications – it’s all about the message.
The other thing is, you know – if the CEO looks like crap, it’s like, ‘Really? You want the CEO to go out looking like this?’
Sometimes it’s a difference of pulling out that one shot, doing a quick color grade and saying, ‘Okay, look at the difference and tell me if he doesn’t want me to spend four hours to make all the shots of him look like this.’
That’s a good idea.
(laughs) You know, so that’s another way. They always want to look good, especially if the boss’s boss is in the video.
Well tell me, what are our options as editors, when we’re asked to do color correction but we’re in a less-than-optimal environment? You know, bad ambient lighting, strong surround fields, non-calibrated reference monitors. As a freelance editor, I’ve been put in cubicles! (laughs) – you know, in ‘cubicle land’ to do stuff, and it’s crazy-making! I won’t even talk about what that means for my sound mix… But what do you do about that situation from a color standpoint? What can you do? Other than say, ‘Hey, I need $10,000 for a better room?’
(chuckles) Well, you know – sometimes it’s just asking to be put in a room with no windows. And then you bring in a small task light and throw it up behind the monitor if you can – if that’s even feasible.
This comes down to just making do with what you've got, right? I mean, no matter what, no matter your viewing environment, at the very least – if you're doing shot-to-shot matching – if everything is off by being too green, at least it's going to be consistently off.
And that automatically adds a level of professionalism to what you’re doing.
Part of primary color grading to me is really the shot-to-shot matching. As far as I’m concerned, that’s really what I get paid for. Clients will tell me it’s to fix color balance – no, not really. The only reason you want to fix the color balance is because it doesn’t really look like anything else. It’s shot-matching that really sells the craft.
So if you’re in a terrible environment, just understand that, yeah, it’s not going to be ideal, and when you bring it home and look at it on DVD, you might be disappointed with what you see. But they’re used to all of their projects looking like this. For every project, their freelancers are in the same stupid cubicles, with the same terrible lighting conditions, so you’re no worse off than anything else they’ve ever had to judge coming from that environment.
Of course, that then explains why they want to go out of house – all of their stuff looks like crap. The out-of-house colorist has his room set up correctly. But if you were to spend $2,000 to give your internal guy a room that's also properly set up for color grading, you could probably save yourself $20,000 by not going out of house.
Yeah, that brings me to my next question. A lot of companies get the idea of either going out of house for post audio or I’ve even been places where they have an in-house Pro Tools suite. But the idea of sending advanced color work out of house or having a Resolve suite – it never crosses their mind.
You know, the first thing I’d say is, ‘Look, it’s Pro Tools for video.’
(laughs) There you go.
Right? It’s Pro Tools for video. And it doesn’t cost a lot of money. It really doesn’t.
So, once you make your initial investment on the Pro Tools suite, you're done. Right? Make an investment on a color grading suite, and you're done. A couple pieces of gear, and that's it, you're done.
I think that’s how you present it. You’ve got to educate your superiors or the decision makers. And one way to educate them is to put it in terms they understand. It’s audio mixing for video. Pro Tools for video.
I think you need to trademark that.
Well tell me this – what strategies can video editors or colorists use to insert themselves in the pre-production process as often as possible? Do we start hanging out with the DP and the head shooter and start taking them to lunch? How do we get involved in the project early so that we’re not brought in once everything’s been shot, on location, halfway around the world, and we’re kind of stuck with what we’ve got?
It’s something I’m still working out, and it evolves.
The thing is to have a good relationship with producers and the DP. And it depends on the project. On some projects, the DP has no say in the matter, so it doesn’t matter if he’s your best friend, he can’t give you the project because it’s not his to give.
Producers could be really good resources – to have a couple producers you’re really tight with and start working some projects with them.
And then when you hear of a new project, it’s like, ‘Hey, why don’t you bring me in on the pre-production meeting with the DP?’ In the meeting I won’t say anything. I’ll only talk if something doesn’t sound right that might affect us on the backend. I’ll just raise my hand. I’m only there to spot problems when it comes to the finishing side. But yeah, when I am brought in on these, I don’t like imposing.
I might have a DP or producer ask me, ‘What’s your favorite camera?’ I don’t have a favorite camera – I have favorite camera people. Cameras I couldn’t care less about. Even, to a certain extent, codecs I couldn’t care less about.
If it’s properly exposed and properly managed, that stuff all falls away. The only time it counts is when a shot is bad – when it was shot incorrectly. And that’s not the camera’s fault. So when DPs talk about their favorite camera – I’m thinking, ‘I just want you to light the damn thing right.’ (laughs)
(laughs) Right. Here’s a question and it’s a little bit off from what we’ve been talking about, but I’ve read at varying points that women generally perceive colors better. If that’s the case, why does it seem like there are so few women colorists?
I don’t know. I’ve often wondered the same thing myself. I think women would make excellent colorists.
It might be that their color acuity is so high that they have trouble. It could be because of the number of men in our business, with this being a male-dominated business and all. And this is coming from my wife, who is in this business and has told me quite frequently that it's male-dominated.
But a woman colorist might be obsessing over something that the guy behind her just can’t see.
Oh, wow. She’s on another level, literally.
Yeah, that’s a possibility. I’ve never sat in a session with a female colorist so I couldn’t tell you. But that might be a possibility.
They may have to train themselves to be a little less fussy. Or if they’re being fussy, not to let on that they’re being fussy.
Yeah, I don’t have a good answer for that but I think it falls somewhere in that line. It’s like I said with skin tones. If I let a client guide me on what they’d like with skin tones – you know, a good colorist has to be able to do that, and women who see with better color acuity might have to be trained into that philosophy.
That's really all I have. What would you say that you haven't said already?
(laughs) Oh boy! It’s been a long conversation… (laughs)
Yeah, we could talk about this for hours, obviously. This is essentially a Q&A, but is there a way to kind of succinctly sum up what you’ve talked about this month? Should we even try? (laughs)
Well, yes a couple things I want to say. Number one is once you get beyond the beginner stages, you’re doing a lot of this stuff simultaneously.
I’ve started recording my workflow to be able to do a postmortem on how I grade.
For instance: contrast before color. It’s a basic concept that I talk about in the blog series. But when I’m grading, when I’m in the groove, I’m doing both at the same time. I’m fixing color balance problems at the same time that I’m working on the contrast.
There’s this great video on the web of a guy teaching a golf swing, and he breaks down the golf swing into 16 steps. It’s absurd, the number of steps this guy breaks down. And I’ve never seen a golfer actually swing that way.
But the point isn’t to swing in 16 small steps. The point is we learn the 16 steps so we can isolate the 16 steps. We get rid of bad habits, learn good habits, and then we put it all together so it’s just one swing.
And so we've got 21 daily posts, of which probably 16 are our 16 steps of color grading. But when you put it all together, it should not look like 16 steps. At best, there's the backswing and the front swing – whatever the hell they call it.
Right? And then there's the follow-through. It's like three things: your backswing, hitting the ball, and then your follow-through.
The backswing for us is primary color grading. The front swing is shot-to-shot matching. The follow-through is the secondaries. And that's it – that's really all this is, and it's really not more complicated than that.
But, in each of those steps you can break them down to further understand what goes into each one because talking about it broadly like that doesn’t help anyone become a better colorist or better at color grading.
In terms of talking about this month long color-grading series, that’s my big take away: it’s not as complicated as it seems.
Right, and again, the point of the whole thing was to talk about the fundamentals – for this to be a ‘quick guide’ for people to wrap their head around color correction and color grading. And if you want to get more into the nuance and the detail, then definitely, we both recommend Alexis Van Hurkman’s book and I think Steve Hullfish has a book as well?
Yes. And you know, when you look at this color-grading series, I consider it to be a start to looking at this stuff. That’s it: a start.
Like I said, we could break this down into 16 steps – and in the Color Correction Handbook, Alexis Van Hurkman does that. Fantastic resource. I haven't read it in a couple of months, but I'm getting ready to re-read it because I'll be using it as a textbook in the fall for a university-level class I'll be teaching. Everything that I've written about, he breaks down even more. Terrific, terrific book.
The other book I recommend is Steve Hullfish’s The Art and Technique of Digital Color Correction – the second edition just came out, so get that one; there’s more in it. The beauty of his book is that he talks to working colorists and gives them all the same projects to grade – and each of them approaches the work differently.
They all make different choices, and there’s a tremendous amount to be learned by giving everyone a baseline to work from using the same software, the same control surface, the same images, and then recording how they approach these shots differently.
That’s always fascinating.
It really is. I highly recommend these two books. If you really want to get beyond throwing the 3-way color corrector onto your shot and moving on, and you really want to think about turning this into a marketable skill that can help you get more jobs, make more money – then you’ve got to read those two books. They’re essential reading.
Well, I think that’s going to do it for us, Patrick. I definitely want to tell people to check out the Tao of Color. I’ll let you plug that and let them know what you’re doing there.
Right, TaoOfColor is my training website. I started training people in 2006, doing small presentations to user groups, and then out of one of those presentations grew the whole concept behind the Tao of Color. And the concept is simple – I mean, we could talk about this stuff all day, but it’s a matter of execution. It’s a matter of doing it. You’ve got to do it to learn it.
So not only do I teach you the DaVinci Resolve interface in my DaVinci Resolve master class, I give you a 16-minute project, 230 shots, and we go at it. I give you something to work with, something that I graded professionally, something that I know is challenging but doable, and something that I know will teach you the core skills.
I really believe that the core skill of any colorist is shot-matching. The film I provide is one huge exercise in shot-matching. There’s more stuff that’ll be coming out later this year once I wrap up the Resolve master class – that’ll go more into Looks and things like that. But even before we get into Looks – if you can’t match your shots, your Looks are going to look like crap.
Tao of Color is a training platform that is there to teach people how to do what I do, to eventually increase their rates and get more jobs.
And we definitely want to tell people to sign up for his newsletter – it’s great.
He loves the newsletter. And it’s free.
Yeah, I call it my ‘Sunday New York Times for the Industry.’ (laughs)
(laughs) From the very beginning, that’s exactly what I wanted it to be. I’m like, ‘You know what? I don’t want people to wake up and read the Times, I want them to wake up on Sunday morning, with their morning coffee and read the Tao newsletter.’
(laughs) Well, it’s been a great month. Before you, we had Mike and Colin from Divergent Media doing a whole month on video compression. That was great. They’re the guys who make ClipWrap and ScopeBox – and I think you use ScopeBox, Patrick. And next month, for August, our Workflow Whisperer is going to be Jeff Foster. He’s the PixelPainter and the guy who wrote The Green Screen Handbook, so he’s going to be talking for the whole month about keying and matting. Like what we’ve done with you and the guys from Divergent Media, he’ll be covering the fundamentals in this kind of linear way so that, by the end of the month, readers will have a strong foundation in keying and matting.
I’m looking forward to seeing that myself.
Well Patrick, thank you so much.
Thank you for having me, I’ve greatly appreciated the platform.
Like what you’ve been reading in this series?
I invite you to check out my awesome (and FREE) weekly newsletter: The Tao Colorist. It focuses exclusively on the art, craft and business of color grading – linking out to the best blogs, forum discussions and news articles (plus some funnies) of the previous week.