Jeff Foster: Keying / Matting 08/05/12
Every month, a daily progression of fundamentals on a topic.
This month Jeff Foster, author of The Green Screen Handbook, talks about the dark art of keying and matting. Primarily from a post perspective, he condenses the essentials of his handbook – complete with links and many of the book’s illustrations and screenshots.
As an author, producer, and VFX artist/compositor, Jeff has served as an instructor and featured speaker at conferences such as Photoshop World, Macworld, Adobe MAX and the NAB (National Association of Broadcasters) Post-Production Conferences. He has been producing and providing training for traditional and digital images, photography, illustration, motion graphics and special effects for DV and Film for over 20 years. Some of his clients include: Tribune Broadcasting, Motorola, McDonnell Douglas, Nestlé, FOX Television, Spike TV, Discovery/TLC, Deluxe Digital, Universal Studios, Lions Gate Films and Disney.
More information about Jeff and his work can be found on his website PixelPainter.
We’ll be tweeting links to these tips out daily, so make sure to follow us if you don’t already.
TABLE OF CONTENTS:
WEEK 1: History & Definitions
WEEK 2: Basic Compositing Techniques
TUES: Layer Masks
FRI: Best Practices
WEEK 3: Making the Composite Believable
(aka Selling It)
TUES: Simulated Rack Focus
WED: Motion Tracking
THURS: Light Wrapping
WEEK 4: Fixing Problem Green Screen Shots
MON: Poor Lighting
TUES: Difficult Spill
WED: Transparency Issues
FRI: Q&A with Jeff Foster
Monday – 08/06/12
History of Compositing & Matte-Making – Part 1
The earliest compositing or matting techniques were done with just luminance: basically characters against a white background. Frank Williams was the first person historically recognized for a patented process. He lit subjects against a black background and duplicated the footage onto high-contrast film in some of the earliest recorded composites.
Most notable were the first Invisible Man movies by John Fulton. As a kid I was enthralled with them. They were so cool! Fulton’s process involved shooting on a black background, but getting the “invisible man” part required putting a velvet black cloth on the actor’s face and skin.
The actor would have black gloves and a black hood over his head, under his shirt and hat. He’d have sunglasses on with a wrap around his face. They would use the black of the negatives to then create their matte to make a film-to-film composite. That’s how that effect was done.
A little earlier, Walt Disney had been experimenting with a series of shorts with the Alice comedies. He kept pitching these shorts to studios and they kept turning him down, but he created them anyway. The shorts eventually got picked up as an ongoing series that would play in movie theatres before the main features.
He would shoot all of them on a white background with his characters in the foreground and develop that and make the negative a positive and then shoot that against animation. He would have a single character – mostly the girl, Alice – acting out and pantomiming scenes. The animation would play underneath that as he would re-shoot this new composite.
This is where Ub Iwerks comes into the scene.
He was the top animator for Disney. He and Walt worked on a lot of projects like this. He helped create Oswald the Lucky Rabbit, which eventually became the inspiration for Mickey Mouse.
Just a history sidenote, Ub wanted more notoriety than he was getting at Disney, so he kind of went out on his own but eventually came back to work for Disney. There was a short period of time there that they had a little falling out.
But Disney and Ub kept working together before they had their split. The Three Caballeros was an advanced version of their compositing work, long before blue screen and green screen. The film was all done with actors in front of an animated rear-projection screen. This is when Linwood Dunn started using optical printers to combine this kind of footage with animation – a technique the military also used during World War II. Dunn was most noted for the award-winning Astaire-Rogers musical Flying Down to Rio, which had some fun effects of dancing girls on the wing of an airplane.
Compositor Larry Butler (The Thief of Bagdad, The Old Man and the Sea) was trying early versions of the traveling matte process using a blue screen, with limited results. If you’ve seen The Thief of Bagdad, you’ll agree that it’s really cheesy. You can see a lot of blue fuzz around the main character.
Tuesday – 08/07/12
History of Compositing & Matte-Making, Part 2
People were using a sodium vapor light on a white screen and, of course, sodium lights are yellow (think street lamps). They were using sodium vapor light to expose a secondary black-and-white film at the same time that they were exposing the primary footage. This was primarily happening in black-and-white movies.
Petro Vlahos, a member of the Motion Picture Research Council, came up with a system that took the frequency of the sodium vapor – with a very narrow bandwidth – and created a special prism with different coatings that would extract the sodium light and send it out in one direction, with the rest of the bandwidth coming through the prism and exposing the running footage.
The feasibility of using ultraviolet (which also has a narrow bandwidth) was also explored, but in experimenting with it, they found that it was too close to other parts of the spectrum.
The sodium light would bounce through the prism and go to another roll of film.
Petro had to create a camera that had two running rolls of film in it. One would expose the negative and one would expose the positive. He would then be able to sandwich the two and pull a matte from the sodium light. Petro patented the prism (in addition to owning the only one in existence) and the technique of creating this sodium vapor system.
Ub Iwerks (who was back at Disney, working in the UK where they were filming Mary Poppins) was also experimenting with this sodium vapor system – but he didn’t have the prism, and had to pull the matte differently. Apparently with this process, Ub was losing two stops of light.
On film, that’s a lot. A real problem.
The only way Disney was going to be able to make the sodium vapor process work was to buy Petro’s prism and use Petro’s technology. They did just that, and Ub ended up designing a camera specifically for that prism that made it work.
Ub is often erroneously credited with inventing the process. In truth, it was Petro Vlahos who sold the technology to him and gave him the rights to use his patent.
Ub continued with this process all the way through Pete’s Dragon, the last known film that Disney created using the sodium vapor process before they moved on to different technologies.
It worked really well for B&W. They did The Absent-Minded Professor and other films in the 60s and 70s using this method. Most of them were done in the UK, and Ub produced them all. The sodium method was the foundation for the color difference traveling matte system.
My colleague Les Perkins did a mini documentary of Disney’s productions called Brazzle Dazzle Effects that appears on the Pete’s Dragon special edition DVD. He’s helped me a lot on this part of the Disney history.
Paramount had already tried, without complete success, to pull out the blue and create some kind of traveling matte for The Ten Commandments. That movie also had blue edges around actors, and there is certainly some shoddy work in there by today’s standards.
When MGM started planning the production of the 65mm film Ben-Hur, they contacted Petro. By this time Petro was well known in the industry and working for the Academy. MGM was having trouble with blue mattes and needed his help.
Petro did the math and found just the right spectrum. With this new technology in place, they shot a test on the set of a western. Much to everyone’s disbelief, Petro had actor Glenn Ford smoke a cigar, claiming that he would be able to matte the smoke.
It was unheard of.
You could barely get an edge around a human on a blue screen, let alone extract smoke.
Sure enough, he was able to isolate just the blue and create great mattes. They ended up using those tests in the final version of the movie because they came out so well. Petro was able to perfect the technique and got a patent on the technology. It was the beginning of the blue screen traveling matte as an industry standard.
Wednesday – 08/08/12
Most people say “keying” when they talk about blue screen and green screen work; I’m careful to use the term matte extraction instead.
Chroma keying is a very specific process and it actually refers to early electronic switchers. Chroma keying is a process of finding a chromatic range. It doesn’t have to be just blue or green, it can be any color in the spectral range that can be turned off. It’s an on and off switch.
With keying there are sliders, switchers, knobs, etc., that are used to select a color and pull that color out. The problem with switching colors on and off is that there is no transition. There’s no transparency, no smooth or soft edges or anything like that.
So if you use a quick color keyer in your video editing software, you’re going to get a hard edge. You won’t be able to pull things out like smoke, hair or anything with any transparency in it – at least not very easily.
Color difference matting, by contrast, takes into consideration a very specific color range that can be extracted, so you can pull out varying levels of that color difference.
The earliest 8-bit systems gave you 256 levels of the color extracted from the difference. That’s where electronic color difference matting came into play.
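To make that distinction concrete, here’s a minimal Python/NumPy sketch (illustrative only – the function names and tolerance values are mine, and real keyers are far more sophisticated). The binary key is an on/off switch; the color difference matte keeps gradations:

```python
import numpy as np

def binary_chroma_key(rgb, tol=0.3):
    """On/off chroma key: each pixel is either fully kept or fully dropped.
    No partial transparency, so smoke and hair get hard, chopped edges."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    is_green = (g > r + tol) & (g > b + tol)
    return np.where(is_green, 0.0, 1.0)  # 1 = keep, 0 = remove

def color_difference_matte(rgb):
    """Green color-difference matte: G minus the stronger of R and B.
    The result is continuous, so edges, smoke and hair keep their levels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    diff = g - np.maximum(r, b)           # how much "extra" green a pixel has
    return 1.0 - np.clip(diff, 0.0, 1.0)  # invert: foreground = white
```

A half-transparent wisp of smoke over green (say R=0.3, G=0.7, B=0.3) gets thrown away entirely by the binary key, but the difference matte keeps it at partial opacity – exactly the soft transition the on/off switch can’t give you.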
Petro then joined with Bill Gottschalk and his own son, Paul Vlahos, to develop a company called Vlahos-Gottschalk Industries, later renamed Ultimatte.
They were basically developing these boxes – the Ultimatte boxes. That’s how he got his patent for electronic color difference matting.
It’s easier to say, “Let’s key that out,” than it is to say, “Let’s pull a color difference matte.” But there is a very significant difference.
Another huge question is: why would you use blue screen or green screen, and why did you call your book The Green Screen Handbook instead of the blue screen handbook? I’ve actually had a lot of professionals ask me that.
Well, green screen is primarily where everybody is today. And all of the electronic technology is in that space primarily because of the visible spectrum of light.
I got a lot of this information from two technology scientists that I look up to. One is John Galt from Panavision – he has done a series of white papers, talks and presentations discussing the technology behind the way light is brought in to cameras and how the sensors and chips on them work.
Green primarily holds most of the luminance information in the video signals of most video cameras. The blue and red channels together hold only around 30% of the luminance information; green holds roughly 70%.
So when you take a single frame of video and pull out the RGB and look at the individual channels, you’re going to see a lot more noise in the blue and red channels than you will the green channels.
With that, it is only practical to extrapolate the green information to pull the matte. So green has become the standard for video. Some still use blue technology with film because that technology was more mature in the past and is entrenched in cinema workflows.
It’s the shift from film to video that’s made green screen the norm now. So now, because we have high definition (which also means better cameras that get more information in the color space throughout the spectrum) we can pull a decent blue screen digitally as well.
We have a lot more data to work with than we used to have.
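Those percentages line up with the standard luma coefficients. As a quick illustration (using the Rec. 709 weights for HD; SD’s Rec. 601 gives green a lower but still dominant share of about 59%):

```python
# Rec. 709 luma equation for HD video: Y' = 0.2126 R' + 0.7152 G' + 0.0722 B'
REC709 = {"r": 0.2126, "g": 0.7152, "b": 0.0722}

def luma_share(coeffs):
    """Fraction of the luma signal carried by each color channel."""
    total = sum(coeffs.values())
    return {ch: w / total for ch, w in coeffs.items()}

shares = luma_share(REC709)
# green carries roughly 72% of the luma; red and blue together under 30%
```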
If you just need an extraction of your characters, use green screen. It’s easier to light. It requires a lot less light to light a green screen than a blue screen. But blue is a much more natural light source.
If your subjects are going to be outside in the final composite, then you might want to use blue because there is going to be blue spill on them. And, of course, if your elements also have green in the foreground you can’t use green.
So if you’re going to have foliage in the foreground or your character has green in their clothing, you’re going to have to use a different color.
Another color that is widely used – though many people don’t know much about it – is magenta. It’s mostly used with models that have metallic surfaces (i.e. spaceships), because the metallic surfaces can reflect greens and blues and you can still extract a good matte. It’s much less common, but you’ll see a magenta screen every once in a while.
Thursday – 08/09/12
Hardware Matte Compositors
Petro Vlahos and his son Paul developed Ultimatte together. In the early days of the Ultimatte system, they only developed one or two boxes. They would take them out and test them, do production work with them and bring them back to the shop.
It wasn’t until about Ultimatte 4 that the hardware was refined enough so that they could start leasing the boxes out. They were still just building them in a small shop – all handmade.
Petro’s wife was performing the quality control on these boxes. She was finding all kinds of production issues, so she took over the production management for the physical products.
She had a great eye for catching all kinds of problems and mistakes. She was very detail oriented. It really was a family business!
But at that point they weren’t selling them. They were just leasing them out to production facilities, production people and TV stations. This was the beginning of the weather man in front of the blue screen, or the anchor in front of a background that could be replaced.
Later, Grass Valley introduced a “chroma keyer” in their hardware that emulated Ultimatte technology. Paul actually went up to Grass Valley and met with them and basically said, “Hey look, we’ve got a patent on this and I’m looking at your technology and it looks very similar to ours, so we’d like to offer you the chance now to just license our technology from us.”
Grass Valley agreed and they have had a long-standing relationship. As far as hardware is concerned, they are still the main players to this day.
There is other hardware now, like the Tricaster Studio by NewTek.
But they were standard definition for a long time before becoming HD capable. There aren’t many other hardware switchers and hardware-based systems out there, though – the pros are still primarily using the Grass Valley or Ultimatte boxes.
In my interview with John Galt, he talks about how Petro modified an HD version of the Ultimatte 4 back in the early 80s.
This allowed him to do HD tests – it was the first HD hardware matte compositor. They did a bunch of demos and everybody thought it was cool, but nobody really adopted it as a production process.
So it basically sat on a shelf for years – there weren’t any HD cameras yet – until, within the last several years, Ultimatte finally produced an HD box: the Ultimatte 11.
As I’ve mentioned throughout, I’ve got some great interviews on my site that go into greater depth about the technology behind matting. Be sure to check them out.
Friday – 08/10/12
Software / Plugin Compositors
One piece of software that a lot of people loved (including myself) that is no longer supported is AdvantEdge.
It required a dongle – it was pretty high end. Not only did it perform matte extraction correctly, but it also gave you a holdout matte for garbage matting. As long as you had a locked off camera, you could matte out (or extract out) your blues and greens.
You could also shoot a single frame without any actors onstage and use that as a holdout – a difference matte. So if you had C-stands, cords or cables – or even dirt on the floor – it would pull all of that out and reintroduce only the actors or talent you brought onstage, shadows and all.
People had been walking on blue screen floors and leaving foot tracks. Petro wanted people to be able to dial that out in the hardware (they’ve now done the same thing with their software).
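That clean-plate trick can be sketched in a few lines – this is my own simplified illustration, not AdvantEdge’s or Ultimatte’s actual algorithm, and the threshold numbers are arbitrary:

```python
import numpy as np

def difference_matte(frame, clean_plate, threshold=0.08, softness=0.08):
    """Holdout from a clean plate: pixels that match the empty-stage frame
    (C-stands, cables, dirt, foot tracks) get dialed out; only what changed
    (the talent and their shadows) comes back in. The soft ramp keeps edges."""
    diff = np.abs(frame - clean_plate).max(axis=-1)  # biggest per-channel change
    matte = np.clip((diff - threshold) / softness, 0.0, 1.0)
    return matte  # 1 = changed (keep), 0 = same as the clean plate (drop)
```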
They stopped supporting their software as of last year, which is really unfortunate.
AdvantEdge was the granddaddy of matting software, but there are a lot of good alternatives out there.
Ultimatte Knockout is also discontinued, but it was only for Photoshop – not for video. The technology they had was for finding edges of objects. It wasn’t necessarily for knocking out a specific color range, it was for finding an edge of a specific element.
They haven’t been able to automate that, which is one of the reasons they never developed the technology for moving pictures.
I mention this because iMatte (the Ultimatte sister company) has basically two products that they developed in which this technology comes into play.
One product is something that they’re licensing on a smartphone using the camera. You take an individual and put them against any kind of neutral wall. You line them up with either a template or your own background and it will extract them, send that information over the web, and it will come back to your phone already composited.
That’s done through software on the backend. The composited image that is sent back to your phone has shadows, integration, everything. You can check out their website for more information.
Personally, I use Keylight in After Effects.
I know it and I have my own method for compositing with it. This is my go-to for the most part. If I can’t pull it out with Keylight, then I’m roto-ing. And, of course, there is a lot of lower-end stuff if you’re doing work in Final Cut or other NLEs. There’s Primatte Keyer – and Veescope Live is actually pretty darned good and powerful.
They’ve got a great standalone compositing workflow. It’s nodal-based, so you will have to learn that process. Alex Lindsay has made some great tutorials on his software’s process of using the color-difference channels to create a nice extraction.
I mainly work in my Adobe workflow of AE and PPro, so I’m bouncing back and forth between those apps all the time. This is why I primarily use the Keylight workflow throughout my book.
I should mention that there are a lot of other great products out there. But stuff that’s accessible for the average person – people who are looking for options under $10,000 – that’s what I want to focus on. It’s also the workflow I’m in.
I don’t use Nuke, although that is an option. I think it’s a great workflow and I may someday move in that direction. But for now, a lot of the work that I’m doing requires that I work in an accessible system that can be handed off to another colleague or client.
So I work with the tools that are accessible, exchangeable and relatable to my client base. You work with the tools that work with the portion of the industry you’re in.
Monday – 08/13/12
Keying / Matte Extraction
Chroma keying or keying is a shortcut name for matte extraction (or matting, as I like to call it). I still refer to it as keying because it’s a process. We can get beyond the technology here and just refer to it as keying, or “to key something out.”
Even that is still a bit broad. Besides blue screen and green screen keying, there is luminance keying as well. I do a lot of luminance keying.
Say I’ve got a shot where the actor has moved off of the green screen and into the air. I’ve got a dark-haired actor against a washed-out sky, and I’m going to use whatever information I’ve got to pull whatever I can to keep his hair in ‘play.’ That’s another way I may key out (or extract out) my characters.
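A luminance key of that kind boils down to thresholding brightness with a soft ramp. Here’s a rough sketch (the lo/hi numbers are made up – in practice you eyeball them per shot):

```python
import numpy as np

def luma_key(rgb, lo=0.15, hi=0.85):
    """Keep dark pixels (the actor's hair), drop bright ones (the washed-out
    sky), with a soft ramp between lo and hi so the edge stays smooth."""
    luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    return 1.0 - np.clip((luma - lo) / (hi - lo), 0.0, 1.0)  # 1 = keep
```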
The process of chroma keying or matte extraction is a matter of pulling out the color range that we’re looking to get. I’ll use Keylight as an example.
In a perfect world, the shot I’m given to work with would include these elements:
- the green screen is well lit – nice and even
- my subject is well lit and enough in front of the green screen that there’s no green spill on them
- there’s a little bit of backlight to separate the subject from the background
This is the best scenario for getting a good extraction. I only have to really select my green color with my software keyer and pull this color out.
Once I’ve selected my green, I’ll switch to preview mode to look at the matte itself. That’s going to tell me the information I need to tweak. What you will need to do from here will depend on how the background was lit, how the foreground was lit, what kind of camera was used, its resolution – all of this comes into play.
The matte shows me a white-on-black image. The white is my hold out – the part I want to keep. The black is what I want to go away. It’s going to be my alpha.
I’ll often find I’ve got to adjust a little bit of my black, to ramp it up so that it eliminates any noise that may be around the edges. The lens that I’m shooting with may create a vignette. I may have a smaller chip camera which produces some noise.
All of those elements stack up against you when you’re shooting. It makes it hard for the guy in post to get a good extraction. You’ve got to crank up the black so that the black goes ALL black – so that there aren’t any white pixels in there. Then you have to ramp the white down, so that it gets rid of any ghosting, spill, or black pixels that may be in the area.
If there’s any kind of green spill or green anything on the person – even in the shadows, where there is natural noise (under the nose, ears, chin, etc.) – you have to crank down the white so that it goes ALL white.
What you want to end up with is a really clean white-on-black (but not so much that your edges go away). You want to have really nice edges, especially if you have any kind of softener – that includes motion blur, hair flyaways, and any transparency in the clothing or in the scene – you’ve got to make sure you hold that.
There are also controls for choking the matte. If you have an issue with noise and lower-res images, you might need to choke the matte, which means bringing it in a little tighter, and maybe introducing a little bit of edge blur to soften the edges.
Those are the types of things you have to look at in the matte. You can do all of that in the matte view mode. Once that’s refined, you can go back and look at the final composite mode so you can really see what you did. Sometimes you have to go back and forth and tweak a little bit. But that is primarily the process of software keying.
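The clip-black / clip-white and choke adjustments map onto simple image operations. Here’s a sketch with my own simplified stand-ins for what the keyer’s sliders do (not Keylight’s actual code):

```python
import numpy as np

def clip_levels(matte, clip_black=0.1, clip_white=0.9):
    """Crank the black ALL black and the white ALL white: values below
    clip_black go to 0, above clip_white go to 1, and the range between
    is stretched - which is what preserves the soft edges."""
    return np.clip((matte - clip_black) / (clip_white - clip_black), 0.0, 1.0)

def choke(matte, amount=1):
    """Crude choke: a minimum filter pulls the white region in a little
    tighter, which helps with noisy or lower-res footage."""
    out = matte.copy()
    for _ in range(amount):
        p = np.pad(out, 1, mode="edge")
        out = np.minimum.reduce([
            p[1:-1, 1:-1], p[:-2, 1:-1], p[2:, 1:-1],
            p[1:-1, :-2], p[1:-1, 2:],
        ])
    return out
```

An edge blur after the choke would just be a small Gaussian on the matte; the point is that each slider is one cheap operation on the white-on-black image.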
Something about Primatte that is really cool is that it has a way to create a background spill on the edge. You can incorporate an edge effect around your character that looks like lighting. It’s basically fake backlighting – it takes some of your composited layer and pulls in the edge.
BCC (Boris Continuum Complete) has Light Wrap.
It’s nice if you have a harsh key or something where your composite background is much brighter than your foreground. You don’t want it to look like you have cut somebody out and stuck them on a background. Using Light Wrap for this works well.
I’ve often faked this in After Effects. I use layer styles – either internal glow or inside shadow, and set it to a luminance and a light color. There are a lot of ways to fake it directionally. Really, there’s a lot of faking that we can do in general.
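For the curious, here’s roughly what that fake light wrap amounts to, on grayscale images to keep it short (my own toy version – BCC’s Light Wrap and the layer-style trick are more refined):

```python
import numpy as np

def box_blur(img, radius=2):
    """Tiny box blur; a real light wrap would use a proper Gaussian."""
    k = 2 * radius + 1
    p = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def light_wrap(fg, bg, matte, radius=2, strength=0.6):
    """Blur the matte, keep the band just inside the foreground edge, and
    use it to bleed the blurred background light onto the foreground."""
    edge = np.clip(matte - box_blur(matte, radius), 0.0, 1.0)
    wrap = strength * edge * box_blur(bg, radius)
    comp = fg * matte + bg * (1.0 - matte)  # plain composite
    return np.clip(comp + wrap, 0.0, 1.0)   # bg light wraps the fg edge
```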
Tuesday – 08/14/12
This is something really beneficial about the upgrade to After Effects CS6. In the past, I’ve used the pen tool to create garbage mattes – or create masks and layers for sharp-edged objects or things that are somewhat linear.
Roto-masking (which I’ll talk about on Thursday) and layer masks go hand-in-hand, so between Tuesday and Thursday this week the lines between these topics will blur.
I’ll use a layer mask as a garbage matte that holds out all the stuff I don’t want or need.
For a typical green screen shot, I’ll look at the length of my shot and find where the most extreme movements of my character take place. I’ll make a moving matte just by drawing around them with the pen tool, giving a (large) feather so that there’s a nice, soft edge. Then I will animate the key frames of the shape of that matte over the timeline.
I’ll start at my first point and then look at my end point. I’ll move the matte to adjust for movement. It’ll automatically create the key frames for me and then I just look back and forth through the path of the shot.
If, for instance, somebody’s hand goes outside of the matte, then I just move it at that point and a new key frame is made. I’ll continue key framing in this manner with the pen tool for a long time to create my garbage mattes on a green screen.
It’s important to know how to do this well because, although the lighting of the green screen around the actor is usually fine, I’ll eventually get a shot where the outside boundary of the green screen shows on one side, or a C-stand can be seen on screen.
So I want to garbage matte everything. If I make my garbage matte then I can go right to the keying or matting process. I get my garbage matte made and then I can go right to Keylight, key it out, and I’m done.
That’s a great working scenario.
Another way that layer masks are really helpful is that they can basically be used as a roto technique.
Example: I’ve got a railing that I need to mask out from the background. The railing is on a roller coaster, and it’s a handheld shot. It isn’t just something I can set up and leave. It’s not a locked-off shot; it’s continuously moving.
Using the same process, I’ll take the pen tool around all the edges to give me nice handles and vector shapes. Then I’ll move them – key frame them to follow along. It’s a tedious manual process but it has to be done.
With After Effects CS6, this is a thing of the past. There used to be a time when you had to create lots of different masks on a layer or mattes, because with some of them you wanted to blur the edges by one or two pixels to get a little soft edge. In some areas, you need to follow something that’s got a little more fuzz to it or there’s an edge of something that you want to matte out.
Say I wanted to extract a mountain from the sky. I could do a really hard edge on one matte and then I’d have to do a softer edge – maybe where there were some trees that fuzzed out.
In CS6 you have the ability to define areas in between points (the amount of edge feathering). I could take the mountain and keep all my edges fairly consistent, but then with that little fuzzy area of the grove of trees, I can go inside and pull out a little softness on my edge matte – and I can do it all with one matte. I don’t have to have two or three mattes (or masks) to do it anymore, which is huge. After Effects gives us a lot of control over making layer masks.
Tomorrow we’ll be talking about mattes. The line between masks and mattes begins to blur – mattes are basically a process of using masks. We’ll get to roto on Thursday – it’s a totally different process.
Wednesday – 08/15/12
Matting is a process of using masks. Mattes use keyed elements – shapes, objects, etc.
I’ll typically use mattes if I have a very complex key that I’ve created or something that’s been roto’ed in After Effects. I don’t want to continue making the machine think about all of that math in the background, frame by frame, while I’m tweaking and doing other things. The matte that I typically make for myself is a luminance matte.
What I do:
- 1) Set my background to black
- 2) Take the chroma-keyed layer element and use it as my alpha matte
- 3) Put a white solid below it
- 4) Set the white solid’s track matte to Alpha Matte
- 5) Render that out with no alpha channel – just white-on-black, as a luminance matte
With this, I’ve basically created the very thing I was looking at in Keylight.
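In pixel terms, those five steps reduce to one multiply. A sketch of what the render contains (the function name is mine):

```python
import numpy as np

def bake_luma_matte(keyed_alpha):
    """Steps 1-5 above: black background, a white solid track-matted by the
    keyed layer's alpha, rendered flat with no alpha channel - i.e. a plain
    white-on-black luminance frame."""
    black_bg = np.zeros_like(keyed_alpha)
    white_solid = np.ones_like(keyed_alpha)
    over = white_solid * keyed_alpha + black_bg * (1.0 - keyed_alpha)
    return np.stack([over] * 3, axis=-1)  # flat RGB frame, no alpha
```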
Another way to do this: sometimes you can just leave it on matte in Keylight and then render it flat. But sometimes I’ve got a combination of Keylight and layer masking and other things going on, in addition to the roto tool (which we’ll talk about tomorrow).
If I use the Roto Brush, I almost always do this because the Roto Brush is not stable enough in a lot of cases to keep rendering and rendering while keeping all that information. So I’ll actually render out a movie that I bring back in as a luminance matte, and then the computer doesn’t have to do all that other math. It’s just doing a luma matte.
To do this I:
- 1) Bring the black-and-white matte back in
- 2) Take my original file/footage and place it underneath the new black-and-white footage
- 3) Hide all the other stuff that I already did (or start a new comp)
- 4) Select my original footage underneath the black-and-white footage I’ve brought in, then set its track matte to Luma Matte – that will give me a really clean extraction.
I can then go in and make other refinements, like working a little more with the keyer.
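Applying the baked matte later is equally cheap – one multiply and one add per frame instead of re-running the keyer. A minimal sketch (names are mine):

```python
import numpy as np

def apply_luma_matte(fg, bg, luma_matte):
    """Track matte > Luma Matte: the grayscale matte frame drives the original
    footage's transparency over a new background. Far less math per frame
    than re-running Keylight or the Roto Brush."""
    m = luma_matte[..., None]  # broadcast the gray matte across RGB
    return fg * m + bg * (1.0 - m)
```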
We cover this more later on, but it’s just a matter of making the computer not have to work so hard. If it’s not working so hard, I can work faster. I can get in there. I can paint on that layer. I can do a lot of things.
I’ve got a lot more control over that layer and I’m working a lot faster. It’s all about productivity and saving time every step of the way.
Thursday – 08/16/12
In this series we really focus on After Effects, because that’s the tool I’m using. It’s the tool that’s most accessible to most people – that’s why I relate to this tool.
In After Effects, roto-masking is done in several different ways. One way is using layer masks or mattes – that is, selecting an item that has some definition to its edges, point-by-point, with the Pen tool.
This is not easily done on a human because humans don’t have a geometric shape. This process works best on things like chairs, walls, furniture, swords – things that have smooth, long, soft edges or even defined edges. If you rely on just the Roto Brush for really defined edges, you’ll see it won’t work – it’ll look like it’s been eaten away with chopped edges.
The Roto Brush works mostly for organic shapes like people, clothing, etc. I won’t use it for smooth, long surfaces or hard edges. For those elements I’ll make a layer that’s just roto-masked using my Pen tool.
I’ll add soft edges if there is some motion blur or something crazy going on in there; then I just work back and forth. If it’s a really crazy handheld shot, you’re going to be doing some tweaking on pretty much every single frame.
If it’s a locked off shot you’ll still have to watch for vibrations. A lot of times – even with a locked off camera – the floor moves or something bumps it or the wind buffets it a little bit. You won’t see it when you’re recording on set. It’s only once you start to mask shots in post that you start seeing the little vibrations and shakes of the camera during filming. Often, you have to watch the shot frame-by-frame in post to catch it. This is a really important phenomenon to remember.
Roto Brush in After Effects is a useful tool. I use it a lot; it’s a good starting off point.
If I’ve got a lot of activity in my shot and I really just want to pull out my characters roughly, I’ll use the Roto Brush to get my basic roto. There are a lot of refinement tools in the Roto Brush which take a lot of tweaking – but there are also a lot of really powerful tools that deal with motion blur, refining its searching point. We could dedicate a whole week in this series to just this tool.
Even though I’ll get my first pass with the Roto Brush, I want to capture more information than I need. For example: I once had to matte a poorly planned shoot in which the crew intended to film an actor on green screen. Because of poor framing, the green screen ended up being just a small patch of green underneath him. Of course he jumps out beyond the boundaries of this little green screen, so I still end up having to roto him.
I do this through the whole scene, leaving a little bit of the green where I had it because that allows me to come back in later and key it out. When I use the Roto Brush, it goes a little crazy around the edges. So again, I can either:
- (a) key it out if there is something to key, whether it’s a luminance matte or green or blue; or
- (b) come back in and use the Paintbrush tool or the Eraser tool and paint in black or white. I can also soften up edges or paint out areas where there’s something that doesn’t belong in there.
Sometimes there may be a rope sticking out or a wire that needs removal. Sometimes you have to come in and paint those out frame by frame, and that’s where this process comes in handy. To summarize my workflow:
- 1) Roto Brush the shot
- 2) save that out as a luminance matte
- 3) bring this back into the project
- 4) paint out or key out other areas I don’t need
Friday – 08/17/12
Best Practices
Pulling a key, step-by-step:
- 1) Select the green (or blue) color with the eyedropper in the software keyer.
- 2) Switch to preview mode to look at the matte itself.
- 3) The matte shows you the white-on-black image. The white is the hold out – the part you want to keep. The black is what you want to go away. It’s going to be your alpha.
- 4) Crank up the black so that the black goes ALL black – so that there aren’t any white pixels in there. Then you have to ramp the white down, so that it gets rid of any ghosting, spill, or black pixels that may be in the area.
- 5) You might need to choke the matte, which means bringing it in a little tighter, and maybe introducing a little bit of edge blur to soften the edges.
- 6) Those are the types of things you have to look at in the matte. You can do all of that in the matte view mode. Once that’s refined, you can go back and look at the final composite mode so you can really see what you did. Sometimes you have to go back and forth and tweak a little bit. But that is primarily the process of software keying.
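The steps above boil down to simple per-pixel math. Here’s a minimal sketch in Python/NumPy – my own illustration, not what Keylight or any real keyer actually does internally: the distance of each pixel from the sampled screen color becomes the matte, and the black/white clips are the levels you crank in step 4.

```python
import numpy as np

def pull_key(frame, key_rgb=(0.0, 1.0, 0.0), black_clip=0.3, white_clip=0.7):
    """Rough software key: distance from the sampled screen color becomes
    the matte, then levels are cranked so blacks go all black and whites
    all white. Values are illustrative only."""
    # Distance of every pixel from the eyedropper-selected screen color.
    dist = np.linalg.norm(frame - np.array(key_rgb), axis=-1)
    dist /= dist.max() + 1e-8                       # normalize to 0..1
    # Levels: crank the blacks and ramp the whites, as in step 4.
    matte = (dist - black_clip) / max(white_clip - black_clip, 1e-8)
    return np.clip(matte, 0.0, 1.0)                 # white = keep, black = key out

# A 2x2 test frame: green-screen pixels vs. red foreground pixels.
frame = np.array([[[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]],
                  [[0.1, 0.9, 0.1], [0.9, 0.1, 0.1]]])
matte = pull_key(frame)
```

Choking and edge blur (step 5) would then be small erode/blur passes over this same matte.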
Creating a background spill on the edge of the subject:
- 1) You can incorporate an edge effect around your character that looks like lighting (basically fake backlighting). To do this, use Light Wrap in BCC (Boris Continuum Complete).
- 2) Or: fake it in After Effects using layer styles. You can use either internal glow or inside shadow and set it to a luminance and a light color.
Creating a moving garbage matte:
- 1) Find the length of the shot and where the most extreme movements of the subject you need to hold out are.
- 2) Make a moving matte just by drawing the pen tool around them, giving a feather – a large feather – so it’s a nice, soft edge.
- 3) Animate the key frames of the shape of that matte over the timeline.
- 4) Make sure you garbage matte everything. If you do, you can go right to the keying process or the matting process and not worry about strange anomalies.
Rendering a luminance matte from a keyed layer:
- 1) Have the background set to black.
- 2) Create a white solid layer.
- 3) Put the white solid below the chroma-keyed layer.
- 4) Set the white solid’s track matte to Alpha Matte, using the keyed layer above it.
- 5) Render that out with no alpha channel – just white-on-black, as a luminance matte.
- 6) Or you can just leave Keylight’s view set to the matte and render it out flat that way.
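That white-solid/track-matte recipe amounts to compositing a white solid through the key’s alpha over black. A rough NumPy illustration (my own sketch, function name and values invented for demonstration):

```python
import numpy as np

def alpha_to_luma_matte(rgba):
    """Flatten a keyed RGBA frame to a white-on-black luminance matte:
    a white solid shown through the alpha, over a black background."""
    alpha = rgba[..., 3:4]                  # the key's alpha channel
    white = np.ones_like(rgba[..., :3])     # the white solid layer
    black = np.zeros_like(white)            # the black background
    return white * alpha + black * (1.0 - alpha)   # standard "over" composite

rgba = np.zeros((1, 2, 4))
rgba[0, 0, 3] = 1.0    # fully opaque pixel -> white in the matte
rgba[0, 1, 3] = 0.25   # mostly transparent pixel -> dark gray
matte = alpha_to_luma_matte(rgba)
```

Rendering this flat (no alpha) gives you the reusable white-on-black matte described throughout this series.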
Monday – 08/20/12
Color Matching Foreground Plate to Background Plate
Part of making a scene believable is having the lighting of your inserted matted subject match the lighting of your intended background. You can plan this ahead of time before you shoot – it’s a production/planning issue.
Oftentimes you’re given something that’s already been shot on a green screen and doesn’t necessarily match the background that you want to composite it into. There are certain limitations:
If the camera angle is off, or if you have a really brightly lit subject and you’re trying to put them in a dark room, you’re going to have a lot more work to do to get it to match.
But when you do have some control and some foresight, you can determine what those changes are going to be just by doing some basic color correction to try to match the foreground and background plates. Nudging the background more toward the color of the foreground (if possible without affecting your scene) can sometimes be much more desirable than trying to make your subject go more blue or green (unless that’s the effect you’re trying to achieve).
When you’re doing a composite shot that’s going to be edited by another team member in visual effects, you need to try to hold the color correction of the foreground object, or the person that’s going to be inserted. Then try to get the background color to match.
What your team member is going to do is edit that clip into their whole scene, and then do a final pass color correction of the whole scene. So it’s something you need to verify with the editor or VFX director ahead of time, making sure that you’re on the same page.
If it’s a project you’re doing yourself, always try to make the person look good – make the skin tones look good first, then try to bring the background to match their color. That’s usually the best practice.
Unless, of course, it’s for a specific effect. If they’re in a dark alley at night, it’s not usually going to be really warm (unless it’s a sodium street light). You have to determine the lighting of a scene, the time of day, the environment. You have to take a lot into consideration to determine how you’re going to match those colors and make the scene look like it really belongs together.
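If it helps to see the idea concretely, “nudging the background toward the foreground” can be sketched as a per-channel statistics match – a crude stand-in for hand color correction, not any particular plugin’s method:

```python
import numpy as np

def match_color(background, foreground):
    """Nudge the background toward the foreground plate by matching each
    channel's mean and spread -- a crude stand-in for manual color correction."""
    out = background.astype(float).copy()
    for c in range(3):
        fg_mean, fg_std = foreground[..., c].mean(), foreground[..., c].std()
        bg_mean, bg_std = out[..., c].mean(), out[..., c].std()
        out[..., c] = (out[..., c] - bg_mean) * (fg_std / (bg_std + 1e-8)) + fg_mean
    return np.clip(out, 0.0, 1.0)

# Brighter foreground vs. a darker background plate (toy 1x2 gray frames).
fg = np.array([[[0.4, 0.4, 0.4], [0.6, 0.6, 0.6]]])
bg = np.array([[[0.1, 0.1, 0.1], [0.3, 0.3, 0.3]]])
matched = match_color(bg, fg)
```

Matching the background to the foreground, rather than the reverse, keeps the skin tones you already like intact – the best practice described above.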
Tuesday – 08/21/12
Simulated Rack Focus
A very directorial and subjective method for making a believable composite is simulating a rack focus. That means bringing the foreground subject into focus while blurring out the back, and vice versa.
Sometimes making a shift between the two really makes it feel like you just did an in-camera, in-lens focus on the subject. It’s a stylistic thing that’s popular in filmmaking these days, especially with DSLRs where you’ve got a really shallow depth of field. This is one way to simulate it.
Sometimes clients may call for this kind of effect on a shot that wasn’t done originally through the lens, and you actually have to roto out the person to create it. Having the subject and background separated gives you control of the rack focus effect. For starters, you’ll be able to get the colors to match.
After you’ve created these two discrete layers, you’ll want to do a slow rack focus pull, something where the subject is blurred out a bit but the background is sharp and in focus. In After Effects, I use the lens blur effect (now called camera lens blur in CS6) to simulate a rack focus in just a couple key frames by overlapping the frames over time.
If you want to get tricky with it, you’ll over-pull it – that is, pull them into focus and then just slightly past that so that it goes a bit blurry, and then right back into focus. This way it looks like a camera operator is manually focusing the shot.
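The over-pull can be thought of as a blur-radius curve over time: start defocused, land sharp by the midpoint, drift slightly past focus, then settle. A hypothetical NumPy helper (frame counts and radii are made up; you’d keyframe the equivalent values in your blur effect):

```python
import numpy as np

def rack_focus_curve(frames=24, max_blur=12.0, overshoot=2.0):
    """Blur radius per frame for an 'over-pulled' focus: start defocused,
    land sharp by the midpoint, wobble slightly past focus, then settle --
    like a camera operator hunting for the mark. All values are invented."""
    t = np.linspace(0.0, 1.0, frames)
    pull = max_blur * np.clip(1.0 - 2.0 * t, 0.0, 1.0)                       # defocus ramps off
    hunt = overshoot * np.clip(np.sin((t - 0.5) * 2.0 * np.pi), 0.0, None)   # brief wobble past focus
    return pull + hunt

curve = rack_focus_curve()   # feed these radii to your lens blur, frame by frame
```

The small bump after the curve first hits zero is the “slightly past focus and back” move that sells the manual-focus feel.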
You can also simulate a little camera movement which brings a little excitement and realism to the scene. This trick alone can really help build the intended effect of reality – the effect of it being an actual in-camera shot.
It really helps sell the composite.
Wednesday – 08/22/12
Motion Tracking
This takes some pre-planning when you’re shooting your green screen shots. You must have some kind of markers or something very definable in your foreground if you have camera movement.
Say you’ve got the camera on a dolly and you’re supposed to be shooting out a window, or there’s something in the background behind your subject. You’ve got a subject, props, and other things in the foreground, and you’re moving the camera with a dolly (or handheld).
You’re going to have to use some kind of motion tracking or match moving for your background plate. That’s where After Effects comes in handy. The built-in tracker is more than adequate.
When you get into CS6, you’ve got the 3D Camera Tracker, which is actually quite phenomenal for tracking everything in the scene and letting you put something into it. Specifically, being able to track all the elements and then get that background plate, floor plate or wall plate in there is great. It will really lock it in.
I use the built-in tracker. I’ll usually go to that first before going to something like Mocha; unless I know that I really need a planar tracker – then I’ll go on to Mocha for that. But I’ll use the built-in tracker in After Effects first if I know that it will meet my needs.
Using a combination of the simulated rack focus with something that has been tracked really helps sell the composite. If you’re dollying in toward something and you want to really focus on the background after you’ve dollied in on your foreground subjects or props, then doing a combination of the motion tracking and simulated rack focus really makes it feel like you’re on a set, in the scene, with a camera.
Thursday – 08/23/12
Light Wrapping
Say I have a subject that’s really well-lit and the green screen is lit with nice, balanced light, and everything’s great – but my background is busy.
Maybe it’s a really heavy environment or has a lot of light and color in it, but my foreground subject looks a little flatly lit. There are a few things you can do to tweak this.
Light Wrap takes the background plate that you’re using and gives just a slight edge to your foreground composite that allows you to pull in some of that light.
A camera lens picks up light and fuzzes things out a little bit around the edges.
It isn’t just spill, but is a softening, environmental light ambience that is around your subject. The Light Wrap plugin works really well for that.
I also use Layer Styles in After Effects to cheat it. Sometimes I’ll do this on two or three different layers to get control of what I want. Instead of using any kind of edge glow (because edge glow goes all around the image completely), I’ll use an internal shadow around my subject layer. (Or I’ll do it on a separate layer and have just the layer style overlay.)
On an internal shadow, instead of setting it to multiply, I’ll put it on overlay or on screen or whatever works best for the scene. Then I’ll choose a lighter color. I’ll do this on two or three things – mask them out, get really detailed on it. But if there’s something that’s really bright in the background, you’re going to want some diffusion and maybe throw some lens flare in to help fake it.
The nice thing about using Layer Style is you can keyframe it and strengthen its intensity and the size of the “shadow” – it’s a great way to fake around people, hands, arms, etc. It also helps any props or other objects that might reflect light, or objects that might get some diffusion or distortion as they pass in front of stronger lights in the background.
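For the curious, the basic light-wrap idea – blur the background, restrict it to a thin band just inside the matte edge, and screen it over the foreground – can be sketched in a few lines of NumPy. This is my own simplified illustration, not the BCC plugin’s actual algorithm (the toy box blur here even wraps at the frame edges, which a real implementation wouldn’t):

```python
import numpy as np

def box_blur(img, radius):
    """Tiny separable box blur (stand-in for a real gaussian; wraps at edges)."""
    out = img.astype(float).copy()
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for shift in range(-radius, radius + 1):
            acc += np.roll(out, shift, axis=axis)
        out = acc / (2 * radius + 1)
    return out

def light_wrap(fg, bg, matte, radius=2, strength=0.5):
    """Bleed blurred background light into a thin band just inside the
    matte edge, then screen it over the foreground."""
    soft = box_blur(matte, radius)
    edge = np.clip(matte - soft, 0.0, 1.0)             # band just inside the edge
    glow = box_blur(bg, radius) * edge[..., None] * strength
    return 1.0 - (1.0 - fg) * (1.0 - glow)             # screen blend

fg = np.full((5, 5, 3), 0.5)                 # flat gray subject
bg = np.ones((5, 5, 3))                      # bright background plate
matte = np.zeros((5, 5)); matte[1:4, 1:4] = 1.0
out = light_wrap(fg, bg, matte, radius=1)
```

The interior of the subject is untouched; only the edge band picks up background light – which is exactly what distinguishes light wrap from a plain glow.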
Friday – 08/24/12
The Top Three Reshoot Requests (Deal-Killers)
1) Camera Angle
You’d be amazed at how often you’ll be given a green screen plate where the camera angle is so far off that there’s just no way you can make it work. In this case, you’re going to have to reshoot the scene.
If the camera angle is completely off, or if they’re zoomed in too far (if most of a character’s body is missing, for example…see #3), or if they’re too far away and you’re unable to get all the details without adding a bunch of noise – these all are deal-killers.
2) Lighting
Another frequent deal-killer: a shot’s lighting is so far off the mark that there’s no way to recover it (without it looking really, really terrible). If your scene was shot way underexposed, and it’s supposed to be an outdoor scene or something that’s really brightly lit – there’s only so much you can do to fix it (which we’ll cover next week).
If you try to bring up a shot that’s way underexposed, you’re going to introduce a lot of noise – especially in the darker areas and shadows of the scene.
While there are plugins that may help – like Magic Bullet Denoiser II from Red Giant – it can be just beyond recovery. On the other side, if your lighting is way too strong and your shot is overexposed – with highlights on the skin that are just blowouts with no information to recover – you’re going to spend a lot of time trying to doctor up skin tones.
3) Framing
Like the camera angle deal-killer, I’ve had shots where a whole side of a person is cut off, or the person walks out of frame or off screen and the aspect ratio is all off when the people have got to be center-screen.
Maybe they’re shooting standard-def or are just too tight in on the person. In these cases, there’s not even anything to roto, the person just gets cut off.
Obviously you’re going to have to reshoot; people can’t just appear center-screen as if they’ve just stepped out of a time portal. You really can’t recover this kind of thing.
If it’s just somebody’s hand or shoulder that goes off screen, you might be able to fake it with CG or 3D work, or by grabbing some other frames and faking it in. But when there’s a walk-on, or an entire half of a body or arm missing, and the person has to (in the final product) be in the center of the background plate – there’s really just no way. You have to re-shoot.
This is why I’m adamant about having some kind of pre-planning for all your shoots. We call it “pre-post.” You should have storyboards created for your scene, and make sure that someone is on set during the shoot looking for these kinds of potential mistakes as the lighting/shooting is being done. Everyone (the person directing, the person shooting, the person lighting) should all know what’s going to happen with the footage.
Monday – 08/27/12
Poor Lighting
I have a great example of a poor lighting scenario. This was shot on a RED cam by someone who’d never used RED cam before and who’d never shot green screen before. They had a green screen 20 feet in the background with no lights on it – and the screen was muslin, the worst material to use.
They had a close-up of the talent flipping her hair. It was framed as a close-up and had a lot of motion blur because it was shot 24p. This was all in front of a grayish green background because of poor lighting. Everything was washed out.
I had to do a multistep process to create individual mattes from this shot. I did things like oversaturate the original footage and use it as an alpha matte. I basically had to use every trick in the book. I go into more detail on it in the Green Screen Handbook.
Under-lighting is always a worst case scenario; anytime you don’t have enough information you’re going to introduce noise as you try to save the image. If your screen is underlit or your talent is underlit or your camera iris is not open wide enough, that’s going to create a lot of problems.
There are ways of getting around some of that. Luckily we have some pretty decent filters on the market for noise reduction, but you have to put those in last. You still have to pull an extraction from that matte depending on what the edges are like. Sometimes you can run a noise filter to knock back some of the noise and then do your matte extraction, but usually your edges will suffer because of that. It really depends on the scenario.
Tuesday – 08/28/12
Difficult Spill
Spill is another really big issue. There are several plugins that will deal with this. I’ve found that Veescope Live (a standalone application) has pretty decent spill suppression, even in post. It doesn’t have to be a live image coming in; it can be post-production footage. dvGarage’s compositing tools have some decent spill suppression in their workflow as well.
If I’m just going to be using Keylight, I’ll do what I can with spill suppression – but I’ve got several different processes for going into that.
You’re almost always going to have some spill, even if it’s just noise from the camera. Even if you don’t have a spill, there might be a cast. It will depend on the camera optics, the range, the distance between the camera and subject, whether it’s shot in a studio or stage or outside or whether you’re using reflective media and front-projection LEDs. Your colors are going to be different depending on the green screen or blue screen materials used. There are so many factors that will yield spill or casting issues.
The process starts by pulling a matte, as I’ve explained previously (creating the luma matte), and then doing an additional pass of green cast removal.
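One classic spill-suppression pass – a rough approximation of what these tools do, not any specific plugin’s algorithm – is the “green limit”: wherever green exceeds both red and blue, clamp it back down. In NumPy:

```python
import numpy as np

def suppress_green_spill(frame):
    """Classic green-limit spill suppression: wherever green exceeds both
    red and blue, clamp it back to their average."""
    out = frame.astype(float).copy()
    r, g, b = out[..., 0], out[..., 1], out[..., 2]
    limit = (r + b) / 2.0
    out[..., 1] = np.where(g > np.maximum(r, b), limit, g)
    return out

# A skin-tone pixel that picked up spill, next to a neutral pixel left alone.
frame = np.array([[[0.8, 0.9, 0.6], [0.5, 0.5, 0.5]]])
clean = suppress_green_spill(frame)
```

Because only pixels where green dominates are touched, properly colored areas pass through unchanged – which is why this kind of pass can run over the whole frame.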
Wednesday – 08/29/12
Transparency Issues
This doesn’t have to be an issue if the green screen was lit well. Using masks and mattes, you can isolate specific areas and factor them in separately.
Say I’ve got a subject that has a lot of motion blur or someone who’s wearing something semi-translucent – and maybe the green screen isn’t great. In this case, I’ll do one pass of just the individuals – I get their hair, body, all of that. In this first pass, the translucent area’s probably going to get blown out because I’m going to have to really crank it out to get the person extracted.
After the first pass, I’ll do another layer to isolate the piece of clothing with a garbage matte. Then I’ll be able to process it separately.
I can then use those two mattes together to create my luma matte. I save that out, come back in and take my original footage, do a luma matte, and everything’s done.
This is kind of what I had to do with the poorly lit green screen, too. I use a matte with several masks, combine them and then do one matte extraction. That way I’ll have one good, clean matte to work with.
It’s often a matter of creating a brand new matte for yourself. It may be a multiple-stage process to end up with one clean matte that we’ll use to finally extract our individual from.
You can’t really use just one layer of Keylight. Rarely do I ever use it as my “one step only” option. I will almost always make a matte of it, then use that matte to extract. As I said, there may be casting, or you may introduce more noise and start seeing holes in your final product. That’s happened to me numerous times – it looks okay while I’m working on it, but then I go to render it out and I can see noise holes in it. Not good.
It’s a matter of stacking up those layers. Sometimes you have to stack them 2 or 3 deep (meaning 4 to 6 layers).
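The stacking idea can be written out as matte arithmetic: union the per-region “keep” passes (hair, clothing, hands), then multiply out the garbage mattes. A toy NumPy example of my own, using three-pixel-wide mattes:

```python
import numpy as np

def stack_mattes(keep_mattes, garbage_mattes=()):
    """Combine several partial mattes into one clean luma matte:
    union the per-region 'keep' passes, then punch out garbage mattes."""
    matte = np.maximum.reduce([np.asarray(m, float) for m in keep_mattes])
    for g in garbage_mattes:
        matte = matte * (1.0 - np.asarray(g, float))   # cut away unwanted areas
    return np.clip(matte, 0.0, 1.0)

hair = np.array([[1.0, 0.0, 0.0]])   # pass tuned for fine hair detail
body = np.array([[0.0, 1.0, 0.0]])   # pass tuned for hard clothing edges
junk = np.array([[0.0, 0.0, 1.0]])   # e.g. a light stand to remove
final = stack_mattes([hair, body], [junk])
```

Each input matte can come from a differently tuned key, which is the whole point of stacking: every region gets the extraction settings it needs before the union.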
Thursday – 08/30/12
Reflectivity
Reflectivity is an issue when you’re in a big green environment – like a Cyc wall – or if you’re using a front projection LED system like Reflecmedia. Someone wearing glasses, someone holding a camera or something shiny is going to reflect some of the green. You need to be able to put a patch in there, which means you have to garbage matte and isolate just that area and be able to put something else in there.
If you want the reflective item to reflect white, as if it’s reflecting lights, that’s fine. You can also have it reflect an actual environment if, for example, it’s a metallic surface and you want it to reflect what’s behind the camera.
It’s all a matter of tracking – using a small moving matte that follows that reflection and then putting it in under the layer.
Reflectivity is actually a pretty easy fix. You just have to decide what you want to show in a reflection – light or environment? In most cases, if your reflecting light source should be white, then you will want to replace the “hole” in your matte with a white solid.
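As a sketch, patching the reflection is just forcing the tracked hole region back to solid white in the matte (illustrative NumPy of my own, assuming a hard-edged hold-out mask):

```python
import numpy as np

def patch_reflection(matte, hole_mask):
    """Fill a keyed-out reflection (glasses, chrome) back in: force the
    tracked hole region to solid white in the matte, so the comp can show
    a white 'light' reflection instead of the background."""
    return np.where(hole_mask > 0.5, 1.0, matte)

matte = np.array([[1.0, 0.0, 1.0]])   # middle pixel wrongly keyed out
hole  = np.array([[0.0, 1.0, 0.0]])   # small tracked matte over the reflection
fixed = patch_reflection(matte, hole)
```

If you want an environment reflection instead of white, you’d composite a reflection plate through that same hole mask rather than filling it with white.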
Friday – 08/31/12
Q&A with Jeff Foster
Thanks to everyone who submitted keying / matting questions to Jeff. It gave us some great topics to work through in conversation. This is a transcript of that conversation, wherein Jeff answers your questions and adds some final words of advice to wrap up our month-long look at green screen fundamentals.
Katherine Russell has a question about using the native keying and matting tools in Final Cut Pro 7. She’s trying to extract a matte, but no matter what adjustments she makes she gets a lot of fringing and spill around the edges. What advice do you have for her?
If you have to stay within your non-linear editor, whether it’s Final Cut, Premiere, or whatever – and you don’t want to go into After Effects – then a third-party solution is really your only option. Primatte Keyer, dvMatte Pro and Conduit are worth checking out.
Most all of them will handle things like spill suppression and some of the basic edge issues that you’re going to have. Using only the NLE’s built-in keyer, all you’re doing is identifying the green and how it handles that transition from green to the edge – it’s going to either over-feather it, over-choke it, or it’s just going to leave holes and look like celluloid burnout.
So: better results if she can go to a third-party solution.
So here are the questions that were sent to us on Twitter:
Kylee Wall (@kyl33t) states that her “biggest peeve is when people shoot on formats that aren’t conducive to keying (like DV) then want me to make Hollywood quality comps.” What advice do you have for her?
If you’re working with standard-def video footage, the best keyer I’ve found is dvMatte Pro from dvGarage. Anything from dvGarage seems to handle interlacing and video noise better than any of the others that I’ve seen – other than, say, Ultimatte hardware or software plugins (which they don’t even make anymore). I’ve found that even trying to use Keylight in After Effects is really tough.
So she goes on to say that it’s particularly hard with certain types of subjects or talent, “especially spill on blonde hair. Bad lighting for sure! People trying to shoot full body without enough lights. Wrong color temps. Or just spill everywhere.“
Well here’s an issue that’s pretty well known with green screen. Even in high-definition, there are different colors that the camera’s going to pick up in a bleached blonde person vs. a naturally blonde person. It’s actually going to go toward a green cast anyway. You’re already working with that as an issue to start with.
So when you have:
- green spill on top of that
- low-resolution and bad edge definition
- lens or imaging issues
- tape transfer issues
all these things stack up against you.
This is where I would definitely use After Effects vs. trying to do it all in a non-linear editor, because you can stack in layers. So you might be able to find edge definition by using several layers of really harsh keys and softer keys and stacking them all up. You’re basically making a luminance matte. Matting in white over black is the only way you can get a clean matte.
I think I addressed that a couple of ways this week – dealing with edge spill issues and with really tough green screen shots. It’s just a matter of working with multiple layers. Some of it’s going to be hand roto work to fill in the holes. It’s just a lot of work, there’s no way around that.
There’s not any one plugin for it, it’s just a lot of hand work and stacking of layers.
Great advice. She also has issues with things being the wrong color temperature. And Kes Akalaonu asked a question about a green screen that’s not the “proper” green:
@NLE_Ninja asks, “How would I handle a green screen that’s not the proper green?”
(laughs) I don’t mean to laugh but I’ve encountered all of these, so I understand!
I know you addressed that during one of the days in this series, about how you have to match the background to the foreground. Talk about what you do when you have hue adjustments that you have to make.
I’ll duplicate the original layer, sometimes several times, then go in and really tweak and zero in (with the 3-way color corrector, etc.) to try to bring that green as close as I can, and I try to bring up my contrast. I’ll do that just so I can get my Keylight extraction to get some kind of an edge. That’s one way you can deal with it.
Of course you’ll be throwing the colors of your foreground subject off, too. And if there’s a lot of green spill, then you’re back to the issue of having to go back and forth and doing some hand roto to hold out some of the major areas. Just trying to get some kind of edge definition and then, again, stacking those layers to create a white-on-black matte. Render out that matte, bring it back in and extract the original footage from that, and do any last minute color correction of that extraction.
It is a back and forth process just to get the matte. It’s all about getting the matte. How do you get the matte? Once you’ve created your matte (that’s why I created the luma matte, white-on-black that I can bring back in), then it’s just a single pass with my original footage and it’s clean.
Sometimes I have to go in and hand paint some areas. If I see spill in the white-on-black, I can come back in – even in After Effects – and paint in and paint out areas that aren’t working, that didn’t hold out.
Again, that’s where working with your final luma matte allows you to clean up your matte and gives you something that you can work with.
Here’s a fun one that I’m sure has happened to you a bunch of times (I remember it happening to me and it was a lot of fun). What do you do when you have green-eyed talent in front of a green screen? Or maybe you’re using a blue screen and the talent has blue eyes. What are your options when that happens? When the shot is a close-up the problem can be really obvious. You don’t want their eyes to key out.
(laughs) That’s really similar to the reflectivity issue, except instead of replacing the green or blue like you would in a reflection, you just have a hold out matte.
This is where you will garbage matte in their face. Unless you have a lot of spill or something that’s causing an issue on their skin tone, it’s a really easy fix.
If you’ve got a really good, clean shot and you’ve got a close-up, you basically do everything like you would when creating a reverse garbage matte. You want to hold in areas of the face – which is a matter of making a matte around their eyes or around their whole face – use that as a hold out. It’s a second layer that you’ll keep in there.
It’s easier to do in After Effects than it is in Premiere or Final Cut, but it can be done there, too. If you can create a matte, you can hold it out.
So it’s just a matter of key framing that so if the person’s moving…
Yeah, if you create a matte and they’re moving their head, obviously you have to roto that. So it’s either a key frame to roto process or, if it’s really crazy, you have to track it. But yeah, just a hold out matte. It’s an easy one.
That’s good. Manson Floyd (@MansonF) has a similar issue. He wrote that he often has “to shoot shirts with a green and blue logo. Is there a good way to keep these areas from keying out?”
So it sounds like it’s the same process, right?
Yeah, you can do that. As long as you don’t have an edge issue, where they’re wearing a green shirt on a green screen, or something like that. But if they’ve got a green tie or something that stays within the boundaries of the rest of the character’s edge, then yes, it’s the same process.
If you don’t have the luxury of going in and creating mattes in every frame, and you do have one of these portable pop-ups that’s reversible green and blue, then use blue instead of green. Just make sure you’ve got enough light on it; blue screen typically takes a lot more light than a green screen does.
Right, right. Ben Lybrand (@benlybrand) wrote in that “the (issue) that vexes me the most is when the talent is moving their hands around there’s usually blurring green around the hands that is really hard to get rid of.“
I was making a joke to him that they’re like the Sleestaks from Land of the Lost – these kinds of webbed creatures with green webbed hands (laughs) – and how tough this problem can be to deal with. What kind of advice do you have for people when they get footage like that?
That’s actually very typical, especially if you’re shooting in 24p, you’ll have a lot more motion blur than if you’re in a faster frame rate.
First, you have to use a good matte software. You can’t just use the built-in keyer in Final Cut. That’ll give you nothing but headaches. You have to use good software to start with. Keylight does a pretty good job of dealing with that, actually.
Depending on what you’re trying to composite your character over, if you want to maintain a lot of the motion blur, it’s a matter of working with layers. I’ll garbage matte around their hands and create a whole different layer. I’ll use a different adjustment of Keylight on that area than I would say on the rest of their body – anywhere I need a little more detail, like hair.
There’ll be times when I’ll take one character and make five or six different layers masking different parts of their body. So I may mask just their hair, or maybe their whole head, and I’ll approach my Keylight extraction a little bit different there than I will around their clothing. And then their hand, or if they’ve got a sword or something like that, you may want to mask that and key that a little differently.
It’s a matter of creating all of these different masks to make one character extraction. If you have to choke that in a lot, I’ll reintroduce motion blur in my final composite by adding in a little ReelSmart Motion Blur from RE:Vision Effects. I think I’ve mentioned that before, but that’s a plugin I rely on a lot for any roto or matting of any kind. In this case, to re-introduce motion blur.
Do you find that Avid has better built-in matting plugins? I have a tweet here from Cliona Nolan (@jkl_3), who does very little green screen but is using Avid Media Composer and “getting finer edges right is always tough.”
Do you feel like Avid Media Composer is better than Final Cut Pro in terms of built-in tools?
Oh, definitely – Avid has a lot more advanced tools than Final Cut. I personally haven’t used it, but I have a lot of colleagues who do, and I’ve seen the type of work that they’ve done, including some of the morphing and tweening types of tools. There’s just a lot more in there.
Boris RED is actually bundled with Media Composer. I’d rather use these tools than anything that Final Cut has to offer, other than having to find third-party plugins.
BCC also has LightWrap and some other tools that really do help you with those edges if you don’t have any other keyer.
So this next question is about tracking issues. Joe B (@zbutcher) says that “accurate / identifiable tracking points are key for work that needs a lot of compositing / tracking of additional elements.“
Robin de Jong (@robindejongedit) agrees with him, talking about how much of a challenge it can be for editors if, on set, they “forget to place tracker markers (that are needed for shots with) complex camera and object movements.”
What do you do in a situation like that? How can you make up for what wasn’t done on set?
So, meaning if they don’t have proper tracker markers on the green screen?
Yes, or if they’re incomplete.
Sometimes you kind of have to fake it. That’s where match moving comes in. Here’s one example that I’ve used both in my book and in my video training on my website. There was a place where they’d put tracker markers on a green screen that was outside a window, but they had the green screen too close to the window, so when you tried to use the tracking markers for your background, it made it look like there was a poster pasted to the inside of the window instead of true depth outside the window. So I had to get rid of the tracking markers and then just did match moving by eye.
Sometimes you just have to do that. Other times there may be enough data back there that you can use a planar tracker like Mocha and establish your entire background scene (or at least a panel that will establish some kind of tracking). Maybe the markers aren’t there but you can utilize something else if you’ve got a lot of other junk on the set – things like light stands that you might be able to use just for tracking purposes.
Track a plane and then you can build everything off of that.
I hope I didn’t gloss over the issue of bad lighting. That was definitely a major theme in many of our questions. So again, what do you do when you get underlit or over-lit footage? What are the top 3 steps you can take to help with that?
Underlit is actually easier than over-lit. Over-lit sometimes makes greens go yellow or white, and unless you have really dark foreground subjects, you’re going to have a hard time getting an edge.
Underlit is a little easier just because you can go in there (and this is the same process as wrong color or anything like that) and make a layer that you’ll use as a luminance matte – maybe you have to over-crank it just to find your edges, and then do the luminance matte. Or just try to isolate different sections.
I’ve got one example in my book and videos that was underlit. I’ve got two individuals – an African-American and a Latina – who are both in one scene and the footage is underlit. There was hardly any light on the green screen. So I had to bring everything up, but isolate them differently. He didn’t have much hair, but she had a lot of hair. So I had to isolate her separately and get her hair tweaked in. He had nice sharp lines to work with, but the background was really dark greys and greens.
It’s just a matter of creating little mattes and masks around little areas, isolating those, stacking all your layers again and bringing it back to create a luminance matte. Then use the luminance matte on the original, and then you can color correct your foreground. That’s really your only way around it.
A lot of times people get frustrated because they think that the plugin is going to do all the work for them. You can’t just click the green and dial the dials and hope it’s going to do all the work. 99% of the time, that won’t work. Even if you have a really, really good green screen – well-lit, scoped in, everything’s balanced, everything’s clean, it’s high-definition, there’s no noise – very seldom can you do a one-click wonder.
It’s going to be a process of going through all these steps.
Okay, great. Thank you so much, Jeff!