Compositing and Post Work - What is it, and why should I care?


Comments

  • JoeMamma2000 Posts: 2,615

    Now, the next concept that is important to understand is that you can pretty much describe any color you want by using various percentages of three basic colors (or "primary" colors). It has to do with how human eyes work and stuff, but the important thing is that you can describe to the computer pretty much any color you want by telling it how much of each of those three primary colors to use in making the final color.

    Generally, for computer work you'll be using Red, Green, and Blue as the three colors.

    So if we go back to the image and the zoomed version I just posted, you can pick any one of the pixels in that image and describe the color of that pixel by specifying a combination of Red, Green, and Blue. And that's what the computer does. It takes each pixel and breaks it down into the Red, Green, and Blue color components.

    So for example, if we look at the 1280x960 skyline image, bring it into PS or Gimp, and choose a pixel at the end of the top line of pixels, over in the top right corner, we will see that the computer tells us that the blue sky is Red=84, Green=150, and Blue=237.

    What does that mean? Well, the image is an 8-bit image, which means that it uses 8 bits of data to describe each of the R, G, and B values. And for those who learned about bits and bytes and stuff, that means the computer has 256 values to choose from when picking each R, G, or B value. Since 0 is one of those values, the range is 0-255.

    So, as we might have expected, the pixel in the top right corner where the sky is blue should be mostly blue, which it is (237 out of 255).

    Now, what does all of this mean? Well, for every single pixel in that image, you can describe its color by three components, and each component can be one of 256 values.

    And since the RGB values are just simple numbers between 0 and 255, an easy way to store that information is to use three grayscale values for each pixel: one for R, one for G, and one for B.

    So if you take all of the R values for each pixel in the image and put them together into a separate grayscale image you generate what's called a "channel" of data describing the Red value of each pixel.

    Same goes for the G and B values.

    And the attached image is the result of doing just that. This image is the blue channel data, describing the blue component of each pixel in the image.

    And of course, the "whiter" the pixel in this grayscale, the higher the blue component. Which is why the sky area is so bright. The pixels in the sky area of this image are in the 237 range....
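    (If you want to poke at this yourself outside of PS or Gimp, here's a minimal Python sketch using the Pillow library. The filenames are just placeholders for whatever image you're working with.)

        # Split an RGB image into its three channels and save the blue
        # channel as a grayscale image.
        from PIL import Image

        img = Image.open("skyline.jpg").convert("RGB")
        r, g, b = img.split()  # three 8-bit grayscale images, one per channel

        # The brighter a pixel here, the higher its blue component, which
        # is why the sky area comes out nearly white.
        b.save("blue_channel.png")

        # Sample a sky pixel near the top right corner:
        print(img.getpixel((img.width - 1, 0)))  # e.g. (84, 150, 237)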

    Blue_Channel.JPG
    973 x 734 - 122K
  • JoeMamma2000 Posts: 2,615

    So now we've learned that any image can be fully described by 3 "channels" of data, and those three channels describe the R, G, and B values of every pixel in the image. And those "channels" are nothing more than grayscale images whose pixel values correspond to R, G, and B values.

    Now, I've never been a big fan of the term "channel" for these image components, cuz it's a bit confusing, but anyway...they never asked me... :) :)

    So if you open the image in PS and click on the Channels tab, you'll see the RGB channels we've been discussing, and they are all nothing more than grayscale images. So if you save an image file with just those three image components you can fully describe the image. Cool.

  • JoeMamma2000 Posts: 2,615

    Now, one side issue that is worth discussing is what's called "color depth".

    I mentioned that the skyline image I posted used 8 bits of data to describe the R value of each pixel, and another 8 bits to describe the G value, and another 8 bits to describe the B value.

    That's what's called "8 bit depth".

    Now, you can see right away how describing a color with only 256 values of Red, Green or Blue might be a bit limiting. Since you have to pick one color for each pixel, wouldn't it be nice to have a lot more colors to choose from?

    Well, yeah, and that's why some people use images with a higher bit depth. And some serious compositing tasks require higher bit depths.

    For the average guy, though, 8 bit images are probably sufficient. But at least you know what it means now, so you can decide for yourself... :) :)
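    (If you like to see it in code, bit depth is nothing more than the numeric type used for each channel value. A quick NumPy illustration, with arbitrary image dimensions:)

        import numpy as np

        img_8bit = np.zeros((480, 640, 3), dtype=np.uint8)    # 256 levels per channel
        img_16bit = np.zeros((480, 640, 3), dtype=np.uint16)  # 65,536 levels per channel

        print(np.iinfo(np.uint8).max)   # 255
        print(np.iinfo(np.uint16).max)  # 65535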

  • JoeMamma2000 Posts: 2,615
    edited April 2015

    So, for the typical image, all you need is three channels of data in order to fully describe each and every pixel in that image. So you store the image in an 8 bit file with 3 channels of RGB data and you're good to go.

    However, if you want to manipulate the image, you need to get fancy.

    Long ago, the cool computer graphics guys realized that you could add more information to the image file so that you could composite that image with another image, like a background. So instead of a total of 24 bits to describe only the RGB values, you could add another 8 bits to describe any transparent areas you might want, so you could isolate the subject and drop it on a background.

    Now of course you didn't have to use that information, but at least if it's there you can use it whenever you want. And doing that kind of thing, dropping an element onto a background, was hugely important for putting Luke Skywalker on a cool background of a faraway planet that doesn't really exist.

    So they developed what's called an Alpha channel.

    And just like we discussed with all the render passes, an alpha channel merely provides a grayscale image that describes which areas of the image are intended to be transparent, and which aren't. Typically the black areas of the image are intended to be transparent, and the white areas fully opaque.

    And that's why many image formats have the ability to store an additional 8 bit channel, rather than just the RGB channels.
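    (Here's a small sketch of that idea in Python with Pillow. The filenames are placeholders, and the all-opaque alpha is just for illustration; a render would normally give you a meaningful one.)

        # An alpha channel is just one more 8-bit grayscale channel:
        # white = fully opaque, black = fully transparent.
        from PIL import Image

        rgb = Image.open("spaceship.png").convert("RGB")
        alpha = Image.new("L", rgb.size, 255)  # start fully opaque
        rgb.putalpha(alpha)                    # RGB + A = a 32-bit RGBA image
        rgb.save("spaceship_rgba.png")         # PNG can store the 4th channel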

  • JoeMamma2000 Posts: 2,615

    Okay, so if channels are the image components that describe the RGB and sometimes alpha parts of an image, what are "layers"?

    Well, let's look at an example...

    Let's look back at our render passes examples, and load into PS an image that has a diffuse layer from a render pass. That layer is also a component of the image, but it only provides information about one aspect of the image: its color. And if you look at the components of THAT layer, you'll see that it also has RGB channels to describe the colors of each pixel in that layer image.

    So the diffuse LAYER also has 3 RGB CHANNELS.

    So a layer is generally a separate image, packaged with the main image, that serves as one component of a compositing process. And a channel is generally one of the RGBA components of each layer.

  • DADA_universe Posts: 336

    Yaay!!!! Joe Mamma is back! You're writing a book! I can tell.....you're writing a book!

  • JoeMamma2000 Posts: 2,615
    edited April 2015

    Now, you may have also noticed that when you output render passes, some of the passes are included in the rendered image's channels, and some are included as separate layers (assuming you specify that they be embedded in the same file rather than saved as external files).

    Generally, those passes that are actual components of the final image are included as layers (diffuse, specular, shadows...), while those that merely provide information about the render are included as channels. These might include stuff like Object ID, Normals, etc.

    Another thing to keep in mind is that you generally have total control over what is included in your images. Most compositing software gives you the ability to move channels and layers however you want, even between images. So images can become less like images and more like tools to do whatever you want.

  • JoeMamma2000 Posts: 2,615

    And this brings us to what I think is one of the most confusing areas for people who are new to compositing...

    The concept of moving channels around inside an image, or even moving them around between images, often causes a lot of confusion.

    In Nuke there is the "Shuffle Copy" function, and in Fusion there's the "Channel Boolean". But whatever they're called, they're basically the same function.

    And a good example of how you might use them is this:

    We all know now about Carrara's Object Index (aka, "Object ID") pass. It basically renders a grayscale image, where each object is assigned a uniform grayscale value. So a sphere might have a grayscale value for all of its pixels of (R=1, G=1, B=1), and it will render as an almost black image. And a cube beside it might have a grayscale value for all of its pixels of (R=2, G=2, B=2), and it will render as a slightly less black image.

    Now, this image will probably show up as a channel of your rendered image, so you'll have R, G, B, A, and Object ID.

    However, the Object ID channel isn't of much use as rendered by Carrara, since it isn't configured as an alpha channel or mask, and so can't yet serve its intended purpose: isolating the associated object with a mask.

    So how do you fix that?

    Well, one way is to modify that Object ID channel so that it becomes a black/white mask for the object you want to isolate, and then move it into the Alpha channel. Once the channel is a proper mask for the intended object, you can use, for example, the Nuke "Shuffle Copy" node to do the move.

    Below is an Object ID pass. Just select one of the objects, create a mask, and move that channel into the image's alpha channel and voila, you have an object mask that travels with the image.
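    (If you'd rather see that as code than as a node graph, here's a rough NumPy/Pillow sketch of the same idea. The filenames and the ID value of 2 are hypothetical.)

        # Turn an Object ID pass into a black/white mask, then "shuffle"
        # the mask into the image's alpha channel.
        import numpy as np
        from PIL import Image

        rgb = np.array(Image.open("render.png").convert("RGB"))
        obj_id = np.array(Image.open("object_index.png").convert("L"))

        # White where the chosen object (ID value 2) is, black elsewhere:
        mask = np.where(obj_id == 2, 255, 0).astype(np.uint8)

        # Pack the mask in as a 4th channel so it travels with the image:
        rgba = np.dstack([rgb, mask])
        Image.fromarray(rgba, "RGBA").save("render_with_mask.png")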

    Object_Index.jpg
    640 x 480 - 103K
  • makma Posts: 54

    "Game of Thrones" compositing show:

    https://vimeo.com/100095868

    Have fun!

    Marek

  • Jonstark Posts: 2,738

    makma said:
    "Game of Thrones" compositing show:

    https://vimeo.com/100095868

    Have fun!

    Marek

    I've watched the FX reels for every season so far, and been consistently amazed at the amount of seamless compositing that goes into this show. So many scenes you would think have no digital effects at all, and then you find out afterwards in the FX reel how very very much compositing actually goes into it. Great link.

  • makma Posts: 54

    Hi, Jonstark! The reel is great.
    Now I've found the reason for celebrity madness! Spending days in green boxes could bring anyone to a state of shock...
    On the other hand, the reel supports Joe's main view throughout the thread: isolation/compositing is the way to go for an efficient production pipeline, and it gives you the opportunity to get great artistic results. So there is no other way: we have to learn it, despite the ever-growing accuracy of the new render engines.

    Marek

  • JoeMamma2000 Posts: 2,615

    makma said:
    "Game of Thrones" compositing show:

    https://vimeo.com/100095868

    Have fun!

    Marek

    It's amazing all the work they need to go through, huh? And all because they didn't render it right the first time... :) :)

  • Rashad Carter Posts: 1,799

    makma said:
    "Game of Thrones" compositing show:

    https://vimeo.com/100095868

    Have fun!

    Marek

    JoeMamma2000 said:
    It's amazing all the work they need to go through, huh? And all because they didn't render it right the first time... :) :)

    There's always more than one way to skin a cat.

  • JoeMamma2000 Posts: 2,615

    Rashad Carter said:
    There's always more than one way to skin a cat.

    That's true of just about anything, isn't it? :) :) :)

    That's why the smart guys try to learn what's the best and smartest way...

    What's your point? :)

  • 0oseven Posts: 626

    Terrific Article Joe !

  • JoeMamma2000 Posts: 2,615

    To continue with the subject of moving channels around in your compositing app, as well as dealing with something like an Object ID pass, here is a very old comp I found that shows how you can take the Object ID pass from Carrara and convert it into something useful.

    As I mentioned, the raw pass from Carrara is of little use, and you need to do some monkey motion to make it useful. This is a screenshot from a Nuke comp where I take the pass and first "grade" it to crank the white point down so that you can actually see the almost-black grayscale image that Carrara barfs out... :) :)

    The top left image shows the result of the grade. After that I sampled the grayscale values of both objects (which turned out to be RGB values of 0.3 and 0.4), and added two luminance keyers in order to make mattes for each object. You just set the luminance keyers to key a very narrow range of luminance values (the 0.3 keyer is shown on the right), and voila, you have a matte for each object. Those are the two red matte images.

    Once those are done, you can re-inject those mattes back into the original image as alpha channels using one of the channel shuffling tools, or just use them downstream in your composite.

    And of course, the more up-to-date 3D apps which include some form of node compositing make this entire process FAR easier.
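    (For anyone without Nuke, here's roughly what the grade and luminance keyer steps are doing, sketched in NumPy. The 0.5 white point and 0.02 tolerance are made-up values that you'd tune to your own pass.)

        import numpy as np
        from PIL import Image

        pass_img = np.array(Image.open("object_index.png").convert("L")) / 255.0

        # "Grade": pull the white point down so you can actually see the
        # almost-black pass (for viewing only; the keyer uses raw values).
        graded = np.clip(pass_img / 0.5, 0.0, 1.0)

        # Luminance keyer: select a very narrow band of luminance values.
        def lum_key(img, value, tol=0.02):
            return ((np.abs(img - value) < tol) * 255).astype(np.uint8)

        matte_a = lum_key(pass_img, 0.3)  # matte for the first object
        matte_b = lum_key(pass_img, 0.4)  # matte for the second object
        Image.fromarray(matte_a).save("matte_a.png")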

    Object_ID_Extraction.JPG
    1905 x 999 - 158K
  • JoeMamma2000 Posts: 2,615

    And to drive home the point I've been making about the complete freedom and flexibility you can obtain from using tools outside of Carrara, such as compositors, I've taken a test image that was posted here the other day and done some quick modifications to merely change the "mood" of the image and add some drama. This is not intended as criticism of anyone's image; it's merely instructional, to show how easy it is to make instant modifications that might take a very long time, or not even be practical or possible, in your 3D app.

    The image below started as a straight JPEG image, and I brought it into my favorite compositing app to change the mood and add some more drama. It was also a good opportunity to show how you can simulate 3D effects in 2D images very quickly, with realtime feedback.

    I basically just added a blue underwater color cast to the image, added a little glow, and also a "volume rays" effect that does far more than just slap some light beams over the top of your image. In fact, as you can see, it respects the underlying image and uses luminance values to determine how the rays will appear to interact with the objects in the scene. And this is all done in 2D.

    All you do is add the node, and use the mouse to move the light source around your image, with realtime feedback of the resulting rays. Note how the apparently 3D rays conform around the diver, and light up the rocks below. Of course, it also requires some modification of blending modes, as well as some manual masking to tweak the lighting on various areas of the image.

    But the opportunities are endless, and you can get just about any effect you can imagine.

    Again, this was NOT intended to criticize anyone's image, but to show how one very useful and dramatic 3D effect can be used very quickly on a 2D image.

    AndyRaysDramaFinal.jpg
    1280 x 853 - 70K
  • JoeMamma2000 Posts: 2,615

    And for comparison, attached is the original JPEG image.

    Also, FWIW, I've included the compositing node flow diagram to show how simple the modifications can be. This particular comp was done in Nuke, but you can do the same thing in Fusion or even Blender, both free apps. And when Nuke is released for free later this year, you can download that and give it a try.

    Andy_Flow.JPG
    1221 x 655 - 40K
    Andy.jpg
    1280 x 853 - 784K
  • MarkIsSleepy Posts: 1,496

    JoeMamma2000 said:
    I basically just added a blue underwater color cast to the image, added a little glow, and also a "volume rays" effect that does far more than just slap some light beams over the top of your image. In fact, as you can see, it respects the underlying image and uses luminance values to determine how the rays will appear to interact with the objects in the scene. And this is all done in 2D.

    All you do is add the node, and use the mouse to move the light source around your image, with realtime feedback of the resulting rays. Note how the apparently 3D rays conform around the diver, and light up the rocks below. Of course, it also requires some modification of blending modes, as well as some manual masking to tweak the lighting on various areas of the image.

    But the opportunities are endless, and you can get just about any effect you can imagine.

    That's very cool Joe. Something in my brain just doesn't want to wrap itself around node-based editing - it's the main thing that's kept me from using Blender more. I keep trying, but those spaghetti messes I see for more complicated edits are frightening. I'm far from expert, but I could have done this same thing in Photoshop in a couple minutes - I'm curious whether the node-based system has benefits for something like this over doing it that way? Or (in this particular case at least) would it be a matter of "use whichever you're more comfortable in?"

  • JoeMamma2000 Posts: 2,615

    MDO2010, good question. I think it all depends on what you need. Clearly, I think for most of the folks here, Photoshop or Gimp is fine. In the years I've visited this forum, it's clear that, for what people here are interested in doing, those apps meet their needs.

    But for serious compositing needs, those apps just don't provide the features designed for professional work on commercial projects, done with high quality, quickly and efficiently. For most here, it's not really an issue of node based vs. layer based compositing, because their needs are limited to simple image manipulation.

    However, there are HUGE benefits to professional compositing apps for those who need those features. But unless you've actually reached the point in your work where a layer based approach gives you difficulty, and can give a node based system a try to see how much better it is (IMO, at least), someone just telling you about the comparative benefits might not mean much.

  • JoeMamma2000 Posts: 2,615

    MDO2010, let me give just one small example to try to illustrate why some prefer a nodal approach...

    Previously in this thread I described a Nuke comp where I extracted a matte from an Object ID pass out of Carrara. And I posted the node flow diagram, shown below.

    Now, I agree when you say that often the node diagrams are a jumbled mess. :) :)

    But there are features and procedures you can (and should) use to make it much clearer. As shown in the image below, if you use a colored background for a section of your flow, and break the flow up into bite sized, easily understood modules, then suddenly it becomes much easier to understand. Also I tend to use the little dots shown in the image to arrange the flow in a nice way. So you can see in the image that the main module in the flow is an "Object_Index_Matte_Extraction", which I named when I added the colored background.

    Now, something you can do that is extremely useful is to save the nodes that are in the colored background as a standard Object ID matte extraction tool for future work. Then the next time you need to do a comp that includes extracting mattes from a Carrara render pass, you just load that module into your comp, do a couple of tweaks if needed, and you're all set.

    And after a while you build up a library of modules, so if you need to do a new comp you just load in the appropriate modules and you're all set. That also helps you gain an understanding of what you're doing... if you have a library of 100 often-used modules, laid out in a clear manner, you can soon look at an otherwise complex flow and instantly understand it, since you've seen all those modules and used them many times before.

    Object_ID_Extraction1.jpg
    500 x 474 - 70K
  • kakman Posts: 225
    edited May 2015

    Thanks to Joe for starting this thread and thanks to Joe and everyone who contributed by sharing their knowledge, insights and opinions on the subject matter.

    This thread stoked my imagination and compelled me to try to learn and apply some new methods for my hobby – which is making Blu-Ray discs of my various travels. Although I have just completed my 35th such disc, I am always trying to expand my horizons and, of course, improve the quality of the production.

    The images below are a photograph of downtown Birmingham, Alabama, a Carrara 8.5 render and the final image is a composite done in Photoshop Elements 13 (which I downloaded, installed and used for the very first time).

    The final image was used as the closing for the disc, with an additional text overlay.

    The nickname for Birmingham is Magic City.

    Birmingham_-_Moon_Gaze_Magic_City_V2.jpg
    1920 x 1080 - 590K
    Birmingham_-_Moon_Gaze_Magic.jpg
    1920 x 1080 - 570K
    IMG_6862.JPG
    1920 x 1080 - 353K
  • JoeMamma2000 Posts: 2,615
    edited May 2015

    Inspired a bit by kakman's nighttime cityscape of Birmingham Alabamer, and while I was tapping my fingers waiting for a seemingly interminable Blender render, I decided to play a bit with cityscapes and compositing.

    This one is basically just a gorgeous photograph of Singapore, and I did some compositing stuff to make it appear to be a 3D scene. I inserted a glowing alien spacecraft/entity/whatever and a "super moon" background, and had the spacecraft fly into the scene, pass behind some buildings, and land at the Fullerton Hotel in Singapore.

    I hope they had reservations cuz that place gets booked weeks in advance. :) :)

    I also added some volumetric rays and lens flares and all that cool stuff to show how the alien entity can warp the space/time continuum and all that kind of stuff, and made an animation out of the whole mess.... :) :)

    Here's the link to the animation: http://youtu.be/1OWCa3vslhE

    (EDIT: I changed the link to an animation where I included the cool "super moon" background...)

    Unlike many here who like to call the images they post "unfinished test images" and such to deflect criticism (just joking....even though we all know it's true, right? :) :) ), I'll call this one FINISHED and welcome comments. In fact, maybe this can be a learning experience for anyone interested in compositing. What additional steps are needed to fully integrate the spacecraft/whatever into the scene? Take a look at the animation before commenting.

    I count at least 4 additional tweaks that should be done to fully integrate the effects into the scene and fix a glaring error or two. Plus a bunch of personal preferences I'd change.

    Landing1a.jpg
    1920 x 1080 - 1M
  • DesertDude Posts: 1,235

    JoeMamma2000 said:
    What additional steps are needed to fully integrate the spacecraft/whatever into the scene? Take a look at the animation before commenting.

    I'm no expert, but I'll make an attempt...

    1) Maybe since that spaceship is so bright the buildings could receive some extra light as it flies past them, especially the sides of buildings most exposed to that glowing spaceship.

    2) Animate some lights for cars or car headlights to introduce some very subtle motion on the land. Nothing detailed, not actual 3d cars.

    3) Introduce some subtle movement for the water; it's very still. You don't need to bash the viewer over the head with big fancy waves or anything, again something subtle.

    4) Possibly introduce some of that light from the ship reflecting on the water as it makes its final landing? It looks to me like lights from the hotel are reflecting in the water, so if that spaceship is landing directly in front of the hotel, maybe the water would pick that up. Not sure though...

    JoeMamma2000 said:
    Unlike many here who like to call the images they post "unfinished test images" and such to deflect criticism (just joking....even though we all know it's true, right? :) :) )

    Sadly for me, it is true. :red: ... :-)

  • JoeMamma2000 Posts: 2,615

    Someone asked me a question about premultiplication and alpha channels and related stuff, and what it means, so I figured I’d copy my answer here. It's probably a whole lot more than most folks here are interested in, but maybe it will help someone.

    So what is it, and why do we care?

    Well, first we’ll go over some basics of alpha channels and mattes. It’s one of those areas that many think they understand, but in fact they really don’t… :) :)

    Long ago, the pioneering graphics guys realized that they really needed to come up with a way to layer images on top of one another. Like if you wanted to superimpose a cool image of a spaceship that you rendered in your 3D render software on top of a photograph of a city skyline, so that it looks like the spaceship is flying over the city. That simple operation is one of the most crucial aspects of generating images in every feature film, music video, TV commercial, and just about any other digital visual medium you can imagine. It's done to superimpose live action, CG elements, text, and any other type of image you can imagine over other images.

    So how do you do that? Hmmm….let’s see….you have a CG image of a spaceship in one hand, and a photo of the city in the other hand.

    Well, you could print out the two images and get out a pair of scissors… :) :)

    Okay, well that’s not real practical. So let’s look at some other possibilities…

    Skyline.JPG
    1203 x 789 - 133K
    Spaceship.JPG
    1203 x 788 - 36K
  • JoeMamma2000 Posts: 2,615

    For me, the easiest way to understand digital images is to see what happens on an individual pixel level, since everything with digital images comes down to some very simple operations on each individual pixel.

    Now keep in mind that every pixel in your rendered image is nothing more than a set of 3 numbers, which describe the Red, Green, and Blue components of that pixel’s color. So somehow you have to take the pixels in the spaceship image and combine them with the corresponding pixels in the city skyline background image in such a way that the spaceship is superimposed over the city, and the black background around the spaceship is gone. You pretty much want to make the black background be fully “transparent”. Or something like that…

    But the problem is that there really is no such thing as “transparent” to the computer. All it knows is that an image has pixels, and it has to decide upon ONE color to assign to each of those pixels. That’s it. And all the computer can do with the image is some very simple math operations on those pixel value numbers, like add, multiply, etc.

    So let’s look at our spaceship and background images and try to figure out how to get the computer to place the spaceship over the city background.

    Well, remember that when you add any pixel value to a black pixel (i.e., RGB = 0, 0, 0), you just get the original pixel value, right? So RGB(138, 210, 12) + RGB(0, 0, 0) = RGB(138, 210, 12). If you add 0 to ANY number, you just get the original number. So if the computer sees one image with a (0, 0, 0) black pixel, and a corresponding pixel in another image with RGB(138, 210, 12), and it adds the two images, it will give a final pixel of RGB(138, 210, 12). It effectively discards the black parts of the spaceship image and replaces them with the city skyline image pixels.

    So maybe, since the spaceship image is surrounded by black, we can just add each pixel of the spaceship image to the corresponding pixel of the background. Cool, right? The city background pixels will then be added to the zero/black area surrounding the spaceship image, and you’ll just get the city background image. And where the spaceship is, you just add those pixels to the underlying city background pixels and you’re all set, right?

    Well, almost…

    Here’s the result if you merely add the two images together.

    Composite_Plus.JPG
    1024 x 697 - 115K
  • JoeMamma2000 Posts: 2,615

    It’s fine for the black area surrounding the spaceship, because skyline + black = skyline. But if you add the spaceship pixels to the underlying skyline pixels, you’ll get a problem. You’ll get spaceship + skyline, when all you want is just spaceship….

    For example, let’s say a spaceship pixel is (215, 128, 206), and you’re adding a skyline image pixel of value (127, 16, 122). What’s the result? Well, the R and B values will add up to much greater than 255, which is bad. It’s called “superwhite”. But the bigger problem is that, by adding the two, the spaceship image area has been changed. Instead of the correct value for that pixel of (215, 128, 206), it’s now a much lighter value of (342, 144, 328). Bad scene, man.

    So you’re getting close to a solution, but just adding the two images ain’t gonna work. So you need some other way to do this.

    Now, what was the problem with the "add" method we just tried? Only that the underlying city skyline pixel was added to the corresponding spaceship pixel. What we really want is for the underlying city skyline pixel to be totally black, so that when you add the two pixels all you get is the original spaceship pixel.

    So how do we effectively “punch a black hole” in the skyline image where the spaceship is?

    Well, to do that we first need to define the area covered by the spaceship, right? Once we have that we can punch a spaceship-sized black hole.

    Enter the "alpha channel". Basically, it is a grayscale image, often generated by 3D render software, that does exactly that: white pixels where the spaceship is, and black pixels where it isn't. Cool. Now we can combine the spaceship alpha channel with the skyline image to punch out a spaceship-shaped black hole in the skyline image, then add the two images and we're done!!!

    Well, almost…

    SpaceshipAlpha.JPG
    1205 x 793 - 32K
  • JoeMamma2000 Posts: 2,615

    Since we need a black hole in the skyline image, we’ll have to invert the spaceship alpha channel first so that it’s white everywhere the spaceship isn’t, and black where it is. Then we can multiply that inverted alpha with the skyline image, since 0 (or black) times any value is 0 (or black). That forces the skyline pixels under the spaceship to 0 (or black).

    SpaceshipAlphaInvert.jpg
    1205 x 793 - 36K
  • JoeMamma2000 Posts: 2,615
    edited May 2015

    Now all you need to do is punch the hole in the background city skyline image (aka "stencil"), and you're ready to add the spaceship onto the background. And you can do that easily with a Multiply operation, thanks to the fact that 0 (black) times any number is black, and 1 (white) times any number is that number. So it passes thru the white alpha areas unchanged, and forces the black alpha areas to black.

    Stencil.JPG
    1027 x 707 - 112K
  • JoeMamma2000 Posts: 2,615

    So now we have figured out the basic process for placing one image on top of another. You take the spaceship image with a fully black background, then punch a black hole in your background skyline image (by multiplying with the inverted spaceship alpha info), and then add the two images. And voila, you have a spaceship flying over the city.
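    (And here's the whole recipe in a few lines of NumPy, if you want to convince yourself it works. The filenames are placeholders, and all three images are assumed to be the same size.)

        # Stencil-and-add: punch the hole with the inverted alpha, then add.
        import numpy as np
        from PIL import Image

        fg = np.array(Image.open("spaceship.png").convert("RGB")) / 255.0
        bg = np.array(Image.open("skyline.png").convert("RGB")) / 255.0
        a = np.array(Image.open("spaceship_alpha.png").convert("L")) / 255.0

        hole = bg * (1.0 - a)[..., None]  # skyline times inverted alpha
        comp = hole + fg                  # add the spaceship into the hole

        out = (np.clip(comp, 0.0, 1.0) * 255).astype(np.uint8)
        Image.fromarray(out).save("composite.png")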

    But we haven’t mentioned “premultiplication”…what is that?

    Well, let's say that our original spaceship image isn't on a black background. Let's say it's on a blue background. Or a sky background. When you punch a black hole in the city skyline background and then add the spaceship image to it, you're gonna get a mess. You'll be adding the city skyline pixels to the blue or sky background pixels surrounding the spaceship.

    And here’s the result….

    CompBlue.JPG
    1025 x 700 - 102K
    SpaceshipBlue.JPG
    855 x 649 - 31K