What you really need is some way to first force all of those blue background pixels surrounding the spaceship to black.
Well, in the same way you punched a black hole in the city skyline image, you need to do the reverse to the spaceship image and force everything surrounding the spaceship to black. And you do that with what’s called “premultiplication”. As the name suggests, you “first multiply” the alpha information with the spaceship image itself, and that will force those pixels surrounding the spaceship to black. Similar to what we did with the city skyline image, but kind of the inverse.
So you punch a black hole in the city skyline (using multiplication), then punch the inverse black hole in the spaceship image (using premultiplication), and then simply add them together. And for any compositing application you use, those operations are done automatically when you tell it to place one image over another. Punch a hole in the background, punch a hole in the foreground, and add them.
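If it helps to see it at the pixel level, here's a minimal Python/numpy sketch of that punch-punch-add (the function and variable names, and the 0..1 float range, are just my assumptions for illustration):

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Composite an unpremultiplied foreground over a background.

    fg_rgb, bg_rgb: float arrays of shape (H, W, 3), values in 0..1
    fg_alpha:       float array of shape (H, W), values in 0..1
    """
    a = fg_alpha[..., np.newaxis]   # broadcast alpha across the RGB channels
    hole = bg_rgb * (1.0 - a)       # punch the hole in the background
    premult_fg = fg_rgb * a         # punch the inverse hole in the foreground
    return hole + premult_fg        # add them together
```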
Now, why is it called “PRE-multiplication”? Well, because your 3D render app like Carrara has a choice of whether to give you the resulting render with the image already multiplied by its alpha information or not. Should it premultiply before showing you the render or saving the file, or should it leave it unmultiplied and let you do it later if you want? Should it force the blue background to be black so that your image will be ready for compositing, or just leave it as it was rendered?
Now you might think “well, why wouldn’t you always want it to premultiply by the alpha? Makes it easy to superimpose on different backgrounds, right?”. Well, yeah, but there are downsides…
When you premultiply you actually modify the RGB values of your image. It multiplies the RGB image pixel values by the corresponding alpha pixel values and replaces the RGB values with that result. So if your image pixel starts out as RGB (215, 128, 206), and that pixel has an alpha value of, say, 0.10 (10% opacity), then the premultiply changes that pixel value to RGB (22, 13, 21), since it multiplies the RGB values by 0.1. And that can cause problems down the road when you try to modify the image. Now, the problems can be easily fixed when you're compositing, but you need to be aware of it.
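Here's that same arithmetic as a tiny Python snippet, just to make the information loss concrete (the rounding step is my assumption about how an 8-bit file would store the result):

```python
import numpy as np

rgb = np.array([215, 128, 206], dtype=np.float64)
alpha = 0.10

# Premultiply: scale RGB by alpha, then round to store in an 8-bit file
premult = np.round(rgb * alpha).astype(int)
print(premult)    # [22 13 21], matching the example above

# "Unpremultiplying" later divides by alpha -- but the rounding already
# threw information away: 22 / 0.10 = 220, not the original 215
restored = np.round(premult / alpha).astype(int)
print(restored)   # [220 130 210]
```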
So basically, all of this alpha and premultiply stuff comes down to punching black holes in images, then adding images together. Really it has nothing to do with transparency, but that’s a useful concept to use when thinking about it. And if you do any serious compositing work, it’s best to understand what’s going on at the pixel level. And at that level it's all just 1 + 1 = 2 stuff... :) :)
Also, one somewhat obvious but important point is this...
All of this alpha and premultiplication stuff is only really important when you have images you'll be placing over another image in some other application.
When there is no "background" area visible to your Carrara camera you don't really need to worry about alphas and stuff. The Carrara (or other) renderer handles all of that transparency stuff just fine within the scene.
But if you're going to, for example, use an Object Index matte to cut out some objects from your scene and composite them then you might need to be aware of how alphas and all that stuff work.
Well after a week or two away from all computational-type devices, I was doing some render pass stuff in Carrara which involved Object ID passes. And I realized I never mentioned a couple of key items you’ll need to know if anyone does any Object ID stuff in Carrara. At least I *think* I never mentioned them. But I’m too lazy to sort thru the old posts to verify. :) :) So apologies if this is a repeat of some stuff.
Now, let’s say I want to use Carrara for what it’s good at (at least IMO), and that is rendering simple characters. And I then want to take the character part of the render and integrate it with some other components, using the strengths of whatever else is out there (camera images, other 3D apps, live action, etc.).
I could render out the character against a black background, and also include an alpha in the render, and I’m all set for integrating the character with some other elements, right? Here’s a simple render, character against black background, with no ambient/bounce lighting.
Well, that’s not quite enough. The problem is that you need to plan for integrating the character into whatever new background you’ll be using. The character will need to fit into the new environment. And the new environment would probably reflect light onto the character. And cast shadows onto the character. And be seen in reflections on the character. And so on…
So you’ll need to render the character in Carrara with the appropriate ambient and bounce light and reflected light and shadows. Or at least a starting point that you can modify in your compositing software so it will integrate cleanly.
Here’s an example of a simple render I did with a character, using a Realistic Sky atmosphere (to give a realistic ambient sky and sunlight), and with floor and wall objects to give some good bounce light. And using the appropriate render passes, I can then tweak the GI lighting, etc., in my compositing app to adjust for whatever background I will integrate the character into. For example, if I render it with a lot of GI bounce/ambient light, I can then tweak those levels in my compositing app to allow for varying levels of desired bounce/ambient. And what was rendered with one environment can then be used for many different environments.
Here’s the Global Illumination pass out of Carrara, which can be used at varying levels to add bounce/ambient light to my character. By merely sliding a slider, in real time, I can either tone down the amount of ambient/bounce light, or even multiply it to get an even greater effect than was rendered.
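In case it's useful, the slider is doing nothing fancier than this (a sketch assuming the passes recombine additively, which is how this kind of tweak behaves in most node-based comp apps; the names are placeholders):

```python
import numpy as np

def dial_bounce(base_rgb, gi_pass, amount):
    """Recombine the render with a scaled Global Illumination pass.

    amount = 0.0 removes the rendered bounce/ambient light entirely,
    amount = 1.0 reproduces the original render,
    amount > 1.0 pushes the bounce beyond what was rendered.
    """
    return base_rgb + gi_pass * amount
```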
So all I need to do is render it, then use the alpha channel to isolate the character, and I’m all set to go, right?
Well, not quite. First, here’s the alpha I get from the render. Not real helpful to isolate the character.
But that’s what an alpha is. All objects in the scene are white, and backgrounds are black.
So now what do we do?
Well, the first option, and it might be the best, is to simply disable visibility of all objects that aren’t the main character, and disable any other background stuff that might get in the way, and do another fast Carrara render (with all time-consuming render options turned off) just to get an alpha. Not real elegant, but it might be the best alternative.
Another option is to use an Object ID pass. I mentioned before that an Object ID pass is a grayscale image in which Carrara assigns a single grayscale value to all the pixels that define each object in a rendered image. Different apps do this differently, but for Carrara it’s just a simple grayscale. And one of the points of having an Object ID pass is so that you can generate a matte image or alpha channel that defines the object you want to isolate, just like any other alpha channel.
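As a rough sketch, pulling a matte out of that grayscale ID pass is just a per-pixel value test (the gray values here are placeholders, and the tolerance is my assumption to absorb 8-bit quantization):

```python
import numpy as np

def id_matte(id_pass, gray_values, tol=1.0 / 255.0):
    """Build a binary matte from a grayscale Object ID pass.

    id_pass:     float array (H, W), one flat gray value per object
    gray_values: the values assigned to the objects you want to keep
                 (character, hair, clothing, etc.)
    """
    matte = np.zeros(id_pass.shape, dtype=np.float64)
    for v in gray_values:
        matte[np.abs(id_pass - v) < tol] = 1.0   # "there or not there"
    return matte
```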
That being said, there are some inherent limitations and challenges with working with an Object ID pass for generating alphas. And because of those limitations and challenges, in some cases you might want to consider just doing the additional render to get a real alpha channel.
One of those limitations is that the Object ID pass is, by definition, not anti-aliased. In other words, each pixel has a value based solely upon whether the associated object is occupying that pixel or not. And for the object’s edges, if the object fills most of the pixel, it’s considered “there”, and the pixel is painted white. But if it’s less than, say, 50% coverage, then it becomes black. There’s no antialiasing because, well, that’s not the purpose of an Object ID. It’s a simple “there or not there” determination.
Here's a zoomed-in version of the modified Object ID pass showing the non-AA'd edges that come with that pass.
(BTW, click on the attached image to see the full-res jagged edges)
And that gives us a challenge if we want to use the Object ID pass as an anti-aliased alpha image so we can smoothly blend the image with whatever background we choose.
So how do we make the edges anti-aliased?
Well, I mentioned before what’s called a “Coverage” render pass. I forget what Carrara calls it (fragment maybe?). And a coverage pass provides an anti-aliased image which defines the edges where objects in the image overlap each other. It basically defines the edge antialiasing between overlapping objects. Which is exactly what we need to make our Object ID pass anti-aliased. Well, almost exactly. We still need to do some monkey motion to make it do what we want.
Below is a coverage pass from a Carrara render in which the character is composed of multiple objects: a hair object, a main character object, a conforming clothing object, etc.
So how can we use that coverage to convert the matte we generated from the Object ID to be anti-aliased? Well, we simply blend the two images together to somehow “add” the AA’d edges to the jagged Object ID edges.
But you can see that we only want the edges that outline the overall character, not each object that makes up the character. So we somehow need to get rid of all the edges inside of the character boundaries so we’re only left with a matte for the entire character, hair, clothing, and all.
Well, it turns out to be relatively easy if you put your thinking cap on…
You have a perfectly good Object ID pass that defines the character. So why not shrink it a bit (AKA, “erode” it), and use that shrunken image to stamp out all of the edges inside the character outline? Just shrink the Object ID pass by 2 or 3 pixels, use it to stamp the interior edges out of the coverage pass, and voila, you have some AA’d edges that you can combine with your Object ID matte.
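Here's a rough numpy/scipy sketch of one plausible reading of that recipe; the exact combine step is my guess, and as I describe later, you tune the erode amount until it does what you want:

```python
import numpy as np
from scipy import ndimage

def aa_matte(id_matte, coverage, erode_px=2):
    """Erode the Object ID matte, stamp the interior edges out of the
    coverage pass, and combine. id_matte and coverage are float (H, W)
    images in 0..1; erode_px is how many pixels to shrink the matte.
    """
    # Shrink ("erode") the binary matte by a couple of pixels
    core = ndimage.binary_erosion(id_matte > 0.5, iterations=erode_px)

    # Stamp out the AA'd edges *inside* the character outline, leaving
    # the outer silhouette edges from the coverage pass. (If the coverage
    # pass also has edges from other scene objects, you'd additionally
    # multiply by a slightly dilated id_matte to confine it.)
    silhouette = coverage * (1.0 - core)

    # Solid eroded interior plus the AA'd silhouette edge band
    return np.clip(core.astype(np.float64) + silhouette, 0.0, 1.0)
```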
Now, this isn’t always a perfect solution. For example, if there are “transparent” areas in the actual alpha (like the partially transparent character hair in the example), those won’t show up in your Object ID pass, even after you AA it.
Here's a zoomed-in comparison of first the "real" alpha, and then the Object ID generated alpha. You can see that the Object ID alpha fails in the semi-transparent areas, while it's reasonably accurate with the opaque edges.
And here's a final composite using an alpha generated from an Object ID pass that received AA edges from a Coverage pass. Note the semi-transparent hair areas are a problem.
But it's one method of extracting a character from a complex scene with ambient and bounce lighting received from surrounding objects, without having to do a separate render.
Now while this might seem like the long way around, keep in mind that once you have the compositing "tree" figured out, you can re-use it for any other image/sequence with few, if any, modifications.
very nice explanation Joe, thanks!
+1
I kinda doubt anyone here really cares about much of this, but for completeness I realized I left out an image comparing the AA'd edges in a Coverage pass compared to the non-AA'd edges in an Object ID pass. Here's that image, showing a zoomed-in portion of the passes around the area of the character's backside, with the bottom of the character's hair shown at the top of the image. Note the Coverage is on the left, and the Object ID is on the right.
Note that neither includes any "transparency" information for the partially transparent hair.
And as a small example of the technique of rendering with a lot of ambient/GI and then varying it later in post (after you extract the image using the tools I just mentioned), here's a simple GIF showing the wide range of ambient bounce light you can dial in to your character merely by sliding a slider.
And with a little real-time tweaking of the ambient levels in my compositing app, as well as some 2D "relighting", I placed my character in a different environment.
It certainly needs some more work, but hopefully the point is clear.
Wishing I had more time... I want to re-read what Joe just posted.
I found this open source compositing software https://natron.inria.fr/
Wow, I actually got a question about this Coverage and Object ID pass stuff. Looks like some folks in another forum are following this.
I was a bit unclear about just how you can take the Object ID pass and use it to 'stamp out' the unwanted AA'd lines/edges in the coverage pass. Well, here's a simple animation showing the concept:
http://youtu.be/IlWFM9w0eeo
Basically, you have unwanted AA'd lines inside the outline of the character you want to extract. In this case they're due to the overlapping edges of the hair, clothing, and bracelet objects.
So all you need to do is take the Object ID matte image and "erode" it (i.e., make it smaller), and then use that to stamp out the inside edges.
Here I show the Object ID matte, the raw coverage pass, and then the result of eroding the Object ID matte by 2 pixels and using that to stamp out the coverage pass.
In the animation, I've just animated a range of "erode" values (in pixels) to show how you can just use a slider to determine how much is the right amount to erode the ID pass so that it does what you want.
And for those who might enjoy some "Fun with Erosion", here's an animation showing how the actual ID pass looks when it's being eroded by a range of values.
Just slide the erosion slider in real time until you get the right answer... :) :)
For those who might be dabbling in compositing using Carrara render passes, I have a caution for you. And I think I alluded to it briefly earlier in this thread....
Sometimes the render passes that Carrara produces are garbage. Often they're fine, but sometimes you get passes that are just wrong. I'm not sure why: maybe it's a version thing, maybe it's related to the age of the scene, or maybe it's one of the render features or settings that causes the render passes to barf. Just don't be surprised if some are wrong.
Here's a recent example of a Diffuse pass that is wrong, along with the final render for comparison. And when you re-compose the final render from the passes, the result is wrong. Even the .psd output, which generally does a nice job of re-composing the final render by correctly blending all the pass layers, is wrong. It appears that the floor/carpet and wall textures are just blown out or something, and the carpet texture that shows in the final render isn't in the diffuse, or any other, pass.
Just something to be aware of.
Some fun tidbits in here. I'm curious, though, why you're not simply rendering out your Alpha as a PNG. I do all of my individual 3D elements as PNG Sequences.
Well, because back on page 6 of this thread I decided to toss a coin and choose between the 32-bit formats Carrara can save renders in: Targa, TIFF, PNG, and Photoshop. And the coin toss chose TGA.
Does it really matter to you which format? Generally the alpha becomes integrated into the fourth channel of the image...RGBA.
I guess I'm not sure what point you're trying to make.
Just a quick note here: as far as I know, Carrara does not have real 32-bit output regardless of the format it saves in; it's the same "trick" as the "ChangeDepth" node in Fusion.
You were using a green screen earlier on, and then you were rendering out a flat black background to protect the render from changing the layer style. Those aren't things you have to do when you render as a PNG. You just render and lay it on your footage and then get into compositing from there. By changing the base render's layer style, you're effectively changing how the subsequent layers will blend with it, ya know?
I don't know, there was a lot in this thread. I read most of it, but maybe you did something differently than what I'm describing. The two formulas seemed to be chroma key or layer change with a bottom black layer to preserve the render instead of just rendering the PNG.
I almost always render in PNG format, but occasionally I have a problem with the alpha channel; it does not work properly in Fusion for some reason. In that case the only solution is slapping a green (or blue) background in Carrara and keying it out in post.
Why are you using .png? It's for the internet...
There are better file formats with an alpha channel: .tga (Targa), .tiff, .psd (Photoshop).
https://en.wikipedia.org/wiki/Portable_Network_Graphics
You know, I'm a photographer, and I asked many pros why exactly TGA and TIFF (hell, I have a camera which can natively save in TIFF) are better, and none of them ever gave me a straight answer.
I suspect there is none.
PNG file format is just fine ...
Lossless compression is a good thing, and it is what it is: LOSSLESS, which means, well, no information is lost, lol. And it saves on disk space, which is very important for animations. So yeah, not just for the internet; the fact that browsers can render it is only a plus in my book.
And this is exactly why I originally just tossed a coin and chose targa.
Guaranteed, especially in hobbyist online forums, the discussion of image file formats will wander off into a completely irrelevant discussion of stuff that, in the end, doesn't really matter to the average hobbyist who is using the images for their own use. Or posting it on the internet. Some people spend a whole lot of effort finding the perfect, lossless image format and render with perfect quality, just to post it on the internet in a 720x480 jpg.
It's a bit like arguments over CPUs. Or which renderer is "more realistic". Or which app is "better". And so on.
For the vast majority here, it doesn't matter. Use whatever works for what you're doing.
Not sure exactly what you're referring to, especially with "layer style".
It's pretty straightforward, IMO....when you render an alpha it generally becomes a 4th channel in your image. R, G, B, and then alpha becomes the 4th ("A") channel, which is carried along with the image.
Then, when you take it into a 2D app for compositing (Photoshop, Fusion, Nuke, whatever) the app usually recognizes that A channel and makes your life easier by automatically using it to matte your image. Or you can tell the app not to.
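For example, in Python with PIL/numpy that 4th channel is right there in the array ("render.png" is just a placeholder filename):

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("render.png").convert("RGBA"), dtype=np.float64) / 255.0
rgb, alpha = img[..., :3], img[..., 3]   # R, G, B, plus the 4th ("A") channel

# What the comp app does automatically when it "recognizes" the alpha:
matted = rgb * alpha[..., np.newaxis]
```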
The file format (TGA, PSD, TIFF, PNG, EXR, whatever) becomes fairly irrelevant.
Or maybe I'm missing your point.
Again, that's where these discussions get bogged down. PNG is a fine format for stuff other than "internet". Just because it was originally designed as an improvement to GIF doesn't mean it's not widely used elsewhere. Cuz it is.
I know people love to come up with these arguments about what is "better", but often there is no "better", only "different".
Yeah, I'm not sure. It could be that they all function the same way, but I thought only PNG rendered just the primary image (the alpha'd figure).
When you render in PNG, you're only choosing to render the figure, ya know? Like, not the black background. Maybe you can do that with those other file formats as well? By rendering just the actual rendered image of the figure, you're not forced to manipulate layer styles for blending/compositing, until you get into actual lighting, color matching/correcting and ambient effects.