Nonsquare pixels for Studio
In 3Delight, nonsquare pixels are simple to support: the pixel aspect ratio is part of the Format call in the RenderMan spec, and it works the same way it does in Carrara. I'm confused as to why we don't already have this implemented. For example, to render something for widescreen DVD, you'd put
Format 720 480 1.2
in the beginning of the RIB file.
I use this to great effect to squeeze widescreen images into a standard resolution, as my plugin of choice (Cinelook Broadcast) has a limit on the dimensions of input images.
As in Carrara, you could make the aspect ratio of the production frame (which I can't seem to make show up in 4.x) take that horizontal pixel aspect ratio into account for the final output, and the values for the ScreenWindow call to 3Delight could be derived from the multiplied result (i.e. for the example above, ScreenWindow would be calculated from 864 x 480 rather than 720 x 480).
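To illustrate that derivation, here's a minimal Python sketch (the helper name is mine, not anything from Studio or 3Delight) of the default ScreenWindow computation the RenderMan spec describes, where the frame aspect ratio folds in the pixel aspect ratio:

```python
# Sketch of deriving a default ScreenWindow from a Format call, following
# the RenderMan Interface convention: frame aspect = xres * pixel_aspect / yres.

def screen_window(xres, yres, pixel_aspect=1.0):
    """Return (left, right, bottom, top) for the default ScreenWindow."""
    frame_aspect = (xres * pixel_aspect) / yres
    if frame_aspect >= 1.0:
        # Landscape: the window spans +/- frame_aspect horizontally.
        return (-frame_aspect, frame_aspect, -1.0, 1.0)
    # Portrait: the window spans +/- 1/frame_aspect vertically.
    return (-1.0, 1.0, -1.0 / frame_aspect, 1.0 / frame_aspect)

# Format 720 480 1.2 is effectively an 864 x 480 frame:
print(screen_window(720, 480, 1.2))  # roughly (-1.8, 1.8, -1.0, 1.0)
```

So the renderer already frames the shot for the stretched 864 x 480 result; Studio would only need to pass the pixel aspect ratio through.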
I'm curious to hear others' thoughts on this as well.
Thank you :)
Comments
I'm not exactly sure what you mean by nonsquare pixels. Could you perhaps show a visual comparison?
I'm guessing it is about this: http://en.wikipedia.org/wiki/Interlaced_video
The Amiga used to be a dominant force in video editing in the early '90s, because it was able to output interlaced, TV-quality material.
Not referring to interlaced video at all; sorry for not posting examples earlier.
I am attaching two images. These are both resized down 50% from the original size, in keeping with the 600-pixel limit (which I'm assuming still applies on the new forum).
They both contain the same scene information, but one was originally 1037 x 576, which is the same aspect ratio (shape) as a widescreen DVD image. (You can actually specify 1036.8 pixels in RenderMan/3Delight, but since programs aren't really meant for fractional pixels, there's no real benefit I can see over just using 1037.)
The software I use for various effects requires my input image or video to be no more than 768 x 576. So what I do is lower the image's width, but make each pixel cover a wider horizontal range, so that the full widescreen scene still fits in the 768-pixel-wide image. In this case, each pixel carries the information of 1.35 pixels horizontally. While there is a bit of information loss, it's not terrible (for my purposes).
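The squeeze factor is just the ratio of the two widths. A tiny sketch (the helper name is hypothetical, just to show the arithmetic):

```python
# Given the frame width you want (in square pixels) and the maximum width
# your effects software accepts, compute the pixel aspect ratio each
# stored pixel has to carry.

def squeeze_pixel_aspect(full_width, stored_width):
    return full_width / stored_width

# Widescreen-DVD-shaped frame squeezed into a 768-wide image:
print(squeeze_pixel_aspect(1036.8, 768))  # about 1.35
```

That's where the 1.35 figure above comes from; on playback or export, multiplying the width back out by the same factor restores the widescreen shape.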
This is similar to how some of the first consumer-level 1080P cameras functioned. The sensor was 1440x1080 pixels, but those pixels were 1.333x as wide as they were tall, effectively capturing the same range of information as a 1920x1080 sensor with square pixels.
Going back further, physical anamorphic lenses work the same way. If you see one that is a 1.333x lens, then it will turn a standard 4:3 image into 16:9, or a 16:9 image into roughly 21.33:9. It works the same on cameras (going in) as it does on projectors (going out).
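Those ratios can be checked with exact arithmetic, since the 1.333 factor is really 4/3; a quick sketch using Python's fractions module:

```python
from fractions import Fraction

# A 1.333x (i.e. 4/3) anamorphic squeeze multiplies the horizontal
# extent of the image by 4/3.
squeeze = Fraction(4, 3)

print(Fraction(4, 3) * squeeze)   # 16/9  (4:3 becomes 16:9)
print(Fraction(16, 9) * squeeze)  # 64/27 (16:9 becomes about 21.33:9)
```

64/27 is about 2.37:1, the familiar wide cinema shape, which matches the 21.33:9 figure above.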