Worth it to delve into render nodes?
Good afternoon.
I'm running Carrara 8 Pro. Technical animations with not many lights or much scenery. Simple.
I have a good machine to render my Carrara animations: a Gateway i7 at 3 GHz with 8 GB RAM.
With current settings a two-minute clip may take 3-6 hours to render. Not a big deal, as I usually leave it running overnight.
I also own a Mac mini (Intel Core Duo, 2 GHz, 8 GB RAM, running OS X 10.6) and a Dell laptop (Core Duo, 1.6 GHz, 4 GB RAM, running Windows XP).
With these machines being slower than my main machine, would I reduce my render time appreciably by using either as a render node?
I have been using a lot of "Aura", which I noticed can really slow things down.
Should I just be happy I have such a fast main machine and leave it at that?
Is setting up and running render nodes a hassle? Is it stable?
I assume Mac and PC can be mixed?
Thanks !!
8068
Comments
I know in Carrara 7.2 Pro the Aura rendering isn't done by the nodes. It's a post effect that's applied by the host machine, and if I recall correctly it isn't multi-threaded, so only one processor/core does the aura render. There are a couple of other functions in Carrara that aren't multi-threaded either, but I don't remember what they all are.
As to rendering with a mixed environment, whether it be speed of machines or OSes, it depends. Your mileage may vary. Give it a shot. My main rig is a PPC G5, I have an older G4, and sometimes I use an Intel iMac (Core Duo, I think) with different flavors of OS X. I also used to have an older Dell POS (not that Dell is a POS, just the machine I had), and had nodes working on all of them. Carrara 8 has some marked improvements in how it lets you manage nodes compared to earlier versions, so slower machines may not be as much of an issue. Since I don't have C8, maybe some others will be able to give more specific advice.
It will very likely be worth it, especially if they are all on the network already. It certainly can't hurt to do some tests.
When you render now and see the little tiles that represent each of your cores rendering parts of the image, adding network computers just adds more of those tiles. They run at their own speed, and in some cases they may run much slower than the host (or vice versa). The cool thing is that you have control over the size of those tiles, and here's how that helps even if the network computers are slow:
Example: let's say you have an eight-core host (two quad-core processors, say). The render window would display 8 little colored tiles working away on your image; let's say on average they take 1 minute each. If you add another computer to the network, say a dual core, it adds two more rendering tiles; say those take 2 minutes each to complete. Your host would render 16 tiles in the time the networked computer takes to render two. That's still an improvement. And since you can set the size of the tiles, you could roughly block your image into 18-ish tiles, so that both computers essentially finish rendering at the same time.
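Just to make that concrete, here's a quick back-of-envelope in Python, using the hypothetical speeds above (not benchmarks):

```python
# Illustrative only: tile throughput for a fast host plus a slower node,
# using the made-up numbers from the example above.

host_cores = 8          # tiles rendering at once on the host
host_tile_min = 1.0     # minutes per tile on the host
node_cores = 2          # tiles rendering at once on the network node
node_tile_min = 2.0     # minutes per tile on the node

host_rate = host_cores / host_tile_min   # 8 tiles/min
node_rate = node_cores / node_tile_min   # 1 tile/min

# Over the 2 minutes the node needs for its first pair of tiles,
# the host finishes 16 tiles and the node finishes 2, so splitting
# the image into ~18 tiles lets both machines finish together.
window = 2.0  # minutes
print(host_rate * window)                 # 16.0 tiles on the host
print(node_rate * window)                 # 2.0 tiles on the node
print((host_rate + node_rate) * window)   # 18.0 tiles total
```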
dbutcher
Thanks for the response. I'll give it a try.
(now that I've discovered subsurface scattering!)
8068
With several computers of differing speeds, be sure to reduce your tile size (at the bottom of the render settings, where you find the toggle for network rendering). Smaller tiles spread the work over the various machines more evenly; larger tiles can slow you down while you wait for one big square on a slower computer to finish...
In my experience (Mac), Rendernode seems to work best when the host machine has an SSD or hybrid drive. I am using a mixed OS X environment (10.6, 10.7) and am able to get 2 nodes working. Still tinkering to get the 3rd node, a MacBook Air with 10.8, working; it shows up but doesn't render, ugh. Although C8.1 itself is useless in Lion and Mountain Lion, thankfully C8.1 Rendernode does work. Sometimes the 2 nodes don't render at all, or take a very long time to kick in. Very temperamental. Whatever you do, don't upgrade that Mac mini from Snow Leopard, 'cuz you will run into trouble.
Curious to hear your results using PC and Mac, so let us know. Based on my own experience, each node reduces my animation render times by about 25%. So something that takes 5 hours solo on my host machine (MBP i7, 2.2 GHz quad-core) takes about 2.5 hours with 2 slower dual-core nodes added.
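That works out like this (a rough sketch of my own numbers; the linear reduction obviously won't hold for many nodes):

```python
# Rough model: each node shaves ~25% off the solo render time,
# per my experience above. This is anecdotal, not a general rule,
# and clearly can't extend linearly past a few nodes.
solo_hours = 5.0
reduction_per_node = 0.25
nodes = 2

estimated_hours = solo_hours * (1 - reduction_per_node * nodes)
print(estimated_hours)  # 2.5 hours with 2 nodes
```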
3DPixLA
Sounds well worth it.
I'll give it a shot,
Thanks
B
I don't have any of those, and I've been using nodes since C5 in a Mac environment. The long load times for the nodes could be what's in the scene. I've found that scenes which use mainly procedural shaders load very fast, and scenes that use image maps and movies within them take much longer to load.
Forgot to mention another trick if you have multiple machines with Carrara installed and don't want to mess with nodes...
Say you have a 30-second animation and 3 machines. Set different in/out times, 10 seconds on each machine, and reassemble the clips using your video editing tool.
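Something like this, assuming 24 fps and three machines (the machine names are just placeholders):

```python
# Illustrative frame-range split for a 30-second clip at 24 fps
# across three machines, per the trick above. Names are hypothetical.
fps = 24
total_seconds = 30
machines = ["desktop", "mac-mini", "laptop"]

total_frames = fps * total_seconds        # 720 frames
chunk = total_frames // len(machines)     # 240 frames per machine

for i, name in enumerate(machines):
    start = i * chunk
    end = total_frames - 1 if i == len(machines) - 1 else start + chunk - 1
    print(f"{name}: render frames {start}-{end}")
# Render each range in that machine's copy of Carrara, then
# reassemble the three clips in your video editor.
```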
So you're saying you have a 2-minute clip (which comes out to 2880 frames rendered at 24 fps), and it takes you between 3 and 6 hours to render them (which is between 10,800 and 21,600 seconds).
That means each frame is rendering in somewhere between 4 and 8 seconds? Unless my math is goofy...
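A quick sanity check of that arithmetic:

```python
# Restating the per-frame math above.
fps = 24
clip_seconds = 2 * 60            # a 2-minute clip
frames = fps * clip_seconds      # 2880 frames

for hours in (3, 6):
    print(hours * 3600 / frames)  # 3.75 and 7.5 seconds per frame
```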
I dunno, it doesn't seem like rendering with nodes is gonna do much for you, especially with Carrara's weird "share the frame" approach, rather than assigning complete frames to each of the nodes. It seems like the nodes won't have time to get set up for each frame, or something like that. I very, very rarely use the node rendering feature since I don't like it very much. Maybe if each frame takes a very long time it will benefit you, but for short render times like these I don't think so.
You can try it, but I'm not sure you're gonna gain much, especially if you leave them running overnight. If you hit a problem, you'll wake up to an unfinished render in the morning.
Why not just leave good enough alone?