Comments
Hm I should try that soon.
The most important studios may prefer to work with technology that follows "industry standards".
Unfortunately, the companies involved in GPU rendering have not yet reached the point of cooperating with all major partners to create standards that remain unchanged for a full product cycle, which would ensure working hardware and software combinations at all times.
- - -
Standards, by definition, should remain unchanged. If the technology advances, then new standards can be discussed and agreed upon in cooperation with all industry partners.
As long as this procedure is not adopted by the GPU rendering industry major studios may prefer to stick to CPU rendering.
What remains are the hobbyists and freelancers who, for lack of an affordable alternative, are forced to just deal with the current situation.
- - -
Thanks! I will report some progress in another message. I started buying parts for a system using the used/surplus E5's, and then saw the article you mentioned. I had two of the same motherboard (ASRock Rack, C602-based), two pairs of E5-2670, the same power supply and the same memory (DDR3 non-ECC, unbuffered). So, I felt that I had picked the right parts. I assembled both and both booted the first time. Started installing Win 7 Pro 64-bit. After only a few boots, the second one started to hang on cold boot with POST code 60 (CPU or memory problem). It would always boot on the second try, so I went ahead and installed Windows 7 Pro 64-bit, got the update roll-ups, then updated to Windows 10. ASRock Rack customer support suggested a bent CPU pin on the motherboard, but I have always seen that show up immediately, with no delay in problems appearing. After a couple of weeks, the first machine also started showing the same problem. So I swapped the new non-ECC memory for registered ECC server RAM (used/surplus) in the second machine. It went from showing POST code 60 twenty times in a row to no errors twenty boots in a row. So we will see if this was the problem. More shortly.
Thanks for this info too!
I know that I am taking a chance with the used server RAM (the CPUs too, for that matter), but I am buying from top-rated eBay vendors who deal in surplus server parts. I like to tinker, and I can always get some new server RAM to replace what I have if problems crop up (at least gradually, as I can afford it).
I added my initial issues in the post above. So here is the progress so far:
2 x ASRock Rack MB (C602) and 1 x SuperMicro MB, all with paired E5-2670 V1 and 64 GB DDR3 server RAM. MBs and power supplies new, CPUs and RAM used.
2 x SuperMicro X8DTT-F LGA-1366 blade server boards with paired X5660 CPUs and 24 GB DDR3 server RAM. All components used, including proprietary pin-out power supplies.
Installed Windows 7 Pro 64-bit and the roll-up updates, then upgraded to Windows 10 (with one day to spare for my "free" upgrade). Ran LuxMark 3.1 (complex scene/lobby) as a burn-in and performance test.
I noted my problems with the non-server RAM on the ASRock MB in an earlier post.
The surprises:
- With server ram installed, all 5 motherboards booted first time (never expected that!), and no problems so far.
- The old X8 MB setups had LuxMark 3.1/complex scene scores over 600, which is much better than I expected.
Then I had to put this all aside for a while to catch up with real life : ) as I have most thoroughly blown my budget of time and money for a good while.
Once I get back to this, I will report any problems or interesting results. I will shortly have a set of new server RAM for troubleshooting if problems show up. Kendall's info convinced me that I should be better prepared. I can always use the RAM, and I have a spare MB.
Any questions I can help with, feel free to send.
If anyone interested in small renderfarms lives near Baton Rouge, that is where I am.
Thanks again to everyone for all the info and encouragement!
Greymom
Wow, if I'm doing my math right you've got 3 machines with 32 render cores each and 2 with 24 render cores each, for a total of 144 rendering cores (!) What a render network! Must be a blast to render with!
Well, I'm hoping it will be when I get the time. Only two of the systems are in cases, the rest are on trays (big time constraint to get my "free" Win 10 upgrade). Still a lot of work to do, and I will get back to it as soon as I get the house and yard back under control. "The oxen are slow, but the Earth is patient" (High Road to China). At least it will probably be winter by then, and I can also use the system to heat my house! But anything will be an improvement. My ancient Q6600 machine (anyone remember ABIT P35 motherboards?) has a LuxMark score of like 80, and that is what I did all my previous rendering on.
Just a caveat for building a dual-E5 machine - most of the new (non-blade) motherboards are SSI-EEB 12" x 13" form factor (same as XL-ATX but the mounting holes are different). The EEB tower cases are hard to find for a decent price (NewEgg has some), and my dogs could live in one. I think I bought the last two Raidmax Vampire cases available, but the boards fit and they look cool. The X8 blades only fit in the special 1U/2U supermicro server modules, so I will have to modify an old jumbo tower case to fit two blades and the power supply (the PS is for two blades).
Greymom
...I'd actually look for the next-generation Titan X to have 16 GB instead of 24. GTX technology is primarily marketed towards the gaming sector rather than professional CGI production. I'd also look for the Quadro line to be the first to get HBM2 memory.
Can two Xeon E5-2670s (8 cores each, clocked at 2.6 GHz, 20 MB L3 cache each) compete with one GTX 980 Ti when rendering with Iray?
Only if you blow out VRAM. Iray is heavily GPU-optimized, with little optimization on the CPU computing side. 32 CPU threads are not enough to compete with 2,800+ CUDA cores if everything fits in VRAM.
Kendall
Optimize your scene, not your hardware. There are likely MANY MANY MANY props and surfaces that are using 4K x 4K textures. These will chew up your VRAM. For any that are not in direct focus or that are not clearly visible, lower the texture size by as much as you can while still looking "good". The texture atlas is your friend. Turn off any clothed/occluded parts of your figures -- Inner Mouth, teeth, tongue, feet, thighs, etc.
EDIT: On props that are far away from the camera, or out of focus, sometimes you can completely remove the displacement/bump/normal maps and save yourself many megabytes of VRAM per map. This is a trial-and-error process, though; if you remove the displacement/bump/normal maps and it looks bad, you'll want to restore the surface. Also, keep your maps all the same size if you can.
Kendall
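To make Kendall's texture-size advice concrete, here is a rough Python sketch of batch-downscaling 4K maps to 2K before assigning them to out-of-focus surfaces. It assumes the Pillow library and hypothetical folder names; it is not part of Daz Studio, just one way to prepare smaller copies of the maps.

```python
# Minimal sketch, assuming Pillow is installed (pip install pillow).
# The "textures_4k"/"textures_2k" folder names are hypothetical.
from pathlib import Path
from PIL import Image

SRC = Path("textures_4k")   # original 4096x4096 maps
DST = Path("textures_2k")   # downscaled copies to assign in Daz Studio
DST.mkdir(exist_ok=True)

for tex in SRC.glob("*.jpg"):
    img = Image.open(tex)
    w, h = img.size
    # Halving each dimension cuts the uncompressed footprint to a quarter.
    small = img.resize((w // 2, h // 2), Image.LANCZOS)
    small.save(DST / tex.name, quality=92)
    print(f"{tex.name}: {w}x{h} -> {w // 2}x{h // 2}")
```

For a figure that is out of focus or well back from the camera, the 2K copies will usually look the same in the final render while using a quarter of the VRAM per map.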
Also, going along with what Kendall said...use procedurally generated/noise based textures if you can. They consume a lot less memory.
Iray uses about 3 bytes/pixel. So a 4k x 4k image will use about 50 MB. While a 2k x 2k one will use about 12 MB and a little tiling 512 x 512 'procedural' image will use under 1 MB. And that's per MAP...so think about it...1 diffuse, 1 bump, 1 specular/roughness/whatever, 1 SSS per surface and for your average figure (G3) there are 4 'main' maps...some have more than 1 surface, but are on the same map...Face, Lips, Ears, Eye Socket, for example, Then the eye parts and mouth parts are two more...so 6 total (and you can add 1 to 3 more if the makeup/eyecolor/nail color maps don't replace the originals!). So it's entirely possible that a single textured figure...not counting clothes and hair can use up to half a GB for just the textures!
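To put numbers on that, here is a small Python sketch of the same estimate; the ~3 bytes/pixel figure is the one quoted above, and actual Iray memory use will vary.

```python
# Rough per-map VRAM estimate using the ~3 bytes/pixel figure quoted above.
BYTES_PER_PIXEL = 3

def map_mb(width, height):
    """Approximate in-VRAM size of one uncompressed texture map, in MB."""
    return width * height * BYTES_PER_PIXEL / (1024 * 1024)

for size in (4096, 2048, 512):
    print(f"{size} x {size}: ~{map_mb(size, size):.1f} MB per map")
# 4096: ~48 MB, 2048: ~12 MB, 512: under 1 MB -- matching the figures above.

# Ten or so 4K maps on a single figure already approaches half a gigabyte:
print(f"10 maps at 4K: ~{10 * map_mb(4096, 4096) / 1024:.2f} GB")
```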
All of what was said above.
But there are also other, less time-consuming things you can do (and that I did regularly before I got a Titan X):
1) Hide everything that is out of view and not intended to affect the light (like walls). This includes things like unseen clothing (underwear) that is still loaded.
2) Reduce the SubD level of figures/objects in the background.
3) Hide some of your figures (e.g. those in one area of the picture), render, then hide the others, unhide the previous ones, and render again. Combine the pictures in Photoshop/GIMP by layering (see the compositing sketch below).
4) When you start tinkering with textures, the easiest starting point is often to remove the (often big) normal maps of figures in the background. You can save a lot of VRAM here without a significant impact on the picture.
General advice: Get Sim Tenero's Iray Memory Assistant. Even though it is not 100% accurate due to DAZ/Iray limitations, it is very helpful when optimizing your scene.
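For item 3, here is a minimal compositing sketch in Python; it assumes the Pillow library and two hypothetical same-size PNG renders saved with transparent backgrounds, and is just one way to do the layering outside Photoshop/GIMP.

```python
# Minimal compositing sketch, assuming Pillow and two same-size PNG renders:
# "pass_background.png" rendered with the foreground figures hidden, and
# "pass_figures.png" rendered with only those figures visible (alpha kept).
from PIL import Image

background = Image.open("pass_background.png").convert("RGBA")
figures = Image.open("pass_figures.png").convert("RGBA")

# alpha_composite stacks the figure pass over the background pass,
# the same way layers would be stacked in Photoshop/GIMP.
combined = Image.alpha_composite(background, figures)
combined.save("combined.png")
```

As noted further down the thread, shadows cast between the two passes won't line up automatically, so some touch-up in post may still be needed.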
No, but it will be considerably faster...probably 3x to close to 4x faster, but not quite exactly 4x. Or using that example...11 mins instead of 10. There are some processes that are done on a single core (like texture prep) that won't gain any speed no matter how many cores you throw at them, so adding cores won't cut that part of the overall time at all, but for the actual render process itself, yeah...a lot faster.
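That single-core bottleneck is essentially Amdahl's law: the serial portion caps the overall speedup no matter how many cores you add. A quick illustrative Python sketch, using made-up numbers rather than anything measured from the posts above:

```python
# Amdahl's law: overall speedup when only part of the job parallelizes.
def speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Hypothetical job where 90% of the time is the render itself
# and 10% is single-core work such as texture prep / scene load.
for cores in (4, 8, 16, 32):
    print(f"{cores:>2} cores: {speedup(0.90, cores):.2f}x faster")
# Even with 32 cores the overall speedup stays under 10x,
# because the serial 10% never gets any shorter.
```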
...hiding parts of a set or prop is contingent on how the mesh was set up. For example, hiding the mesh of, say, a structure in the background of a large set can also affect other portions of the mesh, like the ground plane or adjacent buildings/props.
...layering/compositing has one weak point as shadows may affect several render "layers". This can mean they have to be "painted in" by hand in post.
..."tinkering" with textures in a 2D programme can be incredibly tedious and even become a "diminishing returns" situation compared to the resultant render time, especially in a very busy scene.
...please excuse any typos as the spell check as you type is borked once again.
I feel one of the biggest hobbyist "gotchas", from a cost standpoint, is being stuck with Daz Studio and no Linux version. If you want serious processing, nothing beats Linux. It is the platform most "real" render farms use, not Windows. It's not just about the cost of a license for each machine, though with a thousand PCs running that alone would be very expensive; it is the scalability factor. Nvidia is for smaller shops and hobbyists unless you can pay for their render-farm-class engines. Nvidia does have excellent Linux support, although that does nothing to help Daz Studio users, at least not without jumping through a lot of hoops to get your scenes from Windows to Linux to use that power. Daz appears totally uninterested in considering this advancement.
But tons of CUDA cores in a Linux box make for some powerful calculating machines. This is where the Nvidia architecture shines brightest: in a farm environment full of CUDA cores. The actual CPU cores are not all that relevant, since they are not very involved in rendering; they are there for management, housekeeping, and keeping all the PCs talking to each other.
When relying on the Nvidia cards for the actual rendering, those farms are subject to the same VRAM limitations you and I are: they can only hold a scene that fits in the RAM of the smallest of the cards. You could have a farm with 1,000 PCs, all with, say, 8 GB cards, and the entire farm would still only process a scene of that size.
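A tiny illustrative sketch of that limit; the card counts, sizes, and scene footprint below are made-up numbers:

```python
# Illustrative only: with GPU rendering, every node must hold the whole scene,
# so the farm's usable scene size is set by its smallest card, not by the total.
farm_cards_gb = [8] * 1000           # e.g. 1,000 nodes with 8 GB cards each
scene_gb = 9.5                       # hypothetical scene footprint

aggregate = sum(farm_cards_gb)       # 8,000 GB in total across the farm...
limit = min(farm_cards_gb)           # ...but only 8 GB usable per scene

print(f"Aggregate VRAM: {aggregate} GB, per-scene limit: {limit} GB")
print("Fits on the farm" if scene_gb <= limit else "Too big for every node")
```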
Studios often write their own software, as that is the only way to get the speed and throughput they need. They won't be running Iray because of the aforementioned limitations.
A custom farm for someone like Pixar does not rely on supercomputers; it relies on a TON of pretty normal boxes with Tesla or similar cards, none of which require monitors except perhaps a manager's console. Linux lends itself to massively parallel processing this way, which is how "poor" labs (not just graphics, but weather, chemistry, genomics, etc.) build their "super" computers: Linux with a lot of RAM and a lot of CUDA cores.
You could have your own supercomputer in one box with Linux and a boatload of RAM for under a grand if you do it right.
https://web.stanford.edu/~hydrobay/lookat/gpumeister.html
I wish I had that!
- - -
...I agree, Linux is more elegant when it comes to compute performance; however, the one caveat is that there are just too many distros floating around for many software developers to mess with. You favour one version, and users of the others start crying "foul". With Windows and macOS there are only two individual development paths to keep track of, and they are relatively stable (well, in a sense).
Daz is a small company that does not have the development resources of an Autodesk or Adobe. Crikey, they haven't even bothered much with the other products they own (Bryce, Carrara, Hexagon) in years.
The issue I have with GPU rendering is the limited amount of VRAM for the cost compared to physical memory.
I can get 64 GB of physical memory for about the price of a standard GTX 1080 (it doesn't really matter if it is DDR3 or DDR4, as long as a 4-channel configuration is supported). I can get two 2.66 GHz 8-core hyperthreading Xeons for around $350, giving me 32 CPU threads. As I also work with software that does not natively support GPU rendering, going the way that most studios do suits my needs better, even for rendering in Iray. For one, with 64 GB of physical memory, I pretty much will not have to worry about the process dropping into much slower swap mode. That right there is a time savings. Second, as I tend to create fairly large-scale scenes a fair amount of the time, it would pretty much take the resources of a Titan Xp or even a Quadro P5000 to ensure the process remains in VRAM until it is completed. For $1,200 (the cost of that Titan Xp) I could have 128 GB of physical memory.
Basically, for a little more than the price of that Quadro P5000, I could have a pretty raging workstation that even includes a pair of 1070s for Daz (once prices finally settle down after the cryptomining rush bottoms out).
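Using only the rough figures from the post above (about $1,200 for a Titan Xp, which carries 12 GB of VRAM, versus the same spend on 128 GB of system memory), the dollars-per-gigabyte gap looks roughly like this; the prices are the poster's estimates, not current market data.

```python
# Rough $/GB comparison using the figures quoted in the post above.
titan_xp_price, titan_xp_vram_gb = 1200, 12     # ~$1,200 card with 12 GB VRAM
system_ram_price, system_ram_gb = 1200, 128     # same spend on system memory

print(f"VRAM:       ~${titan_xp_price / titan_xp_vram_gb:.0f} per GB")   # ~$100/GB
print(f"System RAM: ~${system_ram_price / system_ram_gb:.2f} per GB")    # ~$9.38/GB
```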
Honestly, you would be better off with a single GTX 1080 Ti over a pair of 1070 cards. You get almost double the CUDA cores, almost double the memory bandwidth, and 3 more GB of VRAM. Then there is the price: at non-inflated prices the GTX 1070 started just under $400, while the GTX 1080 Ti starts at $700. And if the scene fits in the 11 GB, it will render circles around the P5000.