Why do the most important studios use CPU for rendering?


Comments

  • gederix said:
    wolf359 said:

    "Optimization info is never enough, esp here. Very very very valuable tips."

    I render all of my final animations using Daz content exported to Maxon Cinema4D. This is an important reminder for me to check for those ridiculous 4K x 4K texture maps and res them down in Photoshop CS.
    I wish there was a script to batch process them by folder.

    Record your own action for the res-down part and then use it to batch process either a selection of images or a specific folder (File > Automate > Batch).

    Hm I should try that soon.
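
    For anyone who wants to skip Photoshop entirely, the folder-level batch downres is a few lines of Python with the Pillow library. This is only a minimal sketch of the idea (the folder names are hypothetical, and it writes copies rather than touching the originals):

        # Batch-downres every oversized texture in a folder (minimal sketch).
        # Requires Pillow: pip install pillow
        from pathlib import Path
        from PIL import Image

        SOURCE = Path("textures_4k")   # hypothetical input folder
        TARGET = Path("textures_2k")   # downsized copies land here
        MAX_SIDE = 2048                # res everything down to 2K at most

        TARGET.mkdir(exist_ok=True)
        for path in SOURCE.iterdir():
            if path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".tif", ".tiff"}:
                continue
            with Image.open(path) as img:
                if max(img.size) > MAX_SIDE:
                    img.thumbnail((MAX_SIDE, MAX_SIDE))  # in-place, keeps aspect ratio
                img.save(TARGET / path.name)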

  • linvanchene Posts: 1,382
    edited September 2016

    The most important studios may prefer to work with technology that follows "industry standards".

    Unfortunately, the companies involved in GPU rendering have not yet reached the point where they cooperate with all major partners to create standards that remain unchanged for a full product cycle, which would guarantee working hardware and software combinations at all times.

    Example:

    Cameras, video codecs, editing software, playback devices, broadcasting equipment, and monitors all need to follow the 4K and HDR standards the entertainment industry has agreed upon.

    Compare:

    http://www.whathifi.com/advice/hdr-tv-what-it-how-can-you-get-it

    If the big companies of the entertainment industry had not spent years preparing the launch of 4K and HDR, end users would now be faced with many different versions, none of which would work together.

     

    - - -

    Standards, by definition, should remain unchanged. If the technology advances, then new standards can be discussed and agreed upon in cooperation with all industry partners.

    As long as this procedure is not adopted by the GPU rendering industry, major studios may prefer to stick to CPU rendering.

    What remains are the hobbyists and freelancers who, for lack of an affordable alternative, are forced to just deal with the current situation.

    - - -

    Post edited by linvanchene on
  • Greymom Posts: 1,110
    Jonstark said:
    Greymom said:

    Thanks to everyone for all this vital information!!!

    Working now on a small farm based on LGA-2011 V1/V2 server boards with 2x E5-2670 V1 (8-core) + 64 GB DDR3 server RAM. Used pairs of E5-2670 V1 are going for $160 (vs. $3,100 in 2012), and E5-2650 pairs for as little as $88! Used server RAM is cheap too. This will allow me to run most any rendering engine, including VUE's RenderCows.

    I will wait a bit (need to save up money anyway) to decide on graphics boards. It looks like VUE 2016 will be adding a hybrid GPU/CPU mode as well.

     

    That's awesome, I've been drooling over the idea of going for an LGA-2011 setup, especially since I saw an article on building a 32-core render monster that would cost less than the price (at the time the article was written) of a single Haswell-E i7 8-core CPU: http://www.techspot.com/review/1155-affordable-dual-xeon-pc/   The setup pretty much destroyed every more modern consumer-grade machine they pitted it against. :)

     

    I looked into building one of these for myself, but a) I've never put together a computer before (the most I've done is tear them down, clean out fans, and swap out CPUs) and I'm not sure of my ability to do this, and b) even though the prices are way low, it was still just too pricey for me at this point. :)  Then I started looking around eBay, and there are lots of HP Z620 and Z820 workstations prebuilt with dual Xeon LGA-2011 (I'm sure there are comparable Dell and Lenovo dual LGA-2011 workstations too) that seem to go for between $400 and $600, which is less expensive but still a bit too pricey for me at the moment, when I can get prior-generation Xeon workstations that are still great for rendering for less. But if the trend continues and prices keep dropping, maybe in a couple of years those things will be going for $200-$300 for a complete 32-render-core server, ready to go, and I will lunge at it. :)

     

    I'm excited for your project, hope you share as it progresses. :)

    Thanks! I will report some progress in another message. I started buying parts for a system using the used/surplus E5s, and then saw the article you mentioned. I had two of the same motherboard (ASRock Rack, C602-based), two pairs of E5-2670, the same power supply, and the same memory (DDR3 non-ECC, unbuffered). So I felt that I had picked the right parts. I assembled both and both booted the first time. Started installing Win 7 Pro 64-bit. After only a few boots, the second one started to hang on cold boot with POST code 60 (CPU or memory problem). It would always boot on the second try, so I went ahead and installed Windows 7 Pro 64-bit, got the update roll-ups, then updated to Windows 10. ASRock Rack customer support suggested a bent CPU pin on the motherboard, but I have always seen that show up immediately, with no delay in problems appearing. After a couple of weeks, the first machine also started showing the same problem. So I swapped the new non-ECC memory for registered ECC server RAM (used/surplus) in the second machine. It went from showing POST code 60 twenty boots in a row to no errors twenty boots in a row. So we will see if this was the problem. More shortly.

     

  • Greymom Posts: 1,110

    There are several different types of error correcting RAM.  Some are more easily checked than others.

    I don't think the average desktop user realizes just how often memory errors happen. In most cases it doesn't matter, since an error is more likely in "unused" sections of memory than in sections that are constantly being read/written, so the desktop user never sees a difference (usually an error in active memory would precipitate a crash). However, servers and server OSes tend to use almost all of the available RAM, so errors are a much bigger concern. ECC RAM allows the system to "repair" or reset the error and continue -- most of the time. Sometimes the error is so bad that the system halts. If the chip develops a permanent error in either the RAM or the parity sections, the system will refuse to boot upon detection. There are times that POST doesn't detect the errors and it is left to the motherboard circuitry to catch the problem; in those cases, what occurs on a detected error is up to the OS.

    Worst case is that you get a machine that crashes randomly (hard to detect if running Windows, since it crashes more often anyway, but very easy to see when running Linux). Best case is that POST detects the problem and tells you which chip is failing. Running a memory test on ECC RAM is costly, and normally it is less expensive to just throw out a suspected bad chip than to risk losing critical data. It is these "suspected" chips that end up on the used market, along with chips pulled for preventive maintenance or upgrades. Caveat emptor.

    Kendall
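
    The "repair" described above comes from error-correcting codes: extra parity bits are stored alongside the data, and when the checks fail, the pattern of failures identifies the flipped bit. Real ECC DIMMs implement a wider SECDED code in hardware; a toy Hamming(7,4) sketch in Python (illustrative only) shows the mechanism:

        # Toy Hamming(7,4) code: 4 data bits protected by 3 parity bits.
        # Illustrative only -- real ECC RAM uses a wider SECDED code in hardware.
        def encode(d1, d2, d3, d4):
            p1 = d1 ^ d2 ^ d4
            p2 = d1 ^ d3 ^ d4
            p3 = d2 ^ d3 ^ d4
            return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

        def correct(c):
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # checks positions 1,3,5,7
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # checks positions 2,3,6,7
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # checks positions 4,5,6,7
            pos = s1 + 2 * s2 + 4 * s3            # syndrome = 1-based error position
            if pos:
                c[pos - 1] ^= 1                   # flip the bad bit back
            return c[2], c[4], c[5], c[6]         # recovered data bits

        word = encode(1, 0, 1, 1)
        word[4] ^= 1                              # simulate a single-bit memory error
        assert correct(word) == (1, 0, 1, 1)      # the error is found and repaired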

    Thanks for this info too! 

    I know that I am taking a chance with the used server RAM (the CPUs too, for that matter), but I am buying from top-rated eBay vendors who deal in surplus server parts. I like to tinker, and I can always get some new server RAM to replace what I have if problems crop up (at least gradually, as I can afford it).

    I added my initial issues in the post above.   So here is the progress so far:

    2 x Asrock Rack MB (C602) and 1 x SuperMicro MB, all with paired E5-2670 V1 and 64 GB DDR3 server ram.   MB and power supplies new, CPU and ram used.

    2 x SuperMicro X8DTT-F LGA-1366 blade server boards with paired X5660 CPU and 24 GB DDR3 server ram.   All components used, including proprietary pin-out power supplies.

    Installed Windows 7 Pro 64-bit and the roll-up updates, then upgraded to Windows 10 (with one day to spare for my "free" upgrade).   Ran LuxMark 3.1 (complex scene/lobby) as a burn-in and performance test.

    I noted my problems with the non-server RAM with the ASRock MB in an earlier post.

    The surprises:

    - With server ram installed, all 5 motherboards booted first time (never expected that!), and no problems so far. 

    - The old X8 MB setups had LuxMark 3.1/complex scene scores over 600, which is much better than I expected.

    Then I had to put this all aside for a while to catch up with real life : )  as I have most thoroughly blown my budget of time and money for a good while.

    Once I get back to this, I will report any problems or interesting results.   I will shortly have a set of new server RAM for troubleshooting if problems show up.  Kendall's info convinced me that I should be better prepared.  I can always use the RAM, and I have a spare MB.

    Any questions I can help with, feel free to send.

    If anyone interested in small renderfarms lives near Baton Rouge, that is where I am.

    Thanks again to everyone for all the info and encouragement!

    Greymom

     

  • Jonstark Posts: 2,738
    Greymom said:
    Jonstark said:
    Greymom said:
     
    Greymom said:

    2 x Asrock Rack MB (C602) and 1 x SuperMicro MB, all with paired E5-2670 V1 and 64 GB DDR3 server ram.   MB and power supplies new, CPU and ram used.

    2 x SuperMicro X8DTT-F LGA-1366 blade server boards with paired X5660 CPU and 24 GB DDR3 server ram.   All components used, including proprietary pin-out power supplies.

    Installed Windows 7 Pro 64-bit and the roll-up updates, then upgraded to Windows 10 (with one day to spare for my "free" upgrade).   Ran LuxMark 3.1 (complex scene/lobby) as a burn-in and performance test.

    I noted my problems with the non-server RAM with the ASRock MB in an earlier post.

    The surprises:

    - With server ram installed, all 5 motherboards booted first time (never expected that!), and no problems so far. 

    - The old X8 MB setups had LuxMark 3.1/complex scene scores over 600, which is much better than I expected.

    Then I had to put this all aside for a while to catch up with real life : )  as I have most thoroughly blown my budget of time and money for a good while.

    Once I get back to this, I will report any problems or interesting results.   I will shortly have a set of new server RAM for troubleshooting if problems show up.  Kendall's info convinced me that I should be better prepared.  I can always use the RAM, and I have a spare MB.

    Any questions I can help with, feel free to send.

    If anyone interested in small renderfarms lives near Baton Rouge, that is where I am.

    Thanks again to everyone for all the info and encouragement!

    Greymom

     

    Wow, if I'm doing my math right you've got 3 machines with 32 render cores each and 2 with 24 render cores each, for a total of 144 rendering cores (!)  What a render network!  Must be a blast to render with. :)

  • Greymom Posts: 1,110
    Jonstark said:
    Greymom said:
    Jonstark said:
    Greymom said:
     
    Greymom said:

    2 x Asrock Rack MB (C602) and 1 x SuperMicro MB, all with paired E5-2670 V1 and 64 GB DDR3 server ram.   MB and power supplies new, CPU and ram used.

    2 x SuperMicro X8DTT-F LGA-1366 blade server boards with paired X5660 CPU and 24 GB DDR3 server ram.   All components used, including proprietary pin-out power supplies.

    Installed Windows 7 Pro 64-bit and the roll-up updates, then upgraded to Windows 10 (with one day to spare for my "free" upgrade).   Ran LuxMark 3.1 (complex scene/lobby) as a burn-in and performance test.

    I noted my problems with the non-server RAM with the ASRock MB in an earlier post.

    The surprises:

    - With server ram installed, all 5 motherboards booted first time (never expected that!), and no problems so far. 

    - The old X8 MB setups had LuxMark 3.1/complex scene scores over 600, which is much better than I expected.

    Then I had to put this all aside for a while to catch up with real life : )  as I have most thoroughly blown my budget of time and money for a good while.

    Once I get back to this, I will report any problems or interesting results.   I will shortly have a set of new server RAM for troubleshooting if problems show up.  Kendall's info convinced me that I should be better prepared.  I can always use the RAM, and I have a spare MB.

    Any questions I can help with, feel free to send.

    If anyone interested in small renderfarms lives near Baton Rouge, that is where I am.

    Thanks again to everyone for all the info and encouragement!

    Greymom

     

    Wow, if I'm doing my math right you've got 3 machines with 32 render cores each and 2 with 24 render cores each, for a total of 144 rendering cores (!)  What a render network!  Must be a blast to render with. :)

    Well, I'm hoping it will be when I get the time. :D   Only two of the systems are in cases; the rest are on trays (big time constraint to get my "free" Win 10 upgrade).   Still a lot of work to do, and I will get back to it as soon as I get the house and yard back under control.  "The oxen are slow, but the Earth is patient" (High Road to China).   At least it will probably be winter by then, and I can also use the system to heat my house!  But anything will be an improvement.   My ancient Q6600 machine (anyone remember ABIT P35 motherboards?) has a LuxMark score of about 80, and that is what I did all my previous rendering on.

    Just a caveat for building a dual-E5 machine: most of the new (non-blade) motherboards are SSI-EEB 12" x 13" form factor (same as XL-ATX, but the mounting holes are different).  The EEB tower cases are hard to find for a decent price (Newegg has some), and my dogs could live in one. I think I bought the last two Raidmax Vampire cases available, but the boards fit and they look cool.  The X8 blades only fit in the special 1U/2U SuperMicro server modules, so I will have to modify an old jumbo tower case to fit two blades and the power supply (the PS is for two blades).

    Greymom

     

  • kyoto kid Posts: 41,023
    ArtisanS said:

    Blender indeed uses CPU rendering when they have to render a movie (Sintel, Tears of Steel, and Cosmos Laundromat). Ton explained that scenes could easily outgrow the wildest combination of video cards; however, there are also studios that use 8 Titans to render stills (and movies) at a blistering pace, so it's not only hobbyists rendering on GPUs. It just depends on what you are creating. I have a privately built prop, an apartment building (outside only) in all its details. That prop alone eats 12 GB of RAM, so rendering is done on the processor cores (6 of them, to be exact), and that takes time. But sometimes you just have to wait. If all goes well, the next generation of Titan Xs could sport 24 GB of video RAM (if the 980-to-1080 jump is a guide to extrapolate from). So I guess more and more will be handled by GPUs. Fact of the matter is that GPUs are better at handling complex single-precision calculations than ordinary processor cores.

    Greets, Artisan!

    ...I'd actually expect the next-generation Titan X to have 16 GB instead of 24. GTX technology is primarily marketed towards the gaming sector rather than professional CGI production.  I'd also expect the Quadro line to be the first to get HBM2 memory.

  • Can two Xeon E5-2670 8-core CPUs (clocked at 2.6 GHz, 20 MB L3 cache each) compete with one GTX 980 Ti when rendering with Iray?

  • Kendall Sears Posts: 2,995
    edited October 2016

    Can two Xeon E5-2670 8-core CPUs (clocked at 2.6 GHz, 20 MB L3 cache each) compete with one GTX 980 Ti when rendering with Iray?

    Only if you blow out VRAM.  Iray is heavily GPU-optimized, with little optimization on the CPU computing side.  32 cores are not enough to compete with 2,800 CUDA cores if everything fits in VRAM.

    Kendall

    Post edited by Kendall Sears on
  • Thank you very much, Kendall, for the answer. I've looked on the Internet about it and found nothing. VRAM has always been an issue for me: I had a 2 GB card, then 4 GB, and now I was forced to change to 6 GB. The VRAM is still not enough; it can only handle a scene with three characters, and I need a bigger scene with six characters. The card can't handle that. I know that I need at least 12 GB, but what if I hit the VRAM limit again? I would need to upgrade the cards. It's very frustrating, because I don't have the money to upgrade the PC every time my cards can't handle a scene. I just need to reach the rendering power of a single 980 Ti with CPUs. If I could do that I would be super happy! No more upgrades for a long time! Yay!

    Kendall, can you tell me what I need to make that happen? Do I need a cluster with a few racks? I don't need to use it very much, just three days a week, and noise is not a problem. I have a budget of US $6,000.

    A little test I made, an i7 vs. a 980 Ti rendering the same scene:

    980 Ti -----> 20 min to reach 100%
    i7 -----> 1 hour and 20 min to reach 20%

    I don't speak English, so if you didn't understand something please let me know so I can correct it and explain it better. Many, many thanks, Kendall!!!
  • Kendall Sears Posts: 2,995
    edited October 2016
    Thank you very much, Kendall, for the answer. I've looked on the Internet about it and found nothing. VRAM has always been an issue for me: I had a 2 GB card, then 4 GB, and now I was forced to change to 6 GB. The VRAM is still not enough; it can only handle a scene with three characters, and I need a bigger scene with six characters. The card can't handle that. I know that I need at least 12 GB, but what if I hit the VRAM limit again? I would need to upgrade the cards. It's very frustrating, because I don't have the money to upgrade the PC every time my cards can't handle a scene. I just need to reach the rendering power of a single 980 Ti with CPUs. If I could do that I would be super happy! No more upgrades for a long time! Yay!

    Kendall, can you tell me what I need to make that happen? Do I need a cluster with a few racks? I don't need to use it very much, just three days a week, and noise is not a problem. I have a budget of US $6,000.

    A little test I made, an i7 vs. a 980 Ti rendering the same scene:

    980 Ti -----> 20 min to reach 100%
    i7 -----> 1 hour and 20 min to reach 20%

    I don't speak English, so if you didn't understand something please let me know so I can correct it and explain it better. Many, many thanks, Kendall!!!

    Optimize your scene, not your hardware.  There are likely MANY MANY MANY props and surfaces that are using 4K x 4K textures.  These will chew up your VRAM.  For any that are not in direct focus or that are not clearly visible, lower the texture size by as much as you can while still looking "good".  The texture atlas is your friend.  Turn off any clothed/occluded parts of your figures: Inner Mouth, teeth, tongue, feet, thighs, etc.

    EDIT:  On props that are far away from the camera, or out of focus, sometimes you can completely remove the displacement/bump/normal maps and save yourself many megabytes of VRAM per map.  This is a trial-and-error process, though; if you remove the displacement/bump/normal maps and it goes bad, you'll want to restore the surface.  Also, keep your maps all the same size if you can.

    Kendall

    Post edited by Kendall Sears on
  • mjc1016 Posts: 15,001

    Also, going along with what Kendall said: use procedurally generated/noise-based textures if you can.  They consume a lot less memory.

    Iray uses about 3 bytes/pixel, so a 4K x 4K image will use about 50 MB, a 2K x 2K one about 12 MB, and a little tiling 512 x 512 'procedural' image under 1 MB.  And that's per MAP.  So think about it: 1 diffuse, 1 bump, 1 specular/roughness/whatever, and 1 SSS per surface.  For your average figure (G3) there are 4 'main' maps; some surfaces share a single map (Face, Lips, Ears, and Eye Socket, for example), and the eye parts and mouth parts are two more, so 6 total (and you can add 1 to 3 more if the makeup/eye color/nail color maps don't replace the originals!).  So it's entirely possible that a single textured figure, not counting clothes and hair, can use up to half a GB for just the textures!
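
    A quick back-of-the-envelope version of that arithmetic in Python (the map counts are illustrative assumptions, not numbers measured from Iray):

        # Sanity-check the ~3 bytes/pixel arithmetic above (illustrative figures).
        BYTES_PER_PIXEL = 3

        def map_mb(side):
            """Approximate memory for one square texture map, in MB."""
            return side * side * BYTES_PER_PIXEL / (1024 * 1024)

        print(f"4K map:  {map_mb(4096):.0f} MB")    # ~48 MB
        print(f"2K map:  {map_mb(2048):.0f} MB")    # ~12 MB
        print(f"512 map: {map_mb(512):.2f} MB")     # ~0.75 MB

        # Worst case for one figure: 6 map regions x 4 map types, all at 4K.
        # In practice not every region carries every map, hence "up to half a GB".
        print(f"One figure, worst case: {6 * 4 * map_mb(4096) / 1024:.1f} GB")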

  • Renomista Posts: 921

    All of what was said above.

     

    But there are also other, less time-consuming things you can do (and I did them regularly before I got a Titan X):

    1) Hide everything that is out of view and is not intended to affect the light (like walls). This includes things like unseen clothing (underwear) that is still loaded.

    2) Reduce the SubD level of figures/objects in the background.

    3) Hide part of your figures (e.g. those in different areas of the picture) and render; then hide the others, unhide the previous ones, and render again. Combine the pictures in Photoshop/GIMP via layering (see the sketch at the end of this post).

    4) When you start tinkering with textures, the easiest starting point is often to remove the (often big) normal maps of figures in the background. You can save a lot of VRAM here without significant impact on the picture.

    General advice: Get Sim Tenero's Iray Memory Assistant. Even though it is not 100% accurate due to DAZ/Iray limitations, it is very helpful when optimizing your scene.
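
    For tip 3, the recombination step can also be scripted rather than done by hand in Photoshop/GIMP. A minimal sketch with the Pillow library, assuming both passes were rendered at the same resolution as PNGs with transparency (the file names are made up):

        # Stack two partial renders into one image (layering, as in tip 3).
        # Requires Pillow: pip install pillow
        from PIL import Image

        base = Image.open("pass_left_figures.png").convert("RGBA")
        overlay = Image.open("pass_right_figures.png").convert("RGBA")

        # alpha_composite lays the second pass over the first, honoring transparency
        Image.alpha_composite(base, overlay).save("combined_scene.png")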

  • Thanks for all the advice!!! I currently have an i7-4790 with 4 cores. If I switch to 16 cores, will I render 4x faster? For example, it took 40 min to render a scene with my 4-core i7; if I get 12 more cores, will it be 4x faster? This is my last question about rendering, I swear. Thanks again!
  • mjc1016 Posts: 15,001
    Thanks for all the advice!!! I currently have an i7-4790 with 4 cores. If I switch to 16 cores, will I render 4x faster? For example, it took 40 min to render a scene with my 4-core i7; if I get 12 more cores, will it be 4x faster? This is my last question about rendering, I swear. Thanks again!

    No, but it will be considerably faster: probably 3x to close to 4x, but not quite exactly 4x. Or, using that example, roughly 11 minutes instead of the ideal 10.  There are some processes that are done on a single core (like texture prep) that won't gain any speed no matter how many cores you throw at them, so that part of the overall time won't shrink; but the actual render process itself, yeah, a lot faster.
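
    The "close to 4x but not quite" is Amdahl's law: the single-core portion caps the overall gain. A minimal sketch, with the 1% serial fraction assumed rather than measured:

        # Amdahl's law for the 4-core -> 16-core question above.
        def speedup(cores, serial_fraction):
            """Overall speedup vs. one core when part of the job is serial."""
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

        SERIAL = 0.01   # assume ~1% of the work (texture prep, etc.) is single-threaded
        gain = speedup(16, SERIAL) / speedup(4, SERIAL)
        print(f"16 cores vs 4 cores: {gain:.1f}x")          # ~3.6x, not 4x
        print(f"40-minute render -> ~{40 / gain:.0f} min")  # ~11 minutes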

  • kyoto kid Posts: 41,023
    Renomista said:

    All of what was said above.

     

    But there are also other, less time-consuming things you can do (and I did them regularly before I got a Titan X):

    1) Hide everything that is out of view and is not intended to affect the light (like walls). This includes things like unseen clothing (underwear) that is still loaded.

    2) Reduce the SubD level of figures/objects in the background.

    3) Hide part of your figures (e.g. those in different areas of the picture) and render; then hide the others, unhide the previous ones, and render again. Combine the pictures in Photoshop/GIMP via layering.

    4) When you start tinkering with textures, the easiest starting point is often to remove the (often big) normal maps of figures in the background. You can save a lot of VRAM here without significant impact on the picture.

    General advice: Get Sim Tenero's Iray Memory Assistant. Even though it is not 100% accurate due to DAZ/Iray limitations, it is very helpful when optimizing your scene.

    ...hiding parts of a set or prop is contingent on how the mesh was set up. For example, hiding the mesh of, say, a structure in the background of a large set can also affect other portions of the mesh, like the ground plane or adjacent buildings/props.

    ...layering/compositing has one weak point, as shadows may affect several render "layers".  This can mean they have to be "painted in" by hand in post.

    ..."tinkering" with textures in a 2D programme can be incredibly tedious and even become a "diminishing returns" situation compared to the resultant render time, especially in a very busy scene.

    ...please excuse any typos as the spell check as you type is borked once again.

  • paul_ae567ab9 Posts: 231
    edited August 2017

    I feel one of the biggest hobbyist "gotchas", from a cost standpoint, is being stuck with Daz Studio and no Linux version.  If you want serious processing, nothing beats Linux. It is the platform most "real" render farms use, not Windows.  It's not just about the cost of a license for each machine, though with a thousand PCs running that alone would be very expensive; it is the scalability factor.  Nvidia is for smaller shops and hobbyists unless you can pay for their render-farm-class engines. Nvidia does have excellent Linux support, although that does nothing to help Daz Studio users, at least not without jumping through a lot of hoops to get your scenes from Windows to Linux to use that power.  Daz appears totally uninterested in considering this advancement.

    But tons of CUDA cores in a Linux box make for some powerful calculating machines. This is where the Nvidia architecture shines brightest: in a farm environment full of CUDA cores.  The actual CPU cores are not all that relevant, since most are not very involved in rendering.  They're there for management and housekeeping and keeping all the PCs talking to each other.

    When relying on the Nvidia cards for the actual rendering, those farms would be subject to the same RAM limitations you and I are: they could only hold a scene that fits in the RAM of the smallest of the cards.  You could have a farm of 1,000 PCs, all with, say, 8 GB cards, and the entire farm could still only process a scene of that size.

    Studios often write their own software, as that is the only way to get the speed and throughput they need. They won't be running Iray because of the aforementioned limitations.

    A custom farm for someone like Pixar does not rely on supercomputers; it relies on a TON of pretty normal boxes with Tesla or similar cards, as none of them require any monitors except perhaps a manager's console.  Linux lends itself to massive parallel processing this way, which is how "poor" labs (not just graphics but weather, chemistry, genomics, etc.) build their "super" computers: Linux with a lot of RAM and a lot of CUDA cores.

    You could have your own supercomputer in one box with Linux and a boatload of RAM, for under a grand if you do it right.

    https://web.stanford.edu/~hydrobay/lookat/gpumeister.html

     

    I wish I had that!


    Post edited by paul_ae567ab9 on
  • kyoto kid Posts: 41,023
    edited August 2017

    ...I agree, Linux is more elegant when it comes to compute performance; however, the one caveat is that there are just too many distros floating around for many software developers to mess with.  You favour one version, and users of the others start crying "foul".  With Windows and macOS there are only two individual development paths to keep track of, and they are relatively stable (well, in a sense). 

    Daz is a small company that does not have the development resources of an Autodesk or Adobe. Crikey, they haven't even bothered much with the other products they own (Bryce, Carrara, Hexagon) in years. 

    The issue I have with GPU rendering is the limited amount of VRAM for the cost compared to physical memory. 

    I can get 64 GB of physical memory for about the price of a standard GTX 1080 (it doesn't really matter if it is DDR3 or DDR4, as long as a 4-channel configuration is supported).  I can get two 2.66 GHz 8-core hyperthreading Xeons for around $350, giving me 32 CPU threads.  As I also work with software that does not natively support GPU rendering, going the way most studios do suits my needs better, even for rendering in Iray.  For one, with 64 GB of physical memory, I pretty much will not have to worry about the process dropping into much slower swap mode; that right there is a time savings. Second, as I tend to create fairly large-scale scenes a fair amount of the time, it would pretty much take the resources of a Titan Xp or even a Quadro P5000 to ensure the process remains in VRAM until it is completed.  For $1,200 (the cost of that Titan Xp) I could have 128 GB of physical memory.

    Basically, for a little more than the price of that Quadro P5000, I could have a pretty raging workstation that even includes a pair of 1070s for Daz (once the price finally settles down after the cryptomining rush bottoms out).

    Post edited by kyoto kid on
  • JamesJAB Posts: 1,760
    edited August 2017
    kyoto kid said:

    ...I agree, Linux is more elegant when it comes to compute performance; however, the one caveat is that there are just too many distros floating around for many software developers to mess with.  You favour one version, and users of the others start crying "foul".  With Windows and macOS there are only two individual development paths to keep track of, and they are relatively stable (well, in a sense). 

    Daz is a small company that does not have the development resources of an Autodesk or Adobe. Crikey, they haven't even bothered much with the other products they own (Bryce, Carrara, Hexagon) in years. 

    The issue I have with GPU rendering is the limited amount of VRAM for the cost compared to physical memory. 

    I can get 64 GB of physical memory for about the price of a standard GTX 1080 (it doesn't really matter if it is DDR3 or DDR4, as long as a 4-channel configuration is supported).  I can get two 2.66 GHz 8-core hyperthreading Xeons for around $350, giving me 32 CPU threads.  As I also work with software that does not natively support GPU rendering, going the way most studios do suits my needs better, even for rendering in Iray.  For one, with 64 GB of physical memory, I pretty much will not have to worry about the process dropping into much slower swap mode; that right there is a time savings. Second, as I tend to create fairly large-scale scenes a fair amount of the time, it would pretty much take the resources of a Titan Xp or even a Quadro P5000 to ensure the process remains in VRAM until it is completed.  For $1,200 (the cost of that Titan Xp) I could have 128 GB of physical memory.

    Basically, for a little more than the price of that Quadro P5000, I could have a pretty raging workstation that even includes a pair of 1070s for Daz (once the price finally settles down after the cryptomining rush bottoms out).

    Honestly, you would be better off with a single GTX 1080 Ti over a pair of 1070 cards.  You get almost double the CUDA cores, almost double the memory bandwidth, and 3 more GB of VRAM.  Then there is the price: at non-inflated prices, the GTX 1070 started just under $400, while the GTX 1080 Ti starts at $700.  And if the scene fits in the 11 GB, it will render circles around the P5000.

    Post edited by JamesJAB on
  • kyoto kid Posts: 41,023
    edited August 2017
    ...yeah, one 1080 Ti would do the job, but I can start with a single 1070 (which hopefully will drop to around $380 again) and add a second later. The other thought, though, is using the second one for the displays. This way I'd have 8 GB dedicated to running the viewport in Iray view mode, which would help deal with lag. True, the GTX 1080 Ti would offer better up-front performance. However, as I keep mentioning, Quadro cards are designed to handle high workloads for an extended amount of time, which translates to a longer service life.
    Post edited by kyoto kid on
  • kyoto kid Posts: 41,023
    ...getting back to the topic of CPU vs. GPU rendering: another reason CPU rendering is preferred is that, in spite of being slower, it is more accurate. If resources were not an issue and I could get someone to figure out how to get W7 to work on an Epyc CPU, I'd go for a dual-Epyc system (128 processor threads) with 128 GB of physical memory.