Why do the most important studios use CPU for rendering?

24 Comments

  • kyoto kid Posts: 41,023
    mjc1016 said:
    kyoto kid said:
     

    As I mentioned elsewhere, when I have the Daz programme open with a scene loaded it takes a whopping amount of physical memory (again, my rail station scene alone eats up 8.7 GB in idle mode).  So first, how does this translate to VRAM for rendering purposes: does the scene take up the same amount of memory on the GPU? More? Less?  Would I be better served going with the dual 8-core Xeon 128 GB setup originally planned?

    Not quite...because part of that 8.7 GB is the program itself...which isn't passed to the GPU.  Also, Iray does its own texture compression, so the full, uncompressed image files will be in RAM but not passed to the GPU...but all that said, thinking of that as a max is probably a 'safe' bet.  The amount given to the card for rendering will probably be much lower than that, but it shouldn't go higher...
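
    For a rough sense of the numbers involved, here is a back-of-the-envelope sketch in Python. The map count and sizes are illustrative assumptions, not Daz Studio measurements, and Iray's actual compression ratios vary:

        # Rough, illustrative texture-memory math -- assumed counts, not Daz measurements.
        BYTES_PER_TEXEL = 4  # 8-bit RGBA, uncompressed in host RAM

        def texture_mb(width, height):
            """Uncompressed size of one map, in megabytes."""
            return width * height * BYTES_PER_TEXEL / 2**20

        one_4k = texture_mb(4096, 4096)   # 64 MB per 4Kx4K map
        scene_gb = 30 * one_4k / 1024     # e.g. 30 such maps across figures and props
        print(f"one 4K map: {one_4k:.0f} MB; 30 maps: {scene_gb:.1f} GB uncompressed")
        # Iray's compression shrinks what actually reaches the card, which is why
        # the in-RAM total is only a loose upper bound on the VRAM footprint.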

    That 'safe bet' doesn't quite hold.  It is true that the effective size of the render footprint and the size of the in-memory footprint are mostly independent.  However, to say that the render size will likely be less than the "design time" size is not completely valid.  The design-time memory use is resolution independent and usually represented at a much smaller resolution on "screen", therefore some things will not necessarily be represented at all in the memory use.  It is entirely possible for an image done at "gallery" sizes to vastly balloon when rendering, because the textures don't necessarily compress in size but may in fact expand, due to details that would be "pressed out" of the scene at smaller resolutions becoming prominent and causing the texture(s) to be recalculated at a size that is now visible.

    For instance, bump and/or displacement values that would normally fall under the precision of the engine due to small size/distance from camera suddenly become an issue: not only are they now within the precision in use, they cause the system to recalculate the rises and valleys that previously didn't exist and increase the effective surface area by the new volume(s) generated.  Then one must take into account that in these situations new "geometry" now exists because of the displacement, and that new geometry must be compensated for in the memory of the engine.  In most cases, this will not occur on the VRAM side, but will happen on the CPU side, and then the resultant larger render set is loaded and rendered.  Now one must consider that reflections and incidental shadows must be calculated based on a possibly MUCH larger set of facets than were shown and/or used at design time.

    There are other complexities involved, but I believe this gives an introduction to some of the considerations that come into play.

    Kendall
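
    To make that ballooning concrete, a minimal sketch of the kind of growth described above, assuming displacement detail is diced at roughly pixel scale. All numbers are illustrative assumptions, not figures from this thread:

        # Illustrative only: assumes displacement detail is diced to roughly
        # one micro-facet per covered pixel (a common dicing heuristic).
        def microfacets(width, height, coverage=0.5):
            """Approximate micro-facets for geometry covering a fraction of the frame."""
            return int(width * height * coverage)

        design  = microfacets(1280, 720)    # ~0.46M facets at design-time preview size
        gallery = microfacets(7680, 4320)   # ~16.6M facets at a large-format size
        print(f"growth: {gallery / design:.0f}x")  # ~36x more geometry to shade,
        # and reflections/shadows must now be traced against that larger facet set.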

    ...so basically unless I can afford a Quadro P6000, I am stuck with CPU rendering in Iray.

  • kyoto kid Posts: 41,023

    Kyoto Kid, if you want to use Iray with your present PC, you could always tick CPU only and use your existing system RAM. It will be slow, like LuxRender, but it's better than nothing and you can save your money for more RAM which will be much cheaper than any high end card. As you have said many times, you are just doing single frames and not animation.

    ...I have no choice, as my GPU has only 1 GB of VRAM.  Even so, most of my scenes end up rendering in swap mode, which is even slower, thanks to the fact I usually have only 2.5 - 3 GB of overhead for rendering due to the base amount of memory the scene file and Daz programme take up. Doubling my system memory to 24 GB would help, but it also means upgrading the OS from W7 Home to Pro Edition, since Home Premium caps out at 16 GB. All totaled, about a $255 expense.

  • Kendall Sears Posts: 2,995
    edited September 2016
    kyoto kid said:
    ...so basically unless I can afford a Quadro P6000, I am stuck with CPU rendering in Iray.

     

    No. As has been discussed in other threads, you can lower the scene's size through techniques like using the texture atlas, hiding geometry that will not be seen, turning off bump and displacement where it is going to make absolutely no difference, etc. These will lower the resource load (even in 3DL) making it more likely that things will fit in VRAM and speeding up the render in general.

    If the tongue is not seen, turn it off (same with teeth and inner mouth). That removes the facets and the texture load for it. It also means that the engine doesn't need to do visibility tests for it during render. This goes for feet, legs, upper arms, etc when they are completely covered by opaque clothing. Doing that can bring an otherwise huge scene down significantly in size.

    Kendall
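
    A rough sketch of the savings from that kind of pruning, with made-up but plausible per-part numbers (these are illustrative assumptions, not statistics from any actual figure):

        # Hypothetical per-part costs for a clothed figure -- illustrative assumptions.
        parts = {                      # part: (facets, texture MB)
            "head": (12000, 192),      "torso": (18000, 192),
            "legs": (14000, 128),      "feet": (6000, 64),
            "upper arms": (5000, 64),  "mouth/teeth/tongue": (4000, 96),
        }
        hidden = {"legs", "feet", "upper arms", "mouth/teeth/tongue"}  # covered/unseen

        total = (sum(f for f, _ in parts.values()), sum(t for _, t in parts.values()))
        kept = (sum(f for p, (f, _) in parts.items() if p not in hidden),
                sum(t for p, (_, t) in parts.items() if p not in hidden))
        print(f"facets {total[0]} -> {kept[0]}, textures {total[1]} MB -> {kept[1]} MB")
        # Fewer facets also means fewer per-ray visibility tests during the render.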

  • Kendall Sears said:

    No. As has been discussed in other threads, you can lower the scene's size through techniques like using the texture atlas, hiding geometry that will not be seen, turning off bump and displacement where it is going to make absolutely no difference, etc. [...]

    +1

  • Kendall Sears said:

    The PA's continue to put out great looking content using the same techniques that were used prior to this point -- minimalistic geometry with huge 4Kx4K textures for every surface, with some small changes for the way that Iray handles/mishandles maps.

    This is partly because the tools needed to do highly detailed morphs aren't available to the prospective vendors, so most new folks get into the habit of doing the highly detailed textures from the beginning and don't change how they do it.

  •   This is partly because the tools needed to do highly detailed morphs aren't available to the prospective vendors, so most new folks get into the habit of doing the highly detailed textures from the beginning and don't change how they do it.

    I'm not talking about morphs, but the base geometry.  HD Morphs are a completely different animal.  PAs have been conditioned to create models using almost "game level" numbers of polygons simply because 3DL preferred it.  For instance, if modeling a house made of brick, the tendency would be to create the walls with as few facets as possible, apply a brick texture and brick displacement map to match the bricks.  In 3DL, this would almost be mandatory.  These types of habits are very hard to break.  Iray would prefer that each brick be made of its own facets, separated by facets for the mortar.  If the bricks could be instances, so much the better.

    Kendall
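
    A quick sketch of why instancing the bricks is so much cheaper than unique geometry. The counts and per-vertex size below are assumptions for illustration:

        # Illustrative comparison: unique brick meshes vs. instances of one brick.
        BYTES_PER_VERTEX = 32      # position + normal + UV, roughly
        VERTS_PER_BRICK = 8        # a simple box brick
        BRICKS = 20000             # an assumed two-story brick house

        unique_mb = BRICKS * VERTS_PER_BRICK * BYTES_PER_VERTEX / 2**20
        # One shared mesh plus a 4x4 float transform (64 bytes) per instance:
        instanced_mb = (VERTS_PER_BRICK * BYTES_PER_VERTEX + BRICKS * 64) / 2**20
        print(f"unique: {unique_mb:.1f} MB, instanced: {instanced_mb:.1f} MB")
        # ~4.9 MB vs ~1.2 MB here, and the gap widens fast once each brick
        # carries bevels or chips that raise its vertex count.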

  • Kendall Sears said:

    PAs have been conditioned to create models using almost "game level" numbers of polygons simply because 3DL preferred it. [...] Iray would prefer that each brick be made of its own facets, separated by facets for the mortar. If the bricks could be instances, so much the better.

    True, at least to a point. I agree with you on the environments and even clothing items, but until DAZ provides a separate high-polygon base figure specifically for use with Iray, character vendors will have to make at least one low-poly figure and sell it before they can get the tools to be able to improve how their characters look in Iray to any significant degree.

  • kyoto kid Posts: 41,023
    edited September 2016
    Kendall Sears said:

    No. As has been discussed in other threads, you can lower the scene's size through techniques like using the texture atlas, hiding geometry that will not be seen, turning off bump and displacement where it is going to make absolutely no difference, etc. [...]

    ...that is still a lot of time-consuming and tedious piecemeal work, particularly in the large scenes I create. Also, again, I am looking to render large format, so turning off bump and displacement could have an undesirable impact on the final rendered scene. Iray already interprets these last two values poorly without really cranking them up.

  • kyoto kid said:
    ...so basically unless I can afford a Quadro P6000, I am stuck with CPU rendering in Iray.
    kyoto kid said:

    ...that is still a lot of time-consuming and tedious piecemeal work, particularly in the large scenes I create.

    Pay, Think, or Wait. I have to do one of these. I will not get fast high quality renders without effort for free.

  • wolf359 Posts: 3,826
    edited September 2016

    "Pay, Think, or Wait. I have to do one of these. I will not get fast high quality renders without effort for free.

    This is quite true.
    If one prefers to do dense, content-heavy stills, one should at least consider learning some "cheats" like rendering elements separately and compositing them in post, or at least implementing some of the polycount optimizations suggested by Kendall Sears.
    Or invest in some $$uber hardware$$.
    One needs to be willing to abandon long-held practices and learn different ones (hence the thinking).
    No easy solutions no matter what course you take.

  • StratDragon Posts: 3,167
    edited September 2016

    Some studios may use CPU for some aspects and GPU for others. They may not want a proprietary engine which locks them into a proprietary hardware requirement, or their findings may show a GPU rendering engine for specific functionality is not as exact as a CPU-based methodology. 3D studios are popping up all the time, with results ranging from utterly astounding (for a small group of people) to 'do they know what a bump map is?', but a variety of engines are being used. Pixar uses its own engine, developed nearly 20 years ago; it allows specific features that may not be available in proprietary engines, and it may also be specific to their modeling pipeline. If you're coming at this from the Studio-centric mindset that "Iray works for me, it should work for everyone", you may not be considering the workflow of something far more involved and demanding.

  • AllenArt said:

    Because, as Kendall Sears would explain more fully than I can, it's actually much easier to adequately cool a CPU than a GPU, plus the well-known rendering engines like 3Delight that they use have clients that do not require a functional graphical interface on the rendering PC. This means that the main power load on the power supply is the CPU and local drives, not a device that is not essential to operation of the system.

    This.

    I have a desktop with two 8-core Xeons and 64 gigs of RAM, and a GTX 980Ti card with 6 gigs. The amount of time it takes to render an average scene of mine is pretty close whether I use CPU or GPU. That desktop probably looks like a toy compared to some of the rigs in the big studios.

    You're not Kendall Sears..... oh... there's Kendall Sears..... I'm all ears..... :)

    Sorry, just slipped into the silly season.....

  • Blush....

    There are actually a plethora of reasons, but those given are definitely in the mix.  For still renders or semi-motion walkthroughs (architectural, design, visualization, etc) PBR rendering works well.  For "full motion" VFX and CGI there are a lot of very complex calculations that just don't perform well on GPUs.  One can reference the CISC vs RISC CPU wars of years past to see why (GPU is RISC while CPU is CISC -- literally).

    Cooling can be a problem as datacenters are very sensitive about the BTU output of EVERY piece of equipment in the server rooms.  It is also the Watts vs Work equation.  For complex operations (motion blur, antialiasing, light matching, and other advanced techniques) it can take several GPU operations to do the work of 1 or 2 CPU operations.  Over time the electricity use WILL add up.
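
    As a rough back-of-the-envelope illustration of that watts-vs-work equation (the wattages echo the figures in this thread, but the electricity rate and job length below are assumptions):

        # Illustrative electricity math; plug in your own wattage, hours, and tariff.
        RATE = 0.13  # assumed USD per kWh

        def energy_cost(watts, hours, rate=RATE):
            """Cost in USD of drawing `watts` continuously for `hours`."""
            return watts / 1000 * hours * rate

        gpu_card = energy_cost(350, 10)        # one heavy card, a 10-hour render
        cpu_node = energy_cost(1200, 10)       # one multi-Xeon node, same job
        rack     = energy_cost(1200 * 8, 10)   # an 8-node rack
        print(f"GPU ${gpu_card:.2f}, CPU node ${cpu_node:.2f}, rack ${rack:.2f}")
        # $0.46 vs $1.56 vs $12.48 per job -- before the cooling units,
        # which add their own draw on top of the render hardware.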

    With this being said, there are many studios that are using Tesla Units for some precalculations that are done before the scenes are transferred to the render farms.  See my post on the MASSIVE amount of work and data required to do dynamic atmospherics (http://www.daz3d.com/forums/discussion/106011/anyone-else-drooling-over/p1) and you can see where the Tesla GPUs can be helpful.  In the end though, the final rendering is done on CPUs because of the resource use and other various complications.

    It is late, and I can (and have) lectured for hours on these very topics in University Settings (and will be again here very soon).  So, for the sake of the reader's sanity, I will refrain from my normal propensity to write a textbook on the various details that come into play.  If there is enough interest, I can expound later.

    EDIT:  To get this back OT, the reason that hobbyists gravitate to GPUs is four-fold:

    1.  OS requirements.  To go above 2 physical CPUs, Windows requires a Server license.  The lowest-cost Windows Server is several hundred dollars and tops out at 32 GB of RAM; to go above that will cost over $1000.  This effectively caps the CPU core count at 40 (if one uses two 10-core Xeons w/ hyperthreading).  A GPU gets the user past this OS limitation for much less.
    2. Power Requirements.  Even the heaviest GPU cards only suck about 350W under full load.  A single multi-Xeon machine can suck over 1200W.  When I power on my racks fully, my electricity bill quadruples with only a small number of days of use.  There is also the fact that most residences are not wired to handle the Amperage load that a render farm will want, so specialized power circuits have to be installed.  Powering the necessary environmental cooling units is also a massive power drain.
    3. Space.  Few people have the space required to house a render farm.  A GPU card or two can fit inside a standard Tower/4U case.
    4. Noise.  Most multi-CPU (>2 CPU) machines are EXTREMELY noisy.  So noisy that one cannot hold a conversation in normal tones next to one.

    Kendall

    I remember watching a small "Extra" on the Battlestar Galactica Blu-rays about the special effects and the local render farm they had built on the premises. It was very impressive, and at a time when I didn't even know about DS or Poser.

    I don't know what digital tools that series used, but if anyone does know, please chime in. I am still amazed by the digital Galactica model used in that show. Did they ever make it available for public use?

  •  

    Kendall Sears said:

    EDIT:  To get this back OT, the reason that hobbyists gravitate to GPUs is four-fold: [...]

    I suspect hobbyists under-use rental render farms. With DS, there isn't a practical way (I know of) to get your files to the farm, but if you're working with a supported application like the Autodesk packages, C4D, or Modo, pushing your renders out to a farm is probably cheaper than buying the hardware to do them nearly as quickly…and you put the burden of keeping current on the farm owners, as well as the power use and noise.

  • I suspect hobbyists under-use rental render farms. With DS, there isn't a practical way (I know of) to get your files to the farm, but if you're working with a supported application like the Autodesk packages, C4D, or Modo, pushing your renders out to a farm is probably cheaper than buying the hardware to do them nearly as quickly…and you put the burden of keeping current on the farm owners, as well as the power use and noise.

    I spun up a Vue RenderCow instance on Amazon Web Services to see if it could be done.  It can, and it worked reasonably well.  You can fire up an 8 CPU Windows box for about 77 cents an hour.

    If the alternative is purchasing a new system at $2400, that's about 3100 hours of render time, without the heat and power issues.
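
    The break-even arithmetic is easy to check. The $0.77/hour and $2400 figures come from the posts above; the weekly usage rate is an assumption:

        # Break-even between renting cloud render time and buying a workstation.
        CLOUD_RATE = 0.77     # USD/hour for the 8-CPU instance mentioned above
        WORKSTATION = 2400.0  # USD purchase price being considered

        breakeven_hours = WORKSTATION / CLOUD_RATE
        print(f"break-even after {breakeven_hours:.0f} render hours")  # ~3117 hours
        # At an assumed 20 render-hours per week, that's roughly three years --
        # ignoring the workstation's own electricity, cooling, and resale value.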

  •  


    I suspect hobbyists under-use rental render farms. With DS, there isn't a practical way (I know of) to get your files to the farm, but if you're working with a supported application like the Autodesk packages, C4D, or Modo, pushing your renders out to a farm is probably cheaper than buying the hardware to do them nearly as quickly…and you put the burden of keeping current on the farm owners, as well as the power use and noise.

    DS supports RIB output for Renderman export and standalone rendering.  Can't get much more standard than that.

    Kendall

  • kyoto kid said:

    ...that is still a lot of time-consuming and tedious piecemeal work, particularly in the large scenes I create. Also, again, I am looking to render large format, so turning off bump and displacement could have an undesirable impact on the final rendered scene. Iray already interprets these last two values poorly without really cranking them up.

    From what Kendall has said, I think it's more a case that Iray is treating them like the poor substitute for real details in the mesh that they are. I would love to be able to take Nadiya's mesh to the next level and give her more surface detail, but even at SubD 3 she still doesn't look quite right.

  • Kendall Sears Posts: 2,995
    edited September 2016
    From what Kendall has said, I think it's more a case that Iray is treating them like the poor substitute for real details in the mesh that they are. I would love to be able to take Nadiya's mesh to the next level and give her more surface detail, but even at SubD 3 she still doesn't look quite right.

    I don't agree with the bolded part at all.  I was making reference to maps being bad in the context of Iray only.  Iray's use of maps isn't well implemented, but when implemented well maps are an excellent way to introduce "hard to model" details into models.  There are some types of features that are just too freaking hard to get right using geometry, displacement/bump/normal maps allow these to be added without making a huge mess of the mesh.  Iray is one of the outliers that actually excels at working with geometry.  Most engines really, really struggle when poly counts get high.  nVidia happened to get it done well.

    I believe in using the right tool for the job, as well as using the best features of the tool to get the most out of it.  3DL excels at using maps, and so it is perfectly appropriate to use them there.  Whether one needs 4Kx4K maps for EVERY SURFACE is a different argument.  Iray excels at geometry, but that doesn't mean one needs to add geometry where it isn't necessary.

    Kendall

  • mjc1016 Posts: 15,001
    edited September 2016

    The nice thing about Studio...you do have choices.  By including and supporting both Iray and 3Delight (at least at the application level) there is a lot more that can be done than with just one of them.  And that doesn't even count 3rd party exporters...for Luxrender (2x), Blender Cycles (free script by Casual) and Octane.

  • Jonstark Posts: 2,738

    I've got Octane and Thea, but lately I've been going the other direction and aiming more towards CPU rendering instead.  

    The reason?  Because of Carrara's ability to network render natively, which I had never taken advantage of before.  But I bought a new desktop, and then thought 'hey, I've got these 2 laptops that are going unused' and installed a render node on each, and instantly I went from 8 render cores to 24 render cores.

    Then I discovered eBay, something I'd never really looked into before, and realized that you can get 1st, 2nd, and 3rd generation i7 computers for next to nothing!   I guess corporations are constantly upgrading their workstations to the latest/greatest, and turn around and sell the tech from yesteryear for peanuts, and the price is driven down even lower by the fact they all dump their stuff on the market at the same time.

    I remember the bad old days when buying any computer with an i7 cost an arm and a leg.  Not so anymore!  So I thought, 'why not build my own little render farm on the cheap?'   I picked up a 3rd generation i7 workstation for what I thought at the time was a very low cost of $250, and when it arrived I was now up to 32 render cores.  Whoo-hoo!  I was addicted!

    Then someone on the Carrara forum wisely pointed out I was thinking too small, and I ought to set my sights on a dual-CPU Xeon server.  I was absolutely floored that I could pick up an HP Z600 server with 2 hyper-threaded Xeons for just over $200 (!)

    To put that in perspective, this is a server that 4 years ago would have cost me over $5000 to buy... and I got it for just over $200!

    Why would something that was nearly twice as powerful end up costing me even less than an i7 machine?  I think it's because the consumer market doesn't really think 'server' when they go out to find a computer, and the professional market is too focused on buying the newest and most expensive tech, but this kind of thing is absolutely perfect for those of us with a rendering hobby.  (By the way, I'm not a tech person, so I was a little afraid of getting a 'server' as I thought it might take labyrinthine tech skills to operate.  Turns out, it's no different than any other computer.  I installed Windows 7 Pro on it and was good to go.)

    Now I have 48 cores of rendering power at any time I want (it really flies).  Perfect for high-quality, very complex scenes and animations.

     

    But I actually bought 'stupidly' and while I'm not unhappy with my purchases, I really could have paid even less.  See, I had never done an eBay auction before and didn't know how.  I just went to the marketplace, found all the 'buy it now' entries, and picked the ones that looked like the best deals and bought right away.  I've since learned that by paying just a little bit of attention and being willing to bid, a buyer can do even better than I did.  And I also think I've become addicted to watching the eBay auctions tick down...

    I bought a 3rd laptop (2nd gen i7) for $150.  Then I was mad at myself because, while I got the 2nd best deal of the day on any i7 laptop sold that day, there was another auction a few hours later for a similar-spec i7 laptop that went for freakin' $79!   And I don't need a 3rd laptop, but I told myself I wanted a backup to my other two laptops (one of which I use for work, and both of which I paid a ton of money for, so I wanted a laptop that could fail at any moment and it wouldn't pain me in the least).

    Then I bought an hp 8200 i7 workstation for $140, just because I couldn't believe the price (I really thought in the last few seconds it would get bid up much higher and I would lose it).  This is very comparable to the 1st machine I bought for $250, and most models of this type go for about that much.

    I really shouldn't have made either of those purchases, but the prices were so low that I just couldn't resist, and the thought of adding all those render cores...  Yes, I begin to fear I'm addicted.  I made a promise to myself: no more!

    Then yesterday a Lenovo D20 dual-CPU Xeon server came up and no one was bidding on it.  I was sure it was an exercise in futility but I put in a bid - and got it for $124!  That's 2 more Xeons and 16 more render cores.  Yes, I realize I have a problem and need to start going to meetings, but man, what a steal; I've never seen any Lenovo D20 server go for less than $300, so even if I come to my senses I could probably re-sell it for a substantial profit margin.

     

    So the last 3 that I bought haven't come yet, but once they do I'll be up to 80 render cores(!)   Contrast that with the priciest Xeon CPU on the market right now, which has 44 threads and costs $4100.00, and I can't help but think it's a very good time to be a render enthusiast.

  • kyoto kid Posts: 41,023
    mjc1016 said:

    The nice thing about Studio...you do have choices.  By including and supporting both Iray and 3Delight (at least at the application level) there is a lot more that can be done than with just one of them.  And that doesn't even count 3rd party exporters...for Luxrender (2x), Blender Cycles (free script by Casual) and Octane.

    ...the downside is that so much new content is being released with Iray shaders only; converting to other programmes/engines can be a real pain.

  • kyoto kid Posts: 41,023

    ...I was going to bring up the subject of third party render farms the other night but it got late and I was tired.

    I have been discussing this with a friend of mine and yes, in an ideal situation, it can save a lot of zlotys compared to buying/building a beefy system. The one rub: Daz Studio does not yet have a network render interface.   Carrara does have one, however I have little issue with Carrara render times compared to Iray CPU mode.  A comparable scene that I can render in Carrara in, say, 15 min could easily take 5 - 6 hours in Iray.

    Now I wouldn't waste processing time and money on numerous test renders, but for say proofing and the final finished image, yes it could be very economical, particularly for the quality level I am needing.

    Apparently Daz has (or maybe had) something in the works as under the Advanced Render settings in 4.8 there is a tab labelled "Cloud (Beta)"  which has server designation and login along with several parameter sliders.

  • mjc1016 Posts: 15,001
    kyoto kid said:
    mjc1016 said:

    The nice thing about Studio...you do have choices.  By including and supporting both Iray and 3Delight (at least at the application level) there is a lot more that can be done than with just one of them.  And that doesn't even count 3rd party exporters...for Luxrender (2x), Blender Cycles (free script by Casual) and Octane.

    ...the downside is that so much new content is being released with Iray shaders only; converting to other programmes/engines can be a real pain.

    I've said it before...

    I'd prefer if all content came with just maps, no presets.  Then which renderer is used wouldn't matter...the end user would have to set up the materials for the one they preferred.  Frequently, the presets are either not optimal or even contain flat out wrong settings (color maps in control slots, greyscale maps in things like glossy color locations and so on).

  • Jonstark Posts: 2,738
    edited September 2016

     

    1.  OS requirements.  To go above 2 physical CPUs, Windows requires a Server license.  The lowest-cost Windows Server is several hundred dollars and tops out at 32 GB of RAM; to go above that will cost over $1000.  This effectively caps the CPU core count at 40 (if one uses two 10-core Xeons w/ hyperthreading).  A GPU gets the user past this OS limitation for much less.

     

    I was surprised to see this, as on my current little render farm I'm running Win7 Pro, Win8 Pro, and Win10 Pro on my various PCs, but I'm not encountering any CPU core limit yet (currently rendering with 48 cores, will add another 32 cores soon).  All I did was set all my PCs to the same homegroup network so they could talk to each other, and they seem to all render together in Carrara just fine. Did I hit some lucky loophole here, or am I misunderstanding the limitation you're describing?

     

    DustRider said:
    MistyMist said:

     

    There is not a simple answer, because there are many factors at play. For processor speed, when using the same processor version, it seems simple: a 4.0 GHz processor will give you a performance increase of approximately 14% over a 3.5 GHz system (4.0/3.5 ≈ 1.14). But determining the performance increase when using different processor families gets more difficult because of several factors, including instruction set optimization, cache, motherboard chipset, application being used, etc.

    In general, a dual 6-core Xeon system (12 cores) running at approximately the same speed will give an approximate 3x speed increase over a single-processor 4-core (8 thread) i7 system for rendering with Carrara. Carrara probably will not take advantage of any of the new instruction set optimizations implemented in the i-series processors over the last few years, because the render engine hasn't been updated to take advantage of them. On the flip side, Carrara may take advantage of some of the calculation optimizations and larger cache found on Xeon processors. My guess would be that a similar-generation i7 would have slightly slower render performance in Carrara than a Xeon running at the same speed.

    Whoa, that's fascinating!  I had no idea; going to have to concentrate on Xeons in the future!  It does seem to square with the fact that the render buckets in my Xeon machine seem to render much faster than I would have expected for a 2.4 GHz processor.  I'm still looking at upgrading to two hex-core Xeons above 3.4 GHz in the near future (it boggles my mind these CPUs were originally over $1600 a pop and now can be had for just over a hundred dollars), but I was surprisingly happy to see how fast even the little 2.4 GHz renders, and I think your technical explanation goes a long way towards explaining why this might be.
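
    A crude first-order model of the scaling described above -- cores times clock and nothing else -- useful only as a rough guide, since it ignores the cache and instruction-set effects mentioned:

        # First-order render-throughput model: cores x clock, nothing else.
        def relative_throughput(cores, ghz, base_cores=4, base_ghz=3.5):
            """Throughput relative to a 4-core 3.5 GHz i7 baseline."""
            return (cores * ghz) / (base_cores * base_ghz)

        for cores, ghz in [(4, 4.0), (12, 3.4), (12, 2.4)]:
            print(f"{cores} cores @ {ghz} GHz: {relative_throughput(cores, ghz):.2f}x")
        # 1.14x (the ~14% clock-only gain), 2.91x (the ~3x dual-Xeon case),
        # 2.06x (why even a 2.4 GHz Xeon pair feels quick).  Real engines deviate:
        # memory bandwidth, cache, and SIMD instruction support all shift these numbers.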

  • Jonstark Posts: 2,738
    mjc1016 said:
    kyoto kid said:
    mjc1016 said:

     

    I've said it before...

    I'd prefer if all content came with just maps, no presets.  Then which renderer is used wouldn't matter...the end user would have to set up the materials for the one they preferred.  Frequently, the presets are either not optimal or even contain flat out wrong settings (color maps in control slots, greyscale maps in things like glossy color locations and so on).

    I kind of agree with this, since that's pretty much the way I view all products (to render in Carrara or Octane or Thea I'm going to have to tune the shaders anyway, so I'm used to this), but in practice I think that's probably asking a lot of a new user just coming into Studio or Poser for the first time, and I can certainly see why vendors sell products with shaders pre-made (even though personally I think people would be much happier with the render results if they took the time to understand materials/textures/shaders settings and tweaked the settings themselves).

  • mjc1016 Posts: 15,001
    edited September 2016
    Jonstark said:

     I think people would be much happier with the render results if they took the time to understand materials/textures/shaders settings and tweaked the settings themselves).

    That is one very big advantage of doing it yourself.

    But, with all the maps in the 'normal' locations, presets can still be made and shared...they just don't have to be considered 'the only way'.  Since the standard ways of saving out the presets, in recent versions of Studio, don't require any 'protected' data to be shared, it's not that hard to do.

  • kyoto kid Posts: 41,023
    edited September 2016
    Jonstark said:
    I kind of agree with this, since that's pretty much the way I view all products (to render in Carrara or Octane or Thea I'm going to have to tune the shaders anyway, so I'm used to this), but in practice I think that's probably asking a lot of a new user just coming into Studio or Poser for the first time, and I can certainly see why vendors sell products with shaders pre-made (even though personally I think people would be much happier with the render results if they took the time to understand materials/textures/shaders settings and tweaked the settings themselves).

    ...indeed, there would be a lot fewer people using this software were that the case (self included).

  • Horo Posts: 10,597

    Jonstark - I congratulate you on your render farm. I've been using network rendering with Bryce since v5, but only on 5 machines. Network rendering has the advantage that your machines can be anywhere on your network, in any room. This distributes the mains power load, though I'm not sure it is really more efficient in mains power use. Theoretically, Bryce can be made to network render over the public Internet, but it is very tricky to set up (I did it once) because it uses ephemeral TCP and UDP ports. Nevertheless, I prefer the option to network render over having everything in one computer with several graphics cards and huge power supplies. If that fails, you're lost; if one of the computers in the network fails, the others are still there.

     

  • [...]

    Unfortunately, the nature of the content that we're dealing with no longer fits the "general case."  Part of the problem lies in the practice of PA's baking details into the bitmaps instead of building them into the geometry.  This (obviously) comes from all of the years of 3DL, where there is a massive penalty for the use of geometry and almost no penalty at all for adding the detail in the bitmaps.  Iray is designed for handling geometry, not maps.  It is extraordinarily good at handling massive amounts of vertices and facets, but pretty poor at handling bitmaps, and worse at handling modifiers encoded as maps.  Witness how Iray needs great levels of subdivision to handle even moderately detailed displacement maps.

    Unfortunately, DS PA's and users are used to being able to use extra-high levels of definition for displacement to replace what would add probably millions of additional facets of geometry.  
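
    The cost of substituting geometry for those maps is easy to quantify: each Catmull-Clark subdivision level roughly quadruples the face count. A small sketch, where the base count is an assumption:

        # Each Catmull-Clark subdivision level multiplies the face count by ~4.
        def subd_faces(base_faces, level):
            """Approximate face count after `level` rounds of subdivision."""
            return base_faces * 4 ** level

        base = 20_000  # an assumed wall-and-trim prop, modeled 3DL-style
        for level in range(4):
            print(f"SubD {level}: {subd_faces(base, level):,} faces")
        # 20,000 -> 80,000 -> 320,000 -> 1,280,000: the millions of facets
        # the displacement map was standing in for.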

    [...]

    Users continue to rely on hardware growing the necessary VRAM space (sometimes ending up waiting long periods of time), and PA's rely (and hope) on the fact that nVidia and DAZ will likely come up with some magical solution that "makes it all work."

    Unfortunately, I think we're rapidly approaching a point where neither of these is going to come to a satisfactory conclusion, and things are going to "break".  We've already seen this with the nVidia Pascal architecture and the release of the hardware far before the software for non-gaming was ready.  Those who were relying on the newest hardware to solve the VRAM issue were/are sorely disappointed and continue to wait.  The PA's continue to put out great looking content using the same techniques that were used prior to this point -- minimalistic geometry with huge 4Kx4K textures for every surface, with some small changes for the way that Iray handles/mishandles maps.   DAZ is in a holding pattern waiting on nVidia to release the necessary SDKs, while everyone prays nVidia doesn't change something basic again that causes current products to all have to be redone -- again.  Everyone is waiting on somebody else to resolve the "issue" while continuing to bang away without making any changes themselves to compensate.  When "someone" doesn't make good on "their" part, things are going to lock up like a hydraulic system that exhausts its reservoir of fluid.

    [...]

    Are you going to chance it with technology that continually mutates in unknown directions, or are you going to go with what you know will get the job done in time for the release date?

    Kendall

    Thanks for all the super informative posts, especially this one.

    I hope all the crucial bits are heard loud and clear. 

  • Jonstark said:

     

    1.  OS requirements.  To go above 2 physical CPUs, Windows requires a Server license.  The lowest-cost Windows Server is several hundred dollars and tops out at 32 GB of RAM; to go above that will cost over $1000.  This effectively caps the CPU core count at 40 (if one uses two 10-core Xeons w/ hyperthreading).  A GPU gets the user past this OS limitation for much less.

     

    I was surprised to see this, as on my current little render farm I'm running Win7 Pro, Win8 Pro, and Win10 Pro on my various PCs, but I'm not encountering any CPU core limit yet (currently rendering with 48 cores, will add another 32 cores soon).  All I did was set all my PCs to the same homegroup network so they could talk to each other, and they seem to all render together in Carrara just fine. Did I hit some lucky loophole here, or am I misunderstanding the limitation you're describing?

     

    Network rendering is not the same as multiple processors. I am talking about multiple physical processors on the same motherboard.  Windows requires a Server license for any machine with >2 physical processors in the same machine.

    Kendall
