Why do some renders use the CPU, and not the GPU only as designated in Iray render settings?

This is something that's been bothering me for a while, and maybe it's something I'm doing wrong.

I render entirely with Iray, and my scenes never go over my GPUs' memory limit. I also have renders set to use my GPUs only, so if they do go over the memory limit, DS will actually crash. That has happened on rare occasions, but I'm OK with it, because I never want to fall back to the CPU for rendering; it's just too slow.

So I was wondering: why do certain renders slow down my PC and heavily utilize my CPU as well as my GPUs for part of the rendering process, whereas most renders do not?

For example, rendering a scene with the Streaming Hair will completely slow down my computer. I can understand that the render may take longer, since there are individual strands of hair, but I would assume it would still use only my GPUs. That's not the case; it heavily uses my CPU as well. And again, I can confirm that these scenes are only around 4 GB, well under the 6 GB limit of my GPUs.

So what's going on? Why do certain types of geometry hit my CPU instead of doing all the rendering exclusively on the GPUs?

Comments

  • fastbike1 Posts: 4,077

    Even if you select GPU only, Studio should fall back to CPU if you exceed the GPU memory (at least it does on my machine).

  • FrankTheTank Posts: 1,131

    Yes, but the scene is not over my GPU memory limit. I can monitor scenes as they render using an EVGA utility, and I see that the GPUs are being utilized; the scene is not dumping to the CPU. Something else is going on. Certain processes are not handled by the GPUs during rendering, and I was curious what those are.
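
A quick way to double-check VRAM usage during a render, independent of the EVGA utility, is nVidia's own `nvidia-smi` command-line tool. Below is a minimal Python sketch that polls it; the `--query-gpu` fields are standard `nvidia-smi` options, while the 6144 MiB limit is an assumption matching the 6 GB cards mentioned above.

```python
import subprocess

# Fields reported by nvidia-smi: memory used/total in MiB and GPU
# utilization in percent. These --query-gpu field names are standard
# nvidia-smi options.
QUERY = "--query-gpu=memory.used,memory.total,utilization.gpu"

def parse_gpu_csv(csv_text):
    """Parse 'csv,noheader,nounits' output into (used, total, util) tuples."""
    rows = []
    for line in csv_text.strip().splitlines():
        used, total, util = (int(field) for field in line.split(","))
        rows.append((used, total, util))
    return rows

def check_vram(limit_mib=6144):
    """Print per-GPU memory use, flagging anything near the assumed 6 GB limit."""
    out = subprocess.run(
        ["nvidia-smi", QUERY, "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    for i, (used, total, util) in enumerate(parse_gpu_csv(out)):
        status = "NEAR LIMIT" if used > 0.9 * limit_mib else "ok"
        print(f"GPU {i}: {used}/{total} MiB used, {util}% busy [{status}]")
```

Calling `check_vram()` every few seconds while a render runs gives the same picture as the EVGA utility: if VRAM stays under the limit but the CPU is still pegged, the extra load isn't a fallback caused by running out of memory.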

  • There was a thread a while back where it was explained that Iray uses the CPU to handle the geometry and textures as they are loaded, plus some other functions. The program, like Octane, also uses GPU memory to hold the image you are rendering until it's ready to be saved. Also, I ran out of memory doing animations if I left the preview window open, as it was holding on to images: after a few frames, it was over to the CPU. I haven't tried it with the latest version yet, though.

  • FrankTheTank Posts: 1,131

    There was a thread a while back where it was explained that Iray uses the CPU to handle the geometry and textures as they are loaded, plus some other functions. The program, like Octane, also uses GPU memory to hold the image you are rendering until it's ready to be saved. Also, I ran out of memory doing animations if I left the preview window open, as it was holding on to images: after a few frames, it was over to the CPU. I haven't tried it with the latest version yet, though.

    Hmmm, I'm thinking the CPU is making the calculations for each strand of hair as they move from frame to frame, whereas the GPUs are only handling the light bouncing, more or less. I should do a test with no movement in the hair.

  • Tobor Posts: 2,300

    Hmmm, I'm thinking the CPU is making the calculations for each strand of hair as they move from frame to frame, whereas the GPUs are only handling the light bouncing, more or less. I should do a test with no movement in the hair.

    There are some behind-the-scenes functions that nVidia does not disclose due to trade secrets, though I'd be somewhat surprised this is one of them. In professional/commercial settings, which Iray is primarily targeted at, there are only GPU renders through a shared GPU-based server. These servers contain 10 or more high-end cards, each with thousands of CUDA cores and many gigabytes of VRAM, so that each card can separately handle the full scene.

    You can test your theory by specifically turning off CPU for the render. You'll still get some CPU activity to manage the process, but the log will show it's not used for rendering.

    Kevin is right that during the scene building process the GPU is not called upon. Despite what D|S says in its info box, rendering has not actually begun yet, and the GPU is only called on in a token manner during this.
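
For anyone wanting to run the test Tobor describes, the Daz Studio log (Help > Troubleshooting > View Log File) is where Iray reports which devices it actually rendered with. Below is a small sketch for scanning a saved copy of that log; the exact wording of Iray's log messages varies between versions, so the search patterns here are assumptions to adjust against your own log file.

```python
import re

# Substrings that tend to appear on Iray device lines. NOTE: these are
# assumptions -- exact log wording differs across Iray/Daz Studio
# versions, so adjust the patterns to match your own log file.
DEVICE_PATTERNS = [r"CUDA device", r"\bCPU\b", r"rend info"]

def find_device_lines(log_text):
    """Return the log lines that mention render devices."""
    pattern = re.compile("|".join(DEVICE_PATTERNS))
    return [line for line in log_text.splitlines() if pattern.search(line)]

# Example: scan a log you have saved to disk.
# with open("log.txt", encoding="utf-8", errors="replace") as f:
#     for line in find_device_lines(f.read()):
#         print(line)
```

If the only device lines that show iterations are your GPUs, that would support the conclusion that any CPU load you see is setup and management work rather than actual rendering.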

  • FrankTheTank Posts: 1,131
    edited March 2017
    Tobor said:

    Hmmm, I'm thinking the CPU is making the calculations for each strand of hair as they move from frame to frame, whereas the GPUs are only handling the light bouncing, more or less. I should do a test with no movement in the hair.

    There are some behind-the-scenes functions that nVidia does not disclose due to trade secrets, though I'd be somewhat surprised this is one of them. In professional/commercial settings, which Iray is primarily targeted at, there are only GPU renders through a shared GPU-based server. These servers contain 10 or more high-end cards, each with thousands of CUDA cores and many gigabytes of VRAM, so that each card can separately handle the full scene.

    You can test your theory by specifically turning off CPU for the render. You'll still get some CPU activity to manage the process, but the log will show it's not used for rendering.

    Kevin is right that during the scene building process the GPU is not called upon. Despite what D|S says in its info box, rendering has not actually begun yet, and the GPU is only called on in a token manner during this.

    Yes, as stated in my original question, I always have the CPU off for rendering. I should have been more specific about when I notice this slowdown. Before I render an animation, I always set the active viewport to Iray and let the scene fully render in the viewport before hitting Render. This preloads the scene on the card, at the expense of some memory overhead, and then each subsequent frame of animation typically renders much faster, since only the changes between frames are sent to the GPU; otherwise the entire scene is reloaded each frame. This works great for most things, usually allowing me to render a frame in 30 seconds to a minute.

    But when I have something like the Streaming Hair, with a lot of geometry, or I'm rendering something with heavy transparencies, like Iray Clouds by Stonemason, I run into this issue. And like I said, the scenes fit on the GPU; I've verified this. So I won't beat a dead horse any more. I was just hoping I had missed some trick to speed things up further.

    Post edited by FrankTheTank on
  • Tobor Posts: 2,300
    edited March 2017

    You should instead do an initial render to a window (cancel after just one iteration), and keep that window open while you do successive renders. The viewport render isn't a full Photoreal render, and there's no telling whether all of the scene database and/or materials get fully loaded. I don't think anyone outside nVidia knows the answer to that one, and the developer docs don't mention it.

    As long as the initial window remains open the scene database will not unload from RAM. You can verify this by looking at the task monitor. There's no need to do a complete render to the initial window. Once the iterations start, you know the scene is loaded into any and all devices that will take it. Iray does not appear to "learn" from its previous frames and do a faster subsequent ray tracing, but that would be a helpful feature if it did. 

    For speed issues between CPU and GPU, there have been some observations that Iray appears to synchronize to the slowest rendering device you have. In a GPU/CPU system, that would be the CPU. I am not sure of this theory, or why nVidia would even implement it, but some users report that when they have two GPU cards, and one is an older slower model, they can sometimes get faster renders simply by turning it off. I think it would be card/core dependent, so YMMV.

    How and where the CPU is utilized during the render when it's been deselected is a secret known only to Daz and nVidia. But I think it's a fair conclusion that Iray isn't sneaking in a few iterations from the CPU even if you have deselected it. Does it do things like prepare data for the GPU? It's possible. But again, the whole idea behind Iray for commercial/professional markets is to offload from the user's CPU and sell those $50,000 rendering boxes.

    Post edited by Tobor on
  • Here's info on the VCA - http://www.nvidia.com/object/visual-computing-appliance.html
    A 20-core Xeon and 256 GB of system memory - the GPUs have their own work to do.

    And http://www.nvidia.com/object/vca-for-iray.html

    "Every Iray plugin and native integration includes the ability to send rendering work directly to the Quadro VCA instead of using the local system resources. Within the Iray settings interface, you must specify the IP address of your visual computing cluster. When connected, texture and geometry data is transferred and cached onto the cluster. The visual computing cluster is responsible for dividing the render work among all the VCAs in the cluster and collecting the results. The final rendered image is interactively streamed to the application in either 16-bit lossless, JPEG, or H.264."

  • FrankTheTank Posts: 1,131
    Tobor said:

    You should instead do an initial render to a window (cancel after just one iteration), and keep that window open while you do successive renders. The viewport render isn't a full Photoreal render, and there's no telling whether all of the scene database and/or materials get fully loaded. I don't think anyone outside nVidia knows the answer to that one, and the developer docs don't mention it.

    As long as the initial window remains open the scene database will not unload from RAM. You can verify this by looking at the task monitor. There's no need to do a complete render to the initial window. Once the iterations start, you know the scene is loaded into any and all devices that will take it. Iray does not appear to "learn" from its previous frames and do a faster subsequent ray tracing, but that would be a helpful feature if it did. 

    For speed issues between CPU and GPU, there have been some observations that Iray appears to synchronize to the slowest rendering device you have. In a GPU/CPU system, that would be the CPU. I am not sure of this theory, or why nVidia would even implement it, but some users report that when they have two GPU cards, and one is an older slower model, they can sometimes get faster renders simply by turning it off. I think it would be card/core dependent, so YMMV.

    How and where the CPU is utilized during the render when it's been deselected is a secret known only to Daz and nVidia. But I think it's a fair conclusion that Iray isn't sneaking in a few iterations from the CPU even if you have deselected it. Does it do things like prepare data for the GPU? It's possible. But again, the whole idea behind Iray for commercial/professional markets is to offload from the user's CPU and sell those $50,000 rendering boxes.

    Thanks for the detailed response, Tobor; some useful info there. Much appreciated.
