Daz Studio Iray - Rendering Hardware Benchmarking


Comments

  • oddbob Posts: 397

    New build, new benchmark.

    Daz Studio 4.21.1.48 Pro beta public build

    Watercooled system, all stock settings for CPU and GPU.

    System/Motherboard: Asus Strix Z790-H
    CPU: Intel 13700k
    GPU: Inno 3D Ichill Frostbite 4090
    System Memory: 64gb (2 x 32) Corsair DDR5 6600 CL32
    OS Drive: WD SN850X 2TB PCIe NVMe Gen4
    Asset Drive: WD SN750 1TB PCIe NVMe Gen3
    Operating System: Win 11 Home 10.0.22621 Build 22621
    Nvidia Drivers Version: 531.68 game ready
    PSU: Asus Thor 850w - max reported system power use was 424w during benchmark


    2023-04-26 09:14:53.261 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 4090): 1800 iterations, 0.723s init, 68.622s render
    2023-04-26 09:14:48.226 [INFO] :: Total Rendering Time: 1 minutes 10.74 seconds
    Rendering Performance: (1800/68.622) = 26.23 Iterations Per Second
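The iteration rate reported above is just the iteration count divided by the pure render time from the Iray "rend info" log line. A small script can pull both numbers out automatically (a sketch; the regex is matched against the log line format shown above, not an official Daz/Iray API):

```python
import re

# Parse the "rend info" line Iray writes to the Daz Studio log and
# compute iterations per second (iterations / render seconds).
LINE = ("2023-04-26 09:14:53.261 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   "
        "rend info : CUDA device 0 (NVIDIA GeForce RTX 4090): "
        "1800 iterations, 0.723s init, 68.622s render")

m = re.search(r"(\d+) iterations, ([\d.]+)s init, ([\d.]+)s render", LINE)
iterations = int(m.group(1))
init_s = float(m.group(2))
render_s = float(m.group(3))

rate = iterations / render_s
print(f"{rate:.2f} iterations per second")  # 26.23
```

Note that the rate deliberately excludes the init time, which is why it can disagree slightly with numbers derived from "Total Rendering Time".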


    With AI overclocking enabled for the CPU (opportunistic boost raising P-cores/E-cores from 5400/4200 to 5800/4400) and the GPU at 1.1 V (up from 1.05 V) with +100 core and +500 VRAM, it hit about 27.5 iterations per second, so about 5% better. That wasn't a proper test, just playing.

    With the same GPU on a 10700k with 64gb DDR4 3200 rendering was about 25 iterations per second but init times were much longer.

  • outrider42 Posts: 3,679
    There should be no difference in rendering time if the CPU is not being used as a render device. If there is a difference, were you using a different version of Daz? If the water cooling is new, thermals may have played a role.
  • oddbob Posts: 397

    outrider42 said:

    There should be no difference in rendering time if the CPU is not being used as a render device. If there is a difference, were you using a different version of Daz? If the water cooling is new, thermals may have played a role.

    Cooling is the same, GPU is the same, tried the same version of DS, and current windows and Nvidia driver versions. Ran it multiple times and the result is repeatable.

    Why shouldn't there be any difference in render times?

  • RayDAnt Posts: 1,140
    oddbob said:

    outrider42 said:

    There should be no difference in rendering time if the CPU is not being used as a render device. If there is a difference, were you using a different version of Daz? If the water cooling is new, thermals may have played a role.

    Cooling is the same, GPU is the same, tried the same version of DS, and current windows and Nvidia driver versions. Ran it multiple times and the result is repeatable.

    Why shouldn't there be any difference in render times?

    Assuming @outrider42 is talking specifically about your 10700K system results: Iray uses whatever render-capable devices (Nvidia GPUs, Intel/AMD CPUs) you have enabled in the settings as almost completely independent units, which is why VRAM requirements tend to be so high. Each GPU relies solely on its own internal resources and limitations to do its job. The flip side is that a different host hardware platform should make virtually no rendering performance difference on any single GPU, assuming identical software versions and identical operating conditions (having other applications open while rendering always has the potential to skew results).
  • outrider42 Posts: 3,679
    Because the CPU isn't rendering, simple as that. All the data is loaded into the GPU once, and it stays in VRAM for the entire render, so the CPU isn't doing anything other than handling basic traffic. GPU rendering by its nature is not dependent on the CPU or other specs outside the GPU. We have historical data of the same GPUs in different machines getting the same performance. Also, when I built my new PC a couple of years ago, I ran the same tests with 2 different 1080tis, and they scored within a second of their previous times, both together and separately. There was no difference, and my new PC is miles better than the old one (about 7 generations newer, a massive gap). Even different GPUs of the same model, clocked the same, get similar performance in totally different rigs with wildly different specs. The only time there might be a difference is with multiple GPUs, as the GPUs need to talk to each other more. Even then, it generally requires more than 2 GPUs before a real difference can be observed.

    So if you are getting something different, that's unprecedented and suggests an entirely new bottleneck in Iray.
  • outrider42 Posts: 3,679
    BTW, I noticed that the new version of Daz may not respect the render settings when loading a scene saved in a previous version. Like our bench scene file, which was saved a long time ago.

    So it is important to verify that all Iray settings are the same, as it may be loading settings from your previous instance of Daz rather than what is in the scene file. Compression settings are also not saved in a scene file; make sure they are set to their default values for the test.

    Compression settings can actually alter load times a little when a scene is loaded into VRAM.
  • oddbob Posts: 397

    RayDAnt said:

    oddbob said:

    outrider42 said:

    There should be no difference in rendering time if the CPU is not being used as a render device. If there is a difference, were you using a different version of Daz? If the water cooling is new, thermals may have played a role.

    Cooling is the same, GPU is the same, tried the same version of DS, and current windows and Nvidia driver versions. Ran it multiple times and the result is repeatable.

    Why shouldn't there be any difference in render times?

    Assuming @outrider42 is talking specifically about your 10700K system results: Iray uses whatever render-capable devices (Nvidia GPUs, Intel/AMD CPUs) you have enabled in the settings as almost completely independent units, which is why VRAM requirements tend to be so high. Each GPU relies solely on its own internal resources and limitations to do its job. The flip side is that a different host hardware platform should make virtually no rendering performance difference on any single GPU, assuming identical software versions and identical operating conditions (having other applications open while rendering always has the potential to skew results).

    That's interesting, thanks. I'm seeing two CPU cores maxed out while rendering the benchmark scene. Is this usual or a foible of the Nvidia 40 series drivers?

  • oddbob Posts: 397

    outrider42 said:

    Because the CPU isn't rendering, simple as that. All the data is loaded into the GPU once, and it stays in VRAM for the entire render, so the CPU isn't doing anything other than handling basic traffic. GPU rendering by its nature is not dependent on the CPU or other specs outside the GPU. We have historical data of the same GPUs in different machines getting the same performance. Also, when I built my new PC a couple of years ago, I ran the same tests with 2 different 1080tis, and they scored within a second of their previous times, both together and separately. There was no difference, and my new PC is miles better than the old one (about 7 generations newer, a massive gap). Even different GPUs of the same model, clocked the same, get similar performance in totally different rigs with wildly different specs. The only time there might be a difference is with multiple GPUs, as the GPUs need to talk to each other more. Even then, it generally requires more than 2 GPUs before a real difference can be observed.

     

    So if you are getting something different, that's unprecedented and suggests an entirely new bottleneck in Iray.

    Thanks for the info. I'd suspect any bottleneck might be more to do with incomplete 40 series support if the behaviour is unusual for DS.

    I retried the benchmark after changing all the settings to default, opening the scene and looking for changes or anything that looked odd. Still getting 26 iterations and change.

  • outrider42 Posts: 3,679

    Well, we are only talking about a 1 iteration per second difference, though, correct? That is really small considering this is going from 25 to 26. It is hard to say exactly what the cause is. Since this is a fresh new machine, perhaps it has a clean driver install, who knows. There are other 4090 benchmarks in the thread to compare to. One user got over 31 iterations per second, but that was with Daz 4.16. The same user dropped to just under 24 iterations in 4.21, and got back to almost 27 with a small memory overclock, but that is still a noticeable difference from 4.16. His 4.21 was not exactly the same as yours, either; I think it was the previous build. His GPU was not watercooled.

    Another thing to consider is that these cards are getting pretty fast, and that is compressing the benchmark time. You are only running for about a minute, so even a tiny deviation of a second or two can alter the iteration count. So overall I don't think it really means much. If it were a wider gap, or if it translated to larger and longer renders, then that would be a different story. But for now I think it is just an outlier that falls within statistical variance. I'd like to see if more people experience the same thing, but it is hard to replicate since it requires a new build with the same GPU and version of Daz.

    Also, just so people know, Iray 2022.1.7 has been released by the Iray developers, the final version 2022 release. They also have the first Iray 2023.0.0 BETA as well. When Daz will get these is hard to say. Watch the Daz beta release threads.

  • LenioTG Posts: 2,118

    Anyone with the RTX 4070 yet?

    I know it has been criticized because of the small price difference from the 4070 Ti, but in Italy the gap is huge (600€ vs 900€). It still has plenty of VRAM, and I guess it would be a big jump from my RTX 2070 Super.

  • outrider42 Posts: 3,679

    Ah, do we have a current bench for the 2070 Super? If not, could you please post one? That would also help gauge how much faster the new cards can be. 

    There shouldn't be a big difference between the 4070 and 4070ti. They both offer 12gb, so the only difference is speed. I found a V-Ray benchmark that showed the 4070ti as being 19% faster with ray tracing enabled, and 26% faster with pure CUDA. So I would guess the difference between them with Iray will be in that ballpark, depending on the scene.

    They will certainly be a big upgrade over any 2000 series. The 3060 is almost as fast as a 2080ti, the leap between 2000-3000 was pretty big. The two 4070ti benches we have are a little slower than a 3090. The fastest 4070ti ran the bench in 127 seconds (14.175 iterations per second), while my 3090 can do it in 107 seconds (16.70 iterations per second). So you take maybe 20% off the 14 iteration number.

    Now at that point, I wonder how it compares to the existing 3080 12gb and 3080ti, as these are not far off the 3090. It is very possible that these cards are still faster than the 4070. But they may not be available anymore, at least not new.

    There is a possibility that Iray improves 4000 series performance, since the dev team admitted they were working on this. So the 4000 cards might get faster in the future, but I would not count on it. Also, this benchmark is not definitive; actual performance can vary based on how you build your scenes. So the 4070 could pull ahead of the 3080s in your particular scenes. It is interesting that the 4070ti showed a larger gap in pure CUDA performance. That suggests scenes with more complex shaders might show a wider gap between the 4070ti and 4070. What is a complex shader? Skin; there are no shaders more complex than skin. Clear fluids like water are also difficult, and downright brutal if you use caustics.

  • Retlaw Posts: 1
    edited May 2023

    LenioTG said:

    Anyone with the RTX 4070 yet?

    I know it has been criticized because of the low price difference with the 4070 Ti, but in Italy the price cut is huge (600€ vs 900€). It still has plenty of VRAM and I guess it would be a big jump from my RTX 2070

    System Configuration
    System/Motherboard: MSI PRO Z790-P
    CPU: Intel® Core™ i7-13700K
    GPU: MSI GeForce® RTX 4070 12GB
    System Memory: 64GB (4x16GB) DDR5-5200MHz
    OS Drive: 2TB Solidigm P41 Plus M.2 NVMe PCIe SSD
    Asset Drive: 2TB Solidigm P41 Plus M.2 NVMe PCIe SSD
    Power Supply: Corsair RM850X 850W 80+ Gold
    Operating System: Windows 11 Home 64bit
    Nvidia Drivers Version: 531.61
    Daz Studio Version: 4.21.1.48 Pro Public Build
    Optix Prime Acceleration: STATE (Daz Studio 4.12.1.086 or earlier only)

    Benchmark Results
    2023-05-14 18:00:41.944 [INFO] :: Finished Rendering
    2023-05-14 18:00:41.977 [INFO] :: Total Rendering Time: 2 minutes 24.33 seconds
    2023-05-14 18:00:50.513 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 4070): 1800 iterations, 0.812s init, 141.528s render
    Iteration Rate: (12.71) iterations per second
     

    Post edited by Richard Haseltine on
  • outrider42 Posts: 3,679

    The 4070 is surprisingly close to the 4070ti if the numbers for each are correct. I'd also be curious what the power numbers are on the 4070 while rendering.

    Before jumping at a 4070, though, there are rumors that the 4060 and 4060ti will have 16gb versions available. Those who need VRAM might want to watch for these; they could be launching in late May and July. We might have another goofy situation like the 3060 having more VRAM than the 3070, 3080 (10gb launch model), and 3060ti. Keep in mind this is just rumor and may not happen. But it does sound legit given the poor sales of GPUs right now and the growing problem of 8gb not being enough for several new games. If it does happen, the 16gb 4060 and 4060ti would instantly become some of the most budget-friendly Iray cards available.

  • soul92a Posts: 2

    So a 4070 manages 12.7 iterations/sec and a 4090 does 26.2 iterations/sec? That does not make a lot of sense based on specs.

    And, of course, all of that is 20% below what version 4.16 would do. Bit ouchie. Daz honestly needs to regain all its lost speed and add proper 40xx utilization; the competition is not exactly standing still.

     

     

  • Richard Haseltine Posts: 101,551

    soul92a said:

    So a 4070 manages 12.7 iterations/sec and a 4090 does 26.2 iterations/sec? That does not make a lot of sense based on specs.

    And, of course, all of that is 20% below what version 4.16 would do. Bit ouchie. Daz honestly needs to regain all its lost speed and add proper 40xx utilization; the competition is not exactly standing still.

    Iray is developed by nVidia, not Daz.

  • darkhound1_2c7433f604
    edited May 2023

    In case anyone is interested in some more recent 3080ti 12GB numbers with the latest Daz and 4.16:

    System Configuration
    System/Motherboard: MSI Z390 Tomahawk
    CPU: Intel® Core™ i7-9700K
    GPU: MSI GeForce® RTX 3080ti 12GB
    System Memory: 32GB (2x16GB) DDR4
    OS Drive: 500GB Samsung 970 evo M.2 NVMe PCIe SSD
    Asset Drive: 2TB Crucial MX500 sATA SSD
    Power Supply: 1100W BeQuiet 80+ Gold
    Operating System: Windows 10 Pro 64bit
    Nvidia Drivers Version: 531.41
    Daz Studio Version: 4.21.1.48 Pro Public Build, 4.16.0.3 Pro Public Build

    Benchmark Results

    4.16.0.3 at 85% power limit, that's around 300 Watts
    2023-05-17 17:01:05.982 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3080 Ti):      1800 iterations, 2.032s init, 95.483s render
    Iteration Rate: (18.85) iterations per second

    4.16.0.3 at 65% power limit, that's around 220 Watts (my regular setting)
    2023-05-17 16:56:23.851 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3080 Ti):      1800 iterations, 2.088s init, 100.954s render
    Iteration Rate: (17.83) iterations per second

    4.21.1.48 at 85% power limit, that's around 300 Watts
    2023-05-17 17:21:54.657 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3080 Ti): 1800 iterations, 1.098s init, 112.296s render
    Iteration Rate: (16.03) iterations per second
    4.21.1.48 at 65% power limit, that's around 220 Watts (my regular setting)
    2023-05-17 17:18:24.603 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3080 Ti): 1800 iterations, 1.239s init, 117.035s render
    Iteration Rate: (15.38) iterations per second

    Going to a 100% power limit doesn't speed things up beyond random variation. All it does is use more power and create more heat.
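One way to read the 4.21.1.48 numbers above is as an efficiency trade-off: iterations per watt at each power limit. This is a rough sketch; the wattages are the poster's approximate readings, not measured values:

```python
# Iterations-per-watt for the 3080 Ti results above (Daz 4.21.1.48 runs;
# wattages are the poster's approximate figures for each power limit).
results = {
    "85% limit (~300 W)": (16.03, 300),
    "65% limit (~220 W)": (15.38, 220),
}
for label, (rate, watts) in results.items():
    print(f"{label}: {rate / watts * 1000:.1f} iterations/s per kW")
```

By this arithmetic, dropping from the 85% to the 65% limit costs about 4% render speed while cutting power by roughly 27%.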

    Post edited by darkhound1_2c7433f604 on
  • outrider42 Posts: 3,679

    Thank you for the 3080ti benchmark; I imagine several people are curious how it compares to the 4070ti and 4070, since all of these have 12gb. A new 3080 12gb bench would be interesting to see as well.

    Ampere is also a bit different than most other generations with how aggressive Nvidia was with the clock speeds. I think there is a very strong argument that Nvidia pushed most of the 3000 series well beyond their peak efficiency curves. Undervolting and reducing power has always been a thing, but I don't think it has ever been quite so effective as with Ampere. It really is beneficial to look at these power reduction options for Ampere owners.

    At any rate, the 4070ti is slower than a 3080ti, which is disappointing. Obviously, a 4070 is slower still. The spread is not super wide, though, so how much of a difference it makes could vary from scene to scene.

    The 4070 is however a big step above the 12gb 3060. So in the "12gb battle", the 3060 is well below. But the 3060 has been cheaper than all of these other options, too. The 4070 is also extremely power efficient, so people who face high electricity costs could see more benefits with its power savings.

    The 4060 and 4060ti are now official. The 4060ti will have both an 8gb and a 16gb version, and the only difference is the capacity; there are no other spec bumps for the 16gb version (honestly refreshing to see). The 4060, however, only has an 8gb model, at least for now. There were rumors swirling that a 16gb version of the 4060 exists, but perhaps Nvidia changed its mind. They may also be keeping a potential 16gb 4060 in their back pocket should the 16gb 4060ti prove to be a better seller. The prices could be better on these cards.

    4060 8gb - $300 July

    4060ti 8gb - $400 May 24

    4060ti 16gb - $500 July

    Yep, they are charging a whole $100 more for the extra 8gb of VRAM. That seems a bit much to me. The whole lot is not fantastic, but that has been the norm for the 4000 series as a whole. Even so, the 4060ti is still compelling for content creators. You should be getting roughly 3070 performance out of this card, potentially with twice the VRAM... but the 3070 launched at $500, too, so it's not a really great deal 2+ years later. Still, 16gb from Nvidia is something we've never had at this price before. On the good side, the 4060 only uses 115 Watts TDP, while the 4060ti 16gb only uses 165 Watts. At any rate, that isn't for this thread; it will be good to see what these cards can do in the benchmark when they release.

  • soul92a Posts: 2
    edited May 2023

    System Configuration
    System/Motherboard: MSI Z590-A PRO
    CPU: Intel® Core™ i7-11700KF
    GPU: PNY 4080 XLR8 EPIC
    System Memory: 32GB (2x16GB) DDR4
    OS Drive: 1TB Samsung EVO 870
    Asset Drive: 1TB Samsung EVO 870
    Power Supply: 1000W Corsair RM1000x Gold+
    Operating System: Windows 10 Pro 64bit
    Nvidia Drivers Version: 531.68
    Daz Studio Version: 4.16.0.3 Pro Public Build

    Benchmark Results

    100% Power.
    Iteration Rate: (22.20) iterations per second

    100% Power, +1000mhz mem
    Iteration Rate: (23.64) iterations per second

    75% Power, +1000mhz mem
    Iteration Rate: (23.60) iterations per second

    60% Power, +1000mhz mem
    Iteration Rate: (23.43) iterations per second

    50% Power, +1000mhz mem
    Iteration Rate: (22.89) iterations per second

    As can be seen, the 4080 does not start to meaningfully react until below a 60% power limit, and does not really show it until approaching 50%. The reason for this exceedingly strange behavior is that the power draw from rendering is only ~210 watts of the 320 watts possible, or ~65%.

    This does not change whether the power limit is 100% or anywhere down to the card's actual utilization rate of 65%. So for rendering there is no purpose in power limiting a 4080; it is in any case taking a rather relaxed vacation while rendering, no matter what.

    A side effect of this semi-vacation state is that the GPU temperature does not exceed 55C, even though it is an air-cooled GPU in a 2-GPU setup (which would otherwise be a good reason to power limit: less heat).

    Confronted by my GPU sipping a Cuba Libre on a Hawaii vacation instead of working, I decided to get a bit nasty and overclock it more. Which turned out a bit differently than expected.

    2023-05-20 18:46:15.216 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 4080):      1800 iterations, 1.664s init, 70.925s render

    100% Power (actually 65%), +2000mhz mem, +250clock
    Iteration Rate: (25.37) iterations per second

    It did a 14% boost from stock, with no voltage fiddling, which combined with the old but much faster Daz 4.16 made it more or less behave as a stock 4090 would in the newer but slower 4.21.x Daz versions.

    Which, while nice, is probably not quite how things are supposed to work.

    I had it render a ton to see if that was stable, and it was. Strange considering what gamers report a 4080 will clock to, but here we are: overclock away if rendering with a 4080, I guess (and keep it away from rum and cola until Daz and Iray are further optimized for 40xx, that too).

    Post edited by soul92a on
  • outrider42 Posts: 3,679

    Interesting data.

    There is a possibility that the cache is playing a role in reducing power draw. One of the biggest changes to Lovelace is a massive L2 cache upgrade. The 3090 has 6MB of cache, while the 4090 has 72MB, a factor of 12. The 4080 has 64MB of cache, in fact all of the 4000 series has significantly more cache than the 3090 does. Even the lowest announced model, the 4060, has 24MB of cache, 4 times that of the previous flagship.

    Nvidia has said that the increased cache results in far fewer calls to VRAM. If you look at the 4060ti release specs, Nvidia makes a point of showing 2 memory speeds: 288 GB/s, along with an "effective speed" of 554 GB/s. They come to this conclusion because the 4060ti has 32MB of cache, and this reduces traffic to VRAM by almost half.
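One way to sanity-check that "effective speed" figure: if a fraction h of memory requests hit the L2 cache, the raw bandwidth stretches to raw / (1 - h). Treating Nvidia's quoted numbers that way (this model is an assumption on my part, not Nvidia's published methodology):

```python
# Back-calculate the cache hit fraction implied by the 4060ti figures:
# effective_bandwidth = raw_bandwidth / (1 - hit_fraction)
raw_gbps = 288.0        # actual memory bandwidth, GB/s
effective_gbps = 554.0  # Nvidia's quoted "effective" bandwidth, GB/s

hit_fraction = 1 - raw_gbps / effective_gbps
print(f"implied cache hit fraction: {hit_fraction:.0%}")  # ~48%, "almost half"
```

The implied hit fraction lands at about 48%, which matches the "reduces traffic to VRAM by almost half" claim.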

    I got to thinking about this, as it could also explain some of the power savings in Lovelace. All processors look for data in the closest and fastest locations first. When a processor doesn't find the data it wants, it goes to the next level of memory, with each level progressively farther from the processor core and slower. The longer distance also costs power. We already know that Iray pretty much stays within VRAM on the GPU for the whole render. Iray does not go to system RAM once the scene is loaded; that's why we need to have enough VRAM in the first place, and why we often see Iray using less power than a demanding video game (video games are constantly moving data everywhere). So if the L2 cache is helping to reduce traffic to VRAM with Iray, this should help reduce some of the power as well.

    However, that said, I don't think the power savings with this extra cache would fully explain why Lovelace uses so little power with Iray. I don't know how much of an impact the cache has on power draw, or for that matter how much an impact cache has on Iray. On the flip side, VRAM is using some power even when it is not being used, so how much power can really be saved by less traffic to it? We do know that Nvidia rates the 16gb 4060ti as being 165 Watts TDP, and this is only 5 Watts more than what they rate the 8gb 4060ti at which otherwise has identical specs. It is important to point out that TDP is just an estimate.

  • oddbob Posts: 397

    outrider42 said:

    I got to thinking about this, as this could also explain some of the power savings in Lovelace.

    For the 4090, the card is voltage limited. Raising the voltage from the default 1.05 V to the software maximum of 1.1 V sees a 320 W power draw while rendering at the stock power limit, and an increase in boost clocks on my card of about 90 MHz. So about 10% more power for 3% higher clocks. For some reason the VRAM is also clocked down 500 MHz when using Iray in DS.

  • outrider42 Posts: 3,679

    I take that as showing it going beyond its power efficiency curve. 10% more power just for 3% clocks is not going to help Iray as much as gaming, where clock speeds can be everything. Even if the curve holds up, which is not likely, you'd need 50% more power just to get 15% more clocks, and what performance would that actually translate to for Iray? Probably not much, if anything. If Iray (or any software) is not fully utilizing the GPU it will sip power. There are plenty of games where the 4090 uses very little power because of various factors, in particular CPU bottlenecks. That shouldn't be a factor with GPU rendering in Iray, since the CPU is not involved enough to be a bottleneck. 

    The 4090 sips power in other rendering applications as well, so it is not just Iray. However other render applications have reported larger gains for Lovelace than what we see with Iray, that is the key difference. If the Iray performance was more like others, it wouldn't be a question. So it all starts from that. All indicators show Iray is not fully utilizing Lovelace, and even the dev team has said so, if that changes then power draw will go up in a more natural way. Giving it more voltage at this time is not going to do much until they resolve whatever the issue is. Even if they do figure it out, I don't expect the 4090 to use a ton of power, but I would think it would be in the low to mid 300 Watt range.

  • outrider42

    I have a question.

    Which video card makes the Iray preview faster, the RTX 3090 or the 4070 Ti?

    I was deciding between the RTX 4080 and the RTX 4070 Ti because I wanted to upgrade my 3060 12GB, whose Iray preview is very slow. I already know the RTX 4080 is faster than the RTX 3090, but I think the RTX 4080 is very expensive, and I wanted a card with Iray preview speed equal to or better than the RTX 3090's, so I can improve my preview.

     


  • outrider42 Posts: 3,679

    This isn't something we test for, and honestly I don't use the Iray viewport that often for the reason you describe: it's sluggish. I don't have a 4070ti to compare, but I do have a 3090 and a 3060 in my rig, so I can compare those. My 3090 runs the benchmark faster than the 4070tis posted here, and it's not that close (about 20 seconds apart on a 1.5 minute test). Maybe in specific scenes the 4070ti can do better, but we have not seen evidence of that here. So, just FYI, the 4070ti is not an upgrade over the 3090. Thus it is unlikely the 4070ti is faster in the viewport, but that may not matter.

    There are two aspects to a render, and they are the same whether you are doing a full render or using the viewport: first the load, then the render. If you look at load times in this thread, you will see they are all over the place, because more factors are involved than just the GPU: your storage speed, RAM, bus, and CPU all factor into this. The render speed will be similar to normal rendering, so the difference between cards is still intact. The Iray preview is a bit different from photoreal mode; it takes some shortcuts to be faster. Even your monitor resolution might alter viewport performance.

    In a normal render, the load time isn't a big deal because it is such a short part of the overall time a render takes. But when you are using the viewport the load times become an issue. The viewport has to reload every change you make. When you reload, it also restarts the render all over again. It also restarts a render when you move the camera. Basically anything you touch restarts the render.

    The iteration counts are still going to be similar. Once the render starts, the 3090 will fill in pixels faster than the 3060. But the time it takes to load and get going is exactly the same between them. It takes my PC about 10 seconds to start showing something when I turn on the Iray viewport, regardless of whether I use the 3090 or 3060. To make sure it was fair I restarted Daz Studio for each one. I also ran a test using both cards, and the result was actually a second or two longer for the load. Once the pixels start to render the image fills in faster just like normal rendering. If you do not add anything to the scene, just move stuff, the scene stays loaded but still restarts rendering.

    However, the experience of using the 3090 over the 3060 was not dramatically better, IMO. The 3090 filling in pixels faster didn't seem to make a huge difference to me. I still had to wait for the pixels to restart every single time I touched something, which frustrates me to no end. I don't think using a 4070ti, or a 4090 for that matter, would make a big difference either. At least it isn't a difference I would upgrade for. I upgrade on VRAM and render speed; to me the viewport is a bonus if it is better. That is what I would look at between the 4070ti and 4080. The 4080 offers a little more VRAM, and I frequently run past 12gb, so I personally need 16 or more.

    I have a 5800x, a pcie 4.0 motherboard, and both GPUs have full x16 to work from. My assets also come from a SSD. So I think my rig is decent overall, but Iray viewport is still sluggish. I used a recent DS 4.21 to test, in part because it loads scenes faster so it makes shutting down and restarting Daz easier for testing. The version of Daz might also affect viewport performance, too.

    My suggestion for you is to watch some youtube videos of people using Daz. You might find them using the viewport, and get an idea of its performance with their hardware. The 3090 has become a popular Daz card, I know Jay has a 3090 and some kind of pro Intel CPU. IT Roy has a 3090, and I think rdaughter has a 3090, too. Check out some of their videos and watch for them to use the Iray viewport. You can ask them what they have in their rig.

  • Thanks for the response, outrider42.

    My doubt about the video cards.

    It's because I have a friend who has an RTX 3090 too.

    He streams on Discord from time to time so I can see his screen, and the scene in his viewport loads very quickly; within 1 or 2 seconds the viewport looks identical to the render he is about to do.

    When I use my RTX 3060, it takes about 1 minute for the viewport to match the final image, and because of that it takes me a long time to know whether the lighting in the scene is good or not.
    That's why I wanted a new video card: so I can see the viewport faster and adjust the lighting more quickly in my scenes, which usually have somewhat complex environments and 2 figures.

    Currently 12gb of VRAM serves me very well, and I don't mind that the rendering time is a bit long, but what makes me angry is the loading time of the Nvidia Iray viewport; that's why I wanted a card equal to or better than the RTX 3090 for that.

    I know the RTX 4080 is supposed to be better than the rtx 3090 but it's too expensive and that's why I was looking at the RTX 4070 ti for that.

    But if the rtx 4070 ti is worst compared to the rtx 3090 in the Nvidia iray viewport I will choose the rtx 4080.

    I currently have:
    GPU: RTX 3060
    CPU: Intel Core i5-10600K Processor
    RAM: 64 GB RAM
    SSD: 3 TB

  • outrider42 Posts: 3,679

    I think something else is going on, because my 3060 doesn't do that badly. While it takes a bit to load, once the pixels start showing you should get a general idea of how things look, even with only a few pixels rendered. Like I said, my PC takes only about 10 seconds to start showing something even when using the 3060 as the viewport device. The 3090 is twice as powerful as the 3060, so if your friend's image shows up that fast, there is a reason for it. They may also be using the denoiser, which can work in the viewport. And a scene can appear the same on a stream while in reality being much lighter than yours. There are many reasons things can be different. I have a 3090, too, and I have never seen it look that amazing in the viewport in just 2 seconds unless I was rendering an HDRI. Though I render in the 4000-5000 pixel range, so that might make a difference.

    Something could be clogging up your load times. I don't think the hardware is the issue, at least looking at your specs. The only real difference is that you have PCIe 3.0, which is slower than 4.0, but that should not hurt loading so drastically, and I don't think the 3060 takes much advantage of 4.0 anyway.

    There may be an issue with how your assets or Daz Studio are installed; I don't know. But I don't think buying a new GPU will actually solve your issue. Are you sure the 3060 is checked as the interactive device? The 10600K has a small iGPU, so Daz might be using that somehow. Make sure the 3060 is checked and that the CPU is not selected as a rendering device; having both checked could be slowing things down, too.

    Using the Iray viewport is very resource heavy. If you have other software running, you can bog down the response. If you use a lot of mesh smoothing on items, this will also bog down the CPU, because it has to recalculate every time you change something. Turn smoothing off while working to keep the software responsive.

    The complexity of the scene also affects viewport loading. More content logically leads to longer waits, and large textures and complex shaders hurt too (I use 8k textures a lot).

    After too many years using 2015's finest GTX Titan X (Maxwell), I've skipped Pascal, Turing, and Ampere and landed in the loving arms of Ada Lovelace. The only version of the RTX 4090 that would fit in my 2019 Mac Pro is the Founders Edition, and even that requires a power cable adapter so that the wires don't get bent and melty. I use the AMD card that came with the Mac to drive my displays, leaving the NVIDIA card to devote all its energies to rendering.

    System Configuration
    System/Motherboard: Apple Mac Pro 2019
    CPU: Intel Xeon W-3235 @ 3.30 GHz, stock (12 cores/24 threads)
    GPU: NVIDIA GeForce RTX 4090 (Founders Edition)
    System Memory: 96 GB @ 2933MHz
    OS Drive: Crucial P5 1TB
    Asset Drive: 3 * WD Green SN350 2TB configured as parity storage space
    Power Supply: stock, 1.4KW nominal, 1280W maximum continuous power
    Operating System: Windows 10 Pro, version 22H2, build 19045.2965, installed via Boot Camp
    Nvidia Drivers Version: 531.61 Studio
    Daz Studio Version: 4.21.0.5 64-bit

    Benchmark Results
    Total Rendering Time: 1 minutes 18.26 seconds
    1800 iterations, 1.222s init, 74.471s render
    Iteration Rate: 24.170 iterations per second
    Loading Time: 3.789 seconds

    And here's how it did on the current Beta:

    Daz Studio Version: 4.21.1.48 64-bit Public Build

    Benchmark Results
    Total Rendering Time: 1 minutes 8.89 seconds
    1800 iterations, 3.063s init, 63.275s render
    Iteration Rate: 28.447 iterations per second
    Loading Time: 5.615 seconds
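
    For anyone computing their own numbers from the log: the iteration rate reported above is just the iteration count divided by the render time from the Iray log line (init time excluded). Here is a quick sketch in Python; the log-line format is taken from the excerpts posted in this thread, and the regex is my own assumption, so adjust it if your Daz/Iray version logs differently:

```python
import re

# Example Iray log fragment as posted in this thread; the exact wording may
# vary between Daz Studio / Iray versions.
log_line = "1800 iterations, 3.063s init, 63.275s render"

match = re.search(r"(\d+) iterations, ([\d.]+)s init, ([\d.]+)s render", log_line)
iterations = int(match.group(1))
render_s = float(match.group(3))

# Iteration rate = iterations / render seconds (init time is not counted).
rate = iterations / render_s
print(f"Iteration Rate: {rate:.3f} iterations per second")  # → 28.447
```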

  • outrider42

    Thanks for the tips. They really helped me.

    I ended up buying the 4080 and next week I'll post its benchmarks.

  • Petercat Posts: 2,321

    douglasmartinsxd said:

    Thanks for the response, outrider42.

    My doubt about the video cards comes from a friend who also has an RTX 3090.

    He streams on Discord from time to time, so I can see his screen: the scene in his viewport loads very quickly, and within 1 or 2 seconds it looks identical to the render he is about to do.

    With my RTX 3060 it takes about a minute for the viewport to catch up, so it takes me a long time to tell whether the lighting in the scene is good or not. That's why I want a new video card: to see the viewport faster and adjust the lighting more quickly in my scenes, which usually have fairly complex environments and 2 figures.

    Currently 12gb of VRAM serves me very well, and the rendering time is a bit long, but I don't mind that. What makes me angry is the loading time of the Nvidia Iray viewport, and that's why I want a card equal to or better than the RTX 3090.

    I know the RTX 4080 is supposed to be better than the RTX 3090, but it's too expensive, so I was looking at the RTX 4070 Ti instead.

    But if the RTX 4070 Ti is worse than the RTX 3090 in the Nvidia Iray viewport, I will choose the RTX 4080.

    I currently have:
    GPU: RTX 3060
    CPU: Intel Core i5-10600K Processor
    RAM: 64 GB RAM
    SSD: 3 TB

    I wonder if a second 3060 would help. I don't use Iray preview, but on my computer, the two 3060s I have render twice as fast as a single 3060 does. 3060s are relatively inexpensive nowadays.
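
    On the multi-GPU point: Iray distributes iterations across devices, so as long as the scene fits in each card's VRAM, iteration rates are roughly additive, which matches the near-2x result above. A hypothetical back-of-the-envelope sketch (the per-card rates and overhead factor are illustrative assumptions, not measurements from this thread):

```python
# Hypothetical per-card iteration rates in iterations/second.
rates = {"RTX 3060 #1": 9.5, "RTX 3060 #2": 9.5}

# Assume a small coordination overhead (~3%) for multi-GPU rendering;
# combined throughput is roughly the sum of the per-card rates.
overhead = 0.97
combined = sum(rates.values()) * overhead
print(f"Estimated combined rate: {combined:.1f} it/s")  # → 18.4 it/s
```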

  • outrider42 Posts: 3,679

    A 4080 will be quite an upgrade. I'm not sure about viewport performance; it will be better, but it might not be worlds better. If you keep your 3060, you can use both to get more speed: pick the 4080 as the interactive device, and use both for photoreal rendering.
