Daz Studio Iray - Rendering Hardware Benchmarking


Comments

  • skyeshots Posts: 148

    takezo_3001 said:

    Thanks, I'm so glad I got an FE version at MSRP via HotStock/BestBuy before the scalpers hacked BestBuy's current anti-bot measures!

    Also thanks for finishing my benchmark results. This card is a beast, and even rendering for 30 minutes my card did not go above 60 degrees Celsius; hell, it never broke the 55-degree threshold. BTW my case is a behemoth with three 230 mm fans and one 140 mm fan, and I try to keep my apartment at 61 degrees Fahrenheit!

    That's an awesome setup. Great job beating the bots!

  • outrider42 Posts: 3,679

    skyeshots said:

    outrider42 said:

    I think at this rate we will not see the higher VRAM capacities as long as the crypto boom is going. Nvidia can sell every single card they build instantly, so there is simply no incentive for them to build cards with more VRAM that cost more to manufacture.

     

    The only way this changes is if the crypto market crashes and Nvidia has to wake up and compete. The rumored specs for Intel's GPU lineup show multiple SKUs, with several tiers available with 8 or 16GB models. It sure would be embarrassing for Nvidia to be the only GPU maker to not be offering these capacities. It would make them look bad.

     

    But this is far into the future.

    This is a great topic. Last year when I started using Daz, I was running quad SLI with 4GB GPUs. It was sad to watch the renders roll over to CPU every time. 

    Now, moving up to the A6000 cards from the 3090s, while it does involve some compromises, the A6000 offers plenty of VRAM at 48 GB per card. If I had to do this all over, instead of building out a triple 3090 setup, I would have gone straight to quad A6000. There simply was not enough benchmarking done yet to make an informed decision. There were no Iray tests on record. Nvidia themselves were not very helpful. I hope the tests I did here help others in a similar situation or currently on the fence.

    As far as real-world workflows in Daz, my current pain point comes from saving work (which I do often). This is especially true for working with large scenes. I'm going to try a 905P Optane drive. Not sure how this will compare to drives like the Samsung 970/980 Pro in Daz, but on paper it looks like it may help. If others have current builds with Optane or thoughts on this, please jump in.

    For me the 24gb would be fantastic and probably cover everything I want to do for at least a few years. I can break my 1080ti's 11gb, but not by a huge margin. At least you can SLI two of the 3090's to get near 48gb. Adding one more 3090 down the road would open up the option to SLI two pairs of 3090s, though I shudder to think what the power draw would be on such a thing, LOL.

    I have noticed improved loading with my 870 EVO, though I also took the step of consolidating my library onto this single SSD as well. Before, I had my library split between two drives, an 860 EVO and a WD Black HDD. The 860 EVO was just 1TB so I couldn't really keep everything on it, plus it was my OS drive and had other things too. Now I have a dedicated Daz SSD that holds everything. Plus I built a new machine powered by a Ryzen 5800X, which in turn is able to access data faster than the old i5 could. All of the elements combined have reduced loading times quite a bit. This has been a pretty fantastic upgrade even though I still just have my 1080tis. Being able to use Daz itself and all my other software more efficiently makes a huge difference. Even simple things like saving a large PNG image in GIMP are much faster. It had gotten to the point where I could get up, go grab some stuff, come back and my file would still be saving. It was that bad. I personally don't think the Samsung Pro SSDs would offer any real advantage over the EVOs in practical use, but that is just my opinion. I think the biggest upgrade is just going all in on SSD; from there you get diminishing returns. I also believe there is likely a bottleneck within the Daz app itself for how quickly data can save and load.

    So while I have said many times that it is OK to build an 'unbalanced' PC for Daz because of how heavily weighted Iray is toward the GPU, that is only for people who cannot afford to build a new machine. It certainly does help having modern hardware. But at some point most people have to make a decision on what parts to focus on, because it is impossible to have it all for 3D rendering. There is literally no limit to what you can throw at 3D rendering; the only limit is your budget and perhaps power limitations. The goal of this thread is to give people information so they can hopefully make an informed decision that works best for them. Often the GPU really is the best thing one can buy for an upgrade, but right now, maybe it isn't the best time to do that unless you get lucky.

    I have a bad feeling that Ampere is going to be mostly a lost generation for many people, as the GPUs will be hard to get as long as crypto continues to boom. There doesn't seem to be any end in sight for it, and we may not find Ampere cards easy to get until the 4000 series releases. This is a genuine possibility at this point. Somebody mentioned a recession might change things; I rather think the opposite would be true. A recession might only serve to bolster crypto that much more, because instability is what drives crypto the most. But I don't want to stray off topic here.

  • vukiol Posts: 66
    edited March 2021

    System Configuration
    System/Motherboard: Asus Prime x570 pro
    CPU: AMD RYZEN 7 3700x @ STOCK CLOCKS
    GPU 1: Zotac 3060 Twin Edge OC
    GPU 2: Manli 2060s Gallardo
    System Memory: 32 GB G.SKILL TridentZ DDR4
    OS Drive: SAMSUNG EVO 850 250gb
    Asset Drive: Network
    Operating System: WIN 10 PRO 64 19041
    Nvidia Drivers Version: Studio 461.72
    Daz Studio Version: ds 4.15.0.2 64 BITS
    Optix Prime Acceleration: STATE (Daz Studio 4.12.1.086 or earlier only)

    Benchmark Results

     

    Loading 14.7 secs

    Received update to 01800 iterations after 353.914s.

    Total Rendering Time: 5 minutes 58.58 seconds


    ==================================================================================

     

     

     

    Hmm, dunno why I don't have the pants; I don't think the rendering time will change anyway.

    Optix Prime Acceleration: STATE (Daz Studio 4.12.1.086 or earlier only)

    ^ I don't know what that means exactly; I just copy-pasted it from another benchmark post, but it should be the same.

     

    Post edited by vukiol on
  • Matt_Castle Posts: 2,579

    vukiol said:

    GPU 1: Zotac 3060 Twin Edge OC
    GPU 2: Manli 2060s Gallardo

    Benchmark Results

    Loading 14.7 secs
    Received update to 01800 iterations after 353.914s.
    Total Rendering Time: 5 minutes 58.58 seconds

    Hmm, dunno why I don't have the pants; I don't think the rendering time will change anyway.

    Well, *something* has made a huge difference there, as my 3060 on its own (without a 2060S in support) rendered it in just under four minutes, not six.

    Either the lack of trousers has been a major slowdown problem (translucency and SSS are more intensive than simpler reflections), or your system has some other kind of issue - were you running with CPU rendering enabled, perhaps? It has been known to actually slow down faster GPUs.

    In either case, I would argue the benchmark needs to be fully standard to be valid. There are big differences in how long different surface types take to render.

  • vukiol Posts: 66

    No, no CPU. I'll try to fix the missing pants if I'm able to find the required product.

    Obviously you need to render the camera view at 1800x1200, right?

    I'll try again after a system restart.

  • Matt_Castle Posts: 2,579

    If you've set up your DS as stated, it should load with the correct parameters already in place, which I believe is 900x900 pixels at 1800 iterations.

  • RayDAnt Posts: 1,135

    If you've set up your DS as stated, it should load with the correct parameters already in place, which I believe is 900x900 pixels at 1800 iterations.

    As long as you have installed all the default, free content included with the standard Daz Studio install, everything should load properly. In order for the benchmark to be valid, it is essential that all the content loads correctly and the final render is 900 x 900 pixels, as the scene setup dictates.

  • outrider42 Posts: 3,679

    vukiol said:

    No, no CPU. I'll try to fix the missing pants if I'm able to find the required product.

    Obviously you need to render the camera view at 1800x1200, right?

    I'll try again after a system restart.

    You don't need to change anything when the scene loads, just hit render. As for the content, the parts all come from the various Genesis Starter Essentials. They are not all for Genesis 8, some parts may come from G2 or G3 I believe. So to be sure, just reinstall each of these. Each Essentials package contains an outfit or two. If you install manually you may have overlooked one of the downloads.

    I'm not sure if the pants would impact the time much. It kind of breaks down to this: how complicated are the shaders on the pants versus the exposed skin of the legs? If the skin shader is more complicated, then your render will be slightly longer without the pants. If the pant shader is more complicated, then your render without them will be slightly less. And there is also the complexity of the mesh as well. Skin is usually more complex to render than clothing, though I don't think the skin is too drastic here.

  • vukiol Posts: 66
    edited March 2021

    outrider42 said:

    You don't need to change anything when the scene loads, just hit render. As for the content, the parts all come from the various Genesis Starter Essentials. They are not all for Genesis 8, some parts may come from G2 or G3 I believe. So to be sure, just reinstall each of these. Each Essentials package contains an outfit or two. If you install manually you may have overlooked one of the downloads.

    I'm not sure if the pants would impact the time much. It kind of breaks down to this: how complicated are the shaders on the pants versus the exposed skin of the legs? If the skin shader is more complicated, then your render will be slightly longer without the pants. If the pant shader is more complicated, then your render without them will be slightly less. And there is also the complexity of the mesh as well. Skin is usually more complex to render than clothing, though I don't think the skin is too drastic here. 

    Of course! The starter bundles, thank you. Edit: found it. A mystery, dunno why I got the Persian top but not the pants :o

    And I found what I was doing wrong. My DS starts with the aux viewport active :) I'm so used to it that I didn't pay attention.

    Not valid because I have the PS bridge and Max open; DS was cleared and restarted by right-clicking the scene file and using Open With.

    As before, 3060 and 2060S.

    2021-03-07 05:53:28.715 Total Rendering Time: 2 minutes 23.55 seconds

    And afterwards I found the per-GPU log (yep, again I didn't pay attention before).

    2021-03-07 05:53:55.939 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 2060 SUPER): 715 iterations, 3.948s init, 136.752s render

    2021-03-07 05:53:55.940 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3060): 1085 iterations, 3.680s init, 136.593s render

    I am assuming those are the iterations each GPU was able to compute.

    Post edited by vukiol on
  • skyeshots Posts: 148

    System/Motherboard: Gigabyte X299X
    CPU: I9-10920X  3.5 GHZ
    GPU0: PNY RTX A6000
    GPU1: MSI RTX 3090 (+950 Mem)
    System Memory: 64 GB Corsair Dominator Platinum DDR4-3466
    OS Drive: 240 GB Corsair Sata 3 SSD
    Asset Drive: Same
    Operating System: Win 10 Pro, 1909
    Nvidia Drivers Version: 461.72 Studio Drivers
    Daz Studio Version: 4.15

    A6000 + 3090
    2021-03-06 18:57:59.142 Total Rendering Time: 55.90 seconds
    2021-03-06 18:58:07.572 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-03-06 18:58:07.572 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 3090):      895 iterations, 2.333s init, 50.678s render
    2021-03-06 18:58:07.583 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (RTX A6000):      905 iterations, 1.888s init, 50.524s render
    Loading Time: (55.90-50.678) = 5.222
    Rendering Performance: 1800/50.678 = 35.52 iterations/second


    A6000 + 3090 + CPU
    2021-03-06 19:14:31.972 Finished Rendering
    2021-03-06 19:14:32.018 Total Rendering Time: 51.79 seconds
    2021-03-06 19:14:40.659 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 3090):      928 iterations, 2.029s init, 46.194s render
    2021-03-06 19:14:40.669 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (RTX A6000):      837 iterations, 1.849s init, 46.876s render
    2021-03-06 19:14:40.669 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU:      35 iterations, 1.502s init, 46.721s render
    Loading Time: (51.79-46.876) = 4.914
    Rendering Performance: 1800/46.876 = 38.40 iterations/second

    The X299X came in today. Scores were consistently faster with the CPU enabled. I will try scaling out the A6000 cards next week.
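
    For anyone who wants to re-derive these numbers, the arithmetic used throughout this thread (loading time = total rendering time minus the slowest device's render time; iteration rate = 1800 divided by that render time) can be sketched in a few lines of Python. The log lines below are copied from the post above; the script is only an illustration of the calculation, not part of the benchmark itself.

    import re

    # Log excerpt copied from the post above (Daz Studio / Iray log format).
    log = """
    2021-03-06 18:57:59.142 Total Rendering Time: 55.90 seconds
    2021-03-06 18:58:07.572 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 3090):      895 iterations, 2.333s init, 50.678s render
    2021-03-06 18:58:07.583 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (RTX A6000):      905 iterations, 1.888s init, 50.524s render
    """

    m = re.search(r"Total Rendering Time: (?:(\d+) minutes? )?([\d.]+) seconds", log)
    total_seconds = 60 * int(m.group(1) or 0) + float(m.group(2))

    render_times = [float(x) for x in re.findall(r"([\d.]+)s render", log)]
    slowest = max(render_times)  # the run finishes when the slowest device finishes

    print(f"Loading time: {total_seconds - slowest:.3f} s")    # 55.90 - 50.678 = 5.222
    print(f"Iterations per second: {1800 / slowest:.2f}")      # 1800 / 50.678 = 35.52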

  • skyeshots Posts: 148

    System/Motherboard: MSI MPG Z490 Carbon EK X
    CPU: I9-10850K @ 3.6 ghz
    GPU: MSI RTX 3090 x3 +1250
    System Memory: 64 GB Corsair Dominator Platinum DDR4-3466
    OS Drive: Samsung 970 EVO SSD 1TB – M.2 NVMe
    Asset Drive: Same
    Operating System: Win 10 Pro, 20H2
    Nvidia Drivers Version: 461.72 Studio Drivers
    Daz Studio Version: 4.15

    2021-03-07 10:06:19.858 Finished Rendering
    2021-03-07 10:06:19.880 Total Rendering Time: 33.17 seconds
    2021-03-07 10:06:28.223 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-03-07 10:06:28.223 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090):      607 iterations, 1.896s init, 28.882s render
    2021-03-07 10:06:28.223 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 3090):      606 iterations, 1.961s init, 28.863s render
    2021-03-07 10:06:28.223 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 2 (GeForce RTX 3090):      587 iterations, 2.220s init, 28.683s render
    Loading Time: (33.17-28.88) = 4.29
    Rendering Performance: 1800/28.882 =  62.32 Iterations Per Second

    Updated drivers and went +1250 on VRAM on the 3x3090 setup. Substantial increase in the iteration rate, just over 20 iterations per second per card.

    Wish I could cluster these two systems.

  • Matt_Castle Posts: 2,579
    edited March 2021

    I've now rebuilt my system, and figured it was worth seeing if the system behind it had any meaningful impact on exactly the same card.

    System Configuration
    System/Motherboard: ASUS TUF X570-Plus
    CPU: AMD Ryzen 7 5800X
    GPU: Asus RTX 3060 TUF OC @ stock
    System Memory: 2x Corsair Vengeance 32 GB @ 3600 MHz
    OS Drive: Samsung 970 EVO Plus 1TB PCIe 3.0 NVMe
    Asset Drive #1: Samsung 870 QVO
    Asset Drive #2: 6TB Seagate IronWolf NAS
    Operating System: Windows 10 Pro 64-bit, Version 10.0.19042 Build 19042
    Nvidia Drivers Version: 461.72 Game-Ready
    Daz Studio Version: Daz Studio 4.15.0.2

    Benchmark Results:
    2021-03-11 00:22:04.195 Finished Rendering
    2021-03-11 00:22:04.221 Total Rendering Time: 3 minutes 41.73 seconds

    2021-03-11 00:22:32.995 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-03-11 00:22:32.995 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3060):      1800 iterations, 1.306s init, 218.992s render

    Iteration Rate: 8.214 iterations per second
    Loading Time: 2.6 seconds

    Pretty much the same as far as iteration rate (8.144 and 221.020 seconds before).

    However, I noticed various things while running this:
    - The card wasn't hitting the same core speeds or power consumption as it does in gaming benchmarks, so neither heat nor power was the limiting factor here
    - Iray was actually backing off the memory clocks by 200 MHz compared to gaming.

    Put that together with the fact that the 192-bit bus width is supposed to be something of a limitation for the 3060 anyway, and that Nvidia seems to have backed off somewhat on the 3060's spec - it's being run at 15 Gbps despite the original rumours of 16 Gbps, and no manufacturer actually makes GDDR6 in a 15 Gbps spec (the closest spec above it is, indeed, 16 Gbps)...

    ... well, while I've heard that memory bandwidth doesn't make much difference for Iray, I might as well try clocking the VRAM at its actual design spec (8000 MHz, rather than the 7300 MHz it was running at):

    System Configuration
    System/Motherboard: ASUS TUF X570-Plus
    CPU: AMD Ryzen 7 5800X
    GPU: Asus RTX 3060 TUF OC (VRAM @ +700 MHz)
    System Memory: 2x Corsair Vengeance 32 GB @ 3600 MHz
    OS Drive: Samsung 970 EVO Plus 1TB PCIe 3.0 NVMe
    Asset Drive #1: Samsung 870 QVO
    Asset Drive #2: 6TB Seagate IronWolf NAS
    Operating System: Windows 10 Pro 64-bit, Version 10.0.19042 Build 19042
    Nvidia Drivers Version: 461.72 Game-Ready
    Daz Studio Version: Daz Studio 4.15.0.2

    Benchmark Results:
    2021-03-11 00:28:47.470 Finished Rendering
    2021-03-11 00:28:47.494 Total Rendering Time: 3 minutes 30.37 seconds

    2021-03-11 00:28:50.920 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-03-11 00:28:50.920 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3060):      1800 iterations, 1.467s init, 207.603s render

    Iteration Rate: 8.67 iterations per second
    Loading Time: 2.77 seconds

    A 5.5% boost in iterations isn't shabby, given it's only an overclock on the technicality that Nvidia underclocked it in the first place.

    Interesting to note that it could have been that much closer to the 2080 Ti (which recorded 8.75 iterations/sec on this version of Iray) if the memory clocks were actually matched to the component spec.

    Taking it a bit further... well, I could get it to 8500 MHz without Iray objecting, at which point a 200.52s render translates as 8.976 iterations per second, but that is beyond official spec.
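
    For context on what those memory clocks mean in bandwidth terms, here is a quick sketch of the theoretical numbers, assuming the convention used in this post (reported memory clock in MHz times two gives the effective Gbps per pin). These are nominal figures only; actual Iray throughput depends on far more than raw bandwidth.

    # Rough theoretical bandwidth on a 192-bit GDDR6 bus at the clocks mentioned above.
    # Assumed convention (as in this post): reported memory clock in MHz x 2 = effective Gbps per pin.
    def bandwidth_gb_s(mem_clock_mhz: float, bus_width_bits: int = 192) -> float:
        effective_gbps = mem_clock_mhz * 2 / 1000   # e.g. 8000 MHz -> 16 Gbps
        return effective_gbps * bus_width_bits / 8  # bits per transfer -> bytes

    for clock in (7300, 8000, 8500):                # as-running, design spec, pushed further
        print(f"{clock} MHz -> {bandwidth_gb_s(clock):.1f} GB/s")
    # 7300 MHz -> 350.4 GB/s, 8000 MHz -> 384.0 GB/s, 8500 MHz -> 408.0 GB/s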

    Post edited by Matt_Castle on
  • outrider42 Posts: 3,679

    Sometimes the motherboard itself can impact the clocks. Especially for CPUs, some gaming mobos will automatically overclock the CPU and/or RAM in the BIOS. The same goes for the GPU as well; a lot of factors go into the calculations made to adjust clock speed. It used to be really simple: heat. Heat is still #1, but there are a lot more factors now.

    I am curious as to how VRAM speed can affect Iray. By design the entire scene is loaded once into VRAM; this is the whole reason we need enough VRAM for our scenes in the first place. Once in VRAM it basically stays there, especially in single-GPU rigs. I could maybe see how it could affect multi-GPU rigs, as the cards do need to talk to each other a bit, or perhaps GPU+CPU rendering, where again the GPU and CPU must talk to each other. It is one of the reasons why the CPU can actually slow down the GPUs' iteration rates with CPU+GPU, because the CPU is often a major bottleneck.

    But for a single GPU it doesn't make sense to me. However, while the boost may be noticeable, it is also not that huge. I would be very wary of overclocking VRAM, more so than the GPU core. The VRAM components are often not as well cooled as the rest of the card, even in supposedly high-end gaming cards. Many cards may not even have VRAM temperature sensors, in which case the user wouldn't know what their VRAM temps are. And I just feel VRAM is a more sensitive component compared to the core. A simple static charge can wipe out VRAM more easily than other components (well, maybe the HDMI ports).

    It also makes me curious how or when the VRAM speed became more of a factor with Iray. Maybe it is the new generation of GPUs, or the updates to Iray. I seem to recall seeing it tested in the past with no difference, or such a minor difference it wasn't worth it. But that was a long time ago, before RTX. It could have been that the GPU hardware couldn't take advantage of the VRAM speed with Iray. Things have changed so much. We now have dedicated ray tracing and Tensor cores on Nvidia cards, and Iray itself has switched from OptiX Prime to full OptiX. Maybe the advent of dedicated ray tracing cores is at the center of this. Ray tracing uses more memory, so perhaps this is also where the memory speed makes the difference. But I am just guessing here.

  • outrider42 Posts: 3,679
    As if on cue, there is an article about hot VRAM in a Founder's 3090. https://www.tomshardware.com/amp/news/replacing-geforce-rtx-3090-thermal-pads-improves-temps-by-25c

    So in this example you have the highest-end desktop GPU available, and Nvidia didn't bother taking a step to better cool its VRAM. While I expect third parties may do better, we can't assume they will. The 3090 does use faster and hotter GDDR6X, but the 3060 is a "value" product, and that often means cheaper cooling. The Gamers Nexus teardown of the EVGA 3060 Black shows some of this. It is not offensive, but the cooling on the Black could be better. I've seen a number of teardowns of other GPUs and it is fairly common for them to not even have a thermal pad covering all of their memory modules. It happens more often than people think. So some modules might be OK, but some might not.
  • Matt_Castle Posts: 2,579

    My specific card, the Asus TUF, is somewhat above the MSRP variants, and while it may not have the separate VRAM heatsink ASUS put on the 3080 TUF, the card is constructed in a way where I can physically see the memory modules and the thermal pads connecting them to the heatsink, which is pretty hefty (a three-fan, 2.7-slot cooler with five heatpipes is a reasonable amount for a sub-200W GPU). I suppose it is possible that they've completely cheaped out on the thermal pads, but that seems a bit paranoid.

    (If the card has VRAM temperature monitoring, I don't think I can access it with my normal hardware monitoring tools, but as it is connected to the same heatsink as the GPU, I'd expect to see some difference in that temperature if the VRAM were getting scorchingly hot.)

    For this specific card, I don't imagine that adding a moderate amount to the memory clocks is particularly hazardous, although whether I'd do it in the long run is perhaps another question.

    I am curious as to how VRAM speed can affect Iray. By design the entire scene is loaded once into VRAM; this is the whole reason we need enough VRAM for our scenes in the first place. Once in VRAM it basically stays there.

    The GPU does actually need to access all that data, and how fast it can access it is logically a factor in how fast it can do the necessary computations. However, I suspect it may well be a case of bottlenecks; if the VRAM is fast enough, then increasing the speed may not affect the GPU's speed at all.

    However, while the boost may be noticeable it is also not that huge.

    Not huge, but the 5.5% boost in Iray is much bigger than any difference I saw in gaming benchmarks, where clocking the RAM at 8000 MHz seemed to be within the margin of error.

    In any case, it seems that VRAM speed can have an impact on Iray; whether that's specific to certain cards, or whether it's a broader thing relating to how RTX versions of Iray work... well, I can't easily say. That's something other people would have to test so we had more data points. (Which could also be done by underclocking, if people want to stick entirely within spec).

  • Christoph7891 Posts: 8
    edited March 2021

    Has anyone done any benchmarking using an external GPU via Thunderbolt 3? In particular, combined with one GPU already inside the PC connected to the motherboard.

    The reason I ask is I am planning on upgrading my PC, and I am leaning towards an AMD 5950X (amazing single-thread performance with good overall performance too). I would prefer to run two 3090s, but I am concerned about the potential overheating of the top card (my current dual 1080 Ti setup is showing signs of wear, with the top card overheating and causing blue screens (video TDR failure)).

    I would prefer not to go down the water cooling route, as honestly I am not sure I want to faff around with that stuff if I want to swap GPUs at a later date.

    That leaves me with either one 3090 only, water cooling for two (which I really don't want to do), or potentially running an extra GPU via Thunderbolt 3 externally.

    I would love to know what performance hit you would get from the second GPU running via Thunderbolt 3 rather than internally.

    Thanks

     

    Post edited by Christoph7891 on
  • Matt_Castle Posts: 2,579
    edited March 2021

    Christoph7891 said:

    I would prefer to run two 3090s

     I will note that, regardless of any other issues, you will lose a major advantage of using two 3090s, in that you will not be able to pool their VRAM using NVLink, as I doubt any connector exists for the job of connecting an internal and external GPU.

    In this context, if you're already looking at investing quite so much anyway, I might actually suggest that you look into a case that is designed to orient the motherboard so that the PCIe slots face towards the top of the case. In this configuration, there would be no "top card".

    Post edited by Matt_Castle on
  • Christoph7891 Posts: 8

    It's a good point regarding the NVLink, and in truth I would love to have double the VRAM. I am pretty sure I would use it as well, as I often have multiple Daz instances doing test renders to effectively double my scene opening speed.

    But I am struggling to see how it would work practically without water cooling. Even if they are both sideways, they will always be really close together due to their size. Especially when I spend a whole day doing test renders, two 3090s are going to get crazy hot next to each other.

    Actually I did just have a thought, which is maybe I could start with one 3090, and buy a second 3090 which has built-in water cooling + a rad. That could be the top GPU, while the second has its stock fans. Perhaps that would be enough to keep things cool, with the added bonus of making NVLink a possibility.

  • outrider42 Posts: 3,679
    edited March 2021

    If you have space you could use riser cards. Just a page or two back you will find pics showing such a contraption. I personally am not on board with externals because of the cage itself. You can run into a variety of issues just getting one to work, but my main concern would be power draw. The 3090 is so power-hungry and large: will it physically fit, and can the cage supply 400+ watts?

    I know it is possible; Daz themselves (I think it was Rob?) have tested externals before. But not in a multi-GPU setup. I think that would add yet another obstacle, because now the two GPUs need to talk to each other over PCIe AND Thunderbolt. The physical distance alone has got to add some latency that could slow them down. Plus external cages cost a lot. I think there are just too many things that can go wrong if you attempt this. I think Daz also mentioned they fried a GPU trying to test it externally, LOL.

    I know my two 1080tis don't draw as much power as the 3090s, but they run well within spec. My EVGA 1080ti Black is the hotter of the two and will hit around 74C give or take. The MSI 1080ti Gaming is on the bottom and runs at stunningly low temps, hitting just 64C at most while rendering. It can stay at 60C for a long time before going up to 64C, and if the room is cool, it might stay at 60C. They are about 0.75 inch apart. The temp difference is normal, as the coolers are very different. The EVGA Black has a pretty basic cooler (Black editions are always more basic), and the temps it hits are in line with the temps it would hit by itself. When gaming it may go up to about 84C, which is exactly what the Black does alone in most situations. The MSI simply has a better cooler. If I reverse the two cards, the temps do change a bit. The MSI on top is a much fatter card, so there are just a few millimeters of a gap; they practically touch. But the temps do not climb that much; I think it leaned towards 69C at peak, I can't remember right now. So a 5 degree change, and still well within spec (Iray is not as hot as gaming).

    I saw a video recently, I think by GamersNexus, that mentioned using multiple GPUs. Steve said that the two cards will generally be fine even if they are very close together. Even a gap of several millimeters can still be enough to allow the card on top to breathe. As long as the card can pull in air, and the case itself is able to keep them supplied with air, this can still be a viable setup.

    BTW, just remember you may need a lot more RAM to take advantage of the VRAM capacity of the 3090, especially if you do Nvlink them. We had a user who was testing a 3090 with 64GB of RAM, and they crashed Daz Studio before they could fill the 3090's 24GB VRAM capacity. They only got to about 17GB of VRAM, but they were using up the 64GB of RAM and Daz crashed. So if you don't have a lot of RAM, Nvlink may be pointless. Iray compresses the scene when it sends it to VRAM, and it can be compressed quite a bit.

    Post edited by outrider42 on
  • Matt_Castle Posts: 2,579

    Christoph7891 said:

    Actually I did just have a thought, which is maybe I could start with one 3090, and buy a second 3090 which has built-in water cooling + a rad.

    I'm not speaking from first person experience, but I think I've heard that the NVlink connectors may not be in exactly the same place on different models of card, so that's another thing to bear in mind.

  • outrider42 Posts: 3,679

    Back to VRAM testing: I wanted to see if the VRAM speed impacted previous versions. I happen to still have 4.11 hanging around, so I ran that. While I was at it, I also checked my VRAM use to compare these two versions of Daz.

    First I ran the control tests to make sure my baseline was still the same.

    4.15.0.2

    Nvidia driver 461.72, everything else the same as my last Ryzen 5800X run.

    2021-03-12 21:30:22.077 Total Rendering Time: 3 minutes 23.1 seconds

    2021-03-12 21:30:24.409 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): 892 iterations, 1.469s init, 199.920s render

    2021-03-12 21:30:24.409 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti): 908 iterations, 1.929s init, 199.683s render

    That equals 9.00 iterations per second.

    VRAM:   GPU1  3786  GPU2  2975

    The second GPU listed is not driving the monitor, so all of that VRAM comes from rendering this scene.

    Then I overclocked the VRAM +500

    2021-03-12 21:21:12.034 Total Rendering Time: 3 minutes 9.59 seconds

    2021-03-12 21:21:17.887 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): 891 iterations, 1.530s init, 185.670s render

    2021-03-12 21:21:17.887 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti): 909 iterations, 1.937s init, 185.692s render

    That equals 9.69 iterations per second. The VRAM was the same. So indeed there was a gain here of 0.69 per second. That is 7.6%, I think.

    ---------------------------------------------------------------------------------------------------------------------------------

    Then I opened up my old Daz 4.11.0.236 beta. Since this is before RTX, OptiX Prime is turned on.

    Base VRAM.

    2021-03-12 21:38:50.470 Total Rendering Time: 4 minutes 22.76 seconds

    2021-03-12 21:38:56.813 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): 897 iterations, 2.105s init, 258.153s render

    2021-03-12 21:38:56.813 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti): 903 iterations, 2.025s init, 258.090s render

    That equals 6.97 iterations per second. This is not great; my actual bench runs back when 4.11 was current were much faster than this. I assume the driver updates are optimized for the new Iray and not the old Iray, and that is why 4.11 is so slow now. But back then I could do 3 minute 30 second runs in this bench. That is quite a regression.

    VRAM: 3705 and 2952  Slightly less VRAM used. But very slightly.

    VRAM overclock +500

    2021-03-12 21:46:40.105 Total Rendering Time: 4 minutes 8.61 seconds

    2021-03-12 21:46:42.876 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): 897 iterations, 1.665s init, 245.310s render

    2021-03-12 21:46:42.876 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti): 903 iterations, 1.663s init, 245.577s render

    That equals 7.33 iterations per second.

    So there is indeed a boost here as well. The gain was 0.36 per second. The gain was smaller though, at 5.2%.

    Since we do have OptiX Prime as an option, I decided to turn it off just to see. Prime still makes a difference, because the times are even slower without it. There was still a performance gain from the VRAM overclock without Prime. Interestingly, turning off Prime drops VRAM use by another 50 MB per card in this scene. The old Iray docs do say that Prime uses a little more VRAM in exchange for the speed. This was discussed in the old SickleYield bench thread, so this is not new info. But since Prime does slightly increase VRAM, and Iray RTX adds slightly more, you are looking at about 80-100 MB of additional VRAM used going from this to RTX. That is just with this scene; other scenes would certainly have different numbers. But look at the speed. The difference in speed is quite large.

    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    All of these tests have been with 2 GPUs, so I tried testing with just one, and once again found the VRAM overclock to run faster. This is back to 4.15. I am not going to run the test on 4.11 with just one card, LOL.

    2021-03-12 22:41:02.648 Total Rendering Time: 6 minutes 30.46 seconds

    2021-03-12 22:41:06.895 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti): 1800 iterations, 1.341s init, 387.745s render

    That comes to 4.64 per second. VRAM was exactly the same on this GPU as the previous 4.15 test. That is nice consistent data. Also, this test with one 1080ti is FASTER than the test with TWO 1080tis with OptiX Prime disabled. Yikes. Granted 4.11 is not as fast as it used to be right now, but the results back in the day with OptiX off were not much better. That is why people pretty much universally used OptiX Prime back in the day, unless you absolutely needed the VRAM.

    Now the +500 test.

    2021-03-12 22:33:04.063 Total Rendering Time: 6 minutes 3.63 seconds

    2021-03-12 22:33:06.939 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti): 1800 iterations, 1.421s init, 360.337s render

    4.995, may as well say 5 iterations per second. The gain here was 0.36 per second, or about 7.76%. This is almost exactly the same gain as the test with two 1080tis, which is quite interesting.

    What is even more interesting is that the gain I observed was even higher than Matt's 5.5%, even though I only set it 500 MHz higher instead of 700. I guess that is thanks to the wider bus.

    -----------------------------------------------------------------------------------------------------------------------------------------------

    I ran one more test at +800 just to see if I could. This is with both GPUs, and 4.15.

    2021-03-12 22:52:11.369 Total Rendering Time: 3 minutes 1.95 seconds

    2021-03-12 22:52:14.310 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): 893 iterations, 1.453s init, 178.617s render

    2021-03-12 22:52:14.310 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti): 907 iterations, 1.911s init, 178.660s render

    That equals 10.075 iterations per second, which is kind of wild. I never thought I would break 10 in this bench scene. Compared to the stock run that is a gain of 1.075 per second, which is an 11.9% gain. That is a pretty strong gain. I read that 1080tis could get to 6000 MHz, and I didn't even go that far. These were at 5800 MHz.

     

  • chrislb Posts: 100

    outrider42 said:

    I am curious as to how VRAM speed can affect Iray. By design the entire scene is loaded once into VRAM; this is the whole reason we need enough VRAM for our scenes in the first place. Once in VRAM it basically stays there, especially in single-GPU rigs. I could maybe see how it could affect multi-GPU rigs, as the cards do need to talk to each other a bit, or perhaps GPU+CPU rendering, where again the GPU and CPU must talk to each other. It is one of the reasons why the CPU can actually slow down the GPUs' iteration rates with CPU+GPU, because the CPU is often a major bottleneck.

    I'll try some tests with my GPU locked at a specific speed and the VRAM underclocked, at stock speed, and overclocked, to see whether VRAM speed makes a difference in render times.

    With error-correcting VRAM you can slow down your renders by overclocking it too far. That's because the VRAM will correct the errors before it crashes. I noticed this before with Daz when experimenting with VRAM speeds.

  • chrislb Posts: 100
    edited March 2021

    I tested this on the same PC I used in previous results: 3950X, 64 GB DDR4 3600 MHz RAM, and an MSI Gaming X Trio with the MSI Suprim 450 watt BIOS.

    I locked the GPU speed at 1905 MHz using the boost lock feature in overclocking software and left the monitoring tab open on another monitor to confirm GPU and VRAM speed stayed locked at their preset speed.

    The GPU is cooled with an EKWB GPU waterblock, three 360mm radiators, and a D5 pump.  One fan blows across the backplate of the GPU to keep the rear VRAM cooler.  The fans and pump were set at max speed so that the GPU never exceeded 41C during the render.

     

    System Configuration
    System/Motherboard: MSI MEG X570 ACE
    CPU: AMD R9 3950X @ Stock with PBO +200
    GPU: MSI Gaming X Trio RTX 3090 with MSI Suprim 450 watt BIOS
    System Memory: Corsair Vengeance RGB Pro 64 GB @ 3600 MHz CAS18
    OS Drive: 1TB Sabrent Rocket NVMe 4.0 SB-ROCKET-NVMe4-1TB
    Asset Drive: XPG SX 8100 4TB NVMe SSD
    Operating System: Windows 10 Pro 64 bit Build 19042.867
    Nvidia Drivers Version: 461.72
    Daz Studio Version: 4.15.0.2

     

    Results:

    3090 Gaming X Trio w/Suprim BIOS 1905 MHz GPU 8502x2(17,004 MHz) VRAM speed:

    2021-03-13 18:09:04.906 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 101.355s.

    2021-03-13 18:09:04.907 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-03-13 18:09:05.530 Total Rendering Time: 1 minutes 43.96 seconds

    2021-03-13 18:09:37.381 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-03-13 18:09:37.381 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.046s init, 100.474s render

     

    3090 Gaming X Trio w/Suprim BIOS 1905 MHz GPU 9502x2(19,004 MHz Default) VRAM speed:

    2021-03-13 18:13:45.127 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 99.512s.

    2021-03-13 18:13:45.128 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-03-13 18:13:45.756 Total Rendering Time: 1 minutes 42.11 seconds

    2021-03-13 18:13:47.488 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-03-13 18:13:47.489 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.046s init, 98.633s render

     

    3090 Gaming X Trio w/Suprim BIOS 1905 MHz GPU 10508x2(21,016 MHz) VRAM speed:

    2021-03-13 18:18:03.157 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 88.201s.

    2021-03-13 18:18:03.162 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-03-13 18:18:03.779 Total Rendering Time: 1 minutes 30.80 seconds

    2021-03-13 18:18:05.783 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-03-13 18:18:05.783 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.025s init, 87.345s render

     

    Underclocked VRAM: 100.474s render

    Stock speed VRAM: 98.633s render

    Overclocked VRAM: 87.345s render

     

    I found it unusual that slowing down the VRAM didn't slow down the render speeds much, but speeding up the VRAM decreased render time significantly.  Later, I'll try with CPU and GPU rendering together and maybe different system RAM speeds.

    Post edited by chrislb on
  • chrislb Posts: 100

    Christoph7891 said:

    ...

    The reason I ask is I am planning on upgrading my PC, and I am leaning towards an AMD 5950X (amazing single-thread performance with good overall performance too). I would prefer to run two 3090s, but I am concerned about the potential overheating of the top card (my current dual 1080 Ti setup is showing signs of wear, with the top card overheating and causing blue screens (video TDR failure)).

    I would prefer not to go down the water cooling route, as honestly I am not sure I want to faff around with that stuff if I want to swap GPUs at a later date....

    EVGA makes 3090 Hybrid cards with fully enclosed water cooling systems.  If your case can support two 240mm radiators, you can use two water-cooled 3090 GPUs without the "hassle" of an open-loop water cooling system.

    https://www.evga.com/products/product.aspx?pn=24G-P5-3978-KR

    https://www.evga.com/products/product.aspx?pn=24G-P5-3988-KR

     

    There is also the EVGA 3090 Kingpin with a 360mm radiator, but they are expensive and hard to get, and also overkill for rendering.  They are designed for use with liquid nitrogen cooling and setting overclocking/graphics benchmark records.

    https://www.evga.com/products/product.aspx?pn=24G-P5-3998-KR

    If the information I have is correct, EVGA has only made between 200 and 300 Kingpin 3090 cards for public sale so far.

     

    I think I saw that ASUS, MSI, and Gigabyte were going to release or have released 3090 cards with their own water cooling system also.

  • outrider42 Posts: 3,679

    So underclocked -1000: 17.915

    Default: 18.249

    Overclocked +1000: 20.608

    You gained 2.359 iterations per second from the memory overclock, about 12.9%. Interestingly, this percentage gain is similar to the +800 run I did on my 1080tis. But that you gained over 2 iterations per second doesn't even seem fair, LOL. There are a lot of devices that only do around 2 IPS. The performance gain you got from overclocking VRAM is like adding a GTX 1070 to your system.

    My tests with 4.11 seem to indicate smaller gains. My performance gain was roughly half of what it was with 4.15 with the same overclock. I don't have DS versions older than 4.11 now, but I would imagine 4.8 was somewhat similar. Plus I believe the cards available when Iray was introduced to Daz weren't capable of benefiting much; the 900 series was current at the time.
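
    For reference, the iteration-rate arithmetic above, written out as a short Python sketch. The render times are copied from chrislb's logs, and 1800 is the benchmark scene's fixed iteration count.

    ITERATIONS = 1800  # fixed sample count for this benchmark scene

    # Render times in seconds, copied from chrislb's 1905 MHz GPU-clock runs above.
    runs = {
        "VRAM -1000 (17,004 MHz)": 100.474,
        "VRAM stock (19,004 MHz)": 98.633,
        "VRAM +1000 (21,016 MHz)": 87.345,
    }

    baseline = ITERATIONS / runs["VRAM stock (19,004 MHz)"]
    for label, seconds in runs.items():
        ips = ITERATIONS / seconds
        print(f"{label}: {ips:.3f} it/s ({100 * (ips / baseline - 1):+.1f}% vs stock)")
    # -1000: 17.915 it/s (-1.8%)   stock: 18.249 it/s (+0.0%)   +1000: 20.608 it/s (+12.9%)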

  • chrislb Posts: 100
    edited March 2021

    System Configuration
    System/Motherboard: MSI MEG X570 ACE
    CPU: AMD R9 3950X @ Stock with PBO +200
    GPU: MSI Gaming X Trio RTX 3090 with MSI Suprim 450 watt BIOS
    System Memory: Corsair Vengeance RGB Pro 64 GB @ 3600 MHz CAS18
    OS Drive: 1TB Sabrent Rocket NVMe 4.0 SB-ROCKET-NVMe4-1TB
    Asset Drive: XPG SX 8100 4TB NVMe SSD
    Operating System: Windows 10 Pro 64 bit Build 19042.867
    Nvidia Drivers Version: 461.72
    Daz Studio Version: 4.15.0.2

     

    I did some more experimentation with GPU speed and VRAM speed tonight.

    I locked the GPU speed using the boost lock feature in overclocking software and left the monitoring tab open on another monitor to confirm GPU and VRAM speed stayed locked at their preset speed.

    Here are all today's results with different GPU and VRAM speeds

    18.25 iterations/second at 1905 MHz with stock VRAM speed vs 21.78 iterations/second at 2190 MHz with the max stable VRAM overclock.  With an older Nvidia driver version (and possibly an older Daz version), the same card could achieve over 22 iterations/second.

    On a side note, this card can run stable at 2,295 MHz GPU clock speed and 22,004 MHz VRAM speed with some graphics benchmarks and stress tests including DX12 ray tracing benchmarks.  Even though rendering in Daz doesn't use as much power as newer games, it also is more sensitive to GPU and VRAM overclocking.

     

    The results of the additional tests I ran today:

    3090 Gaming X Trio w/Suprim BIOS 2145 MHz GPU 9502x2(Default) VRAM speed:

    2021-03-13 22:46:30.089 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 91.884s.

    2021-03-13 22:46:30.089 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-03-13 22:46:30.712 Total Rendering Time: 1 minutes 34.48 seconds

    2021-03-13 22:46:34.243 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-03-13 22:46:34.243 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.081s init, 90.972s render

     

    3090 Gaming X Trio w/Suprim BIOS 2145 MHz GPU 10508x2(21,016 MHz) VRAM speed:

    2021-03-13 22:52:41.016 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 86.314s.

    2021-03-13 22:52:41.021 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-03-13 22:52:41.658 Total Rendering Time: 1 minutes 29.9 seconds

    2021-03-13 22:53:45.526 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-03-13 22:53:45.526 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.110s init, 85.420s render

     

    3090 Gaming X Trio w/Suprim BIOS 2145 MHz GPU 10856x2(21,712 MHz) VRAM speed:

    2021-03-13 22:57:26.636 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 85.637s.

    2021-03-13 22:57:26.641 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-03-13 22:57:27.261 Total Rendering Time: 1 minutes 28.24 seconds

    2021-03-13 22:57:31.022 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-03-13 22:57:31.022 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.083s init, 84.725s render

     

    3090 Gaming X Trio w/Suprim BIOS 2175 MHz GPU 10606x2(21,212 MHz) VRAM speed:

    2021-03-13 23:08:20.053 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 85.200s.

    2021-03-13 23:08:20.058 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-03-13 23:08:20.691 Total Rendering Time: 1 minutes 27.83 seconds

    2021-03-13 23:08:24.696 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-03-13 23:08:24.696 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.082s init, 84.290s render

     

    3090 Gaming X Trio w/Suprim BIOS 2175 MHz GPU 10752x2(21,504 MHz) VRAM speed:

    2021-03-13 23:22:25.371 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 83.657s.

    2021-03-13 23:22:25.376 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-03-13 23:22:25.993 Total Rendering Time: 1 minutes 26.69 seconds

    2021-03-13 23:22:28.612 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-03-13 23:22:28.612 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.375s init, 82.788s render

     

    3090 Gaming X Trio w/Suprim BIOS 2190 MHz GPU 10752x2(21,504 MHz) VRAM speed:

    2021-03-13 23:31:26.172 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 83.534s.

    2021-03-13 23:31:26.173 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-03-13 23:31:26.792 Total Rendering Time: 1 minutes 26.13 seconds

    2021-03-13 23:31:34.179 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-03-13 23:31:34.180 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.064s init, 82.645s render

     

    3090 Gaming X Trio w/Suprim BIOS 2190 MHz GPU 9502x2(Default) VRAM speed:

    2021-03-13 23:35:10.546 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 92.318s.

    2021-03-13 23:35:10.546 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-03-13 23:35:11.164 Total Rendering Time: 1 minutes 34.91 seconds

    2021-03-13 23:35:13.406 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-03-13 23:35:13.406 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.053s init, 91.432s render

    Post edited by chrislb on
  • skyeshots Posts: 148

    System/Motherboard: Gigabyte X299X
    CPU: I9-10920X  3.5 GHZ
    GPU 0-2: PNY RTX A6000 (x3)
    GPU 3: MSI RTX 3090 (+1150 Mem)
    System Memory: 64 GB Corsair Dominator Platinum DDR4-3466
    OS Drive: 240 GB Corsair Sata 3 SSD
    Asset Drive: Same
    Operating System: Win 10 Pro, 1909
    Nvidia Drivers Version: 461.72 Studio Drivers
    Daz Studio Version: 4.15

    2021-03-15 23:35:57.534 Total Rendering Time: 30.37 seconds
    2021-03-15 23:36:04.401 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-03-15 23:36:04.401 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 2 (RTX A6000):      415 iterations, 2.385s init, 24.275s render
    2021-03-15 23:36:04.401 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 3 (GeForce RTX 3090):      492 iterations, 2.139s init, 24.587s render
    2021-03-15 23:36:04.411 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (RTX A6000):      459 iterations, 2.052s init, 24.493s render
    2021-03-15 23:36:04.411 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (RTX A6000):      434 iterations, 2.174s init, 24.405s render
    Loading Time: 5.783
    Rendering Performance: 1800/24.587 = 73.21 Iterations Per Second

    Here are some scaling tests with (3) A6000 cards and (1) 3090 in the 4x slot.

    Still waiting on my Optane drive to arrive. Several weeks now. Is it possible they are making it on Mars, and just shipping it from there?

  • skyeshots Posts: 148

    chrislb said:

     

    I did some more experimentation with GPU speed and VRAM speed tonight.

    I locked the GPU speed using the boost lock feature in overclocking software and left the monitoring tab open on another monitor to confirm GPU and VRAM speed stayed locked at their preset speed.

    Here are all today's results with different GPU and VRAM speeds

     

    This is great work. Thank you for doing this. Very helpful for the overclockers here. I think my biggest opponent so far with overclocks has been the ambient temps. The 3 and 4 card setups put out a few thousand BTU under load. With overclocks & sizable renders, internal temps tend to climb with ambient. This is OK if I catch it and open a window, but less than ideal. The other opponent with overclocks on the MSI Ventus cards is the 2 cable adapters. They can only handle a modest push.
  • chrislb Posts: 100

    skyeshots said:

    This is great work. Thank you for doing this. Very helpful for the overclockers here. I think my biggest opponent so far with overclocks has been the ambient temps. The 3 and 4 card setups put out a few thousand BTU under load. With overclocks & sizable renders, internal temps tend to climb with ambient. This is OK if I catch it and open a window, but less than ideal. The other opponent with overclocks on the MSI Ventus cards is the 2 cable adapters. They can only handle a modest push.

    I think to see how significant the real-world difference is with overclocking, I may need to set up a larger scene with more objects and light sources.

  • RayDAnt Posts: 1,135

    chrislb said:

    skyeshots said:

    This is great work. Thank you for doing this. Very helpful for the overclockers here. I think my biggest opponent so far with overclocks has been the ambient temps. The 3 and 4 card setups put out a few thousand BTU under load. With overclocks & sizable renders, internal temps tend to climb with ambient. This is OK if I catch it and open a window, but less than ideal. The other opponent with overclocks on the MSI Ventus cards is the 2 cable adapters. They can only handle a modest push.

    I think to see how significant the real-world difference is with overclocking, I may need to set up a larger scene with more objects and light sources.

    I'd certainly recommend it. The benchmarking scene this thread is based on was designed to meet a specific set of requirements within certain limitations (all described here), which make it increasingly less useful the higher-end you go.
