Comments
Thanks, but I'm assuming (maybe incorrectly) that D|S is probably significantly different from industry standard implementations of Iray and whatever else. It probably depends upon the Iray version in D|S, any peculiarities/limitations of the D|S implementation, and so on. Maybe I'm wrong, but I assume that the only really useful data in this forum is actual tests in D|S using actual scenes.
Yeah, you're kinda wrong dude. iRay is iRay, and even though Daz only uses a subset of iRay, the basic principles of ray tracing engines still apply. If you look at those tests, you will see one thing in common: GPU ray tracers have a linear performance progression as you add GPUs. Some are more efficient than others (Octane), but the more GPUs, the more performance. If you are rendering with CPUs, the more cores, the better the performance. Daz isn't some special case where the rules don't apply.
Thanks for the link to Puget Systems...
I searched for DAZ but nothing came back. However, it did have this interesting tidbit regarding Iray in general:
Q: Does Iray support multiple GPUs? Do they need to be in SLI?
A: While Iray doesn't scale absolutely perfectly, you will see massive performance gains with multiple GPUs. Compared to using a single card, two GPUs will reduce your render times by about 30%, three GPUs by about 50% (half the render time), and four GPUs by about 60%. However, since Iray is using the cards for compute purposes they do not need to be in SLI mode. In fact, SLI can sometimes cause problems so we recommend leaving it disabled if possible.
A bit of a disappointment if it applies to D|S. Two GPUs are only 30% better than one, and you need three to cut your renders in half. I guess it doesn't scale anywhere near directly (i.e., two = 50%, three = 67%, four = 75%). Darn.
True, but "more" and "better" are relatively meaningless until you specify how much more and how much better, right? That's my point. Scaling linearly only means that it's linear, not directly proportional.
Linear could mean 1 GPU renders at 100% speed, 2 render at 99% speed, 3 render at 98% speed, and so on.
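Just to put numbers on that, here's a quick back-of-the-envelope Python sketch (using the Puget figures quoted above as an assumption, since they're not Daz-specific tests) that converts "render time reduced by X%" into an actual speedup multiplier:

# Convert "render time reduced by X%" into an effective speedup multiplier.
# The reductions below are the Puget Systems figures quoted above -- an
# assumption, not Daz-specific measurements.
puget_reduction = {1: 0.00, 2: 0.30, 3: 0.50, 4: 0.60}

for gpus, reduction in puget_reduction.items():
    speedup = 1.0 / (1.0 - reduction)   # e.g. 30% less time = ~1.43x faster
    print(f"{gpus} GPU(s): {speedup:.2f}x actual vs {gpus}x ideal")

By those numbers, two cards only buy you about 1.4x, where perfectly proportional scaling would be 2x.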
Umm... first your point was there was no data (or not enough) about multi-GPU performance. Then your point was that unless it is done in Daz, a benchmark isn't useful. This new point brings you back full circle. Now what was your point again?
drzap, I'm not following you. What do you mean full circle?
All I'm saying is: has anybody confirmed that D|S's Iray performance on real life scenes matches those other industry Iray results? We have all these supercomputers, but has anybody rendered a scene, turned off one of the GPUs, and re-rendered to see the benefits? How do you know, when you make a statement like "Daz isn't some special case where the rules don't apply"? Do you have some test results?
Seems reasonable, doesn't it? I did something similar a few weeks ago when I replaced my old POC GPU with a GTX 1070. I rendered a scene, enabled the new GPU, and re-rendered, then posted the results. Has anyone done similar before going out and spending big $$ on hardware?
https://www.daz3d.com/forums/discussion/168821/iray-render-time-comparison-w-gtx-1070#latest
by "full circle" i mean this. your first point was: there's not enough data. answer: i give you data. your second point: the data is useless because it's not DAZ specific. answer: it doesn't need to be. Daz uses iRay and iRay scales linearly according to tests. Your third point: "linear is meaningless unless you specify how much". Here you have come full circle. Answer: I give you data.
And to answer your question, yes, I looked at benchmarks and tests very similar to what I provided you. They are far more thorough and precise than you can expect to get on this forum. This is the only reasonable way to spend money IMO.
So you're saying you did see benchmarks specific to D|S, and that's how you know it's not a special case?
If so, where did you see them?
I feel like I'm pulling teeth here
I'm saying you don't need benchmarks specific to DAZ if you're only evaluating the iRay renderer. Just like I don't need to test VRay in every one of the software packages that implement VRay, or Arnold in all the software that connects to the Arnold renderer. If there is a difference, it's not significant enough to mean anything. Perhaps you don't have much experience outside of Daz, but you can trust the 30% measurement made by Puget (which is significant. It's not Octane, but it's definitely a big difference if you are a professional).
https://www.daz3d.com/forums/discussion/53771/iray-starter-scene-post-your-benchmarks#latest
Any good?
Okay, well as much as I want to trust you ....
Once I get my Ryzen machine built this week, I'll do some comparisons using the same scene like I did before. And when I get my second GTX 1070 I'll do it again. So people will have actual numbers.
And for those with 800 lanes of PCIe and 16 CPUs with 128 threads, it would be nice if you could also take some time and perform some render tests to see how much benefit the technologies make. Same with SSDs, and so on.
It's really not asking a lot, right? It takes all of 15 minutes to render a scene, then turn off a GPU and try again.
It would be even nicer if we could agree on a standard scene, say something from the store, that might tax your system and we can get numbers from different systems.
WOWOWOWW !!!!!!!!!!! IT EXISTS !!!!!!!!!!!!!!
Mr. Tomalin, you're awesome !!!
I'm in nerd heaven
Thanks much
Personally, I think you'd be better off with a single 1080 Ti vs. two 1070s.
Check out the benchmarks. You'll probably have similar Iray performance, much more VRAM for Iray rendering (11 GB vs 8 GB), and similar or better performance for games, and you won't be dependent on SLI for that performance.
Okay, after looking at the wonderful thread that Mr. Tomalin referenced, I learned that my render times with my GTX 1070 match almost exactly with others' for the reference scene that Sickleyield produced, and that's around 3 minutes, 15 seconds (with Optix enabled).
Now if I added a second GTX 1070, I'd get about a 50% improvement (down to 1 min, 35 sec).
If instead I replaced it with a single GTX 1080 Ti, I would get almost a 40% improvement (down to 1 min, 53 sec).
And if I replaced it with dual GTX 1080 Tis, I'd get an additional 30% improvement (down to about 1 minute total).
All of that assumes I'm reading the results correctly. And it seems to be at odds with the Puget Systems results which said:
"...two GPUs will reduce your render times by about 30%, "
If we can believe the results, instead of 30% it's closer to 50%. Which I think agrees with what others have said here, that a second card will get closer to 50%.
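For anyone who wants to check my reading of that thread, here's the arithmetic I'm doing as a small Python helper (the times are my own numbers from the posts above, so treat them as rough, not gospel):

# Percent improvement between two render times given as "m:ss" strings.
def seconds(t):
    m, s = t.split(":")
    return int(m) * 60 + int(s)

def improvement(baseline, new):
    return (1 - seconds(new) / seconds(baseline)) * 100

baseline = "3:15"                     # single GTX 1070, OptiX enabled
for label, t in [("two GTX 1070s", "1:35"),
                 ("one GTX 1080 Ti", "1:53"),
                 ("two GTX 1080 Tis", "1:00")]:
    print(f"{label}: {improvement(baseline, t):.0f}% faster than one 1070")

That prints roughly 51%, 42%, and 69%, which is closer to the numbers in the benchmark thread than to Puget's 30%.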
Someone mentioned CPUs: If you can avoid using yours, do so. Unlike the GPU in your graphics card, even Xeon and non-consumer "workstation" CPUs aren't designed to be pegged at 100% utilization for hours on end. I burned out a perfectly good Xeon-based high-end Dell workstation by using the CPU for long renders. Even though the CPUs and motherboard never went over their rated heat limits, the added heat over weeks and months took its toll.
I suppose you could augment any CPU use with a favorable heat exchange mechanism, such as water-cooling. But that can be expensive. It might be cheaper and safer to simply add another 1070.
I'm not sure that TDP has any real benefit in calculating longevity, as it really comes down to how well the heat is removed from the relatively small surface area of the CPU. Fan cooling over a basic sink doesn't really remove a lot of heat. Sixty-five watts over a 2 inch square area can easily produce >200 degree F temperatures.
All modern motherboards have sensors for detecting CPU temperature. You'll want to check that during your renders. But do consider that it isn't enough that the temperature is under its rated maximum; you also have to consider how many hours during the day the CPU is generating that much heat.
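If anyone wants to actually log that during a long CPU render, something like this works (assuming a Linux box with the psutil package installed; the "coretemp" sensor name varies by motherboard, and on Windows you'd use a tool like HWMonitor instead):

# Log the hottest CPU core temperature once a minute during a long render.
# Assumes Linux + psutil; the sensor key ("coretemp") depends on your hardware.
import time
import psutil

while True:
    readings = psutil.sensors_temperatures().get("coretemp", [])
    if readings:
        hottest = max(r.current for r in readings)
        print(f"{time.strftime('%H:%M:%S')}  hottest core: {hottest:.0f} C")
    time.sleep(60)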
...sadly not quite a reality yet, but this is pretty much the configuration I settled on for my next build. Yeah, the two Xeons will cost around $1,400 alone. Moving to Threadripper will mean W10, and I have my reasons for staying with W7 Pro. I would rather just throw raw power at the situation instead of spending a great deal of time manually reprocessing texture resolutions in a 2D programme for a highly involved scene. My objective is creating large format, super high quality images for gallery printing and publication. Ever see the works of a former member here named AlphaChannel? Yeah, one of my influences. I just don't have a steady enough hand anymore for that type of postwork, which is why I need most of the finishing accomplished in the render pass.
When I do a gritty city scene for example, I like to "dirty" things up to make it look "lived in" which means lots of additional polys and textures.
The system is designed for working in Daz, Carrara, and Vue Infinite, hence the dual 8-core CPUs and boatload of memory, as neither of the latter two support GPU rendering but both have excellent biased engines.
Would love to build around the forthcoming 32-core Epyc 7105 (mmmm... 8-channel memory, 128 PCIe lanes, and 64 CPU threads), but that means running in Linux, which is not supported by Daz and most other 3D software companies. There are too many instabilities running Daz in Wine to deal with, and I wouldn't be able to run either Carrara or Vue.
Yeah, but I'd probably just buy a 1080 Ti and add it to the existing 1070 in parallel. And use my 1070 to drive my three monitors, I guess, and to help with the rendering.
I assume they'd work okay together, assuming I'm not using SLI (which Iray doesn't like).
By the way, Kyoto Kid, is that animation with the girl and the bear something you did? I just love it. It's brilliant
...wish it was. That's from the film Brave. It would probably toast my current 12 GB system to animate that.
Still working on creating that hair in both LAMH and Garibaldi, but it's slow going to get that kind of depth (and it's only renderable in 3DL).
Pixar wrote custom software and built a custom system just for developing her hair. It also took them about three years.
Wow, I'm embarrassed... I actually saw that film quite a while ago and loved it and told everyone about it.
Geez, I must be getting old or something.
Did you try VWD for the hair? Or maybe the cloth sim in Blender and composite it on the DAZ render or something?
Anyway, back to the topic at hand...
Please, someone, tell me not to go out and buy a GTX 1080 Ti, huh? Please? They're crazy expensive, around $750. And while I thought that was way overpriced, if you look at the history on PCPartPicker or whatever, it says that's about the average price since it came on the market.
And really, I'll only cut my render time by maybe 60%. So a 10 minute render becomes a 4 minute render. Big whoop. Now if 10 minutes dropped down to 1 minute, then you're talking.
Okay, never mind. I just convinced myself to stick with my GTX 1070. I'll wait until the bitcoin nonsense dies off, or they come out with a headless replacement for the bitcoiners and the market gets saturated with used 1080 Tis.
Well you ain't gonna get supercomputer status with a 1070!
With the current demand, you should be able to get near what you paid for the 1070. And prices aren't that bad; I just saw an air-cooled one a few days ago for $650. I bought my water-cooled one for $809 a week or so ago.
Yeah, I remember you used Carrara & Vue. I mentioned it as a Daz Studio machine referring to ebergerly, in case he didn't realize that so much CPU power would be overkill and would likely see very little return in DS.
If you want to select a card that best supports your needs, and you want to know how it compares to others, check out benchmarks for GPUs.
I agree: for a "supercomputer" for Daz, at least 32 GB and probably the 6-core. I found the 6-core recommendation while searching around; what's not really clear is whether more cores help significantly more. You can actually get worse performance for the dollar if you have a CPU with many cores but a lower clock vs. a processor that's much faster per core, depending upon the application.
https://helpdaz.zendesk.com/hc/en-us/articles/207530513-System-Recommendations-for-DAZ-Studio-4-
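For what it's worth, the rough comparison I do is just cores times clock divided by price (a crude assumption that render throughput scales with both, which real CPUs won't quite match, and the numbers below are made-up placeholders, not real parts):

# Crude cores-times-clock-per-dollar heuristic for comparing CPUs for rendering.
# Real renderers scale somewhat worse than linearly with core count, and the
# example cores/clocks/prices are hypothetical placeholders.
def value_score(cores, ghz, price_usd):
    return cores * ghz / price_usd

print(value_score(6, 4.0, 330))   # hypothetical 6-core with a higher clock
print(value_score(8, 3.6, 450))   # hypothetical 8-core with a lower clock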
Win 7 & 8 reserve RAM too, just not as much.
Waiting to see what happens with the Ryzen Threadripper, hopefully it doesn't disappoint.