Comments
Thanks, but as we already discussed I think the industry benchmarks don't necessarily tell you much about how the cards perform with D|S Iray.
For example, the Passmark results you referenced show the following scores:
GTX 1080 ti: 13383
GTX 1070: 11026
That tells you nothing more than the GTX 1080ti is like 21.4% "gooder" than the 1070.
In fact, with a reference scene, the 1070 renders it in 3 minutes 14 seconds, and the 1080ti renders in 1 minute 53 seconds.
That means the 1080ti cuts the render time by about 40%, not 21.4%.
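For anyone who wants to check the math, here's the arithmetic as a quick Python sketch (nothing official, just the benchmark scores and render times I quoted above plugged in, so treat the results as ballpark figures):

```python
# Quick sketch of the arithmetic above (scores and times are the ones
# quoted in this thread, so treat them as ballpark figures).

passmark_1080ti = 13383
passmark_1070 = 11026
passmark_gain = (passmark_1080ti / passmark_1070 - 1) * 100
print(f"Passmark advantage: {passmark_gain:.1f}%")           # ~21.4%

t_1070 = 3 * 60 + 14    # 194 seconds for the reference scene on the GTX 1070
t_1080ti = 1 * 60 + 53  # 113 seconds on the GTX 1080 Ti

time_saved = (t_1070 - t_1080ti) / t_1070 * 100
print(f"Render time saved: {time_saved:.0f}%")               # ~42%
```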
Not a problem. Status is irrelevant in these matters. To me at least. And honestly a 40-50% improvement in render times isn't really that earth shattering IMO. People act like "OMG the 1080ti is an absolute beast !!", but based on the numbers people are actually getting with D|S renders, is it really that big a deal to go from a 1 hour render to a 35 minute render? You're still off twiddling your thumbs forever waiting for the render to finish. Or even a 10 minute render becoming a 4 minute render. Big woop.
Maybe that's a big deal to some, and to me it's certainly nice to have, but for almost $800?? I don't think so. I'm certainly not doing this for any sort of time-sensitive production, just a hobby, so I can be off in Blender doing other stuff while it's rendering and it really doesn't matter.
Again, I think the never-discussed issue of scene management (i.e., cutting down your scene to a size that renders much quicker and is much easier to navigate) can do a lot to make D|S life MUCH easier, without spending big bucks on a 40% improvement. I guarantee that with some decent scene management and good choices of render settings and so on, people can do a LOT to knock down render times as well as make their scene navigation a lot more responsive.
On the other hand, when I ditched my POC GT 730 for a GTX 1070, I got a 75% improvement in render times. And that only cost me just under $400. And that kind of price/performance seems far more reasonable. To me at least.
I dunno, just trying to break thru the tons of hype surrounding this hardware and look at real numbers.
UPDATE:
Hey, it just occurred to me... here are the actual calculations of price ($) to performance (%) for the two cases I mentioned (smaller is better):
GTX 1070 to GTX 1080 ti: $800 / 40% improvement = 20
GT 730 to GTX 1070: $400 / 75% improvement ≈ 5.3
Wow, that's a pretty stark contrast. Maybe I'll draw a line in the sand. From now on, I'll only accept an upgrade that gives me a price/performance of, say, 10 or less.
Cool.
DOUBLE UPDATE:
So if I buy a GTX 1080ti, and add it in parallel with my GTX 1070, it looks like my render times will improve by around 60% from just the single 1070. And the cost is $800. So....
BZZTTT !!! $800 / 60% comes out to around 13, so not good enough if I stick to my goal of 10. Which means I shouldn't have spent the money on a new Ryzen system just so I could add an extra GPU.
Bummer. But that's what happens when you chase the hype.
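(For what it's worth, here's how I ballparked that ~60% figure. It's just a back-of-the-envelope sketch that assumes Iray splits the work roughly in proportion to each card's speed, which is a simplification; real multi-GPU scaling is rarely perfectly linear.)

```python
# Back-of-the-envelope estimate of adding a 1080 Ti alongside the 1070,
# assuming the two cards' rendering speeds simply add (a simplification).

t_1070 = 194.0    # seconds for the reference scene on the 1070 alone
t_1080ti = 113.0  # seconds on the 1080 Ti alone

# Speeds (renders per second) add when both cards chew on the same frame.
combined_time = 1.0 / (1.0 / t_1070 + 1.0 / t_1080ti)
improvement = (t_1070 - combined_time) / t_1070 * 100

print(f"Estimated combined time: {combined_time:.0f} s "
      f"(~{improvement:.0f}% faster than the 1070 alone)")   # ~63%
```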
I think you said it all earlier. You're a hobbyist. And a semi-serious hobbyist at that. So in your case, you're either just rendering stills for fun or simple animations for fun, and time is not a factor for you. Like you said, 10 minutes to 4 minutes is no big woop for you. So for you, it is all hype. You can afford to go with a mid-performance card like the 1070.

But some people are more serious about their hobby, or they are not hobbyists. For them, time is more valuable than money. If I can bring my 10 minute frame times down to 4 minutes, that is a big freakin' woop for me because I have to render 24 of those frames just to make a second of video. And since my time is valuable, spending just $800 to regain some time is very cheap.

So it's all a matter of perspective. That's why there are so many tiers of GPUs. Lower tiers for those who just want to have a little cheap fun, and bleeding edge high end for those who need things in a more timely manner (time is money). If my time were worth $100 an hour, and I saved 60% of that time rendering, the high end card would pay for itself very quickly. When the bottom line is saving money, that ain't hype.
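If you want to put rough numbers to that, here's the back-of-the-envelope version (all the figures are just the round numbers from my example above, not real billing data):

```python
# Rough time-is-money math, using round example numbers (not real billing data).

fps = 24                # frames needed per second of final video
t_slow = 10.0           # minutes per frame on the slower setup
t_fast = 4.0            # minutes per frame on the faster setup
hourly_rate = 100.0     # hypothetical value of an hour of my time, in dollars
card_price = 800.0      # cost of the faster card

minutes_saved = fps * (t_slow - t_fast)            # 144 minutes per second of video
dollars_saved = minutes_saved / 60 * hourly_rate   # $240 of my time per second of video

print(f"Saved per second of video: {minutes_saved:.0f} min (~${dollars_saved:.0f})")
print(f"Card pays for itself after ~{card_price / dollars_saved:.1f} seconds of video")
```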
Ummm, no. That's exactly what Xeons are designed for: 24 hour work cycles. What do you think is in your bank's transaction system servers? Do online banks take a break? What about render farms? Check what's in their systems. Xeons. That's not to say they won't burn out. Everything breaks, your Dell being one example. But they were designed to work. Businesses depend on them to work reliably around the clock (by the way, are you implying consumer gaming graphics cards like the GTX were designed to be pegged for hours on end?) and in the end, you can count on a CPU render for accuracy. There are some things GPUs can't render yet. As a backup, there is nothing like a box full of cores.
drzap, my only point is that people should think about what they really need and put numbers to it, rather than just believing the "it's awesome !!" nonsense by folks who don't really know what "awesome" is. I'm guessing that few people actually evaluate price/performance or anything close to it. They scan the industry benchmarks, think that means something, and run off and spend a lot of money.
And more importantly, how many people really first step back and think about how they can tweak their scenes to save 60% in render time without buying anything?
Okay, a show of hands...how many people really evaluated price/performance and scene management before they upgraded their systems?
As an example, I have an indoor scene, basically a box with 3 G3's inside and a cutout window with a glass surface, and emissive lights inside. Render time is almost 12 minutes.
Now if I add a Sunlight to the scene, the render time is just under 20 minutes. So that's a 40% improvement just from one mouse click to disable the Sunlight. One mouse click, vs. upgrading to a GTX 1080ti and spending $800 to get the same 40% improvement.
I'm just suggesting that people think about stuff like that rather than just say "wow it's awesome" and "if you don't get a 1080ti you're not really cool, you're just a lowly hobbyist"
You're making it seem like it's either this or that. Of course scene management is important. But just because it is, doesn't mean high end hardware is "hype". You can walk and chew gum at the same time. At least I can.
And I would never call you a lowly hobbyist for buying a 1070. Just about everybody who uses Daz is a hobbyist. It isn't professional software.
I render complex scenes, usually with 3+ Genesis 3 figures at 4K. Going from 2 Titan X Maxwells to 2 Titan X Pascals cut my rendering time by 60%.
Since my renders were typically going for 7-9 hours, dropping it to 4-5 hours was a pretty big deal. My new 1080 Ti Hybrids perform a smidge better than the Titan X Pascal. I'm happy.
I don't even have what I'd call a supercomputer; a few here are running four Titan X's.
No offense, but you started off the thread with building a supercomputer for Daz...
Not sure why anyone would take offense to your post...I certainly didn't. But I'm not one to be easily offended. Especially when we're talking facts. Hard to get offended by facts.
Anyway, I'm real surprised your renders take over 7 hours. I can't remember ever having a render that takes even an hour, with a single GTX 1070. In fact I just posted about a scene with 3-G3's, same as you, in an interior scene, and it took 11 minutes.
Maybe you're talking about animations that take a long time? I hesitate to ask about scene management for fear someone will get upset, but I'm sure you've looked into stuff like separating your renders into layers/canvases so you aren't re-rendering stuff that doesn't change in each frame?
LOL, I'm sorry but I had a little laugh over your comment. It highlights the real difference in perspective I was talking about. I bet you can't imagine ever using up all of your 8GB on your GPU either, can you? Maybe you think anything over 8GB is excessive and hype? And then you compare your 3-G3 scene with his without any information to compare with (except that you both had 3 G3's). LOL, good one. There is nothing wrong with trying to save a buck, especially when this is just your hobby. But your perspective puts you at a disadvantage because you seem to deny other people's valid reasons who have a different perspective.
Wow, I just checked newegg to see what a Titan X Pascal is. $1,400 each !!! Which means almost $3,000 for two? And that gives you a 60% improvement in render times?
If you're talking price/performance, that comes out to a grand total of 50...
Wow. That's way out of my ballpark of 10 max. And I was even thinking of dropping it to around 8...
I don't see where you contradicted me, but I do see where you have drawn obviously wrong conclusions based on your narrow vision. But I won't chase you.
If this thread continues to be marred by comments addressing other posters rather than the topic at hand it will be locked.
Nope, there's a whole lot of variables that impact render time, also personal preference for what's good enough. I render at 4K (3840x2160) and usually at 10K iterations. Many will say 10K is overkill, but I found the default convergence and quality looked grainy to me. 10K may be overkill, as sometimes 5K will look good enough to me, or 8K, whatever. But I will set up as many scenes as I can, and batch them. It's easier to set them all to 10K and forget it than to inspect each one.
Also, many emissive surfaces, specular objects, things like that can greatly increase render times. Could I replace a window for a faster render? Sure. But I wouldn't get the subtle reflections like I would in real life if I did that.
Got them for $1200 each, but yeah they aren't cheap. The watercooled 1080 Ti at $800 was a bargain!
Buying computing power was never a linear expense... 2x the price isn't 2x the power, 4x the price doesn't equal 4x the power, etc.
I dunno what you mean by those numbers... What?
Scott...those numbers were what I explained before. Basically I take the price of the upgrade (say, $800 for a new GPU), and divide it by the percent improvement in render times. So if a new GPU gives you a 50% improvement in render time, the Price/Performance is 800/50, or 16.
Just a way I look at cost to benefit of upgrading my system. So if I decide I'm only willing to pay $100 for a 10 percent improvement in render times, I only consider upgrades with a price/performance of 10 or less. Keeps me from getting too carried away by new technology, which may or may not be cost effective.
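In other words, it's nothing fancier than this little sketch (my own rule of thumb, not any industry-standard metric; the example numbers are just the ones from earlier in the thread):

```python
def price_performance(upgrade_cost_usd, render_time_improvement_pct):
    """Dollars spent per percent of render time saved. Smaller is better."""
    return upgrade_cost_usd / render_time_improvement_pct

MY_CUTOFF = 10  # i.e. I'm willing to pay about $100 per 10% improvement

for name, cost, improvement in [
    ("GT 730 -> GTX 1070", 400, 75),
    ("add a GTX 1080 Ti",  800, 60),
]:
    score = price_performance(cost, improvement)
    verdict = "worth it" if score <= MY_CUTOFF else "too pricey for me"
    print(f"{name}: {score:.1f} ({verdict})")
```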
I assume that professionals who make their living at this do something similar to maximize profit, like any other industry.
Ummm, yes. Even Xeons aren't rated for full throttle 100% utilization for extended periods without enhanced cooling. Stock, these Precisions come with a fan on a standard heat sink. Not enough for pegging a CPU pedal-to-the-metal 16-20 hours a day. Silly to suggest otherwise. Either bump up your game with liquid cooling, or forget the CPU (my advice) and render GPU only. If you've got 2000+ cores on the GPU you don't really gain a lot by adding the CPU anyway. It's wasted watts.
Ahh... Sure, most do a cost/benefit analysis. I just never tried to boil it down to such a simple number like that.
...exactly.
The same applies to pro grade GPU cards like the Quadro and Firepro series. They too are designed to operate at peak output for extended periods of time compared to the enthusiast grade GTX/Radeon cards.
However, in spite of recent GPU advances, major studios continue to rely on CPU/physical memory based render farms due to the extreme demands involved and accuracy required. That may eventually change with the introduction of NVLink MBs for production work, ultra high core count CPUs, and the Volta series GPU/Compute cards with 32 GB or more of fast HBM2 VRAM. Totally out of our league budget wise, but not for a major film studio looking to reduce their VFX production time.
...oh I agree, what you have to consider is the type of work you are primarily looking to produce. As I mentioned earlier, I am looking at creating very high quality work rendered in large pixel formats for gallery purposes. I also do very involved scenes (I've had up to nine figures in just one scene on a number of occasions). If I could afford it, I'd go for a pair of 16 GB Quadro P5000s to ensure 90% of my scenes rendering at high quality settings remain in GPU memory. I also prefer using Iray emissive lighting for existing light props in a scene, which can be a huge resource hog (I still have not got a clean finished render using Stonemason's Urban Future 5 on my current rig, even letting it go all night and into the morning).
As I am looking at using dual 1080 Ti's each with 5 GB less than the P5000 (though together, costing about as much as a single Titan Xp), having more CPU cores and multi channel memory as a fallback is important as it will keep render times more manageable should rendering dump to the CPUs. As I mentioned, I've had big jobs go into swap mode on my current system and can say that is even more excruciating.
Effectively what I have designed is a combination workstation and mini render farm in one system. For someone who primarily does portraits or simple scenes, yes it is extreme overkill. For someone like myself who likes to create "epic" level scenes at very crisp resolution, it is a necessity.
Xeons are what render farms use. They are not complaining about Xeons burning up. Do we need more than that? And of course anyone who is serious is going to think about more cooling than a stock PC case. And GPUs also need enhanced cooling, because they aren't made for full throttle utilization either. In addition, GPUs have limitations like maximum memory and being unable to render some special effects (not a problem for Daz users because Daz can't produce many special effects). But if you run out of GPU RAM, you will sure wish you had a good CPU as a backup. If you're just doodling for fun in DAZ, a high end CPU is a waste of money.
Well the deed is done. And surprisingly it went without a hitch. Well, kinda...
Just keep in mind, if you are installing a cooler on a Ryzen 7 1700 (the included Wraith cooler), make CERTAIN that you're screwing all four screws into the MB at the same time. I was so careful, I had the MB on a countertop, lined the cooler screws up, and even unscrewed them a bit until they clicked to make sure they were ready to engage. Then I screwed them in an X pattern very carefully from the top. And never noticed that two screws never engaged. And I screwed the other two all the way down.
DOH !!!
So I had visions of damaging the CPU and so on.
But today I fired it up, the CPU's look fine, all running okay, temps down in the low 30's...
So here's what I ended up with:
I really like the MSI UEFI, it's got a lot of good info. But I think that's pretty much standard in gaming MB's....
So after the fairly quick Windows 10 installation, installing all the apps (surprisingly quick when you just download and install), and DAZ being so nice to automatically recognize all your stuff and download it with DIM, it really didn't take much to get up and running.
However, now comes the grunt work of finding all of my preferences for all my apps (uggg...), figuring out how to move and install all my non-DAZ content, figuring out how to set up all my DAZ database stuff so it matches the old machine, and so on...
The nice thing is now I have a big mid-tower with tons of space, 3 monster fans, and the PS actually slides right in compared to my old Dell, which almost required me to use a pry bar to install the GPU and PS.
Oh, and one more lesson learned...
I decided last minute to stop by Best Buy and grab the Samsung SSD. They have this nice price guarantee that they'll beat any price. So I paid $200 for the SSD, got home, checked on the web, and their price was supposed to be $163 (same as newegg), not $200. Called and they immediately credited my card. But be careful....
Was it worth it? Well, that remains to be seen. More space and cooler temps is nice. The SSD seems pretty snappy. Most of my 16 CPU's are "parked" most of the time (I assume they shutdown if not being used somehow?). And I'm guessing my DAZ renders won't see much of a difference, except maybe in time to load a scene onto the GPU from the SSD.
But until the NVIDIA cards drop in price to something reasonable, I probably won't notice a lot of performance difference from my old machine. We'll see...
And they either water cool, add additional airflow management, and/or limit the CPU to avoid excessive heat. The Xeon is no different than any other slice of silicon: heat kills it. Thermal management is up to the system integrator or box maker, and no CPU is immune. In my case that was Dell, and they didn't design the Precision for continuous 100% CPU utilization. For one thing, despite its large size it's packed densely inside, with the internal drives covering where the CPU(s) are located. Many of us add additional squirrel fans along the bottom, and even that's not always enough.
You fail to make your point. You gave advice to not use the CPU (because you say the CPU wasn't designed for 3d rendering without extra cooling) but to use a GPU (because apparently a GPU was designed for 100% utilization without extra cooling). While I agree that in Daz you want to use the GPU as much as possible because it's faster, when you reach your GPU's limitations you will want a good CPU. Your explanation for why to avoid a CPU is just wonky.
Dude, your Dell konked out because sometimes computers konk out. To imply that an industrial grade Xeon chip is less reliable than a consumer grade GTX card is just not reasonable.
I guess it comes down to what you are going to do with it whether or not it is worth it.
For DAZ Studio using Iray, you are probably wasting money when a lesser computer with a great Nvidia graphics card (or 4) would suffice, or an external rack of them, cooled.
On the other hand, using 3Delight or other CPU-based render software such as Carrara and Poser, you get a definite edge if you wished to include them in your workflow (a post you made to the Carrara forum suggests otherwise).
Yeah, and if you want to do CPU rendering, don't cheap out and buy something like a Dell Precision. My Dell t7910 has plenty of room inside, huge tower heat sinks (stock) and absolutely no problem cooling dual Xeons. You gotta buy the right tools for the job.
Darn....
Tried a scene from my old computer, which previously took 12 minutes. On the new computer it took more than twice as long. Same GTX 1070 rendering on both machines. And GPU drivers up to date on the new machine.
Made sure the CPUs were OFF in render settings and OptiX ON. Watched the GTX 1070 go to 100% utilization while rendering, so I know it was doing its thing. And the CPUs were pretty quiet. Same scene, same render settings.
Since this was a new install I had to install all the content for the scene first (it asked me and did it for me), so maybe that played a role somehow. So now I'm making sure all my content is downloaded with DIM, and I'll try again.
Darn.
I expected the render time would be pretty much identical, since the new CPU is out of the picture.
Darn.
I hate to have to say it (and I'm not chasing you), but this is what I meant about comparing benchmarks. Leave the benchmarking to the pros. Their methodology is precise and scientific. Most testing websites enumerate how they test software and hardware (and if they don't, I wouldn't trust their data) and it usually is quite extensive and exact. You can't get an accurate picture of how your system will perform by looking at a list of other people's render times. That may have a little value as to what you can expect but it in no way shows you the whole picture. Like I said, trust the professionals.
So you're saying it's reasonable that if I move from an i7-6700, 4 core CPU, with 48GB RAM, rendering with a GTX 1070, to a Ryzen 7 1700, 8 core CPU, with 64 GB RAM, rendering with the same GTX 1070, then the "pros" say it should take twice the time to render??
What pros say that?
I never said that was a benchmark I was following. It was merely an assumption that the same GPU rendering on two different machines should take about the same time, since the GPU is the one that's doing the rendering.
What I'm saying is that there is a high variability between different computers and components and to assume that since xxx happened on your mate's computer, then the same xxx will happen on your computer is faulty reasoning. The pros don't make these assumptions. They will test on the exact same computer with the exact same settings and measure the results. Their data is a lot more dependable than a list of random people rendering the same file.