Comments
Okay, well, if we all use the same reference scene, the same D|S software, the same GPU, and the same settings, and we get identical results, while a "professional" using different software gets results much different from what D|S users get, which one would YOU believe?
And my issue right now is totally irrelevant to "professionals" and benchmarks. It's about moving to a new computer and figuring out why it's not rendering in a similar time. It's about making sure my configurations (hardware, software, etc.) are set up correctly.
Same GPU? How can you determine this? Different manufacturers design their cards around the same GPU differently. Some overclock the memory and the chip. Some use a different cooling scheme. Some people are using different drivers for their cards. (Some people's computers aren't set up correctly.) There are millions of variables between different machines. I will believe the professionals every time. Good luck getting your machine in order.
Using your logic we can't believe the professionals either because we can't be sure we're using identical configurations.
And by the way, the scene I was complaining about (12 minutes on my old machine) which took over 25 minutes on my new machine, is now taking 15:32 after I downloaded all content and re-started D|S.
And the professional benchmarks apply how?
I am looking at this thread in a moderatorly way. Please keep the discussion civil.
@moderator: I'm sorry, but maybe you can be more specific; I don't know how I am being uncivil.
@ebergerly: I think others have also explained to you that computers are very complicated machines, so you can't find a simple answer the way you want. Professional benchmarks will tell you what will happen when the same machine is upgraded with different hardware or software; they can't help you with your problem. No benchmark can predict what will happen when you change computers. Your methodology and conclusions are too simple. A PC isn't an Xbox.
try this
http://www.nvidia.com/coolstuff/demos#!/unigine-heaven-benchmark
Okay well I think I've resolved my issue. Though I'm not quite sure why....
And it appears the reason I was having 25 minute renders before this was the scene was missing some content. Specifically, images for textures. So after I made sure everything was there, the renders are pretty much identical.
And I'm trying to figure out why a lack of content would make the renders take longer. Same light situation, just a floor texture missing and a few others.
Anyone? This one has me scratching my head.
Oh, and another interesting tidbit...
The total system peak power during a render hit about 180 watts, almost identical to my old Dell i7-6700. But this new computer has 3 big fans (compared to 1 small one), a big Ryzen, and additional SSD, RGB lights, and so on. I dunno, I thought it would be more.
Newer-generation CPUs, though faster, usually have lower power draw because the silicon has been shrunk to a smaller process. So even though your computer may be faster, it will also be less power-hungry.
I think they both have a 65 watt TDP. But yeah, I guess it comes down to the fact they're not really under load during rendering, and it's mostly what the GPU and fans and so on are doing.
Your new CPU may not be all that much faster than the "old" i7 6700K.
http://cpu.userbenchmark.com/Compare/Intel-Core-i7-6700-vs-AMD-Ryzen-7-1700/3515vs3917
The Ryzen scores better with apps that like multiple cores. The Intel chip has faster single-core speed. With Iray, depending on how well Daz makes use of multiple cores, and with the same video card, I wouldn't expect much of a difference in performance. If the scene is pre-loaded into VRAM (render window open), it greatly cuts down the time it takes to start rendering.
Your "old" 6700K is what I'm running in my fast rig.
Wanna sell it?
UPDATE: I checked the total package power for the Ryzen CPU before and during a render, and it was flat between 25-30 watts, which is roughly half of its TDP.
Then I added the CPU to the render, and the total Ryzen package power was flat around 70 watts, total system power at 220 watts, all 16 CPU threads at 100%, and the total render time went from 11 minutes 30 sec to 9 minutes 30 sec.
Wow, those 16 threads cut 2 minutes off an 11:30 render. That's like a 17% improvement from the CPU. Cool.
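For anyone who wants to check that 17% figure, here's a quick back-of-the-envelope sketch using the times above; nothing here is specific to D|S or Iray, it's just the arithmetic:

```python
# Quick check of the speedup from enabling CPU rendering, using the times above:
# 11 min 30 s GPU-only vs 9 min 30 s GPU+CPU.

gpu_only_s = 11 * 60 + 30      # 690 seconds
gpu_plus_cpu_s = 9 * 60 + 30   # 570 seconds

time_saved = gpu_only_s - gpu_plus_cpu_s             # 120 seconds
reduction = time_saved / gpu_only_s                  # ~0.17 -> ~17% less wall-clock time
throughput_gain = gpu_only_s / gpu_plus_cpu_s - 1    # ~0.21 -> ~21% more iterations per second

print(f"saved {time_saved}s, {reduction:.1%} shorter render, {throughput_gain:.1%} higher throughput")
```

So the 17% is a reduction in wall-clock time; expressed as extra rendering throughput it's closer to 21%.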
Though I'm not a fan of using the CPU so much, especially with its power hitting the TDP.
Looks like it makes 100% use of multiple cores. All 16 of my threads were pegged at 100% throughout the render.
I got a 17% increase in performance just by turning the CPU on, so I guess that seems like a reasonable expectation for D|S Iray. I'd be curious to see if others get the same numbers.
Nah, it will now become my backup computer. I got so frustrated at the tiny motherboard and micro ATX case and only one good PCI slot. No room for any significant upgrades. But I know in a pinch it can handle my GPU and my memory and has decent performance.
I did make my point, but you are under numerous misunderstandings, including how Iray balances rendering load.
1. GPUs *are* designed for high utilization for long periods of time, even without water cooling. The consumer models most people here use are made for gaming, and gamers play for extended periods at a time.
2. Nvidia GPUs will perf-cap at their thermal limits, managing fan speed and overclock speed, and throttling GPU clocks as needed. If heat *is* an issue, the card takes care of itself.
3. It's very easy to see what the temps of a GPU are. On my machine, even at 100% for *hours*, they're still no more than 60-65% of the rated limit. Conversely, on my replacement Precision with dual Xeons, 100% utilization for an hour pushes the temperature to at least 80% of rating.
4. Assuming even a modest current-model GPU, total rendering time is not significantly reduced by including the CPU. This has been demonstrated here in numerous threads. Iray is designed for VCAs and professional appliances where loads are balanced more or less evenly across similar rendering hardware. The typical Pascal GPU with 2000+ cores will outpace whatever CPU is in the machine, yet Iray will try to balance the iterations between the slow CPU and the fast GPU. Net result: a 25-minute GPU-only render might take 22 minutes with the CPU added in (see the rough model after this list). Hardly worth the added power consumption, heat, and loss of use of the machine during the render.
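To put a rough number on that last point, here's a toy model (my own sketch, not anything from Iray's documentation). It assumes iterations are split in proportion to device speed with zero overhead, which is an idealized best case, and the 10% relative CPU speed is only an illustrative guess:

```python
# Toy model of adding a slow CPU to a fast GPU in a renderer that splits
# iterations across devices. Assumes work is divided in proportion to device
# speed with zero overhead (an idealized best case); the 10% relative CPU
# speed is an illustrative assumption, not a measured value.

gpu_only_minutes = 25.0
cpu_speed_vs_gpu = 0.10   # assume the CPU renders ~10% as fast as the GPU

combined_minutes = gpu_only_minutes / (1.0 + cpu_speed_vs_gpu)
print(f"GPU only: {gpu_only_minutes:.0f} min, GPU+CPU: {combined_minutes:.1f} min")
# -> about 22.7 min, i.e. the "25 minutes becomes roughly 22" figure above
```

With a much faster CPU (say dual high-core-count Xeons) the relative speed would be higher and the gain correspondingly larger, which is why results vary so much from system to system.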
Ahh using CPUs as well right? With my setup I leave them off, that way I can use the system while rendering. Did you check render times with it off? Some have reported better times with CPU off.
Mostly kidding, price would have to be cheap.
When Ryzen Threadripper comes out, hopefully end of this month, I plan on one of them or an Intel chip, whatever is better price/performance. Then the 6700K system will move down to be the "older" render box as well.
"CPUs also have extremely robust thermal protection. If the CPU starts operating above the CPU's thermal limit it will begin to reduce the frequency in order to prevent catastrophic failure. Oddly, we have found that the thermal limit for both Turbo Boost and thermal protection on Intel CPUs to be right at 100 °C - which makes it very convenient to remember. In other words, until the CPU hits 100 °C you should see 100% of the CPUs available performance. Once you starting hitting 100 °C, however, the CPU will start throttling back to keep itself from overheating" - source is Puget Systems
What you describe the GPU doing to handle heat is exactly how a CPU handles it. You can educate yourself further about modern CPUs by reading the whole article: https://www.pugetsystems.com/labs/articles/Impact-of-Temperature-on-Intel-CPU-Performance-606/
I don't know who gave you the idea that GPUs are more reliable than Xeons... maybe your broken Dell gave you that opinion? If so, your Dell is just not designed for the workload. You don't bring a knife to a gunfight, as they say. But the movie and animation industry has been relying on CPUs for rendering power since long before there were GPU renderers. They still do. Most major studios still use CPU power over GPU for rendering.
<edit> Coincidentally, if you look at the comments at the bottom of the article, this very issue is discussed in reference to 3d rendering for hours on end. I know you won't read the article, so I will post the question a reader asked:
"I'd be interested to hear your thoughts on temperatures for long term XEON CPU.
I do a lot of 3D rendering - so it's normal for me to have my CPUs running at 100% for 50-75 hours straight to complete a project, and my initial thought is that having the temps at 80-85 degrees for that long (and using them this intensely for 3-5 years) would have negative consequences."
Do you want to guess what the professional in the industry (they assemble systems for 3d effects and filmmaking professionals) said in reply? You'll have to read the article to find out
So I'd like to know what point you were trying to make. If your point is that it's better to use the GPU in Daz Studio, I agree, because a GPU renders faster in Iray.
But if your point is that a CPU is less reliable than a GPU, the facts don't back up your point.
I already made my point. You want to imagine a CPU is wholly independent from the box it's in. More important than the CPU family is the cooling architecture, and that's up to the OEM, plus any hardware added by the user. For reliability, users would be well advised to add auxiliary cooling, but this is not a task most D|S users are comfortable with.
What the "industry" uses is irrelevent. Very little of the industry A) uses consumer PCs and B) Iray.
With the advent of affordable CUDA-based GPUs with 2000+ cores, and considering how Iray works, there is ever-decreasing reason to continue using the CPU. Leaving it out whenever possible regains use of the machine during the render and reduces the risk of heat-related issues, whether inherent in the design of the box or created by the user.
I can tell you that the failure in my previous Precision workstation was due to excessive heat. As I used that machine only for renders, rendering could be the only cause of the heat. I was well aware of the issues and pushed on anyway. I regret that decision, and won't be making the same mistake again.
That's funny. Your views are dictated by your bad experience (and bad choice, IMO) with your workstation. I'm up late (in China) testing a scene (animation) and render times. It's been running full tilt for over 3 hours. My render box is a Dell (already stated), but the difference between my Dell and your Dell is I had both cpu rendering and gpu rendering in mind when I made my purchase, thus I avoided the problems that you have.
My machine is totally stock, including cooling: 1 GTX 1080 Ti and 2 x Xeon 2683 v3. Remember, totally stock cooling. I am rendering with GPU + CPU, 100% for 3 hours now. Here are my cooling measurements: LOL, my CPUs are cooler than my GPU. So you see, it ain't about staying away from CPU rendering (my render goes 40% faster with the CPU helping my GPU), it's about proper equipment choices and setup. Maybe I will post something on the benchmark thread after I'm finished with this test.
Just finished my testing... I rendered the animation twice. The first time, GPU only: 4 hours 37 minutes. The second time, CPU+GPU: 3 hours 11 minutes. That's a big freakin' difference. There is no way I'm going to give up that advantage. My CPU temps averaged 59 deg on CPU 1 and 70 deg on CPU 2 (CPU 2 probably received some of the hot air blowing off CPU 1). My GPU temp was pegged at 83 degrees the whole time. This is exactly the opposite of the conclusions you made, because equipment selection is extremely important. I don't need any extra cooling on my CPUs, but I probably could use some for my GPU. My system can run at 100% with these temperatures for months.
edit: I made a mistake, my gpu rendered in 4 hours 37 minutes, not 5 hours 37 minutes.
Well maybe we can agree on these two points in the CPU vs. GPU debate:
Yeah, that about says it.
Perhaps appropriately this conversation does seem to have become rather heated - I have a bucket ready to apply water-cooling if it doesn't calm down.
...those are some impressive numbers. My old i7 930 peaks around 75°C, and that is with a big aftermarket cooler.
Of course I am doing full CPU rendering as my old GTX 460 has only 1 GB of VRAM (which is enough to drive my dual displays and that is about it).
The best I can upgrade my current system's CPU to is a 6-core Xeon X5690, which gives me 2 more cores/4 more threads at a 3.46 GHz base clock speed (3.74 GHz Turbo). Nowhere near the beast you have there. I'd also need to upgrade to 24 GB of memory (from 12) and drop in a single 1080 Ti (for rendering purposes only, using my present card to run the displays), all of which would set me back about $1,300-1,400.
I think Dell did a great job with the 7900 series workstations. I have been lusting after one ever since I first saw its design. I am normally an HP guy and I am very loyal to my trusty Z1 that I use for video editing and 3D scene setup, but when companies started dumping used Dell barebones workstations on Taobao (China's eBay), I had to go for it. As long as you have enough space for them, they are a dream to work on.
...nice, wish deals like that happened here in the States. Most of the Precision series systems I am finding on ebay here are the notebook version. Notebooks + rendering = bad.
After watching some videos on operating temperatures for the Ryzen 7 1700 and other CPUs, and all the talk about what are good temperatures and bad temperatures, and lots of discussion by experts who don't seem to know what they're talking about...
I decided to do some tests of my new system using D|S CPU-only renders, and monitor temperatures at idle, then at full load (all cores at 100% doing an Iray render).
Now one thing that people rarely discuss regarding CPU (or other hardware) temperatures is the ambient/room temperature at the time of testing. And that's a big deal. First of all, you can't really expect your hardware temperatures to be much BELOW the ambient temperature of your room, correct? It's always going to be a "rise over ambient".
So for example, if it's kinda warm in your computer room, say 80 degrees F (26.67 degrees C), then you can't really expect your CPU temperatures to be below 80F. And if it's cooler in your room, say 70F (21C), then your CPU should be correspondingly cooler. And keep in mind that all parts of your room aren't the same temperature. Hot air rises, and if the computer is in an area with poor circulation it might be significantly warmer. And as you do renders and crank up your CPU usage, your room is going to warm up unless you have the air conditioner on.
So anyway I used both CPUID HWMonitor and AMD Ryzen Master software to monitor the CPU temperatures, and surprisingly the temperatures they measured were almost identical.
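For anyone who prefers a scripted log alongside HWMonitor or Ryzen Master, here's a minimal Python sketch using psutil. Caveats: psutil.sensors_temperatures() is only available on Linux/FreeBSD (not Windows, where the GUI tools above are the practical choice), and the "k10temp" sensor name is an assumption for a Ryzen box that may differ on other hardware.

```python
# Minimal load/temperature logger to run alongside a render.
# Caveats: psutil.sensors_temperatures() exists only on Linux/FreeBSD, and
# "k10temp" is an assumed sensor name for a Ryzen system; adjust as needed.

import time
import psutil

def log_once():
    load = psutil.cpu_percent(interval=1)   # average CPU utilization over 1 second
    temps = getattr(psutil, "sensors_temperatures", lambda: {})()
    readings = temps.get("k10temp", [])     # assumed Ryzen sensor name; may differ
    temp_c = readings[0].current if readings else float("nan")
    print(f"{time.strftime('%H:%M:%S')}  load={load:5.1f}%  temp={temp_c:5.1f}C")

if __name__ == "__main__":
    while True:              # sample every ~5 seconds during the render; Ctrl+C to stop
        log_once()
        time.sleep(5)
```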
This all began because I was a tiny bit concerned that my idling temperatures (CPU at about 0% utilization) were around 30-35C, and stuff I saw online was talking about idling in the low 20's. And since I built this computer myself I wanted to make sure I didn't do anything stupid.
Now I'm using the included Wraith Spire CPU cooler in a big case with 3 fans. So airflow should be pretty good.
Anyway, here's the results I got:
So at idle, the CPU temp is around 13F/8C above ambient temperature. Which means if the room warms later in the afternoon to say 80F/26.667C, I can expect the idling temp to be around 93F/33.89C.
So now I don't feel quite so worried when I see my CPU temps idling in the mid-30's centigrade. And a 10-15F variation in CPU idling temperatures is to be expected as room temps vary.
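To make the rise-over-ambient estimate concrete, here's a tiny sketch using the roughly 13 F idle rise reported above (note that a 13 F *difference* is about 7.2 C, since deltas convert without the -32 offset); the ambient values are just example room temperatures:

```python
# Rise-over-ambient estimate for idle CPU temperature, using the ~13 F idle
# rise reported above. Ambient values are examples, not measurements.

def f_to_c(temp_f):
    return (temp_f - 32.0) * 5.0 / 9.0

idle_rise_f = 13.0   # measured idle rise over ambient (~7.2 C as a difference)

for ambient_f in (70.0, 75.0, 80.0):
    idle_f = ambient_f + idle_rise_f
    print(f"ambient {ambient_f:.0f}F -> expected idle ~{idle_f:.0f}F ({f_to_c(idle_f):.1f}C)")
# ambient 80F gives ~93F / ~33.9C, matching the estimate above
```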
Oh, and BTW, I think the "maximum" CPU temperature allowed for short times is around 95C, so 60C is way under. And I'm sure that below 95C the CPU throttles at some point too. I think I'd start worrying if it goes much above 80C for any length of time.
I quite like my Precision T7500. It's big and heavy and all metal.
I just upgraded mine to dual Xeon X5680 6-core CPUs @ 3.33 GHz. (Did an Iray render test at 100% CPU usage for 30 min; the CPUs held a solid 3.44 GHz boost at 80°C throughout the entire render.)
The rule about the Dell precision towers that I've found is to make sure you know if there are multiple cooling setups for it. The T7500 has 3 different heatsinks for the primary CPU. The only complaints I've heard about thermal issues on this one is people with the T021F heatsink. (all aluminum with no heat pipes)
kyoto kid, if you do upgrade your CPUs, don't do the X5690. 1 step down to the X5680 is half the price, I got my matching pair on ebay for $135.
I'm a total wimp when it comes to anything having to do with overclocking and extra cooling. I have it stuck in my brain that all I'll get out of an overclock of my 3 GHz or whatever CPU is maybe another 10-20% (like 3.6 GHz?), but the downside is always worrying about cooling and instability and maybe having to buy a new cooler. And I have no clue how to set up my BIOS for overclocking, even though this new MSI BIOS seems to be all about overclocking.
As far as I know, Dell Precision workstations will run any REG ECC DDR3 modules (provided that they are not too slow for the CPU)
For "Legacy" workstation memory I look to ebay and get them second hand.
Oh, I just read your post about CPU upgrading again. You have one of the first gen i7 boards that can accept a Xeon X56xx series CPU, but you are still stuck with non ECC memory.
You could always do like me and keep an eye out for a Precision T7500 with a second CPU installed (So you get the CPU riser with 6 more RAM slots) on ebay for cheap.
Doing it this way frees up a lot of your budget to spend on video cards. :P
Here are a few bullet points about the T7500