Don't be Jealous...My New Supercomputer
Well, after three days of frustration and anguish solely because I dared to plug a second GPU into my computer's PCIe bus (and had to reset the BIOS, remove all traces of NVIDIA drivers, and on and on), I got so frustrated I decided to get a new computer. So yesterday I bought a Ryzen 7 1700 CPU, an MSI X370 Gaming board, and a big ol' case with monster fans, and I'm gonna transfer my 48GB of memory, 750 watt PSU, and my monster GTX 1070 GPU to the new super machine.
And leave the old Dell computer on the shelf to gather dust and be ridiculed by its peers.
But seriously, as I start to build it today, I'm wondering if anyone out there has a somewhat similar configuration and can tell me if, in practice with D|S especially, it's really gonna make much difference. Yeah, it will have 8 cores with 16 threads, but I'm not sure all that CPU power will do much for me compared to my old machine (an i7-6700 with 4 cores/8 logical CPUs). Ultimately I'll buy a second GTX 1070 and a high bandwidth SLI bridge (if the NVIDIA prices ever return to inside the stratosphere), which I'm sure will make the biggest difference.
Thoughts?
Thanks.
Comments
The CPU cores will help, but will not be worth the extra electricity you use.
If you can use your new 1070 for rendering only, and have a separate card for driving the displays, you'll notice much more from that.
I use, as an example, a 970 to drive three monitors and a 980 Ti for rendering. Before I got the 980 Ti, I used the 970 to render and a passively-cooled 2GB GDDR3 card to drive the monitor(s) (can't remember if I had one or two monitors back then); it was a little laggy on the monitors at times, but I preferred it. The 970 has no trouble driving 3 x 2560x1440. I rarely add it to the rendering process; I don't like the extra noise from the cards keeping themselves cool.
Absolutely do NOT use SLI for rendering (Nvidia's recommendations).
Cool, thanks. But if I have two GTX 1070s and plug 3 monitors into one of them, they should both still share the rendering duties, right?
I think with the Ryzens you can't use the motherboard video, just the external GPU. Maybe I can grab a third cheap GPU for the monitors?
I am far from jealous, loving my new beast
Interesting. I was just watching some tests, and they say that putting your GPU in a 16-lane PCIe slot vs. an 8-lane slot results in maybe a 1% performance difference. Looks like my board runs the first GPU at x16; add a second and it gives them both x8, and a third runs at x4.
I think. If I understand it correctly.
Cool. Let's see who can pound their chest and grunt the hardest
Where's Tim Allen when you need him?
In any case, one thing I heard in one of the videos is that the SLI performance depends a lot on the game (of course everyone does games...) and the drivers. Which raises the question:
I wonder how D|S likes SLI cards? Anyone know? I suppose it depends partly on how it supports the most recent NVIDIA drivers?
That is dependent on the motherboard. I have an ASUS Z170 Deluxe and two Zotac AMP! Extreme 980 Tis. With two cards inserted, both run at 8x according to the ASUS specifications.
Looking at your MB, I see this on their site:
PCI-E Gen: Gen3 (x16, x0) (x8, x8), Gen2 (x4)
So it looks like yours is the same as mine.
Cool, Mattymax. Thanks.
So have you noticed with D|S a major difference when adding the second card?
Iray and Octane do not work well with SLI turned on. Never have. Make sure it's off when using those programs.
Wow, no kidding? Good info.
Yeah, I just searched "sli iray" and there's a bunch of posts saying don't enable.
Cool. Thanks much !!!
You're welcome! Best way is to research this stuff and don't assume anything!! :)
I checked the specs on the Ryzen 1700 and the TDP (thermal design power) is only 65 watts. Not much more than a single light bulb in your house. My existing computer sitting idle doing internet is using only 40-50 watts. So I guess I'm not too concerned about power. Heck, my GPU uses something like 150-200 watts during renders, so CPU usage is relatively tiny.
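For anyone who wants to sanity-check the electricity argument, here's a minimal back-of-envelope sketch. The wattages are the figures quoted above; the monthly render hours and the price per kWh are made-up assumptions you'd swap for your own numbers.

```python
# Rough power-cost sketch (render hours and electricity rate are assumptions,
# not measurements from this thread).
CPU_TDP_W = 65                 # Ryzen 7 1700 TDP from AMD's spec sheet
GPU_RENDER_W = 175             # rough midpoint of the 150-200 W figure above
RENDER_HOURS_PER_MONTH = 60    # assumed rendering workload
RATE_PER_KWH = 0.12            # assumed electricity price in USD

def monthly_cost(watts, hours, rate):
    """kWh consumed times price per kWh."""
    return watts / 1000 * hours * rate

print(f"CPU at full TDP:     ${monthly_cost(CPU_TDP_W, RENDER_HOURS_PER_MONTH, RATE_PER_KWH):.2f}/month")
print(f"GPU while rendering: ${monthly_cost(GPU_RENDER_W, RENDER_HOURS_PER_MONTH, RATE_PER_KWH):.2f}/month")
```

Under those assumptions, both come out to pocket change per month, which is why the CPU's power draw isn't a big deciding factor here.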
Yes, it greatly reduces the time required to render. Performance will vary depending on what cards you have.
I have an 1800X with the same motherboard. It is indeed x8 (PCIe 3), x8 (PCIe 3), x4 (PCIe 2) with three cards. With two cards, it's PCIe 3 at x8, x8. The extra threads won't help that much, but they will be beneficial for driving both cards and allowing you to do other things at the same time. I have one card for rendering and I can keep playing games on my other card with no slowdown whatsoever. The multithreading is where you should notice a difference.
You could try getting another card for when you render, but x4 PCIe will affect performance on high end games.
Also, with the mining craze, it's kind of difficult to find mid range video cards lately.
Your new computer is awesome. :)
What everyone has mentioned about the PCIe lane breakdown is correct. However, if you have a standard ATX case with 7 expansion slots, you wouldn't be able to fit a 3rd double-slot card in the bottom slot of the board.
Yeah, with current GPUs anyways, x16 will only net you a couple of percentage points of additional performance vs. x8. So say 58 seconds instead of 60. Yeah, I know that over the course of decades, multiple rendering stations running at x16 vs. x8 could save a lifetime or so (thinking of the Steve Jobs quote here about reducing boot times), but in practice, yeah, you aren't going to notice that.
Two GPUs at x8 vs 1 GPU at x16 WILL save you a lot of rendering time. So it's very much worth the loss of those couple of percentage points, as you'll reduce your render times by about half in the process.
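To make that trade-off concrete, here's a toy arithmetic sketch. The 60-second baseline is the example figure from the post above; the ~3% x8 penalty and the assumption that Iray scales roughly linearly across two identical GPUs are assumptions, not benchmarks.

```python
# Toy arithmetic for the claim above: the x8 penalty costs a couple of percent,
# while a second GPU roughly halves render time (assumes near-linear scaling).
baseline_x16 = 60.0          # seconds for one GPU at x16 (example figure from the post)
x8_penalty = 0.03            # assumed ~3% slowdown from dropping to x8

one_gpu_x8 = baseline_x16 * (1 + x8_penalty)
two_gpus_x8 = one_gpu_x8 / 2 # idealized linear scaling across two identical GPUs

print(f"1 GPU  @ x16: {baseline_x16:.1f} s")
print(f"1 GPU  @ x8:  {one_gpu_x8:.1f} s")
print(f"2 GPUs @ x8:  {two_gpus_x8:.1f} s")
```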
I'm sure that eventually x16 will make a huge difference (on paper it should), but it'll require GPUs that can actually saturate the extra 8 lanes sufficiently and quickly enough for a significant performance boost. To be honest, I'm kinda surprised that the 1080 Tis aren't able to eke out more performance from x16 vs. x8 than they do.
I doubt that Vega will do better on this front, but maybe the HBM2 will help it saturate a bit better. We'll see Vega reviews soon enough (not Vega Frontier, that's a much slower GPU). Vega is mostly useless to people here anyways, since we are on the Iray bandwagon.
On the 'extra GPU for Daz viewport displays' thing, if you have an extra x8 slot, you might be able to put a cheap or older GPU there, but of course these days cheap GPUs are scarce thanks to the cryptocurrency mining craze. In this case, even running it at x4 may not be much of a performance hit vs. freeing up that first or second GPU for rendering. I wouldn't recommend using that extra 'cheap' GPU for any GPU-intensive gaming at x4, though; assigning it to your desktop display might be worthwhile (i.e., browsing the internet on the el cheapo GPU while the expensive GPU is completely focused on crunching your render).
Onboard APUs are also awesome for this purpose, but of course Ryzen doesn't have those (yet). And I don't think you'd want to 'downgrade' to an APU core-wise in any case.
It's a thought anyways.
...let's see.
Dual 2.6 GHz Haswell 8-core Xeons on a dual-socket MB supporting 128 GB of quad-channel DDR4 and dual 1080 Tis, plus dual SSDs (one for boot/applications, the other for the content library) and two 2 GB storage HDDs, all running on W7 Pro. Think I'll stay with that.
When you are rendering Iray using a GPU, the whole scene gets loaded into the video card's RAM. From there it acts as if it is a standalone unit on the render job, sending status updates and completed canvas iterations to Daz Studio. That being said, x8 vs. x16 mainly comes into play during the initial scene load into VRAM.
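If you want to put rough numbers on that initial scene upload, here's a sketch. The scene size is an assumption, and the bandwidth figures are the theoretical PCIe 3.0 maximums; real-world transfers land well below them, so treat these as best-case estimates.

```python
# Rough estimate of how long the initial scene upload to VRAM takes at
# different PCIe 3.0 link widths. Bandwidths are theoretical maximums;
# actual throughput is lower.
SCENE_SIZE_GB = 6.0  # assumed scene footprint in VRAM

PCIE3_BANDWIDTH_GBPS = {"x16": 15.75, "x8": 7.88, "x4": 3.94}  # approx. GB/s

for width, bw in PCIE3_BANDWIDTH_GBPS.items():
    seconds = SCENE_SIZE_GB / bw
    print(f"PCIe 3.0 {width}: ~{seconds:.1f} s to upload a {SCENE_SIZE_GB:.0f} GB scene")
```

Even at x4 the transfer itself is only a few seconds, which is why the link width shows up in load time rather than in the render time itself.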
Kyoto Kid,
Wow, you spent some serious cash on your hardware.
So I'm curious, in hindsight, in terms of price to performance ratio, do you think that you could have spent significantly less and gotten similar performance? I'm never sure where the point is that, say, a 50% increase in hardware cost gets you a 50% or more increase in performance. For example, yeah, we could buy 64 GB or even 128 GB of RAM, but practically how often do you really need it?
And one other option that is rarely (i.e., never) discussed is something I'm guilty of, and that is not managing our scenes very well. I mean, if we spend some time managing our scenes in D|S to be smaller and require fewer resources, often that can improve performance as much as or more than spending a boatload of money on hardware.
I guess that's always the balance. Hardware vs. Scene Management vs. how long we're willing to wait for stuff to happen.
I wish there was more data on how D|S responds with different hardware combinations, compared to stuff like scene management. Once I get my new rig together I'll try to do some comparison tests, say with 1 GPU vs. 2 GPUs on the same scene. Right now there's lots of "oh, this hardware is better", but no real data on how much better and whether it's worth the cost.
Was looking at going to a Core i9-7900X, and well, yeah, at $1449 AUD I might pass on it and go AMD.
You should at least wait for AMD's Threadripper CPUs to come out before throwing down that much on a chip. It will certainly be beaten price/performance wise, but perhaps even outright in performance.
If it's solely Daz Studio & Iray, going overkill on CPU and RAM will buy you very little, if anything noticeable.
You have 48 GB, which is good (me too). Haven't had problems there. Video cards are where you want to spend your money. From what I've read here, mainly from Sickleyield, Iray scales very well with additional cards and is very much worth it.
I haven't done Sickleyield's test again, but with some of my larger scenes, my twin EVGA Hybrid 1080 Tis (watercooled) edge out my air-cooled twin Titan X Pascals. Unfortunately, with air cooling, the top Titan only runs base clock, as it's pulling in air from above the lower card, which is quite warm. The performance difference is only about 4%, but the CPU in that system is much slower too (like less than 1/2).
On CPU speed specifically, one thing I noticed upgrading the older rig to 1080 Tis: that one has an AMD FX-8320 processor and a Samsung 840 series SSD. The newer is an Intel 6700K with a Samsung 850 series SSD. User benchmarks put the CPU at 87% faster. The newer computer pumps out the scene info to the card much faster (the amount of time to see the render start to draw in the window). Big scenes that take 3-4 minutes to start rendering on the fast rig can take an additional minute or two on the slower one.
They could, but there will be some lag, and it will increase heat and electricity use disproportionately... but renders will take less time, so it comes down to your choice.
The Ryzens don't have video capability from what I recall, but depending on your gaming needs, a lesser-capable card should serve you well; a 1060 or even perhaps a 1050. It doesn't have to be an NVIDIA card, as they can co-exist.
The only issue I had when I was running AMD and NVIDIA was that OpenCL wouldn't work on the AMD card; I ended up having to use Display Driver Uninstaller to strip all the garbage from AMD and NVIDIA and then reinstalling what I needed.
What is missing here is the total number of available PCIe lanes. I, like Kyoto Kid, have a dual Xeon setup that has a total of 80 PCIe lanes between the two CPUs. Simple math should make it obvious how many GPUs Kyoto Kid and I can run. For me, I currently use 48 lanes: 3 GPUs with full bandwidth. When the latest and greatest Kaby Lake i7 and E3 Xeons only have 16 lanes, which have to be split up, scalability limitations hit hard and fast.
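To make the "simple math" explicit, here's a small sketch of lane budgeting. The lane counts below are the commonly quoted CPU figures for these platforms and ignore chipset lanes, so treat it as illustrative only.

```python
# How many GPUs fit at a given link width, given a platform's CPU PCIe lane budget.
# Chipset lanes are ignored, so this is illustrative rather than exact.
def max_gpus(total_lanes, lanes_per_gpu):
    return total_lanes // lanes_per_gpu

platforms = {
    "Dual Xeon (40 lanes per CPU)": 80,
    "Kaby Lake i7 / E3 Xeon": 16,
}

for name, lanes in platforms.items():
    print(f"{name}: {max_gpus(lanes, 16)} GPUs at x16, {max_gpus(lanes, 8)} at x8")
```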
I bought mine second-hand, so it was slightly expensive, but it works quite well for its Iray-only purpose. Overkill is subjective.
Nice machine you have there @KyotoKid!
Omen
I thought perhaps the PCIe lanes, but I double-checked the 8320; the board that it's on does two cards at x8.
......
edit - d'oh! I just realized I commented on the wrong thread. Just ignore this
Wow, you must have some big ol' scenes!! 3-4 minutes from pressing the Render button until the GPU goes to 100% utilization? And that's on the fast machine with an SSD?
I just tried my biggest scene, which had an entire Stonemason city scene, plus four G3s, a building I added, and a couple of automobile models, and it took 2:45 to see the first grainy render image. And that's with a clunky old Western Digital 1 TB drive.
And the SSD only shaves maybe a minute or two off a scene that takes like 5 or 6 minutes to start rendering?
Wow. I don't have any SSDs; I was kinda wondering how much benefit they'd be, but based on that I'm kinda surprised it's so little.
What kind of scenes do you have that take 5 or 6 minutes to start rendering?
And to my point on scene management...wouldn't it be much nicer if you cut it down into multiple scenes as necessary? That would give much faster rendering and faster 3D view response and loading and so on...
All of this is subjective
Like I said, how often do you see actual benchmark facts and figures associated with any of this? Pretty much never. It's usually a "bigger is always better" discussion, which isn't always the case. Diminishing returns is a factor that's rarely discussed. Just get more more more, even if an additional 100% cost only gives you a 10% render time benefit.
You're not looking in the right places. Of course there are benchmarks and facts and figures. Whole industries base their livelihood on 3D rendering applications. Don't you think they have established some type of standards and practices based on performance? Of course. You can start here: a whole slew of articles giving you extensive test results. https://www.pugetsystems.com/all_articles.php