Comments
BTW, I have both a 1070 and a 1080ti on my main machine, and I just rendered the benchmark scene with each, and can once again confirm those numbers. I got a 3-minute render time on the 1070 and 2 minutes on the 1080ti.
Attached is my summary of all of the render times for various render devices reported in that thread.
I've seen that, but those are averages, and you haven't listed how many data points there are, so I can't even guess at their significance.
The reason you can tell this data is not consistent and properly comparable? Look at how the time changes moving from one 1080 Ti to three. One 1080 Ti takes 2 minutes, so adding another should cut the time in half to one minute. Except scaling is never perfectly efficient, so it actually takes 1.3 minutes, which looks very reasonable. And then you add another one, and the time drops to 0.5 minutes, a mysterious gain in efficiency that is what you'd predict from four perfectly efficient 1080 Tis.
So I trust my guess, based on the number of CUDA cores with an allowance for inefficiency, about as much as I trust this data.
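To make that "number of CUDA cores with an allowance for inefficiency" guess concrete, here's a minimal sketch. The core counts are the published specs for those cards; the 0.85 efficiency factor is purely an assumed allowance, not a measured value:

```python
# Rough estimate of relative Iray render time from CUDA core counts,
# scaled by a flat efficiency allowance. Illustrative only.

CUDA_CORES = {
    "GTX 1070": 1920,
    "GTX 1080 Ti": 3584,
}

EFFICIENCY = 0.85  # assumed allowance for scheduling/overhead losses

def estimated_minutes(baseline_card, baseline_minutes, target_card):
    """Scale a measured render time by the inverse ratio of (discounted) cores."""
    target_cores = CUDA_CORES[target_card] * EFFICIENCY
    return baseline_minutes * CUDA_CORES[baseline_card] / target_cores

# Using the 3-minute GTX 1070 result reported above as the baseline:
print(estimated_minutes("GTX 1070", 3.0, "GTX 1080 Ti"))  # ~1.9 minutes
```

That lands close to the 2 minutes reported for the 1080 Ti above, though a single flat efficiency number is still a guess, not a measurement.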
I merely reported what others experienced for render times. You're free to review that thread, contact the people who posted those numbers, and discuss with them. And you're also free to call 33% "50%", but have you actually seen anybody's results showing anywhere near that, or have you tried it yourself?
As a software guy who has done a small amount of GPU programming, I can assure you that number of cores/threads can be somewhat irrelevant to performance. It depends on a lot of other stuff, like how efficiently the software schedules and takes advantage of those threads.
I'm not calling it 50; I'm saying that 33% number is just as unreal as my guess. There is clearly huge deviation in those numbers.
I just tried it 15 minutes ago, and got 33%. How is that unreal? And it's the same as others have posted.
No, that's fair, I missed that you said you tested this set yourself earlier, I'm up too late and I apologize.
I still want to know what the deal is with that magical third 1080 Ti card, though.
I think that he needs to verify the numbers on that spreadsheet before preaching it like gospel.
For example, his GTX 1080 ti + GTX 1070 result lists 1 minute for the render time.
The most recent result for that setup was on page 24 from Angel Wings, clocking in at 1:24 (or about 1.5 minutes based on his counting method).
On page 24, Robert Freise benchmarked 4 x GTX 1070 cards at 50 seconds.
On page 23, OZ-84 benchmarked his 4 GTX 1080 ti Founders Edition cards as follows:
1 x GTX 1080 ti = 1:58
2 x GTX 1080 ti = 1:00
3 x GTX 1080 ti = 0:42
4 x GTX 1080 ti = 0:33
@ebergerly Is that enough evidence that you need to fix your spreadsheet numbers?
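As a sanity check on the "magical third card" question above, the scaling in OZ-84's numbers can be computed directly. A minimal sketch using only the times posted above (converted to seconds):

```python
# Parallel efficiency of OZ-84's 1/2/3/4 x GTX 1080 Ti results from the thread.
# efficiency = ideal time (single-card time / N) divided by observed time.

single_card = 118  # 1:58 in seconds
observed = {1: 118, 2: 60, 3: 42, 4: 33}

for n, seconds in observed.items():
    ideal = single_card / n
    print(f"{n} x 1080 Ti: {seconds}s observed, {ideal:.0f}s ideal, "
          f"{ideal / seconds:.0%} efficiency")

# 1 x 1080 Ti: 118s observed, 118s ideal, 100% efficiency
# 2 x 1080 Ti: 60s observed, 59s ideal, 98% efficiency
# 3 x 1080 Ti: 42s observed, 39s ideal, 94% efficiency
# 4 x 1080 Ti: 33s observed, 30s ideal, 89% efficiency
```

On those numbers, each additional card delivers roughly the expected near-linear gain, with efficiency tailing off only slightly as cards are added.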
I've verified 6 of those results on my own system (GTX 745, Ryzen 7 1700, GTX 1060, GTX 1070, GTX 1080ti, GTX 1070 + GTX 1080ti), and the numbers posted seem to be within +/- 5 or 10 seconds compared to what I got. Which is very reasonable. Of course some others could be incorrect or maybe I made a typo, but in general I think they are at least reasonable.
And I think we also firmly busted the myth that this benchmark scene doesn't reflect larger scenes. I and others got similar improvements with much larger & slower scenes.
JamesJAB, yeah I could go to the trouble to review and update the numbers, but clearly nobody uses them or believes them, and I only did it as a public service. If it's important to you (which apparently it is since you went to the trouble of running to that thread to prove me wrong) then I suggest that YOU take the time to update and post a new spreadsheet.
I just tested that on my system and got 1 minute and 4 seconds, which is about what others have posted. Maybe Angel Wings made a mistake?
I'd vote for spending as much as you can afford to get the best hardware possible.
But like larsmidnatt said, only after you're sure it's the best option for you when considering what else is possible.
I would certainly consider the typical size of your scenes, i.e. how many are going to drop to the CPU.
... And remember, Windows 10 allocates GPU RAM for its own use, making an 11GB 1080ti something around an 8GB (iirc) card; yes, all cards take a hit.
+1 for the 1080ti
JamesJAB is absolutely right that the 1080ti is worth it for the additional 3GB of memory alone. And the speed improvement is also a very big point in its favor. Imagine waiting 40 minutes instead of 60 minutes for a render... and now multiply that by the next 100 renders; that's roughly 33 hours saved.
I mean, really, every second of time saved is important!
Has anyone checked the prices lately of 1080ti's? I just looked on Newegg, and most of them are over $1,000.
Maybe so, but Fry's Electronics has the ASUS and EVGA versions of the GTX 1080 ti Founders Edition on clearance:
Asus GeForce® GTX 1080 Ti Founders Edition 11GB 352-Bit GDDR5X Graphics Card $694.99
EVGA GeForce GTX 1080 Ti Founders Edition 11GB 352-Bit GDDR5X Graphics Card $669.59
Maybe Nvidia is winding down production of the GP102 chips (or the yields have gotten so good that they are all getting binned for the Titan XP and Quadro P6000)
They have a CES press conference countdown on their main page; I won't be surprised if they announce the GTX 2080, GTX 2070, and GTX 2060 cards in 15 hours or so.
If it was me, I wouldn't be so eager to jump on a 1080ti right now. I'd wait at least a week or so until the dust settles from CES.
I've been waiting to see what's in store; am also considering trying out Octane; no GPU RAM limit is very tempting
My guess is that supplies are way down because of the holidays, and Newegg is low on stock and therefore prices are through the roof on the remaining ones. Maybe it's just a function of how much stock everyone has after the holidays. If nobody bought brand x from vendor y, those prices are low cuz there's 50 of them on the shelves.
I just bought a GTX 1070 ti. It can usually render small scenes in about 1 minute, and I can't imagine needing much faster. It was only £20 more than the 1070 but has nearly as many CUDA cores as the GTX 1080. It depends what you do; if you do big scenes you may need the 11GB of RAM on the GTX 1080ti.
You might want to download the Sickleyield benchmark scene and give it a render with your 1070ti. I don't think anyone has posted results for the 1070ti yet, though I haven't checked the thread in a while.
Thanks ebergerly, I will do that, should be interesting.
Indeed!
Errr... Hurry up.
Sorry to interrupt, but if I have nVidia 1060 card and bump up to a GTX 1080 Ti card, how much of an improvement would I see?
A huge render (tons of reflective surfaces, characters, smoothing and tons of props) takes about 48 minutes. i7 processor and 16 gigs of ram.
Also, I can't run in iray mode.
What should I expect?
I just posted, a few posts above this, a chart showing the expected render times for a bunch of cards, as well as a cost/benefit analysis using the 1060 as a base. I've posted that probably 20 times in recent months.
EDIT: Oh, wait, you said you can't run in Iray mode? Umm, okay, not sure why, but I suppose you could use those numbers as a ballpark for relative improvements.
I saw that chart and couldn't read it. I assume time is in minutes, as in 4.5 minutes and a quarter of the time, so I should expect my 48-minute render to finish in about 12 minutes.
I figured my RAM and processor would matter so I added those in. Don't know what other thread you are referring to.
When I use Iray preview mode, there's a long pause, and only after about 4 minutes does the screen show the Iray preview. Move the mouse again and it's another 4 minutes to update.
That's what I mean by iray mode being impractical.
To render in the standalone version, you need to export the scene from the ORDS (Octane Render for Daz Studio) plugin to either ORBX (a single compressed file including all geometry and materials) or OCS (which creates a folder with the geometry/materials in .obj form). You would then open the standalone, import the scene, and render it. If you render only in DS, there is no need to export; when you save the scene, all of the Octane settings are saved with it. However, from the subscriptions page, I can't tell for sure whether the subscription comes with the Octane standalone or not (you can run the plugins without having the standalone installed).
There's another factor worth considering besides render times: how quickly you can move around in a scene you're constructing.
When I had a single 1080 GTX, some sets I own were so painfully slow, especially in iRay preview mode, that I basically shelved them for the time being. About like trying to play e.g. Skyrim with a single 256mb graphics card. Just before Christmas I upgraded my system RAM from 16gb to 32gb--which hasn't made the slightest difference to anything as far as I can tell--and added a 1080 TI, which has made a huge difference in how fast I can navigate around even complex scenes. Render times are also down by quite a bit--I haven't tried benchmarking them, and don't intend to--but for me the biggest advantage is not having to wait 10 seconds for every frame-by-frame scene update when all I want to do is move the camera.
As I said, something else to consider.
For info, there are some special settings that can make moving around in a scene in Iray view mode much faster. And in my experience, navigating in the 3D Iray view isn't just about the GPU; the CPU matters just as much. I've never quite figured it out, but I suppose there's a lot of interaction between the GPU (doing the Iray calcs) and the D|S user interface updates being run by the CPU.
...actually for the 1080 Ti on W10 you're left with around 9.1 GB. Still pretty significant as that's almost a 2 GB loss, so if you create truly epic level scenes (or render in large resolution output) the process could likely dump to the CPU.
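If you want a rough way to sanity-check whether a scene will stay on the card, here's a minimal sketch. The 11 GB total and roughly 1.9 GB Windows 10 reservation come from the figures above; the scene sizes in the example are hypothetical placeholders:

```python
# Quick check of whether a scene's GPU footprint fits in the VRAM that
# Windows 10 actually leaves available, per the figures discussed above.

TOTAL_VRAM_GB = 11.0       # GTX 1080 Ti
WINDOWS_RESERVED_GB = 1.9  # approximate loss reported above (11 GB -> ~9.1 GB usable)

def fits_on_gpu(textures_gb, geometry_gb, working_overhead_gb=0.5):
    """Return True if the scene should stay on the GPU rather than dump to the CPU."""
    usable = TOTAL_VRAM_GB - WINDOWS_RESERVED_GB
    return textures_gb + geometry_gb + working_overhead_gb <= usable

# Hypothetical "epic" scene: 7.5 GB of textures plus 1.5 GB of geometry needs
# about 9.5 GB, which exceeds the ~9.1 GB usable and would likely drop to the CPU.
print(fits_on_gpu(textures_gb=7.5, geometry_gb=1.5))  # False
```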
...from my experience, viewport refresh is excruciatingly slow and the programme eventually crashes as it is going into physical memory, much of which is already allocated to the scene that is open.
...being able to shut the scene and the Daz programme down once the information was sent to the Reality/Lux render engine was a real resource saver, as system memory was no longer supporting a fairly large file and a complex process at the same time. As Octane can split the load between GPU and CPU, having as much physical memory as you can afford is very important, since textures (particularly in a large scene) put a heavier demand on memory resources than geometry.