Don't be Jealous...My New Supercomputer

Comments

  • ebergerly Posts: 3,255

    Thanks, but I'm assuming (maybe incorrectly) that D|S is probably significantly different from industry-standard implementations of Iray and whatever else. It probably depends upon the Iray version in D|S, any peculiarities/limitations of the D|S implementation, and so on. Maybe I'm wrong, but I assume that the only really useful data in this forum is actual tests in D|S using actual scenes.

  • drzap Posts: 795
    ebergerly said:

    Thanks, but I'm assuming (maybe incorrectly) that D|S is probably significantly different from industry-standard implementations of Iray and whatever else. It probably depends upon the Iray version in D|S, any peculiarities/limitations of the D|S implementation, and so on. Maybe I'm wrong, but I assume that the only really useful data in this forum is actual tests in D|S using actual scenes.

    Yeah, you're kinda wrong, dude. Iray is Iray, and even though Daz only uses a subset of Iray, the basic principles of ray tracing engines still apply. If you look at those tests, you will see one thing in common: GPU ray tracers have a linear performance progression as you add GPUs. Some are more efficient than others (Octane), but the more GPUs, the more performance. If you are rendering with CPUs, the more cores, the better the performance. Daz isn't some special case where the rules don't apply.

  • ebergerly Posts: 3,255

    Thanks for the link to Puget Systems...

    I searched for DAZ but nothing came back. However, it did have this interesting tidbit regarding Iray in general:

    Q: Does Iray support multiple GPUs? Do they need to be in SLI?

    A: While Iray doesn't scale absolutely perfectly, you will see massive performance gains with multiple GPUs. Compared to using a single card, two GPUs will reduce your render times by about 30%, three GPUs by about 50% (half the render time), and four GPUs by about 60%. However, since Iray is using the cards for compute purposes they do not need to be in SLI mode. In fact, SLI can sometimes cause problems so we recommend leaving it disabled if possible.

    A bit of a disappointment if it applies to D|S. Two GPUs only cut render times by about 30% versus one, and you need three to cut your renders in half. I guess it doesn't scale anywhere near directly (i.e., two = 50%, three = 67%, four = 75% time saved). Darn.
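    To make those percentages concrete, here's a quick back-of-the-envelope check in plain Python. The reduction figures are taken straight from the Puget quote above; the "ideal" column is what perfectly proportional scaling would give.

```python
# Render-time reduction per GPU count, per the Puget Systems quote, versus
# ideal proportional scaling (n GPUs = 1/n of the single-GPU render time).
puget_reduction = {1: 0.00, 2: 0.30, 3: 0.50, 4: 0.60}  # fraction of time saved

for n, saved in puget_reduction.items():
    reported_time = 1.0 - saved   # relative render time vs. one GPU
    ideal_time = 1.0 / n          # perfectly proportional scaling
    print(f"{n} GPU(s): reported {reported_time:.2f}x the time "
          f"({1.0 / reported_time:.2f}x speedup), ideal {ideal_time:.2f}x the time")
```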

  • ebergerly Posts: 3,255
    drzap said:

    Some are more efficient than others (Octane), but the more GPUs, the more performance. If you are rendering with CPUs, the more cores, the better the performance. Daz isn't some special case where the rules don't apply.

    True, but "more" and "better" are relatively meaningless until you specify how much more and how much better, right? That's my point. Scaling linearly only means that it's linear, not directly proportional.

    Linear could mean 1 GPU renders at 100% speed, 2 renders at 99% speed, 3 renders at 98% speed, and so on.
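    A tiny illustration of that distinction, with made-up numbers purely to show that "linear" and "directly proportional" are not the same claim:

```python
# Two speedup curves that are both linear in the GPU count, but only the
# first is directly proportional. The 0.01 slope is invented for illustration.
def proportional_speedup(n_gpus):
    return float(n_gpus)              # 2 GPUs = 2x, 3 GPUs = 3x, ...

def linear_but_nearly_flat_speedup(n_gpus):
    return 1.0 + 0.01 * (n_gpus - 1)  # still linear, but each GPU adds ~1%

for n in (1, 2, 3, 4):
    print(n, proportional_speedup(n), round(linear_but_nearly_flat_speedup(n), 2))
```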

  • drzap Posts: 795
    ebergerly said:
    drzap said:

    Some are more efficient than others (Octane), but the more GPUs, the more performance. If you are rendering with CPUs, the more cores, the better the performance. Daz isn't some special case where the rules don't apply.

    True, but "more" and "better" are relatively meaningless until you specify how much more and how much better, right? That's my point. Scaling linearly only means that it's linear, not directly proportional. 

    Linear could mean 1 GPU renders at 100% speed, 2 renders at 99% speed, 3 renders at 98% speed, and so on.

    Umm... first your point was that there was no data (or not enough) about multi-GPU performance. Then your point was that unless it is done in Daz, a benchmark isn't useful. This new point brings you back full circle. Now what was your point again? frown

  • ebergerly Posts: 3,255
    edited July 2017

    drzap, I'm not following you. What do you mean full circle?

    All I'm saying is: has anybody confirmed that D|S's Iray performance on real-life scenes matches those other industry Iray results? We have all these supercomputers, but has anybody rendered a scene, turned off one of the GPUs, and re-rendered to see the benefits? When you make a statement like "Daz isn't some special case where the rules don't apply," how do you know? Do you have some test results?

    Seems reasonable, doesn't it? I did something similar a few weeks ago when I replaced my old POC GPU with a GTX 1070. I rendered a scene, enabled the new GPU, and re-rendered, then posted the results. Has anyone done similar before going out and spending big $$ on hardware? 

    https://www.daz3d.com/forums/discussion/168821/iray-render-time-comparison-w-gtx-1070#latest

    Post edited by ebergerly on
  • drzap Posts: 795
    edited July 2017
    ebergerly said:

    drzap, I'm not following you. What do you mean full circle?

     

    by "full circle" i mean this.  your first point was: there's not enough data.  answer: i give you data.  your second point: the data is useless because it's not DAZ specific.   answer: it doesn't need to be.  Daz uses iRay and iRay scales linearly according to tests.   Your third point:  "linear is meaningless unless you specify how much".  Here you have come full circle.  Answer:  I give you data.

     

    And to answer your question, yes, I looked at benchmarks and tests very similar to what I provided you. They are far more thorough and precise than you can expect to get on this forum. This is the only reasonable way to spend money, IMO.


    Post edited by drzap on
  • ebergerly Posts: 3,255

    So you're saying you did see benchmarks specific to D|S, and that's how you know it's not a special case? 

    If so, where did you see them? 

    I feel like I'm pulling teeth here smiley

  • drzap Posts: 795
    ebergerly said:

     

    So you're saying you did see benchmarks specific to D|S, and that's how you know it's not a special case? 

    I feel like I'm pulling teeth here smiley

     

    I'm saying you don't need benchmarks specific to DAZ if you're only evaluating the Iray renderer. Just like I don't need to test V-Ray in every one of the software packages that implement V-Ray, or Arnold in every package that connects to the Arnold renderer. If there is a difference, it's not significant enough to mean anything. Perhaps you don't have much experience outside of Daz, but you can trust the 30% measurement made by Puget (which is significant; it's not Octane, but it's definitely a big difference if you are a professional).

  • Daz Jack Tomalin Posts: 13,344
    ebergerly said:

    So you're saying you did see benchmarks specific to D|S, and that's how you know it's not a special case? 

    If so, where did you see them? 

    I feel like I'm pulling teeth here smiley

    https://www.daz3d.com/forums/discussion/53771/iray-starter-scene-post-your-benchmarks#latest

    Any good?

  • ebergerly Posts: 3,255

    Okay, well as much as I want to trust you smiley....

    Once I get my Ryzen machine built this week, I'll do some comparisons using the same scene like I did before. And when I get my second GTX 1070 I'll do it again. So people will have actual numbers. 

    And for those with 800 lanes of PCIe and 16 CPUs with 128 threads, it would be nice if you could also take some time and perform some render tests to see how much benefit those technologies provide. Same with SSDs, and so on.

    It's really not asking a lot, right? It takes all of 15 minutes to render a scene, then turn off a GPU and try again. 

    It would be even nicer if we could agree on a standard scene, say something from the store, that might tax your system and we can get numbers from different systems.  
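    For anyone posting numbers, something like the following would be enough to turn the raw timings into comparable figures; the configurations and times below are placeholders, meant to be replaced with whatever your own renders report:

```python
# Given render times (seconds) for the same scene on different hardware
# configurations, report each one's speedup over the single-GPU baseline.
# The entries below are placeholders, not measured results.
times = {
    "1x GPU": 900.0,   # baseline
    "2x GPU": 480.0,
    "CPU only": 3600.0,
}

baseline = times["1x GPU"]
for config, seconds in sorted(times.items(), key=lambda kv: kv[1]):
    print(f"{config}: {seconds:.0f} s, {baseline / seconds:.2f}x the 1x GPU speed")
```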

  • ebergerly Posts: 3,255

    WOWOWOWW !!!!!!!!!!! IT EXISTS !!!!!!!!!!!!!!

    Mr. Tomalin, you're awesome !!! 

    I'm in nerd heaven smiley

    Thanks much

  • Gator Posts: 1,294
    edited July 2017
    ebergerly said:

    Okay, well as much as I want to trust you smiley....

    Once I get my Ryzen machine built this week, I'll do some comparisons using the same scene like I did before. And when I get my second GTX 1070 I'll do it again. So people will have actual numbers. 

    And for those with 800 lanes of PCIe and 16 CPUs with 128 threads, it would be nice if you could also take some time and perform some render tests to see how much benefit those technologies provide. Same with SSDs, and so on.

    It's really not asking a lot, right? It takes all of 15 minutes to render a scene, then turn off a GPU and try again. 

    It would be even nicer if we could agree on a standard scene, say something from the store, that might tax your system and we can get numbers from different systems.  

    Personally, I think you'd be better off with a single 1080 Ti vs. two 1070's. 

    Check out the benchmarks. You'll probably have similar Iray performance, much more VRAM for Iray rendering (11 GB vs 8 GB), and similar or better performance in games, and you won't be dependent on SLI for that performance.

    Post edited by Gator on
  • ebergerly Posts: 3,255
    edited July 2017

    Okay, after looking at the wonderful thread that Mr. Tomalin referenced, I learned that my render times with my GTX 1070 match almost exactly with others' for the reference scene that Sickleyield produced, and that's around 3 minutes, 15 seconds (with Optix enabled).

    Now if I added a second GTX 1070, I'd get about a 50% improvement (down to 1 min 35 sec) 

    Now if I replaced it with a GTX 1080 ti, I would get almost a 40% improvement (down to 1 min, 53 sec).

    If I replaced it with dual GTX 1080ti's, I'd get an additional 30% improvement (down to 1 minute total).  

    All of that assumes I'm reading the results correctly. And it seems to be at odds with the Puget Systems results which said:

    "...two GPUs will reduce your render times by about 30%, "

    If we can believe the results, instead of 30% it's closer to 50%, which I think agrees with what others have said here: a second card gets you closer to 50%.
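    For anyone checking my math, here's the same arithmetic spelled out; the 3:15 baseline is my GTX 1070 time from the benchmark thread, and the other times are the ones I just quoted above:

```python
# Working backward from the render times quoted above: what overall time
# reduction versus the 3:15 (195 s) single-GTX-1070 baseline does each imply?
baseline = 3 * 60 + 15  # seconds

claimed_times = {
    "2x GTX 1070": 1 * 60 + 35,
    "1x GTX 1080 Ti": 1 * 60 + 53,
    "2x GTX 1080 Ti": 60,
}

for config, seconds in claimed_times.items():
    reduction = 1.0 - seconds / baseline
    print(f"{config}: {seconds} s, ~{reduction:.0%} less time than one GTX 1070")
```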

    Post edited by ebergerly on
  • Tobor Posts: 2,300

    Someone mentioned CPUs: If you can avoid using yours, do so. Unlike the GPU in your graphics card, even Xeon and non-consumer "workstation" CPUs aren't designed to be pegged at 100% utilization for hours on end. I burned out a perfectly good Xeon-based high-end Dell workstation by using the CPU for long renders. Even though the CPUs and motherboard never went over their rated heat limits, the added heat over weeks and months took its toll. 

    I suppose you could augment any CPU use with a favorable heat exchange mechanism, such as water-cooling. But that can be expensive. It might be cheaper and safer to simply add another 1070.

    I'm not sure that TDP has any real benefit in calculating longevity, as it really comes down to how well the heat is removed from the relatively small surface area of the CPU. Fan cooling over a basic heatsink doesn't really remove a lot of heat. Sixty-five watts over a two-inch-square area can easily produce >200 degree F temperatures.

    All modern motherboards have sensors for detecting CPU temperature. You'll want to check that during your renders. But do consider that it isn't enough that the temperature is under its rated maximum. You also have to consider how many hours a day the CPU is generating that much heat.
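    For a rough sense of the numbers, here is a minimal sketch of the usual steady-state estimate, junction temperature = ambient + power x thermal resistance; the thermal-resistance figures below are illustrative guesses, not measurements of any particular cooler:

```python
# Rough steady-state CPU temperature: T_cpu = T_ambient + P * R_theta, where
# R_theta is the junction-to-ambient thermal resistance in degrees C per watt.
# The resistance values below are illustrative guesses, not measured data.
def cpu_temp_c(power_w, r_theta_c_per_w, ambient_c=25.0):
    return ambient_c + power_w * r_theta_c_per_w

def c_to_f(celsius):
    return celsius * 9.0 / 5.0 + 32.0

for cooler, r_theta in [("basic heatsink + fan", 1.2),
                        ("good tower cooler", 0.5),
                        ("water-cooling loop", 0.25)]:
    t = cpu_temp_c(65.0, r_theta)
    print(f"65 W with {cooler}: ~{t:.0f} C (~{c_to_f(t):.0f} F)")
```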

  • kyoto kid Posts: 41,023
    edited July 2017
    ebergerly said:
    kyoto kid said:

    ..letsee.

    Dual 2.6 GHz Haswell 8 core Xeons on a dual socket MB supporting 128 GB of quad channel DDR4 and dual 1080 Ti's - plus dual SSDs (one for boot/applications the other for the content library and 2 2 GB Storage HDDs all running on W7 Pro.  Think I'll stay with that.

    Kyoto Kid,

    Wow you spent some serious cash on your hardware smiley

    So I'm curious, in hindsight, in terms of price to performance ratio, do you think that you could have spent significantly less and gotten similar performance? I'm never sure where the point is that, say, a 50% increase in hardware cost gets you a 50% or more increase in performance. For example, yeah, we could buy 64 GB or even 128 GB of RAM, but practically how often do you really need it?

    And one other option that is rarely (ie, never) discussed is something I'm guilty of, and that is not managing our scenes very well. I mean, if we spend some time managing our scenes in D|S to be smaller and require less resources, often that can improve performance the same or more than spending a boatload of money on hardware. 

    I guess that's always the balance. Hardware vs. Scene Management vs. how long we're willing to wait for stuff to happen. 

    I wish there was more data on how D|S responds with different hardware combinations, compared to stuff like scene management. Once I get my new rig together I'll try to do some comparison tests, say with 1 GPU vs. 2 GPU's on the same scene. Right now there's lots of "oh, this hardware is better", but no real data on how much better and whether it's worth the cost. 

    ...sadly not quite a reality yet, but this is pretty much the configuration I settled on for my next build.  Yeah the two Xeons will cost around 1,400$ alone.  Moving to Threadripper will mean W10 and I have my reasons for staying with W7 Pro.  I would rather just throw raw power at the situation instead of spending a great deal of time manually reprocessing texture resolutions in a 2D programme for a highly involved scene. My objective is creating large format super high quality images for gallery printing and publication.  Ever see the works of a former member here named AlphaChannel? Yeah, one of my influences.  I just don't have a steady enough hand anymore for that type of postwork which is why I need most of the finishing accomplished in the render pass.

    When I do a gritty city scene for example, I like to "dirty" things up to make it look "lived in" which means lots of additional polys and textures.

    If it's solely Daz Studio & Iray, going overkill on CPU and RAM will buy you very little, if anything, noticeable.

    You have 48 GB, which is good (me too).  Haven't had problems there.  Video cards is where you want to spend your money.  From what I've read here, mainly from Sickleyield, Iray scales very well with additional cards and is very much worth it.

    I haven't done Sickleyield's test again, but with some of my larger scenes I render, my twin EVGA Hybrid 1080 Ti (watercooled) edges out my air cooled twin Titan X Pascals.  Unfortunately with air cooling, the top Titan only runs base clock, as it's pulling air above the lower card which is quite warm.  Performance difference is only about 4%, but the CPU in that system is much slower too (like less than 1/2).

    The system is designed for working in Daz, Carrara, and Vue Infinite, hence the dual 8-core CPUs and boatload of memory, as neither of the latter two support GPU rendering but both have excellent biased engines.

    Would love to build around the forthcoming 32 core Epyc 7105 (mmmm....8 channel memory 128 PCI lanes, and 64 CPU threads) but that means running in Linux which is not supported by Daz and most other 3D software companies. Too many instabilities running Daz in Wine to deal with and wouldn't be able to run either Carrara or Vue.

    Post edited by kyoto kid on
  • ebergerly Posts: 3,255

    Gator said:

    Personally, I think you'd be better off with a single 1080 Ti vs. two 1070's. 

    Yeah, but I'd probably just buy a 1080 Ti and add it to the existing 1070 in parallel. And use my 1070 for my three monitors, I guess, and to help with the rendering.

    I assume they'd work okay together, assuming I'm not using SLI (which Iray doesn't like).

  • ebergerly Posts: 3,255

    By the way, Kyoto Kid, is that animation with the girl and the bear something you did? I just love it. smiley  It's brilliant

  • kyoto kid Posts: 41,023
    edited July 2017

    ...wish it was. That's from the film Brave. It would probably toast my current 12 GB system to animate that.

    Still working on creating that hair in both LAMH and Garibaldi, but it's slow going to get that kind of depth (and it's only renderable in 3DL).

    Pixar wrote custom software and built a custom system just for developing her hair.  It also took them about three years.

    Post edited by kyoto kid on
  • ebergerly Posts: 3,255

    Wow, I'm embarrassed... I actually saw that film quite a while ago and loved it and told everyone about it.

    Geez, I must be getting old or something. 

  • kyoto kid Posts: 41,023
    edited July 2017
    ...actually created a pretty good approximation of her a year or so ago using Julie5 as the starting base as well as a number of merchant morph resource kits. For the hair I layered two instances of 3Dream's Bolina Hair. I'll post one of the pics when I get home later.
    Post edited by kyoto kid on
  • ebergerly Posts: 3,255

    Did you try VWD for the hair? Or maybe the cloth sim in Blender and composite it on the DAZ render or something? 

  • ebergerly Posts: 3,255

    Anyway, back to the topic at hand...

    Please, someone, tell me not to go out and buy a GTX 1080 ti huh? Please? They're crazy expensive, around $750. And while I thought that was way overpriced, if you look at the history on partpicker or whatever it says that's about the average price since it came on the market. 

    And really, I'll only cut my render time by maybe 60%. So a 10 minute render will become a 4 minute render. Big woop. Now if 10 minutes dropped down to 1 minute, now you're talking. 

    Okay, never mind. I just convinced myself to stick with my GTX 1070. I'll wait until the bitcoin nonsense dies off, or they come out with a headless replacement for the bitcoiners and the market gets saturated with used 1080 Tis.

  • Gator Posts: 1,294
    edited July 2017
    ebergerly said:

    Anyway, back to the topic at hand...

    Please, someone, tell me not to go out and buy a GTX 1080 ti huh? Please? They're crazy expensive, around $750. And while I thought that was way overpriced, if you look at the history on partpicker or whatever it says that's about the average price since it came on the market. 

    And really, I'll only cut my render time by maybe 60%. So a 10 minute render will become a 4 minute render. Big woop. Now if 10 minutes dropped down to 1 minute, now you're talking. 

    Okay, never mind. I just convinced myself to stick with my GTX 1070. I'll wait until the bitcoin nonsense dies off, or they come out with a headless replacement for the bitcoiners and the market gets saturated with used 1080 Tis.

    Well you ain't gonna get supercomputer status with a 1070!  wink

     

    With the current demand, you should be able to get near what you paid on the 1070. And prices aren't that bad; I just saw an air-cooled one a few days ago for $650. I bought my water-cooled one for $809 a week or so ago.

    Post edited by Gator on
  • Gator Posts: 1,294
    kyoto kid said:
    ebergerly said:
    kyoto kid said:

    ..letsee.

    Dual 2.6 GHz Haswell 8 core Xeons on a dual socket MB supporting 128 GB of quad channel DDR4 and dual 1080 Ti's - plus dual SSDs (one for boot/applications the other for the content library and 2 2 GB Storage HDDs all running on W7 Pro.  Think I'll stay with that.

    Kyoto Kid,

    Wow you spent some serious cash on your hardware smiley

    So I'm curious, in hindsight, in terms of price to performance ratio, do you think that you could have spent significantly less and gotten similar performance? I'm never sure where the point is that, say, a 50% increase in hardware cost gets you a 50% or more increase in performance. For example, yeah, we could buy 64 GB or even 128 GB of RAM, but practically how often do you really need it?

    And one other option that is rarely (ie, never) discussed is something I'm guilty of, and that is not managing our scenes very well. I mean, if we spend some time managing our scenes in D|S to be smaller and require less resources, often that can improve performance the same or more than spending a boatload of money on hardware. 

    I guess that's always the balance. Hardware vs. Scene Management vs. how long we're willing to wait for stuff to happen. 

    I wish there was more data on how D|S responds with different hardware combinations, compared to stuff like scene management. Once I get my new rig together I'll try to do some comparison tests, say with 1 GPU vs. 2 GPU's on the same scene. Right now there's lots of "oh, this hardware is better", but no real data on how much better and whether it's worth the cost. 

    ...sadly not quite a reality yet, but this is pretty much the configuration I settled on for my next build.  Yeah the two Xeons will cost around 1,400$ alone.  Moving to Threadripper will mean W10 and I have my reasons for staying with W7 Pro.  I would rather just throw raw power at the situation instead of spending a great deal of time manually reprocessing texture resolutions in a 2D programme for a highly involved scene. My objective is creating large format super high quality images for gallery printing and publication.  Ever see the works of a former member here named AlphaChannel? Yeah, one of my influences.  I just don't have a steady enough hand anymore for that type of postwork which is why I need most of the finishing accomplished in the render pass.

    When I do a gritty city scene for example, I like to "dirty" things up to make it look "lived in" which means lots of additional polys and textures.

    If it's solely Daz Studio & Iray, going overkill on CPU and RAM will buy you very little, if anything, noticeable.

    You have 48 GB, which is good (me too).  Haven't had problems there.  Video cards is where you want to spend your money.  From what I've read here, mainly from Sickleyield, Iray scales very well with additional cards and is very much worth it.

    I haven't done Sickleyield's test again, but with some of my larger scenes I render, my twin EVGA Hybrid 1080 Ti (watercooled) edges out my air cooled twin Titan X Pascals.  Unfortunately with air cooling, the top Titan only runs base clock, as it's pulling air above the lower card which is quite warm.  Performance difference is only about 4%, but the CPU in that system is much slower too (like less than 1/2).

    The system is designed for working in Daz, Carrara, and Vue Infinite, hence the dual 8-core CPUs and boatload of memory, as neither of the latter two support GPU rendering but both have excellent biased engines.

    Would love to build around the forthcoming 32 core Epyc 7105 (mmmm....8 channel memory 128 PCI lanes, and 64 CPU threads) but that means running in Linux which is not supported by Daz and most other 3D software companies. Too many instabilities running Daz in Wine to deal with and wouldn't be able to run either Carrara or Vue.

    Yeah, I remember you used Carrara & Vue. I mentioned it as a Daz Studio machine with ebergerly in mind, in case he didn't realize that so much CPU power would be overkill and would likely see very little return in DS.

  • frank0314 Posts: 14,013
    ebergerly said:

    Anyway, back to the topic at hand...

    Please, someone, tell me not to go out and buy a GTX 1080 ti huh? Please? They're crazy expensive, around $750. And while I thought that was way overpriced, if you look at the history on partpicker or whatever it says that's about the average price since it came on the market. 

    And really, I'll only cut my render time by maybe 60%. So a 10 minute render will become a 4 minute render. Big woop. Now if 10 minutes dropped down to 1 minute, now you're talking. 

    Okay, never mind. I just convinced myself to stick with my GTX 1070. I'll wait until the bitcoin nonsense dies off, or they come out with a headless replacement for the bitcoiners and the market gets saturated with used 1080 Tis.

    If you want to select a card that best supports your needs, and you want to know how it compares to others, check out GPU benchmarks.

  • kyoto kid Posts: 41,023
    ebergerly said:

    Did you try VWD for the hair? Or maybe the cloth sim in Blender and composite it on the DAZ render or something? 

    ...don't have VWD, and never could wrap my brain around Blender's UI and setup. I feel I have a better chance to get it down more accurately with one of the strand-based hair generators, as that is the route Pixar took.
  • kyoto kid Posts: 41,023
    edited July 2017
    kyoto kid said:
    ebergerly said:
    kyoto kid said:

    ..letsee.

    Dual 2.6 GHz Haswell 8 core Xeons on a dual socket MB supporting 128 GB of quad channel DDR4 and dual 1080 Ti's - plus dual SSDs (one for boot/applications the other for the content library and 2 2 GB Storage HDDs all running on W7 Pro.  Think I'll stay with that.

    Kyoto Kid,

    Wow you spent some serious cash on your hardware smiley

    So I'm curious, in hindsight, in terms of price to performance ratio, do you think that you could have spent significantly less and gotten similar performance? I'm never sure where the point is that, say, a 50% increase in hardware cost gets you a 50% or more increase in performance. For example, yeah, we could buy 64 GB or even 128 GB of RAM, but practically how often do you really need it?

    And one other option that is rarely (ie, never) discussed is something I'm guilty of, and that is not managing our scenes very well. I mean, if we spend some time managing our scenes in D|S to be smaller and require less resources, often that can improve performance the same or more than spending a boatload of money on hardware. 

    I guess that's always the balance. Hardware vs. Scene Management vs. how long we're willing to wait for stuff to happen. 

    I wish there was more data on how D|S responds with different hardware combinations, compared to stuff like scene management. Once I get my new rig together I'll try to do some comparison tests, say with 1 GPU vs. 2 GPU's on the same scene. Right now there's lots of "oh, this hardware is better", but no real data on how much better and whether it's worth the cost. 

    ...sadly not quite a reality yet, but this is pretty much the configuration I settled on for my next build.  Yeah the two Xeons will cost around 1,400$ alone.  Moving to Threadripper will mean W10 and I have my reasons for staying with W7 Pro.  I would rather just throw raw power at the situation instead of spending a great deal of time manually reprocessing texture resolutions in a 2D programme for a highly involved scene. My objective is creating large format super high quality images for gallery printing and publication.  Ever see the works of a former member here named AlphaChannel? Yeah, one of my influences.  I just don't have a steady enough hand anymore for that type of postwork which is why I need most of the finishing accomplished in the render pass.

    When I do a gritty city scene for example, I like to "dirty" things up to make it look "lived in" which means lots of additional polys and textures.

    If it's solely Daz Studio & Iray, going overkill on CPU and RAM will buy you very little, if anything, noticeable.

    You have 48 GB, which is good (me too).  Haven't had problems there.  Video cards is where you want to spend your money.  From what I've read here, mainly from Sickleyield, Iray scales very well with additional cards and is very much worth it.

    I haven't done Sickleyield's test again, but with some of my larger scenes I render, my twin EVGA Hybrid 1080 Ti (watercooled) edges out my air cooled twin Titan X Pascals.  Unfortunately with air cooling, the top Titan only runs base clock, as it's pulling air above the lower card which is quite warm.  Performance difference is only about 4%, but the CPU in that system is much slower too (like less than 1/2).

    The system is designed for working in Daz, Carrara, and Vue Infinite, hence the dual 8-core CPUs and boatload of memory, as neither of the latter two support GPU rendering but both have excellent biased engines.

    Would love to build around the forthcoming 32 core Epyc 7105 (mmmm....8 channel memory 128 PCI lanes, and 64 CPU threads) but that means running in Linux which is not supported by Daz and most other 3D software companies. Too many instabilities running Daz in Wine to deal with and wouldn't be able to run either Carrara or Vue.

    Yeah, I remember you used Carrara & Vue. I mentioned it as a Daz Studio machine with ebergerly in mind, in case he didn't realize that so much CPU power would be overkill and would likely see very little return in DS.

    ...a decent CPU and a decent amount of system memory are a good backup should a scene dump to the CPU. As I understand it, W10 reserves a noticeable portion of VRAM on consumer cards, so that 8 GB 1070 will only have about 6.4 GB available for rendering (this apparently is not the case for the more expensive Quadro line). I would go with no less than a 6-core CPU (12 threads) and 32 GB. The other downside of Ryzen is that it only supports 2 memory channels instead of 4 (and there is a difference in performance). Threadripper will be the first AMD CPU to support 4 memory channels. At a projected cost of $500 less than Intel's 16-core Skylake-X (and with 64 PCIe lanes), it would be worth the added cost in the long run.
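    A minimal sketch of that VRAM arithmetic (the ~20% reservation is simply the figure implied by the 8 GB to ~6.4 GB estimate above, and the scene sizes are hypothetical), showing when a scene would dump to the CPU:

```python
# Usable VRAM after the OS reservation described above, and whether a scene
# of a given size still fits on the GPU or falls back to CPU rendering.
# The 20% reservation and the scene footprints are illustrative assumptions.
def usable_vram_gb(card_vram_gb, reserved_fraction=0.20):
    return card_vram_gb * (1.0 - reserved_fraction)

card_gb = 8.0                        # e.g. a GTX 1070
available = usable_vram_gb(card_gb)  # ~6.4 GB
print(f"{card_gb:.0f} GB card -> ~{available:.1f} GB usable for rendering")

for scene_gb in (4.0, 6.0, 7.0):     # hypothetical scene sizes
    target = "GPU" if scene_gb <= available else "CPU fallback"
    print(f"{scene_gb:.0f} GB scene: {target}")
```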
    Post edited by kyoto kid on
  • Gator Posts: 1,294
    kyoto kid said:
    kyoto kid said:
    ebergerly said:
    kyoto kid said:

    ..letsee.

    Dual 2.6 GHz Haswell 8 core Xeons on a dual socket MB supporting 128 GB of quad channel DDR4 and dual 1080 Ti's - plus dual SSDs (one for boot/applications the other for the content library and 2 2 GB Storage HDDs all running on W7 Pro.  Think I'll stay with that.

    Kyoto Kid,

    Wow you spent some serious cash on your hardware smiley

    So I'm curious, in hindsight, in terms of price to performance ratio, do you think that you could have spent significantly less and gotten similar performance? I'm never sure where the point is that, say, a 50% increase in hardware cost gets you a 50% or more increase in performance. For example, yeah, we could buy 64 GB or even 128 GB of RAM, but practically how often do you really need it?

    And one other option that is rarely (ie, never) discussed is something I'm guilty of, and that is not managing our scenes very well. I mean, if we spend some time managing our scenes in D|S to be smaller and require less resources, often that can improve performance the same or more than spending a boatload of money on hardware. 

    I guess that's always the balance. Hardware vs. Scene Management vs. how long we're willing to wait for stuff to happen. 

    I wish there was more data on how D|S responds with different hardware combinations, compared to stuff like scene management. Once I get my new rig together I'll try to do some comparison tests, say with 1 GPU vs. 2 GPU's on the same scene. Right now there's lots of "oh, this hardware is better", but no real data on how much better and whether it's worth the cost. 

    ...sadly not quite a reality yet, but this is pretty much the configuration I settled on for my next build.  Yeah the two Xeons will cost around 1,400$ alone.  Moving to Threadripper will mean W10 and I have my reasons for staying with W7 Pro.  I would rather just throw raw power at the situation instead of spending a great deal of time manually reprocessing texture resolutions in a 2D programme for a highly involved scene. My objective is creating large format super high quality images for gallery printing and publication.  Ever see the works of a former member here named AlphaChannel? Yeah, one of my influences.  I just don't have a steady enough hand anymore for that type of postwork which is why I need most of the finishing accomplished in the render pass.

    When I do a gritty city scene for example, I like to "dirty" things up to make it look "lived in" which means lots of additional polys and textures.

    If it's solely Daz Studio & Iray, going overkill on CPU and RAM will buy you very little, if anything, noticeable.

    You have 48 GB, which is good (me too).  Haven't had problems there.  Video cards is where you want to spend your money.  From what I've read here, mainly from Sickleyield, Iray scales very well with additional cards and is very much worth it.

    I haven't done Sickleyield's test again, but with some of my larger scenes I render, my twin EVGA Hybrid 1080 Ti (watercooled) edges out my air cooled twin Titan X Pascals.  Unfortunately with air cooling, the top Titan only runs base clock, as it's pulling air above the lower card which is quite warm.  Performance difference is only about 4%, but the CPU in that system is much slower too (like less than 1/2).

    The system is designed for working in Daz, Carrara, and Vue Infinite, hence the dual 8-core CPUs and boatload of memory, as neither of the latter two support GPU rendering but both have excellent biased engines.

    Would love to build around the forthcoming 32 core Epyc 7105 (mmmm....8 channel memory 128 PCI lanes, and 64 CPU threads) but that means running in Linux which is not supported by Daz and most other 3D software companies. Too many instabilities running Daz in Wine to deal with and wouldn't be able to run either Carrara or Vue.

    Yeah, I remember you used Carrara & Vue. I mentioned it as a Daz Studio machine with ebergerly in mind, in case he didn't realize that so much CPU power would be overkill and would likely see very little return in DS.

     

    ...a decent CPU and a decent amount of system memory are a good backup should a scene dump to the CPU. As I understand it, W10 reserves a noticeable portion of VRAM on consumer cards, so that 8 GB 1070 will only have about 6.4 GB available for rendering (this apparently is not the case for the more expensive Quadro line). I would go with no less than a 6-core CPU (12 threads) and 32 GB. The other downside of Ryzen is that it only supports 2 memory channels instead of 4 (and there is a difference in performance). Threadripper will be the first AMD CPU to support 4 memory channels. At a projected cost of $500 less than Intel's 16-core Skylake-X (and with 64 PCIe lanes), it would be worth the added cost in the long run.

    I agree for a "supercomputer" for Daz, at least 32 GB and probably the 6-cores.  I found the 6 cores searching, what's not really clear is if more cores help significantly more.  You can actually get worse performance for the dollar, if you have a CPU with many cores but a lower clock vs. a processor that's much faster per core depending upon the application.

    https://helpdaz.zendesk.com/hc/en-us/articles/207530513-System-Recommendations-for-DAZ-Studio-4-

    Win 7 & 8 reserve RAM too, just not as much. 

    Waiting to see what happens with the Ryzen Threadripper, hopefully it doesn't disappoint.

  • kyoto kid Posts: 41,023
    edited July 2017
    ...the amount is almost negligible in comparison, plus there are other "features" of W10 I find less than desirable. I've been working with Windows since the 3.0 days. Seen good and seen bad. From what I have been reading in tech journals, it looks as if it may become worse.
    Post edited by kyoto kid on