Don't be Jealous...My New Supercomputer


Comments

  • ebergerlyebergerly Posts: 3,255
    edited July 2017
    frank0314 said:

    If you want to select a card that best supports your needs and you want to know how it compares to others check out benchmarks for GPU's.

    Thanks, but as we already discussed I think the industry benchmarks don't necessarily tell you much about how the cards perform with D|S Iray. 

    For example, the Passmark results you referenced show the following scores:

    GTX 1080 ti: 13383

    GTX 1070:   11026

    That tells you nothing more than the GTX 1080ti is like 21.4% "gooder" than the 1070. 

    In fact, with a reference scene, the 1070 renders it in 3 minutes 14 seconds, and the 1080ti renders it in 1 minute 53 seconds.

    That means the 1080ti cuts the render time by about 40%, not 21.4%. 
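    If anyone wants to check those two percentages themselves, here's a minimal Python sketch using only the numbers quoted above (the scores and render times are the ones from this post; nothing else is assumed):

    ```python
    # Benchmark score difference vs. actual Iray render-time reduction,
    # using only the figures quoted in this post.
    passmark = {"GTX 1070": 11026, "GTX 1080 Ti": 13383}
    seconds = {"GTX 1070": 3 * 60 + 14, "GTX 1080 Ti": 1 * 60 + 53}  # reference scene

    score_gain = (passmark["GTX 1080 Ti"] / passmark["GTX 1070"] - 1) * 100
    time_cut = (1 - seconds["GTX 1080 Ti"] / seconds["GTX 1070"]) * 100

    print(f"Passmark score advantage: {score_gain:.1f}%")  # ~21.4%
    print(f"Render time reduction:    {time_cut:.1f}%")    # ~41.8%, i.e. the "about 40%" above
    ```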

    Post edited by ebergerly on
  • ebergerlyebergerly Posts: 3,255
    edited July 2017

    Well you ain't gonna get supercomputer status with a 1070!  wink

     

    With the current demand, you should be able to get near what you paid on the 1070.  And prices aren't that bad; I just saw an air-cooled one a few days ago for $650.  I bought my water-cooled card for $809 a week or so ago.

    Not a problem. Status is irrelevant in these matters. To me at least. And honestly a 40-50% improvement in render times isn't really that earth shattering IMO. People act like "OMG the 1080ti is an absolute beast !!", but based on the numbers people are actually getting with D|S renders, is it really that big a deal to go from a 1 hour render to a 35 minute render? You're still off twiddling your thumbs forever waiting for the render to finish. Or even a 10 minute render becoming a 4 minute render. Big woop.  

    Maybe that's a big deal to some, and to me it's certainly nice to have, but for almost $800?? I don't think so. I'm certainly not doing this for any sort of time-sensitive production, just a hobby, so I can be off in Blender doing other stuff while it's rendering and it really doesn't matter.   

    Again, I think the never-discussed issue of scene management (i.e., cutting down your scene to a size that renders much quicker and is much easier to navigate) can do a lot to make D|S life MUCH easier, without spending big bucks on a 40% improvement. I guarantee that with some decent scene management and good choices of render settings and so on, people can do a LOT to knock down render times as well as make their scene navigation a lot more responsive.  

    Post edited by ebergerly on
  • ebergerlyebergerly Posts: 3,255
    edited July 2017

    On the other hand, when I ditched my POC GT 730 for a GTX 1070, I got a 75% improvement in render times. And that only cost me just under $400. And that kind of price/performance seems far more reasonable. To me at least. 

    I dunno, just trying to break through the tons of hype surrounding this hardware and look at real numbers. 

    UPDATE:

    Hey it just occurred to me...here are the actual calculations of price ($) to performance (%) for the two cases I mentioned (smaller is better):

    • $400 for 75% improvement: Price/performance = 5.3 (upgrading from junk GPU to 1070)
    • $800 for 40% improvement: Price/performance = 20 (replacing 1070 with 1080ti)

    Wow, that's a pretty stark contrast. Maybe I'll draw a line in the sand. From now on, I'll only accept an upgrade that gives me a price/performance of, say, 10 or less. 
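    For anyone who wants to play with the same arithmetic, here's a minimal Python sketch of that price-to-performance figure (dollars divided by percent improvement in render time; the threshold of 10 is just the line in the sand above):

    ```python
    def price_performance(price_usd, improvement_pct):
        """Dollars paid per percentage point of render-time improvement (smaller is better)."""
        return price_usd / improvement_pct

    THRESHOLD = 10  # the "line in the sand" from this post

    upgrades = {
        "junk GPU -> GTX 1070":    (400, 75),
        "GTX 1070 -> GTX 1080 Ti": (800, 40),
    }

    for name, (price, pct) in upgrades.items():
        ratio = price_performance(price, pct)
        verdict = "worth it" if ratio <= THRESHOLD else "not worth it"
        print(f"{name}: {ratio:.1f} ({verdict})")  # 5.3 and 20.0
    ```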

    Cool. 

    DOUBLE UPDATE:

    So if I buy a GTX 1080ti, and add it in parallel with my GTX 1070, it looks like my render times will improve by around 60% from just the single 1070. And the cost is $800. So....

    • $800 for 60% improvement: Price/performance = 13.3

    BZZTTT !!! Not good enough, if I stick to my goal of 10. Which means I shouldn't have spent the money on a new Ryzen system just so I could add an extra GPU. 
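    Here's a minimal sketch of where that "~60%" estimate can come from, assuming the two cards split the work so their throughputs simply add (an idealized assumption; real multi-GPU scaling is typically a bit lower). The times are the reference-scene numbers from earlier in the thread:

    ```python
    t_1070 = 3 * 60 + 14    # seconds, GTX 1070 alone (reference scene)
    t_1080ti = 1 * 60 + 53  # seconds, GTX 1080 Ti alone

    # If iterations per second simply add, the combined time is the
    # harmonic combination of the two single-card times.
    t_both = 1 / (1 / t_1070 + 1 / t_1080ti)

    improvement = (1 - t_both / t_1070) * 100  # vs. the single 1070
    ratio = 800 / improvement                  # price / performance for the $800 card

    print(f"Combined render time:  {t_both:.0f} s")      # ~71 s
    print(f"Improvement over 1070: {improvement:.0f}%")  # ~63%
    print(f"Price/performance:     {ratio:.1f}")         # ~12.7
    ```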

    Bummer. But that's what happens when you chase the hype. 

    Post edited by ebergerly on
  • drzapdrzap Posts: 795
    edited July 2017
    ebergerly said:

    Bummer. But that's what happens when you chase the hype. 

    I think you said it all earlier.  You're a hobbyist.  And a semi-serious hobbyist at that.  So in your case, you're either just rendering stills for fun or simple animations for fun, and time is not a factor for you.  Like you said, 10 minutes to 4 minutes is no big woop for you.  So for you, it is all hype.  You can afford to go with a mid-performance card like the 1070.  But some people are more serious about their hobby, or they are not hobbyists.  For them, time is more valuable than money.  If I can bring my 10 minute frame times down to 4 minutes, that is a big freakin' woop for me, because I have to render 24 of those frames just to make a second of video.  And since my time is valuable, spending just $800 to regain some time is very cheap.  So it's all a matter of perspective.  That's why there are so many tiers of GPUs.  Lower tiers for those who just want to have a little cheap fun, and bleeding-edge high end for those who need things in a more timely manner (time is money).  If my time were worth $100 an hour, and I saved 60% of that time rendering, the high-end card would pay for itself very quickly.  When the bottom line is saving money, that ain't hype.

    Post edited by drzap on
  • drzapdrzap Posts: 795
    Tobor said:

    Someone mentioned CPUs: If you can avoid using yours, do so. Unlike the GPU in your graphics card, even Xeon and non-consumer "workstation" CPUs aren't designed to be pegged at 100% utilization for hours on end. I burned out a perfectly good Xeon-based high-end Dell workstation by using the CPU for long renders. Even though the CPUs and motherboard never went over their rated heat limits, the added heat over weeks and months took its toll. 

     

    Ummm. no.  That's exactly what Xeons are designed for: 24-hour work cycles.  What do you think is in your bank's transaction servers?  Do online banks take a break?  What about renderfarms?  Check what's in their systems.  Xeons.  That's not to say they won't burn out.  Everything breaks, your Dell being one.  But they were designed to work.  Businesses depend on them to work reliably around the clock (by the way, are you implying consumer gaming graphics cards like the GTX were designed to be pegged for hours on end?), and in the end, you can count on a CPU render for accuracy.  There are some things GPUs can't render yet.  As a backup, there is nothing like a box full of cores.

  • ebergerlyebergerly Posts: 3,255
    edited July 2017

    drzap, my only point is that people should think about what they really need and put numbers to it, rather than just believing the "it's awesome !!" nonsense from folks who don't really know what "awesome" is. I'm guessing that few people actually evaluate price/performance or anything close to it. They scan the industry benchmarks, think that means something, and run off and spend a lot of money. 

    And more importantly, how many people really first step back and think about how they can tweak their scenes to save 60% in render time without buying anything?  

    Okay, a show of hands...how many people really evaluated price/performance and scene management before they upgraded their systems? smiley 

    As an example, I have an indoor scene, basically a box with 3 G3 figures inside, a cutout window with a glass surface, and emissive lights inside. Render time is almost 12 minutes.

    Now if I add a Sunlight to the scene, the render time is just under 20 minutes. So that's a 40% improvement from a single mouse click to disable the Sunlight. One mouse click, versus upgrading to a GTX 1080 Ti and spending $800 to get the same 40% improvement. 

     I'm just suggesting that people think about stuff like that rather than just say "wow it's awesome" and "if you don't get a 1080ti you're not really cool, you're just a lowly hobbyist" smiley

    Post edited by ebergerly on
  • drzapdrzap Posts: 795
    edited July 2017
    ebergerly said:

    than just say "wow it's awesome" and if you don't get a 1080ti you're not really cool, you're just a lowly hobbyist smiley

    You're making it seem like it's either this or that.  Of course scene management is important.  But just because it is, doesn't mean high end hardware is "hype".  You can walk and chew gum at the same time.  At least I can.

    And I would never call you a lowly hobbyist for buying a 1070.  Just about everybody who uses Daz is a hobbyist.  It isn't professional software.

    Post edited by drzap on
  • GatorGator Posts: 1,294
    edited July 2017
    drzap said:
    ebergerly said:

    Bummer. But that's what happens when you chase the hype. 

    I think you said it all earlier.  You're a hobbyist.  And a semi-serious hobbyist at that.  So in your case, you're either just rendering stills for fun or simple animations for fun, and time is not a factor for you.  Like you said, 10 minutes to 4 minutes is no big woop for you.  So for you, it is all hype.  You can afford to go with a mid-performance card like the 1070.  But some people are more serious about their hobby, or they are not hobbyists.  For them, time is more valuable than money.  If I can bring my 10 minute frame times down to 4 minutes, that is a big freakin' woop for me, because I have to render 24 of those frames just to make a second of video.  And since my time is valuable, spending just $800 to regain some time is very cheap.  So it's all a matter of perspective.  That's why there are so many tiers of GPUs.  Lower tiers for those who just want to have a little cheap fun, and bleeding-edge high end for those who need things in a more timely manner (time is money).  If my time were worth $100 an hour, and I saved 60% of that time rendering, the high-end card would pay for itself very quickly.  When the bottom line is saving money, that ain't hype.

    I render complex scenes, usually with 3+ Genesis 3 figures at 4K.  Going from 2 Titan X Maxwells to 2 Titan X Pascals cut my rendering time by 60%. 

    Since my renders were typically going for 7-9 hours, dropping it to 4-5 hours was a pretty big deal.  My new 1080 Ti Hybrids perform a smidge better than the Titan X Pascal.  I'm happy.  smiley

    I don't even have what I'd call a supercomputer; a few here are running four Titan X's.  surprise

     

     

    No offense, but you started off the thread with building a supercomputer for Daz...

    Post edited by Gator on
  • ebergerlyebergerly Posts: 3,255
    edited July 2017

     

    No offense, but you started off the thread with building a supercomputer for Daz...

    Not sure why anyone would take offense to your post...I certainly didn't. But I'm not one to be easily offended. Especially when we're talking facts. Hard to get offended by facts. 

    Anyway, I'm real surprised your renders take over 7 hours. I can't remember ever having a render that takes even an hour with a single GTX 1070. In fact, I just posted about a scene with 3 G3 figures, same as you, in an interior scene, and it took 11 minutes.

    Maybe you're talking about animations that take a long time? I hesitate to ask about scene management for fear someone will get upset, but I'm sure you've looked into stuff like separating your renders into layers/canvases so you aren't re-rendering stuff that doesn't change in each frame?  

    Post edited by ebergerly on
  • drzapdrzap Posts: 795
    ebergerly said:

    Anyway, I'm real surprised your renders take over 7 hours. I can't remember ever having a render that takes even an hour with a single GTX 1070. In fact, I just posted about a scene with 3 G3 figures, same as you, in an interior scene, and it took 11 minutes.

     

    LOL, I'm sorry, but I had a little laugh over your comment.  It highlights the real difference in perspective I was talking about.  I bet you can't imagine ever using up all of your GPU's 8GB either, can you?  Maybe you think anything over 8GB is excessive and hype? laugh  And then you compare your 3 G3 scene with his without any information to compare with (except that you both had 3 G3's).  LOL, good one.  There is nothing wrong with trying to save a buck, especially when this is just your hobby.  But your perspective puts you at a disadvantage, because you seem to dismiss the valid reasons of people who have a different perspective.

     

  • ebergerlyebergerly Posts: 3,255

    Wow, I just checked Newegg to see what a Titan X Pascal is. $1,400 each !!! Which means almost $3,000 for two? And that gives you a 60% improvement in render times? 

    If you're talking price/performance, that comes out to a grand total of 50...

    Wow. That's way out of my ballpark of 10 max. And I was even thinking of dropping it to around 8...smiley 

  • drzapdrzap Posts: 795
    ebergerly said:

    Okay, looks like drzap is going to be chasing me from now on because I contradicted him.

    Chill dude, huh? 

    I don't see where you contradicted me, but I do see where you have drawn obviously wrong conclusions based on your narrow vision.  But I won't chase you.

  • Richard HaseltineRichard Haseltine Posts: 100,481

    If this thread continues to be marred by comments addressing other posters rather than the topic at hand, it will be locked.

  • GatorGator Posts: 1,294
    ebergerly said:

     

    No offense, but you started off the thread with building a supercomputer for Daz...

    Not sure why anyone would take offense to your post...I certainly didn't. But I'm not one to be easily offended. Especially when we're talking facts. Hard to get offended by facts. 

    Anyway, I'm real surprised your renders take over 7 hours. I can't remember ever having a render that takes even an hour with a single GTX 1070. In fact, I just posted about a scene with 3 G3 figures, same as you, in an interior scene, and it took 11 minutes.

    Maybe you're talking about animations that take a long time? I hesitate to ask about scene management for fear someone will get upset, but I'm sure you've looked into stuff like separating your renders into layers/canvases so you aren't re-rendering stuff that doesn't change in each frame?  

    Nope, there's a whole lot of variables that impact render time, also personal preference for what's good enough.  I render at 4K (3840x2160) and usually at 10K iterations.  Many will say 10K is overkill, but I found the default convergence and quality looked grainy to me.  10K may be overkill, as sometimes 5K will look good enough to me, or 8K, whatever.  But I will set up as many scenes as I can and batch them.  It's easier to set them all to 10K and forget it than to inspect each one.

    Also, many emissive surfaces, specular objects, things like that can greatly increase render times.  Could I replace a window for a faster render?  Sure.  But I wouldn't get the subtle reflections like I would in real life if I did that.

  • GatorGator Posts: 1,294
    ebergerly said:

    Wow, I just checked Newegg to see what a Titan X Pascal is. $1,400 each !!! Which means almost $3,000 for two? And that gives you a 60% improvement in render times? 

    If you're talking price/performance, that comes out to a grand total of 50...

    Wow. That's way out of my ballpark of 10 max. And I was even thinking of dropping it to around 8...smiley 

    Got them for $1200 each, but yeah they aren't cheap.  The watercooled 1080 Ti at $800 was a bargain!  laugh

    Buying computing power was never a linear expense...  2x the price isn't 2x the power, 4x the price doesn't equal 4x the power, etc.

    I dunno what you mean by those numbers...  What?

  • ebergerlyebergerly Posts: 3,255

    Scott...those numbers were what I explained before. Basically I take the price of the upgrade (say, $800 for a new GPU), and divide it by the percent improvement in render times. So if a new GPU gives you a 50% improvement in render time, the Price/Performance is 800/50, or 16. 

    Just a way I look at cost to benefit of upgrading my system. So if I decide I'm only willing to pay $100 for a 10 percent improvement in render times, I only consider upgrades with a price/performance of 10 or less. Keeps me from getting too carried away by new technology, which may or may not be cost effective. 

    I assume that professionals who make their living at this do something similar to maximize profit, like any other industry. 

  • ToborTobor Posts: 2,300
    drzap said:

    Ummm. no.  That's exactly what Xeons are designed for:

    Ummm, yes. Even Xeons aren't rated for full-throttle 100% utilization for extended periods without enhanced cooling. Stock, these Precisions come with a fan on a standard heat sink. Not enough for pegging a CPU pedal-to-the-metal 16-20 hours a day. Silly to suggest otherwise. Either bump up your game with liquid cooling, or forget the CPU (my advice) and render GPU only. If you've got 2000+ cores on the GPU you don't really gain a lot by adding the CPU anyway. It's wasted watts.

  • GatorGator Posts: 1,294
    ebergerly said:

    Scott...those numbers were what I explained before. Basically I take the price of the upgrade (say, $800 for a new GPU), and divide it by the percent improvement in render times. So if a new GPU gives you a 50% improvement in render time, the Price/Performance is 800/50, or 16. 

    Just a way I look at cost to benefit of upgrading my system. So if I decide I'm only willing to pay $100 for a 10 percent improvement in render times, I only consider upgrades with a price/performance of 10 or less. Keeps me from getting too carried away by new technology, which may or may not be cost effective. 

    I assume that professionals who make their living at this do something similar to maximize profit, like any other industry. 

    Ahh...  Sure, most do a cost/benefit analysis.  I just never tried to boil it down to a single number like that. 

  • kyoto kidkyoto kid Posts: 41,023
    drzap said:
    Tobor said:

    Someone mentioned CPUs: If you can avoid using yours, do so. Unlike the GPU in your graphics card, even Xeon and non-consumer "workstation" CPUs aren't designed to be pegged at 100% utilization for hours on end. I burned out a perfectly good Xeon-based high-end Dell workstation by using the CPU for long renders. Even though the CPUs and motherboard never went over their rated heat limits, the added heat over weeks and months took its toll. 

     

    Ummm. no.  That's exactly what Xeons are designed for: 24-hour work cycles.  What do you think is in your bank's transaction servers?  Do online banks take a break?  What about renderfarms?  Check what's in their systems.  Xeons.  That's not to say they won't burn out.  Everything breaks, your Dell being one.  But they were designed to work.  Businesses depend on them to work reliably around the clock (by the way, are you implying consumer gaming graphics cards like the GTX were designed to be pegged for hours on end?), and in the end, you can count on a CPU render for accuracy.  There are some things GPUs can't render yet.  As a backup, there is nothing like a box full of cores.

    ...exactly. 

    The same applies to pro-grade GPU cards like the Quadro and FirePro series.  They too are designed to operate at peak output for extended periods of time, unlike the enthusiast-grade GTX/Radeon cards. 

    However, in spite of recent GPU advances, major studios continue to rely on CPU/physical-memory-based render farms due to the extreme demands involved and the accuracy required.  That may eventually change with the introduction of NVLink motherboards for production work, ultra-high-core-count CPUs, and the Volta series GPU/compute cards with 32 GB or more of fast HBM2 VRAM.  Totally out of our league budget-wise, but not for a major film studio looking to reduce their VFX production time.

  • kyoto kidkyoto kid Posts: 41,023
    ebergerly said:

    drzap, my only point is that people should think about what they really need and put numbers to it, rather than just believing the "it's awesome !!" nonsense from folks who don't really know what "awesome" is. I'm guessing that few people actually evaluate price/performance or anything close to it. They scan the industry benchmarks, think that means something, and run off and spend a lot of money. 

    And more importantly, how many people really first step back and think about how they can tweak their scenes to save 60% in render time without buying anything?  

    Okay, a show of hands...how many people really evaluated price/performance and scene management before they upgraded their systems? smiley 

    As an example, I have an indoor scene, basically a box with 3 G3 figures inside, a cutout window with a glass surface, and emissive lights inside. Render time is almost 12 minutes.

    Now if I add a Sunlight to the scene, the render time is just under 20 minutes. So that's a 40% improvement from a single mouse click to disable the Sunlight. One mouse click, versus upgrading to a GTX 1080 Ti and spending $800 to get the same 40% improvement. 

     I'm just suggesting that people think about stuff like that rather than just say "wow it's awesome" and "if you don't get a 1080ti you're not really cool, you're just a lowly hobbyist" smiley

    ...oh I agree, what you have to consider is the type of work you are primarily looking to produce.  As I mentioned earlier, I am looking at creating very high quality work rendered in large pixel formats for gallery purposes.  I also do very involved scenes (I've had up to nine figures in just one scene on a number of occasions).  If I could afford it, I'd go for a pair of 16 GB Quadro P5000s to ensure 90% of my scenes rendering at high quality settings remain in GPU memory.  I also prefer using Iray emissive lighting for existing light props in a scene, which can be a huge resource hog (I still have not got a clean finished render using Stonemason's Urban Future 5 on my current rig, even letting it go all night and into the morning).

    As I am looking at using dual 1080 Ti's, each with 5 GB less than the P5000 (though together costing about as much as a single Titan Xp), having more CPU cores and multi-channel memory as a fallback is important, as it will keep render times more manageable should rendering dump to the CPUs.  As I mentioned, I've had big jobs go into swap mode on my current system, and I can say that is even more excruciating.

    Effectively what I have designed is a combination workstation and mini render farm in one system. For someone who primarily does portraits or simple scenes, yes it is extreme overkill.  For someone like myself who likes to create "epic" level scenes at very crisp resolution, it is a necessity.

  • drzapdrzap Posts: 795
    Tobor said:
    drzap said:

    Ummm. no.  That's exactly what Xeons are designed for:

    Ummm, yes. Even Xeons aren't rated for full-throttle 100% utilization for extended periods without enhanced cooling. Stock, these Precisions come with a fan on a standard heat sink. Not enough for pegging a CPU pedal-to-the-metal 16-20 hours a day. Silly to suggest otherwise. Either bump up your game with liquid cooling, or forget the CPU (my advice) and render GPU only. If you've got 2000+ cores on the GPU you don't really gain a lot by adding the CPU anyway. It's wasted watts.

    Xeons are what renderfarms use.  They are not complaining about Xeons burning up.  Do we need more than that?  And of course anyone who is serious is going to think about more cooling than a stock PC case.  GPUs also need enhanced cooling, because they aren't made for full-throttle utilization either.  In addition, GPUs have limitations, like maximum memory and being unable to render some special effects (not a problem for Daz users, because Daz can't produce many special effects).  But if you run out of GPU RAM, you will sure wish you had a good CPU as a backup.  If you're just doodling for fun in DAZ, a high-end CPU is a waste of money.

  • ebergerlyebergerly Posts: 3,255
    edited July 2017

    Well the deed is done. And surprisingly it went without a hitch. Well, kinda...

    Just keep in mind, if you are installing a cooler on a Ryzen 7 1700 (the included Wraith cooler), make CERTAIN that you're screwing all four screws into the MB at the same time. I was so careful: I had the MB on a countertop, lined the cooler screws up, and even unscrewed them a bit until they clicked to make sure they were ready to engage. Then I screwed them in an X pattern very carefully from the top, and never noticed that two screws never engaged. And I screwed the other two all the way down. 

    DOH !!! surprise

    So I had visions of damaging the CPU and so on. 

    But today I fired it up, the CPU looks fine, all cores running okay, temps down in the low 30s...

    So here's what I ended up with:

    • Ryzen 7 1700 CPU with 8 cores, 16 logical CPU's
    • MSI X370 Gaming Pro Carbon motherboard
    • Single GTX 1070 GPU
    • 64 GB DDR4-2133 RAM (I decided to buy an additional 32 GB just for grins)
    • Samsung 850 EVO SSD, 500 GB (yeah, I decided to give an SSD a try...)
    • Old 1-TB Western Digital SATA drive
    • 750 watt power supply

    I really like the MSI UEFI, it's got a lot of good info. But I think that's pretty much standard in gaming MB's....

    So after the fairly quick Windows 10 installation, installing all the apps (surprisingly quick when you just download and install), and DAZ being so nice to automatically recognize all your stuff and download it with DIM, it really didn't take much to get up and running.

    However, now comes the grunt work of finding all of my preferences for all my apps (uggg...), figuring out how to move and install all my non-DAZ content, figuring out how to set up all my DAZ database stuff so it matches the old machine, and so on...

    The nice thing is now I have a big mid-tower with tons of space, 3 monster fans, and the PS actually slides right in compared to my old Dell, which almost required me to use a pry bar to install the GPU and PS. smiley

    Oh, and one more lesson learned...

    I decided last minute to stop by Best Buy and grab the Samsung SSD. They have this nice price guarantee that they'll beat any price. So I paid $200 for the SSD, got home, checked on the web, and their price was supposed to be $163 (same as Newegg), not $200. I called and they immediately credited my card. But be careful....

    Was it worth it? Well, that remains to be seen. More space and cooler temps are nice. The SSD seems pretty snappy. Most of my 16 logical CPUs are "parked" most of the time (I assume they shut down somehow when not being used?). And I'm guessing my DAZ renders won't see much of a difference, except maybe in the time to load a scene onto the GPU from the SSD. 

    But until the NVIDIA cards drop in price to something reasonable, I probably won't notice a lot of performance difference from my old machine. We'll see...

     

     

    Post edited by ebergerly on
  • ToborTobor Posts: 2,300
    drzap said:

    Xeons are what renderfarms use.

    And they either water cool, add additional airflow management, and/or limit the CPU to avoid excessive heat. The Xeon is no different than any other slice of silicon: heat kills it. Thermal management is up to the system integrator or box maker, and no CPU is immune. In my case that was Dell, and they didn't design the Precision for continuous 100% CPU utilization. For one thing, despite its large size it's packed densely inside, with the internal drives covering where the CPU(s) are located. Many of us add additional squirrel-cage fans along the bottom, and even that's not always enough.

  • drzapdrzap Posts: 795
    edited July 2017

    Tobor said:
    drzap said:

    Xeons are what renderfarms use.

    And they either water cool, add additional airflow management, and/or limit the CPU to avoid excessive heat. The Xeon is no different than any other slice of silicon: heat kills it. Thermal management is up to the system integrator or box maker, and no CPU is immune. In my case that was Dell, and they didn't design the Precision for continuous 100% CPU utilization. For one thing, despite its large size it's packed densely inside, with the internal drives covering where the CPU(s) are located. Many of us add additional squirrel-cage fans along the bottom, and even that's not always enough.

    You fail to make your point.  You gave advice not to use the CPU (because you say the CPU wasn't designed for 3D rendering without extra cooling) but to use a GPU (because apparently a GPU was designed for 100% utilization without extra cooling).  While I agree that in Daz you want to use the GPU as much as possible because it's faster, when you reach your GPU's limitations you will want a good CPU.  Your explanation for why to avoid a CPU is just wonky.

     

    Dude, your Dell conked out because sometimes computers conk out.  To imply that an industrial-grade Xeon chip is less reliable than a consumer-grade GTX card is just not reasonable.

    Post edited by drzap on
  • WendyLuvsCatzWendyLuvsCatz Posts: 38,169

    I guess whether or not it is worth it comes down to what you are going to do with it.

    For DAZ Studio using Iray, you are probably wasting money, when a lesser computer with a great Nvidia graphics card (or four) would suffice, or an external cooled rack of them.

    On the other hand, using 3Delight or other CPU-based render software such as Carrara and Poser, you would get a definite edge if you wished to include them in your workflow (though a post you made to the Carrara forum suggests otherwise).

     

  • drzapdrzap Posts: 795
    edited July 2017
    th3Digit said:

    I guess whether or not it is worth it comes down to what you are going to do with it.

    For DAZ Studio using Iray, you are probably wasting money, when a lesser computer with a great Nvidia graphics card (or four) would suffice, or an external cooled rack of them.

    On the other hand, using 3Delight or other CPU-based render software such as Carrara and Poser, you would get a definite edge if you wished to include them in your workflow (though a post you made to the Carrara forum suggests otherwise).

     

    Yeah, and if you want to do CPU rendering, don't cheap out; buy something like a Dell Precision.  My Dell T7910 has plenty of room inside, huge tower heat sinks (stock), and absolutely no problem cooling dual Xeons.  You gotta buy the right tools for the job.

    Post edited by drzap on
  • ebergerlyebergerly Posts: 3,255
    edited July 2017

    Darn....

    Tried a scene from my old computer, which previously took 12 minutes. On the new computer it took more than twice as long. Same GTX 1070 rendering on both machines, and GPU drivers up to date on the new machine. 

    Made sure the CPU was OFF in render settings and OptiX was ON. Watched the GTX 1070 go to 100% utilization while rendering, so I know it was doing its thing. And the CPU was pretty quiet. Same scene, same render settings.

    Since this was a new install I had to install all the content for the scene first (it asked me and did it for me), so maybe that played a role somehow. So now I'm making sure all my content is downloaded with DIM, and I'll try again. 

    Darn. 

    I expected the render time would be pretty much identical, since the new CPU is out of the picture. 

    Darn. 

    Post edited by ebergerly on
  • drzapdrzap Posts: 795
    ebergerly said:

     

    I expected the render time would be pretty much identical, since the new CPU is out of the picture. 

    Darn. 

    I hate to have to say it (and I'm not chasing you), but this is what I meant about comparing benchmarks.  Leave the benchmarking to the pros.  Their methodology is precise and scientific.  Most testing websites enumerate how they test software and hardware (and if they don't, I wouldn't trust their data) and it usually is quite extensive and exact.  You can't get an accurate picture of how your system will perform by looking at a list of other people's render times.  That may have a little value as to what you can expect but it in no way shows you the whole picture.  Like I said, trust the professionals.

  • ebergerlyebergerly Posts: 3,255

    So you're saying it's reasonable that if I move from an i7-6700, 4 core CPU, with 48GB RAM, rendering with a GTX 1070, to a Ryzen 7 1700, 8 core CPU, with 64 GB RAM, rendering with the same GTX 1070, then the "pros" say it should take twice the time to render??

    What pros say that? 

    I never said that was a benchmark I was following. It was merely an assumption that the same GPU rendering on two different machines should take about the same time, since the GPU is the one that's doing the rendering. 

  • drzapdrzap Posts: 795
    edited July 2017
    ebergerly said:

    So you're saying it's reasonable that if I move from an i7-6700, 4 core CPU, with 48GB RAM, rendering with a GTX 1070, to a Ryzen 7 1700, 8 core CPU, with 64 GB RAM, rendering with the same GTX 1070, then the "pros" say it should take twice the time to render??

     

    What I'm saying is that there is a high variability between different computers and components and to assume that since xxx happened on your mate's computer, then the same xxx will happen on your computer is faulty reasoning.  The pros don't make these assumptions.  They will test on the exact same computer with the exact same settings and measure the results.  Their data is a lot more dependable than a list of random people rendering the same file.

    Post edited by drzap on