Yet another PC suggestions thread

All right, my turn to ask for advice on a new computer. This is the one I'm currently leaning toward, although I will probably add a second video card for rendering and leave the main GTX 960 to handle the monitors. http://pcpartpicker.com/p/H9TdNG

Comments

  • dracorn Posts: 2,333

    My first question is: are you having someone build it for you or are you doing it yourself?

    I recently bought a new PC specifically for Daz Studio.  Granted, my budget was almost twice yours, but I hold on to my PCs for 5-7 years.  I went to Fry's (not sure if there is one in your area) and had a knowledgeable salesman help me pick everything out.  It's really a workstation, and my longest renders are about half an hour.  You can take a look at the information and advice here:  http://www.daz3d.com/forums/discussion/60696/help-me-build-my-new-daz-computer#latest

    Processor:

    I must urge you to reconsider your processor.  Yes, I know that AMD is a quality product, and all my PCs until this one have used AMD processors.  But my research (confirmed by others) is that AMD just doesn't do CG as well as Intel.  Get the most powerful Intel processor you can afford.  I know this is going to spark a debate as heated as the DAZ vs. Poser debate, but do the research yourself.

    RAM:

    If at all possible, get at least 32 GB of RAM.  That's the minimum I've heard recommended for CG.  The last thing you want is to have lots of stuff in a render and have it crash because there isn't enough RAM to complete it.  If you can afford it, DDR4 is better than DDR3; RAM speed isn't as important.  It's best to get this at purchase time, because if you upgrade, you must put in the exact same type of RAM, and that may be hard to find a couple of years down the road. 

    VIDEO Card:

    One thing I found out is that if you add more than one video card, your graphics processing will only be as fast as the slowest video card.  So if you elect to buy a faster card down the road, it won't gain you much if you slow it down with an older card.  I also recall a comment on this forum that linked video cards don't necessarily behave the same for rendering as they do for gaming.  (Somebody please elaborate on this.)  As far as one graphics card for the monitors and another for rendering, I will have to defer to someone more knowledgeable.

    I think you are wise in doing your homework - it took me 4 months before I finally had sufficient information to buy a PC.  There are many people on the forum who are happy to give you advice.

  • mtl1 Posts: 1,501

    If I remember correctly, it's that the video cards *can't* be placed in SLI mode for Iray. Other than that, textures aren't spread across the total memory pool; if a scene can't fit on the video card with the smallest memory, it won't spill over onto the other card, and the render will fall back to the CPU instead.

    I have that video card and it's acceptable for rendering. However, a word of caution: you'll very likely outgrow it if this becomes a serious hobby. I suggest waiting a couple of months (or several) until the new Pascal cards come out. That way, you don't have to buy twice.

  • Peter Fulford Posts: 1,325
    dracorn said:

    One thing I found out is that if you add more than one video card, your graphics processing will only be as fast as the slowest video card.  So if you elect to buy a faster card down the road, it won't gain you much if you slow it down with an older card. 

    This is not true for GPU-accelerated rendering, i.e. Iray.  Iray will use all the CUDA cores available, and those of a "slow" video card will be added to those of a faster card to speed rendering further.

    You may be getting confused with graphics card memory capacity. In this case, the working capacity for Iray will be limited to that of the video card with the lowest amount of memory.

  • dracorn said:

    One thing I found out is that if you add more than one video card, your graphics processing will only be as fast as the slowest video card.  So if you elect to buy a faster card down the road, it won't gain you much if you slow it down with an older card. 

    This is not true for GPU-accelerated rendering, i.e. Iray.  Iray will use all the CUDA cores available, and those of a "slow" video card will be added to those of a faster card to speed rendering further.

    You may be getting confused with graphics card memory capacity. In this case, the working capacity for Iray will be limited to that of the video card with the lowest amount of memory.

    Even there, Iray will drop any card(s) that lack sufficient memory for the current scene and continue to use the other(s).

  • I was considering the Alienware Area-51 with an Intel Core i7-5960X processor (8 cores, 20MB cache, overclocked up to 4.0GHz w/ Turbo Boost), 32GB (4 x 8GB) DDR4 2133MHz SDRAM, a 256GB SSD 6Gb/s main drive plus 4TB of 5400RPM SATA 6Gb/s storage, and triple NVIDIA GeForce GTX 980 Ti cards with 18GB total (3 x 6GB) GDDR5, NVIDIA SLI enabled.  BUT I've read so many bad things about Dell.  For a few hundred more, Origin, who seem to have a really good reputation, can put together the Chronos Pro.  I picked the Corsair 350D case, ORIGIN High-Performance Ultra Silent Fans in red, an ASUS Maximus VIII Gene motherboard, the ORIGIN FROSTBYTE 240 sealed liquid cooling system, an Intel Core i7 6700K quad-core at 4.0GHz (4.2GHz Turbo Boost), an 850 watt Toughpower 80+ Gold power supply, and dual 12GB NVIDIA GTX TITAN X cards (the next most powerful option is dual 12GB NVIDIA Quadro K6000s, non-SLI, which the big guys use, but at over $9,000 that's a bit much for me).  Memory is 32GB Corsair Dominator Platinum DDR4 3000MHz (4 x 8GB) on the Z170.  Storage would be RAID 1 for better data safety and security (RAID 0 for speed performance); this model can take up to 5 hard drives, so I thought one 1TB Samsung 850 EVO Series plus four 4TB Western Digital Black SATA 6.0Gb/s drives (7200RPM, 64MB cache).  I can't decide between the 6X slim slot-load Blu-ray writer and the 16X Blu-ray burner; I don't know what the difference is, though the first is more expensive.

    Anyway, I'm really leaning towards the Origin one.  Like I said, Dell seems to have a bad rep and Origin is highly rated, plus they ship your computer in a wooden crate.  They also have other, more powerful computers like the Millennium and Genesis, with up to 4 graphics cards, 2 main drives in your choice of HDD and SSD, a 5-drive front-loading setup where you just push the drives into slots with no cables to connect, and a bottom section where you can have either more fans or more hard drives, for a total of 35.  But those come at a very high price.

    I was also thinking about getting ZBrush.

  • dracorn Posts: 2,333

    I was considering the Alienware Area-51 with an Intel Core i7-5960X processor (8 cores, 20MB cache, overclocked up to 4.0GHz w/ Turbo Boost), 32GB (4 x 8GB) DDR4 2133MHz SDRAM, a 256GB SSD 6Gb/s main drive plus 4TB of 5400RPM SATA 6Gb/s storage, and triple NVIDIA GeForce GTX 980 Ti cards with 18GB total (3 x 6GB) GDDR5, NVIDIA SLI enabled.  BUT I've read so many bad things about Dell.  For a few hundred more, Origin, who seem to have a really good reputation, can put together the Chronos Pro.  I picked the Corsair 350D case, ORIGIN High-Performance Ultra Silent Fans in red, an ASUS Maximus VIII Gene motherboard, the ORIGIN FROSTBYTE 240 sealed liquid cooling system, an Intel Core i7 6700K quad-core at 4.0GHz (4.2GHz Turbo Boost), an 850 watt Toughpower 80+ Gold power supply, and dual 12GB NVIDIA GTX TITAN X cards (the next most powerful option is dual 12GB NVIDIA Quadro K6000s, non-SLI, which the big guys use, but at over $9,000 that's a bit much for me).  Memory is 32GB Corsair Dominator Platinum DDR4 3000MHz (4 x 8GB) on the Z170.  Storage would be RAID 1 for better data safety and security (RAID 0 for speed performance); this model can take up to 5 hard drives, so I thought one 1TB Samsung 850 EVO Series plus four 4TB Western Digital Black SATA 6.0Gb/s drives (7200RPM, 64MB cache).  I can't decide between the 6X slim slot-load Blu-ray writer and the 16X Blu-ray burner; I don't know what the difference is, though the first is more expensive.

    Anyway, I'm really leaning towards the Origin one.  Like I said, Dell seems to have a bad rep and Origin is highly rated, plus they ship your computer in a wooden crate.  They also have other, more powerful computers like the Millennium and Genesis, with up to 4 graphics cards, 2 main drives in your choice of HDD and SSD, a 5-drive front-loading setup where you just push the drives into slots with no cables to connect, and a bottom section where you can have either more fans or more hard drives, for a total of 35.  But those come at a very high price.

    I was also thinking about getting ZBrush.

    Dude!  If you can afford a computer like THAT, then don't skimp on the RAM.  Bump it up to 64 GB!

    My workstation typically does a complicated render with lighting and multiple figures in about 30 minutes (3Delight).  I tried to push it by rendering a scene with 7 figures at 10,000 x 10,000 - that took 1 hour 29 minutes.  The link for my workstation is on page one of this thread. 

  • Well, that was the maxed-out spec for that computer, except for the K6000 graphics cards, and those were over $9,000 - a bit steep.  It's still better than the maxed-out Dell available in Australia, and only a few hundred more, too.  But I was looking at a local store's site, and going by the components, I may be able to put together one just as good, or close, or maybe better - not sure there, lol.  Going by the most specced-out components, with the 5 drives and 4 x GTX 980 Ti and everything else, it looks like it's $2,000 to $3,000 cheaper.  I don't want the kind of liquid cooling system you have to watch and refill, though I know there are sealed ones you don't have to worry about, that Frostbyte one for example.  But yeah, if I can save a few thousand, then maybe I can get that extra RAM, right?

  • Knittingmommy Posts: 8,191
    edited April 2016

    I thought I would weigh in with my thoughts since I actually have that same processor.  Actually, I have some very similar things in my computer that are on your list.  I also managed to add something from your list to my computer wishlist on Amazon!  I hadn't seen that fan controller.  Nice!

    Anyway, I set my system up before Iray came out.  I think I had it all of two months before Iray went into beta here on DAZ.  I did research and, while most sources seemed to say the i7 was better, I went with the AMD 8350 Black Edition.  One reason for my decision was that my husband had an i7 in his rig.  He uses Blender, so I knew what his processing times were and the type of renders he was doing.  I really couldn't say if one is better than the other.  I can only give you my experience and let you decide, but it might help you make your decision since I have experience comparing both AMD and Intel CPUs. 

    When I made the decision to go AMD, my husband was not thrilled.  He likes Intel!  The first thing we had to do when I set my computer up was run a benchmark comparison between the two systems.  For the most part, our systems are similar in everything except processor, cooling system, and motherboard.  We have different graphics cards, and neither of us has an nVidia card.  I have the Sapphire Radeon R7 260X.  If I had known Iray was coming, I probably would have gone with an nVidia card, but again, I set this up before it came out in the DS beta.

    Speed-wise, for our test there was almost no difference between the AMD 8350 and the i7 when it came to rendering our benchmark scene.  We didn't set the scene up; we got a scene from the Blender website and ran the comparison in Blender.  There was about a 1 second difference.  Literally, 1 second.  The more complicated the scene, the more this would probably change.  We also compared a render in Blender with what I could render in DS.  My husband isn't about to load DS on his system, so I can't really compare the i7 in a DS render.  However, running a similar scene, him rendering in Blender and me rendering in DS, my render was actually a little faster.  Again, not by much.  And, at the time, I had no idea what I was doing, so I'm sure the textures I chose weren't all that complicated.  I'm not sure I would have the same results with my current knowledge of textures and how I apply them, as I can get downright complicated, and probably CPU-intensive, with textures these days.

    I also ran the benchmark render SickleYield has set up.  I tracked down that post to see what my times were back then and quoted the pertinent parts:

    My post to SickleYield's thread

    AMD FX-8350 FX-Series 8-Core Black Edition Processor
    GIGABYTE GA-990FXA-UD3 AM3+ AMD 990FX SATA 6Gb/s USB 3.0 ATX AMD Motherboard (dual graphics card slots so I can add another better graphics card later.)
    16GB DDR3 1866MHz PC3-14900 (2 x 8GB) memory (extra slots to add more memory later)
    Corsair Hydro Series High Performance Liquid CPU Cooler H60 (I really love how quiet this is while keeping everything at optimal temps)
    Corsair RM Series 850 Watt ATX/EPS 80PLUS Gold-Certified Power Supply (also selected for the quiet factor)

    I have to say that, even when I'm rendering a heavy scene with a lot of elements, I can't even hear my computer, it's so quiet; I have to look to make sure it is still running.

    All of that said, here are my rendering times for Sickleyield's benchmark setup with speed as the desired method of optimization in the render settings:

    1st render, with spheres 8 and 9:
    90% reached at 33 minutes 2 seconds; final render at 40 minutes 15.64 seconds

    2nd render, without spheres 8 and 9:
    90% reached at 20 minutes 6 seconds; final render at 38 minutes 16.33 seconds

    I'm not about to get into a debate about which is better, AMD or Intel.  There is a reason I never got on the debate team way back when!  I'm terrible at it.  I can just tell you what my experiences have been.  I don't know about @dracorn's comment about AMD not being able to do CG as well as Intel; I'm not sure what she is looking at when comparing.  All I know is that I'm actually happy with my rig, and I do CG and animation just fine.  I do have to say that the more complicated my scenes get, the longer my render times get, too.  It isn't unusual for me to have a render run over 6 hours, and I've had at least one render of my usual size in pixels run for about 17 hours.  That was my Winter Cabin render from last December.  I say usual size because I did recently run a render for a poster, which required fairly large output: 300 ppi for a 16 in x 20 in image for printing.  That render, on my machine, took 3 days.  However, those same scenes would probably run just as long on an Intel rig.  My average render time for most scenes is about 20 minutes or less.  In draft mode, I can run an Iray render in under 10 minutes.  CPU only, of course.  Sometimes my render times run over an hour.  It all depends on what is in the scene.

    Dracorn has an excellent point about starting out at 32 GB of RAM.  I started out at 16 for my rig and was fine for a long time.  However, that same Winter Cabin render was extremely resource-intensive and, despite removing everything that wasn't in camera view, I kept crashing DS as it ran out of memory trying to render the thing.  I had to upgrade to 32 GB quickly in order to finish the render.  Luckily, memory was even cheaper than when I first got my system, so it wasn't a hardship to upgrade.  If you can add the extra memory now, I highly suggest doing it.

    It is good you are doing your research.  I probably spent about the same amount of time as dracorn researching before I bought the components for my rig.  4 months sounds about right.

    All that being said, I don't know if I would build my rig differently with Iray in mind.  I do know I'd definitely go for an nVidia card, and it is on the wishlist.  I definitely can't afford one now, especially considering I'm happy with my current setup.

    Hope this helps.  Good luck!

    Post edited by Knittingmommy on
  • hphoenix Posts: 1,335
    dracorn said:

    I must urge you to reconsider your processor.  Yes, I know that AMD is a quality product, and all my PCs until this one have used AMD processors.  But my research (confirmed by others) is that AMD just doesn't do CG as well as Intel.  Get the most powerful Intel processor you can afford.  I know this is going to spark a debate as heated as the DAZ vs. Poser debate, but do the research yourself.

    This is patently false.

    AMD and Intel CPUs are all built on the very same principles of semiconductors.  The primary architecture (CISC) is well-established.  From a straight comparison, they are clock-cycle equivalent, though the actual layout differs in some areas (more on that below).

    Due to patents and copyrights, the whole 3DNow!/SSE dust-up required AMD to implement their vector processing units differently than Intel had.  They are functionally equivalent, but the actual gate-level architecture differs, so the cycles-per-instruction differ.  Hyperthreading on the Intel Core i7 gives it a benefit on integer pipeline instructions, but branch prediction isn't perfect, so it can sometimes slow things down.  For general purpose apps, it usually gives a big boost.

    CG can be done one of two ways: generated or pipelined.  Generated is where the calculations are actually computed, whereas pipelined is an optimized high-throughput hardware solution.  Gaming is usually done via pipelined graphics.  Pipelined is fast, easily parallelized, and scales well.  Generated is where more sophisticated and flexible calculations are done, whether on CPU or GPU.  This is how most "CG" graphics work is done.

    This means that IF your application is optimized for one specific processor's vector extensions, but not for the other's, then the optimized one will give significantly BETTER performance.  This has NOTHING to do with the CPU's capability.  It has to do with the availability and use of inline intrinsics by the developers, and properly developed compiler optimization outputs.

     

    An 8-core AMD CPU is going to be as good as a 4-core Intel with hyperthreading, and sometimes better (assuming equal clocks).

    A 4-core AMD CPU can be as good as a 4-core Intel with hyperthreading, but is often slower (as the hyperthreading plus branch prediction gives the i7 the equivalent of roughly double the cores for the integer pipeline).

     

    Optimized code will naturally run better on the CPU brand it is optimized for.  Actually knowing how the microcode instructions execute differently between the two architectures and their equivalent vector units means knowing how to write your code to run the best on each, and how to tell them apart in the application.  But VERY few developers bother.  I'm really curious whether DS uses SSE2/3/4 enhancements in its code... they can really speed up certain math operations if you code them right.  (Example: I wrote an optimized SSE4 distance-between-two-3D-points function and another that was just straight C++ math.  Then I tested and timed them.  The SSE4-optimized method was about SIX TIMES faster.  Considering how often most 3D apps have to calculate distances between points, that can be a HUGE speedup.)
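
    For the curious, the two versions looked roughly like this.  This is a from-memory sketch, not the actual code; the type and function names are made up for illustration, and it assumes a compiler with SSE4.1 enabled (e.g. -msse4.1):

    #include <smmintrin.h>   // SSE4.1 intrinsics
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Plain C++ version: subtract, square, sum, square root.
    float distScalar(const Vec3 &a, const Vec3 &b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx*dx + dy*dy + dz*dz);
    }

    // SSE4 version: one packed subtract, then the SSE4.1 dot-product
    // instruction (mask 0x71 = multiply lanes 0-2, write result to lane 0),
    // then a scalar square root on that lane.
    float distSSE4(const Vec3 &a, const Vec3 &b) {
        __m128 va = _mm_set_ps(0.0f, a.z, a.y, a.x);
        __m128 vb = _mm_set_ps(0.0f, b.z, b.y, b.x);
        __m128 d  = _mm_sub_ps(va, vb);
        __m128 sq = _mm_dp_ps(d, d, 0x71);   // dx*dx + dy*dy + dz*dz in lane 0
        return _mm_cvtss_f32(_mm_sqrt_ss(sq));
    }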

     

    So if anyone tells you that AMD is better than Intel, or vice-versa, just shake your head and ignore them.  Right now, the only REAL benefit to Intel over AMD is that most AMD motherboards don't support PCI-E 3.0.  They only support PCI-E 2.1 and before.  So your potential bandwidth to your GPUs is much more limited.  There are a COUPLE of high-end AMD motherboards that DO support PCI-E 3.0, but that's it.

     

  • hphoenix said:
     

    So if anyone tells you that AMD is better than Intel, or vice-versa, just shake your head and ignore them.  Right now, the only REAL benefit to Intel over AMD is that most AMD motherboards don't support PCI-E 3.0.  They only support PCI-E 2.1 and before.  So your potential bandwidth to your GPUs is much more limited.  There are a COUPLE of high-end AMD motherboards that DO support PCI-E 3.0, but that's it.

     

    If you're using anything but the latest cards that support the newer PCI-E revision, is it likely to significantly impact performance?

  • dracorn said:

    My first question is: are you having someone build it for you or are you doing it yourself?

    I recently bought a new PC specifically for Daz Studio.  Granted, my budget was almost twice yours, but I hold on to my PCs for 5-7 years.  I went to Fry's (not sure if there is one in your area) and had a knowledgeable salesman help me pick everything out.  It's really a workstation, and my longest renders are about half an hour.  You can take a look at the information and advice here:  http://www.daz3d.com/forums/discussion/60696/help-me-build-my-new-daz-computer#latest

    Processor:

    I must urge you to reconsider your processor.  Yes, I know that AMD is a quality product, and all my PCs until this one have used AMD processors.  But my research (confirmed by others) is that AMD just doesn't do CG as well as Intel.  Get the most powerful Intel processor you can afford.  I know this is going to spark a debate as heated as the DAZ vs. Poser debate, but do the research yourself.

    RAM:

    If at all possible, get at least 32 GB of RAM.  That's the minimum I've heard recommended for CG.  The last thing you want is to have lots of stuff in a render and have it crash because there isn't enough RAM to complete it.  If you can afford it, DDR4 is better than DDR3; RAM speed isn't as important.  It's best to get this at purchase time, because if you upgrade, you must put in the exact same type of RAM, and that may be hard to find a couple of years down the road. 

    VIDEO Card:

    One thing I found out is that if you add more than one video card, your graphics processing will only be as fast as the slowest video card.  So if you elect to buy a faster card down the road, it won't gain you much if you slow it down with an older card.  I also recall a comment on this forum that linked video cards don't necessarily behave the same for rendering as they do for gaming.  (Somebody please elaborate on this.)  As far as one graphics card for the monitors and another for rendering, I will have to defer to someone more knowledgeable.

    I think you are wise in doing your homework - it took me 4 months before I finally had sufficient information to buy a PC.  There are many people on the forum who are happy to give you advice.

    At this point, it's going to be a number of months before I make a final decision on the configuration. I've been building my own systems off and on for 20 years, but with the rapid pace of changes these days, I figure asking for input is a good thing.

  • TLDR.  Way TLDR.

    Here's the upshot:  Do NOT buy an AMD CPU.  You are getting much less for your money.  There are many published comparisons available if you want to read the long versions. 

    What is important to you in this activity?  Just buy a fast i5 or, better yet, any recent i7.  Then use it roughly and forget all about this argument for the next 7 years or so.

  • mjc1016 Posts: 15,001
    edited April 2016
    hphoenix said:
    dracorn said:

    I must urge you to reconsider your processor.  Yes, I know that AMD is a quality product, and all my PCs until this one have used AMD processors.  But my research (confirmed by others) is that AMD just doesn't do CG as well as Intel.  Get the most powerful Intel processor you can afford.  I know this is going to spark a debate as heated as the DAZ vs. Poser debate, but do the research yourself.

    This is patently false.

    AMD and Intel CPUs are all built on the very same principles of semiconductors.  The primary architecture (CISC) is well-established.  From a straight comparison, they are clock-cycle equivalent, though the actual layout differs in some areas (more on that below).

    Due to patents and copyrights, the whole 3DNow!/SSE dust-up required AMD to implement their vector processing units differently than Intel had.  They are functionally equivalent, but the actual gate-level architecture differs, so the cycles-per-instruction differ.  Hyperthreading on the Intel Core i7 gives it a benefit on integer pipeline instructions, but branch prediction isn't perfect, so it can sometimes slow things down.  For general purpose apps, it usually gives a big boost.

    CG can be done one of two ways: generated or pipelined.  Generated is where the calculations are actually computed, whereas pipelined is an optimized high-throughput hardware solution.  Gaming is usually done via pipelined graphics.  Pipelined is fast, easily parallelized, and scales well.  Generated is where more sophisticated and flexible calculations are done, whether on CPU or GPU.  This is how most "CG" graphics work is done.

    This means that IF your application is optimized for one specific processor's vector extensions, but not for the other's, then the optimized one will give significantly BETTER performance.  This has NOTHING to do with the CPU's capability.  It has to do with the availability and use of inline intrinsics by the developers, and properly developed compiler optimization outputs.

     

    An 8-core AMD CPU is going to be as good as a 4-core Intel with hyperthreading, and sometimes better (assuming equal clocks).

    A 4-core AMD CPU can be as good as a 4-core Intel with hyperthreading, but is often slower (as the hyperthreading plus branch prediction gives the i7 the equivalent of roughly double the cores for the integer pipeline).

     

    Optimized code will naturally run better on the CPU brand it is optimized for.  Actually knowing how the microcode instructions execute differently between the two architectures and their equivalent vector units means knowing how to write your code to run the best on each, and how to tell them apart in the application.  But VERY few developers bother.  I'm really curious whether DS uses SSE2/3/4 enhancements in its code... they can really speed up certain math operations if you code them right.  (Example: I wrote an optimized SSE4 distance-between-two-3D-points function and another that was just straight C++ math.  Then I tested and timed them.  The SSE4-optimized method was about SIX TIMES faster.  Considering how often most 3D apps have to calculate distances between points, that can be a HUGE speedup.)

     

    So if anyone tells you that AMD is better than Intel, or vice-versa, just shake your head and ignore them.  Right now, the only REAL benefit to Intel over AMD is that most AMD motherboards don't support PCI-E 3.0.  They only support PCI-E 2.1 and before.  So your potential bandwidth to your GPUs is much more limited.  There are a COUPLE of high-end AMD motherboards that DO support PCI-E 3.0, but that's it.

     

    It was somewhat true when the very first AMD FX CPUs came out... but that was several years ago, and it was basically just the first batch released.  In other words, it was fixed... and even then it wasn't all that 'bad'.  Yes, the performance impact was mostly in the gaming area, but even there it wasn't a huge difference, especially for the price differential.  (We are talking a 3 to 10 fps difference at 90+ fps frame rates.)

    And, having access to one of those early FX CPUs, for rendering... at least with the standalone 3DL version of the time... it was FASTER.  That's right, faster than a comparable Intel chip.  3DL rendering was one of the few things, even on the benchmark sites, where those early FX chips beat out the Intel chips.  But since then, there's been little or no performance 'lag' on the type of work done in Studio... and in many areas, AMD still beats the Intel chips.

    As to whether PCI-E 2 vs. 3 is going to have an impact... probably not much more than a couple of seconds' difference.

    Post edited by mjc1016 on
  • dracorn Posts: 2,333
    hphoenix said:
    dracorn said:

    I must urge you to reconsider your processor.  Yes, I know that AMD is a quality product, and all my PCs until this one have used AMD processors.  But my research (confirmed by others) is that AMD just doesn't do CG as well as Intel.  Get the most powerful Intel processor you can afford.  I know this is going to spark a debate as heated as the DAZ vs. Poser debate, but do the research yourself.

    This is patently false.

    AMD and Intel CPUs are all built on the very same principles of semiconductors.  The primary architecture (CISC) is well-established.  From a straight comparison, they are clock-cycle equivalent, though the actual layout differs in some areas (more on that below).

    Due to patents and copyrights, the whole 3DNow!/SSE dust-up required AMD to implement their vector processing units differently than Intel had.  They are functionally equivalent, but the actual gate-level architecture differs, so the cycles-per-instruction differ.  Hyperthreading on the Intel Core i7 gives it a benefit on integer pipeline instructions, but branch prediction isn't perfect, so it can sometimes slow things down.  For general purpose apps, it usually gives a big boost.

    CG can be done one of two ways: generated or pipelined.  Generated is where the calculations are actually computed, whereas pipelined is an optimized high-throughput hardware solution.  Gaming is usually done via pipelined graphics.  Pipelined is fast, easily parallelized, and scales well.  Generated is where more sophisticated and flexible calculations are done, whether on CPU or GPU.  This is how most "CG" graphics work is done.

    This means that IF your application is optimized for one specific processor's vector extensions, but not for the other's, then the optimized one will give significantly BETTER performance.  This has NOTHING to do with the CPU's capability.  It has to do with the availability and use of inline intrinsics by the developers, and properly developed compiler optimization outputs.

     

    An 8-core AMD CPU is going to be as good as a 4-core Intel with hyperthreading, and sometimes better (assuming equal clocks).

    A 4-core AMD CPU can be as good as a 4-core Intel with hyperthreading, but is often slower (as the hyperthreading plus branch prediction gives the i7 the equivalent of roughly double the cores for the integer pipeline).

     

    Optimized code will naturally run better on the CPU brand it is optimized for.  Actually knowing how the microcode instructions execute differently between the two architectures and their equivalent vector units means knowing how to write your code to run the best on each, and how to tell them apart in the application.  But VERY few developers bother.  I'm really curious whether DS uses SSE2/3/4 enhancements in its code... they can really speed up certain math operations if you code them right.  (Example: I wrote an optimized SSE4 distance-between-two-3D-points function and another that was just straight C++ math.  Then I tested and timed them.  The SSE4-optimized method was about SIX TIMES faster.  Considering how often most 3D apps have to calculate distances between points, that can be a HUGE speedup.)

     

    So if anyone tells you that AMD is better than Intel, or vice-versa, just shake your head and ignore them.  Right now, the only REAL benefit to Intel over AMD is that most AMD motherboards don't support PCI-E 3.0.  They only support PCI-E 2.1 and before.  So your potential bandwidth to your GPUs is much more limited.  There are a COUPLE of high-end AMD motherboards that DO support PCI-E 3.0, but that's it.

     

    Who am I to argue with an expert? 

    I was going by a quote of stats that someone gave me on my own "help me build a PC" thread. 

    Based on that, I chose an Intel i7 6-core processor rather than AMD.  All I can say at this point is that I am very happy with my purchase.  So if you have budget constraints, Daywalker, then get the best AMD you can afford.  hphoenix obviously knows what he/she is talking about, so I'm sure you will be happy with that. 

    As a comparison:  with my Intel CPU, GTX 980 Ti and 64GB of RAM, I was able to render this (smaller version for the forum) at 3000 x 3000 in 3Delight in 16.5 minutes.  Granted, there is no background.  For renders with a background, multiple figures, props, and lighting, my render times typically don't exceed 30 minutes on my PC.

    Mandala 08.jpg
    800 x 800 - 367K
  • hphoenix Posts: 1,335
    hphoenix said:
     

    So if anyone tells you that AMD is better than Intel, or vice-versa, just shake your head and ignore them.  Right now, the only REAL benefit to Intel over AMD is that most AMD motherboards don't support PCI-E 3.0.  They only support PCI-E 2.1 and before.  So your potential bandwidth to your GPUs is much more limited.  There are a COUPLE of high-end AMD motherboards that DO support PCI-E 3.0, but that's it.

     

    If you're using anything but the latest cards that support the newer PCI-E revision, is it likely to significantly impact performance?

    PCI-E 3.0 provides for additional lanes, as well as higher bandwidth transfer rates.  For gaming, this is a big thing, as data gets shuffled from main memory to the GPUs frequently.  For rendering on the GPU, it won't have much of an impact, as Iray primarily runs on the GPU: everything is loaded up first, and then only pixel buffer information comes back.  For rendering on the CPU, the PCI-E bus won't be involved except for displaying the frame buffer.

    So, if you don't game on the machine, PCI-E 3.0 will have little impact on rendering times.  A few seconds, at most.

     

  • hphoenix Posts: 1,335
    edited April 2016
    dracorn said:
    hphoenix said:
    dracorn said:

     

    Based on that, I chose an Intel i7 6-core processor rather than AMD.  All I can say at this point is that I am very happy with my purchase.  So if you have budget constraints, Daywalker, then get the best AMD you can afford.  hphoenix obviously knows what he/she is talking about, so I'm sure you will be happy with that. 

     

    Core-for-core, the i7 WILL have a performance benefit, due to the hyperthreading of the integer pipeline.  Just not for CG or any other computationally-intensive floating point applications.  Normal applications, servers, web, etc., will all see a major benefit.  This is why, when most people compare AMD FX CPUs with Intel Core i7 CPUs, they compare an 8-core AMD FX vs. the 4-core Intel i7.  Some will try to skew the data by comparing a 4-core AMD FX against the 4-core Intel i7 (when, for 90% of applications, the i7 is effectively an 8-core CPU).

     

    Also, a lot of software is built using Intel compiler intrinsics... not AMD's.  So when it runs, it looks to see whether it's on an Intel CPU... and if it isn't, it doesn't use the SIMD vector math to speed things up.  So of course the AMD runs slower.  Properly written software will use BOTH, and handle the differences between the supported SIMD instructions.

    (SIMD = Single Instruction, Multiple Data, i.e. a vector processing unit.  This encompasses MMX, 3DNow!, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX... it's gotten kind of complicated since the Pentium III days.  SSE extensions allow 2 double-precision or 4 single-precision floating point values to be operated on at once.  So you could, if you code it well, see up to a 4x speed-up of mathematical code.  It's a huge benefit.)
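
    To illustrate "use BOTH": the right check is the CPUID feature flag, not the vendor string, so an AMD chip that reports SSE4.1 takes the fast path just like an Intel one.  A hedged sketch, reusing the hypothetical distance functions from my earlier post and the GCC/Clang builtin:

    // Feature-based (not vendor-based) runtime dispatch. Illustrative only.
    using DistFn = float (*)(const Vec3 &, const Vec3 &);

    DistFn pickDistanceImpl() {
        // __builtin_cpu_supports reads the CPUID feature bits at runtime,
        // so it reports SSE4.1 identically on AMD and Intel parts.
        return __builtin_cpu_supports("sse4.1") ? distSSE4 : distScalar;
    }

    // Call this once at startup and cache the returned function pointer.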

     

    AMD FX CPUs are very good, but depending on what OS you are running and what software you primarily use, Intel may perform better (due to the above).  Intel costs more, core-for-core and frequency-for-frequency, but if you simply want overall performance, the Core i7 will probably edge out the AMD.  However, dollar-for-dollar, the AMD is much more bang-for-the-buck.

    (Oh, and one last note to explain the why behind that last point.  The hyperthreaded integer pipeline actually executes the same thread in BOTH pipelines, alternating instructions, so you effectively get double speed in the integer pipeline... except that sometimes you can't, because it would cause data coherency conflicts or a wrong branch.  This is where the whole branch-prediction thing comes into play.  So while it CAN make the individual core faster by a lot for integer instructions, it often DOESN'T, due to the nature of the program code, or the code being written in such a way that it can't take advantage of the hyperthreading as much as it could.)

     

    Post edited by hphoenix on
  • Peter Fulford Posts: 1,325
    hphoenix said:

     Core-for-core, the i7 WILL have a performance benefit, due to the hyperthreading of the integer pipeline.  Just not for CG or any other computationally-intensive floating point applications. 

     hphoenix, can you please point to any online benchmarks showing topline AMD CPUs keeping up with topline Intel CPUs for "CG or any other computationally-intensive floating point applications"?  Thanks.

     

  • hphoenix Posts: 1,335
    edited April 2016
    hphoenix said:

     Core-for-core, the i7 WILL have a performance benefit, due to the hyperthreading of the integer pipeline.  Just not for CG or any other computationally-intensive floating point applications. 

     hphoenix, can you please point to any online benchmarks showing topline AMD CPUs keeping up with topline Intel CPUs for "CG or any other computationally-intensive floating point applications"?  Thanks.

     

    http://www.anandtech.com/show/3470

    (took me 5 seconds to find it.  I can find more.  You just have to know which benchmarks are CPU-agnostic and floating-point based.  LINPACK is the one used to test even supercomputers.)

    Edit:  Here's another:  https://www.cpubenchmark.net/high_end_cpus.html  Obviously, if you start talking Xeon, you're talking orders of magnitude more cost-per-FLOPS, but i7 vs. FX shows they tend to stay pretty close, with the comparable i7s slightly faster (mostly due to the integer parts of the testing code between the floating-point calculations).

     

    Post edited by hphoenix on
  • Peter Fulford Posts: 1,325
    hphoenix said:
    hphoenix said:

     Core-for-core, the i7 WILL have a performance benefit, due to the hyperthreading of the integer pipeline.  Just not for CG or any other computationally-intensive floating point applications. 

     hphoenix, can you please point to any online benchmarks showing topline AMD CPUs keeping up with topline Intel CPUs for "CG or any other computationally-intensive floating point applications"?  Thanks.

     

    http://www.anandtech.com/show/3470

    (took me 5 seconds to find it.  I can find more.  You just have to know which benchmarks are CPU-agnostic and floating-point based.  LINPACK is the one used to test even supercomputers.)

     

    Thanks, that's useful if I want to run 1970s software on 8-year-old CPUs.  Are there any benchmarks showing contemporary AMD CPUs keeping up with contemporary Intel CPUs on contemporary computer graphics floating-point-intensive software?

  • Peter Fulford Posts: 1,325

     

    hphoenix said:

    Edit:  Here's another:  https://www.cpubenchmark.net/high_end_cpus.html  Obviously, if you start talking Xeon, you're talking orders of magnitude more cost-per-FLOPS, but i7 vs. FX shows they tend to stay pretty close, with the comparable i7s slightly faster (mostly due to the integer parts of the testing code between the floating-point calculations).

    Can't see anything there relating specifically to floating point performance.

     

  • hphoenix Posts: 1,335
    hphoenix said:
    hphoenix said:

     Core-for-core, the i7 WILL have a performance benefit, due to the hyperthreading of the integer pipeline.  Just not for CG or any other computationally-intensive floating point applications. 

     hphoenix, can you please point to any online benchmarks showing topline AMD CPUs keeping up with topline Intel CPUs for "CG or any other computationally-intensive floating point applications"?  Thanks.

     

    http://www.anandtech.com/show/3470

    (took me 5 seconds to find it.  I can find more.  You just have to know which benchmarks are CPU-agnostic and floating-point based.  LINPACK is the one used to test even supercomputers.)

     

    Thanks, that's useful if I want to run 1970s software on 8-year-old CPUs.  Are there any benchmarks showing contemporary AMD CPUs keeping up with contemporary Intel CPUs on contemporary computer graphics floating-point-intensive software?

    Okay.  I know you have an axe to grind, but LINPACK has been updated regularly throughout its life and is STILL the de facto standard for testing high-performance supercomputers' ability to usefully crunch floating-point numbers.  It's a linear algebra solver package.  It runs completely CPU-agnostic, as it has to be evaluated on dozens of platforms.

    Yes, I didn't look closely; it's from 2008.  We've had a few architectural changes to the cores since then.  I'm trying to find more up-to-date benchmark links for you, though it is tougher now, as many of the 'modern' benchmarks use the maximum SIMD level the chip supports, and that can actually hamstring the AMD (AVX at 512-bit width, while fully supported on current Xeons, is only emulated on-core on the AMD by ganging two cores' FPUs together... which means one core cannot do FP SIMD at 512-bit width, cutting the performance in half).  It's not a simple question.  But modern rendering algorithms don't typically use those extensions.

     

  • kyoto kid Posts: 40,627
    edited April 2016

    ...took a look at Origin's site.  Interesting.

    So I configured a system similar to the one I designed, using pretty much the same components, and the total came to $11,327.  That was actually a bit surprising, as I was expecting it to cost more.  The last time I ran my design through a custom build house that dealt with professional workstations, it came to over $15,000.

    Genesis Pro X2

    • Motherboard: ASUS Z10PE-D8 WS
    • System Cooling: ORIGIN CRYOGENIC Stage III Liquid Cooling PRO (For Standard/Inverted Configurations)
    • Case: Genesis Pro HPC
    • Processors: Dual Intel XEON E5-2630 v3 Octa-Core
    • Thermal Compound: GELID GC-Extreme Dual CPU Application
    • Power Supply: 1.6 Kilowatt EVGA SuperNOVA G2
    • Power Supply Sleeved Cable Color: Blue Individually Sleeved Cables EVGA
    • Graphic Cards: Quad 12GB NVIDIA GeForce GTX Titan X - [VR Ready]
    • Memory: 128GB ECC Registered 2133MHz (8 X 16GB)
    • Operating System: MS Windows 7 Professional
    • Operating System Drive #1 (Primary): 512GB Samsung 850 Pro Series
    • Operating System Drive #2: 1TB Samsung 850 Pro Series
    • Hot Swap Bay Drive #1: 2TB ORIGIN PC Approved Hard Drive
    • Hot Swap Bay Drive #2: 2TB ORIGIN PC Approved Hard Drive
    • Hard Drive Cage: 5 Bay Hot-Swap Cage
    • Optical Drive One: 24X CD/DVD Burner
    • Lower Unit: Cryogenic Cooling Support
    • Audio: On Board High Definition 8-Channel Audio
    • Networking: Onboard Network Port
    • ORIGIN Maximum Protection Shipping Process: ORIGIN Wooden Crate Armor
    • Warranty: Lifetime 24/7 U.S. Based Support and Lifetime Free Labor. 1 Year Part Replacement & 45 Day Shipping Warranty
    • ORIGIN Recovery: ORIGIN Recovery USB3.0 Flash Drive

    For about another $920 I could get the hybrid liquid-cooled Titan Xs.

    As I already have dual displays, keyboard, mouse, trackball, etc., I don't need those.

    The DIY system I designed came to about $8,200 for all the components, but then I have to put it together myself, and I am my own "service and support plan".

     

    Post edited by kyoto kid on
  • Peter Fulford Posts: 1,325
    hphoenix said:
    hphoenix said:
    hphoenix said:

     Core-for-core, the i7 WILL have a performance benefit, due to the hyperthreading of the integer pipeline.  Just not for CG or any other computationally-intensive floating point applications. 

     hphoenix, can you please point to any online benchmarks showing topline AMD CPUs keeping up with topline Intel CPUs for "CG or any other computationally-intensive floating point applications"?  Thanks.

     

    http://www.anandtech.com/show/3470

    (took me 5 seconds to find it.  I can find more.  You just have to know which benchmarks are CPU-agnostic and floating-point based.  LINPACK is the one used to test even supercomputers.)

     

    Thanks, that's useful if I want to run 1970s software on 8-year-old CPUs.  Are there any benchmarks showing contemporary AMD CPUs keeping up with contemporary Intel CPUs on contemporary computer graphics floating-point-intensive software?

    Okay.  I know you have an axe to grind...

    My only agenda here is useful and accurate information on hardware that is pertinent to typical use by fellow DAZ software users.

  • Peter Fulford Posts: 1,325

    The results in this comparison are typical of those I've seen between AMD and Intel CPUs running contemporary 3D rendering software (floating point based).

    http://www.guru3d.com/articles_pages/amd_fx_8370_and_8370e_processor_review,11.html

    I've never seen an online benchmark test showing topline AMD processors coming near topline Intel CPUs for 3D rendering performance.  I'd be mighty interested in seeing one, because at their price point, equivalently performing AMD chips would be must-haves.

  • hphoenix Posts: 1,335
    edited April 2016

    First, I said "comparable," not top-line.  The top of the line for Intel would be Xeon CPUs that run over $3000 each and compete with AMD Opterons that exceed $1000 each (comparable in that realm would be the Xeon MP X7560 vs. the Opteron 6386 SE, $3500 vs. $1400).  But let's stay in the realm of reason.  Comparable, in this case, would be the Intel Core i7-4790K against the AMD FX-8370.  Both run at 4.0GHz, though the AMD's turbo is slightly faster with all four cores loaded.  The i7 is only 4 cores to the FX's 8, but the hyperthreading helps out immensely with the integer pipelines.  Let's look at the comparison table you linked.

    The Frybench x64 benchmark is a pretty good test.  Lots of FP calculations, but also some integer work.  Let's look at the results.  AMD FX-8370 = 311s, Intel i7-4790K = 278s.  Hmm.  That's about a 12% boost ((311 - 278) / 278 ≈ 0.12).  If we discount ANY potential code favoritism in the benchmark, the integer benefits alone could easily account for half that difference.  So purely on clocks, and assuming NO tests that use Intel-only SIMD (and NO tests that use AMD-only SIMD), it shows maybe a 6% boost.  And it only costs 175% of the price of the AMD FX CPU ($350 list for the i7-4790K vs. $200 for the FX-8370)?  Wow, a bargain for the performance.  And I'd be willing to bet the code is not optimized to use all the benefits equally... since the same instructions on the two processors may not execute identically (the designs have to differ to avoid legal issues), the algorithms may have to be adjusted to compensate.  Most benchmarks don't go that deep into 'being equal'.  And as noted earlier, if the benchmark uses any AVX stuff, it would seriously hamstring the AMD, as it emulates the 256-bit registers by dropping down to the equivalent of only 4 FPU cores (it basically concatenates the two SIMD FPUs in one module).

    So how is that 'not coming near' equivalent performance?

    And since Intel intrinsics are easily available right in Windows Visual Studio C++ (they're included, but AMD's extensions on their high-end processors aren't, so most developers don't bother trying to implement the XOP or FMA4 instructions), there is a bias right there.  Raw performance-wise, they're damn near even: single-threaded execution gets a boost from the i7's hyperthreading, but multi-threaded execution benefits from the additional cores on the FX.

     

    Many online benchmarks and benchmarking websites have favorites.  Read between the lines.  And you did specify "keeping up with"... the very graph you linked shows it clearly.  Oh, that i7 at the top of the Frybench chart?  You'll notice they didn't include the equivalent FX CPU, the FX-9590, on the chart... which is stable at a 5GHz turbo on all 8 cores.  The i7-5930K only gets up to 3.6GHz turbo with all cores running, and that FX would have been close to, if not better than, that i7.  And at half the price or less.

     

    Post edited by hphoenix on