New GPU/CPU comparison questions.

Joltrast Posts: 199
in Daz Studio Discussion

I hope I'm posting in the correct forum - please move me if not! Thanks!

I managed to fry my 8800GT card last week (which isn't the end of the world, as I'll be building a new system soon), so I need to get it replaced, but I'm a little confused by some of the new technology and wondered if some of you might be able to help me work out the best choice for a replacement.

Knowing that the graphics card can easily be upgraded in the future, I'm not so worried about going high end.

The kind of range I'm looking at is something like an HD 7870 or a GTX 660 with either an i5 or i7 CPU. (The CPU will be in a new build, whereas the GPU is a replacement for the melted card, but will be moved to the new build.)

I've read in various places that you get better performance from OpenGL cards, but then I'm also hearing people talk about CUDA cores. I'm not sure how either of those impacts performance.
The new build will be a Win7 64-bit system with as much RAM as I can afford (should the RAM be DDR5 now?) running DAZ Studio 4.6.
I don't plan to overclock or add any extra cooling.

Any advice gratefully received. Thank you in advance!

Comments

  • Jim_1831252 Posts: 728

    Not sure why you didn't get a response to this! I'll give it a go. I just recently wrote a huge article about which graphics cards to consider for which purposes. If you are happy with 3Delight (what DAZ Studio uses), then a graphics card won't help with rendering - focus on getting the best CPU you can afford. It really depends on what software you want to use, and how heavily.

    Articles:
    Article on what to consider when choosing a new graphics card for DAZ/Poser, LuxRender, and Octane
    http://www.digisprawl.com/blog/hardware/best-graphics-card-daz-studio-poser-luxrender/
    Follow-up dealing with advances in DAZ Studio and advanced OpenGL features
    http://www.digisprawl.com/blog/hardware/graphics-card-addendum-1-daz-studio-display-optimization-preview-improvements/

    Hit me up if you have any other questions and I'll try to help as best as I can.

  • Takeo.Kensei Posts: 1,303

    The real question is whether you eventually plan to use a third-party plugin to render or not. If so, a better card could be useful. Otherwise you can buy pretty much anything, as it won't make any difference for DS.

  • Jim_1831252 Posts: 728

    @ Takeo: It really depends on how big your DS scenes are and what sort of viewport quality you want. A mid-range graphics card should be enough to handle huge scenes. I was just testing this out the other day. A scene of 15 million polygons with about 300 textures (none over 2.7 MB on disk) uses up just about all the VRAM on my 1 GB HD 5770, plus 4.2 GB of system RAM. This was with display optimisation set to best and texture resources on the second and third settings (the third setting took a good while to load into VRAM/shared memory).
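
    To put rough numbers on that, here's a quick back-of-envelope in Python. The 2048x2048 average is my assumption for illustration, not a measured figure; the point is that a texture's size on disk says almost nothing about its footprint once decompressed for the GPU:

    ```python
    MB = 1024 ** 2

    def texture_bytes(width, height, channels=4):
        # Uncompressed size of one texture as the GPU stores it (no mipmaps).
        return width * height * channels

    # Assume the ~300 textures average 2048x2048 RGBA once decompressed
    # (a 2.7 MB JPEG easily expands to this).
    full_res = 300 * texture_bytes(2048, 2048)
    print(f"full resolution: {full_res / MB:,.0f} MB")   # ~4,800 MB

    # Halving each dimension (roughly what a lower 'texture resources'
    # setting amounts to) quarters the footprint:
    half_res = 300 * texture_bytes(1024, 1024)
    print(f"half resolution: {half_res / MB:,.0f} MB")   # ~1,200 MB
    ```

    Either way it's far more than a 1 GB card can hold, which is why the rest spills over into system RAM.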

    Admittedly this is not a typical scene, and I'd have to export it to render in 3Delight standalone to stand a chance of rendering it as is. Even so, I have to admit the posing and viewport manipulation was just about flawless.

    So, it really comes down to what you're going to do with DS. If portraits and simple scenes are what you plan to do, then yes, just about any card currently on the market will do.

  • Takeo.Kensei Posts: 1,303

    @Jim

    Try with the following parameters and tell me if you see a difference:
    - Display optimisation = none
    - Hardware antialiasing = off
    - Texture resources = all for performance
    - Pixel buffer = off

    I see no visual difference with all parameters maxed out or not. There is also no difference between my ATI Radeon X800XL, which only has 256 MB of VRAM, and my 2 GB Quadro with the above settings, and with these parameters the ATI has no problem handling any scene without slowing down. On my desktop PC, my 12 GB of RAM gets eaten long before the VRAM of the ATI card.

    From my hardware I can say one thing: if my almost 10-year-old card can do it, almost any card that supports better than OpenGL 2.1 will do.
    Second point, back on topic: I also have CUDA on my Quadro and it doesn't bring anything in DS compared to my ATI.

  • Jim_1831252 Posts: 728
    edited September 2013

    Like I said, I've been testing it out, so I knew exactly what would happen. Just for laughs I went and tested again. With optimisation at best the scene moves like water. With it turned off it becomes like trying to eat a hamburger through a straw. My CPU is an i7 860, so it isn't exactly a pushover.

    The textures take up the most room in VRAM by far; geometry only accounts for around 100 MB. At around 600K polygons my CPU starts to slow down in DS (without optimisation). Texture size is virtually irrelevant if you've got the RAM, so it isn't the VRAM that's the helper, it is the GPU itself. When panning the viewport the GPU frequently ticks up to 100%.

    It's all written up in the blog post linked above. I tested my graphics card's capacity until virtually all system resources were used. The killer is geometry (at least as far as viewport performance goes). Textures are what will choke your RAM to death.
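
    To illustrate why the unoptimised viewport chokes on geometry, here's the rough arithmetic as I understand it (the per-frame resubmission model is my reading of what display optimisation avoids, not anything official):

    ```python
    # If geometry is pushed to the GPU every frame instead of cached
    # there, CPU cost scales with polygons x frames per second.
    polys_per_frame = 600_000    # where my i7 860 starts to struggle
    target_fps = 30
    verts_per_poly = 4           # assuming mostly quads

    submissions = polys_per_frame * verts_per_poly * target_fps
    print(f"{submissions:,} vertex submissions per second")   # 72,000,000

    # Tens of millions of per-vertex operations a second is roughly
    # where a single CPU core saturates, so past this point the frame
    # rate collapses no matter how fast the graphics card is.
    ```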

    Edit: and it's actually largely thanks to you, Takeo, that the second blog post was written. Your mention of the new shader preview was too interesting not to go check out. I mentioned all this in that other thread if you happen to stumble on it.

    Post edited by Jim_1831252 on
  • Takeo.Kensei Posts: 1,303

    Just read your article. Nice addition to your preceding one. It shows what can happen with bigger scenes (I should run the test with my ATI to see what's up).

    However I wasn't speaking of specific cases but rather common use (we're mainly hobbyists, no?), and not really thinking about the DS beta's new features.
    And I'm pretty sure you can make almost any card work with DS, because it's still aimed at the hobbyist market, and making it work on a wide range of hardware is better for getting more customers.

    Sidenote: there is something you can add to your article, but I'll let you get to the conclusion:

    - Write down your VRAM usage before starting DS
    - Start DS and load a scene
    - Write down your VRAM usage
    - Set your system to "optimised for best performance" (I assume you have Win7+ and you know how to do that, and maybe you already know what happens with default settings)
    - Write down your VRAM usage
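
    If you want to script those readings rather than eyeball a monitoring tool, something like this works on an Nvidia card (a sketch assuming the pynvml package; for an ATI card like mine you would need a tool like GPU-Z instead):

    ```python
    # Run once before starting DS, again after loading a scene, and
    # again after changing the Windows performance settings.
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"VRAM used: {mem.used / 1024**2:.0f} MB "
          f"of {mem.total / 1024**2:.0f} MB")
    pynvml.nvmlShutdown()
    ```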

  • Jim_1831252 Posts: 728

    It might be a specific case, but I assume it generalises to larger scenes. I've already noted the generally good performance most cards will get with typical scenes, so it doesn't hurt to talk about larger ones.

    I did mean to make the addition you suggest. I think the default usage would be between 70 and 200 MB of VRAM.

  • StratDragon Posts: 3,249

    Joltrast said:
    The kind of range I'm looking at is something like an HD 7870 or a GTX 660 with either an i5 or i7 CPU. [...]

    If you plan to do any CPU or hybrid rendering go with the i7, those 4 hyperthreaded cores do make a difference.

  • Dream Cutter Posts: 1,224

    AMD FX-8350 Vishera: 8 cores for less. My 3 liquid-cooled render cows in my farm chew through scenes with them, 24x7 all summer long: 24 cores raytracing.

  • Joltrast Posts: 199

    StratDragon said:
    If you plan to do any CPU or hybrid rendering go with the i7, those 4 hyperthreaded cores do make a difference.

    Would the i7 create any kind of bottleneck with either of those cards?
    I plan to make print-quality renders, possibly using the Reality plugin (I haven't decided about that yet), mostly using UberEnvironment within DAZ Studio with 3Delight.

  • StratDragon Posts: 3,249
    edited September 2013

    The bottleneck would be the slowest component in your system: if your RAM is slow, that's the bottleneck; if your motherboard's bridge is slow, then that's your bottleneck.
    At the time of writing this (Sept 2, 2013 2:15 PM EST), Reality-bridged renders to the LuxRender 1.3 RC1 engine are generally better quality using the CPU than the GPU. The GPU will give you great results quickly provided you have a good card, but you won't get the quality of the CPU render.
    Intel or AMD is fine, it's generally a matter of preference. I have an army of Intels at work getting beat on all day and they're rock solid, so I'm an Intel fan right now; that being said, I've built plenty of AMD CPU boxes and they're very good too.

    Post edited by StratDragon on
  • Joltrast Posts: 199


    StratDragon said:
    Reality-bridged renders to the LuxRender 1.3 RC1 engine are generally better quality using the CPU than the GPU [...]

    Are you saying that LuxRender gives you a choice of GPU or CPU render?
  • Jim_1831252 Posts: 728
    edited September 2013

    With Lux you can go hybrid CPU + GPU, pure CPU, or pure GPU. All of the GPU methods are somewhat experimental, but hybrid is quite good. If you do hybrid renders you are more reliant on the CPU than the graphics card.

    If you do plan to use any GPU methods with LuxRender, then the only real choice for a graphics card is an AMD HD 7xxx or above. Well, you could get something older: when you start to look at older cards from the HD 6000 series, newer GTX cards (650 and up) become a lot more competitive in terms of performance. This is basically quoting from the articles I linked.
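
    If you're curious what switching modes looks like under the hood, here is a sketch that flips the Renderer directive in an exported .lxs scene file. The directive names ("sampler" for pure CPU, "hybrid" for CPU + GPU) are my understanding of the LuxRender 1.x format, so double-check them against the docs for your version:

    ```python
    import re

    def set_renderer(lxs_path, mode):
        # Rewrite the scene's Renderer line, or prepend one if absent.
        with open(lxs_path) as f:
            scene = f.read()
        if re.search(r'^Renderer\s+"', scene, flags=re.M):
            scene = re.sub(r'^Renderer\s+"[^"]+"', f'Renderer "{mode}"',
                           scene, count=1, flags=re.M)
        else:
            scene = f'Renderer "{mode}"\n' + scene
        with open(lxs_path, "w") as f:
            f.write(scene)

    set_renderer("scene.lxs", "hybrid")   # or "sampler" for pure CPU
    ```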

    Post edited by Jim_1831252 on
  • StratDragon Posts: 3,249

    I know you can use ATI cards, but I think the devs over at LuxRender favour the Nvidia chips. I have a lowly GTS 250 in a 3.5-year-old build; the card can barely squeeze out flash animations, but it is on the list of supported cards for hybrid renders. I don't use hybrid except to satisfy my own curiosity, but the CPU is an i7 920 and it's still a rock.
    I'm holding off on the 6xx and 7xx cards; the price of RAM is all over the place and I think we might see 4GB GDDR5 come standard in video cards, so I'm strictly CPU until that market figures out where it's going.
    That and I have 200+ cores off of dual Xeons at my job, and after-hours farming off of flash drives is a good thing.

  • Jim_1831252 Posts: 728

    Sorry StratDragon, your info there is very mistaken. Here the developers state best performance comes from Radeon 5xxx series cards http://www.luxrender.net/en_GB/gpu_support

    This is actually quite old info. Newer Radeon cards, including all 7000 series, are supported and soundly thump all Nvidia cards. If you look at any thread on upgrade suggestions for LuxRender you will find AMD cards are always suggested by the developers. If you google Nvidia vs AMD LuxMark results you'll find that Radeons always win. The only instance where this isn't true is in the results submitted directly to the LuxMark database, where the GeForce Titan and GTX 680 have pulled ahead, but those are definitely not off-the-shelf results, and I doubt anyone who isn't a programmer/OpenCL wiz could get them.

    It is a widely known fact that Nvidia has underdeveloped its OpenCL support in favour of its own CUDA technology.

    Not sure if 4GB is going to become the standard any time soon. There are some cards with 4 and 6GB, but the next crop of mid-range AMD cards is set to be 3GB, though who knows what else will be announced. Not much is known about the GTX 800 series, but I would be surprised if they packed in more RAM than AMD when the opposite seems to be how it usually goes.
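
    One practical check before buying: LuxRender's GPU path talks to the card through OpenCL, so it's worth listing what OpenCL actually reports for the hardware you're considering. A sketch using the pyopencl package (needs the vendor's OpenCL driver installed):

    ```python
    import pyopencl as cl

    # List every OpenCL device the way LuxRender would see it.
    for platform in cl.get_platforms():
        for device in platform.get_devices():
            print(f"{platform.name}: {device.name}")
            print(f"  compute units: {device.max_compute_units}")
            print(f"  global memory: {device.global_mem_size / 1024**2:.0f} MB")
    ```

    If a card doesn't show up here at all, no number of CUDA cores will help it in Lux.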

  • StratDragon Posts: 3,249

    jimzombie said:
    Sorry StratDragon, your info there is very mistaken. Here the developers state best performance comes from Radeon 5xxx series cards http://www.luxrender.net/en_GB/gpu_support [...]

    Good heads up, there have been some big advancements since the last time I looked, and I think I might have cross-referenced Octane with LuxRender; either way, my bad.
    They do need to update that link though; most 4xx series cards are going on 3 years old or more.

  • Jim_1831252 Posts: 728

    Ah, the first thing I thought when you mentioned Nvidia was that you had Octane and Lux mixed up.
