Yet Another Hardware Question
First of all, I'm really sorry if this is in the wrong place.
I know that a certain someone (from somewhere else) has said:
"If you want faster rendering time get a better CPU, if you want to work better in viewport get a better GPU"
Well, I'm gonna get (going to, not yet bought) a Radeon R9 270 with 2 GB of memory on a 256-bit bus (yes, I know there are better ones, but for now I prefer to chunk out only this much money).
Do Radeons work well with hardware-assisted renders? Namely Daz's OpenGL (or GLSL, or whatever it's called).
A friend told me that to work better with OpenGL I should get a Radeon card and not Nvidia, since Nvidia intentionally cripples their OpenGL capabilities in favor of their own CUDA architecture.
What do you guys think?
Comments
I like the Radeon and AMD approach; it works well for me. Good graphics card for consumer 3D. You might see if an AMD FirePro V4900 is available. Cheaper for pro graphics use, and you only need 1 GB of RAM.
This is the first I've come across the one about NVidia graphics cards. Ever since the very first public test releases of D|S, though, NVidia has been the recommended brand because of its OpenGL support. You don't even need much more than a halfway-decent card — I have an ancient GeForce GT240, and with up-to-date drivers from NVidia that gives me OpenGL 3.3.0. The Viewport is sometimes a bit sluggish in a really object/texture-heavy scene, but still perfectly usable. Radeons are also known to work well with D|S, with the small gotcha that when you update drivers, you should be prepared to roll back to your previous version; every now and then an update comes along that sees a D|S render job and tosses its electronic cookies. (The following update usually fixes this.)
Whichever card you go for, though, it should have the most onboard video RAM (VRAM) you can budget for, since this determines not just Viewport speed/efficiency, but also how many textures the objects in the scene can have before things start slowing down.
Actually the reference to Nvidia crippling performance is for OpenCL, not OpenGL. OpenGL has been the standard for interactive 3D graphics viewport acceleration for many, many years, and ever since the introduction of the Nvidia Quadro line of video cards (1990s), Nvidia has been considered one of the leaders in OpenGL support (and still is).
OpenCL is the "open source" competitor to Nvidia's CUDA. Both are designed to provide support for massively parallel processing via the GPU. More specifically: "OpenCL™ is the first open, royalty-free standard for cross-platform, parallel programming of modern processors found in personal computers, servers and handheld/embedded devices. OpenCL (Open Computing Language) greatly improves speed and responsiveness for a wide spectrum of applications in numerous market categories from gaming and entertainment to scientific and medical software". More information can be found here: https://www.khronos.org/opencl/
"CUDA® is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU)". - See more at: http://www.nvidia.com/object/cuda_home_new.html
DS/3Delight does not use either OpenCL or CUDA for rendering. The OpenGL rendering in DS does use the video card, but it just uses OpenGL — the same as the interactive viewport in DS, only at better quality settings. The OpenGL renderer in DS is nowhere close to what is provided by either CUDA or OpenCL in quality, capabilities, or performance. To use either CUDA or OpenCL for rendering with DS you would need the Octane Render plugin (CUDA only) or LuxRender via Reality or Luxus (OpenCL — but GPU-only rendering in Lux currently has a fairly limited feature set, really isn't easy to use, and is also very easy to crash). Other options exist for those who are willing to export their scenes to external render engines; Blender/Cycles (CUDA) and Thea Render are two good choices here.
With regard to Nvidia "crippling" OpenCL on their cards, that's a claim that seems very popular on various forums, but IMHO it really doesn't make sense. I tend to think it's more a case of Nvidia putting their resources into optimizing CUDA, which is their own technology, while making sure OpenCL is functional but not putting a lot of resources into optimizing it. Because CUDA is proprietary technology, it's not available on ATI cards; therefore ATI is putting significant resources into optimizing OpenCL performance. I know there is a lot of talk on different forums about how poorly Nvidia cards perform with OpenCL compared to ATI (with some people implying that they are almost non-functional), but the speed of SLG (SmallLuxGPU) with Nvidia cards is still quite impressive. Take a look here (http://www.luxrender.net/luxmark/) to see some benchmark scores; there are a lot of Nvidia cards in the top GPU scores.
As for which is "better", ATI or Nvidia, they both have their pluses and minuses. If at some point you want to start using true GPU rendering but don't know which flavor of GPU processing you will be using, the safest option is an Nvidia card, as they can run both CUDA and OpenCL. If you do a lot of gaming, I'd look for specific benchmarks for the card(s) you're interested in to get the most bang for your buck. If all you're interested in is display performance in DS, IMHO either ATI or Nvidia will give great OpenGL performance (I've used both); it really depends on what you prefer.
Sorry for the long reply; there seems to have been a lot of confusion between OpenGL and OpenCL lately (and it's quite understandable, given how similar CL looks to GL, especially on smaller displays like those found on tablet computers). Hopefully you found something of value.
FWIW, OpenCL appears to be good on my card and is mentioned in the spec sheet; I use Luxus now and then (still trying to get my head wrapped around it), and LuxRender has two versions, one with OpenCL support and one without. I installed the one with it, and when I try a hybrid render (parallel processing with CPU and GPU) it sometimes gives impressive render speeds. Not entirely reliable yet, though; LuxRender's OpenCL support for hybrid mode is still sorta in beta.
Thanks for the replies, they really shed some light on the choices I'm about to make.
As for choices in working with renders (and/or gpu assisted rendering):
My choices expanded to a GTX 470 (found one that's on sale around the block) and (still) the R9 270/X.
The bottom line of my choices would come down to whether or not I'm willing to invest in Octane (since LuxRender is free). But thanks for all the input and thoughts, it gives me something to think about (in a good way, instead of adding more brain farts).
I'll be sure to share my choice later on and how it works out :lol:
FYI: What I'm doing with 3D is creating a blog mascot-slash-poster-girl, so I don't really render heavy scenes — just one character (or two at max) with a simple backdrop. The image provided is a sample of what "she" does.
This is why I need faster render times compared to the 10-15 minutes per render (yes, I know that sounds ridiculous, but I do my posts in comic format, and a single comic page can consist of 5-8 images of said person in different poses) so I can chunk out more posts.
LuxRender is free, but the two plug-ins that allow simple conversion/bridging from DS to it (Reality and Luxus) are both paid-for items. There is, or was at least, a trial version of the Octane standalone render engine (you can export from DS as OBJ and import into Octane).
With that out the way ...
I have, and use, all three render engines and have no bias or connection to any.
LuxRender has 3 principal modes of operation: CPU only, GPU only, or a hybrid. As has been mentioned, the GPU utilisation is still being worked on. LuxRender is a damned good render engine; just go look at what people like Charley have created using it. With the right simple setup, CPU-only renders CAN be blisteringly fast, and a final Samples/pixel value of 700+ can provide a decent result.
Octane likewise has 3 main modes of operation: direct lighting, path tracing, and PMC, which give, in that order, progressively better handling of light, and in particular mesh lights. Being purely GPU-based, Octane is very quick, though you can, of course, slug it with complex scenes full of lights and reflections. You also have limits on the size/complexity/textures of the scene, though with your suggested image I do not think you'd have much issue. The DS/Octane plug-in is still in beta and makes use of the old 1.2 version of the Octane render engine (which it is hard-coded/linked to use), whereas the latest standalone is 2.3. Octane only started to make use of displacement maps at version 2.
3Delight has the great benefit of being free and allowing unrealistic lighting! It also allows full use of all the various shaders the PAs use. Both Octane and Reality/Luxus can 'choke' a little on non-standard shaders and, at best, map across basic map data. 3Delight can produce wonderful images and can do so quickly. It can also take a while ...!
Something that pertains to all three is PPPPPP (Proper Planning Prevents P***-Poor Performance). My advice, whatever route you go down, is to spend a fair degree of time with the basic model, adjusting materials and lights, etc., until you are happy. Save that as a scene so you retain all that work — Octane, Luxus, and Reality will all save plug-in specific data, so all you need to do is load the scene, pose the figure, render ... ;)
You don't need to buy any new video card unless you're going to use Octane. The 270 is decent for LuxRender according to AnandTech's bench: http://www.anandtech.com/bench/GPU14/845 . The LuxMark DB is full of false results, and I can't get single-GPU benchmarks. ATI is better at OpenCL than Nvidia, but if you need CUDA there is no other way than to buy Nvidia. Your choice is more of an application choice than a hardware one.
Nvidia may not be crippling its OpenCL drivers (not OpenGL), but that is surely a field where they don't push themselves. I bought the GTX 750 Ti because I use Blender/Cycles and need CUDA. I thought it would be OK, but I have some bugs with LuxRender. Waiting for OpenCL 1.2 from Nvidia if it ever comes out (there are rumours it is coming).
Right now, unless you want to go the GPU rendering way, you'd better buy a better CPU and learn how to render more efficiently in DS.
If you do go with an NVIDIA card, check out Teleblender by casual in the freepository.
Gus
Here is an example render that took 2-3 min. in Octane using a GTX 670M with 3 GB of VRAM (the GTX 470 should provide fairly comparable performance). Nothing special here, just playing around. But it should give you an idea regarding GPU rendering speeds with simple scenes and simple (no SSS) shaders.
Keeping your future options open for GPU rendering using Octane, Blender/Cycles, or Lux might be a good idea if you have any interest at all in GPU rendering. Of course the AoA ambient, spot, and distant lights might also provide the speed you are looking for in DS/3Delight.
Hi dustrider, what is the name of that outfit the lady is wearing? I forgot.
Hey starionwolf! It's Prehistoric Princess for G2F
Thanks dustrider, I will bookmark that page.
The render is pretty. Her skin looks nice.
That certain someone is correct if he or she is referring to the 3D engine/renderer called 3Delight that Daz Studio uses. The 3Delight renderer uses the CPU, not the graphics processor. The viewport (preview) of Daz Studio does use OpenGL and the video card, and the OpenGL preview render uses the GPU as well. Renders using the OpenGL engine leave much to be desired; I don't think Studio's OpenGL render applies shaders or transparencies.