Free Non-Commercial RenderMan ...soon.
Just got an email update on this: it was supposed to be ready for SIGGRAPH, but it's still in beta testing.
I can't help thinking 3Delight's days are numbered if their future potential customers are being carpet-bombed by Pixar. With Softimage's death, studios that start migrating to Maya will probably be considering Pixar's new pricing ($495 per license) as they make the change, and I doubt the numbers will be good for new students choosing free 3Delight over free RenderMan to go with their free educational copies of Maya. The "Name Brand" factor alone could crush them...
... and then there's the actual features those students will be considering.
RenderMan/RIS | Interactive Re-Rendering:
https://www.youtube.com/watch?v=ytdIF24H0ks
That's a beasty computer they're on... but... wow.
Free Non-Commercial RenderMan FAQ:
http://renderman.pixar.com/view/DP25849
Hopefully, the 3Delight team will be proactive about this, but I fear that, as their budgets get eaten away, they will fall further and further behind.
DAZ might have to start thinking about long term contingency plans for their renderer.
Time to take another look at that "Render to RIB" option. :P
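For anyone who hasn't looked at RIB before, a "Render to RIB" export ultimately just writes a plain-text scene description that any compliant renderer (3Delight, RenderMan) can chew on. Here's a minimal hand-rolled sketch in Python; the scene below is entirely made up, and a real exporter would emit far more camera, light, and material state:

```python
# A minimal sketch of what a "Render to RIB" export boils down to.
# The scene is hypothetical and hand-rolled for illustration only.

def write_minimal_rib(path, image_name="test.tif"):
    """Write a bare-bones RIB file: one frame, one light, one sphere."""
    rib = """\
Display "{image}" "file" "rgba"
Format 640 480 1
Projection "perspective" "fov" [45]
WorldBegin
  LightSource "pointlight" 1 "intensity" [50] "from" [0 5 5]
  AttributeBegin
    Surface "plastic"
    Translate 0 0 5
    Sphere 1 -1 1 360
  AttributeEnd
WorldEnd
"""
    with open(path, "w") as handle:
        handle.write(rib.format(image=image_name))

# Any RenderMan-compliant renderer should take it from here, e.g.
# 3Delight's `renderdl scene.rib` or Pixar's `prman scene.rib`.
write_minimal_rib("scene.rib")
```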
Comments
...received the same email today as well. So far the only compatible software appears to be the high-priced stuff like 3ds Max, C4D, and Maya.
I was about to address the compatibility issue today when I noticed the email from them. Hopefully the RenderMan/3Delight team will also be working on a RIB bridge for Daz and other more "affordable" 3D software.
Love the "realtime" rendering and the fact that it employs procedural shaders.
Just in time, I received two AMD Radeon 7950s with 3 GB of VRAM and 1,750 stream processors each (from a friend who upgraded his gaming rig), which I am going to install this week.
ETA:
So I wonder if I can DL the installer to a mobile unit like a notebook and transfer the .exe for installation on my workstation, which does not have a Net connection?
The updated FAQ mentions third party integration with Blender but a quick google search didn't turn up anything worth mentioning.
...strange, as the Blender developers have been touting their own Cycles render engine.
Gave up on Blender anyway because of its overly cumbersome, keystroke-driven UI.
Since 3Delight is RenderMan compliant, it would surprise me if they didn't develop a RIB interface for Daz Studio, though maybe not right at release time.
Considering that we already have two different plugins for Luxrender, I don't think it's too risky to assume that someone might jump at the chance to make a plugin for a renderer from a company that even non-nerds are familiar with.
...maybe a new project for Alessandro and Kendall :)
...the more I think about this situation, the less I think 3Delight will be able to sustain themselves. Their Softimage customers are probably a much smaller percentage of their income than their Maya base, but starting to lose those licenses at the same time that Pixar is planting seeds for the new generation is going to hurt their ability to make adjustments. I don't see them having much room to maneuver anyway; they're not just being undercut by a competitor, they're being undercut by the people they owe their existence to.
3Delight has been free for commercial use for quite some time now.
This is a smart move on their behalf, but it's not going to hurt DNA Research at all.
As a relative noob, I would be interested to know what advantages Renderman has over 3Delight. I assume that it offers better image quality. Is it any faster?
Cheers,
Alex.
I believe it is a hybrid render engine that can render in both biased and unbiased modes.
...yes. I personally like the progressive style of rendering. I use the progressive mode in 3Delight only for test renders, though, to check scene elements for "floating" or "sinking" and for shadows, as it seems to have its own defaults which ignore some custom settings (returning an error message in the History/Progress pane), and it misses details, particularly where some shadow effects are concerned.
As to Reality/Luxus/LuxRender, too bloody slow even compared to using the native 3Delight and IBL/GI. They (Lux) need to get cracking on finally making GPU-assisted rendering available. Two to three days to cook a render job (that still isn't 100% clean) makes even old Bryce look lightning fast in comparison.
Did I mention that the new version of Reality, currently finishing development, is designed to support multiple renderers? No? Well, now you know ;)
...you mean like Octane, and of course, Renderman?
No, Octane is not something I will ever work with.
Renderman on the other hand is very interesting. Of course other renderers can be considered, if they make sense in this market.
Renderman's new license, which allows the program to be used for free for personal use, fits the Studio/Poser world perfectly. For people who need to use it commercially, the new pricing, $495.00, is very attractive.
Cheers.
I have been working for the past 3 years, off and on, to update Stefan Werner's PoserMan, a RIB export script. I have almost everything working, even the ability to reproduce complex shading networks (translating the shader nodes into function calls with matching parameters), though I am still filling in the code for those functions (using a mix of porting the Cg code and consulting the RSL guide and Advanced Renderman). I am waiting until Renderman is released to finish it, mostly because there are inconsistencies and limitations with 3Delight, and Renderman will not have a core limit, so I am going to continue using the actual Renderman product. I have been unemployed for 5 months, but only recovered from the crippling anxiety of the job for about a month. It is my hope that, with Stefan's permission, I can do something with my script once it's done.
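To give a flavor of the node translation described above, here is a toy sketch. It is not PoserMan's actual code: the node layout and the pm_* function names are made up for illustration, and the real script handles far more cases.

```python
# Toy sketch of translating a shader node network into RSL
# function-call text. The data model and pm_* names are hypothetical.

def literal(value):
    """Format a Python value as an RSL literal (strings double-quoted)."""
    return '"%s"' % value if isinstance(value, str) else str(value)

def emit_rsl_call(node):
    """Translate one shader node into an RSL function-call string.

    Inputs wired to upstream nodes become nested calls; unwired
    inputs fall back to the node's literal parameter value.
    """
    args = []
    for inp in node["inputs"].values():
        if inp.get("link"):                 # wired to another node
            args.append(emit_rsl_call(inp["link"]))
        else:                               # plain parameter value
            args.append(literal(inp["value"]))
    return "%s(%s)" % (node["func"], ", ".join(args))

# Example: a diffuse node whose color comes from an image-map node.
image_map = {"func": "pm_imageMap",
             "inputs": {"file": {"value": "skin.tif"}}}
diffuse   = {"func": "pm_diffuse",
             "inputs": {"color": {"link": image_map},
                        "Kd":    {"value": 0.8}}}

print("Ci = " + emit_rsl_call(diffuse) + ";")
# -> Ci = pm_diffuse(pm_imageMap("skin.tif"), 0.8);
```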
...indeed.
Even Maxwell is more expensive than the Renderman commercial version, and that is for the "node-locked" version. Meanwhile, the Maxwell "Learning Version" ($79) has a rendering size restriction, while the free Renderman is the exact same engine (with no "hamstrings") as the commercial one.
Not really impressed with Octane myself either, as to really make it useful (particularly for "big" scenes like I do) you need a really expensive nVidia GPU with at least 6 GB of VRAM and lots of CUDA cores (like the GTX Titan), since it only recognises the CUDA architecture. Totally useless if you have an AMD GPU.
Update / Edit:
You can use the integrated functions of the OctaneRender plugin for DAZ Studio to load only those textures that are actually visible in the viewport.
You can use the option to reduce the size of textures that are only visible in the background with the click of a button.
With some effort it is possible to fit most standard DS scenes into workstations with limited VRAM size.
You mostly only need very large amounts of VRAM if you want to use the plugin's one-click autoconversion, which converts every material in the scene, and are not willing to do any manual work.
I put together some information on VRAM use and converting DAZ Studio materials to OctaneRender materials in another forum post, to avoid going further off-topic here:
http://www.daz3d.com/forums/discussion/45153/
- - -
...I am primarily into single scene/single workstation rendering, as I know I do not, nor ever will, have the resources (financial or hardware) to work with animation (unless I win a lotto ;-) )
Octane is too expensive; even the basic single-node Maxwell is too expensive. Cycles is moot (also mentioned above), as I just don't grok Blender's overly cumbersome, keyboard-driven UI. Lux is just too slow and really needs a GPU boost to get render times down to a more "reasonable" level.
As to Octane 2.x, I would think that swapping between the GPU and the parent system's memory would increase render time, negating a good part of the engine's performance advantage; for improved rendering times, one would still need a hefty GPU.
On AMD and Octane, FirePro is effectively AMD's "Quadro" line, similarly more expensive than the Radeon series.
Regarding "in the cloud processing", I have no interest in leaving my work on someone else's server even if it is only for the duration of the render process (and of course having to pay extra for it).
My background in art is as a painter and set/lighting designer, not a photographer (another reason I have difficulty with Luxrender). As I read in a couple of the responses to the linked-to thread, I also don't use extreme settings, so any distortion that may present itself is pretty minimal. I actually feel I have been achieving pretty decent results combining photo backgrounds and 3D models in a rendered scene. Heck, I watch a lot of trailers for what I call "CG eye candy" films and can see the CGI elements standing out from the rest of the surroundings (sometimes annoyingly so). If the same happens with studios that have megabuck budgets to throw around, I don't really feel all that bad if my little scenes aren't 100% "photo real". Maybe this is why I actually prefer Pixar's fully CG-generated worlds over the composited effects used in most action films today, in that at least their work is consistent within itself.
The bottom line is I am looking at the best solution for my current hardware setup which also fits within my meager budget. If going with Renderman will be an improvement over Daz's built in 3Delight, it will be worth it.
@Kyoto Kid
Actually, Lux is quite fast. The issue here is that you might be using a slow machine. I don't know your configuration, so I'm only guessing here. CPUs have stalled in clock speed; in the last few years we have seen only very small increments. On the other hand, the number of cores available has increased dramatically. Rendering is typically a parallel process, and Lux takes advantage of all the cores and hyper-threading available.
Five years ago I bought an 8-core machine with Xeon processors, which means that Lux renders with 16 threads. The results are great and the speed is good. This is on a 5-year-old machine. If you buy a computer today and you don't get at least 8 cores, then you are penalizing your rendering. Parallelization is the way of the future, actually of the present. So, if you have a machine with a limited number of cores, then you might have to wait a little longer. That is a hardware issue, though; it's not from the software.
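To make the parallelism concrete, here is a toy bucket loop in Python. It is only a sketch of the general pattern, not Lux's actual code: the image is split into independent buckets, and one worker per logical core picks them up.

```python
# Toy illustration of why core count matters for rendering: the image
# is split into independent buckets and farmed out to every logical
# core. General pattern only, not LuxRender's implementation.

from multiprocessing import Pool, cpu_count

WIDTH, HEIGHT, BUCKET = 640, 480, 32

def render_bucket(origin):
    """Pretend to shade one bucket; returns its origin and pixel list."""
    x0, y0 = origin
    return origin, [(x, y, 0.5)                    # dummy "shaded" value
                    for y in range(y0, min(y0 + BUCKET, HEIGHT))
                    for x in range(x0, min(x0 + BUCKET, WIDTH))]

if __name__ == "__main__":
    buckets = [(x, y) for y in range(0, HEIGHT, BUCKET)
                      for x in range(0, WIDTH, BUCKET)]
    # One worker per logical core: an 8-core machine with
    # hyper-threading shows up as 16 workers here.
    with Pool(cpu_count()) as pool:
        for origin, pixels in pool.imap_unordered(render_bucket, buckets):
            pass  # a real renderer would composite the bucket here
    print("rendered %d buckets on %d logical cores"
          % (len(buckets), cpu_count()))
```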
In any case, Lux 2.0, which should be out in December, has a redesigned architecture with real-time rendering optimized for several GPUs. And since it uses OpenCL, it runs on all modern GPUs. With OpenCL, LuxRender can use both the CPU and the GPU at the same time, something CUDA cannot do.
So, in a matter of a few months we will have a new architecture, new speed, and still the great features that Lux has had since the beginning: top-of-the-line physics-based rendering, multiplatform support, extreme accuracy, and the best price on the market: free.
I think Renderman will be a tremendous addition. This is the same Renderman used to make all the Pixar movies, and if it's good enough for a multi-billion-dollar industry that creates spectacular movies that are the envy of the whole world, I think it will be good enough for Studio users :)
Cheers.
I have a more detailed explanation of the speed and CPU issues in my blog: http://preta3d.com/luxrender-life-in-the-fast-lane/
I hope that it's OK to post this link here.
For the more frugal out there... a ton of servers are on eBay. Decent old systems, such as HP ProLiant boxes, generally have dual Xeon processors. You can have 8 additional cores working on your renders for $200 to $400 (and up as the age goes down and core counts rise). These old systems are generally extremely reliable. You can pick up a free version of Linux, install a Luxrender slave on it, fire up your render on your PC, send a copy over to however many slaves you have, and eat them up! It's really neat to watch! The worst part of it all is that they are generally loud systems, so you might want them tucked somewhere in the distance. You will also need to think about heat. Beyond that, it's just the extra electricity cost. Apples to apples, Xeons are vastly superior to non-Xeon CPUs.
The connection to the slaves is made through the Luxrender GUI. This makes it Studio plug-in independent. Yes! Owning your own render farm is not out of reach for most. :)
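Just to show the shape of what the master side of that conversation amounts to, here is a toy sketch in Python. This is NOT LuxRender's real wire protocol (the actual slaves run luxconsole in server mode and the GUI merges their samples); the hosts and port below are hypothetical.

```python
# Toy master/slave dispatch, illustrating the general shape of
# network rendering only. Not LuxRender's actual protocol; the
# slave addresses and port number are made up.

import socket

SLAVES = [("192.168.1.50", 18018), ("192.168.1.51", 18018)]

def dispatch(scene_path):
    """Send the same scene file to every slave; each renders its own
    samples independently and the master merges them later."""
    with open(scene_path, "rb") as f:
        data = f.read()
    for host, port in SLAVES:
        with socket.create_connection((host, port), timeout=5) as conn:
            conn.sendall(len(data).to_bytes(8, "big"))  # length prefix
            conn.sendall(data)                          # then the scene

if __name__ == "__main__":
    dispatch("scene.lxs")
```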
If you want something more power-efficient and less noisy, you can consider motherboards with Intel's octa-core Avoton C2750 CPU
http://ark.intel.com/products/77987
like the ASRock C2750D4I Mini ITX server motherboard
http://www.newegg.com/Product/Product.aspx?Item=N82E16813157475
...I currently have an i7 (8 threads) with 12 GB of memory in tri-channel mode. The GPU is irrelevant, as it does not figure into the rendering equation (until Lux finally introduces GPU-based rendering). I am not into overclocking, as I am not a gamer and it only compromises system longevity. I finished building my system just before losing my job early last year, so any idea of upgrading to anything bigger and faster is pretty much out. Yeah, I know there are a number of members here with newer, more powerful systems than mine (ps1borg has the latest Mac Pro with a 12-core Xeon and 64 GB of memory), but again, being unemployed, I would need to win tonight's lotto to consider "keeping up". That said, what I currently have is still a major step up from what I had been working on for years: a dual-core 32-bit notebook.
Even so, I have been hearing from others with more "horsepower" than my system has that they are still waiting upwards of a day or two for a render to finish cooking.
...well, I need to first get a more steady income and then find a bigger place to live before considering such a setup. As I am not a Linux person, that also means having to get multiple copies of Win7 Pro/Ultimate, as most likely these systems will have/support more than 16 GB. Furthermore, wouldn't each one need to have a GPU as well?
My "dream workstation" would be built on a Supermicro server MB with dual Xeon 2697s (12 cores each), 768 GB of memory, an nVidia Quadro K6000 (12 GB GDDR5), and multiple SSDs/HDDs. Basically, it would be a "deskside supercomputer".
You have a fair system, Kyoto Kid. Unless you hit the swap, you should be able to get some good renders at a decent pace. Without hijacking this thread more (apologies for this), we can continue the discussion in the Reality forums and I will gladly help you optimize your render times.
Cheers.
...thanks.
Of course still very interested in the Reality/Renderman connection. I can get some pretty good if not slightly amazing results with the native 3Delight which is why I am so excited about the Renderman offer. I may even be able to use Garibaldi Express without exporting hair to a .obj (which really spikes the polycount).
Paolo, you are SO right about this.
I think a lot of people "shoot low" when they buy or build a new system for rendering, then wonder why their performance and speed are not what they wished for.
Oh, thank you also for keeping up with your timely email notifications to your customers. Now I just need to do a better job of reading them in a timely manner!
You're very welcome Subtropic Pixel.
I think that a lot of people are not aware, yet, of the advantages of multiple cores, and so they avoid making the investment. Multiple cores are now used by so much software, and they help not only with Reality and LuxRender but with Studio itself. All 3D programs are getting updated to support multiple cores.
At the bare minimum, your OS takes advantage of the cores and you get better, faster multitasking.
Glad that you like the updates. Everything is on the website, so you can read the post any time you want :)
Cheers.
For me it's less an "unaware of the advantages" issue and more of a "costs and time" issue. I'm a slave to my production cycle, and that cycle is short. As much as I'd love to have renders that look so good, I just can't afford the time.
3Delight is fast. Biased renderers in general are faster than unbiased. And, from what I'm hearing, networking creates bottlenecks that are unsupportable unless you were planning on the render taking days anyway.
The other thing about multi-CPU rigs is that they require server licenses, unless Linux has gotten about nine hundred percent easier to use than it was the last time I looked at it. Windows 7 licenses are around $75 right now where Windows Server 2008 R2 (its server counterpart) costs around $700. At that price it'd be cheaper to just build multiple quad core machines and do simultaneous different renders on them in 3Delight instead. They still won't look as good as renders from an unbiased engine, but they'll be a heckuva lot faster and cheaper (and I know I can sell enough product that way to keep me in business).
...when I built my system, the "new" 6-core (12-thread) i7s were about $1,000 compared to $270 for the 4-core (8-thread) one I have. On my budget I had to go with what I could afford (the total cost of the build was about $1,400).
AMD does not employ hyper-threading, and at the time they only had a straight 6-core CPU available. Furthermore, the i7 has a wider pipeline between the CPU and memory (about 3 times wider) than the comparable AMD CPU offered. Like a superhighway, the fatter the pipeline, the more instructions and calculations that can be passed back and forth between CPU and memory, which is important when dealing with CPU-based rendering.
As to server licences, yes, they are pricey. It's no wonder the dual-Xeon Mac Pro (last generation) was so expensive, as only server boards (at least as far as I know) can support multiple CPUs.
Of course, if one can drop $18,000-$20,000 on an "ultimate" dual 12-core Xeon workstation build with 512-768 GB of memory and a Quadro K6000 GPU, the $700 Windows Server licence is the least of your expenses.
The consumer version of Windows 7 and 8 will run with a 2P (2-slot) CPU system, and does not require additional license cost. It's when you get to 4P systems that you start sweating the pennies, because 4P native Windows will require Windows Server, which gets into the 4-digit dollar amounts.
I have not investigated this, but I think it may be possible to run a 4P system with VM and two Windows 7 or 8 guests. In the VM, you'd restrict the Windows 7/8 guests each to a maximum of 2 CPU slots. This would avoid the need for Windows Server. You'd still have to buy two separate Windows licenses. Such a system starts to get a bit convoluted in terms of support, but still, $400 is a lot cheaper than the cost of a single $1200+ Windows Server 2012 license. And the hardware outlay only requires one PSU, one case, one motherboard, one hard drive (if it's just a rendering server), one UPS, and so on. Basically, you're building only one PC but making it act like two (or more).
What I would prefer to do is to find a rendering engine that can use CPU, OpenCL, and CUDA on either Windows (local) or Linux (remote).
Then I'd set it up like this:
System #1 - Local workstation, 1P or 2P, 8-core + hyper-threading + either OpenCL or CUDA. With hyper-threading, this is getting into the 16-32 thread range. Nice. Good for test renders.
System #2 - Remote rendering server running in the guest bedroom as a 2P or 4P Xeon, 10+ cores + hyper-threading + OpenCL and/or CUDA. This would be a total of 40 or more CPU cores, twice that if you count hyper-threading. For the heavy stuff.
System 2 would be constantly running my distributed computing stuff (Folding@Home, etc.), and would dynamically shut down the folding processes when it receives rendering work to do. Or if I have a guest staying in that bedroom who might like ambient room temps under 85°F. When the network rendering pipeline is empty (or when my houseguest leaves), then folding would start up again.
To economize software costs, System 2 would run a VM with one or more Linux guests and a Windows guest. The Windows guest would only have access to two CPU slots, so therefore could run a consumer version of Windows and would not violate license terms.
My biggest problem right now, and a major reason I haven't done this, is that at the moment I prefer OpenCL for GPU compute, because it is more efficient for the Folding@Home work. And not all renderers use OpenCL. So this dichotomy causes financial challenges, because for every slot I give to a CUDA card (say for Octane, which I don't have yet), I lose a slot for OpenCL.
This is all pie-in-the-sky conjecture right now and I don't have the money to do any of this anyway (busy fixing the house this summer). So it's just as well because I can wait for something in the marketplace to pop loose.
In order for this to work, the host OS would need to be Linux. The VM environment will only see the number of CPUs that the Host makes visible. I've seen it tried.
Kendall
...as I am not up on networking (and certainly not Linux savvy), I'd still look to keep everything in one "box" (or in this case "rack") for simplicity's sake. Yeah, my dream "minisupercomputer" is pretty much just that, a dream, though it is interesting to see just how far I could take the concept of an "ultimate CG workstation".