Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2024 Daz Productions Inc. All Rights Reserved.
Comments
...so basically unless I can afford a Quadro P6000, I am stuck with CPU rendering in Iray.
...I have no choice as my GPU has only 1 GB of VRAM. Even so, most of my scenes end up rendering in swap mode, which is even slower, since I usually have only 2.5 - 3 GB of overhead for rendering after the base amount of memory the scene file and the Daz programme take up. Doubling my system memory to 24 GB would help, but it also means upgrading the OS from W7 Home to Pro Edition. All totaled, about a $255 expense.
No. As has been discussed in other threads, you can lower the scene's size through techniques like using the texture atlas, hiding geometry that will not be seen, turning off bump and displacement where it is going to make absolutely no difference, etc. These will lower the resource load (even in 3DL) making it more likely that things will fit in VRAM and speeding up the render in general.
If the tongue is not seen, turn it off (same with teeth and inner mouth). That removes the facets and the texture load for it. It also means that the engine doesn't need to do visibility tests for it during the render. This goes for feet, legs, upper arms, etc. when they are completely covered by opaque clothing. Doing that can bring an otherwise huge scene down significantly in size.
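To put some rough numbers behind why hiding unseen surfaces helps: an uncompressed texture's memory footprint is roughly width × height × channels × bytes per channel, plus about a third extra for mipmaps. A minimal sketch (the figures are illustrative ballpark estimates, not measurements from Daz Studio or Iray):

```python
# Rough estimate of the memory an uncompressed RGBA texture occupies.
# Real renderers may compress or convert textures, so treat these
# numbers as illustrations of scale only.

def texture_mb(width, height, channels=4, bytes_per_channel=1, mipmaps=True):
    base = width * height * channels * bytes_per_channel
    if mipmaps:
        base = base * 4 // 3  # a full mip chain adds roughly 1/3
    return base / (1024 * 1024)

# One 4K x 4K color map:
print(f"4K map: {texture_mb(4096, 4096):.1f} MB")   # ~85.3 MB
# A figure carrying ten such maps (diffuse, bump, specular, ...):
print(f"10 maps: {10 * texture_mb(4096, 4096):.1f} MB")
```

At that scale, turning off a fully hidden body part that carries its own 4K map set can easily claw back hundreds of megabytes on a 1-4 GB card.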
Kendall
+1
This is partly because the tools needed to do highly detailed morphs aren't available to the prospective vendors, so most new folks get into the habit of doing the highly detailed textures from the beginning and don't change how they do it.
I'm not talking about morphs, but the base geometry. HD Morphs are a completely different animal. PAs have been conditioned to create models using almost "game level" numbers of polygons simply because 3DL preferred it. For instance, if modeling a house made of brick, the tendency would be to create the walls with as few facets as possible, apply a brick texture and brick displacement map to match the bricks. In 3DL, this would almost be mandatory. These types of habits are very hard to break. Iray would prefer that each brick be made of its own facets, separated by facets for the mortar. If the bricks could be instances, so much the better.
Kendall
True, at least to a point. I agree with you on the environments and even clothing items, but until DAZ provides a separate high-polygon base figure specifically for use with Iray, character vendors will have to make at least one low-poly figure and sell it before they can get the tools to be able to improve how their characters look in Iray to any significant degree.
...that is still a lot of time-consuming and tedious piecemeal work, particularly in a large scene like I create. Also, again, I am looking to render large format, so turning off bump and displacement could have an undesirable impact on the final rendered scene. Iray already interprets those last two values poorly unless they are really cranked up.
Pay, Think, or Wait. I have to do one of these. I will not get fast high quality renders without effort for free.
This is quite true
If one prefers to do dense, content-heavy stills, one should at least consider learning some "cheats" like rendering elements separately and compositing them in post, or at least implementing some of the polycount optimizations suggested by Kendall Sears.
Or invest in some $$uber hardware$$.
One needs to be willing to abandon long-held practices and learn different ones (hence the thinking).
No easy solutions no matter what course you take.
Some studios may use CPU for some aspects and GPU for others. They may not want a proprietary engine which locks them into a proprietary hardware requirement, or their findings may show that a GPU rendering engine for specific functionality is not as exact as a CPU-based methodology. 3D studios are popping up all the time, with results ranging from utterly astounding for a small group of people to 'do they know what a bump map is?', but a variety of engines are being used. Pixar uses its own engine, developed nearly 20 years ago, which allows specific features that may not be available in proprietary engines; it may also be specific to their modeling pipeline. If you're coming at this from a studio-centric mindset of "Iray works for me, it should work for everyone," you may not be considering the workflow of something far more involved and demanding.
You're not Kendall Sears..... oh... there's Kendall Sears..... I'm all ears..... :)
Sorry, just slipped into the silly season.....
I remember watching a small "extra" on the Battlestar Galactica Blu-rays about the special effects and the local render farm they had built on the premises. It was very impressive, and at a time when I didn't even know about DS or Poser.
I don't know what digital tools that series used, but if anyone does know, please chime in. I am still amazed by the Galactica digital model used in that show. Did they ever make it available for public use?
I suspect hobbyists under-use rental render farms. With DS, there isn't a practical way (that I know of) to get your files to the farm, but if you're working with a supported application like the Autodesk packages, C4D, or Modo, pushing your renders out to a farm is probably cheaper than buying the hardware to do them nearly as quickly…and you put the burden of keeping current on the farm owners, as well as the power use and noise.
I spun up a Vue RenderCow instance on Amazon Web Services to see if it could be done. It can, and it worked reasonably well. You can fire up an 8 CPU Windows box for about 77 cents an hour.
If the alternative is purchasing a new system at $2400, that's about 3100 hours of render time, without the heat and power issues.
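The break-even arithmetic above generalizes easily (these are the figures quoted in the post, assuming the rental rate stays flat and ignoring electricity on the owned box):

```python
# Hours of rented render time you could buy for the price of new hardware.
def breakeven_hours(hardware_cost, hourly_rate):
    return hardware_cost / hourly_rate

# $2400 system vs. an 8-CPU cloud box at $0.77/hour:
print(f"{breakeven_hours(2400, 0.77):.0f} hours")  # ~3117 hours
```

Unless you render more than that before the hardware is obsolete, renting comes out ahead on cost alone.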
DS supports RIB output for Renderman export and standalone rendering. Can't get much more standard than that.
Kendall
From what Kendall has said, I think it's more a case that Iray is treating them like the poor substitute for real detail in the mesh that they are. I would love to be able to take Nadiya's mesh to the next level and give her more surface detail, but even at SubD 3 she still doesn't look quite right.
I don't agree with the bolded part at all. I was making reference to maps being bad in the context of Iray only. Iray's use of maps isn't well implemented, but when implemented well maps are an excellent way to introduce "hard to model" details into models. There are some types of features that are just too freaking hard to get right using geometry, displacement/bump/normal maps allow these to be added without making a huge mess of the mesh. Iray is one of the outliers that actually excels at working with geometry. Most engines really, really struggle when poly counts get high. nVidia happened to get it done well.
I believe in using the right tool for the job, as well as using the best features of the tool to get the most out of it you can. 3DL excels at using maps, and so it is perfectly appropriate to use them there. Whether one needs 4Kx4K maps for EVERY SURFACE is a different argument. Iray excels at geometry, but that doesn't mean one should add geometry where it isn't necessary.
Kendall
The nice thing about Studio... you do have choices. By including and supporting both Iray and 3Delight (at least at the application level), there is a lot more that can be done than with just one of them. And that doesn't even count 3rd-party exporters... for Luxrender (2x), Blender Cycles (a free script by Casual), and Octane.
I've got Octane and Thea, but lately I've been going the other direction and aiming more towards CPU rendering instead.
The reason? Because of Carrara's ability to network render natively, which I had never taken advantage of before. But I bought a new desktop, and then thought 'hey, I've got these 2 laptops that are going unused' and installed a render node on each, and instantly I went from 8 render cores to 24 render cores.
Then I discovered eBay, something I'd never really looked into before, and realized that you can get 1st, 2nd, and 3rd generation i7 computers for next to nothing! I guess corporations are constantly upgrading their workstations to the latest/greatest, then turn around and sell the tech from yesteryear for peanuts, and the price is driven down even lower by the fact they all dump their stuff on the market at the same time.
I remember the bad old days when buying any computer with an i7 cost an arm and a leg. Not so anymore! So I thought, 'why not build my own little render farm on the cheap?' I picked up a 3rd generation i7 workstation for what I thought at the time was a very low cost of $250, and when it arrived I now was up to 32 render cores. Whoo-hoo! I was addicted!
Then someone on the Carrara forum wisely pointed out I was thinking too small, and I ought to set my sights on a dual-CPU Xeon server. I was absolutely floored that I could pick up an HP Z600 server with 2 hyper-threaded Xeons for just over $200 (!)
To put that in perspective, this is a server that 4 years ago would have cost me over $5000 to buy... and I got it for just over $200!
Why would something that was nearly twice as powerful end up costing me even less than an i7 machine? I think it's because the consumer market doesn't really think 'server' when they go out to find a computer, and the professional market is too focused on buying the newest and most expensive tech, but this kind of thing is absolutely perfect for those of us with a rendering hobby. (By the way, I'm not a tech person, so I was a little afraid of getting a 'server' as I thought it might take labyrinthine tech skills to operate. Turns out, it's no different than any other computer. I installed Windows 7 Pro on it and was good to go.)
Now I have 48 cores of rendering power at any time I want (it really flies). Perfect for high-quality, very complex scenes and animations.
But I actually bought 'stupidly', and while I'm not unhappy with my purchases, I really could have paid even less. See, I had never done an eBay auction before and didn't know how. I just went to the marketplace, found all the 'buy it now' entries, and picked the ones that looked like the best deals and bought right away. I've since learned that, by paying just a little bit of attention and being willing to bid, a buyer can do even better than I did. And I also think I've become addicted to watching the eBay auctions tick down...
I bought a 3rd laptop (2nd gen i7) for $150. Then I was mad at myself because, while I got the 2nd best deal of the day on any i7 laptop sold that day, there was another auction a few hours later for a similar-spec i7 laptop that went for a freakin' $79! And I don't need a 3rd laptop, but I told myself I wanted a backup to my other two laptops (one of which I use for work, and both of which I paid a ton of money for, so I wanted a laptop that could fail at any moment and it wouldn't pain me in the least).
Then I bought an hp 8200 i7 workstation for $140, just because I couldn't believe the price (I really thought in the last few seconds it would get bid up much higher and I would lose it). This is very comparable to the 1st machine I bought for $250, and most models of this type go for about that much.
I really shouldn't have made either of those purchases, but the prices were so low that I just couldn't resist, and the thought of adding all those render cores... Yes, I begin to fear I'm addicted. I made a promise to myself: no more!
Then yesterday a Lenovo D20 dual-CPU Xeon server came up and no one was bidding on it. I was sure it was an exercise in futility, but I put in a bid - and got it for $124! That's 2 more Xeons and 16 more render cores. Yes, I realize I have a problem and need to start going to meetings, but man, what a steal; I've never seen any Lenovo D20 server go for less than $300, so even if I come to my senses I could probably re-sell it at a substantial profit.
So the last 3 that I bought haven't come yet, but once they do I'll be up to 80 render cores(!) Contrast that with the priciest Xeon cpu on the market right now, which has 44 threads and costs $4100.00, and I can't help but think it's a very good time to be a render enthusiast
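For comparison's sake, the dollars-per-thread arithmetic (the $4100/44-thread Xeon figure is quoted above; the ~$900 fleet total is a rough sum of the purchase prices mentioned in these posts, so treat it as illustrative):

```python
# Compare cost per render thread: used eBay fleet vs. one top-end CPU.
def cost_per_thread(price, threads):
    return price / threads

# Roughly $900 total for ~80 threads of second-hand hardware:
print(f"eBay fleet: ${cost_per_thread(900, 80):.2f}/thread")   # $11.25
# Priciest current Xeon quoted: $4100 for 44 threads:
print(f"New Xeon:   ${cost_per_thread(4100, 44):.2f}/thread")  # ~$93.18
```

Even allowing for slower clocks and higher power draw on the older machines, that is nearly an order of magnitude difference in price per thread.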
...the downside is so much new content is being released with Iray shaders only, converting to other programmes/engines can be a real pain.
...I was going to bring up the subject of third party render farms the other night but it got late and I was tired.
I have been discussing this with a friend of mine and yes, in an ideal situation, it can save a lot of zlotys compared to buying/building a beefy system. The one rub: Daz Studio does not yet have a network render interface. Carrara does have one; however, I have little issue with Carrara render times compared to Iray CPU mode. A comparable scene that I can render in Carrara in, say, 15 min could easily take 5 - 6 hours in Iray.
Now I wouldn't waste processing time and money on numerous test renders, but for say proofing and the final finished image, yes it could be very economical, particularly for the quality level I am needing.
Apparently Daz has (or maybe had) something in the works as under the Advanced Render settings in 4.8 there is a tab labelled "Cloud (Beta)" which has server designation and login along with several parameter sliders.
I've said it before...
I'd prefer if all content came with just maps, no presets. Then which renderer is used wouldn't matter...the end user would have to set up the materials for the one they preferred. Frequently, the presets are either not optimal or even contain flat out wrong settings (color maps in control slots, greyscale maps in things like glossy color locations and so on).
I was surprised to see this, as on my current little render farm I'm running win7pro, win8pro, and win10pro on my various PCs, but I'm not encountering any cpu core limit yet (currently rendering with 48 cores, will add another 32 cores soon). All I did was set all my PCs to the same homegroup network so they could talk to each other, and they seem to all render together in Carrara just fine. Did I hit some lucky loophole here, or am I misunderstanding what the limitation you're describing is?
Whoa, that's fascinating! I had no idea; I'm going to have to concentrate on Xeons in the future, then! It does seem to square with the fact that the render buckets on my Xeon machine seem to render much faster than I would have expected for a 2.4 GHz processor. I'm still looking at upgrading to 2 hexacore Xeons at above 3.4 GHz in the near future (it boggles my mind that these CPUs were originally over $1600 a pop and now can be had for just over a hundred dollars), but I was surprisingly happy to see how fast even the little 2.4 GHz machine renders, and I think your technical explanation goes a long way towards explaining why.
I kind of agree with this, since that's pretty much the way I view all products (to render in Carrara or Octane or Thea I'm going to have to tune the shaders anyway, so I'm used to this). But in practice I think that's probably asking a lot of a new user just coming into Studio or Poser for the first time, and I can certainly see why vendors sell products with pre-made shaders (even though personally I think people would be much happier with their render results if they took the time to understand material/texture/shader settings and tweaked them themselves).
That is one very big advantage of doing it yourself.
But, with all the maps in the 'normal' locations, presets can still be made and shared...they just don't have to be considered 'the only way'. Since the standard ways of saving out the presets, in recent versions of Studio, don't require any 'protected' data to be shared, it's not that hard to do.
...indeed, there would be a lot less people using this software were that the case (self included).
Jonstark - I congratulate you on your render farm. I've been using network rendering with Bryce since v5, but only on 5 machines. Network rendering has the advantage that your machines can be anywhere on your network, in any room. This distributes the mains load, though I'm not sure it is really more efficient in overall power use. Theoretically, Bryce can be made to network render over the public Internet, but it is very tricky to set up (I did it once) because it uses ephemeral TCP and UDP ports. Nevertheless, I prefer the option to network render over having everything in one computer with several graphics cards and huge power supplies. If that fails, you're lost. If one of the computers in the network fails, the others are still there.
Thanks for all the super informative posts, especially this one.
I hope all the crucial bits are heard loud and clear.
Network distributed rendering is not the same as multi-processor. I am talking about multiple physical processors on the same motherboard. Windows requires a server license for any machine with more than 2 physical processors.
Kendall