Comments
It's pretty clear that, logically, Iray rendered on the CPU must follow a different algorithm than Iray rendered on an NVIDIA video card, and that those different algorithms don't yield identical images, although I'm surely not the expert to point anyone to exactly where they differ.
Apologies for the miscue ... I uploaded the unblurred version by mistake.
RICHARD:
Thanks much for the link.
Now that I know "where the line is", I should not have any problems staying well shy of it ... which I'm happy to do.
Regards ...
prixat:
RESIZE
I assume that you're talking about the package/plug-in offered by OnOne ???
If so, I know that tool and -- you're right -- it's great! Worth every penny, if you have a need for it.
As far as I know, it won't eliminate any nasty artifacts in your original images, but it sure can go a long way in not adding any when you upsize the original image ... and it does a superior job of retaining at least the "feeling" of more detail when downsizing an image (compared to most "built in" functions I see in other packages, including Photoshop -- whose tools ain't bad at all).
If the story I heard is correct ... OnOne acquired the underlying technology from the folks behind Genuine Fractals, to which I was exposed in the early 1990s. It was a commercialization of an imaging technology developed originally by "people that don't exist, for purposes of doing things that never happened". (I can't say anything further about that, as I have a vague memory of signing some non-existent document which said I never knew anything more and, if I did, I'd permanently forgotten all of it.) Fractal mathematics ... and chaos theory ... now there's an invitation to brain cramps, albeit a very interesting one :-)
HORSEPOWER
You're obviously correct that the fastest way to a given level of image quality is a "fast" computer configured specifically for 3D rendering ... and that spending money on such a beast is likely more prudent in the long run than trying to tweak-and-optimize a "low-powered" machine.
What we're trying to gauge, however -- and for commercial reasons, we're trying to do it based on "hard", quantitative analysis -- is whether and how our effective throughput capacity using even a "monster" machine can be maximized (and still maintain the necessary high level of image quality). Since the algorithms used with Iray (or most any other PBR render engine) to calculate the appearance of pixels in a rendered image remain the same essentially regardless of hardware, it is feasible -- though often frustrating -- to use a low-powered (but readily available) computer (even a CPU-bound laptop with only 8 cores) for such testing. And in our case, I can't justify taking any of the limited few GPU-oozing computers "offline" to do the testing we discuss in this thread.
So fortunately for us ... If allowed to run to the same level of Convergence Ratio with the same other settings, an image should look exactly the same without regard to the machine used to render it (unless maybe your eyes are so bleary from watching the low-powered render progress on a low-powered machine that you can't see straight any more ... :-)
Thanks for sharing your thoughts ...
TOBOR: Excellent points. Special thanks for the link to the Iray documentation. (I'd found a link to migenius some time ago, but at the time, it led to a "broken" table of contents that didn't link to anything else.) Now I can rest easy for the weekend ... knowing that I have some good reading material lined up ... :-)
Not sure that I can agree with you on this one, nonesuch00 ... at least not in practical terms. Here's why ...
While the way the render processing is split into chunks obviously must differ between higher-powered/GPU and lower-powered/CPU computers, the algorithmic "rules" applied to determine the appearance of any given pixel in any given image from any given scene must remain essentially the same, regardless of the machine configuration used. If not, the resulting images you'd get from the same scene could (and often would) look quite noticeably different depending on the machine used (and even the assignment/allocation of devices and cores used within the same computer over time).
My guess is that such a situation would prove both artistically and commercially unacceptable. So my assumption is that the image-processing rules/algorithms of Iray for determination of actual pixel characteristics in the final image do remain the same, regardless of machine.
Of course, I could be wrong ...
@will.barger.arts "Of course, I could be wrong ..."
You're not the one that is wrong. Someone else doesn't understand what CUDA cores are. They are parallel processors that allow a developer to send C, C++, or Fortran code directly to the GPU without using assembly code. The difference is that the instructions are processed in parallel across thousands of cores (however many CUDA cores your GPU has). The point is, it's the same code as goes to the CPU. Thus NVIDIA's claim that CPU and GPU rendering produce the same results, provided the same settings are used. CUDA = Compute Unified Device Architecture.
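To make that "same code, different scheduling" point concrete, here is a minimal, purely illustrative Python sketch (it is not CUDA, and certainly not anything from Iray): the same deterministic per-pixel rule is evaluated once on a single process and once spread across worker processes, and the two results come out identical. Only the way the work is divided changes, not the math.

```python
# Illustrative sketch only (not Iray, not CUDA): the same per-pixel rule is run
# serially and then split across worker processes; the results are identical.
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT = 64, 64

def shade_pixel(x, y):
    # Stand-in for a deterministic per-pixel rule; a real renderer is far more
    # involved, but the rule itself does not change with the hardware.
    return ((x * 31 + y * 17) % 256) / 255.0

def shade_row(y):
    return [shade_pixel(x, y) for x in range(WIDTH)]

if __name__ == "__main__":
    serial = [shade_row(y) for y in range(HEIGHT)]        # one "core"
    with ProcessPoolExecutor() as pool:                   # many "cores"
        parallel = list(pool.map(shade_row, range(HEIGHT)))
    print("identical:", serial == parallel)               # prints: identical: True
```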
In the age of the Internet, one rarely has to guess. One just needs to be open to the possibility that one doesn't know as much as one thinks.
For a more standardized scene to work with, perhaps sickleyield's benchmark scene would suffice? The scene was created with items that are included with Daz's install, so everybody has access to them. You have a figure and several spheres that use different settings.
https://www.daz3d.com/forums/discussion/53771/iray-starter-scene-post-your-benchmarks/p1
Now this bench is pretty old. It could be updated with a new Genesis 8 figure, since the Genesis 2 figure in it is not truly optimized for Iray. But this scene has been benched hundreds of times across that thread, with a treasure trove of data on Iray performance on different machine setups. Some people played with Iray render settings at times, too.
You could use these tests as a baseline for your ideas here. Run the scene at default, then run the test at double the image size at varying convergences, run it at different quality settings, and so on. This scene is one that should benefit from higher quality settings, because it still has some grain at default settings.
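If it helps anyone organize that kind of sweep, here is a small, hypothetical Python helper that just enumerates the combinations suggested above so render time and final convergence can be written down next to each row. The resolutions are placeholders, not the benchmark scene's actual defaults.

```python
# Hypothetical test-matrix helper: enumerate the setting combinations to try,
# then record render time and convergence by hand for each row.
from itertools import product

resolutions = [(800, 1040), (1600, 2080)]   # "default" and double size (assumed values)
convergences = [0.90, 0.95, 1.00]           # Rendering Converged Ratio targets
qualities = [1.0, 2.0, 4.0]                 # Rendering Quality values to try

print(f"{'resolution':>12} {'convergence':>12} {'quality':>8}")
for (w, h), conv, q in product(resolutions, convergences, qualities):
    res = f"{w}x{h}"
    print(f"{res:>12} {conv:>12.2f} {q:>8.1f}")
```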
Thanks for the confirmation, fastbike1.
What "scares" me is that I actually understood the tech-geek points you made -- well, enough to be dangerous, anyway ... :-)
Hope this can help some others out there who may still be learning (or maybe unlearning some speculation "they read somewhere" that might not have been all that well informed).
Particularly in the age of the Web, I generally operate under the assumption that "at least 80% of what comes to you on a screen is at least 30% BS". :-)
So I do what I can to stay in the other 20%/70%. Sometimes, I even succeed.
PS -- If you think those "BS percentages" are over-stated, I refer you to our most recent federal elections in the U.S. (regardless of where on the spectrum your political views might fall). :-)
Thanks. And, yeah, sickleyield tends to put out good stuff, and plenty of it.
I'll look into your suggestions. But for now, we'll probably stick with our own tests, since we slip some specific-to-us stuff into them at times.
More generally, though, I suspect that "everybody" could benefit from having a few well-constructed, de facto standard sets of scenes, lighting presets, render presets, etc. ... for calibrating all kinds of stuff.
Heck, even in the Middle Ages, most villages had a one-yard iron bar mounted into the stone of a fountain or something in key marketplaces and such. That way, it was easy to know whether, say, a fabric merchant was giving the "exact" length of cloth he was charging you for.
But those bars often were "official" and used broadly only because they were forged by the authority of a king. Good luck trying to get the Internet crowd (including me) to name a king or queen who gets to make such stuff "official". :-) :-) :-)
Maybe the updated way to approach this is with the ubiquitous teapot render. That would remove the artistic component while emphasizing raw computations. There are many ways to make a D|S scene more efficient, so for actual renders it's better to tackle techniques to streamline ray sampling and iterations.
I agree, Tobor.
After wading through some Iray Programmer documentation, I started noodling on ways to streamline "realistic" lighting that minimize the number of light sources and bounces. Turns out -- in at least the first few tests I ran -- it made a huge difference in Render Time, with some pretty satisfying visual results in less than 300 iterations for a 1600x2400 "Summer Sun" scene. (See attached images.) Without the hair, they seem to "cook" enough in roughly half the time. Having little in the scene to get "bounced from" obviously cuts way down on calculations, too.
[ For those who care: This is G8F with some manual posing and morphing using only the "dials" -- and clothing objects -- that "come with". Lighting comes from a custom Environment image and a single Spotlight. (The sharp-shadows version of the lady in blue uses a Distant Light instead of the Spotlight.)
The white background is set as a solid color in the Environment tab (to comply with Google Shopping and Amazon image requirements). I've found that reliably getting the main subjects "properly exposed" AND looking like they "belong" on the white background -- especially across different poses, garment colors/brightnesses, skin tones, etc. -- is more difficult than it first seems. ]
It's an interesting topic. As Tobor mentioned, I think I also read the NVIDIA blog post about it a while back.
I may give it a shot again. I didn't before, as I render everything at 4K, and doubling that to 8K leaves me with huge ~50 MB images, IIRC. I think your earlier renders were too low res to really see the difference. I did a quick test at 1920x1080, and rendering higher then downsampling gave me a better image faster, in my opinion.
I wish I had kept better track. The first was rendered at 4K, then downsized with Photoshop and saved as a JPG at max quality. Render Quality was set at 1, and Convergence set to 80% or 90%. It finished up after about 2200 samples, taking 20-25 minutes. The second was done at 1920x1080, Render Quality set at 4, Convergence set to 99%, max samples at 5000. It finished up after hitting the max samples of 5000, at around 45 minutes.
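For anyone who would rather script the downsizing step than do it in Photoshop, here is a minimal Pillow sketch. The file names and target size are placeholders, and quality=95 is Pillow's usual "near-max" JPEG setting, not an exact match for Photoshop's maximum.

```python
# Minimal sketch of the "render big, then downsample" step using Pillow
# (file names are placeholders; the poster above used Photoshop instead).
from PIL import Image

src = Image.open("render_4k.png")                      # the full-size render
dst = src.resize((src.width // 2, src.height // 2),    # halve each dimension
                 resample=Image.LANCZOS)               # high-quality filter
dst.save("render_1080p.jpg", quality=95)               # near-max JPEG quality
```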
ETA: The forum software really squashed the jpg, compressing it a lot more. The difference is not as noticeable, since compression artifacts are clearly visible, but it's there.
Wasn't really doing a time v. quality test on the attached image, but along the way ...
I visually monitored the progression of this one pretty closely. Past about 1200 Iterations, I couldn't spot many (if any) changes, even though I let it run to 2500 Iterations. Perhaps this was because I put the Render Quality setting at 2.8, but I don't really know. The Convergence Ratio reached was about 48%, and it was only creeping up 1/2% or so every 25 or so Iterations at that point, so that's why I bailed out.
scott762_948aec318a and others are right, of course, about it being difficult with lower-resolution images to spot many of the finer enhancements that come with longer (more Iterations) renders. The attached image is 1600x2400.
But that's one of the things we're trying to "get a bead on" with hard data and clear examples. In our market research, we're finding that many people want (or just assume they want/need) very "high resolution" images -- which obviously takes us more time and money, but which many don't want to wait out nor pay for. Some Art Director "types", in particular, seem to demand images that look "flawless" to them -- when viewed on their 32" Retina displays, etc. -- without regard to the FACT that those "master" images will be down-sampled and compressed so much before any Consumer sees them (especially for time-responsive mobile Users) that big steaming chunks of all that resolution/detail they demand are inherently lost.
Oh well ... I guess that -- until the education process "kicks in" -- we'll have to give 'em what they want instead of what they need (as long as they're willing to pay for it :-)
PS -- Yeah, we know. The fabric textures (and resulting colors) in the attached image don't match from the tank to the pants. That's a whole 'nother test ...
@will.barger.arts ""realistic" lighting that minimize the number of light sources and bounces"
There is a setting to limit the number of bounces. It's Max Path Length in Render Settings > Optimization. The default is -1, which allows infinite bounces. A setting between 7 and 11 has been found to deliver sufficient image quality. I use 9.
Thanks, fastbike.
I already understood the concept of Maximum Path Length, and it seems semi-obvious (when you stop to think about it) that -- especially for certain scenes -- it could be a huge time-saver.
Having the good reference range of settings you provided should be a great help in testing and developing a "feel" for what tweaks should be good for any given scene.
@will.barger.arts "Having the good reference range of settings "
That range was suggested in a "long ago" thread about Iray settings. There were test shots at each incremental setting. I repeated the test for myself and had trouble distinguishing results between 7 and 11. I only used one scene that had a simple lighting rig with a figure and a few reflective items. At 6 and below the scene looked kind of "unfinished" even though it had sufficient iterations and convergence. I stopped at 11 because I really couldn't find any differences from the 9 setting and decided I was past the point of diminishing returns.