IMAGE QUALITY - Render Time v. Iterations v. Convergence

OPENING POST
Focusing & Framing This Thread

 

*** NOTE:  This thread addresses Iray-related information specifically and exclusively, even though some of it might also be very useful when using 3Delight or some other Render Engine.  Everyone is quite welcome to read this thread, but please restrict your Comments to Iray (or engine-agnostic/neutral) topics only.  THANKS  ***

     I've recently been reading about and "playing with" different settings in the Render Settings | Progressive Rendering tab, with a primary focus on how different combinations of those settings might help achieve the "highest quality" image output with the least practical Render Time.

     In particular, I'm doing some analyses of how the increases in Render Time that are inherent when an image's Dimensions are increased might be offset by adjusting Convergence limits and/or Iteration limits, to get the best image quality possible, as quickly as possible.  In part, this is a study of how best to apply the often-suggested technique that some folks call "Manual Oversampling", i.e.,

  1. Doubling (or more) the Size of the image over what you need for its final form;
  2. Rendering the image at that larger Size;
  3. Making any fixes/enhancements required to the larger image once it's rendered; and then
  4. Resampling it back down to its intended final size. 

     The idea of using the Manual Oversampling approach seems (quite reasonably) to be that problems appearing in the bigger original render -- like grain, fireflies, noise, etc. -- tend to get "washed away" automatically when that bigger image is resized downward.  And in any case, you start off with "more information" in the larger, higher-resolution image -- so any reduced-size final image inherently tends to show greater "detail" and "realism" than if you simply rendered directly to the targeted final Size.
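     For anyone who wants to automate Step 4, here's a minimal sketch of the resample-down step in Python using the Pillow imaging library.  (The filenames and the 2x factor are placeholders for illustration only; the key point is to use a high-quality resampling filter such as Lanczos, whose averaging across neighboring pixels is what does the "washing away".)

        from PIL import Image  # Pillow: pip install Pillow

        # Hypothetical filenames -- substitute your own render output.
        big = Image.open("render_3200x3200.png")

        # Resample back down to half size (undoing a 2x Manual Oversample).
        small = big.resize((big.width // 2, big.height // 2), Image.LANCZOS)
        small.save("final_1600x1600.png")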

     Perhaps obviously, the "price" you pay for using this technique is that -- when you increase the Size/Resolution of any image -- you have a LOT more pixels for which the Render Engine has to calculate the proper values.  (Pixel count grows geometrically: doubling each Dimension quadruples the pixels.)  Consequently, total Render Time can get way out of hand pretty quickly.

     However, the perceived Quality of a render does improve some for every Iteration that the Render Engine makes (i.e., with each single processing pass through the image).  But not every Iteration gives you the same degree of improvement.  In particular, you tend to get more "bang for your buck" from the earlier Iterations, and much less from the very latest Iterations as the render approaches its theoretical "perfect/finished" state.  So a couple of important, related questions arise:

  • How many Iterations do you have to complete to get a level of image Quality that you find satisfying?  ... and ...
  • When setting the "termination" limits on the render, how much Convergence and Render Time do you have to allow in order for that many Iterations to be complete?

     Well, it turns out that the answers to those questions are:  It depends.   (Sorry, I'm fresh out of magic bullets.  This takes work.)

     Your subjective judgement of what looks "good enough" is a key factor.  But generally, the more complex the scene is, the more Iterations you'll likely need to have completed before you're happy with the image -- and more Iterations means more Render Time.  
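     To make that "diminishing returns" shape concrete, here's a toy sketch in Python.  (The exponential formula and the scene "constant" of 150 are assumptions made purely for illustration -- this is NOT how Iray actually measures convergence -- but the pattern of big early gains and tiny late ones is the point.)

        import math

        # Toy model: quality approaches 100% exponentially as Iterations accumulate.
        for n in (50, 100, 200, 400, 800):
            quality = 1 - math.exp(-n / 150)  # 150 = arbitrary scene "constant"
            print(f"{n:4d} iterations -> {quality:.1%} converged")

     In this toy model, the first 200 Iterations buy you roughly 74% convergence, while the next 600 add only about 26 points more -- exactly the kind of tradeoff those two questions are probing.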

     For purposes of this discussion, the level of "complexity" is driven by factors such as:

  • GEOMETRY
    • Number of Objects in the scene
    • Number of Polygons in the object(s)
    • SubDivision level(s) applied
  • SURFACES
    • Extent to which Transparency is used
    • Extent to which Sub-Surface Scattering is used
    • Extent to which Bump and (especially) Displacement are used
    • Extent to which surface(s) are Refractive 
  • LIGHTING
    • Number of Light Sources in the scene
    • Number of polygons in any Emissive Light sources used
    • Extent to which surfaces are Reflective
  • RENDER / IMAGE FILTER(s) Applied
  • OTHER FACTORS

     Unfortunately, there are so many such factors, variations within factors, and potential interactions among them ... that it is a practical impossibility to predict with confidence what the optimal mix of Size-Convergence-Iterations-Time settings would be, in an exact way, for any specific scene.

INITIAL GOALS 

     Based upon our initial poking around, however, our present hypothesis is that it is quite possible to develop some general "rules of thumb" which can -- at the least:

  • reliably give you some useful insight as to how likely it is that significant benefits could be achieved for a given scene by optimizing Convergence-Iteration-Time settings (regardless of whether Manual Oversampling is used); and, if so,
  • provide some useful "ballpark feel" for the values that might prove to be the most effective settings for that particular scene.  (If nothing else, having such a ready "best guess" at what setting ultimately would be optimal could dramatically reduce the number of "trial and error" efforts to find the "right" settings, and thereby make it more practical to realize the potential benefits of these techniques with a much larger portion of the scenes you render.)

"Going in", this is the information -- and maybe a tool or two -- that we hope to discover or develop.  

     We suspect that no reliable predictive ability will be found that works widely for one-of-a-kind scenes.  But we also suspect that many artists will find (as we do) that much of their work falls into a relatively few "scene types" for such purposes, and that quite usable predictive guidelines and rule-of-thumb "formulas" can likely be developed for each type.

KEY POTENTIAL PAYOFFS 

     Based upon our reading and limited-so-far testing, we suspect that such techniques can both:  

  • maintain (if not increase) the typical Quality of the images an artist produces; and
  • simultaneously increase the throughput capacity of the computer systems (and overall workflows) used, regardless of the "horsepower" of those systems.
    Our (very preliminary) best guess at this point is that throughput increases of 30% to 50% or more may be readily available through such means. 

THANKS, and LOOKING FORWARD ...

... to hearing your thoughts.

RUSSELL
For Will Barger Arts


Comments

  • RENDER PERFORMANCE  v.  Convergence Limits & Iteration Limits
    SOME INITIAL TEST RESULTS

    Warning ... the images shown here are dangerously boring ... but as such, they allow (force?) a more technical comparison of overall quality among those images, i.e., free of much "aesthetic" or other bias.  Ultimately, of course, subjective and qualitative assessments of what is "best" matter far more than technicalities.  This is "art", after all.  But staying "coldly technical" and focused on "hard data" -- at least in the early stages -- seems the most productive route to figuring out the settings required in Daz Studio to get those ultimate "dreamy/cool" images.

    Also ... what you see below is still quite preliminary and incomplete, even for this initial phase of the overall analyses we hope to complete.  (Time is short here right now, but the workload keeps getting longer ... :-)  ... Sound familiar?)

    We'll be fleshing this post out more later, as time permits ... including details on the settings and materials used (and why), and what the various terms and charts "mean", etc.

    Meanwhile, feel free to Comment with questions, concerns, etc., on it, and/or the overall OP above.

    Later ...
    RUSSELL
    For Will Barger Arts

    WBA_RenderPerf_Test-A1_ComposA_01.jpg
    1600 x 1600 - 2M
    WBA_RenderPerf_Test-A1_CHT_COMPOS_Time-Conv-Iter_Cnv95_01.jpg
    1600 x 1200 - 807K
  • pds Posts: 593

    I will be following this thread with great interest...

  • nonesuch00 Posts: 18,076

    For me personally, most images converge to 99% in CPU rendering at almost 2000 iterations, or in many cases almost 3 days of rendering, so what I did was turn off maximum time, convergence, and render quality, and instead just set the number of samples to 2000.

  • prixat Posts: 1,588

    The idea with rendering larger is that the required detail emerges from a large, but quick, low quality render when the image is reduced. Preferably the large render should be quicker than the actual size/high quality render.

    Can you test with considerably reduced 'convergence' or 'samples' or 'time limit' to keep the render times at least comparable with the actual size/high quality render?

  • fastbike1 Posts: 4,077
    edited August 2017

    Nevermind. Too cynical plus I don't have a dog in this fight.

  • will.barger.arts Posts: 60
    edited August 2017

     

    Nevermind. Too cynical plus I don't have a dog in this fight.

    Not to start a spat or anything, but I'm sincerely curious to know ...
    If that's the case, why does somebody even bother to post?

  • will.barger.arts Posts: 60
    edited August 2017

    For me personally, most images converge to 99% in CPU rendering at almost 2000 iterations, or in many cases almost 3 days of rendering, so what I did was turn off maximum time, convergence, and render quality, and instead just set the number of samples to 2000.

    Hmmmm ...
    It was only a textured cube, of course, but even the 3200x3200 image only took about 300 Iterations to hit 95% Convergence.  So 2,000 as a standard sounds a bit high ... unless, of course, you mostly construct some fairly complex scenes.  But I suspect you usually get some pretty "clean" renders that way.

    By comparison, the attached work-in-progress image only required about 180 Iterations to get to the quality you see here.  It's not a hugely complex scene, but a lot more than the cube.  It was rendered directly (not oversampled) at 1200x1800.  Because this was a "side project", rendering was done on an 8-core, CPU-bound laptop that's around six years old.  It hit a Render Time limit at two hours.  Sorry, I don't have the level of Convergence reached readily at hand.

     

    WBA_SERB_OLOMA_Half-FL_Dragon_01.jpg
    1200 x 1800 - 3M
  • Can you test with considerably reduced 'convergence' or 'samples' or 'time limit' to keep the render times at least comparable with the actual size/high quality render?

    "Great minds think alike ..."

    I've already rendered the cube at 800 x 800 and 1600 x 1600 at each of the following Convergence limits:  95%   75%   50%   and   25%.

    Unfortunately, it will be a while before I can compile all of the data from the Render Log, see what it has to say, and then put the resulting images and any charts, etc., together to post here.

    So "stay tuned" ...

    PS -- "The plan" is to get such hard data first for at least this one "tecnical" scene with the cubes, and then -- using what's learned there -- put together a set of "real world" scenes that hopefully will test as rigorously as possible with the smallest number of scenes/renders required to learn what needs to be learned.
    At some point, I'll probably publish the "methodology" (with a link from this Thread ... or ... wherever) -- maybe along with some standardized Scenes, and presets for Lighting, etc. That might allow different folks to use a "reference standard" more-or-less to make the results they get with their own work more readily comparable.  That way, hopefully everybody can leverage everybody else's experience more reliably.   Or not ... :-) 

  • fastbike1 Posts: 4,077

     Because I changed my mind after posting and deleted my comment. It would have likely started a spat or got a moderator wound up, probably both.

    Plus the forum doesn't allow you to delete a post once made.

     

     

    Nevermind. Too cynical plus I don't have a dog in this fight.

    Not to start a spat or anything, but I'm sincerely curious to know ...
    If that's the case, why does somebody even bother to post?

     

  • nonesuch00 Posts: 18,076
    edited August 2017

     

    Nevermind. Too cynical plus I don't have a dog in this fight.

    Not to start a spat or anything, but I'm sincerely curious to know ...
    If that's the case, why does somebody even bother to post?

    I didn't say that, fastbike1 said that.

    I don't consider what I do with rendering settings correct or incorrect, and I am not arguing or fighting: I just stated the technical limitations of my PC doing CPU renders according to the DAZ Studio UI.  For me, the 2000 iterations/samples I've seen in my 4K sized renders get to 99% convergence in the maximum limited default seconds (equal to 3 days) for over 90% of my renders using Iray & the Genesis 3 version of Iray skin materials.

    However, since the newest version of DAZ Studio and the update of the Genesis 8 skin material settings to be more realistic and complex, that is not quite the case anymore, so I just turned off the render timer and the render quality convergence check, and set the required samples to 2000.

  • nonesuch00 Posts: 18,076

    For me personally, most images converge to 99% in CPU rendering at almost 2000 iterations, or in many cases almost 3 days of rendering, so what I did was turn off maximum time, convergence, and render quality, and instead just set the number of samples to 2000.

    Hmmmm ...
    It was only a textured cube, of course, but even the 3200x3200 image only took about 300 Iterations to hit 95% Convergence.  So 2,000 as a standard sounds a bit high ... unless, of course, you mostly construct some fairly complex scenes.  But I suspect you usually get some pretty "clean" renders that way.

    By comparison, the attached work-in-progress image only required about 180 Iterations to get to the quality you see here.  It's not a hugely complex scene, but a lot more than the cube.  It was rendered directly (not oversampled) at 1200x1800.  Because this was a "side project", rendering was done on an 8-core, CPU-bound laptop that's around six years old.  It hit a Render Time limit at two hours.  Sorry, I don't have the level of Convergence reached readily at hand.

     

    Well, I've seen others in the forum state that they set the number of samples to 15000, so if you think 2000 is a lot, there you go.  However, I also know they have nVidia video cards & I do not.

    One thing I did try differently on my latest render was to completely turn off all types of filters - firefly, gaussian, and so on.  I did it to see if it would make the image render a lot faster - no, it does not.  Does it make the image look better?  No, it does not.  It is still OK, but I can see aliasing, so I will turn the default filters back on.

  • nonesuch00 Posts: 18,076

    Can you test with considerably reduced 'convergence' or 'samples' or 'time limit' to keep the render times at least comparable with the actual size/high quality render?

    "Great minds think alike ..."

    I've already rendered the cube at 800 x 800 and 1600 x 1600 at each of the following Convergence limits:  95%   75%   50%   and   25%.

    Unfortunately, it will be a while before I can compile all of the data from the Render Log, see what it has to say, and then put the resulting images and any charts, etc., together to post here.

    So "stay tuned" ...

    PS -- "The plan" is to get such hard data first for at least this one "tecnical" scene with the cubes, and then -- using what's learned there -- put together a set of "real world" scenes that hopefully will test as rigorously as possible with the smallest number of scenes/renders required to learn what needs to be learned.
    At some point, I'll probably publish the "methodology" (with a link from this Thread ... or ... wherever) -- maybe along with some standardized Scenes, and presets for Lighting, etc. That might allow different folks to use a "reference standard" more-or-less to make the results they get with their own work more readily comparable.  That way, hopefully everybody can leverage everybody else's experience more reliably.   Or not ... :-) 

    Well, I hate to disqualify myself from the great minds category, but it was prixat that said that, not me. 

  • DustRider Posts: 2,737

    Any materials using SSS, caustics, etc. will take longer to converge than your cube render.  It also looks like your anatomy render was using fairly simple shaders (though I could be wrong, I don't "see" any indication of SSS, bump, normal map, or displacement on the anatomy shaders), thus it did not need more time/iterations to clear up, though it would be interesting to know what the convergence was after the 2 hours was up.

    For example, the render below took somewhere around 3,000 iterations to reach the proper convergence, and while some people might say it "looks fine" at 2,000 iterations, the extra time definitely made a huge difference in the clarity and "realism" of the cubes, at least in my eyes (sorry, I don't remember exactly the number of iterations; I've slept many times since then).  Without the cubes in the image, it would have resolved (95%) at around 500 iterations.  So yes, the shaders on your cube could make a huge difference.  Keep in mind that the cubes in the render below have all of the things that take increased iterations to resolve: SSS, reflection, refraction, and caustics.

    Just thought it might be an important point that shaders can make a huge difference in render speed.  As far as the practice of rendering larger to a lower convergence, then downsampling, I really don't have a dog in the fight.  I prefer to render the image at the resolution I want, but it will be interesting to see your results.

  • will.barger.arts Posts: 60
    edited August 2017

     Dear nonesuch00, fastbike, and prixat ...

         Apologies for mis-attributing Comments in a couple of my responses.
         I'm figuring out that the Quote function doesn't work the way I thought it does.  (I like to edit down Quotes so that folks don't have to scroll through so much.)
         Looks like that got me in a little unintended "trouble".
         Thanks for the kind/tolerant corrections.

     

  • nonesuch00 Posts: 18,076

    Oh, for cubes like that, being so large and opaque as to dominate the scene, I could see needing 3000 samples.  Maybe I've noticed the Genesis 8 skin shader settings need 2500 samples, more so than the 2000 samples I had been using with Genesis 3 characters.

  • will.barger.arts Posts: 60
    edited August 2017

    ... For me, the 2000 iterations/samples I've seen in my 4K sized renders get to 99% convergence in the maximum limited default seconds (equal to 3 days) for over 90% of my renders using Iray & the Genesis 3 version of Iray skin materials.

    However, since the newest version of DAZ Studio and the update of the Genesis 8 skin material settings to be more realistic and complex, that is not quite the case anymore, so I just turned off the render timer and the render quality convergence check, and set the required samples to 2000.

    Hey, nonesuch00, sounds like you might be back in that "great minds" camp ... :-)

    Seems like what you've done -- albeit less formally than my own "anal-retentive" self -- is to figure out what the render-quality aspects are that are common to the bulk of your work, and then set a standard (or at least a starting point) that serves that kind of work.

    And from what I've seen informally, a 4K image is going to take "a good while" to render, even if it's not all that complex.  
    This also seems like an example of where "Manual Oversampling" may not be all that practical, since doubling the Size of your initial render (not to mention quadrupling, etc.) from a base of 4K would likely create a "massively" long Render Time to reach the level of quality you want.

    So that seems to leave you -- and, by implication, the rest of us -- with figuring out how "low" you can set the Convergence limit without any unacceptable degradation in overall image quality.  From the "hard" data I've seen so far, 99% is probably a lot higher than you need to go.  I often see little or no perceptible difference between, say, 90% (and often a lot lower) and 99%.  The same is true for Iterations.
    For a render that takes two days on your machine (I'll assume 48 hours), a 10% reduction in time from a lower Convergence and/or Iteration limit would -- roughly and theoretically -- save you about 4.8 hours per render.
    So it does all seem like something worth exploring in some detail.

     

    "I've seen in other forums ..."

    ... is a major reason I'm trying to do this "hard" testing I've mentioned.
    I have no doubt that almost everyone who makes Comments and offers advice or just a rough summary of their experience is trying to help and writes "honestly".
    But that doesn't mean they're correct ... or, perhaps more often, they understandably don't provide enough "background" information for readers to know the conditions under which their advice/experience would and would NOT apply to others.
    The result can often be a lot more mis-informed folks who end up wasting a lot of time and "infecting" others.
    Humans.  Go figure.  :-)  

    That's a key reason I'm trying to do "controlled" experiments of my own ... and then publish the visual results along with a lot of "background detail" ... so folks can make up their own minds on an "informed" basis.
    No hero here.  We just need to know this stuff "for sure" for commercial-efficiency purposes.  So if we get it nailed down, might as well share most of it with others.

    Unfortunately, such an approach takes a lot of time -- not just for the multiple renders required for each aspect tested, but the analysis and write-up (which can take lots longer than many renders :-)

     

  • One thing I did try differently on my latest render was to completely turn off all types of filters - firefly, gaussian, and so on.  I did it to see if it would make the image render a lot faster - no, it does not.  Does it make the image look better?  No, it does not.  It is still OK, but I can see aliasing, so I will turn the default filters back on.

    Thanks for the info on the effect of Filters.

    They're definitely factors we want to explore ... but we'll probably defer "isolating" their effects in formal testing to later phases.

  • nonesuch00 Posts: 18,076

    For me it's sort of a case of 'yes, we know that those filter settings are there and that they have an effect, BUT will I save substantial render time and get results that look the same, for all intents and purposes, to untrained amateur eyes by turning them off?'  No.  Even my amateur eyes could see many aliasing effects.

  • fastbike1 Posts: 4,077

    @will.barger.arts  "I'm figuring out that the Quote function doesn't work the way I thought it does.  (I like to edit down Quotes so that folks don't have to scroll through so much.)"

    I have found this style of quote works better for the very reasons you state. A bit more "fuss" to do, but cleaner in the end. You can also multiquote different posts without a jumbled nightmare at the end.

     

    Fishtales got me thinking about a parameter that, in some circumstances, doesn't seem to be a very good success endstate.  That parameter is convergence.  This thread made me go back and view some renders I thought were "acceptable".  As I looked closer, I saw some "noise" in shaded areas that wasn't apparent until you zoomed to 100%.  All of these images had convergence set to 95% and Quality = 1.  I begin to see why some folks just set iterations to a value they have found safe and then judge by eye.  Remember also that the Quality setting changes the internal metric for satisfactory convergence of a pixel.

     

     

  • will.barger.arts Posts: 60
    edited August 2017

    "There are lies.  There are damn lies.  And then there are statistics."

    -- Mark Twain

    I have the "mixed blessing" of being at least semi-conversant in the mathematical science of Statistics.
    So at the risk of connecting completely unrelated dots ... 

    It occurred to me that many "flaws" in rendered images (especially those like Fireflies and Noise) might be rightly thought of as statistical "errors" -- that is, they represent "deviations" from the "average" or "right" value for a pixel ... the value it would have if allowed to render to "full completion" at a very high level of Convergence (approaching 100%).

    IF this is true, then the probability of any randomly selected pixel in the image being "off" by a given amount -- which is something like "an acceptable level of image quality" for that pixel -- should (famous last words) be something that can be estimated with usable accuracy.

    [Keep in mind, however, that we're talking about probability, NOT a guarantee.  Just because it's highly "probable" that something will (or will not) occur does not mean that -- in any specific case -- that thing necessarily/definitely will (or won't) occur.  The chances of any one of us reading this NOT getting struck by lightning within 48 hours are extremely high, but there is still some chance you will get crispy-fried in a thunderstorm.  And if that does happen to you, the chances that it did occur to you are 100%.  
    By extension, remember that setting a parameter for a render based upon the probability that it will get you a highly satisfactory image does not "guarantee" that you will always get that expected level of image quality in every case.  But it can mean that -- a lot more often than not -- you will get at least as good an image as you expected.
    And so, over time and a significant number of individual renders, you're likely to save a lot of time, stress, and maybe money by "playing the odds" intelligently with your statistically based render settings.]

    Standard Deviations  v.  Settings for Convergence and/or Iteration Limits?

    [SIDE NOTE for you fellow/would-be StatGeeks:  The figures below apply for a so-called Normal distribution (having nothing to do with geometry or texture "normals").  It seems likely that "errors" in a render would follow such a Normal distribution, so I'm going to assume that this statistical assumption is valid in at least a practical sense.  If you know otherwise, please shout out and explain.]

    In effect, a "standard deviation" gives you a single reference number that indicates -- for an overall set of data points considered -- how "close to" or "far away" from the Mean (average or "most likley" or "expected") value is for that set of points.  In rendering, that's for all of the pixels in the image together.  

    As a simple example, consider an image that is filled completely with 50% gray pixels.  The brightness value of every pixel is 50, so the average/mean value of the pixels overall is also 50 -- there is zero "deviation".  Now start adding in pixels, at random, with brightness values other than 50.  The more of these "deviant" pixels you add -- and/or the further the brightness value of each one strays from 50 -- the more "deviation" you have for the image overall.
    Now start adding in deviations for each deviant pixel's hue and saturation values as well as brightness, and you've covered most of the ways in which the appearance of a given pixel in an image can differ from its "right" or "perfect" appearance.
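    If you want to watch those numbers move, here's a tiny sketch using Python and NumPy.  (The array values are made up purely for illustration.)

        import numpy as np

        # A 100x100 "image" of 50%-gray pixels, plus a couple of deviants.
        pixels = np.full((100, 100), 50.0)
        pixels[10, 10] = 80.0  # a bright "firefly"
        pixels[20, 30] = 15.0  # a dark noise speck

        print(pixels.mean())  # still very close to 50
        print(pixels.std())   # no longer zero -- the image's overall "deviation"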

    Key "Breakpoints" of Standard Deviation.  The following are three levels of standard deviation that often are used in a variety of situations.

    • 1 Standard Deviation covers about 68% of all errors/deviations from the Mean.
    • 2 Standard Deviations covers about 95% of all errors/deviations from the Mean.  "Suspiciously", this is the default Convergence level in Daz Studio.  
      It also is commonly used as a level beyond which "errors" are considered "outliers" -- a sort of randomly occurring "fluke" event, rather than something that happens because of the inherent nature of the "system" which is "creating" the events (kind of like a Render Engine creating pixels).  
    • 3 Standard Deviations covers about 99.7% of all errors/deviations from the Mean.
      This level is often called the "three sigma" rule and applied as a key reference-point/standard in quality control.  (The so-called "Six Sigma" standard used in high-precision/quality manufacturing, including computer systems, goes further still, out to six standard deviations.)
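    For fellow StatGeeks:  those coverage percentages fall straight out of the Normal distribution via P(within k standard deviations of the Mean) = erf(k / sqrt(2)), and can be checked with nothing but the Python standard library:

        import math

        # Fraction of a Normal distribution within k standard deviations of the Mean.
        for k in (1, 2, 3):
            print(f"{k} SD: {math.erf(k / math.sqrt(2)):.2%}")
        # -> 1 SD: 68.27%   2 SD: 95.45%   3 SD: 99.73%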

    Some of the recent testing I've done suggests that -- at least for small/medium-size renders of fairly moderate complexity -- these also may serve as good reference points for evaluating "adequate" (if not "satisfying") settings of Maximum Samples (a.k.a., Iterations) and/or Converged Ratio as termination points for progressive renders in Iray.  

    THOUGHTS?
    ANY EXPERIENCE WITH USING SUCH STATISTICAL TOOLS FOR RENDERS?

     

  • will.barger.arts Posts: 60
    edited August 2017

    Render Quality setting might not have much effect?

    In looking at the effects of using various Samples (Iterations) settings and/or Converged Ratio settings as a means of terminating a render "as soon as acceptable image quality" can be reached, it dawned on me that the Render Quality setting might be having a lot more effect on image quality than the "maximum"/stop settings, regardless of which "limit" parameter is used.

    After some Internet searching (including Daz-Iray documentation and forums) and not finding much on how the Render Quality function works or what kinds and extents of effects it could have ... I ran a few render tests of my own on the subject.  (I was also confirming some details on how an "Animation" can be used to render a series of actual Still Images, so it was a "killed two birds ..." kind of thing, too.)

    SEE THE ATTACHED IMAGE for the key results, and make up your own mind how much the Render Quality function affects image quality.

    Personally, I'm not seeing much effect, even across a broad, orders-of-magnitude range of settings at 100, 1000, and 10,000.  (I first tried settings of 1, 2, 4, and 8 ... but couldn't see any difference, so I "went big".)  This parameter seems a bit weird: the limits on its values appear to be set up to range from 0 to 10,000 ... but ... those limits are not "turned on", and you can enter negative numbers, too. 

    What improvements I do see -- and they seem quite significant -- appear to be driven primarily by increases in the number of Samples/Iterations allowed, which included 30, 60, and 120.  These limits are what stopped the renders in all cases.
    Resulting Convergence Ratios for each image overall ranged from about 70% to about 90%.

    I "read somewhere" that increasing the Render Quality setting in integer steps (i.e., 1, 2, 3, ...) would result in a commensurate linear increase in total Render Time.  I did NOT see that at all -- although my indirect gauge of total Render Time with these test images was the somewhat high levels of Converge Ratios reached within a relatively low number of Samples/Iterations completed, as I did not take the time to let each of these many images render "to completion".  My actual Render Times to the "forced" stop (per the Maximum Samples settings shown) did increase linearly with the increases in the number of Samples/Iterations completed.

    About The Renders

     

    1. All images were rendered at 800 x 1200 pixels.  Results might vary significantly for larger and/or more complex scenes.  I haven't tested those factors yet in terms of variations in Render Quality settings.
       
    2. All Filtering was turned OFF.
       
    3. The character is the Genesis 8 Base Female ... loaded with "out of the box" default settings and materials, etc., except for raising the arms to the "old" T position.
       
    4. Lighting is the Sun & Sky only.
       
    5. Each body shown was rendered separately and then composited in the render-setting groups as shown.
      Having a front and two profile views for each body in the directional light shown creates darker/low-illumination areas in which rendering artifacts often are more prominent.
       
    6. The garments shown were added in post-processing to comply with DS non-nudity guidelines.  They are all the same render, so they do not reflect any changes in image quality across the various combinations of factors.  So please restrict your assessments to the G8F bodies only.
      (FYI:  The garments were added in the scene to the body, and -- using the same light setup -- the Canvas functions of Iray were used to render the garment pieces only, on a transparent background.)
       
    7. In the scene, the background is absent/transparent.  The dark-gray, neutral background shown was added in Photoshop along with the text.  No exposure, color, or other image-appearance adjustments were made in Photoshop.

    Feel free to opine !!!
    RUSSELL
    For Will Barger Arts

    WBA_RenderQualityTest_A2-Compos-Dressed_0800x100_01.jpg
    2620 x 3800 - 4M
  • prixat Posts: 1,588

    There is an averaging step: it's in the firefly filter that you turned off.

    That's only done locally, though; a pixel is compared to its immediate neighbours, and if it's very different then Iray treats it as an outlier.
    It's the constant balance to remove noise and keep detail, and to tell which is which!

    I think the reason you didn't see an effect when changing Render Quality is because the Iterations were too low. As you said, it reached the Samples limit before the Convergence limit in all your tests.

  • Sorry, I had to delete the image as it showed nudes.

    Render Quality determines, as I understand it, when a pixel is considered to be converged - turning it up makes the engine fussier. I suspect you might need relatively noisy looking scenes, which would not converge rapidly, to see a big effect.

  • Tobor Posts: 2,300

    A contributor posted the oversampling technique a few years back on a nVidia blog, and many here have taken up the concept to shorten their renders. The basic problem is that there's been numerous Iray versions since then, and there is no guarantee "cutting short" a render will yield the results described in the blog.

    Another issue: the pixel filters "kick in" at various points in the render timeline. This is how their algorithm works, and the pushbutton nature of Iray doesn't allow a user to change the behavior. It's best to experiment with this technique in a known environment:

    1. Duplicate a typical use-case. This means leaving default filtering on, and rendering scenes typical of Daz users, which means characters in a set with hair and clothing.

    2. Avoid manually stopping the render before the specified convergence has been reached. This allows the filtering and other (biased) phases to be completed. 

    3. Use instead the Convergence Ratio and/or Render Quality setting to control the sampling progress. The latter can be set to <1 to decrease the threshold for analyzing convergence. Allow the render to naturally complete.

    4. Bookmark and refer to the Iray programmer's manual (link below) which provides some useful additional information on Iray's feature set, and how the settings work in the engine. Be warned, however, that the manual does not always reflect the version of Iray that is packaged with D|S. Differences may be encountered.

    http://www.migenius.com/doc/realityserver/latest/resources/general/iray/manual/index.html#/concept/preface.html

  • Sorry, I had to delete the image as it showed nudes.

    Apologies for the miscue ... I uploaded the unblurred version by mistake.

    I've replaced it now with the blurred version.  If that one is not discreet enough, please remove it and let me know -- if you would -- what I'd need to do to meet the standards.

  • nonesuch00 Posts: 18,076

    Sorry, I had to delete the image as it showed nudes.

    Apologies for the miscue ... I uploaded the unblurred version by mistake.

    I've replaced it now with the blurred version.  If that one is not discreet enough, please remove it and let me know -- if you would -- what I'd need to do to meet the standards.

    That's not allowed either. They have to be wearing underwear, swimwear, fig leaves, or so on.

  • will.barger.arts Posts: 60
    edited August 2017

    Under-Sampling as a Way to Estimate Best Render-Length Limits ???

    I've previously talked in this Thread about the potential of "Manual Oversampling" as a way to achieve greater image quality (at least when "normal" rendering doesn't do the trick) ... with the hope of at least sometimes offsetting the increase in Render Time with lower, carefully selected render limits, set via the Maximum Samples (number of complete-image passes, or "Iterations") and/or Maximum Convergence Ratio parameters under the Render Settings | Progressive Rendering tab.

    As I began to do some further testing on that, I kicked off the render of a fairly simple-scene, 800x1200 image with the following key settings:

    • Filtering
      • Fireflies = ON  (Default Settings)
      • Degrain = ON  (Default Settings)
      • Noise = ON  (Default Settings)
      • Architectural = ON  (Default Settings)
         
    • Render Quality = ON  
      Set at the implied-maximum 10,000
       
    • Tone Mapping = ON  (Default Settings - Including Burn Highlights and Crush Blacks ON)
       
    • Rendering Limits  (first one reached auto-stops the Render)
      • Maximum Convergence Ratio = 99.7%  (3 Standard Deviations)
      • Maximum Samples = 5,000
      • Maximum Render Time = 25,000 seconds  (almost 7 hours)
        This was the expected self-terminating factor, set random-ish for an overnight run on a CPU-bound laptop.

    The idea behind these settings was to "load up" the render with as many common quality-enhancing factors as I could readily think of, do the render, and then analyze the Render Log against "lighter weight" settings I'd run previously for the same scene ... hoping especially to construct a curve of the Convergence Ratios achieved over time as the effectively unlimited number of Iterations progressed the image to a near-maximum level of overall quality (given the high 99.7% Convergence Ratio limit). 
    The hypothesis was that I would get curves for both the smaller and larger images that were similar enough in shape -- although very different in duration -- to see if a much smaller/faster render of a planned large image could be used to effectively estimate the "optimal" quality-vs-time settings for the larger/longer render.

    ATTACHED are some graphs that I developed from the analysis of the Render Log, along with a PDF file that shows the detailed data.

    Combined with some prior work, this latest test seems to confirm -- at least to some degree -- our contention that the quality of many renders may tend to "peak out" well before the default 95% Convergence Ratio, and that something in the 80%-85% range (or maybe a bit less) might be a good starting/reference point for general use.  (As mentioned before, if your work tends to fall into one or more common types for such technical considerations, you might want to find your own curves and starting/reference-point settings specific to those types.  Your optimum might be noticeably higher or lower.)

    All of this also is starting to suggest (to me, anyway) that "governing" Render Time with the Maximum Convergence Ratio parameter and then setting Maximum Samples and Maximum Render Time to WAY HIGH levels might be the best approach in many/most cases.  
    At least in the render studied here, it seems clear that I could let Iray rack up Iterations/Samples "until the cows come home" and "never" get any greater image quality than I had when it self-terminated on time.  My educated guess (that we hope to test formally soon) is that I would get the same high level of image quality (and related Convergence Ratio achieved) at the same number of Samples/Iterations completed for a much larger image from the same scene (although it definitely would take a lot longer -- per Iteration/Sample, and overall).  

    Procedurally, I'm thinking that "Manual Undersampling" might work like this:

    • Develop your scene.
    • Assess its complexity in terms of render-workload/time factors and final image size needed.
      And then if the final render seems likely to take a long time ...
    • Run a very small-scale render of the scene with a standard/reference Maximum Convergence Ratio setting as explained above (although you might still want to set a WAY HIGH / I-GIVE-UP limit on Maximum Time).
    • Once the small/quick "Undersample" render is finished, use data from the Render Log to chart a curve of the Convergence Ratios actually achieved over the life of the render.
    • Assess the extent to which there was any "quality juice" left that could have been readily harvested with a higher Maximum Convergence Ratio, as well as "scaling up" the time elapsed to estimate how long the larger/final-image render would take to hit the same point and image quality.  (See the sketch after this list.)

      A still-rapidly-rising curve at termination suggests both that there's more image quality left to get beyond the standard/reference Maximum Convergence Ratio you set, and that there's a good bargain to be made in the inherent tradeoff between more quality and the additional render time required to get it. 
      If the curve has already "flattened out" (as it did in the case illustrated in the Attachments) -- and especially if it levelled off well below your standard/reference level -- "you're done" ... meaning that setting any higher levels for your larger/final render will most likely get you very little improvement in image quality, even after a much longer wait in render time to get those measly scraps.
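    As a rough illustration of those last two steps, here's a sketch in Python of the "flatness test" and the time scale-up.  (The (iteration, convergence) pairs are made-up numbers standing in for values you'd pull out of your own Render Log -- the log format varies by Iray version, so the parsing itself is left out -- and the 0.01%-per-iteration threshold is an arbitrary placeholder, not a tested standard.)

        # Hypothetical (iteration, convergence-%) pairs read off a Render Log.
        samples = [(30, 41.0), (60, 62.0), (120, 78.0), (240, 84.0), (480, 85.5)]

        # Slope over the last stretch of the curve, in convergence-% per iteration.
        (i1, c1), (i2, c2) = samples[-2], samples[-1]
        tail_slope = (c2 - c1) / (i2 - i1)

        if tail_slope < 0.01:  # arbitrary "flat" threshold
            print("Curve has flattened -- little quality left to harvest.")
        else:
            print("Still climbing -- a higher Convergence limit may pay off.")

        # Crude scale-up: Render Time grows roughly with pixel count, so if the
        # small render took 2 hours, the large one needs roughly 2 * 9 = 18.
        small_px, large_px = 800 * 1200, 2400 * 3600
        print(f"Large render: roughly {2.0 * large_px / small_px:.0f} hours")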

    Sound anywhere in the neighborhood of right ... and potentially useful?

     

    Also ... The Black Pixel Conflict  --  I got the same "black pixel" error message 220 times in the test render.  However, this didn't seem (per the Render Log) to extend Render Time perceptibly, if at all.  But it did make the Render Log a mess to sort through, and would really be a pain on a 3,000- or 5,000-Sample render! 
    From what I can tell, one (or maybe more) of the Filters I activated tries to interact with whatever this "black pixel" is, and in a way that's not compatible with some other settings.  I suspect that it may be a conflict with the Crush Blacks function in the Tone Mapping section.  I plan to turn off Crush Blacks in the next test to find out.  (I find the Burn Highlights function pretty useful.  Likewise for Crush Blacks, but it seems like the same effect could readily be achieved with Curves, etc., in Photoshop-like post processing, whereas blown-out highlights can't be recovered "in post".)
    Anybody know "for sure" what thing(s) conflict with the dreaded Black Pixel?  And maybe even what it is?

    UPDATE:

         As an ATTACHMENT, I've added a render of the same scene, but done at 2400x3600 ... to roughly the same Convergence Ratio of about 80%-85% achieved with the smaller version posted previously.  I hope to add details from an analysis and comparison later -- probably in a separate post.  (To maintain comparability of the bodies across renders -- both of which were originally done with no clothing -- I rendered the top and bottom she's wearing separately from the same scene using the Canvas function and then composited the final images shown here.  FYI ... the clothes were rendered to about 97% Convergence Ratio.  There didn't seem to be much improvement from 85% to 97%, but there did seem to be some very subtle-but-perceptible-from-up-close changes.)

    WBA_RenderPerf_Test-A2_01_StepTime-01.png
    1504 x 1129 - 81K
    WBA_RenderPerf_Test-A2_01_RenderPhases-01.png
    1504 x 1129 - 66K
    WBA_RenderPerf_Test-A2_01_ConvergeCurve-01.png
    1504 x 1129 - 100K
    WBA_RenderPerf_Test-A2_02.pdf
    1M
    WBA_RenderQualityTest_A2-Compos-Dressed_01.jpg
    800 x 1200 - 243K
    WBA_RenderQualityTest_A2-Compos-Dressed_2400x3600_01.jpg
    2400 x 3600 - 2M
  • prixat Posts: 1,588
    edited August 2017

    I often render smaller and enlarge the image, especially with the amazing tools you get these days.  I use ON1 Resize; it's the best £60 that I ever spent.

     

    Since you're considering quality against time, I think some conclusions are being skewed by the choice of hardware - using a CPU/laptop. 

    For example, rendering a scene similar to yours, my old nVidia 750ti gets to 5000 iterations in less than 10 minutes.

    The difference between a CPU and a GPU (even a basic one) is so great that converging to 95% or more becomes feasible.

  • Sorry, I had to delete the image as it showed nudes.

    Apologies for the miscue ... I uploaded the unblurred version by mistake.

    I've replaced it now with the blurred version.  If that one is not discreet enough, please remove it and let me know -- if you would -- what I'd need to do to meet the standards.

    https://www.daz3d.com/forums/discussion/3279/acceptable-ways-of-handling-nudity#latest

  • fastbike1 Posts: 4,077

    I think folks are under-appreciating Tobor's comments about the actual Iray rendering process and when some processes kick in.

    I also think prixat has a major point " I think some conclusions are being skewed by the choice of hardware - using a CPU/laptop"

    This "Assess its complexity in terms of render-workload/time factors and final image size needed." sounds great, but like most great concepts, the devil is in the details, and the factors/importance will be different for everyone.

    This looks suspiciously like a typical thesis project that generates a whole lot of data with no actual relevance/use.  This project seems determined to quantify a process that has a virtually infinite number of values for the main factor of the render (the user's eye).

     

     
