Denoise renders and save a LOT of time! (Nvidia not Reqd)


Comments

  • Padone Posts: 3,700

     I've tried faking an albedo pass using emissive materials in an OpenGL export .. It's really disappointing that even though Iray actually supports albedo AOVs, Daz doesn't provide one as an export canvas

    That's a good idea .. For OpenGL it should be enough to convert everything to the default Daz shader, then set the ambient color to white and the strength to 100%. And yes, it is a pita not having the albedo export, even if I guess most fine details come from the normal buffer.

  • Matt_Castle Posts: 2,585
    edited May 2019
    Toonces said:

    I took a screen shot of the pic and ran it through the external Intel denoiser (post-denoise result attached). The girl on the left didn't appear to lose any detail.

    Because of the nature of the denoiser, which tries specifically to spot ray-tracing noise, it generally won't work at all on an image that's had any other editing (including resizing, and your attempt to screenshot it has left it about 10% smaller than the file that was uploaded, even if we assume the original image wasn't resized at all), so that's not particularly conclusive.

    That said, so far I am generally more impressed by the Intel denoiser's ability to discern between detail and noise than I am by the Nvidia version's.

    Padone said:

     I've tried faking an albedo pass using emissive materials in an OpenGL export .. It's really disappointing that even though Iray actually supports albedo AOVs, Daz doesn't provide one as an export canvas

    That's a good idea .. For OpenGL it should be enough to convert everything to the default Daz shader, then set the ambient color to white and the strength to 100%. And yes, it is a pita not having the albedo export, even if I guess most fine details come from the normal buffer.

    I'll stress that the results are far from perfect, given my notes about all the effects that OpenGL isn't trying to render at all - the test image I used certainly ended up with some odd artefacting because of the mismatch in some areas. However, it does show a lot of promise for what could be achieved with proper albedo data.

    Post edited by Matt_Castle on
  • Padone Posts: 3,700
    edited May 2019
    Toonces said:

    I took a screen shot of the pic and ran it through the external Intel denoiser .. The girl on the left didn't appear to lose any detail.

    That's because the girl on the left has 98% convergence so there's almost no noise to process at all. It would be hilarious for a denoiser to blur even where there's no noise.

    Post edited by Padone on
  • shaneseymourstudio Posts: 383
    edited May 2019

    I am not sure if this LPE applies, as I was just looking up what albedo even is, but I wanted to share it in case it helps at all:

    color emissive
    color lpe:C[<L.>O]
    Post edited by shaneseymourstudio on
  • Padone Posts: 3,700
    edited May 2019

    Cycles vs Intel denoiser (with Intel using the beauty canvas only)

    To better show what I mean, here's a proof of concept: a simple cube rendered in Blender with 8x samples. The first image is the original noisy render, which would be the beauty canvas in Iray. The second image is denoised with Cycles. The third image is denoised with Intel using only the original render as the source. You can clearly see that Cycles doesn't lose details or textures; it simply can't, because it's using the albedo and normal buffers. Intel, on the other hand, is just trying to guess what's noise and what's data, and the result is far from decent.

     

    original image (beauty canvas)

    cycles denoiser (using the albedo and normal buffers)

    intel denoiser (using the beauty canvas only)
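    For anyone who wants to try the same comparison themselves, the sketch below drives a standalone Open Image Denoise front end from Python twice: once with the beauty canvas alone, and once with the albedo and normal AOVs as well. The binary name and the --ldr/--alb/--nrm/-o options are assumptions based on OIDN's bundled example app (check your tool's --help output), and the file names are placeholders.

    import subprocess

    # Beauty canvas only - the "Intel" case above, where the denoiser has to
    # guess what is noise and what is texture:
    subprocess.run(
        ["oidnDenoise", "--ldr", "cycles-8x.pfm", "-o", "cycles-8x-beauty-only.pfm"],
        check=True,
    )

    # Beauty canvas plus albedo and normal AOVs - the extra information Cycles'
    # own denoiser gets, which is what preserves the texture detail:
    subprocess.run(
        [
            "oidnDenoise", "--ldr", "cycles-8x.pfm",
            "--alb", "cycles-8x-albedo.pfm",
            "--nrm", "cycles-8x-normal.pfm",
            "-o", "cycles-8x-denoised.pfm",
        ],
        check=True,
    )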

    cycles-8x.png
    480 x 270 - 273K
    cycles-8x-denoise.png
    480 x 270 - 164K
    cycles-8x-intel.png
    480 x 270 - 108K
    Post edited by Padone on
  • Padone Posts: 3,700
    edited May 2019

    I am not sure if this LPE applies, as I was just looking up what albedo even is, but I wanted to share it in case it helps at all:

    color emissive
    color lpe:C[<L.>O]

    While I mainly use Cycles for rendering, I also like to compare things for fun .. Thank you so much, going to do some tests ..

     

    EDIT: I had no luck with that LPE; it just gives me a black EXR. Also, even though I don't know how LPEs work, I guess that's not correct because it's too simple. Again, below is an extract from the Intel documentation.

    https://openimagedenoise.github.io/documentation.html

    For simple matte surfaces this means using the diffuse color/texture as the albedo.

    For metallic surfaces the albedo should be the average reflectivity.

    The albedo for dielectric surfaces (e.g. glass) should be 1.
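    To make those three rules a little more concrete, here is a minimal numpy sketch of how an albedo buffer could be assembled from them. The diffuse, metal_reflectivity and mask arrays are hypothetical placeholders for data a renderer would normally supply; this only illustrates the rules quoted above, it is not Iray's actual albedo AOV.

    import numpy as np

    h, w = 270, 480
    diffuse = np.random.rand(h, w, 3)             # diffuse colour/texture per pixel (placeholder)
    metal_reflectivity = np.random.rand(h, w, 3)  # spectral reflectivity of metal surfaces (placeholder)
    is_metal = np.zeros((h, w), dtype=bool)       # placeholder masks a renderer would provide
    is_dielectric = np.zeros((h, w), dtype=bool)  # e.g. glass

    albedo = diffuse.copy()                                   # matte surfaces: diffuse colour/texture
    albedo[is_metal] = metal_reflectivity[is_metal].mean(     # metals: average reflectivity
        axis=-1, keepdims=True)
    albedo[is_dielectric] = 1.0                               # dielectrics such as glass: albedo of 1

    albedo = np.clip(albedo, 0.0, 1.0)                        # keep the buffer inside [0, 1]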

    Post edited by Padone on
  • Taoz Posts: 9,941
    edited May 2019

    I'm updating the DragNDrop app to support albedo and normal maps. Does anyone know which formats these normally are in (jpg, png, etc.), and if they need to be in the same format as the render?

    Post edited by Taoz on
  • Richard Haseltine Posts: 101,010
    Padone said:

    Perhaps a "DIY" albedo could be done by turning all the materials into emitters so as to emulate a flat shader, but it's just a guess.

    A sound theory, although even if it works well enough, trying to do that using Iray would mean having to render the albedo separately, at which point you might as well argue you should just render twice as long in the first place.

    However, taking the core idea, I've tried faking an albedo pass using emissive materials in an OpenGL export, and the results of what the denoisers can achieve even with just that imperfect albedo AOV (with no proper reflection, refraction, depth of field, etc., and no normal AOV) do show staggering promise, with a huge leap in retained fine detail.

    It's really disappointing that even though Iray actually supports albedo AOVs, Daz doesn't provide one as an export canvas, because being able to feed the denoiser all of the parts of the picture (pun only slightly intended) would allow better results in less time.

    Canvasses support all the features of LPEs, as described here https://raytracing-docs.nvidia.com/iray/manual/index.html#reference#light-path-expressions - I don't know if that is how the Albedo AOV is built; if it is, it should be doable in DS.

  • Matt_Castle Posts: 2,585
    edited May 2019
    Padone said:

    Perhaps a "DIY" albedo could be done by turning all the materials into emitters so as to emulate a flat shader, but it's just a guess.

    A sound theory, although even if it works well enough, trying to do that using Iray would mean having to render the albedo separately, at which point you might as well argue you should just render twice as long in the first place.

    However, taking the core idea, I've tried faking an albedo pass using emissive materials in an OpenGL export, and the results of what the denoisers can achieve even with just that imperfect albedo AOV (with no proper reflection, refraction, depth of field, etc., and no normal AOV) do show staggering promise, with a huge leap in retained fine detail.

    It's really disappointing that even though Iray actually supports albedo AOVs, Daz doesn't provide one as an export canvas, because being able to feed the denoiser all of the parts of the picture (pun only slightly intended) would allow better results in less time.

    Canvasses support all the features of LPEs, as described here https://raytracing-docs.nvidia.com/iray/manual/index.html#reference#light-path-expressions - I don't know if that is how the Albedo AOV is built; if it is, it should be doable in DS.

    As I've heard (and seems to be backed up by testing above), attempts to get Daz to supply an Albedo canvas come out black for some reason.

    (EDIT: I should add that I'm away from home at the moment, so can't currently test it myself).

     

    Taoz said:

    I'm updating the DragNDrop app to support albedo and normal maps. Does anyone know which formats these normally are in (jpg, png, etc.), and if they need to be in the same format as the render?

    They would normally be EXRs if you're storing them at the same time, but I think my previous testing shows that you can mix and match formats.

    (However, that said, anyone using JPG prior to denoising deserves a slap). cheeky

    Post edited by Matt_Castle on
  • Richard Haseltine Posts: 101,010
    Padone said:

    Perhaps a "DIY" albedo could be done by turning all the materials into emitters so as to emulate a flat shader, but it's just a guess.

    A sound theory, although even if it works well enough, trying to do that using Iray would mean having to render the albedo separately, at which point you might as well argue you should just render twice as long in the first place.

    However, taking the core idea, I've tried faking an albedo pass using emissive materials in an OpenGL export, and the results of what the denoisers can achieve even with just that imperfect albedo AOV (with no proper reflection, refraction, depth of field, etc., and no normal AOV) do show staggering promise, with a huge leap in retained fine detail.

    It's really disappointing that even though Iray actually supports albedo AOVs, Daz doesn't provide one as an export canvas, because being able to feed the denoiser all of the parts of the picture (pun only slightly intended) would allow better results in less time.

    Canvasses support all the features of LPEs, as described here https://raytracing-docs.nvidia.com/iray/manual/index.html#reference#light-path-expressions - I don't know if that is how the Albedo AOV is built; if it is, it should be doable in DS.

    As I've heard (and seems to be backed up by testing above), attempts to get Daz to supply an Albedo canvas come out black for some reason.

    (EDIT: I should add that I'm away from home at the moment, so can't currently test it myself).

    Well, it's possible that there is a bug somewhere - but it's quite possible that the bug is in the LPE being fed to Iray, rather than on the Daz Studio or Iray side. This is the first I've heard of an Albedo AOV; do you have a link to previous discussions - ideally giving the LPE used?

     

  • LenioTG Posts: 2,118
    Padone said:

    Cycles vs Intel denoiser (with Intel using the beauty canvas only)

    To better show what I mean, here's a proof of concept: a simple cube rendered in Blender with 8x samples. The first image is the original noisy render, which would be the beauty canvas in Iray. The second image is denoised with Cycles. The third image is denoised with Intel using only the original render as the source. You can clearly see that Cycles doesn't lose details or textures; it simply can't, because it's using the albedo and normal buffers. Intel, on the other hand, is just trying to guess what's noise and what's data, and the result is far from decent.

     

    original image (beauty canvas)

    cycles denoiser (using the albedo and normal buffers)

    intel denoiser (using the beauty canvas only)

    Umh...the difference is noticeable, but this stuff is too complex for me!

    I hope that one day Daz will include this method for noobs like me too xD

  • Taoz Posts: 9,941

     

    Taoz said:

    I'm updating the DragNDrop app to support albedo and normal maps. Does anyone know which formats these normally are in (jpg, png, etc.), and if they need to be in the same format as the render?

    They would normally be EXRs if you're storing them at the same time, but I think my previous testing shows that you can mix and match formats.

    OK, thanks, I'll include them all then.

    (However, that said, anyone using JPG prior to denoising deserves a slap). cheeky

    smiley

  • Leonides02 Posts: 1,379
    Toonces said:

    I took a screen shot of the pic and ran it through the external Intel denoiser (post-denoise result attached). The girl on the left didn't appear to lose any detail.

    Yeah, the Intel denoiser seems better in my experience. I've been using it exclusively. It also has a huge advantage in that it's non-destructive, so you keep the original render and can "paint back in" any lost details using layers in PS.

  • LenioTG Posts: 2,118
    (However, that said, anyone using JPG prior to denoising deserves a slap). cheeky

    I use .png.

    Since I'm always curious, what's the difference? Is it the fact that JPG is more compressed?

  • Matt_Castle Posts: 2,585
    Padone said:

    Perhaps a "DIY" albedo could be done by turning all the materials into emitters so as to emulate a flat shader, but it's just a guess.

    A sound theory, although even if it works well enough, trying to do that using Iray would mean having to render the albedo separately, at which point you might as well argue you should just render twice as long in the first place.

    However, taking the core idea, I've tried faking an albedo pass using emissive materials in an OpenGL export, and the results of what the denoisers can achieve even with just that imperfect albedo AOV (with no proper reflection, refraction, depth of field, etc., and no normal AOV) do show staggering promise, with a huge leap in retained fine detail.

    It's really disappointing that even though Iray actually supports albedo AOVs, Daz doesn't provide one as an export canvas, because being able to feed the denoiser all of the parts of the picture (pun only slightly intended) would allow better results in less time.

    Canvasses support all the features of LPEs, as described here https://raytracing-docs.nvidia.com/iray/manual/index.html#reference#light-path-expressions - I don't know if that is how the Albedo AOV is built; if it is, it should be doable in DS.

    As I've heard (and seems to be backed up by testing above), attempts to get Daz to supply an Albedo canvas come out black for some reason.

    (EDIT: I should add that I'm away from home at the moment, so can't currently test it myself).

    Well, it's possible that there is a bug somewhere - but it's quite possible that the bug is in the LPE being fed to Iray, rather than on the Daz Studio or Iray side. This is the first I've heard of an Albedo AOV; do you have a link to previous discussions - ideally giving the LPE used?

    I'll have to get back to you on that one, as I'm having trouble tracking it down again (wrong search terms or something), so I'll have to check my desktop's browser history when I get back home.

    TGFan4 said:
    (However, that said, anyone using JPG prior to denoising deserves a slap). cheeky

    I use .png.

    Since I'm always curious, what's the difference? Is it the fact that JPG is more compressed?

    PNG uses lossless compression, using more storage space to ensure perfect reproduction of the image.

    JPG is lossy compression that throws away fine image detail in the name of reducing file size. High quality JPEGs are usually okay for most real life photos, because camera images are naturally at least a little noisy and you're not worried if it doesn't reproduce the noise in the image exactly. (For an analogy, JPEGs are a bit like translated text. It won't be perfect - perhaps the adjectives have slightly different connotations - but you'll probably get the general meaning).

    If you're doing editing on an image, and eventually need to compromise on file size (upload limits/download speed/whatever), you only really want to suffer that loss of detail once, and only at the final stage, in order to be able to best edit the image.
    This is particularly important where these denoisers are concerned, because they're trying to spot the difference between ray-tracing noise and other detail, so smudging and smearing it just makes it harder and more likely it will guess wrong in the "noise or detail" game.
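    A quick round trip in Python makes the difference easy to see for yourself; render.png below is a placeholder for any render saved straight out of Daz Studio.

    from PIL import Image
    import numpy as np

    original = Image.open("render.png").convert("RGB")
    original.save("roundtrip.png")               # lossless
    original.save("roundtrip.jpg", quality=90)   # lossy, even at a high quality setting

    src = np.asarray(original, dtype=np.int16)
    png_back = np.asarray(Image.open("roundtrip.png").convert("RGB"), dtype=np.int16)
    jpg_back = np.asarray(Image.open("roundtrip.jpg").convert("RGB"), dtype=np.int16)

    print("PNG max channel error:", np.abs(png_back - src).max())  # 0 - pixel-for-pixel identical
    print("JPG max channel error:", np.abs(jpg_back - src).max())  # > 0 - fine detail has been altered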

    ~~~~~

    It should be said that even PNG is a compromise, as it uses 8 or 16 bit integer bit depth (8 being more common), meaning it often has to lose contrast in very bright or dark areas in order to have more contrast for the majority of the image - a concept called "clipping". This can become important when colour correcting, as the contrast outside the image's range is lost and cannot be recovered - if an image is over or under exposed, the damage is done.

    Formats like EXR, HDR and PFM instead store floating point data that can represent a huge range of values with good precision at any order of magnitude, and are excellent for heavy post processing, but they have huge file sizes* and cannot actually be properly displayed on any monitor without range compression (because while the format can accurately store the brightness of the sun at high noon, no monitor can actually accurately replicate that).

    * Frequently 20-100x the size of a JPEG file of the same resolution, so not a sensible format unless you actually need perfect contrast across huge dynamic range.
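    The clipping point is easy to demonstrate with a few numbers; the values below are hypothetical linear pixel brightnesses, and everything above 1.0 is flattened the moment it is squeezed into 8 bits.

    import numpy as np

    linear = np.array([0.02, 0.5, 1.0, 7.3, 250.0])            # hypothetical HDR pixel values
    as_8bit = np.clip(linear * 255.0, 0, 255).astype(np.uint8)
    print(as_8bit)          # [  5 127 255 255 255] - everything above 1.0 collapses to 255
    print(as_8bit / 255.0)  # the 7.3 and the 250.0 are gone; no colour correction brings them back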

    Anyway, now I'm rambling.

  • LenioTG Posts: 2,118
    PNG uses lossless compression, using more storage space to ensure perfect reproduction of the image.

    JPG is lossy compression that throws away fine image detail in the name of reducing file size. High quality JPEGs are usually okay for most real life photos, because camera images are naturally at least a little noisy and you're not worried if it doesn't reproduce the noise in the image exactly. (For an analogy, JPEGs are a bit like translated text. It won't be perfect - perhaps the adjectives have slightly different connotations - but you'll probably get the general meaning).

    If you're doing editing on an image, and eventually need to compromise on file size (upload limits/download speed/whatever), you only really want to suffer that loss of detail once, and only at the final stage, in order to be able to best edit the image.
    This is particularly important where these denoisers are concerned, because they're trying to spot the difference between ray-tracing noise and other detail, so smudging and smearing it just makes it harder and more likely it will guess wrong in the "noise or detail" game.

    ~~~~~

    It should be said that even PNG is a compromise, as it uses 8 or 16 bit integer bit depth (8 being more common), meaning it often has to lose contrast in very bright or dark areas in order to have more contrast for the majority of the image - a concept called "clipping". This can become important when colour correcting, as the contrast outside the image's range is lost and cannot be recovered - if an image is over or under exposed, the damage is done.

    Formats like EXR, HDR and PFM instead store floating point data that can represent a huge range of values with good precision at any order of magnitude, and are excellent for heavy post processing, but they have huge file sizes* and cannot actually be properly displayed on any monitor without range compression (because while the format can accurately store the brightness of the sun at high noon, no monitor can actually accurately replicate that).

    * Frequently 20-100x the size of a JPEG file of the same resolution, so not a sensible format unless you actually need perfect contrast across huge dynamic range.

    Anyway, now I'm rambling.

    Thank you for the detailed explanation Matt!! :D

    I often publish JPG renders...mostly because when I save from PSD in Photoshop it's easier to spotter than PNG.
    Do you think I would see a noticeable difference in the final render if I used PNG instead of JPG?

  • outrider42 Posts: 3,679

    It depends on how large your render is. To be perfectly honest, you will most likely not see any real difference unless you render massive images and then examine those images under a microscope. You can always test this for yourself by rendering the same scene and saving it as a PNG and then as a JPG. But in general, I always render out as PNG for the simple fact that PNG supports transparency. That means you can render just a character or two on a transparent background and photoshop that image much more easily into another. For people who make book covers and that sort of thing, PNG is a must.
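    As a rough sketch of that workflow, the compositing step amounts to little more than this once you have a character rendered over a transparent background; the file names are placeholders for your own renders.

    from PIL import Image

    character = Image.open("character_render.png").convert("RGBA")  # PNG keeps the alpha channel
    backdrop = Image.open("backdrop.png").convert("RGBA").resize(character.size)

    composite = Image.alpha_composite(backdrop, character)          # character layered over the backdrop
    composite.convert("RGB").save("composite.jpg", quality=95)      # flatten and export for publishing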

  • Taoz Posts: 9,941
    TGFan4 said:
    (However, that said, anyone using JPG prior to denoising deserves a slap). cheeky

    I use .png.

    Since I'm always curious, what's the difference? Is it the fact that JPG is more compressed?

    PNG uses lossless compression, using more storage space to ensure perfect reproduction of the image.

    Just want to add that PNG has 10 compression levels (0 - 9), which you may be confronted with when saving PNG files in certain programs (e.g. IrfanView). People often think it has something to do with picture quality, which is not the case; the quality is the same at any compression level. Higher compression just means a smaller file size; the price is a slower read/write time, but on today's computers that's hardly noticeable except perhaps for very large pictures. So normally you can use level 9 for the smallest file size. Never use 0, as that means no compression (huge file size).
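    If anyone wants to verify that, a small Pillow loop does the trick: every compression level writes exactly the same pixels, and only the file size and write time change. render.png is a placeholder for one of your own renders.

    from PIL import Image
    import os
    import time

    img = Image.open("render.png")
    reference = list(img.convert("RGB").getdata())   # the pixels we expect back from every level

    for level in (0, 1, 6, 9):
        out = f"test_level{level}.png"
        start = time.perf_counter()
        img.save(out, compress_level=level)          # Pillow exposes PNG's 0-9 compression levels
        elapsed = time.perf_counter() - start
        identical = list(Image.open(out).convert("RGB").getdata()) == reference
        print(f"level {level}: {os.path.getsize(out) // 1024} KB, "
              f"{elapsed:.2f} s to write, pixels identical: {identical}")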

  • Matt_Castle Posts: 2,585
    edited May 2019
    Taoz said:
    TGFan4 said:
    (However, that said, anyone using JPG prior to denoising deserves a slap). cheeky

    I use .png.

    Since I'm always curious, what's the difference? Is it the fact that JPG is more compressed?

    PNG uses lossless compression, using more storage space to ensure perfect reproduction of the image.

    Just want to add that PNG has 10 compression levels (0 - 9), which you may be confronted with when saving PNG files in certain programs (e.g. IrfanView). People often think it has something to do with picture quality, which is not the case; the quality is the same at any compression level. Higher compression just means a smaller file size; the price is a slower read/write time, but on today's computers that's hardly noticeable except perhaps for very large pictures. So normally you can use level 9 for the smallest file size. Never use 0, as that means no compression (huge file size).

    I think it's actually mostly write time, as it tries multiple different compression methods to see which works best for that specific image. Any difference in read time is, as far as I know, entirely negligible.

    Post edited by Matt_Castle on
  • LenioTG Posts: 2,118

    It depends on how large your render is. To be perfectly honest, you will most likely not see any real difference unless you render massive images and then examine those images under a microscope. You can always test this for yourself by rendering the same scene and saving it as a PNG and then as a JPG. But in general, I always render out as PNG for the simple fact that PNG supports transparency. That means you can render just a character or two on a transparent background and photoshop that image much more easily into another. For people who make book covers and that sort of thing, PNG is a must.

    Yes, I render in .png, but I never publish something without editing it in Photoshop at least a little bit!
    And from PS I save in .jpg.

    Then I'll continue doing so, thanks for the explanation! :D

  • Taoz Posts: 9,941


    TGFan4 said:

    It depends on how large your render is. To be perfectly honest, you will most likely not see any real difference unless you render massive images and then examine those images under a microscope. You can always test this for yourself by rendering the same scene and saving it as a PNG and then as a JPG. But in general, I always render out as PNG for the simple fact that PNG supports transparency. That means you can render just a character or two on a transparent background and photoshop that image much more easily into another. For people who make book covers and that sort of thing, PNG is a must.

    Yes, I render in .png, but I never publish something without editing it in Photoshop at least a little bit!
    And from PS I save in .jpg.

    Then I'll continue doing so, thanks for the explanation! :D

    AFAIK the DAZ forum doesn't reduce the quality of PNG files you upload, while it does recompress JPGs to save space, so you may want to use PNG if you don't want the quality of your uploaded renders to be reduced.

  • LenioTG Posts: 2,118
    Taoz said:
    AFAIK the DAZ forum doesn't reduce the quality of PNG files you upload, while it does recompress JPGs to save space, so you may want to use PNG if you don't want the quality of your uploaded renders to be reduced.

    I upload on DeviantArt!

    Then I'll do some tests, thank you! :D

  • 3dOutlaw Posts: 2,471
    edited May 2019
    Just want to add that PNG has 10 compression levels (0 - 9), which you may be confronted with when saving PNG files in certain programs (e.g. IrfanView). People often think it has something to do with picture quality, which is not the case; the quality is the same at any compression level. Higher compression just means a smaller file size; the price is a slower read/write time, but on today's computers that's hardly noticeable except perhaps for very large pictures. So normally you can use level 9 for the smallest file size. Never use 0, as that means no compression (huge file size).

    That is helpful information thanks!  I always wondered how that affected the image.  Now I can stop guessing!

    Post edited by 3dOutlaw on
  • Taoz Posts: 9,941

    DragNDrop for the denoisers has been updated, now with support for Albedo and Normal AOVs, as well as other improvements. More info and download link here:

    https://taosoft.dk/software/freeware/dnden/

    Let me know if there are any problems.

    Declan (who wrote the denoiser scripts) also asked me to share some info, to clear up some of the apparent confusion around the AOVs:

    "The denoisers can take up to two feature buffers, the normal and the albedo AOV. The normal AOV will help preserve the normals on the geo which is especially important for the finer details such as bump mapping. The albedo buffer will help preserve texture details. The albedo AOV should be a weighted sum of albedo layers with a result between [0, 1]. The Arnold renderer has a builtin LPE called denoise_albedo which does this and is worth checking out. For further information on this check out the oidn documentation on the subject https://github.com/OpenImageDenoise/oidn#rt "

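    As a rough sketch of the kind of sanity check that note implies before handing the AOVs to a denoiser: the feature buffers should match the beauty canvas's resolution, and the albedo should end up inside [0, 1]. The file names and the single-layer weighting below are placeholder assumptions, not anything Iray or DragNDrop actually does.

    from PIL import Image
    import numpy as np

    beauty = Image.open("render_beauty.png")
    albedo = Image.open("render_albedo.png").convert("RGB")
    normal = Image.open("render_normal.png").convert("RGB")

    for name, aov in (("albedo", albedo), ("normal", normal)):
        if aov.size != beauty.size:
            raise ValueError(f"{name} AOV is {aov.size}, but the beauty canvas is {beauty.size}")

    # Weighted sum of (here, just one) albedo layers, clipped into [0, 1]:
    layers = [np.asarray(albedo, dtype=np.float32) / 255.0]
    weights = [1.0]
    combined = np.clip(sum(w * l for w, l in zip(weights, layers)), 0.0, 1.0)

    Image.fromarray((combined * 255).astype(np.uint8)).save("render_albedo_clean.png")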
  • LenioTG Posts: 2,118
    Taoz said:

    DragNDrop for the denoisers has been updated, now with support for Albedo and Normal AOVs, as well as other improvements. More info and download link here:

    https://taosoft.dk/software/freeware/dnden/

    Let me know if there are any problems.

    Declan (who wrote the denoiser scripts) also asked me to share some info, to clear up some of the apparent confusion around the AOVs:

    "The denoisers can take up to two feature buffers, the normal and the albedo AOV. The normal AOV will help preserve the normals on the geo which is especially important for the finer details such as bump mapping. The albedo buffer will help preserve texture details. The albedo AOV should be a weighted sum of albedo layers with a result between [0, 1]. The Arnold renderer has a builtin LPE called denoise_albedo which does this and is worth checking out. For further information on this check out the oidn documentation on the subject https://github.com/OpenImageDenoise/oidn#rt "

    You've set up a great and clear page there; from now on I'll link to it instead of this thread, which has become a little bit scarily long! xD

    I still haven't figured out this albedo and AOV stuff... maybe that's for another day, when I've gotten better! xD

  • Taoz Posts: 9,941
    TGFan4 said:
    Taoz said:

    DragNDrop for the denoisers has been updated, now with support for Albedo and Normal AOVs, as well as other improvements. More info and download link here:

    https://taosoft.dk/software/freeware/dnden/

    Let me know if there are any problems.

    Declan (who wrote the denoiser scripts) also asked me to share some info, to clear up some of the apparent confusion around the AOVs:

    "The denoisers can take up to two feature buffers, the normal and the albedo AOV. The normal AOV will help preserve the normals on the geo which is especially important for the finer details such as bump mapping. The albedo buffer will help preserve texture details. The albedo AOV should be a weighted sum of albedo layers with a result between [0, 1]. The Arnold renderer has a builtin LPE called denoise_albedo which does this and is worth checking out. For further information on this check out the oidn documentation on the subject https://github.com/OpenImageDenoise/oidn#rt "

    You've set up a great and clear page there; from now on I'll link to it instead of this thread, which has become a little bit scarily long! xD

    I still haven't figured out this albedo and AOV stuff... maybe that's for another day, when I've gotten better! xD

    There are some tutorials on YouTube on how to create an albedo map from a diffuse map in Photoshop. PNG albedos made in Photoshop CS6 give an sRGB error message in the Nvidia denoiser, however. It should be possible to fix this by editing the PNG files, but I'm not quite sure how yet.
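    One thing that might be worth trying, assuming the error comes from the sRGB/ICC colour-profile chunks Photoshop embeds in the file, is to copy the raw pixels into a fresh image and save that, which leaves the metadata chunks behind. This is only a guess at the cause, not a confirmed fix.

    from PIL import Image

    # Assumes an RGB/RGBA PNG saved from Photoshop; the file names are placeholders.
    src = Image.open("albedo_from_photoshop.png")
    clean = Image.new(src.mode, src.size)       # fresh image with no metadata attached
    clean.putdata(list(src.getdata()))          # copy the pixels only
    clean.save("albedo_stripped.png")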

  • Mattymanx Posts: 6,908

    Thank you for the update, Taoz!!!  Your DnD UI makes life so much easier for such a simple task. (Because I don't like having to type stuff out on a command line if I don't have to.)

  • Taoz Posts: 9,941
    Mattymanx said:

    Thank you for the update, Taoz!!!  Your DnD UI makes life so much easier for such a simple task. (Because I don't like having to type stuff out on a command line if I don't have to.)

    You're welcome!

  • Okay, I have this thing downloaded and placed in the folder of my choice, and I have the paths set up. Now what? Drag images from where to where? Is the image supposed to come from where my image is, as in the source file? I'm confused as to where to drop the image. DnD? The Denoiser.exe?

    Annotation 2019-05-17 162022.png
    1600 x 900 - 179K
  • Taoz Posts: 9,941
    edited May 2019

    Okay, I have this thing downloaded and placed in the folder of my choice, and I have the paths set up. Now what? Drag images from where to where? Is the image supposed to come from where my image is, as in the source file? I'm confused as to where to drop the image. DnD? The Denoiser.exe?

    Just use the mouse to drag the saved render(s) from Explorer or whatever file manager you use onto the DnD app.

    Post edited by Taoz on