IRAY Photorealism?


Comments

  • UncannyValet Posts: 218
    edited November 2023

    dbmelvin1993 said:

    As for the hair shader itself, that you can put together over a coffee knowing where to look (Sorry uncannyvalet, but i've not used your shader and instead done it myself. Knew of it's existance already, but i won't drop them here either as i've completely done them, and for Both SBH and Non-SBH)

    I wouldnt mind if you wanted to share your own hair shader. I scarcely put any effort into my own one, which is why i offer it as freebie on the 3dshards site. 

    Attached image totally unrelated to anything

     

    0008.jpg
    1920 x 1920 - 336K
    Post edited by UncannyValet on
  • dbmelvin1993

    UncannyValet said:

    [...]

    That's how I got it, but since you are technically still selling it, I'm not dropping them here.

  • Gogger Posts: 2,401
    edited November 2023

    I was mucking about with DoF on a scene I was working on (UltraScenery, and DinoRaul beasties) and was surprised when this render looked like SO MANY of my real-world nature photographs. I like it for what it is, and what it isn't. :)

    FS_CreosoteBush_3D_Erik_Pedersen.jpg
    1920 x 1200 - 215K
    Post edited by Gogger on
  • Dartanbeck Posts: 21,626

    Beautiful!

  • charles Posts: 848
    edited January 31

    So anyone here who knows me... knows me... and I didn't die from COVID, no matter how many probably wish I had...

    nope

    I've been working the last few years on new techniques to realistic the S**T out of Daz for FUN and PROFIT!

     

    Image removed

    Yes I ran MadSkills Studio, it may be back sometime soon with new tools.

    Anyone is welcome to join Mara and me every night on Discord for fun and giggles; just PM me.

    Post edited by Richard Haseltine on
  • lilweeplilweep Posts: 2,531

    Can't say I was following your life story.  

    The face and hairline look good.

    I read on your gallery you used AI to enhance the image.  Perhaps that explains why the hair and face are particularly good. 

    I notice many people's Daz renders suddenly have realistic facial expressions and good hair (although the artists often leave residual AI artifacts in their images). See some of the trending images in the gallery right now for such examples, although they often do not disclose the use of AI.

  • takezo_3001 Posts: 1,989
    edited January 31

    charles said:

    [...]

    Holy CRAP! That is nothing short of magnificent work; those eyes look 100% real, among other things that contribute to its realism!

    Post edited by Richard Haseltine on
  • charles Posts: 848
    edited February 1

    takezo_3001 said:

    [...]

    Holy CRAP! That is nothing short of magnificent work; those eyes look 100% real, among other things that contribute to its realism!

    It's something I've come up with that I call AI bumping. Think of it as postwork using certain Stable Diffusion models and LoRAs to enhance the render. Keep in mind this requires your Daz image to be freaking good in the first place. It's all about adding in extra details like skin, hair, and clothing. In that case there was also extra inpainting for the hot tub bubbles and other things.

    It requires a lot of sampling, well, for me anyway. I do batches of 2-10, looking for the best parts of each at various low denoising strengths (if any), then take the good ones into PS and blend them all together into the final image. In the example posted I did not do my final lighting postwork, though, just to show the AI postwork stuff.

    You can bump the denoising up, but when you do, the results are going to drift more and more from your original Daz image. If you are working on something like graphic novels, as I do, consistency is key, so the less drift the better. If you just want to do a one-off image there are no limits, except that the more you let the image drift, the more possibilities there are for AI weirdness to creep in.
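
    To make the low-denoise pass concrete, here is a minimal sketch using Hugging Face diffusers. The checkpoint, prompt, file names, and strength values are illustrative assumptions, not the exact setup described above; any photoreal SD checkpoint can stand in.

```python
# Hypothetical low-denoise "bumping" pass: small img2img batches at several
# strengths, saved out so the best parts can be blended later in Photoshop.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

render = Image.open("daz_render.png").convert("RGB")  # your finished Iray render

# Low strength keeps the result close to the Daz original; raising it
# adds detail but increases drift, as described above.
for strength in (0.15, 0.25, 0.35):
    images = pipe(
        prompt="photo, detailed skin, natural lighting",
        image=render,
        strength=strength,          # the denoising knob
        guidance_scale=6.0,
        num_images_per_prompt=4,    # small batch to cherry-pick from
    ).images
    for i, img in enumerate(images):
        img.save(f"bump_s{strength:.2f}_{i}.png")
```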

    I also use key models for the face for consistency; I have a large collection of EU models shot from different angles. You only need one, but it needs to be good. People often gripe that this is too difficult right now, but using IP-Adapter ControlNets you can get pretty good results. If you try to use Reactor, or even worse, Face Swap, you're often going to get a lot of junk, as they can't blend the face into the lighting of the scene as well. And that's kind of the key: if the lighting on the face doesn't match the scene, it's going to look like a cheap PS job.
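
    As a sketch of the face-control idea (not necessarily the exact workflow above), recent diffusers versions can attach an IP-Adapter face model to the same img2img pipeline, so the identity is blended in during denoising rather than pasted on afterward. The repo and weight names below are the public h94/IP-Adapter files; checkpoint and file names are assumptions.

```python
# Hypothetical identity steering with an IP-Adapter face model, so the face
# inherits the scene's lighting instead of being swapped in afterward.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models",
    weight_name="ip-adapter-full-face_sd15.bin",
)
pipe.set_ip_adapter_scale(0.6)  # lower = weaker identity pull, less drift

render = Image.open("daz_render.png").convert("RGB")      # Iray render
face_ref = Image.open("control_face.png").convert("RGB")  # "key model" photo

result = pipe(
    prompt="photo portrait, natural skin",
    image=render,
    ip_adapter_image=face_ref,  # identity applied during denoising
    strength=0.3,               # keep drift from the Daz original low
).images[0]
result.save("bumped_face.png")
```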

    One of the key things is to have your Daz character as close to your control model as possible; otherwise it's trying to fit too much. I will often reshape the Daz character's face, especially the nose, to be as close to the ControlNet model's as possible for the best fit and the least drift.

    Also, the best thing about this technique is that it can totally correct poor Daz anatomy and tone overblown skin specular and gloss down to realistic levels, as well as fix fake-looking hair and make clothing really POP!

    Finally, the best part is that you remain in complete control: scene, lighting, framing, camera setup, posing, it's all you, not some random thing from the AI. The AI is just used to nudge the work toward more realism and add details where there were none.

    I have a template; if anyone else wants it, let me know and I'll share it. Here is the test for consistency results. Some of these angles are really hard for it to match (which was the point), and if it can't fit ALL of them, that's not necessarily a deal breaker. But it does give one an idea of what the limits of consistency are going to be. It just depends on how much I want to use a certain character throughout my work.

     

    Untitled-35.png
    628 x 628 - 310K
    Post edited by charles on
  • takezo_3001 Posts: 1,989

    charles said:

    [...]

    Yeah, I have a hand in beta testing an AI-driven video upscaling and frame-interpolation program, and they're using facial models to train their models... I'd like to see what's possible with AI, but that depends on how they're obtaining the input graphics they train their models with; if it's stock photos/art, then good on them; otherwise... But yeah, I'd be interested in seeing Daz take advantage of its own AI model integration in Daz Studio in the future, hopefully...

    I have my eyes open for offline, internet-independent programs that can extract 3D animation from any and all sources of video, not just video shot by the end user but found footage as well; being able to transfer that to Genesis would be groundbreaking!

  • charles Posts: 848
    edited February 1

    takezo_3001 said:

     

    [...]

    It would be, and it might be sooner rather than later. The problem currently, as I see it, is that animating inside Daz as the system stands would be grueling. Obtaining an OpenPose skeleton from a single frame or image and then converting that to a Daz pose would not be difficult, and it's on my list of stuff to do. Then it's just a matter of extrapolating that across a video into a series of poses for animation, and there you go. Right now something like that would be even easier to do in Blender or other systems or game engines, but yeah, it's totally doable inside Daz too.

    However, one of the issues from earlier tests this year is that the OpenPose model's limb stick does not really match up with the bone structure of the Daz character that well. Training an AI limb stick tailored to Genesis would be more ideal, but that adds a lot more work and time to such a development than just using what's already available.

     

    Here is the test of an OpenPose limb stick matched to a Daz pose (I can't find the image it was matched to, but it was some woman holding a gun, I think). Notice how the limbs are distorted in ways they should not be. The conclusion from the test was that OpenPose is not a good tool for Genesis posing; something else is needed, maybe even some new type of rigging. There are still several angles from which one could tackle the solution, but it's probably wise to take the time to figure out which has the best potential, weigh each, and do more tests to help figure that out.
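
    For reference, grabbing the limb stick itself from a frame is the easy step; a minimal sketch with the controlnet_aux package is below (file names are placeholders). Mapping those keypoints onto Genesis bones is the unsolved part discussed above.

```python
# Hypothetical first step: extract an OpenPose "limb stick" from one frame.
from controlnet_aux import OpenposeDetector
from PIL import Image

detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

frame = Image.open("video_frame.png").convert("RGB")
pose_map = detector(frame)  # rendered skeleton image, one per frame
pose_map.save("openpose_stick.png")
# Converting these 2D keypoints into Daz/Genesis bone rotations is the part
# with no good off-the-shelf solution, per the test above.
```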

     

    op_test1.png
    1440 x 1080 - 92K
    Post edited by charles on
  • charles Posts: 848
    edited August 30
    ashley_test_ai1.png
    1440 x 1440 - 4M
    Post edited by charles on
  • charles Posts: 848
    edited August 30

    ..

    Post edited by charles on
  • charles Posts: 848
    edited August 30

    Masterstroke said:

    I'd like to add something else to all this. 
    The problem in IRAY cannot just be the skin shaders; it must also be the way light works in DS IRAY.
    My observation is that with one HDRI plus one light source you can get a quite realistic-looking scene, but the more light sources you add to the scene, the more it looks like a CG scene.
    So I guess that some sort of inaccuracy in the lights (lights and mesh lights) is accumulating and ruining realism.

    A late reply, but I totally agree: lighting is key to realism. You don't HAVE to use an HDRI to get there, but for easy, cheap setups it's your fastest route. The downside is that you don't have real tweak control over it. To get where you want using normal lights, you have to explore ghost lights and normal camera lights. A normal spotlight can act like a ghost light just fine... hell, sometimes I use headlamps! Especially if you want an image that looks like it was shot with an actual camera, the headlamp is your friend.

     

    Post edited by charles on
  • lilweep Posts: 2,531

    Yes AI can certainly do a lot of heavy lifting to elevate fairly mediocre renders for sure.

  • charles Posts: 848

    lilweep said:

    Yes AI can certainly do a lot of heavy lifting to elevate fairly mediocre renders for sure.

    Yes that was kind of my point. 

  • Masterstroke Posts: 2,000

    charles said:

    Updates...

    Starting with this Daz character portrait.

    https://www.youtube.com/shorts/VTTRf5HIX-k

    https://youtube.com/shorts/OV4yyfvQ9jk?si=ZMa0wCgVJtAGvneu

     

    Now I'd like to know more about this. 

  • charles Posts: 848
    edited September 1

    Masterstroke said:

    [...]

    Now I'd like to know more about this. 

     

    Sure.

    I wanted to share a technique I’ve been working on that I call "bumping." It's a method in Image 2 Image where you subtly enhance a Daz image to give it more life. I’m still refining the process, so I’m not quite ready to share a full walkthrough just yet. However, once you’ve achieved the desired effect (or even if you haven’t), you can use the new KlingAI for Image to Video conversion. If you’re looking to create a talking head synced with text or music, HedraAI is great for that.

    If anyone is interested in learning more about the AI bumping technique, the models being used, or how to set it up, feel free to PM me for a Discord invite.

    On Discord, I’ll go over the setup, the process, and the current challenges that still need work. In my opinion, this is still very much an art form, with AI serving as a powerful tool to push Daz images closer to realism. But to maintain consistency and avoid randomness, it’s crucial to set up control profiles. My goal, in this case, is to ensure that the characters remain consistent throughout the process.

     

    Here is an example (I hope it's OK to use) to show the result using just one control image.

    One thing you may notice is that it fixes not just the face, eyes, hair, clothing, even the necklace; most importantly, for popping most Daz renders into realism, it fixes the specular gloss to look more natural. This is one of the biggest things this does to help get over the uncanny valley.

     

     

     

    masterstroke1.png
    1040 x 1040 - 1M
    Post edited by charles on
  • Masterstroke Posts: 2,000

    charles said:

     

    [...]

    Oh wow! Real progress in terms of realism, although I think the main character's features got lost here.

  • bluejaunte Posts: 1,903

    It just looks like a photo. Funny that the AI stuff is finally making us ask: did we want a photo in the first place? Or did we want a very realistic looking render that still looks like a render?

  • charles Posts: 848
    edited September 1

    Masterstroke said:

    [...]

    Oh wow! Real progress in terms of realism, although I think the main character's features got lost here.

     

    And that's the reason for the control images. You usually need a series of profile-angle and frontal shots; otherwise you get "drift," which is the AI randomly making things up and filling in the gaps without direction.

    It's also still very much an art, so you have to know which key features you want to preserve. It's not just "submit image and done"; there is a lot of Photoshop work involved after you crank out some passes. I usually take 2-3 along with the original and combine the best parts of each. This is why you can still get perfect fingers: you can always use the ones from the Daz render.
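
    The blend step itself is ordinary masked compositing. Here is a rough sketch of the idea in Python with Pillow, standing in for the Photoshop layer work; all file names are placeholders.

```python
# Hypothetical "best parts" composite: take the face from an AI pass and let
# everything else (fingers, background) fall through to the Iray original.
from PIL import Image, ImageFilter

original = Image.open("daz_render.png").convert("RGB")  # Iray render
ai_pass = Image.open("bump_pass_2.png").convert("RGB")  # chosen AI pass

# Hand-painted mask: white where the AI pass should win. Feathering the
# edge hides the seam, like a soft brush on a Photoshop layer mask.
mask = Image.open("face_mask.png").convert("L")
mask = mask.filter(ImageFilter.GaussianBlur(radius=12))

final = Image.composite(ai_pass, original, mask)
final.save("final_blend.png")
```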

     

    bluejaunte said:

    It just looks like a photo. Funny that the AI stuff is finally making us ask: did we want a photo in the first place? Or did we want a very realistic looking render that still looks like a render?

     

    I think the original goal of this thread was always to seek out how to use Daz as a tool for photorealism. Granted, some people have used Blender, Maya, and other systems for rendering. It's not that Daz's Iray is bad; it's that the shader system in general has issues rising to that level of realism in a lot of cases, plus the Genesis series isn't really designed for full realism. It's a hybrid between that and characterization, meant to be more universal.

     

    Post edited by charles on
  • bluejaunte Posts: 1,903

     

    charles said:

    [...]

     

    Yeah. I'm asking if we were wrong to have this goal all along. Now that we have AI basically transforming our render into full photorealism, I'm not sure anymore if I ever really wanted it.

  • Richard Haseltine

    I think this is one of those "Who is this 'We'..." situations. Some people never were aiming for photo-real; others may actually be happy with the new tools.

  • Masterstroke Posts: 2,000

    Richard Haseltine said:

    I think this is one of those "Who is this 'We'..." situations. Some people never were aiming for photo-real, others may actually be happy with the new tools.

    I'd be happy with "most realistic," but without including generative AI.
    On the other hand, under-the-hood AI tools can help.

  • charles Posts: 848
    edited September 3

    Masterstroke said:

    Richard Haseltine said:

    I think this is one of those "Who is this 'We'..." situations. Some people never were aiming for photo-real, others may actually be happy with the new tools.

    I'd be happy with "most realistic", but without including generative AI. 
    On the other hand, under the hood  AI tools can help.

    It depends on who you ask, but since this thread is focused on Iray Photorealism, it's important to distinguish that from other styles like semi-realism or kinda-realism. AI generative systems are now capable of producing images that approach true photorealism, especially with the right models. However, the results can be quite unpredictable—sometimes you hit the mark, and other times it feels like a complete gamble.

    There are various tools like ControlNets, LoRAs, and pose controllers that can help steer the AI towards more consistent and desired outcomes. These tools certainly reduce the element of chance, but they don't entirely replace the hands-on control you get when using a tool like Daz to meticulously build and compose your scene, particularly with Iray rendering in mind.

    In this context, AI can serve as a powerful post-processing tool to enhance a Daz-rendered image, but it doesn't replace the detailed work involved in creating a truly photorealistic render with Iray from the ground up.

    My method allows for a bit more flexibility—or you might even say, laziness—when it comes to scene details. In the past, I might have spent an hour meticulously adjusting a necklace to make sure it wasn't hovering off the neckline. Now, the AI can often recognize that it's a necklace and automatically adjust it to fit properly. It also enhances details like hair and clothing, adding fine wrinkles to garments that might otherwise look like flat sheets of vertices.

    As we've discussed in earlier posts, adding these kinds of details is crucial for achieving photorealism, whether it's scratches on furniture, lint on socks, or sandy feet. AI bumping can introduce many of these elements with minimal effort, though it does tend to be more random. Still, I'd rather spend my time on other aspects of the scene than obsessing over every wrinkle on a shirt.

    However, when it comes to faces, I’m definitely focused on minimizing as much of the randomness as possible. The goal is to ensure the features are consistent and natural-looking, correcting any elements that appear off or unnatural. It's about striking the right balance—letting the AI enhance where it can while making sure key aspects like the face are as precise and controlled as possible.
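
    For a localized fix like the necklace example above, an inpainting pass over a masked region is one plausible way to do it. This sketch uses diffusers' inpainting pipeline; the model id, prompt, and file names are assumptions, not a specific recommended setup.

```python
# Hypothetical localized fix: regenerate only a masked region (the necklace)
# while leaving the rest of the Iray render untouched.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # placeholder inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

render = Image.open("daz_render.png").convert("RGB")
mask = Image.open("necklace_mask.png").convert("L")  # white = regenerate

fixed = pipe(
    prompt="thin gold necklace resting naturally on the collarbone",
    image=render,
    mask_image=mask,
).images[0]
fixed.save("necklace_fixed.png")
```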

     

    Post edited by charles on
  • bluejaunte Posts: 1,903

    It's really not Iray at that point, right? Man, this whole AI thing is really turning things upside down.

  • charles Posts: 848
    edited September 3

    bluejaunte said:

    It's really not Iray at that point, right? Man, this whole AI thing is really turning things upside down.

    It is still Iray. AI is used as a post-processing touch-up, that's it. How much of your original Iray image you want to keep is up to you. I often let things like backgrounds and fingers fall through to the original Iray render, keeping just the parts of the AI output that improve the features I need improved. I'll put together a better demonstration later this week.

     

     

    Post edited by charles on
  • bluejaunte Posts: 1,903

    charles said:

    [...]

    Iray is involved, of course. But the photorealism does not come from Iray, not in the video example you showed anyway. Iray is only used to give the absolute basics to the AI, which does the heavy lifting. I'd be hesitant to call this post-processing.

    Absolutely nothing wrong with that. I'm just saying this has little to do with Iray. You could use any other crap renderer, probably even Filament or a screenshot of the viewport.

  • charles Posts: 848
    edited September 4

    ...

    Post edited by charles on
  • bluejaunte Posts: 1,903

    charles said:

    [...]

    Sorry, but I have to disagree with you. Your gallery shows clear signs of relying heavily on Photoshop, particularly with one-click filters. If you believe in taking things to the next level, then it seems contradictory to have a gallery dominated by Photoshop filtering.

    The AI actually enhances the Iray render in ways that go beyond what Photoshop filters can achieve. And the best part is, you can still go back and add post-processing in Photoshop if you want—I do it too! This can help fix issues like bad shading, specular highlights, and other Daz-related quirks. But at the end of the day, it's your art, and it's totally up to you how you want to refine it.

     

     

     

    In Photoshop, when you're adding filters and maybe correcting the exposure or things like that, you're trying to enhance the render. AI regenerates the whole image completely. Even if you force it to stay as close to the original as possible, it still technically generates a new image. Hence, to me at least, what you're seeing after that is not Iray. It is an AI-generated image that was made from an Iray render, and you could have used any other renderer, probably even a Filament image, as mentioned. It's just a matter of prompting and settings.

    An equivalent could be photoshopping a face from a real photo onto your render. Nobody would call that post-processing. But it doesn't matter much; nobody cares in the end what was used as long as it looks good. But this is a thread about Iray photorealism, so I feel it's at least worth pointing out that we are leaving that realm once AI image generation is involved.

    It's still interesting though! Please don't take this the wrong way and by all means continue to show your results :)
