IRAY Photorealism?


Comments

  • UncannyValet Posts: 144
    edited November 2023

    dbmelvin1993 said:

    As for the hair shader itself, that's something you can put together over a coffee if you know where to look. (Sorry UncannyValet, but I've not used your shader and have instead done it myself. I knew of its existence already, but I won't drop mine here either, as I've done them completely myself, for both SBH and non-SBH.)

    I wouldn't mind if you wanted to share your own hair shader. I scarcely put any effort into my own one, which is why I offer it as a freebie on the 3dshards site. 

    Attached image totally unrelated to anything

     

    0008.jpg
    Post edited by UncannyValet on
  • UncannyValet said:

    dbmelvin1993 said:

    As for the hair shader itself, that's something you can put together over a coffee if you know where to look. (Sorry UncannyValet, but I've not used your shader and have instead done it myself. I knew of its existence already, but I won't drop mine here either, as I've done them completely myself, for both SBH and non-SBH.)

    I wouldn't mind if you wanted to share your own hair shader. I scarcely put any effort into my own one, which is why I offer it as a freebie on the 3dshards site. 

    Attached image totally unrelated to anything

     

    That's how I got it, but since you are technically still selling it, I'm not dropping mine here.

  • Gogger Posts: 2,311
    edited November 2023

    I was mucking about with DoF on a scene I was working on (UltraScenery and DinoRaul beasties) and was surprised when this render looked like SO MANY of my real-world nature photographs. I like it for what it is, and what it isn't.  :)

    FS_CreosoteBush_3D_Erik_Pedersen.jpg
    Post edited by Gogger on
  • Dartanbeck Posts: 21,229

    Beautiful!

  • charles Posts: 775
    edited January 31

    So anyone here who knows me... knows me... and I didn't die from COVID, no matter how many probably wish I did....

    nope

    I've been working the last few years on new techniques to realistic the S**T out of Daz for FUN and PROFIT!

     

    Image removed

    Yes I ran MadSkills Studio, it may be back sometime soon with new tools.

    Anyone is welcome to join Mara and me every night on Discord for fun and giggles just PM me.

    Post edited by Richard Haseltine on
  • lilweep Posts: 2,241

    Can't say I was following your life story.  

    The face and hairline look good.

    I read in your gallery that you used AI to enhance the image. Perhaps that explains why the hair and face are particularly good. 

    I notice many people's Daz renders suddenly having realistic facial expressions and good hair (although the artists often leave residual AI artifacts in their images). See some of the trending images in the gallery right now for such examples, although they often do not disclose the use of AI.

  • takezo_3001 Posts: 1,936
    edited January 31

    charles said:

    So anyone here who knows me... knows me... and I didn't die from COVID, no matter how many probably wish I did....

    nope

    I've been working the last few years on new techniques to realistic the S**T out of Daz for FUN and PROFIT!

     

    Image removed

    Yes I ran MadSkills Studio, it may be back sometime soon with new tools.

    Anyone is welcome to join Mara and me every night on Discord for fun and giggles just PM me.

    Holy CRAP! That is nothing short of magnificent work; those eyes look 100% real, among other things that contribute to its realism!

    Post edited by Richard Haseltine on
  • charles Posts: 775
    edited February 1

    takezo_3001 said:

    charles said:

    So anyone here who knows me... knows me... and I didn't die from COVID, no matter how many probably wish I did....

    nope

    I've been working the last few years on new techniques to realistic the S**T out of Daz for FUN and PROFIT!

     

    Image removed

    Yes I ran MadSkills Studio, it may be back sometime soon with new tools.

    Anyone is welcome to join Mara and me every night on Discord for fun and giggles just PM me.

    Holy CRAP! That is nothing short of magnificent work; those eyes look 100% real, among other things that contribute to its realism!

    It's a technique I've come up with that I call AI bumping. Think of it as postwork using certain Stable Diffusion models and LoRAs to enhance the render. Keep in mind this requires your Daz image to be freaking good in the first place. It's all about adding in the extra details, like skin, hair, and clothing detail. In that case there was also extra inpainting for the hot tub bubbles and other things.

    It requires a lot of sampling, well, for me anyway. I do batches of 2-10, looking for the best parts of each with various low denoising values, if any, then take the good ones into PS and blend them all together into the final image. In the example posted I did not do my final lighting postwork, though, just to show the AI postwork stuff.

    You can bump the denoising up, but when you do, the results will drift more and more from your original Daz image. If you are working on something like graphic novels, as I do, consistency is key, so the less drift the better. If you just want to do a one-off image there are no limits, except that the more you let the image drift, the more possibilities there are for AI weirdness to creep in.
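    The batch-and-blend step above is normally done by hand in PS, but the compositing math itself is simple. A minimal NumPy sketch (a hypothetical helper with toy arrays, not charles's actual workflow), assuming each low-denoise sample is already aligned to the base render and the masks stand in for hand-painted layer masks:

```python
import numpy as np

def blend_samples(base, samples, masks):
    """Composite the best regions of several img2img samples over a base render.

    base:    (H, W, 3) float array, the original Daz render.
    samples: list of (H, W, 3) float arrays, low-denoise AI variants.
    masks:   list of (H, W) float arrays in [0, 1], one per sample,
             marking the regions to keep from that sample.
    Later masks paint over earlier ones, like stacked Photoshop layers.
    """
    out = base.astype(np.float64).copy()
    for sample, mask in zip(samples, masks):
        m = mask[..., None]  # broadcast the mask over the RGB channels
        out = out * (1.0 - m) + sample * m
    return out

# Toy example: one sample replaces only the left half of the frame.
base = np.zeros((4, 4, 3))
sample = np.ones((4, 4, 3))
mask = np.zeros((4, 4))
mask[:, :2] = 1.0
result = blend_samples(base, [sample], [mask])
```

    With real images you would load each variant, paint soft-edged masks over the regions that came out best (face in one sample, hair in another), and feed them in the order you want them layered.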

    I also use key models for the face for consistency; I have a large collection of EU models in different angles. You only need one, but it needs to be good. People often gripe that this is too difficult right now, but using IP-Adapter ControlNets you can get pretty good results. If you try to use ReActor, or even worse, Face Swap, you're often going to get a lot of junk, as they can't blend the face to the lighting in the scene as well. And that's kind of the key: if the lighting on the face can't match the scene, it's going to look like a cheap PS composite.

    One of the key things is to have your Daz character as close to your control model as possible; otherwise it's trying to fit too much. I will often reshape the Daz character's face, especially the nose, to be as close to the ControlNet model's as possible for the best fit and least drift.
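    To make "as close as possible" measurable, one rough option is to compare matching face landmarks after removing position and scale, so the score reflects shape alone. A hypothetical sketch (the landmark sets and the scoring function are illustrative, not part of charles's process):

```python
import numpy as np

def landmark_mismatch(render_pts, control_pts):
    """Rough shape-fit score between face landmarks of the Daz render and
    the control model (lower = closer). Both inputs are (N, 2) sequences
    of corresponding landmarks (eye corners, nose tip, mouth corners...).
    """
    def shape(pts):
        p = np.asarray(pts, float)
        p = p - p.mean(axis=0)        # remove position
        return p / np.linalg.norm(p)  # remove scale
    a, b = shape(render_pts), shape(control_pts)
    return float(np.sqrt(((a - b) ** 2).sum() / len(a)))

# Identical landmark layouts score 0; a reshaped nose scores higher.
ref = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0), (1.0, 3.0)]
same = landmark_mismatch(ref, ref)
moved_nose = landmark_mismatch([(0.0, 0.0), (2.0, 0.0), (1.0, 1.5), (1.0, 3.0)], ref)
```

    Tweaking the Daz face morphs to drive this score down is one way to quantify the nose-reshaping step before committing to long sampling runs.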

    Also, the best thing about this technique is that it can totally correct poor Daz anatomy and tone overblown skin specular and gloss down to something realistic, as well as fix fake-looking hair and make clothing really POP!

    Finally, the best part is that you remain in complete control: the scene, lighting, framing, camera setup, posing, it's all you, not some random thing from AI. The AI is just used to nudge the work toward more realism and add details where there were none.

    I have a template for this which I'll share if anyone wants it, but here is the test for consistency results. Some of these angles are really hard for it to match (which was the point), and if it can't fit ALL of them, that's not necessarily a deal breaker. But it does give one an idea of what the limits of consistency are going to be. It just depends on how much I want to use a certain character through my work.
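    For anyone building a similar consistency template, tiling the test renders into one contact sheet makes the hard angles easy to compare side by side. A small hypothetical NumPy sketch (grid layout only, nothing from the actual template):

```python
import numpy as np

def contact_sheet(tiles, cols):
    """Tile equal-size test renders into one grid image for eyeballing
    face consistency across angles. tiles: list of (H, W, 3) arrays."""
    h, w, c = tiles[0].shape
    rows = -(-len(tiles) // cols)  # ceiling division
    sheet = np.zeros((rows * h, cols * w, c), dtype=tiles[0].dtype)
    for i, tile in enumerate(tiles):
        r, k = divmod(i, cols)  # row and column of this tile
        sheet[r * h:(r + 1) * h, k * w:(k + 1) * w] = tile
    return sheet

# Four 2x3 dummy "renders" in two columns -> one 4x6 sheet.
tiles = [np.full((2, 3, 3), i, dtype=np.uint8) for i in range(4)]
sheet = contact_sheet(tiles, cols=2)
```

    With real renders you would load each angle test as an array, tile them, and save the sheet out for a single side-by-side inspection pass.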

     

    Untitled-35.png
    Post edited by charles on
  • takezo_3001 Posts: 1,936

    charles said:

    It's a technique I've come up with that I call AI bumping. Think of it as postwork using certain Stable Diffusion models and LoRAs to enhance the render. Keep in mind this requires your Daz image to be freaking good in the first place. It's all about adding in the extra details, like skin, hair, and clothing detail. In that case there was also extra inpainting for the hot tub bubbles and other things.

    It requires a lot of sampling, well, for me anyway. I do batches of 2-10, looking for the best parts of each with various low denoising values, if any, then take the good ones into PS and blend them all together into the final image. In the example posted I did not do my final lighting postwork, though, just to show the AI postwork stuff.

    You can bump the denoising up, but when you do, the results will drift more and more from your original Daz image. If you are working on something like graphic novels, as I do, consistency is key, so the less drift the better. If you just want to do a one-off image there are no limits, except that the more you let the image drift, the more possibilities there are for AI weirdness to creep in.

    I also use key models for the face for consistency; I have a large collection of EU models in different angles. You only need one, but it needs to be good. People often gripe that this is too difficult right now, but using IP-Adapter ControlNets you can get pretty good results. If you try to use ReActor, or even worse, Face Swap, you're often going to get a lot of junk, as they can't blend the face to the lighting in the scene as well. And that's kind of the key: if the lighting on the face can't match the scene, it's going to look like a cheap PS composite.

    One of the key things is to have your Daz character as close to your control model as possible; otherwise it's trying to fit too much. I will often reshape the Daz character's face, especially the nose, to be as close to the ControlNet model's as possible for the best fit and least drift.

    Also, the best thing about this technique is that it can totally correct poor Daz anatomy and tone overblown skin specular and gloss down to something realistic, as well as fix fake-looking hair and make clothing really POP!

    Finally, the best part is that you remain in complete control: the scene, lighting, framing, camera setup, posing, it's all you, not some random thing from AI. The AI is just used to nudge the work toward more realism and add details where there were none.

    I have a template for this which I'll share if anyone wants it, but here is the test for consistency results. Some of these angles are really hard for it to match (which was the point), and if it can't fit ALL of them, that's not necessarily a deal breaker. But it does give one an idea of what the limits of consistency are going to be. It just depends on how much I want to use a certain character through my work.

    Yeah, I have a hand in beta testing an AI-driven video upscaling and frame-interpolating program, and they're using facial models to train their models... I'd like to see what's possible with AI, but that depends on how they're obtaining the input graphics they train their models with; if it's stock photos/art, then good on them; otherwise... But yeah, I'd be interested in seeing Daz take advantage of their own AI model integration into Daz Studio in the future, hopefully...

    I have my eyes open for offline/internet-independent downloadable programs that can extract 3D animation from any and all sources of video, not just video shot by the end user but found footage as well; being able to transfer that to Genesis would be groundbreaking!

  • charles Posts: 775
    edited February 1

    takezo_3001 said:

     

    Yeah, I have a hand in beta testing an AI-driven video upscaling and frame-interpolating program, and they're using facial models to train their models... I'd like to see what's possible with AI, but that depends on how they're obtaining the input graphics they train their models with; if it's stock photos/art, then good on them; otherwise... But yeah, I'd be interested in seeing Daz take advantage of their own AI model integration into Daz Studio in the future, hopefully...

    I have my eyes open for offline/internet-independent downloadable programs that can extract 3D animation from any and all sources of video, not just video shot by the end user but found footage as well; being able to transfer that to Genesis would be groundbreaking!

    It would be, and it might be sooner rather than later. The problem currently, as I see it, is that animating inside Daz with the current system would be grueling. Obtaining an OpenPose from a single frame or image and then converting that to a Daz pose would not be difficult, and it's on my list of stuff to do. Then it's just a matter of extrapolating that from a video into a series of poses for animation, and there you go. Right now something like that would be even easier to do in Blender or other systems or game engines, but yeah, it's totally doable inside Daz too.
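    The frame-to-pose conversion described above mostly comes down to turning 2D keypoints into joint angles. A minimal sketch of that arithmetic (a hypothetical helper, assuming the OpenPose keypoints have already been extracted as (x, y) pairs):

```python
import numpy as np

def bend_angle(parent, joint, child):
    """Interior angle at `joint` (degrees) formed by the two limb
    segments, e.g. shoulder-elbow-wrist for an elbow bend.
    Each argument is an (x, y) keypoint from an OpenPose skeleton."""
    a = np.asarray(parent, float) - np.asarray(joint, float)
    b = np.asarray(child, float) - np.asarray(joint, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# A straight arm: shoulder (0,0), elbow (1,0), wrist (2,0) -> 180 degrees.
straight = bend_angle((0, 0), (1, 0), (2, 0))
```

    Mapping those angles onto Genesis bone rotations (and doing it per frame for animation) is the harder part, but the per-joint extraction itself is this cheap.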

    However, one of the issues from earlier tests this year is that the OpenPose model's limbstick does not really match up with the bone structure of the Daz character that well. Training an AI limbstick tailored to Genesis would be more ideal, but that adds a lot more work and time to such a development than just using what's already available.

     

    Here is the test of an OpenPose limbstick matched to a Daz pose (I can't find the image it was matched to, but some woman holding a gun, I think). Notice how the limbs are distorted in ways they should not be. The conclusion from the test was that OpenPose is not a good tool for Genesis posing; something else is needed, maybe even some new type of rigging. There are still several angles from which one could tackle the solution, but it's probably wise to take time to figure out which has the best potential, weigh each, and do more tests to help figure that out.
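    The skeleton mismatch described above can be put into numbers by comparing limb-length proportions between the OpenPose limbstick and the Daz joint positions, normalized so scale and framing cancel out. A hypothetical sketch (the joint indices and scoring are illustrative, not from the actual test):

```python
import numpy as np

def limb_ratio_error(openpose_pts, daz_pts, limbs):
    """Mean absolute difference in limb-length proportions between an
    OpenPose limbstick and the corresponding Daz joint positions.
    limbs is a list of (joint_a, joint_b) index pairs; lengths are
    normalized by the skeleton total, so 0 means proportions match."""
    def proportions(pts):
        pts = np.asarray(pts, float)
        lengths = np.array([np.linalg.norm(pts[a] - pts[b]) for a, b in limbs])
        return lengths / lengths.sum()
    return float(np.abs(proportions(openpose_pts) - proportions(daz_pts)).mean())

limbs = [(0, 1), (1, 2)]            # shoulder-elbow, elbow-wrist
daz = [(0, 0), (1, 0), (2, 0)]
scaled = [(0, 0), (2, 0), (4, 0)]   # same skeleton, different scale
stretched = [(0, 0), (1, 0), (3, 0)]  # distorted forearm, like the test image
match = limb_ratio_error(scaled, daz, limbs)
mismatch = limb_ratio_error(stretched, daz, limbs)
```

    A score like this would let one compare candidate rigs or retargeting schemes on the same footage instead of judging the limbstick fit by eye alone.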

     

    op_test1.png
    Post edited by charles on