Remixing your art with AI


Comments

  • Artini said:

    Yes, they have vertex colors when exported as .FBX.

    Daz Studio cannot read such files directly, so I convert them to .OBJ in Blender.

     

    Well, then you import them into ZBrush, UV map them with UV Master, subdivide, and create a texture from the vertex colours,

    then reduce the subdivision back to the original level after creating the map.

  • Artini Posts: 9,455

    Thanks for the tips, Wendy.

    Just wondering if any other program would help, like

    Carrara, 3D Coat or some of Reallusion's programs.

    After the last big crash of my Windows computer and the necessity to reformat it,

    I am not installing any programs other than those really needed for my hobby.

    Guess I need to take a chance.

     

  • Diomede Posts: 15,168
    edited November 2023

    DOUBLE EDIT - No, Not Solved 

    Error message says I have a likely extension conflict

    Trying to begin.  Cannot.

    Gang, I cannot get started.  I have installed Stable Diffusion locally.  I have installed ControlNet and followed online tutorials.  But Stable Diffusion does not seem to take my picture prompt into account even though I have it enabled.  Neither the pose nor the clothes nor anything else seems to be taken into account.  I've tried other photos, other figures, and other poses, but only get even worse results.  I followed some online videos.

    I uploaded a picture of Leonardo in a suit, no tie, in a basic sitting pose.  The result is nothing like it except for the gray suit.  I had 'canny' checked and 'balanced' checked for this test.
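For context on what 'canny' does: the preprocessor reduces the guide image to a black-and-white edge map, and ControlNet then steers generation to follow those edges, so a flat reference photo mostly contributes outlines. A rough numpy sketch of gradient-based edge detection, as a simplified stand-in for the real preprocessor (which uses OpenCV's Canny with hysteresis thresholds):

```python
import numpy as np

def sobel_edges(img, threshold=0.25):
    """Crude edge map: Sobel gradient magnitude, thresholded.
    A simplified stand-in for ControlNet's 'canny' preprocessor."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    mag = np.hypot(gx, gy)
    peak = mag.max()
    if peak > 0:
        mag /= peak
    return (mag > threshold).astype(np.uint8) * 255
```

If the edge map comes out nearly empty (say, from a low-contrast render), ControlNet has almost nothing to hold on to, which can look exactly like the picture prompt being ignored.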

     

    bb01 controlnet result on right incorrect.png
    1740 x 898 - 315K
    man in suit start.png
    369 x 434 - 273K
    Post edited by Diomede on
  • Diomede Posts: 15,168
    edited November 2023

    DOUBLE EDIT - No, Not Solved 

    Edit #3 - Now it is solved.

    Error message says I have a likely extension conflict

    Found an error message.  Not able to understand it, but something must be installed incorrectly, I assume.  Hmmm.

     

     

    bb03 found an error message.png
    1700 x 577 - 80K
    Post edited by Diomede on
  • Artini Posts: 9,455

    Yes, it is just like learning another program, so the learning curve from the beginning was high for me.

    I have stopped using Automatic 1111, because my graphics card has only 8 GB of VRAM

    and it was not sufficient to run Stable Diffusion XL models.

    In ComfyUI, I have not reached controlnet level in knowledge, yet.
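For a rough sense of why 8 GB is tight: SDXL's UNet is roughly 2.6 billion parameters versus about 0.86 billion for SD 1.5 (approximate figures), and at fp16 each parameter costs 2 bytes before the text encoders, VAE, and activations are counted. A back-of-the-envelope sketch:

```python
def weight_vram_gb(params_billions, bytes_per_param=2):
    """Approximate VRAM for the model weights alone.
    fp16 = 2 bytes per parameter; activations, VAE, text
    encoders and CUDA overhead all come on top of this."""
    return params_billions * 1e9 * bytes_per_param / 2**30

sdxl_unet = weight_vram_gb(2.6)   # roughly 4.8 GB for weights alone
sd15_unet = weight_vram_gb(0.86)  # roughly 1.6 GB
```

So an SDXL checkpoint already eats most of an 8 GB card before a single latent is decoded, which matches Artini's experience.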

     

  • Artini Posts: 9,455
    edited November 2023

    I have discovered that FBX files from Shap-E do not preserve vertex colors.

    Below is a sofa geometry created by Shap-E


    art_deco_sofa01.jpg
    1109 x 851 - 424K
    Post edited by Artini on
  • Diomede Posts: 15,168
    edited November 2023

    Thought Solved, but NOT

    Now it is solved.

    I went to extensions and did an apply and restart.  I then ran from image2image.  Better.

    This might be rough, but at least it did what it was supposed to do with no errors.

     

    bb04 i am an idiot.png
    1612 x 1035 - 761K
    Post edited by Diomede on
  • Diomede Posts: 15,168

    ERROR Identification Progress

    The new error log says I have a likely conflict among the extensions.

    Has this happened to anyone else?  Does anyone know which ones are likely?

     

    bb04 extension conflict.png
    1724 x 705 - 109K
  • Diomede Posts: 15,168
    edited November 2023

    ARGH - I am off to the tech forums for Github, etc.  Frustrating.

    Many more errors.  This one is typical.

    Error running setup: E:\AI\stable-diffusion-webui\modules/processing_scripts\refiner.py
        Traceback (most recent call last):
          File "E:\AI\stable-diffusion-webui\modules\scripts.py", line 741, in setup_scrips
            script.setup(p, *script_args)
        TypeError: ScriptRefiner.setup() takes 5 positional arguments but 99 were given
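For what it's worth, that kind of mismatch typically appears when the argument list the UI accumulates for all scripts no longer lines up with the scripts actually installed, so one script's setup() receives the whole pile. A minimal reproduction of the pattern (hypothetical names, not the actual webui code):

```python
class RefinerDemo:
    # Counting self, setup() accepts exactly 5 positional arguments.
    def setup(self, p, enable, checkpoint, switch_at):
        return enable

demo = RefinerDemo()
stale_args = list(range(98))  # arguments accumulated for every extension
try:
    demo.setup(*stale_args)   # 98 args + self = 99 given
except TypeError as e:
    print(e)  # ... takes 5 positional arguments but 99 were given
```

Which is why removing or disabling the extension that registered the extra arguments, rather than patching refiner.py, is the usual fix.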

     

     

    Post edited by Diomede on
  • Artini Posts: 9,455

    Yes, I got some warnings with ComfyUI on Linux,

    but in the end it works.

    No idea about your errors, sorry.

     

  • Artini Posts: 9,455
    edited November 2023

    In the meantime, I have UV mapped the sofa from Shap-E in Blender

    in 3 different ways, and Cube Projection gives the best results so far.
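For the curious, cube projection essentially chooses one of six axis-aligned projection directions per face from the dominant component of the face normal, then uses the remaining two coordinates as UVs. A simplified sketch of the idea (Blender additionally scales, rotates and packs the resulting islands):

```python
def cube_project_uv(vertex, normal):
    """Cube-projection UVs for one vertex of a face: pick the axis
    with the largest normal component, project onto the other two."""
    x, y, z = vertex
    nx, ny, nz = (abs(c) for c in normal)
    if nx >= ny and nx >= nz:   # face points mostly along ±X
        return (y, z)
    if ny >= nx and ny >= nz:   # mostly ±Y
        return (x, z)
    return (x, y)               # mostly ±Z
```

This works well on boxy shapes like a sofa, which is probably why it beat the other two projections here.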

    The Piggie likes it, as well.

    Daz Studio render.


    Sofa01pic01.jpg
    1920 x 1200 - 1M
    Post edited by Artini on
  • Artini Posts: 9,455
    edited November 2023

    Some shapes generated today...

    Need to find a way to convert the vertex colors to another format

    that I can save in the file.
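One format that does keep vertex colors in the file is ASCII PLY, which Blender and MeshLab can import (and which, I believe, Shap-E's own example code uses for export). A minimal writer sketch, assuming per-vertex RGB values in the 0-255 range:

```python
def write_ply(path, vertices, colors, faces):
    """Write an ASCII PLY file with per-vertex RGB colors.
    vertices: [(x, y, z)], colors: [(r, g, b)] as 0-255 ints,
    faces: lists/tuples of vertex indices."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(vertices)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write(f"element face {len(faces)}\n")
        f.write("property list uchar int vertex_indices\n")
        f.write("end_header\n")
        for (x, y, z), (r, g, b) in zip(vertices, colors):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")
        for face in faces:
            f.write(f"{len(face)} " + " ".join(map(str, face)) + "\n")
```

From a PLY with vertex colors, the ZBrush route Wendy described (UV map, then bake polypaint to a texture) can take over.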


    ShapEs01.jpg
    1920 x 1080 - 264K
    Post edited by Artini on
  • Diomede Posts: 15,168
    edited November 2023

    My 1st Render-to-AI-Output

    Installation Solved (as far as I know). 

    Rendered G3M Santa reading a prop magazine, sitting on a cube and plane against the Orestes HDRI sky preset.  Cube and plane have brick and roof shaders, respectively.  Loaded the result into Stable Diffusion with ControlNet.  Prompt = Santa sitting on a chimney reading a magazine at night.

    And here is the result.  OK, the result sucks.  But I am in the game!

    Side by Side 1st santa.jpg
    939 x 640 - 147K
    Post edited by Diomede on
  • Diomede Posts: 15,168
    edited November 2023

    Installation Solution

    Some of the extensions for Controlnet had conflicts.  I don't know for sure how many, but it appears to have started working correctly after I disabled the SD-CN-Animation extension (or something spelled similarly).
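For anyone chasing a similar conflict: instead of disabling extensions one at a time, the search can be halved each restart. A sketch of the idea, where works() is a hypothetical stand-in for "restart the webui with only these extensions enabled and see whether it runs"; it assumes a single culprit whose effect does not depend on ordering:

```python
def find_bad_extension(extensions, works):
    """Bisect to find the one extension whose presence breaks the app.
    works(enabled) -> True if the app runs with only `enabled` active.
    Invariant: extensions[:lo] works, extensions[:hi] fails."""
    lo, hi = 0, len(extensions)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if works(extensions[:mid]):
            lo = mid   # culprit is beyond the first mid extensions
        else:
            hi = mid   # culprit is within the first mid extensions
    return extensions[lo]
```

With 20 extensions that is about 5 restarts instead of up to 20.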

    Post edited by Diomede on
  • WendyLuvsCatz Posts: 38,206
    edited November 2023

    I got errors with some ControlNet extensions; it's very fussy about which version is used with which checkpoint

    you need the ones that match

    I had some I really liked that now won't work with anything but did before; all I can suggest is getting the models and extensions from the same source.

    Automatic 1111 has stopped working on my PC altogether, so I am little help; I am using Fooocus, NMKD and Visions of Chaos

    yesterday and today, using textual inversion, I trained 3 awful embeddings that give caricatures of what they are meant to; I see the resemblance but they are horrible

    my face, my late cat, my art (old scribbles I found in a book, truly awful but worth a laugh)

    00030-3377824114.png
    512 x 512 - 387K
    00066-3084733960.png
    512 x 512 - 437K
    00071-1712679739.png
    512 x 512 - 532K
    340867059_582686193810290_2965451962093383692_n.jpg
    960 x 1280 - 148K
    IMG_0990.JPG
    3264 x 2448 - 1M
    IMG_1604.JPG
    2448 x 2448 - 2M
    Post edited by WendyLuvsCatz on
  • Artini Posts: 9,455

    Nice experiments, Wendy.

     

  • Artini said:

    Nice experiments, Wendy.

     

    like all experiments, I learnt 3 ways NOT to do it wink 

    it would have taken 80 hours with 100,000 steps, so I only did 10,000 steps (8 hours each)

    more VRAM probably exponentially faster

    better to use online services like Leonardo.ai for such things
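The arithmetic behind that estimate, assuming training time scales linearly with step count (which it does to a first approximation at a fixed batch size):

```python
def training_hours(steps, measured_steps=10_000, measured_hours=8.0):
    """Linear extrapolation of textual-inversion training time
    from one measured run (here: 10,000 steps took about 8 hours)."""
    return steps / measured_steps * measured_hours

full_run = training_hours(100_000)  # -> 80.0 hours
```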

  • Artini Posts: 9,455

    Yes, I also use Shap-E online.

    I like experimenting with the current technology.

     

  • Diomede Posts: 15,168
    edited November 2023

    WendyLuvsCatz said:

    I got errors with some ControlNet extensions; it's very fussy about which version is used with which checkpoint

    you need the ones that match

    I had some I really liked that now won't work with anything but did before; all I can suggest is getting the models and extensions from the same source.

    Thanks - yes, I had it working again.  Added some more extensions.  And now it is not working again.  Argh.

    But here are some recent results with Santa Claus.  Apparently, AI really likes a hot curvaceous female Santa Claus, and that is OK with me.  (See first attachment.)  But in this case, I want a more traditional Santa Claus.  So I put (Wilford Brimley) in with the prompts.  Much closer to what I was looking for.  (See second attachment.)  See my posts above for the original Daz render - really just a blockout.

    Now I have to go back and try to figure out what is not working with what.  But I am getting an error with controlnet again.

    santa hot girl.png
    512 x 512 - 410K
    santa 2 wilfoird brimley.png
    512 x 512 - 374K
    Post edited by Diomede on
  • Diomede Posts: 15,168

    WendyLuvsCatz said:

    ...

    Automatic 1111 has stopped working on my PC altogether, so I am little help; I am using Fooocus, NMKD and Visions of Chaos

    yesterday and today, using textual inversion, I trained 3 awful embeddings that give caricatures of what they are meant to; I see the resemblance but they are horrible

    my face, my late cat, my art (old scribbles I found in a book, truly awful but worth a laugh)

    These are great!  You have inspired me.  My goal for my hobby this week is to have me as Santa Claus sitting on a rooftop reading a magazine, using an AI program.  I hope to share the results so folks can get a chuckle.

  • Diomede Posts: 15,168
    edited November 2023

    Error Identified

    The problem was the extension for 'eyemask.'  I removed that, and now ControlNet is working again.  So here is a first-pass grid of four results, using my Daz render as a blockout in ControlNet and prompting for Santa Claus sitting on a chimney reading a magazine at night on a rooftop.  No need to include Wilford Brimley this time because there was no hot young pinup result. laugh

    --------------------------------------------------------------------

    And that is derived from the same Daz render included below, with the first Stable Diffusion attempt.  Some progress, to be sure.  

     

    00 grid no wilford brimley.png
    1024 x 1024 - 1M
    Post edited by Diomede on
  • you can use Roop or one of the other faceswap extensions to put your face on him

    I am in effect still using Automatic1111, but in the Visions of Chaos Python environment

    I get the advantage of xFormers and reduced VRAM that way too, albeit slower

    I found I can continue training the embedding too, so I'm getting something closer to my face now

  • Diomede said:

    And that is derived from the same Daz render included below, with the first Stable Diffusion attempt.  Some progress, to be sure. 

    I'm not faulting your workflow or shaming your progress, but I am curious to know what you're getting out of Stable Diffusion with this project that you aren't able to get with Daz?  The only stark difference I see between these two images is the lighting, and what SD is generating can be achieved in Daz with a photometric point light.

  • Diomede Posts: 15,168
    edited November 2023

    Nyghtfall3D said:

    Diomede said:

    And that is derived from the same Daz render included below, with the first Stable Diffusion attempt.  Some progress, to be sure. 

    I'm not faulting your workflow or shaming your progress, but I am curious to know what you're getting out of Stable Diffusion with this project that you aren't able to get with Daz?  The only stark difference I see between these two images is the lighting, and what SD is generating can be achieved in Daz with a photometric point light.

    - Can only answer the first question, why not just use Daz, with a question.  Why does anyone use Daz Studio if the resulting images can be created with paints, pencils, cameras, digital image editors, and other 3D programs?

    - On the nod to my feelings.  First, thanks for the effort to be considerate.  We can all use more of that.  Second, I don't feel shame for my Santa posts.  I myself posted that they suck.  See above.  Must crawl before walk and there is no shame in that.

    - Main question asked, what am I getting out of experimenting with AI?  

    ...... To see for myself what all the fuss is about.  For systems like Stable Diffusion, I definitely see one of the ethical issues, referring to its caveat that it is only for research purposes yet is being used for profit, breaking the terms of use that they promised to contributors while gathering the data.  But to the extent that contractually paid-for systems are coming from people who honestly paid Getty Images etc. for the data, I also see great potential.  I will be glad to have some background knowledge to use those programs when they get full releases.

    ...... To see what the aforementioned potential is.  And it is amazing.  I am only barely dipping my toe in, yet the speed at which edits can be made to existing images, or new starter images generated, is amazing compared to using a fake 3d photography studio like Daz Studio, or Blender, or... combined with Photoshop, Gimp, etc.  

    ...... To add another tool to the toolbox. For example, my Santa images were created combining a Daz Studio render of Santa sitting on a cube with an AI processor, not using AI instead of Daz Studio.

    ...... To know things for the sake of knowing them, even if I never use Stable Diffusion again.

    Post edited by Diomede on
  • Stezza Posts: 8,054
    edited November 2023

    Diomede said:

    WendyLuvsCatz said:

    I got errors with some ControlNet extensions; it's very fussy about which version is used with which checkpoint

    you need the ones that match

    I had some I really liked that now won't work with anything but did before; all I can suggest is getting the models and extensions from the same source.

    Thanks - yes, I had it working again.  Added some more extensions.  And now it is not working again.  Argh.

    But here are some recent results with Santa Claus.  Apparently, AI really likes a hot curvaceous female Santa Claus, and that is OK with me.  (See first attachment.)  But in this case, I want a more traditional Santa Claus.  So I put (Wilford Brimley) in with the prompts.  Much closer to what I was looking for.  (See second attachment.)  See my posts above for the original Daz render - really just a blockout.

    Now I have to go back and try to figure out what is not working with what.  But I am getting an error with controlnet again.

    awesome stuff ... 

    had me thinking if I could do something similar (Santa on a roof) in MS Paint... ended up with this after 5 minutes.

    must incorporate this stuff with Carrara... hmmmm. yes 

    santa on roof.png
    648 x 648 - 495K
    Post edited by Stezza on
  • Diomede said:

    - Can only answer the first question, why not just use Daz, with a question.  Why does anyone use Daz Studio if the resulting images can be created with paints, pencils, cameras, digital image editors, and other 3D programs?

    Fair point.

    - Main question asked, what am I getting out of experimenting with AI?  

    ...... To see for myself what all the fuss is about.  For systems like Stable Diffusion, I definitely see one of the ethical issues, referring to its caveat that it is only for research purposes yet is being used for profit, breaking the terms of use that they promised to contributors while gathering the data.  But to the extent that contractually paid-for systems are coming from people who honestly paid Getty Images etc. for the data, I also see great potential.  I will be glad to have some background knowledge to use those programs when they get full releases.

    ...... To see what the aforementioned potential is.  And it is amazing.  I am only barely dipping my toe in, yet the speed at which edits can be made to existing images, or new starter images generated, is amazing compared to using a fake 3d photography studio like Daz Studio, or Blender, or... combined with Photoshop, Gimp, etc.  

    ...... To add another tool to the toolbox. For example, my Santa images were created combining a Daz Studio render of Santa sitting on a cube with an AI processor, not using AI instead of Daz Studio.

    ...... To know things for the sake of knowing them, even if I never use Stable Diffusion again.

    I can't argue with any of those points.  I briefly dabbled with Stable Diffusion for the same reasons.

    Carry on, and best wishes.  :)

  • Diomede Posts: 15,168

    Stezza said:

    Diomede said:

    WendyLuvsCatz said:

    I got errors

    Thanks -

    awesome stuff ... 

    had me thinking if I could do something similar (Santa on a roof) in MS Paint... ended up with this after 5 minutes.

    must incorporate this stuff with Carrara... hmmmm. yes 

     

    Beautiful Santa.  yes

    RE: Carrara.  Oh, the missed opportunity for Daz3d (the company) becomes all the more glaring.  Carrara is the perfect tool for basic staging, editing, rigging, and posing as part of an AI workflow: block out concepts, arrange, rig, and pose on the fly, add an arch or a custom tree in the background, then recursively submit to the AI processor.  Apparently, Blender is already being combined in this way.  Daz Studio does not have a true vertex modeler or a tree editor or a landscape editor, etc.  A combo of Studio/Bryce/Hexagon would be great, but they allowed Bryce and Hexagon to wither as well.

    This guy has a tutorial on a plugin for Blender.  Studio could do this particular example (pumpkins) because he only uses primitives.  But Blender or Carrara could add an arch or edit a custom tree or a custom landscape.  Studio can only load such things premade.

    I would love to transfer some of my custom toon Carrara characters (the Marx brothers) to AI but Carrara's FBX exporter is old.

  • Diomede Posts: 15,168
    edited November 2023

    Nyghtfall3D said:

    Diomede said:

    Fair point.

    I can't argue with any of those points.  I briefly dabbled with Stable Diffusion for the same reasons.

    Carry on, and best wishes.  :)

    Yes, all good.  Cheers.

    Perhaps some young comic will be inspired to reboot the Carry On series and make Carry On 3D.

     

    carry on collection bigger.png
    889 x 390 - 385K
    Post edited by Diomede on
  • Diomede Posts: 15,168

    Stable Diffusion with ControlNet has a menu for loading rigged FBX, JSON, and similar models.  But my custom toon Marx brothers do not appear to transfer correctly from Carrara FBX to Stable Diffusion.  crying

     

    groucho posed.png
    800 x 600 - 72K
    groucho posed mesh.png
    994 x 1004 - 205K
    groucho not appear correctly in 3d model pose loader.png
    1809 x 890 - 66K
  • WendyLuvsCatz Posts: 38,206
    edited November 2023

    A video using my trained Textual Inversion Embedding

    a recent pic of me for comparison

    I added my embedding to try; the words to trigger it are 'myself' AND 'wendyvain',

    beginning and end of prompt

    it will sometimes work with just 'wendyvain' in the prompt, but for specific use something like "myself, a woman doing such and such or wearing such and such etc, wendyvain" should trigger it

    it was trained on Dreamshaper7 specifically, but I think any SD v1.5 model should work

    340867059_582686193810290_2965451962093383692_n.jpg
    960 x 1280 - 148K
    wendyvain.zip
    3K
    screenshot.png
    1920 x 1040 - 594K
    grid-0000.jpg
    4000 x 3200 - 3M
    Post edited by WendyLuvsCatz on