Is AI killing the 3D star?

Comments

  • WendyLuvsCatz Posts: 38,236
    edited December 2023

    there is a plethora of captioned public domain media out there that could be used to train AI

    Google Arts, Wikimedia Commons, and various curated collections from universities and museums that are available online under a CC0 licence

    Mugshots of criminals for faces (profile and front)

    it's just that so far they have not been forced to retrain, and if they do, it won't be as commercially viable or open source

    Post edited by WendyLuvsCatz on
  • Wonderland Posts: 6,891

    Artists who use 3D can still make money from clients who want a consistent character with specific poses. It's still difficult to get AI to create EXACTLY what you want and to repeat it. It creates some amazing one-offs but still has issues with hands, feet, and random artifacts. Saw an invite for a NYE party today that was obviously AI: stunning at first glance, but zoom in and yikes!

  • JoeQuick Posts: 1,704
    edited December 2023

    Being able to go from rough doodle to finished image through a series of refinements, or to switch back and forth between 2D and pseudo-3D, is also crazy, and again, that was close to six months ago now.

    [Attachments: dosjulia.jpg, creature.jpg, gelfgemmama_sidebyside.png, spacepirate_sidebyside.png, villain_sideside.png]
    Post edited by JoeQuick on
  • takezo_3001 Posts: 1,987

    AllenArt said:

    So, I tried AI and it was sort of interesting, but I got sick of it REAL quick. I'd rather spend my time placing 3D objects in a scene than writing descriptions and hoping the AI software gets what I want within the first 100 iterations. laugh It's not for me.

    THIS is why I do not believe that AI will ever replace artists; we have persisted through every innovation and new tool, and AI is simply another tool, not a replacement for us...

  • grinch2901 Posts: 1,246

    What I'm toying with is using DAZ to create a character/scene using DAZ content assets, rendering it, and then using the render as a reference for ControlNet (two ControlNets, one with Canny, one with Depth). In my earlier renders I didn't use the depth one, just prompted Stable Diffusion to create a background (usually, though, it ended up being super basic). The only one in this vignette that used depth is the last one; in that case I set up an entire scene with the cabinets, floor, and wall, and all props, and the depth is needed to separate the character from the other stuff. I've attached the original render from DAZ, pretty basic quality; it's all you need to send over to get the info needed.

    What you can see, though, is that the DAZ assets (shirt, skirt, shoes, G8F, hair, even the poses) transfer over to Stable Diffusion, and I need to prompt for the color (it works well for the skirt and shirt but not so well for the shoes). For all intents and purposes I'm just using Stable Diffusion as a render engine. But with the SDXL Turbo models it will crank out a render in about 8 seconds. The next experiment will be to see how it does with multiple characters.

    Now to the main topic: if I wanted to use this to make a comic book, I think I could, assuming I sort out the multi-character process. I don't see this use of AI as at all different from sending the scene to, say, Octane or Blender. As others say, it's another tool.

    [Attachments: CharacterConsistency.png, Mopping V8.png]
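
    For anyone who wants to try the workflow grinch2901 describes, here is a minimal sketch in Python using the Hugging Face diffusers library. The model IDs, filenames, and prompt are illustrative assumptions, not the exact setup from the post.

    import cv2
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # Load the Daz Studio render to use as the control reference.
    render = np.array(Image.open("daz_render.png").convert("RGB"))

    # Canny edge map: preserves outlines (clothing, pose, fingers).
    edges = cv2.Canny(cv2.cvtColor(render, cv2.COLOR_RGB2GRAY), 100, 200)
    canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

    # Depth map: separates the character from props and background.
    # Assumed here to be exported from Daz alongside the beauty render.
    depth_image = Image.open("daz_depth.png").convert("RGB")

    # Two ControlNets, one Canny and one Depth, as in the post above.
    controlnets = [
        ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
        ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
    ]
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
    ).to("cuda")

    # Prompt for colors and materials; pose and layout come from the maps.
    result = pipe(
        "photo of a woman mopping a kitchen floor, blue shirt, red skirt",
        image=[canny_image, depth_image],
        num_inference_steps=20,
    ).images[0]
    result.save("sd_render.png")

    If one map should dominate the other, the pipeline also accepts a per-ControlNet weight list via controlnet_conditioning_scale.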
  • It looks like ControlNet for Stable Diffusion plus Daz characters is the new star for 3D artists. And it is much brighter than the previous star. The possibilities of this combination are enormous. Because in DAZ you can absolutely accurately set poses, the relative positions of characters, the number of fingers, etc. Combining several characters and a background in Photoshop is a matter of a few minutes. If Photoshop 2D artists find out about this, they will immediately rush to DAZ3D, break down the front doors, and buy all the models and props in the DAZ stores. Therefore, hurry up while 2D artists have yet to learn to use Daz Studio! Conduct more intensive experiments with ControlNet and study editing in Photoshop. You will be shocked by the results. This is not a multi-hour render after which the characters still do not look real; this will be a real work of art! Don't be afraid of the huge arsenal of Photoshop tools. You won't need 95% of that arsenal when combining different files into one.
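
    For the Photoshop step mentioned above, the basic merge of a character render over a background really is only a few lines of scripting; here is a small Pillow sketch (the filenames and paste position are hypothetical):

    from PIL import Image

    # A background plate and a character render saved with transparency
    # (hypothetical filenames).
    background = Image.open("background.png").convert("RGBA")
    character = Image.open("character.png").convert("RGBA")

    # Composite the character over the background using its alpha channel.
    background.alpha_composite(character, dest=(250, 120))
    background.convert("RGB").save("composite.jpg", quality=95)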
  • JoeQuick Posts: 1,704
    edited December 2023
    I think you'd find yourself using in-painting for multiple characters? I hope to play with it a bit before I have to head back to work next week. Over the summer I did a render with two Daz figures, using phoenixfenix's anime dial-spin boy and girl, posed like the Urusei Yatsura reboot poster, with a couple of cube primitives in the background to mark building locations. Never quite got it to work. But someone's finally released a good Ataru LoRA to go with the 5 Lums on Civitai. So maybe it'll finally come together for me.
    Post edited by JoeQuick on
  • vectorinus said:

    It looks like ControlNet for Stable Diffusion plus Daz characters is the new star for 3D artists. Because in DAZ you can absolutely accurately set... the number of fingers...

    Unless you had 'sarcasm' mode silently enabled in your post, Stable Diffusion can't consistently generate hands in complex poses (especially connected or interlaced fingers) without a significant amount of correction via Inpainting, and even then, results will vary wildly.

  • JoeQuick Posts: 1,704

    vectorinus said:

    It looks like ControlNet for Stable Diffusion plus Daz characters is the new star for 3D artists. Because in DAZ you can absolutely accurately set... the number of fingers...

    Unless you had 'sarcasm' mode silently enabled in your post, Stable Diffusion can't consistently generate hands in complex poses (especially connected or interlaced fingers) without a significant amount of correction via Inpainting, and even then, results will vary wildly.

    But when you have accurate fingers in your Canny and Depth ControlNets?
  • Nyghtfall3D Posts: 777
    edited December 2023

    JoeQuick said:

    But when you have accurate fingers in your canny and depth control nets?

    In my experience, SD can still botch them.  For example, the finger poses might be accurately generated, but the fingers themselves look like flesh-colored letter openers.  And good luck rendering hands gripping throats.

    Part of the problem is that SD is heavily biased toward portraits, so getting it to produce images that feature hands doing anything complex is often an exercise in frustration that might never yield truly satisfactory results.

    Post edited by Nyghtfall3D on
  • grinch2901 Posts: 1,246
    edited December 2023

    I've found that Canny is best for large detail at a distance or fine detail up close. So it seems to do fingers well up close; it's hit or miss at a distance. For example, here is a quick and dirty render from Daz and the Stable Diffusion + Canny + Depth version, demonstrating how well it does up close.

    [Attachments: hands test.png, 00099-tertiumSDXLTurbo_v10-2023-12-31.png, tmpz4ckx_im.png]
    Post edited by grinch2901 on
  • Nyghtfall3D Posts: 777
    edited December 2023

    grinch2901 said:

    I've found that Canny is best for large detail at a distance or fine detail up close. So it seems to do fingers well up close; it's hit or miss at a distance. For example, here is a quick and dirty render from Daz and the Stable Diffusion + Canny + Depth version, demonstrating how well it does up close.

    Not that well.

    - It generated an extra pinky finger on the AI figure's left hand where the light is shining on the 3D model's right hand.

    - I can also see the sliver of what looks like an extra index finger peeking out from behind the visible index finger on the right hand.

    - The base knuckle on the middle finger of the right hand looks like it's completely disjointed from the rest of the hand.

    - It also couldn't replicate the crease in the 3D model's right wrist well enough to maintain the transition to the hard shadow on her right hand.  Consequently, the AI's hand looks like it's wearing part of a thin, translucent glove.

    Post edited by Nyghtfall3D on
  • grinch2901 Posts: 1,246

    Nyghtfall3D said:

    grinch2901 said:

    I've found that Canny is best for large detail at a distance or fine detail up close. So it seems to do fingers well up close; it's hit or miss at a distance. For example, here is a quick and dirty render from Daz and the Stable Diffusion + Canny + Depth version, demonstrating how well it does up close.

    Not that well.  It generated an extra pinky finger on the AI figure's left hand where the light is shining on the 3D model's right hand.

    Yes, I was just posting that, including the Canny map that shows the sunlight it misinterpreted.  Postwork is still a thing.

    That said, just because it sucks at hands doesn't mean it can't have a great career as an artist. Rob Liefeld couldn't draw feet but he did okay for himself  smiley

  • Wonderland Posts: 6,891
    edited December 2023

    I was using DS (or originally Poser) to create 2D-looking characters from the very beginning, with Photoshop postwork. I was never really into realism. But now people are seeing my older stuff and thinking it's AI. Sometimes I show people on my phone both newer AI stuff and older, heavily postworked 3D, and they can't really tell the difference. I was always creating more comic-book, 2D, game-like-looking characters and images. I love combining my work now with AI, but I really find it disheartening when people assume everything I've ever done is AI. And the AI I use on renders is not random; it's literally like spinning dials in DS or adding makeup or lashes or brows to your character, but manually, using my finger or an Apple Pencil on my iPad to change the shape of the face or body, or using a slider to make the character taller or thinner in a SELFIE app. These selfie apps are f-ing amazing! Some selfie apps give you even more manual control than DS.
     

    All AI isn't just a Midjourney descriptive prompt. And now Photoshop has AI included, with generative fill and even some selfie-app elements. So it's all becoming one big mish-mash of "digital art" which now includes AI in its various forms. I have to admit I've made some incredible-looking images with pure AI, just words, and now you can tweak areas you don't like. It somehow becomes a fun game to see what you can come up with. But to me it's kind of like being an art director rather than an artist. You are still doing something creative: picking the best images, adding prompts you think might improve it, correcting areas that are screwed up, changing colors, expressions, details, all through words, as if you were an art director, which is still a creative job, so I don't feel like I'm "cheating" at all. But it is devaluing art, because now everyone can be an art director with their AI artist. So it's going to be about who is the best art director, because a lot of images come out crappy and it's still going to be hard for non-creatives to even know how to get the AI to make the best image. Hope this makes sense. Just got back from a New Year's Eve Eve dinner party and I'm a bit drunk lol. BTW, happy New Year, everyone! laugh

    Post edited by Wonderland on
  • Artini Posts: 9,473
    edited December 2023

    Hands generated by AI are not the best or most realistic, in my experience so far. Below are some examples.

    Happy New Year 2024.

    [Attachments: Hands01.jpg, Hands02.jpg, Hands03.jpg]
    Post edited by Artini on
  • Fauvist Posts: 2,114
    edited December 2023

    It took me 10 seconds to recreate Marilyn with AI.

     

    Post edited by Fauvist on
  • I've been using AI for about a year (Midjourney, Stable Diffusion with multiple models, LoRAs, etc.) and I can tell you AI needs to take a basic human anatomy course.  smiley   I still go back to DAZ Studio and ZBrush to make my morphs, and I get the same character in any pose I want.  I pretty much have way more control over the characters.  I see AI as a tool to make things easier and faster.  If you want cool one-offs, AI opens doors, but until it can do all DAZ can do, it's not going to replace DAZ or artists anytime soon.  In order to have as much control over what you are doing and the expected outcome, AI would need to fill some pretty big shoes.  Face Transfer 2 is a simple example of what is possible using AI as a tool in your workflow.  I have been using it, but it has a way to go (cleaning up eyelashes, side views, etc.).  I do like it a lot, though, and it's fun to use.  I have tried img-to-img in AI and I get some great results.  As stated above, it's almost like a rendering tool.  The hair looked much more realistic, but my V9 eyes from Iray looked better.  AI at this point can make vague one-offs pretty well, especially for portraits (it has improved over the last year).  Again, I see it as a tool making things quicker and easier for the artist, but not really a replacement.

  • generalgameplaying Posts: 517
    edited December 2023

    grinch2901 said:

     

    In the world of AI text-generation models (Large Language Models, LLMs), Microsoft published their "Orca" paper and model, which used ChatGPT to guide the training: as part of the training, the model was interacting with ChatGPT to see if it got a good answer, and then learning what worked/didn't work.

    https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/

    Could this approach be used for imagery?  Maybe, if something like ChatGPT was trained to recognize styles, for example, and could say 'not quite there yet' or 'oh yeah, spot on!'. Right now the synthetic-data thing is more about the textgen stuff, not images.  However, I think the big players want to get in front of this lawsuit stuff, so if they can train the models using expert guidance from an AI to create synthetic data instead of ingesting copyrighted data, I think they will be motivated to do it before the hammer of the law comes down on them.

    The core result is that training with step-by-step explanations might be a good way to go, human- or AI-generated. In these forums I think I had already stated the assumption that the step-by-step explanations on the internet allow solving a lot of the problems out there, even with different parameters, though likely not all. Training with only step-by-step explanations could be interesting here, but I am not sure if we reach totally new depths with this; then again, I am not a specialist, nor do I know it all.
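
    To make "training with step-by-step explanations" concrete, here is a minimal, hypothetical sketch with Hugging Face transformers: fine-tuning a small student model on prompts paired with teacher-written explanation traces. The model choice and toy data are assumptions for illustration, not Orca's actual pipeline.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    student = AutoModelForCausalLM.from_pretrained("gpt2")
    optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)

    # Each record pairs a prompt with a step-by-step teacher explanation
    # (in Orca's case elicited from GPT-4; here, a single toy example).
    traces = [
        {
            "prompt": "Q: Why does ice float on water?\nA (step by step):",
            "explanation": " 1. Water expands as it freezes."
                           " 2. Ice is therefore less dense than liquid water."
                           " 3. Less dense things float.",
        },
    ]

    student.train()
    for record in traces:
        text = record["prompt"] + record["explanation"] + tokenizer.eos_token
        batch = tokenizer(text, return_tensors="pt")
        # Causal-LM loss over the whole trace: the student learns to
        # reproduce the teacher's reasoning steps, not just final answers.
        loss = student(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()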

    Concerning the new bot: it outperformed an open-source $90 chat bot (might mix up some currencies and numbers here) in some benchmarks by 100%, which is kind of touching, but what's quite interesting is that it ended up level with ChatGPT in some tests. So maybe that's a transformational thing of some kind for solving capabilities, allowing for more focused training of skills. One should be aware that it does not magically outperform ChatGPT, if I am right. It's still interesting how little degradation there seems to be on some benchmarks. Thinking of humans providing those step-by-step explanations, it would be like writing all the training data by hand, which would be a huge amount of work. Maybe social media or learning websites may one day be abused, like with reCAPTCHA, in order to create step-by-step explanations: "Are you an astrophysicist? Good. Answer the following questions to send your aunt instant money: ..."

    The lawsuit stuff - of course. If they can, they will.

    There just remain some entropy questions, or if you will, questions about the basic laws of the nature of the thing, like "can you make a better bot out of a bot without external training data?" It's a bit like asking for another variation of a perpetuum mobile. In the end, using random internet input and then boiling it down into step-by-step information might be a good way of extracting things and allowing for some cross-checking in the process, all automated. So that could really be interesting, but right now they are obviously not ending up cheaper, nor better, as they are leaning on ChatGPT in that research. Maybe there is some future potential for more efficient ways of correction by interaction with those step-by-step explanations, but we are not realtime yet, and there always is the byzantine question with the masses of users, and the random astrophysicist's time is limited too, in a way.

    Post edited by generalgameplaying on
  • skyeshots Posts: 148

    For Daz, AI could be a game changer if executed properly. A few ideas:

    • AI generated textures for characters.
    • AI generated animation timelines from a handful of prompts.
    • AI upscaling as a Rendering output option.
  • Nyghtfall3D Posts: 777
    edited January 1

    skyeshots said:

    For Daz, AI could be a game changer if executed properly. A few ideas:

    • AI upscaling as a Rendering output option.

    I learned about a free solution called Upscayl.org for that last suggestion, thanks to a YT video posted yesterday.  They offer a downloadable app for Mac and Windows.  I used it on one of my old HD pieces and it upscaled to 4K remarkably well.  I'm considering upscaling the rest of my work.

    Note: After upscaling, I converted the 4K version of my Evlin render to a JPG with 8x compression via PaintShop Pro to reduce the file size.

    [Attachments: Evlin.jpg, Evlin - 4K.jpg]
    Post edited by Nyghtfall3D on
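
    As an aside, the post-upscale compression step Nyghtfall3D describes is easy to script as well; here is a small Pillow sketch (the filenames and quality setting are assumptions, standing in for PaintShop Pro's "8x compression"):

    from PIL import Image

    # Open the upscaled 4K render and re-save it as a smaller JPG.
    img = Image.open("Evlin - 4K.png").convert("RGB")
    # Lower quality trades fine detail for a much smaller file size.
    img.save("Evlin - 4K.jpg", quality=85, optimize=True)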
  • I saw that upscale; here is the img-to-img link below, from a video.

    See the attached below: an old V8.1 and an AI img-to-img in Seaart.ai. The AI image improves hair, etc. It's a bit rough and needs improvement, but it shows AI as a renderer that can make skin, hair, etc. more realistic (for those of us who strive for photorealism).

    [Attachments: Stew V8.1.jpg, Stew V8.1 epiCRealism denoise 25.png]
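
    For anyone wanting to reproduce this kind of img2img refinement pass locally instead of through Seaart.ai, here is a minimal diffusers sketch; the model ID, filenames, and prompt are assumptions, and the 0.25 strength mirrors the "denoise 25" in the attachment name.

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    render = Image.open("Stew V8.1.jpg").convert("RGB")
    # Low strength keeps the Daz render's likeness while letting SD
    # rework skin and hair; higher values drift further from the input.
    result = pipe(
        "photorealistic portrait of a woman, detailed skin and hair",
        image=render,
        strength=0.25,
    ).images[0]
    result.save("refined.png")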
  • SnowSultan Posts: 3,596
    edited January 1

    Joequick, did you make those AI examples (the little purple monster, woman in coveralls, etc)?

    Unfortunately, Nyghtfall is right about the accuracy. Even with ControlNet, proper fingers, feet, and more expressive faces are still tough to get. When you see perfect hands on an AI image, they either got *really* lucky or it's postwork.

    Post edited by SnowSultan on
  • Artini Posts: 9,473
    edited January 1

    I think for Joequick it is not a problem. He has already proved what he is capable of achieving by creating a lot of amazing characters by himself.

     

    Post edited by Artini on
  • JoeQuick Posts: 1,704

    SnowSultan said:

    Joequick, did you make those AI examples (the little purple monster, woman in coveralls, etc)?

    Unfortunately, Nyghtfall is right about the accuracy. Even with ControlNet, proper fingers, feet, and more expressive faces are still tough to get. When you see perfect hands on an AI image, they either got *really* lucky or it's postwork.

    Yes, they were Stable Diffusion experiments from over the summer.

  • SnowSultan Posts: 3,596

    JoeQuick said:

    SnowSultan said:

    Joequick, did you make those AI examples (the little purple monster, woman in coveralls, etc)?

    Unfortunately, Nyghtfall is right about the accuracy. Even with ControlNet, proper fingers, feet, and more expressive faces are still tough to get. When you see perfect hands on an AI image, they either got *really* lucky or it's postwork.

    Yes, they were Stable Diffusion experiments from over the summer.

    Thanks, they look good. I was interested because I'm doing a ton of similar experiments. I'm not particularly interested in making 3D art for the foreseeable future, at least not unless we get some real technological advancements, so I'm trying to find ways to get 2D results from combinations of sketches, OpenPose, and using AI generations as references for manual drawing. Results always seem to fluctuate between "eureka!" and "it's impossible" though, haha.   :)

  • WendyLuvsCatz Posts: 38,236

    are you all happy now? devil

  • Diomede Posts: 15,173

    WendyLuvsCatz said:

    are you all happy now? devil

    The video does make me happy.  I am using AI at the moment merely to learn the basic skills.  When I can (a) use an AI database that I am confident was acquired fully legally, and (b) customize that database with the addition of my own creations, then I plan to use it as just another tool.  Both (a) and (b) are very important to make me happy as far as AI goes.  Both are on the way.

     

  • kyoto kid Posts: 41,065
    edited January 14

    Nyghtfall3D said:

    skyeshots said:

    For Daz, AI could be a game changer if executed properly. A few ideas:

    • AI upscaling as a Rendering output option.

    I learned about a free solution called Upscayl.org for that last suggestion, thanks to a YT video posted yesterday.  They offer a downloadable app for Mac and Windows.  I used it on one of my old HD pieces and it upscaled to 4K remarkably well.  I'm considering upscaling the rest of my work.

    Note: After upscaling, I converted the 4K version of my Evlin render to a JPG with 8x compression via PaintShop Pro to reduce the file size.

    ...now this is where AI can be a useful tool

    Unfortunately, after I downloaded and installed Upscayl, when I attempted to open it, the following error message popped up on the screen:

     

    [Attachment: upscayl error.png]
    Post edited by kyoto kid on
  • Artini Posts: 9,473

    Just checked at istockphoto.com, and they provide access to AI-generated images from prompts in packs of 100, for not-so-bad prices, if one wishes to use such images commercially, with insurance on each image for up to $10,000.

     

  • Sven Dullah Posts: 7,621
    edited January 13

    I feel a sudden urge to share two quotes from this article by Ai Weiwei (The Guardian):

    1. "Today’s harsh reality witnesses technology reducing age-old modes of poetic expression and the warmth of art to a somewhat barbaric artifice."

    2. "The rapid development of technology, including the rise of AI, fails to bring genuine wellbeing to humanity; instead, it fosters anxiety and panic. AI, despite all the information it obtains from human experience, lacks the imagination and, most importantly, the human will, with its potential for beauty, creativity, and the possibility of making mistakes."

    yes

     

    Post edited by Sven Dullah on