Comments
Admittedly going out on a limb here, but still:
- "Pro-AI folks" (whoever those actually are?) tend not to argue at all, but to praise the feature they want: "We want / it / now." So arguments don't count for much anyway? This appears to be "crowd work".
- I'm not anti-AI at all. I see issues a) in the mid to long term, and b) with the current HUMANS and their concepts, pushing their specific version of "the whole damn thing".
- Concerning "understanding war" ... will you "succeed" just by not mentioning it? Or is there an actual conflict (ahead)?
But this is the joke: once the actual (side-topic) discussion is done, someone pops up with a lengthy post about how people are anti-AI, and about how (though not actually explaining why) nothing can be done anyway, so why not just proceed as scheduled?
(Concerning "the lie": while the legal experts aren't fully sure, the basic idea is not incorrect. It's encyclopedic by nature, despite doing something more than a direct collage of original images. During the training process, the system expressly recreates source images as accurately as possible; that's what the system is based on. It's not directly stealing, but together with the tagging as it is, it's clearly a very big part of "the problem". Calling it a lie without actually arguing the point, I can see where that is going. It shouldn't go anywhere, if you want to be part of a civilization.)
I don't care if Frank or Richard removes my posts; I'm saying this anyway, because there is another gigantic AI thread where you can argue the legality of AI all you want. This one is supposed to be for REMIXING YOUR ART WITH AI, and I'm happy to try and help with that if I am able, because I believe AI should be used as a tool to help artists, and it's not going away anyway.
Haha, no question "AI" will not go away, at least as a technique, and there will be tools for sure. Also agreed about the other thread. On the other hand, the people posting images here keep arguing among themselves in the same way :p, while the original poster (wisely so?) has vanished from it. So be it, remixing...
To an extent it is currently possible. However, the capability to identify forgeries currently outstrips the ability to create them.
However "flash fakes" which cause a certain event or effect during a short period of time have been around since Photoshop / the last decade. In one instance, it actually caused a protest in Europe. People came to their senses eventually.
Some AI generated art
A final reminder to keep the discussion civil.
I got Corel PaintShop Pro in a Humble Bundle sale and used it to edit the originals, which I included for reference. I'm doing a lot of prompts with different renders to see what I get. Slime was one of the prompts for all of these. I make these as references for drawing.
I have done an even trippier Act 2 of Nosferatu.
I used an AI to generate seamless fabric tiles... it was an interesting and reasonably productive experiment. Unfortunately, I don't think my browser plays well with the others.
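For anyone who wants to try the same thing, here is a minimal sketch of one common approach to tileable output, assuming the Hugging Face diffusers library rather than any particular tool; the model name, prompt, and file name are placeholders:

```python
# Hedged sketch: make Stable Diffusion output wrap at the edges by switching
# every Conv2d in the UNet and VAE to circular padding (diffusers library).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Circular padding makes the latents wrap around, so the decoded image tiles.
for model in (pipe.unet, pipe.vae):
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            module.padding_mode = "circular"

tile = pipe(
    "seamless woven fabric texture, top-down, even lighting",
    height=512, width=512,
).images[0]
tile.save("fabric_tile.png")
```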
I agree, it is a mesmerising view.
It's quite interesting to read this entire thread. Although I hadn't originally planned to post, here is my attempt at a workflow with DAZ and Stable Diffusion.
I used several of my own Blender renders to train an embedding. Then I wanted to combine the embedding and a Genesis 8 custom character that I created, who resembles the embedding, essentially using Genesis 8 as an avatar. That way I can pose her, alter the lighting, change her clothing and hair, render in Iray, and then use img2img to add the embedding. It could also mimic her facial expressions.
Previously I used Iray renders plus img2img without any embedding involved and with a low denoising value, but it was hit-and-miss. Even though I was using the same seed and prompt, changing the lighting, the clothing, or the camera angle would cause img2img to change the face of my character. Additionally, the aspect ratio itself changed the face too.
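(For reference, the img2img step might look roughly like this in the diffusers library; this is only a sketch, and the embedding file, token, model name, seed, and render path are placeholder assumptions, not my exact settings.)

```python
# Hypothetical sketch: img2img over an Iray render with a low denoising
# strength, a fixed seed, and an optional character embedding (diffusers).
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a textual-inversion embedding trained on the character (placeholder path).
pipe.load_textual_inversion("my_character_embedding.pt", token="<mychar>")

render = Image.open("iray_render.png").convert("RGB").resize((512, 512))
generator = torch.Generator("cuda").manual_seed(1234)  # same seed every run

result = pipe(
    prompt="photo of <mychar>, studio lighting",
    image=render,
    strength=0.35,        # low denoising keeps pose, lighting, and clothing
    guidance_scale=7.0,
    generator=generator,
).images[0]
result.save("remixed.png")
```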
Given that I'm using low denoising values and my G8 character already resembles the embedding (albeit not exactly, but real people look different depending on the lighting), it ought to have worked.
If I had the same seed each time, it should have produced the same face. Okay, no! SD doesn't know that this is the same person (or that it is a person at all). When I tried to increase the denoising value, it initially merely combined the two faces, but it ended up with a different combination for each picture, and it also made some caricatures, blurry images, and nightmare fuel. Also, if I adjusted the lighting or aspect ratio, I would get a different person.
I made several attempts to make it work. It was really hit-and-miss. I was able to make it work in the end and keep the lighting, pose, clothing, and hair of the G8 while also producing a person who is reasonably consistent and somewhat resembles my character. The process as a whole was interesting and somewhat irritating. It is still far from ideal, but I like the promise of what this could do someday.
Even with lower denoising levels, SD managed to mess up the hands and other little things.
The embedding is not very consistent in and of itself. The images I've shared are the ones that are more reliable. Even when used in SD with just a prompt it will change the contour of her face, the color of her eyes, and the shape of her eyebrows. Sometimes it would dramatically age her or turn her into a caricature.
I used to overlook all of that in the beginning. Now, looking back at it, I see plenty of nightmare fuel.
Filament and AI
Nice one, Wendy.
Belatedly moved to Art Studio
I've been having fun playing with the potential of Stable Diffusion in my workflow. Here's a remix that isn't quite final but shows some great results. As usual the hands are borked, but they're getting better and better with the likes of ControlNet and some manual inpainting and post-work.
As it stands, though, designing a concept in Daz and then letting AI add that extra photorealism and its own creativity is working well for me.
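For the curious, a hedged sketch of what a ControlNet pass over a Daz render can look like in the diffusers library; the depth model, base model, prompt, and file names are assumptions rather than my actual pipeline:

```python
# Hypothetical sketch: let ControlNet hold the Daz composition (via a depth map)
# while Stable Diffusion repaints the surface detail (diffusers library).
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth_map = Image.open("daz_depth_canvas.png")  # e.g. a depth canvas exported from Daz
image = pipe(
    "photorealistic portrait, soft window light, detailed skin",
    image=depth_map,
    num_inference_steps=30,
).images[0]
image.save("controlled_remix.png")
```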
original Filament render video
AI transformation
Can we even use Stable Diffusion?
I think the EULA was updated?
I do wish we'd keep two running discussion threads. One for opining, gushing and bashing -- the usual eye gouging and hitting below the belt not allowed. Another for tutorials, breaking news and showcasing personal work using AI. Of the latter, ghoulish experiments glomming one's visage onto animals are welcome ... <sigh> grudgingly so.
Cheers!
Yeah, I prefer this thread to be for remixing our art with AI, and I try to use it as such.
AI video
original D|S video
Hey, what's the other thread? Because I don't think some people at Daz3D want us to use AI to remix. Have you seen the changes to the EULA? That's newsworthy, right?
I noticed they don't have Stable Diffusion listed. Is the term "remix" a new thing, and is it allowed?
At the same time, don't we own the copyright to our renders, so we're okay to use our 2D renders as we wish, correct?
@Itera, have you tried a LoRA model with ControlNet? It should provide you with the results you are looking for.
However, I don't know if the workflow is allowed given changes to the EULA.
I'm buying a Blender model after contacting the artist and asking if they were okay with me training LoRA models on images generated from their 3D model.
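As a rough illustration of the LoRA-plus-ControlNet suggestion (and saying nothing about whether the EULA permits the workflow), the combination might look something like this in the diffusers library; the model IDs, paths, and weight file name are placeholders:

```python
# Hypothetical sketch: a character LoRA combined with a ControlNet
# (openpose here, as an assumption) in the diffusers library.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# LoRA trained on your own renders; the pose image keeps the Daz composition.
pipe.load_lora_weights("./loras", weight_name="my_character_lora.safetensors")
pose = Image.open("daz_openpose_skeleton.png")

image = pipe(
    "photo of my character, studio lighting",
    image=pose,
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength
    num_inference_steps=30,
).images[0]
image.save("lora_controlnet_remix.png")
```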
this thread in the commons
https://www.daz3d.com/forums/discussion/441452/ai-is-going-to-be-our-biggest-game-changer#latest
I upsampled and corrected them in photo editing software. I make them as references to draw over, and so many things like faces, fingers, and proportions can be so wrong. Painting can be pretty hard, especially if it's a detailed reference.
I don't know if this counts as remixed or post-worked. I think I spent 2-3 times as much time on the AI/Photoshop part as I did on the original D|S render.
Original
Post-worked
Here is my experiment.
I did not use the AI to modify the final render, but only to generate an interesting background. What I did a lot was test different scenarios where I could put one Daz figure, especially with a good side light.
Then, I put a base pose and rotation. It was tricky to adjust the camera to give the illusion of size and position.
A lot of testing of lights for the main illumination, and of fake lights to cast specific shadows on dummy planes. And some shadows drawn by hand. It is not perfect, but for now it will do.
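If it helps anyone trying a similar approach, the basic compositing step can be done in a few lines of Python with Pillow, assuming the Daz figure was rendered over a transparent background and both images are the same size; the file names are placeholders:

```python
# Minimal sketch of compositing a Daz figure (rendered with alpha) over an
# AI-generated background using Pillow. Shadows would still be painted by hand.
from PIL import Image

background = Image.open("ai_background.png").convert("RGBA")
figure = Image.open("daz_figure_with_alpha.png").convert("RGBA")

composite = background.copy()
composite.alpha_composite(figure)  # uses the figure's own alpha channel as the mask
composite.convert("RGB").save("final_composite.png")
```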
The original image.
I got a different model that I thought would complement the style of my renders more, and I was trying it out on the same render from my last post. You know how the AI does some crazy stuff to your image if you give it too much freedom?
About the only thing left from the original image is the lighting angle. I know I wouldn't be able to get it to generate something like this on purpose. A random outlier that only existed for one step. Fleeting beauty, there for only a moment then gone forever.
a video by me, sort of
I used other people's 3D models as well as AI and music
But
DAMNIT I am A Producer!
Damn right you are!
Another interesting production, thanks for sharing.
I should have said it long before now, but I have to say that I find you very inspiring. Beyond the massive volume of work you've produced over the years, what I find most inspiring is the incredible range of software and methods you employ; from "simple" Carrara to the latest AI experiments, you never, ever stand still. I can honestly say I never know what you're going to try next, only that you will be trying something new as soon as something new is available.
Much appreciated.
-- Walt Sterdan
Thank you.