Comments
The good news is that since the training data set consists of Daz characters, any results should end up with the correct number of fingers on a hand. :-)
Will it be possible to craft a text prompt that adequately captures the subtleties of what you're trying to accomplish? I'm thinking this product may make it easier to do the easy stuff ("make eyes bigger") but precise edits of a character may be difficult to describe via text.
An intriguing development at any rate and I'll be interested to see its progress.
USD export to Omniverse will give RTX owners access to the real-time rendering in Omniverse.
Assuming the cost is low enough that lots of folks play with it, I guarantee the majority of the prompts will be "make me a naked <name of popular actress>". That's the internet for you.
Even if this turns out to be nothing, I guess Daz is smart to use this announcement as a marketing strategy, since a few people have been reporting on it, I see.
It's not really one click, more like a well thought out text prompt description of what you are trying to achieve, which will turn into an interesting character. It should save users a lot of time either way.
As long as it's not using unethical datasets from outside DAZ3D's licensed and proprietary content for the machine learning (e.g. OpenAI, LAION), I'm ok with this.
However, I'm not interested either. It looks more like a toy, in the same way Midjourney isn't really a tool. The video demos also seem a little too slick, like this is more for building hype than showing how the tech will actually look. Regardless, I likely won't be spending any money on it.
I do have a question for someone who has signed up for the beta: what are the system requirements to have this up and running?
OK, great, thanks.
The videos gave concept/mock up vibes, which is kind of misleading of Tafi/Daz.
huh
What is the actual AI element here? I think there are some misunderstandings.
This app does not generate Daz assets. It is not generating a model. It is not using Daz models as training data. What I saw was a piece of software using text prompts to control it. THAT is where the AI is. The AI is in the control interface via text recognition. Just imagine yourself using text prompts to tell Daz Studio what you want, and there you go. We see Victoria 9 at the top of the page where it prompts "Make her eyes bigger" (I am sure that is what everybody is going to make bigger, LOL.) That pretty much shows what this is.
It is important to be clear on what this is, because when people see "AI" they logically think there is an element of generation here, and there are apps that can create 3D objects.
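To make that concrete, here is a toy sketch (every name here is invented, not the real product's API) of what a text-control layer over existing dials could look like:

```python
# Toy sketch of prompt-driven control (all names hypothetical): the "AI"
# layer turns free text into ordinary parameter changes; it does not
# generate any new mesh data.
import re

# Map recognised phrases to a morph dial and a direction of change.
PHRASE_TO_DIAL = {
    "eyes bigger":  ("Eyes Size", +0.2),
    "eyes smaller": ("Eyes Size", -0.2),
    "nose wider":   ("Nose Width", +0.2),
}

def prompt_to_commands(prompt):
    """Return (dial, delta) commands for phrases found in the prompt."""
    return [cmd for phrase, cmd in PHRASE_TO_DIAL.items()
            if re.search(phrase, prompt, re.IGNORECASE)]

print(prompt_to_commands("Make her eyes bigger"))  # [('Eyes Size', 0.2)]
```

An LLM presumably replaces the brittle phrase table, but the output is still just dial changes.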
I have not seen how this software is monetized. There are really only a few ways to do it: a monthly fee, or it asks you to buy the parts it uses to build the thing you ask it for. Considering how messy that could get, I assume this must be a monthly fee, and Tafi divides the money with PAs whose products get used. I am guessing there will be different tiers of plans, with limits on how much you can create with the smaller plans.
There could also be a "pay per creation" model, but I don't think most people would want that, as they at least want to play around first. After all, somebody will probably want to make a NVIATWAS for a giggle. And then they might want to make a more serious MSIACRACP (Michael Standing in a Christian Rock Album Cover Pose). You gotta have balance!
Categorization is one task that neural networks typically perform well, and this appears to be a great way to leverage the metadata for DAZ assets. Combine this with the ability to understand text prompts (another thing that networks are already trained to do) and it seems like a perfect fit to me.
The video I watched didn't look like it was promising anything that isn't achievable - just a useful application of the technology. It's like a layer of abstraction above dial spinning. Personally, I'm excited to try it out and signed up for the beta.
- Greg
Correct - new mesh is not being created, nor is the AI trained on PAs' meshes. The AI is pulling from data in the store to create a scene in Daz Studio, based on your prompts. The advantage, especially for new users, is time: you don't have to go and scroll through pages of products to maybe find what you want. By typing you can instantly get all the relevant spacesuits, change the colour, change gender, etc. If you don't know how to set up lighting, no problem. Add a scene? Done. Don't have a computer? Do it on your tablet and receive a render.
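As a rough illustration of that retrieval side (invented records, not the real store schema), typed queries over tagged metadata could be as simple as:

```python
# Minimal sketch of tag-based retrieval over store metadata (records
# are made up): typing "spacesuit" surfaces every matching product
# without scrolling through pages.
ASSETS = [
    {"name": "Astro Explorer Suit", "tags": ["spacesuit", "sci-fi", "outfit"]},
    {"name": "Victorian Gown",      "tags": ["dress", "historical", "outfit"]},
    {"name": "Orbit EVA Suit",      "tags": ["spacesuit", "outfit"]},
]

def search(query):
    q = query.lower()
    return [a["name"] for a in ASSETS
            if q in a["name"].lower() or q in a["tags"]]

print(search("spacesuit"))  # ['Astro Explorer Suit', 'Orbit EVA Suit']
```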
I can think of several ways - but keep in mind that this is the BETA; direction can change during testing, and new features might be added that aren't obvious right now. Personally I think it would be pay-for-what-you-use in the render or exported scene, with an interactive license added if you need one. I already saw a number of people excited that they can access Daz models without going through the process of creating something in Daz Studio.
Take the uses a bit further. One of the problems with using Daz Studio in schools is, of course, the nude models aspect. With an AI in charge, you can set parameters to never show nudity, and students can render out images without ever needing to open the program.
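Purely guessing at how such parameters might be exposed, a per-deployment policy could look something like:

```python
# Hypothetical classroom policy; field names are invented, not the
# actual product configuration.
POLICY = {"allow_nudity": False, "allow_gore": False}

def prompt_allowed(prompt, policy=POLICY):
    banned = [] if policy["allow_nudity"] else ["nude", "naked"]
    return not any(word in prompt.lower() for word in banned)

print(prompt_allowed("astronaut on the moon"))  # True
print(prompt_allowed("nude figure study"))      # False
```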
Or say an app writer needs to come up with a consistent character for his app - image AIs right now are time consuming to learn if you want consistent results, and then there's the fingers problem :D Render out several consistent and themed images from Daz Studio, without needing to learn complicated prompt crafting or a program that's not in your skill set.
I am sure users here can come up with plenty of use case scenarios. The tech is new, and a lot of people are not familiar with AI at all.
I also don't see why what you render would be an issue with this model - if it's NVIATWAS then go for it - DDs for the win. :D The images are not going to be shown in a public gallery on Discord like with Midjourney - which is why they have the no gore, no nudity rule - they have to abide by public internet rules for Discord. What you do with your image is your own business. That said - there might be some rules for child images, and I would fully support that.
What would be really cool is help in posing an existing character, as posing is often the most time consuming part of building a scene, particularly if they are interacting with props.
So for example: "character sitting in chair with a can of coke in one hand and a sandwich in the other"
That would not be easy, but if it could be done without lots of overlapping with the prop(s) it would be impressive.
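Speculating about what would be under the hood, the prompt would presumably have to be decomposed into per-joint goals before any solver runs; something like this sketch (all names invented):

```python
# Hypothetical decomposition of "sitting in chair with a can of coke in
# one hand and a sandwich in the other" into IK-style goals. The hard
# part a solver must handle is penalising prop interpenetration.
scene_props = {"chair", "coke_can", "sandwich"}

ik_goals = [
    {"joint": "hip",        "goal": "seated_on", "target": "chair"},
    {"joint": "left_hand",  "goal": "grasp",     "target": "coke_can"},
    {"joint": "right_hand", "goal": "grasp",     "target": "sandwich"},
]

for g in ik_goals:
    assert g["target"] in scene_props, f"missing prop: {g['target']}"
    print(f"{g['joint']}: {g['goal']} -> {g['target']}")
```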
This does sound very interesting; I also found a video about it.
Thanks for the information. Very interesting. It is good to have that clarity. I had not even considered how Midjourney is all public like that, I only messed with a local install of Stable Diffusion a few times. I can see how somebody might think all the stuff is viewable publicly, too. Lots of good info there.
So this is all running in the cloud then? There is no local option? Can the created scene be exported to a local installation like Daz Studio? I imagine this is using Daz Boost as part of its cloud.
If this thing actually works it would be pretty cool. It can also serve as a way to bring people into Daz in full. I am sure that this software will still have limitations, and if people like what they are doing they might want to try out the full program. There is a lot of potential with this software. But it has to deliver something out of the gate. For a beta growing pains are expected, but I also think it needs to be able to wow people to get their attention in a period where so many things have a wow factor. This is still competing with the AI generators even if they have their own limits. If it can truly wow people, it could be something that breaks Daz into the mainstream like never before. Even Epic doesn't have something like this (at least not yet).
Speaking of limits, are there any details on what size a render can be? It has to be bigger than what AI generators do, although that should be pretty easy since most only pop out 512x512 images online. In local installs, I can get a 2048x2048 image with 24GB of VRAM on my 3090. Of course I can render way higher than that.
On seeing this, one thing occurred to me: maybe this is why there has been so little news about what is going on with Daz Studio 5.
From what I can gather from the promotional blurb, this software seems more like an AI-driven character generator that you then export to Daz Studio, Unreal Engine and so on for the final touches. The only way we will really know for sure is when folks get access to the beta.
If all the software does is have you type what figure, texture, or pose you want instead of clicking it, then it seems kind of pointless. That would be like going from Windows and macOS back to the MS-DOS command line. I mean, I like command lines - but usually because you can do things you can't easily do without them. In this case, it just seems like it would make things slower, especially if it understands what you type only as well as the DAZ search engine or Smart Content do.
As I see it there's nothing interesting about it. I mean, at all. It's just an AI interface working the controls for you. Instead of loading assets and setting the sliders, you tell the AI to do that for you. But it can be useful for people with handicaps, assuming they can write.
As for the "exporting to game engines" part, good luck with it. I mean, unless the AI is able to optimize the mesh and textures and convert the materials, which I strongly doubt, the Daz assets are unusable in games.
I'm willing to wait until the beta is well along; then I'd like to see an example text file and the generated result.
Of course Daz Studio is going to have more freedom. That isn't the point at all. This app is not really for us.
Besides, Omniverse has Iray, too. So you don't need to convert a thing. And there is a reason to use it, because Omniverse gets updated faster than Daz Studio. The new Iray 2023 just dropped, and Omniverse will get it soon. DS on the other hand has to go through its testing before they will put it in, but the Iray team has already tested it on their end.
Plus modern game characters are a lot more dense than Genesis at its base res. Even game textures are growing quite large: 2048x2048 is pretty common now, with 4096x4096 becoming a lot more common. I am even finding textures that are 48MB in size in some of these games. These textures contain a lot of data as they are layered, but it's still a 48MB file, and one of numerous similar files in use. This is why 8GB GPUs are struggling to run new games: game devs are tired of downsizing everything for last-generation tech. Genesis 2 was a struggle for game engines, but that is ancient history.
One character's HEAD in The Last of Us Remake has 20,000 polygons and 350 joints. That doesn't count the body. These are triangles, but that's still a decent number for just a head. Genesis 8 has 170 bones in total, and 18,000 polygons (though they are quads) for the entire body. The game has multiple characters on screen with dense foliage all around. Most Daz scenes are not as big.
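For a sense of scale, some back-of-envelope arithmetic on those texture sizes:

```python
# Uncompressed RGBA8 texture memory, no mip chain.
def texture_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 1024**2

print(texture_mb(2048, 2048))  # 16.0 MB
print(texture_mb(4096, 4096))  # 64.0 MB
# Mipmaps add roughly a third on top; GPU block compression (BC7 and
# friends) cuts it to about a quarter, which is how a layered 4k file
# can still land in the ~48 MB range on disk.
```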
Nobody is putting this in a game engine for a game. Well somebody might, but more likely a user would be creating animations. You don't have to be running at high frame rates for that.
@Padone
As the export format will be USD, your figure will already have to be animated before export.
AFAIK USD will not export a rigged figure that can be re-animated in Unreal with the UE control rig.
So you will likely have to animate in Daz Studio,
unless they are planning some new animation system, or a mocap retargeting step to attach external mocap before export.
We will have to wait and see.
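For anyone curious what "animated before export" means in USD terms, here is a minimal sketch with Pixar's pxr Python bindings (my guess at the workflow, not anything confirmed by Tafi/Daz): the motion is stored as per-frame samples, not as a rig you can re-pose later.

```python
# Bake a translate channel as USD time samples: the file carries frames,
# not a control rig.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("baked_anim.usda")
stage.SetStartTimeCode(1)
stage.SetEndTimeCode(24)

xform = UsdGeom.Xform.Define(stage, "/Figure")
translate_op = xform.AddTranslateOp()
for frame in range(1, 25):
    # One sample per frame; downstream apps can only play this back.
    translate_op.Set(Gf.Vec3d(0, 0, frame * 0.5), time=frame)

stage.GetRootLayer().Save()
```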
As for those skeptics who are asking "where is the AI here?":
Clearly there is an LLM and possibly an NLM at work here, which could lead to voice control in the future.
Kudos to Tafi/Daz for this adaptation to the future.
@outrider42 We don't have to compare the base Genesis figure; the average HD figure with outfit, hair and accessories would be a fair comparison with a game figure. As for environments, games are much better optimized too. If you do a fair comparison of an average indoor/outdoor scene with some figures, you will see that Daz assets are unusable in games. As for animation, the situation doesn't improve much: Daz assets are in general too heavy for a good framerate even on powerful hardware. You have to avoid HD figures and reduce textures to use Daz assets in animation, not to mention absolutely avoiding Iray, which is slow as hell, as @wolf359 well knows.
As it is now daz assets are only good for single shots on a small scene, that's what daz studio is most used for.
As for material conversion, when you start to use geografts and shells the only option is to bake the textures to a new UV layer before exporting. Even so, you have to convert materials; it is not enough to export the textures, and with Iray this is trouble because it supports a gazillion material modes and shaders. You may have moderate success limiting the conversion to some uber modes; that's essentially what we do at diffeomorphic. Again, good luck with it.
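A sketch of that uber-only idea (channel names approximate, and the mapping is far simpler than what a real converter does):

```python
# Map a handful of Iray Uber channels onto a generic PBR material and
# ignore everything else; exotic shaders simply don't convert.
UBER_TO_PBR = {
    "Base Color":       "base_color",
    "Metallic Weight":  "metallic",
    "Glossy Roughness": "roughness",
    "Cutout Opacity":   "alpha",
    "Normal Map":       "normal",
}

def convert_uber(channels):
    """Keep only the channels we know how to translate."""
    return {UBER_TO_PBR[k]: v for k, v in channels.items() if k in UBER_TO_PBR}

print(convert_uber({"Base Color": "skin.png", "Top Coat Weight": 0.3}))
# {'base_color': 'skin.png'}  <- Top Coat silently dropped
```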
If this were for DAZ Studio and your library of products, that would be great. However, I get the impression this is for the few products they ported for game engines. It requires that you have accurate, meaningful, and granular tags/identifiers for each product. My opinion is based on:
"Tafi is using a massive 3D dataset derived from its proprietary Genesis character platform, widely supported by professional artists, to generate tens of billions of 3D character variations, each comprised of hundreds or even thousands of unique, known values. This meticulously labeled and organized dataset is ideal for efficiently training its foundation model."
If they currently have granular identifiers of every product model and texture since Genesis in a dataset, I would love a copy of that - even in an Excel workbook, just to help categorize the thousands of DAZ products purchased in my current database.
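Purely guessing at the field names, one record in such a dataset might look something like:

```python
# Invented example of a "meticulously labeled" asset record; none of
# these fields are confirmed, it just illustrates the kind of metadata
# the press text implies.
asset_record = {
    "sku": "00000",
    "name": "Example Outfit for Genesis 9",
    "figure": "Genesis 9",
    "type": "wardrobe/outfit",
    "tags": ["sci-fi", "spacesuit", "unisex"],
    "materials": ["Iray Uber"],
    "morphs": ["Adjust Waist", "Expand Thighs"],
}
```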
So the advancement in PC technology has come full circle, back to text prompts.
Sorry, couldn't resist... "Ken sent me"
The press text makes it sound a bit like this is going to be "Midjourney, but 3D" but watching the promo video makes it clear that it's something else entirely. Really more of an alternative way to interact with DAZ, as Mada and others have already stated.
Could be very interesting and useful, depending on how the implementation and execution is going to be! At the very least it could be a fun and convenient randomising tool, and possibly much more than that.
I wonder if it can only use morphs and textures that have been correctly categorised (a bit like Smart Content), or if it can recognise and handle resources from other stores or self-made resources.
Off the top of my head, some prompts that I would love to try, to see what its limits are:
"Make me a character that looks a bit like Channing Tatum, use only morphs and textures I already own."
"Convert this scene to 3DL, use Visual Style Shaders for characters, clothes, and hair only. Use conventional 3DL shaders for the background."
"Make her smile less unhinged."
"Make his biceps look more natural"
"Use DForce or Smoothing Modifier to make the couch collide naturally with the character"
"Open the door. Open the door! HAL, do you read me? Open the door! Open the BLEEPing door, HAL!!!"
I'm pretty sure it will be for items in the Daz store only; it's not running locally but on a server, and it won't have access to other stores' data. Self-made would be the same, but if you have items you created yourself, chances are you have a strong enough computer and would want the scene to tweak and not just a render, which means you can load it in the scene once it's on your computer.
Prompt crafting is going to be important as always - being able to create famous living people from images is going to be fraught with legal implications, so I'm certain that saying "make Channing Tatum" is not going to work lol - what you would have to do is describe his face accurately in text and tweak the results - things like make the mouth wider, a little bit higher, etc.
edited to add: I can actually see the possibility of famous face prompt craft recipes being traded
Using only items you already own - I suppose that is doable, depending on whether the AI is going to have access to users' data.
Converting a scene to 3DL is challenging as well, but I think it could be in the realm of possibility if you use 3DL shaders and not the textures that came with the clothing.
Make her smile less unhinged - I think you would have more success tweaking the strength of the smile than asking for "less unhinged" lol; a lot of that is subjective to the viewer. More natural biceps, the same.
I am curious how dForce is going to work. Is the AI going to sim scenes? Tweak dForce settings? No idea yet - but if it's something that gets asked a lot in the beta, it might end up being trained into the AI.
To open the door there first needs to be an outside :P
Thanks for taking the time to answer!
I'm not entirely sure if I'll have use for the tool as I'm imagining it now based on your descriptions; but I'm curious and looking forward to playing around with it once there is a wider release!
It isn't 2015 anymore; this line of thinking is completely outdated. Indeed, Genesis and Daz assets have been brutal on game engines in the past, but there are games that have used Genesis models nearly straight up. They had high requirements - but those were high requirements for their era, an era that is firmly in the past. Not only is hardware far more powerful, the software is, too.
Unreal Engine 5 dynamically adjusts geometry AND texture sizes on the fly as needed in a given scene, totally nullifying your key points. You can have insane numbers of polygons, 4k and yes even 8k textures all going on at once and not even worry about it because the engine takes care of it for you. You need to look up some Unreal 5 presentations to see what it is capable of doing. The Quixel Megascans use 8k textures by default! We have people screaming bloody murder over some Genesis 9 normal maps being 8k, while Megascans include unique 8k textures...for video games, lol.
Unreal 5 has some pretty high requirements itself, but it works. Unreal can handle far more geometry and texture data than Daz Studio ever can, thanks to its ability to adjust the data as needed. If you dropped several million vertices into Daz, it would slow to a crawl under the mesh density, because it has no method of reducing mesh density without user intervention. But you can drop dozens of such items into Unreal 5 and it will keep humming along. We are entering a new age.
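Conceptually (this is not Nanite's actual algorithm, just the general idea), the engine picks geometric density from how big an object is on screen:

```python
# Toy screen-space LOD selection: detail follows projected size, so the
# scene cost stays roughly constant no matter how dense the source
# assets are.
def pick_lod(screen_px):
    if screen_px <= 64:
        return "low"
    if screen_px <= 256:
        return "medium"
    if screen_px <= 1024:
        return "high"
    return "full"

print(pick_lod(100))   # 'medium'
print(pick_lod(2000))  # 'full'
```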
Sure the shaders are different, but the idea that video games cannot possibly cope with the "massive" scale of Daz assets is dead to the point it is comical. We are entering a time period where a big game asset will have more detail than Genesis packed with clothing and hair. I've been telling you guys this day is coming.
So what purpose could this serve? Well, one option is somebody who is familiar with Unreal but not familiar with Daz Studio. And odds are high this is the case, as niche as DS is. They can skip trying to learn DS and jump to the Unreal part, and make additional optimizations there if they wish.
That's just a suggestion. The users run the full spectrum. There is no typical Daz user, nor is there a typical Unreal user. Everybody is different and has different skill sets. All this software is doing is lowering the bar for Daz Studio. I think that is great. It could even be helpful to people who already use Daz; it could help them come up with ideas they never even thought of.
Again, that may work for a static object as a prop, but unfortunately not for animation, especially character animation. The culprits are HD morphs and JCMs, used especially by G9 as well as most HD figures. The game engine can of course use dynamic subdivision and textures - this is not new, 3Delight could do it ages ago - but this only supports the HD shape, not HD morphs. Unless you export Alembic, but that would be a huge requirement in terms of memory for HD animations.
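Some quick numbers on why Alembic caches of HD animation get huge:

```python
# Positions-only vertex cache estimate: 3 float32 components per vertex
# per frame, no compression.
def alembic_gb(vertices, fps, seconds):
    return vertices * 3 * 4 * fps * seconds / 1024**3

# e.g. a figure subdivided to ~4M vertices, 30 fps, a 10-second shot:
print(round(alembic_gb(4_000_000, 30, 10), 1))  # ~13.4 GB
```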
Then there are the shaders, as you agree, and that's not a minor point at all.