© 2024 Daz Productions Inc. All Rights Reserved.
Comments
When it comes to lip sync in DAZ Studio: you can use the 32-bit version of DAZ Studio, which has lip sync built in. It does a fair job with Genesis and Genesis 2 characters, but it does not include the DMC files required to make Genesis 3 or 8 work.
I've been animating with DAZ Studio for about 13 years, and over time I found the best results come from hand-keyframing the lip sync movements as needed. You can match the lip movements to the sound file by importing audio into the DAZ timeline and keyframing the mouth movements against it. At the top of the DAZ app, click Edit, scroll down to "Audio", and click to open the audio import box, then choose a .wav sound file to import into the timeline.
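DAZ Studio doesn't expose this workflow to an external script, so as a rough stand-in for the manual process described above, here is a hedged Python sketch (the function name `mouth_open_keyframes` and the loudness threshold are my own illustrative choices, not anything from DAZ): it reads a mono 16-bit .wav file, measures loudness in one-animation-frame windows at 30 fps, and suggests a mouth-open value (0..1) you could eyeball against when placing keyframes.

```python
# Illustrative sketch only: suggest mouth-open keyframe values from a WAV
# file's loudness, as a rough guide for hand-keyframing lips to audio.
# Assumes a mono, 16-bit PCM .wav; names here are hypothetical, not DAZ API.
import math
import struct
import wave

def mouth_open_keyframes(path, fps=30):
    """Return [(frame, mouth_open)] suggestions from a mono 16-bit WAV."""
    with wave.open(path, "rb") as w:
        n = w.getnframes()
        samples = struct.unpack("<%dh" % n, w.readframes(n))
        rate = w.getframerate()
    window = rate // fps  # audio samples per animation frame
    keys = []
    for f in range(len(samples) // window):
        chunk = samples[f * window:(f + 1) * window]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        # 8000 is an arbitrary "fully open" RMS level for 16-bit audio
        keys.append((f, min(1.0, rms / 8000.0)))
    return keys
```

Silent windows come out at 0.0 (mouth closed) and loud windows near 1.0, which matches the basic eyeballing you do when scrubbing the timeline against the waveform.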
With hand keyframing you can dial in expressions as needed, too. I know a lot of people find hand keyframing tedious and not very good for animation, but honestly, once you have done it a while you'll find you do more hand-keyframed animation than you do with canned animated presets. But again, that is just my opinion, and what do I know? I still do animation old school.
The animation below is not 3Delight; I did the whole thing in Iray. It was the first animation where I used MS Azure AI voices, and I hand-keyframed all the lip movements, to give you an idea about hand keyframing.
@ Sven:
For specular, I guess a distant light and raytracing should get a decent result, and you can use maps too. For skins I've been using UberSurface base shaders, multiplying diffuse or adding/subtracting ambient by faint colors; I tend to avoid SSS to speed up renders, and postwork can help too. For skin tuning I usually work with two monitors to obtain an average between the two outcomes.
I think gamma correction is not a must for still renders, but it's greatly useful for animations; usually I set gamma 1.8-2.0 for natural environments, 2.2-2.4 for urban ones or interiors, then tweak the whole thing in post.
@ Ivy:
Great, but hard work with the sync; I'm hoping to wait for a facial animation addon, since I'm not an animator and never will be.
As things shift even further out of control, the whole day gets rounded off with a Dementor attack.
Very interesting, tks for sharing! Yes, setting things up for animation is very different from making stills. I simply use the dz default shader for skins, or, like you, the UberSurface without SSS. The obvious problem with UE2 and relying on specular rays from direct light is that everything in shadow will have no highlights, unless you add some low-intensity specular lights, or use blurred reflections, which can look nice but affects render times somewhat.
Think you've found a nicely working speed vs quality compromise here
Edit: I want to add that I use the dz default simply because it has the scatter feature to simulate SSS. You see, I refuse to use ambient to enhance surfaces.
thx Sven,
You're right about specular highlights. I usually use the glass shader (photon material: chrome) when I want to fake materials with shiny highlights, like the alloy wheels in the image below. It's a bit unnatural and needs a lot of tweaking, but it works decently.
I tried the dz default skin; it works fine, but the UberSurface shader gives me more tuning channels and, as in Carrara, if you want a more reddish skin you can multiply the diffuse map with a light color. Conversely, if you need to wash out a dark red map you can add a pale color in the ambient channel, which is very useful for toning down highly saturated maps.
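The two channel tricks above can be sketched on a single RGB pixel (my own illustration of the general color math, not the shader's actual code): multiplying diffuse by a light warm color tints the result toward that color, while adding a pale color, as in the ambient channel, lifts and desaturates it.

```python
# Sketch of the two channel tricks described above, on one RGB pixel (0..1):
# multiplying diffuse by a light warm color tints it toward red, while
# adding a pale color (as in the ambient channel) lifts and desaturates it.
def multiply(pixel, color):
    return tuple(min(1.0, p * c) for p, c in zip(pixel, color))

def add(pixel, color):
    return tuple(min(1.0, p + c) for p, c in zip(pixel, color))

skin = (0.8, 0.5, 0.4)
warmer = multiply(skin, (1.0, 0.85, 0.8))  # green/blue pulled down: redder
washed = add(skin, (0.15, 0.15, 0.15))     # all channels lifted: washed out
```

Multiplying can only darken channels (so it deepens a tint), while adding can only brighten them (so it flattens saturation), which is why the ambient trick tames an over-saturated map.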
Rita
The Aurors show up, things get sorted, Petunia and Vernon are restored to their proper shapes, and Harry is packed off to #12 Grimmauld Place to finish recovering.
The house gets put back together, and Snape obliviates the evidence. With prejudice.
a quick comparison between 3de and iray biased (interactive?!) render
3delight render time: 2'13'', 1920x1080
iray biased (1 iteration only): 29'', 3200x1800 (CPU+GPU); 2'08'' CPU only
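The two times above are at different resolutions, so a fairer comparison is seconds per megapixel. A small sketch (my own normalization, assuming render time scales roughly linearly with pixel count, which is only approximately true):

```python
# Normalize the quoted render times to seconds per megapixel, assuming
# render time scales roughly linearly with pixel count.
def sec_per_megapixel(seconds, width, height):
    return seconds / (width * height / 1e6)

t_3delight = sec_per_megapixel(2 * 60 + 13, 1920, 1080)  # ~64 s/MP
t_iray_1it = sec_per_megapixel(29, 3200, 1800)           # ~5 s/MP
```

On that per-pixel basis the single-iteration Iray biased render comes out over ten times faster, which is why it looks so promising for animation below.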
thx Barefoot,
3Delight is very fast, but the Iray biased (AO-based) engine is surprising for its speed and effectiveness; for example, I uploaded the undenoised images after 1 iteration only, and they're pretty usable for animations. After 2 iterations (34 seconds in this case) the noise is practically invisible. This holds with optimized shaders, but unfortunately it needs more time to render DAZ figures, hair, clothes... anyway, I'm experimenting with how to drop render times.
Another undenoised 1-iteration Iray shot with a different HDR, to match the 3de outcome.
You see, my focus is on the animation side: how to optimize renders to balance realistic quality against render times, for both D/S and Carrara, nothing more. Nonetheless, I can see many similarities between the Iray biased renderer and 3Delight. I guess they can be used together with Carrara and the DAZ universe to make noteworthy, effective indie productions.
thx, really appreciated
As said, I'm not an animator, though I'm ready to assist indie productions on the lighting side and such.
My hope is that good clips making use of the DAZ universe won't be made solely in Blender, Unreal, Unity, or other tools. That would be a failure, don't you think?
Made this for LL's 2022 Challenge...
Cold Starting
Having trouble starting your Wraithgate on a winter's morn? Give it a hefty whack with a hammer!
a dude
@Sven Dullah, very nice render; that is almost comparable to Iray. Did it take long to render?
I did this today for fun in 3Delight. It's an old Poser set.
Twister
Tks @Ivy for your comment! Didn't check the render time; probably in the range of drinking a cup of coffee and walking the dog around the block;) I'd say, on my 2016 iMac, these kinds of "character against random backdrop" renders finish in anywhere between 10 and 60 min., depending on the hair model (and pixel size). Garibaldi renders very fast with scripted pathtracing, so I use it a lot:)
It looks great; you did a nice job on it.
I just rendered this out to check out one of Summoner's great props, called Totems!
street view
Sorry for not responding to this earlier, I'm playing catch-up.
My DAZ Studio has DMCs for both Genesis 3 and Genesis 8, for what it's worth. I don't do any rendering in the 32-bit version, but I've found it handy to pop into 32-bit, apply a batch of sound files, and save them, one by one, as pose presets that I then apply to different G8 characters in the 64-bit version. Because the lip sync is applied to a generic G8 model, I can then apply the same pose to different characters and the lips sync fairly well, despite them being different characters.
-- Walt Sterdan
...gimped...
Yeah, I don't render in the 32-bit version either; I think it's limited in how much RAM it can use. I do have the DMCs for G3 & 8, but I forgot where I got my copies from.
But when I mentioned using the 32-bit version for lip sync, I was talking about how you can use the 32-bit version to make your lip sync movements, save the result as a scene file when done, then load that scene in the 64-bit version and finish the work around the lip sync you created in 32-bit for rendering. Personally, I found doing the lip sync by hand just faster, with good results, especially for close-ups where you can dial in expressions. But that is just my way of doing it in DAZ Studio.
I'm still very much a beginner at animation, and I'm in awe of your keyframing skills, no question!
I did find, as mentioned, that popping a blank G3 or G8 character into 32-bit, lip synching, saving as a pose preset (rather than a whole scene), clearing, doing another, clearing, etc. lets you build a quick library of lip-synched poses. Once you switch back to 64-bit, you can apply the poses to any G3 or G8 character (with the subtle head movements and blinks that Mimic does so much better than most automatic systems) and keyframe any extra changes of expression.
I think where we differ is that I find it quicker to load only a single character in 32-bit and save only the pose preset. I think it allows a more open-ended workflow, as you can open a multi-character scene, apply one lip-sync pose preset after another to different characters in the whole scene in a matter of minutes, and then keyframe the rest, knowing that the lip sync is already good. If you apply the lip sync to a character with an emotion or expression already set, those tend to stay after the pose is applied, I believe.
It's still very annoying to me that they never added the 64-bit Mimic to DAZ Studio, even though they did use a 64-bit plug-in for Carrara.
-- Walt Sterdan
IMO, I don't think there really is a right or wrong way to make an animation; there are just the results, and the way that works for your workflow is right for you. Plus, everyone has a different workflow for making their animated stories, so how can it be the wrong way if it's working for them? Sorry, I was trying not to sound like Confucius.
Babina Aboard Antares