Comments
Thank you! Got it and the script did the job. I'm rendering my first test animation now. Just from the preview, it looked like it worked. Will let this thread know how it goes. :)
Here are the partial results of my first test: a teenage tiefling girl talking to herself. Rendered with very few iterations to try to get the frames quickly, so it's grainy.
Excited to keep experimenting.
LayLo 3D - question about the smoothing options when bringing the JSON file in. What would be reasons for choosing higher or lower smoothing settings? Is it more of a consideration when retargeting to "extreme" characters, or just a preference thing? I noticed with this animation that sometimes the model didn't seem to close its mouth as much as in the original recording - does smoothing influence that?
Looking forward to giving this a real test now. I was umming and ahhing about the Face Cap unlock but hopefully don't need that now.
It's looking good so far!
Are you making sure to auto calibrate before recording? Not calibrating could cause the mouth not to shut all the way. Smoothing could too, but I don't think it would unless you turned it up from its default value. Speaking of the default value, I actually meant to set it to 1 for the morphs. A value of 1 really shouldn't smooth things very much, since the app records at 60 FPS and most 3D animations are rendered at either 30 FPS or 24 FPS. Essentially, if you're creating a 30 FPS video and you use 0 smoothing, playback just skips one of every two recorded frames. With a value of 1, instead of skipping a frame it takes the average of the two frames and uses that.
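If it helps to see it as numbers, here's a rough sketch of that idea (purely illustrative, not the plugin's actual code): a 60 FPS capture baked down to 30 FPS either skips every other reading or averages each pair, depending on the morph smoothing value.

```python
# Illustration only: downsampling a 60 FPS capture to 30 FPS keyframes.
def bake_60_to_30(readings, morph_smoothing):
    """readings: per-frame morph values recorded at 60 FPS."""
    out = []
    for i in range(0, len(readings) - 1, 2):
        if morph_smoothing == 0:
            out.append(readings[i])                          # skip the in-between frame
        else:
            out.append((readings[i] + readings[i + 1]) / 2)  # average the pair instead
    return out

print(bake_60_to_30([0.0, 0.4, 0.8, 0.6], morph_smoothing=1))  # [0.2, 0.7]
```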
There's also the weighted moving average (WMA) that blends the latest recorded value into the one recorded before it. Using a WMA that's too low could also cause the mouth not to close all the way.
Here's how the smoothing options work:
Face Mojo iOS App Smoothing Options
If you open up the options for the app you will see a 'Weighted Moving Average' (WMA) slider. This slider controls the amount of blending of recorded values between frames. It subtracts the last reading (L) from the newest reading (N), multiplies it by the weighted moving average (W), and then adds back in the last reading (L).
(N-L)W + L
So, a WMA of 0.8 essentially blends in 80% of the newest reading to help reduce jitter. A WMA of 1 disables any smoothing it would provide.
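In code form, the blend described above looks roughly like this (illustrative sketch only):

```python
def wma_smooth(readings, w):
    """Blend each newest reading (N) with the last one (L): (N - L) * W + L."""
    smoothed = []
    last = readings[0]
    for n in readings:
        last = (n - last) * w + last   # with W = 1 this is just N, i.e. no smoothing
        smoothed.append(last)
    return smoothed

print(wma_smooth([0.0, 1.0, 1.0, 0.0], w=0.8))  # [0.0, 0.8, 0.96, 0.192]
```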
Daz Studio Smoothing Options
When baking the animation in Daz Studio you are also offered additional smoothing options.
They smooth the animation by reducing the number of keyframes. The slider value multiplied by two is used as N: N keyframes are added together, divided by N, and the result is used for the Nth keyframe.
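In rough code form, that's something like this (just a sketch of the idea, not the plugin's exact implementation; the windowing and edge handling may differ):

```python
def reduce_keyframes(keys, slider):
    """Average every N = slider * 2 baked keyframe values into one keyframe."""
    n = max(1, slider * 2)
    return [sum(keys[i:i + n]) / len(keys[i:i + n]) for i in range(0, len(keys), n)]

print(reduce_keyframes([0.1, 0.3, 0.2, 0.6, 0.5, 0.7], slider=1))  # [0.2, 0.4, 0.6]
```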
That info is also in the iOS app's manual now. I've been working on it and will probably continue to refine it a bit. Here's a link: https://www.laylo3d.com/face-mojo-ios-app-manual/
Thanks! That info is super helpful! And great to see the manual up now. Thanks for your work on all of this.
I plan to test Face Mojo soon, within a few weeks. I have high hopes. Your YT demos look very good. The fact that you have made some docs is encouraging.
I'm wondering if I will be able to lipsync the DAZ Housecat and the dog with Mojo. They've got some other animals with visemes too. Only DAZ has these, you know.
I've got a new iPhone XR I bought just for facial mocap. Your current sale price is irresistible, too.
Be seeing you.
I plan to get the app too, but it has a bad rating so far (1/5). Could someone share their feedback here?
I bought it myself, when it was on sale, but I haven't had time to test it yet. I may test Facemotion as well. I will say that these facial mocap plugins give DAZ a future as an animation tool. I have high hopes. Like you, I want to see what other people are doing.
I recently downloaded Laylo's app from the iOS App Store and with the few tests I've done, it works fine. Getting the files (motion capture and optionally, audio capture) from the iPhone to your computer is convoluted, but that's due to the way iOS works and not the app.
Since I run Daz Studio on a PC, it's easiest for me to use "Save to Dropbox," but obviously you need to have Dropbox installed on your iPhone and your PC. I've read that some people use email and that works fine also.
Once you have the files exported, you can access them from within Daz Studio using the Face Mojo importer. The process is a little different than shown in Laylo's original video because that video demonstrated how to import the file created by other 3rd party iPhone apps. Using Laylo's iPhone app and Face Mojo importer is actually easier:
You can tweak the mocap movements for every single frame, but that's a PITA (I wish there was a way to select a range of frames and apply a relative percentage tweak to the selected range, e.g. select 60 frames and boost the mouth opening amount by 20% for each frame)
Once you're happy with the animation, configure Daz to render out the video as a series of still images, click Render and go find something else to do for a while. Once the individual frames are all rendered, you can use Photoshop, Premiere, or other programs (even Blender, which is free) to import the rendered frames (and audio file if you have one), tweak color/contrast/fades/etc., then export the whole thing as a movie file in whatever format you want.
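If you'd rather script that final assembly step instead of using a GUI, something like this also works as a starting point (file names and frame rate here are just placeholders, and it assumes you have ffmpeg installed):

```python
# Hypothetical example: stitch a rendered image sequence plus audio into an MP4.
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "30",
    "-i", "frame_%04d.png",   # image sequence rendered out of Daz Studio
    "-i", "audio.wav",        # optional audio recorded with the capture
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",    # keeps the video playable in most players
    "-c:a", "aac",
    "-shortest",
    "output.mp4",
], check=True)
```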
It's a lot of steps, but nothing too difficult and the results are fun. If only we could have full body mocap...
[note: this sequence of steps was written from memory as best as I could recall without Daz Studio in front of me, so hopefully I didn't overlook anything]
iClone's facial mocap setup recommends connecting the iPhone to your LAN via a certain adapter, which is ~$100. They say this is more reliable than wifi or Bluetooth. I'll be trying that connection out; I already have the box. The facial mocap demos I've seen show a realtime link between the TrueDepth cam and the facial mocap plugin, rather than a file export/import process. This is what I'm expecting, but maybe I'm missing something.
Hi!
Where do you see a rating for the Face Mojo iOS app? When I look at it in the Apple App Store and App Store Connect it says there are no ratings, or reviews yet.
Thanks for sharing your experience! Sorry it's not more straightforward transferring the files over. I've heard you can also use a USB cable and iTunes on Windows to transfer the files, but I haven't tried it yet.
The steps look good, thanks for laying them out.
You can use the Strength Modifiers for this! Just set a keyframe for them at 100% a bit before you want the boost to start, then set one at 120% where you want the boost to be in full effect, then set one at 120% where you want it to end, and then set one back at 100% a little after to transition back down. It's much easier to animate the Strength Modifiers to polish the animation, in my opinion.
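Numerically, that ramp looks something like this (hypothetical frame numbers, assuming linear interpolation between the keyframes):

```python
def strength_at(frame, keys):
    """Linearly interpolate a keyframed strength percentage at a given frame."""
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + (v1 - v0) * t
    return keys[0][1] if frame < keys[0][0] else keys[-1][1]

boost = [(50, 100), (60, 120), (120, 120), (130, 100)]  # 100% -> 120% -> 120% -> 100%
print(strength_at(55, boost))   # 110.0, easing into the boost
print(strength_at(90, boost))   # 120.0, boost in full effect
```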
Thank you for this tip!
Terrific! Thank you for the information!
I just uploaded a demo/general use video for the Face Mojo iOS app. Check it out:
This is a FANTASTIC app-- works exactly as advertised in Daz Studio. I'd LOVE to be able to use it in Unreal Engine, but I am absolutely stumped as to how to make this happen. I tried exporting from Daz, but that only seems to work with animations as keyframes on the timeline. Unreal DOES read blend shapes and morph targets, but I'm not really sure how to get that information into Unreal.
I've also experimented using Live Link Face with Genesis 8 models in Unreal, and I hate it. It's just not as articulate as Face Mojo is, but I simply CANNOT GET IT OUT OF DAZ! I love the results of Face Mojo, and I love the speed and efficiency of Unreal Engine. How the heck can I get these two things to work together?!
I too am now a happy camper. My experiments aren't finished yet, but I've succeeded in superimposing my own facial movements onto one of my characters, and the results are what I'm looking for. It was simple enough to do, too. I used the Face Mojo iOS capture program. I recommend it, even though there are other ways to get the data into DAZ.
Now I'm going to see how easy it is to tweak the facial expressions in DAZ. My original video recording is much more expressive than the result in DAZ, so if I can, I'm going to bring them more into line with one another. I'm a bit concerned that there are many new morph controls with inscrutable names, and nothing more than YT videos as a guide. In fairness, it's a sign of the times.
Even if I totally fail at the tweaking part, the results just as they are are good enough that I would use them as is. Mocap rules. I'm sold.
Hello Landon,
Well, I can still see it on the app page:
I am in the process of comparing Face Mojo to Facemotion3D. At this time Face Mojo compares quite favorably. I cannot recommend Facemotion3D yet.
So far I've used Face Mojo to record a motion sequence, imported it into DAZ, and merged it with one of my characters. The results are very good and the difficulty level was low. Today I will record a new sequence, this time including audio. It will really be a test of the XR mic, as I already know how to bring audio into DAZ.
IMO, documentation is the weakest element, but these days that's the norm, so I can't single out Face Mojo for it. You'll get a bunch of new morphs on the model's face, but it isn't clear what they are intended to do. I've not found any of them to be useful yet.
The iPhone software has the ability to display the facial mesh. This is useful as it gives you an idea of what to expect. My eyeglasses do not interfere with the scanning process, although I remove them anyway. The mesh mouth doesn't open as much as my actual mouth, and I have found no way to adjust it. Next I will trim my beard short and see if that helps. I doubt it, but I will try and see. I recommend mounting the iPhone camera in a vibration-free manner.
Hi! Thanks, and thanks for sharing your experience!
I'm not sure of the best way to try and get the best of both worlds, so to speak.
I have this video that shows how to export the Face Mojo animation on a character out of Daz Studio via FBX:
Does that process work to then take to UE for rendering?
Hi! Thanks for sharing your experience!
I find the easiest way to adjust the animation is using the Strength Modifiers found in 'Actor/Face Mojo' and 'Pose Controls/Face Mojo.' They can be animated, and it's easier to set them where and when needed vs. trying to adjust the keyframes created on the morphs themselves during the baking process.
I will give the documentation a look over and see if I can make it more comprehensive, as well as make a new video showing what I'm describing with the Strength Modifiers.
Hi! Thanks for sharing your experience with Face Mojo!
I'm glad you found it easy to use and I will work on the documentation to see if I can make it more comprehensive for everyone.
When you say the mesh mouth doesn't open as much as your mouth, are you referring to the mesh overlay or your 3D character's mesh? If it's the mesh overlay, it may be a limitation of Apple's technology. The character's mouth can be adjusted by using the Strength Modifiers, which should help if you feel like its mouth isn't opening enough.
I'm talking about the mesh that appears in the iPhone software, the mesh I can turn on and off. When my mouth is open, this mesh is less open. I'm hoping my beard could be causing it. The result follows through onto my animated DAZ face.
...........................................................................................
EDIT
I believe I am seeing the inherent limitations of the blendshapes (morphs). The mesh itself is as it should be. I have observed the same characteristics in Face Cap.
The limitations of the mouth are obvious. The "oo" shape is difficult. Words like who, you, and do are less well formed. I believe that, for now, it's what we've got. Me, I'm good with it.
Along the same lines, we can't raise one eyebrow while lowering the other one, as we do when we show skepticism.
................
I found this list of best practices for facial mocap, and I have posted them here hoping they may help others. They have helped me.
The first thing I spotted was that facial hair can interfere with the process. I was wondering about that.
The other thing that caught my eye was the importance of lighting. I hadn't made the connection.
I'll correct both these things today.
I'd like to hear what experiences others are having.
................................................
BEST PRACTICES
-
How come you got rid of your comment? I actually liked your idea.
I didn't know you read it.
I thought it was a no-brainer till I tried to find one... that is, a teleprompter that would be transparent and let me display a mocap screen beneath it. For all the teleprompters there are, not ONE will allow me to select the underlying display. The best I can do is to see a live video of myself. Argh... Some of these teleprompters are five years old, yet no one has thought of this.
I can use two cell phones, one for mocap, one for text, and try to mount them really close. I'm looking at hardware to do this now. Not ideal, but I'm probably going to try it.
https://www.bitchute.com/video/P5VgY8bXYNpo/
Should you undertake this, I would be happy to be your QA/Beta tester.
.....................................
EDIT
FYI: This is the best cell phone teleprompter rig I found. Of course, it's more expensive than software would be, especially when the cost of a second cell phone is added. It's also a bit larger than a cell phone. I think it would work.
I almost have a basic teleprompter feature added. If you want, email me at [email protected] and I will add you as a beta tester when I get it together.
Hot damn! I am at your service.
Yep-- I watched all of your videos. I even downloaded that FBX export script from your website.
Are you ready for THIS frustration? I actually got the Face Mojo animation to work in Unreal Engine! I managed to do it twice successfully! NOW, however...it doesn't work(!).
From what I have researched, it seems as if UE requires facial mocap data to be imported as FBX 2011 (it's how the ROKOKO face mocap works). When I import the animation into UE, more often than not I get ALL of the morph targets, but NONE of the animation (except for head movement, oddly enough). I've made so many attempts that I think I confused my notes a bit, and now I am having a difficult time determining the path that enabled me to play the Face Mojo animation in UE. Luckily, I make new UE files for every experiment, so I will be able to backtrack to see how to make this work consistently!
I feel like I am SO CLOSE! If there is ANYONE that has insight into this, it would be GREATLY appreciated!