How to import weight map for custom contents?
icemage1993
Posts: 69
in The Commons
Hello guys, I am making this simple short dress for Genesis 8 Female.
I modelled and weight painted it in blender and exported it as OBJ file.
When I import and try to fit it to g8f, everything is working fine, except for the weight map.
This weight map option here, if I tick it, then Daz will automatically make a new weight map for the dress, completely disregarding my hand-painted weight map.
What did I do wrong, guys, or must the weight map be painted inside Daz Studio?
Comments
Generally it's better to weight-map in DS rather than an external application, though I think DS will import some (Motion Builder?) weight-mapped figures from FBX.
OBJ is an unrigged format; while it can support extremely basic vertex groups, it cannot actually contain a true bone weight map.
The only workable method I currently have for transferring rigged figures out of Blender is to go via MikuMikuDance format and a now-hard-to-find DLL for importing that into DS, *then* transferring that weighting back onto an untriangulated OBJ import (MMD formats force triangulation). DS and Blender do not agree about any standard rigged interchange format I'm familiar with.
I have exported rigged, weight-mapped props from Blender using the dsf exporter; those import into DS with the rig and weight map intact.
https://www.daz3d.com/forums/discussion/67464/getting-objects-from-blender-to-ds/p3
Not to derail Icemage1993's thread, but in "painting" in/on the weight maps within D|S, we are essentially paint-projecting onto the mesh/internal UV map. Is there a way (for congruency) to paint "weight maps" in corresponding blue-to-red values onto a UV map (outside of D|S), and import that into D|S as a weight map? Especially for symmetrical clothes, parented props, geografts, and other items? It would be a lot easier to try and experiment with different weight ratios without having to painstakingly repaint a complicated mesh within D|S. -David
Weight maps are not images, and they do not use the UVs. A weight map, like a morph, is a kind of vertex map; in this case "mapping" refers to matching up two lists, one of weights (or of changed positions) against the list of vertices making up a model.
You know, I just came in here looking for a way to do this and it looks like I'mma have to make one myself, apparently.
But this concept, this way of thinking about it, it is totally wrong.
A weight map is a 0-1 number assigned to a vertex.
A vertex also has a set of UV coordinates, unless it's a really crappy mesh. And it should be a unique set (though it could be more than one if the vertex in UV space is on the border of a shell; that's no big deal though, you just average, and since they should be the same, the average will be the same as both values, other than possible precision error).
So if I have a vertex and its weight is, say, 25% (or 0.25), then I see no reason I cannot just take its value, convert that to greyscale(64), and place a dot (with a little padding to accommodate any rounding errors and map scale differences) at that UV coordinate.
Then a weight map *would be a texture*.
It IS a map. It's a map of values associated with vertices. And a UV map is another map of two values (U and V, corresponding to X and Y across an image where the width of an image is always 1). So you can always store data and look it back up based on those coordinates. There's literally no reason you can't.
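As a sketch of that export idea, here is what "place a dot at the UV coordinate" might look like. The mesh data is made up for illustration, and a plain Python grid stands in for a real image library; a real exporter would read vertices, UVs, and weights from the modelling package's API.

```python
# Sketch: bake per-vertex weights into a greyscale "image" via UV coordinates.
# Illustrative only -- the vertex list and image size are invented for the example.

SIZE = 8  # tiny 8x8 "image" for the demo

# (u, v, weight) triples, one entry per vertex
vertices = [
    (0.10, 0.10, 0.25),
    (0.55, 0.30, 1.00),
    (0.90, 0.80, 0.00),
]

def bake(verts, size=SIZE):
    """Return a size x size grid of 0-255 grey values, one dot per vertex."""
    grid = [[0] * size for _ in range(size)]
    for u, v, w in verts:
        x = min(int(u * size), size - 1)
        y = min(int(v * size), size - 1)
        grid[y][x] = round(w * 255)  # 0..1 weight -> 0..255 grey
    return grid

image = bake(vertices)
print(image[0][0])  # 64 -- the 25% weight stored as grey, as described above
```

A real tool would also pad each dot by a pixel or two, as suggested earlier, to survive rounding and map-scale differences.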
I don't see why this is treated like a foreign concept.
Sorry, it's a year later; I just didn't see a newer thread on the subject.
As I see it, a weight map is just a list of values for each vertex.
First off, if it's such an obvious, simple solution, can you name a program that does it this way? Comparing a weight map to a UV map doesn't support your core argument, because neither is a "map" in the same way that a texture map is. If you're aware of a way to UV map an object by applying an image, I'm sure a lot of people would be interested in hearing that. You're also failing to consider that a weight map would not be A map, but likely dozens if not hundreds. Weights travel across UV islands and blend with other bone weights, so whether each UV island has multiple maps for the different bone weights, or each bone has multiple maps for the different island, you're dealing with a whole lot of different maps (again, for something that might not even have a UV map to make these associations). A more sensible approach would be something like a spreadsheet of weight values for each vertex.
There are also overlapping UVs (such as reusing the same part of a texture map to do both sleeves on a shirt), and UV co-ordinates outside the standard UV square (which are sometimes used to force tiling on a UV level, or alternatively for UDIM mapping). Not all models have these problems, but while a vertex ID is unique, a UV co-ordinate cannot be assumed to be.
~~~~~
However, yes, you can convert a weightmap to an image; I've done it in the past. I've got an add-on for Blender that I sometimes use so I can convert a weight map I've painted there into a mask I can use for things like a levels adjust layer in Photoshop (I don't have a 3D version of photoshop, and creating the mask in Blender means I can properly match it over seams).
The problem is... well, one program having a method for exporting weight maps to an image isn't any use unless other programs offer a way to convert that back. Doing so would just be a new poorly supported interchange format - if programs can't agree over things like FBX, then this isn't worth anyone putting their effort into developing a tool for.
(Another question is whether 8-bit really gives enough precision to transfer weights well, and I'm not certain it does).
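For what it's worth, that precision question is easy to test numerically. A minimal round-trip check, assuming weights are plain 0..1 floats and the usual 0..255 encoding:

```python
# How much error does an 8-bit round trip introduce into a weight?
# Illustrative weight values; the bound holds for any 0..1 input.

def roundtrip_8bit(w):
    """Quantise a 0..1 weight to 8 bits and back."""
    return round(w * 255) / 255

weights = [0.0, 0.012123, 0.25, 0.5, 0.999, 1.0]
worst = max(abs(w - roundtrip_8bit(w)) for w in weights)
print(worst)  # the error can never exceed 1/510, about 0.00196
```

Whether an error of that size is visible in a bend is a separate question, but that is the numerical ceiling for an 8-bit channel.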
What are you even talking about? (And why are you being contentious and seeming to try to start something?)
I'm not making an argument. I'm explaining a way to do something.
Just because nobody has done something that seems simple doesn't mean that thing doesn't work. (I mean, steam power is pretty simple, so why didn't the ancient Sumerians have it?)
Overlapping UVs only even matter if you're transferring. Otherwise, unless two vertices' UV coordinates are *literally the exact same* (or within the tolerance you give the system) they won't matter. And that can happen from bad UV mapping anyway but bad UV mapping causes so many other problems it's hardly unique to this one thing.
Besides, overcoming the problems with overlapping UVs is child's play. You just define the UDIM space for each material and then there are no overlapping UVs anymore.
If you are using this to transfer weight maps to another mesh, then just use multiple maps, the same as is done for textures. Just like using UDIMs. Easy. Most will only be in one UDIM space, but it doesn't even matter.
And importing is easier than exporting. You literally just sample the colour at the target UV coordinates. If there is more than one, you sample the colour at each of them and then average. If you're not transferring to another mesh that takes the same maps, then they will only ever be different if you frakked with them somewhere else manually. If you're transferring, they won't be that far off unless you have a totally whacked-out crunchy weight map that's all over the place, at which point, again, you have bigger cuddly clownfish to fry, because one important thing about transferring weight maps is that you need to start with a weight map that actually works.
As to whether 8-bit is enough, well, it really should be, but if not, surely 16-bit should be. I mean, the precision of a weight map isn't about its exact six-digit weights, it's about the PLACEMENT of the weights. How often does it make a noticeable difference if a specific vertex is bending at 5% or 5.1%? As long as what's happening doesn't get crunchy, it's no big deal. And it's easier to keep them smooth with less precise numbers.
But if you want higher precision, you have 3 colours to work with. Treat red as the coarse 0..255 = 0..1 channel, then have G be 0..255 fractions of each 256th, and have B be the next subdivision down from that. That should be enough for anyone ever. That's a precision of about 0.00000006, the same as storing each weight in 24 bits (it IS a 24-bit image, you're just storing it differently).
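That packing scheme can be sketched in a few lines (illustrative Python, not anything DS or Blender ships):

```python
# Sketch: pack one 0..1 weight into three 8-bit channels for ~24-bit precision.
# R carries the coarse value, G and B carry successively finer fractions,
# which is the same as splitting one 24-bit integer across the channels.

def encode_rgb(w):
    n = round(w * (256**3 - 1))          # scale 0..1 into 0..16777215
    return (n >> 16) & 255, (n >> 8) & 255, n & 255

def decode_rgb(r, g, b):
    return ((r << 16) | (g << 8) | b) / (256**3 - 1)

w = 0.012123
r, g, b = encode_rgb(w)
print(abs(w - decode_rgb(r, g, b)) < 1e-7)  # True: error is below 1/2**24
```

Whether any host application could be made to read such an image back is the interchange problem raised earlier in the thread; the arithmetic itself is the easy part.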
I really did not expect to have people defending not doing something that seems this simple. I was hoping for more "hey, let's try it and see!" It's way more positive.
Documentation that someone already wrote, and that has already been linked for you to read, is key.
https://www.daz3d.com/forums/discussion/comment/8152301/#Comment_8152301
"Then a weight map *would be a texture*." - That is inaccurate. The only thing a 'weight' map and a 'texture' map have in common is the word map(ping). One process applies an actual image (bitmap, JPEG, TIFF, etc. as the texture) to a mesh object's 'surface' for the purpose of 'covering' the object with something like a skin image, while the other process applies values to a mesh object's structural vertices for the purpose of bending the mesh object.
What is the definition of a texture? "The feel, appearance, or consistency of a surface or substance", for example "skin texture and tone". By that definition of what a texture is, it is not something literally applied to the physicality of the mesh object so that the mesh can bend with a sense of reality, or so that an area of the mesh can be refined to bend realistically. A weight map is not a texture.
Please keep this discussion civil, and focussed on the topic.
Nobody is "defending not doing something that seems this simple"; they are explaining to you why it's not as simple as you're insisting.
It's not inaccurate. You would, in fact, have a texture. You could apply this to the figure and they would be polkadotted in greyscale dots around the vertices. You would have a bitmap image that can be applied to a figure. But it's also a map that contains data about the weight mapping that can be applied to the figure if read out for that purpose.
The definition of a texture, in 3D modelling terms not in dictionary terms, is "any 2D bitmap that can be applied to a 3D model using coordinates associated with each polygon's vertices".
The definition of a "map" in programming is "any data source which can be procedurally read to associate a property with an existing entity; a lookup table in any form".
No, they are, in fact, inadvertently explaining to me that they do not understand what I am saying.
That's on me, I guess. I do not know how to explain this in any simpler way.
Hey Doger
Would this be a script or something? I'm curious, and would love to see what you could come up with to handle this problem.
Cheers!
I'm not a rigging expert but I can tell you why this process could be very problematic:
1 - there should be a map for every joint... if you have 50 joints you would have 50 maps (Genesis 9 has many more joints)
2 - a greyscale image usually has only 8 bits, so you have only 256 grey values. This is a discretization problem, because every joint weight could only be a multiple of 0.00390625 (1/256); a joint can't have a value of 0.004 or 0.005.
3 - the sum of every vertex's weights must be 1 (some software allows greater numbers). You must manually check that the sum across every map is white, or create a script to redistribute the weights
4 - every object must have a UV map (and if you create a map from existing rigging, the UVs must not have overlapping vertices or you will get errors).
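Point 3 above, the per-vertex normalisation, might be scripted along these lines. The joint names and values are illustrative, not anything read from a real figure:

```python
# Sketch: renormalise one vertex's weights so they sum to 1 across all joints,
# as point 3 requires. Input is a {joint_name: weight} dict for a single vertex.

def normalise(joint_weights):
    """Rescale a vertex's joint weights so they sum to exactly 1."""
    total = sum(joint_weights.values())
    if total == 0:
        return dict(joint_weights)  # untouched vertex; nothing to scale
    return {j: w / total for j, w in joint_weights.items()}

vertex = {"lShldr": 0.25, "lForeArm": 0.25}  # sums to 0.5, not 1
fixed = normalise(vertex)
print(sum(fixed.values()))  # 1.0
```

A full tool would run this over every vertex after the per-joint maps were edited independently, since editing one joint's map in isolation breaks the sum for every vertex it touches.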
It's actually worse than that, because joint weights can cross UV islands. In a G8 character, for example, the collar joints bridge the torso/body and arm surfaces and influence both, so you'd need a map for the collar in each UV island. The shoulders also influence the torso/body surfaces, while the chest upper and lower joints influence the arms. This wouldn't quite double the number of maps required, but would significantly increase it.
Anything I might come up with would start as a Perl script, because that's where I'm strongest, but I can translate anything into Python pretty easily. I could before with some effort, but now I can just make ChatGPT do most of the work, so it's easier. At that point I would probably be able to figure out how to make such things work as tools in both respective programs, since they both run Python interpreters.
I am a rigging expert. Just to make sure that's understood. (In Poser, anyway.)
Yeah, there would need to be 3 maps for every actor and every UV shell affected, though not necessarily all would be required. More if there's a way to get D|S to honour bulges. The "multiple maps" thing doesn't mean a separate map for each UV shell specifically, though; just the ones that are grouped together the same way as is done for textures. So yeah, there'd be a set of maps for each, though ones with no weights applied, for whatever reason, can be left out and the weights defaulted to zero when not present, so that only vertices with a non-zero weight value would matter and create maps. It's easy enough to ask "did I draw on this one?" and discard it if not. So, like, you wouldn't end up with a pointless all-black head map for a finger joint.
It may even be feasible to make some informed decisions on which maps to combine. For instance, all xFingerJoint1-to-xFingerJoint2 weights could be combined for a hand, as the lMid1-lMid2 joint weights will have no bearing on lIndex1-lIndex2, and so the weights in those unrelated actors can be ignored for each other's joints. But then that might be too complicated to determine programmatically.
I seriously doubt that #2 will matter. I could try it with greyscale and see, but I strongly suspect that an existing weight of 0.012123 being rounded to 0.011719 will not make any visible difference at all in the way things bend. However, of course, if I come to find that you're right and the difference is visible, then it's easy enough to just use an RGB image and get precision down to 0.000000059604644775390625, as each value could then be stored in 24 bits. Or I could just use a 16-bit image. What a PNG "usually" contains is irrelevant, since PNGs also don't "usually" contain weight map data.
I don't understand what you're saying with number 3. Each vertex has a range from 0 to 1, not-affected to fully-affected. While the idea of a >1 value or even a <0 value is conceivable, where bending a joint would make that vertex move more than the bend indicates, or move backwards (e.g. if you set the bend to 4 degrees, a weight value of 2 would make that vertex move as if the bend were turned off and the joint was rotated to 8 degrees, while a value of -1 would make it move back 4 degrees), the simplest solution would just be to clamp to a 0..1 range and not allow that, because it's weird and unpredictable (if it's even supported deliberately). So even if out-of-bounds numbers are permitted by DAZ or Poser, *I* don't need to allow them.
But it sounds like, in #3, you're saying that the sum of all weights in all vertices must equal 1. If that's what you mean, I'm sorry but you're just completely wrong. I mean, consider an elbow. It has a pretty tight range between unaffected and fully-affected. A large number of the vertices are affected fully, as if bend were turned off. Well, if the weights had to sum to equal 1, then even one fully-affected vertex would mean *all other vertices could not have any weight at all*. Clearly that's not the case. There's no need at all to check that the... sum of every map is white (or 1). I must not be understanding what you're saying here because all I can parse out of that doesn't make sense. I apologise for not knowing what you mean here, I guess.
Yes, every object would be required to be UV mapped. And yes, overlapping or otherwise weird, bad mapping would potentially mangle things. I don't see a problem here. So... it can't support crappy meshes, and if people want to make something compatible with it they need to do things right. This may actually just be a feature.
Usually the total weights, from all joints, on a single vertex must add up to one. If the weight was greater than 1 then the vertex would move too far, and if under 1 not far enough, in some poses. This may not be true in Poser when mixing weight-mapped and parametric rigging, but it is the default (and was initially enforced) in DS. There may be uses for unnormalised weights on things like body handles, but those would usually not be doable until the main, normalised rigging was complete. The normalisation options are in the Tool Settings pane with the Node Weightmap brush active.
I think I'm totally not understanding what you're saying here, still.
Like, for instance, if you consider one vertex from just over halfway through the elbow within lForeArm, thus affected only by the elbow joint, the one between lShldr and lForeArm, it might have a weight of 0.51
No other joint would have an effect on it.
0.51 is < 1. The total weight on it is only 0.51 and so "adds up to" 0.51.
I do not know what I'm missing here, so, as a result, I cannot see how it relates to "weight maps in image map form".
If it is on the arm then its weight will probably be split between forearm and upper arm; in general, but not always, the weight will be split between two adjacent bones in the hierarchy.