Discovered a serious flaw/bug

father1776 Posts: 982

While using Daz 4.15 I discovered that my project was taking longer and longer to save even though

I was not adding a lot to it.

I was using merge scene, and that is where the problem lies.

Merge scene is adding a LOT of phantom data to the save file.

How much?

I took a copy of what I was working on, which had used merge scene several times,

and erased everything in it ... zero objects

Tried to save it - it took 10 minutes to save. The EMPTY project's size?

3.5 GIG  <---- yes, not an error

so there is a serious problem in merge that is making save files HUGE

 

I would warn others to not use merge scene until this has been taken care of.

not mad, just concerned.

glad I was able to pin it down, it was killing my productivity 
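
A quick way to see what is actually inside one of these bloated saves: a .duf scene file is DSON, i.e. JSON text that is usually gzip-compressed, so a short script can report which top-level section is carrying the phantom data. Just a rough sketch - the file name is a placeholder, and parsing a multi-gigabyte save will need a lot of RAM:

    import gzip, json

    path = "problem_scene.duf"  # placeholder - point this at the bloated save

    try:
        # most .duf files are gzip-compressed DSON (JSON)
        with gzip.open(path, "rt", encoding="utf-8") as f:
            data = json.load(f)
    except gzip.BadGzipFile:
        # some .duf files are saved as plain, uncompressed JSON
        with open(path, "rt", encoding="utf-8") as f:
            data = json.load(f)

    # report each top-level section, largest first, to show where the bulk is
    sizes = {key: len(json.dumps(value)) for key, value in data.items()}
    for key, size in sorted(sizes.items(), key=lambda kv: -kv[1]):
        print(f"{key:25s} {size / (1024 * 1024):10.1f} MB")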

 


Comments

  • father1776 Posts: 982

    Side note,

    It would be better if you could just highlight a number of objects

    on your scene tab and right click ---> copy and have the program just

    produce a copy of those objects under them on the list.

     

  • father1776 said:

    Side note,

    It would be better if you could just highlight a number of objects

    on your scene tab and right click ---> copy and have the program just

    produce a copy of those objects under them on the list.

    Select them, then Edit>Duplicate>Duplicate selected node(s) (or Node Hierarchies if you want their children too).

  • father1776 said:

    While using Daz 4.15 I discovered that my project was taking longer and longer to save even though

    I was not adding a lot to it.

    I was using merge scene, and that is where the problem lies.

    Merge scene is adding a LOT of phantom data to the save file.

    How much?

    I took a copy of what I was working on, which had used merge scene several times,

    and erased everything in it ... zero objects

    Tried to save it - it took 10 minutes to save. The EMPTY project's size?

    3.5 GIG  <---- yes, not an error

    so there is a serious problem in merge that is making save files HUGE

     

    I would warn others to not use merge scene until this has been taken care of.

    not mad, just concerned.

    glad I was able to pin it down, it was killing my productivity 

    If you use File>Save As>Scene Subset, does anything show in the supposedly empty scene? Do you have an add-on or plug-in that adds its own data to the scene (Reality and Octane have done this at times)?

  • father1776 Posts: 982
    edited July 2021

    Did that, nothing shows (I had deleted everything in the scene before)

    saved

    took 10 minutes

    saved version still had nothing in it

    still 3.5 gig

     

    hope that helps

    (side note, I had used merge scene several times with that file before I had emptied it)

    (that is why I am pretty sure it has something to do with the huge save file size)

     

    also note, if I make a file from scratch and do not use merge scene at all, it saves normally

     

  • father1776 Posts: 982
    edited July 2021

    Richard Haseltine said:

    father1776 said:

    Side note,

    It would be better if you could just highlight a number of objects

    on your scene tab and right click ---> copy and have the program just

    produce a copy of those objects under them on the list.

    Select them, then Edit>Duplicate>Duplicate selected node(s) (or Node Hierarchies if you want their children too).

    Thanks, will try that ... the merge scene thing still should be looked at, though.

  • margrave Posts: 1,822

    father1776 said:

    While using Daz 4.15 I discovered that my project was taking longer and longer to save even though

    I was not adding a lot to it.

    I was using merge scene, and that is where the problem lies.

    Merge scene is adding a LOT of phantom data to the save file.

    How much?

    I took a copy of what I was working on, which had used merge scene several times,

    and erased everything in it ... zero objects

    Tried to save it - it took 10 minutes to save. The EMPTY project's size?

    3.5 GIG  <---- yes, not an error

    so there is a serious problem in merge that is making save files HUGE

     

    I would warn others to not use merge scene until this has been taken care of.

    not mad, just concerned.

    glad I was able to pin it down, it was killing my productivity 

    Daz is poorly optimized in general, really. I'm hoping the upgrade to Qt5 will fix the terrible File I/O issues.

    I long ago learned to save my scene as presets and assemble it when I'm ready to render, rather than create one big scene file and continue to add to it.

  • margrave said:

    father1776 said:

    While using Daz 4.15 I discovered that my project was taking longer and longer to save even though

    I was not adding a lot to it.

    I was using merge scene, and that is where the problem lies.

    Merge scene is adding a LOT of phantom data to the save file.

    How much?

    I took a copy of what I was working on, which had used merge scene several times,

    and erased everything in it ... zero objects

    Tried to save it - it took 10 minutes to save. The EMPTY project's size?

    3.5 GIG  <---- yes, not an error

    so there is a serious problem in merge that is making save files HUGE

     

    I would warn others to not use merge scene until this has been taken care of.

    not mad, just concerned.

    glad I was able to pin it down, it was killing my productivity 

    Daz is poorly optimized in general, really. I'm hoping the upgrade to Qt5 will fix the terrible File I/O issues.

    "poorly optimised" based on what?

    I long ago learned to save my scene as presets and assemble it when I'm ready to render, rather than create one big scene file and continue to add to it.

  • margrave Posts: 1,822

    Based on: it takes whole minutes to load a figure and even longer to start a new scene if you have a figure in memory; Daz's IK is riddled with lag but exported figures in Blender aren't; the timeline is a confusing mess with keyframes that disappear yet still affect your scene; docking UI panes is an exercise in frustration and can sometimes get panels stuck so they can't be removed without resetting the layout.

    There's probably more, but those are the ones that came to mind as giving me the most grief.

  • margrave said:

    Based on: it takes whole minutes to load a figure and even longer to start a new scene if you have a figure in memory; Daz's IK is riddled with lag but exported figures in Blender aren't; the timeline is a confusing mess with keyframes that disappear yet still affect your scene; docking UI panes is an exercise in frustration and can sometimes get panels stuck so they can't be removed without resetting the layout.

    There's probably more, but those are the ones that came to mind as giving me the most grief.

    Most of which, even if I agree to accept them as criticisms, have nothing to do with optimisation - and even the ones that might be affected by optimisation don't prove that that is the case. I've not run into the docking issue, and the timeline sounds like an issue with the filtering settings or not looking at the correct node.

  • frank0314 Posts: 13,383

    margrave said:

    Based on: it takes whole minutes to load a figure and even longer to start a new scene if you have a figure in memory; Daz's IK is riddled with lag but exported figures in Blender aren't; the timeline is a confusing mess with keyframes that disappear yet still affect your scene; docking UI panes is an exercise in frustration and can sometimes get panels stuck so they can't be removed without resetting the layout.

    There's probably more, but those are the ones that came to mind as giving me the most grief.

    Do you have enough system RAM and a big enough GPU? I'm not having any of those problems regardless of how big my scene is. Obviously if you have the Iray shader turned on in a huge scene you'll have a bit of a lag, but that's really the only time I get them.

  • margrave Posts: 1,822

    Richard Haseltine said:

    margrave said:

    Based on: it takes whole minutes to load a figure and even longer to start a new scene if you have a figure in memory; Daz's IK is riddled with lag but exported figures in Blender aren't; the timeline is a confusing mess with keyframes that disappear yet still affect your scene; docking UI panes is an exercise in frustration and can sometimes get panels stuck so they can't be removed without resetting the layout.

    There's probably more, but those are the ones that came to mind as giving me the most grief.

    Most of which, even if I agree to accept them as criticisms, have nothing to do with optimisation - and even the ones that might be affected by optimisation don't prove that that is the case. I've not run into the docking issue, and the timeline sounds like an issue with the filtering settings or not looking at the correct node.

    The first two are directly related to optimization; as for the second two, I included UI issues as a form of optimization, since they should be improved by the design team.

    Regarding the timeline issue, cameras only have one node and no filtering settings, so I'm pretty sure I was looking at the correct node.

  • margrave Posts: 1,822

    frank0314 said:

    Do you have enough system RAM and a big enough GPU? I'm not having any of those problems regardless of how big my scene is. Obviously if you have the Iray shader turned on in a huge scene you'll have a bit of a lag, but that's really the only time I get them.

    Iray itself and Blender both run fine on my machine, so it's not a question of RAM.

    Blender IK is lightning quick, whereas grabbing and dragging a hand in the Daz viewport incurs two to three seconds of lag unless I use pinning and IK chains, and even then it's still jerky.

  • margrave said:

    Richard Haseltine said:

    margrave said:

    Based on: it takes whole minutes to load a figure and even longer to start a new scene if you have a figure in memory; Daz's IK is riddled with lag but exported figures in Blender aren't; the timeline is a confusing mess with keyframes that disappear yet still affect your scene; docking UI panes is an exercise in frustration and can sometimes get panels stuck so they can't be removed without resetting the layout.

    There's probably more, but those are the ones that came to mind as giving me the most grief.

    Most of which, even if I agree to accept them as criticisms, have nothing to do with optimisation - and even the ones that might be affected by optimisation don't prove that that is the case. I've not run into the docking issue, and the timeline sounds like an issue with the filtering settings or not looking at the correct node.

    The first two are directly related to optimization;

    No, they are related to how long the process takes to perform. We have no idea whether the code performing the task could be (further) optimised or not, nor do we know what trade-offs there might be in applying (further) optimisation.

    as for the second two, I included UI issues as a form of optimization, since they should be improved by the design team.

    That is rather a stretch.

    Regarding the timeline issue, cameras only have one node and no filtering settings, so I'm pretty sure I was looking at the correct node.

    Please give an example.

  • margrave Posts: 1,822

    No, they are related to how long the process takes to perform. We have no idea whether the code performing the task could be (further) optimised or not, nor do we know what trade-offs there might be in applying (further) optimisation.

    If a third-party program (Blender) can pose Daz models faster and more reliably than their native application, then surely Daz is capable of further optimization.

    That is rather a stretch.

    All improvements to the reliability and stability of a program are optimizations.

    Please give an example.

    Camera coordinates changing because hidden keyframes are influencing the interpolation but can't be found.

  • margrave said:

    No, they are related to how long the process takes to perform. We have no idea whether the code performing the task could be (further) optimised or not, nor do we know what trade-offs there might be in applying (further) optimisation.

    If a third-party program (Blender) can pose Daz models faster and more reliably than their native application, then surely Daz is capable of further optimization.

    However, Blender is not doing everything DS does with those characters, so that isn't a solid comparison.

    That is rather a stretch.

    All improvements to the reliability and stability of a program are optimizations.

    Please give an example.

    Camera coordinates changing because hidden keyframes are influencing the interpolation but can't be found.

    Yes, but that isn't an example as such - that needs either a saved file that shows the issue or specific steps to reproduce it.

  • mrinal Posts: 641

    frank0314 said:

    margrave said:

    Based on: it takes whole minutes to load a figure and even longer to start a new scene if you have a figure in memory; Daz's IK is riddled with lag but exported figures in Blender aren't; the timeline is a confusing mess with keyframes that disappear yet still affect your scene; docking UI panes is an exercise in frustration and can sometimes get panels stuck so they can't be removed without resetting the layout.

    There's probably more, but those are the ones that came to mind as giving me the most grief.

    Do you have enough system RAM and a big enough GPU? I'm not having any of those problems regardless of how big my scene it. Obviously if you have the Iray shader turned on in a huge scene you'll have a bit of a lag but that really the only time I get them.

    System RAM and GPU limitations are not the main bottleneck here. DS works fine when there are fewer characters and morph assets in the content library. The problem is scalability, which does not seem to have been addressed in the software. With every new character or morph asset added to the library, the resource cost increases at a much higher rate than the perceivable benefit it adds. For example, when I use a character in my scene, I only use a combination of 2-3 other character morph dials besides a few other generic morphs. But to provide those 2-3 character morph options, DS has to pre-load hundreds of character dials to make them available as options in Parameters. Almost all of this preloading is unwarranted, unnecessarily draining system resources and load time when the user uses less than 1% of it in any given scene. This goes entirely against software design best practices and patterns like YAGNI (you aren't gonna need it).
     
    The user should not be forced to upgrade their system RAM just to accommodate the option of making new characters available as morph options in a given scene. System RAM could be a limiting factor for the size and complexity of the scene, NOT a limiting factor for the size of the entire content library.
     
    Content sets do not address this problem since they render the excluded content totally inaccessible without restarting DS.
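
    To make the YAGNI point concrete, here is a toy illustration (plain Python, not how DS actually works - the file handling and class names are invented): an eager loader parses every morph file up front so that every dial exists in memory, while a lazy loader would register only the file paths and parse a morph the first time its dial is actually used. Real morphs are DSF files with ERC links between properties, so a real lazy loader would be harder than this sketch suggests.

        import json, os

        class EagerMorphLibrary:
            """Parses every morph file when the figure loads - the cost grows with the whole library."""
            def __init__(self, morph_dir):
                self.morphs = {}
                for name in os.listdir(morph_dir):
                    with open(os.path.join(morph_dir, name)) as f:
                        self.morphs[name] = json.load(f)   # pays the full cost up front

        class LazyMorphLibrary:
            """Registers only file paths; a dial is parsed the first time it is used."""
            def __init__(self, morph_dir):
                self.paths = {name: os.path.join(morph_dir, name)
                              for name in os.listdir(morph_dir)}   # cheap: names only
                self.loaded = {}

            def dial(self, name):
                if name not in self.loaded:                        # parse on demand, once
                    with open(self.paths[name]) as f:
                        self.loaded[name] = json.load(f)
                return self.loaded[name]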

  • mrinal Posts: 641

    Richard Haseltine said:

    margrave said:

    Based on: it takes whole minutes to load a figure and even longer to start a new scene if you have a figure in memory; Daz's IK is riddled with lag but exported figures in Blender aren't; the timeline is a confusing mess with keyframes that disappear yet still affect your scene; docking UI panes is an exercise in frustration and can sometimes get panels stuck so they can't be removed without resetting the layout.

    There's probably more, but those are the ones that came to mind as giving me the most grief.

    Most of which, even if I agree to accept them as criticisms, have nothing to do with optimisation - and even the ones that might be affected by optimisation don't prove that that is the case. I've not run into the docking issue, and the timeline sounds like an issue with the filtering settings or not looking at the correct node.

    Optimisation or not, the problem is that DS's architecture/design and morph-loading approach have not evolved to support the size of the content library that users have today. When it comes to supporting a larger asset library, DS still uses the archaic method of loading assets that was deemed sufficient a decade ago, when users' content libraries weren't that vast. There are software design issues that can no longer be swept under the rug (without buying a larger rug, of course).

  • Togire Posts: 402
    edited July 2021

    mrinal said:

    Richard Haseltine said:

    margrave said:

    Based on: it takes whole minutes to load a figure and even longer to start a new scene if you have a figure in memory; Daz's IK is riddled with lag but exported figures in Blender aren't; the timeline is a confusing mess with keyframes that disappear yet still affect your scene; docking UI panes is an exercise in frustration and can sometimes get panels stuck so they can't be removed without resetting the layout.

    There's probably more, but those are the ones that came to mind as giving me the most grief.

    Most of which, even if I agree to accept them as criticisms, have nothing to do with optimisation - and even the ones that might be affected by optimisation don't prove that that is the case. I've not run into the docking issue, and the timeline sounds like an issue with the filtering settings or not looking at the correct node.

    Optimisation or not, the problem is that DS's architecture/design and morph-loading approach have not evolved to support the size of the content library that users have today. When it comes to supporting a larger asset library, DS still uses the archaic method of loading assets that was deemed sufficient a decade ago, when users' content libraries weren't that vast. There are software design issues that can no longer be swept under the rug (without buying a larger rug, of course).

    An architecture rethink would obviously be required, but there is also a problem of basic code optimization. Even with this archaic method, it could easily be possible to have much faster loading times. If I load 2 G8Fs, I can understand that the first one takes a lot of time to load all the morphs. But what about the second G8F? Why is a full morph directory (re)scan required? Loading should reuse the results of the first scan, unless the DB has been refreshed. So just one full scan per char family per session, and the second load should be almost instant. (And that, alas, is not what I see...)

    It could also be possible to scan all the figure directories for their content in another thread while the user is idle. Many (simple) optimizations are possible and not done.
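
    As a sketch of that caching idea (an assumption about how it could work, not a description of what DS actually does - every name here is invented): scan a figure family's morph directories once per session, reuse the result for every later load of that family, and throw the cache away when the content DB is refreshed.

        import os

        _scan_cache = {}     # figure family -> (db_generation, list of morph file paths)
        _db_generation = 0   # bumped whenever the user refreshes the content DB

        def on_db_refresh():
            global _db_generation
            _db_generation += 1               # invalidates every cached scan

        def morph_files(family, directories):
            cached = _scan_cache.get(family)
            if cached and cached[0] == _db_generation:
                return cached[1]              # second G8F in a session: no directory walk at all
            files = []
            for root in directories:
                for dirpath, _, names in os.walk(root):
                    files += [os.path.join(dirpath, n) for n in names if n.endswith(".dsf")]
            _scan_cache[family] = (_db_generation, files)
            return files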

  • alainmerigot said:

    mrinal said:

    Richard Haseltine said:

    margrave said:

    Based on: it takes whole minutes to load a figure and even longer to start a new scene if you have a figure in memory; Daz's IK is riddled with lag but exported figures in Blender aren't; the timeline is a confusing mess with keyframes that disappear yet still affect your scene; docking UI panes is an exercise in frustration and can sometimes get panels stuck so they can't be removed without resetting the layout.

    There's probably more, but those are the ones that came to mind as giving me the most grief.

    Most of which, even if I agree to accept them as criticisms, have nothing to do with optimisation - and even the ones that might be affected by optimisation don't prove that that is the case. I've not run into the docking issue, and the timeline sounds like an issue with the filtering settings or not looking at the correct node.

    Optimisation or not, the problem is that DS's architecture/design and morph-loading approach have not evolved to support the size of the content library that users have today. When it comes to supporting a larger asset library, DS still uses the archaic method of loading assets that was deemed sufficient a decade ago, when users' content libraries weren't that vast. There are software design issues that can no longer be swept under the rug (without buying a larger rug, of course).

    An architecture rethink would obviously be required, but there is also a problem of basic code optimization. Even with this archaic method, it could easily be possible to have much faster loading times. If I load 2 G8Fs, I can understand that the first one takes a lot of time to load all the morphs. But what about the second G8F? Why is a full morph directory (re)scan required? Loading should reuse the results of the first scan, unless the DB has been refreshed. So just one full scan per char family per session, and the second load should be almost instant. (And that, alas, is not what I see...)

    It could also be possible to scan all the figure directories for their content in another thread while the user is idle. Many (simple) optimizations are possible and not done.

    Please stop the assumptions about how the code is inefficient or not optimised - we don't have access to the code to judge.

    It's true that loading can be very slow - though memory is not the issue, so I don't think this is forcing people to upgrade - but there does need to be a recognition that people have wanted to have multiple morph sets available, and to be able to add (or remove) sets and characters easily (which is why loading a second figure again does the property reading). I'm sure Daz is aware of the issue, but they are still bound by the laws of algorithmics and the need to maintain compatibility. We will have to see if Daz Studio 5 offers an alternative (and if it does, whether it requires new content to use it).

  • frank0314 Posts: 13,383

    mrinal said:

    frank0314 said:

    margrave said:

    Based on: it takes whole minutes to load a figure and even longer to start a new scene if you have a figure in memory; Daz's IK is riddled with lag but exported figures in Blender aren't; the timeline is a confusing mess with keyframes that disappear yet still affect your scene; docking UI panes is an exercise in frustration and can sometimes get panels stuck so they can't be removed without resetting the layout.

    There's probably more, but those are the ones that came to mind as giving me the most grief.

    Do you have enough system RAM and a big enough GPU? I'm not having any of those problems regardless of how big my scene is. Obviously if you have the Iray shader turned on in a huge scene you'll have a bit of a lag, but that's really the only time I get them.

    System RAM and GPU limitations are not the main bottleneck here. DS works fine when there are fewer characters and morph assets in the content library. The problem is scalability, which does not seem to have been addressed in the software. With every new character or morph asset added to the library, the resource cost increases at a much higher rate than the perceivable benefit it adds. For example, when I use a character in my scene, I only use a combination of 2-3 other character morph dials besides a few other generic morphs. But to provide those 2-3 character morph options, DS has to pre-load hundreds of character dials to make them available as options in Parameters. Almost all of this preloading is unwarranted, unnecessarily draining system resources and load time when the user uses less than 1% of it in any given scene. This goes entirely against software design best practices and patterns like YAGNI (you aren't gonna need it).
     
    The user should not be forced to upgrade their system RAM just to accommodate the option of making new characters available as morph options in a given scene. System RAM could be a limiting factor for the size and complexity of the scene, NOT a limiting factor for the size of the entire content library.
     
    Content sets do not address this problem since they render the excluded content totally inaccessible without restarting DS.

    Then tell that to every software manufacturer out there. Each and every program out there requires more RAM, processing and VRAM with every new version, with the exception of a very few. Especially in this industry.

  • Togire Posts: 402

    Richard Haseltine said:

    alainmerigot said:

    mrinal said:

    Richard Haseltine said:

    margrave said:

    ...

    Optimisation or not, the problem is that DS's architecture/design and morph-loading approach have not evolved to support the size of the content library that users have today. When it comes to supporting a larger asset library, DS still uses the archaic method of loading assets that was deemed sufficient a decade ago, when users' content libraries weren't that vast. There are software design issues that can no longer be swept under the rug (without buying a larger rug, of course).

    An architecture rethink would obviously be required, but there is also a problem of basic code optimization. Even with this archaic method, it could easily be possible to have much faster loading times. If I load 2 G8Fs, I can understand that the first one takes a lot of time to load all the morphs. But what about the second G8F? Why is a full morph directory (re)scan required? Loading should reuse the results of the first scan, unless the DB has been refreshed. So just one full scan per char family per session, and the second load should be almost instant. (And that, alas, is not what I see...)

    It could also be possible to scan all the figure directories for their content in another thread while the user is idle. Many (simple) optimizations are possible and not done.

    Please stop the assumptions about how the code is inefficient or not optimised - we don't have access to the code to judge.

    It's true that loading can be very slow - though memory is not the issue, so I don't think this is forcing people to upgrade - but there does need to be a recognition that people have wanted to have multiple morph sets available, and to be able to add (or remove) sets and characters easily (which is why loading a second figure again does the property reading). I'm sure Daz is aware of the issue, but they are still bound by the laws of algorithmics and the need to maintain compatibility. We will have to see if Daz Studio 5 offers an alternative (and if it does, whether it requires new content to use it).

    That is exactly what I said: "unless the DB has been refreshed". OSes give no portable (or efficient) means to determine if a subtree of the file system has been modified, but knowing when the DB is refreshed is trivial (even if UI management is in another part of the software). If Daz introduced a modification saying "To reduce the char loading time, the char morph content will be read only once. If you want changes to be taken into account for the newly loaded chars, perform a DB refresh", I am certain that ALL users would be VERY pleased. And even without knowing the Daz Studio source code, I know that some modifications are complex and can break code functionality, while others (like this one) are rather simple.

  • alainmerigot said:

    Richard Haseltine said:

    alainmerigot said:

    mrinal said:

    Richard Haseltine said:

    margrave said:

    ...

    Optimisation or not, the problem is that DS's architecture/design and morph-loading approach have not evolved to support the size of the content library that users have today. When it comes to supporting a larger asset library, DS still uses the archaic method of loading assets that was deemed sufficient a decade ago, when users' content libraries weren't that vast. There are software design issues that can no longer be swept under the rug (without buying a larger rug, of course).

    An architecture rethink would obviously be required, but there is also a problem of basic code optimization. Even with this archaic method, it could easily be possible to have much faster loading times. If I load 2 G8Fs, I can understand that the first one takes a lot of time to load all the morphs. But what about the second G8F? Why is a full morph directory (re)scan required? Loading should reuse the results of the first scan, unless the DB has been refreshed. So just one full scan per char family per session, and the second load should be almost instant. (And that, alas, is not what I see...)

    It could also be possible to scan all the figure directories for their content in another thread while the user is idle. Many (simple) optimizations are possible and not done.

    Please stop the assumptions about how the code is inefficient or not optimised - we don't have access to the code to judge.

    It's true that loading can be very slow - though memory is not the issue, so I don't think this is forcing people to upgrade - but there does need to be a recognition that people have wanted to have multiple morph sets available, and to be able to add (or remove) sets and characters easily (which is why loading a second figure again does the property reading). I'm sure Daz is aware of the issue, but they are still bound by the laws of algorithmics and the need to maintain compatibility. We will have to see if Daz Studio 5 offers an alternative (and if it does, whether it requires new content to use it).

    That is exactly what I said: "unless the DB has been refreshed". OSes give no portable (or efficient) means to determine if a subtree of the file system has been modified, but knowing when the DB is refreshed is trivial (even if UI management is in another part of the software). If Daz introduced a modification saying "To reduce the char loading time, the char morph content will be read only once. If you want changes to be taken into account for the newly loaded chars, perform a DB refresh", I am certain that ALL users would be VERY pleased. And even without knowing the Daz Studio source code, I know that some modifications are complex and can break code functionality, while others (like this one) are rather simple.

    Maybe, but running the utility to update the ExP files for the fourth generation Daz figures was pretty simple but caused many people problems.

  • mrinal Posts: 641

    Richard Haseltine said:

    alainmerigot said:

    mrinal said:

    Richard Haseltine said:

    margrave said:

    Based on: it takes whole minutes to load a figure and even longer to start a new scene if you have a figure in memory; Daz's IK is riddled with lag but exported figures in Blender aren't; the timeline is a confusing mess with keyframes that disappear yet still affect your scene; docking UI panes is an exercise in frustration and can sometimes get panels stuck so they can't be removed without resetting the layout.

    There's probably more, but those are the ones that came to mind as giving me the most grief.

    Most of which, even if I agree to accept them as criticisms, have nothing to do with optimisation - and even the ones that might be affected by optimisation don't prove that that is the case. I've not run into the docking issue, and the timeline sounds like an issue with the filtering settings or not looking at the correct node.

    Optimisation or not, the problem is that DS's architecture/design and morph-loading approach have not evolved to support the size of the content library that users have today. When it comes to supporting a larger asset library, DS still uses the archaic method of loading assets that was deemed sufficient a decade ago, when users' content libraries weren't that vast. There are software design issues that can no longer be swept under the rug (without buying a larger rug, of course).

    An architecture rethink would obviously be required, but there is also a problem of basic code optimization. Even with this archaic method, it could easily be possible to have much faster loading times. If I load 2 G8Fs, I can understand that the first one takes a lot of time to load all the morphs. But what about the second G8F? Why is a full morph directory (re)scan required? Loading should reuse the results of the first scan, unless the DB has been refreshed. So just one full scan per char family per session, and the second load should be almost instant. (And that, alas, is not what I see...)

    It could also be possible to scan all the figure directories for their content in another thread while the user is idle. Many (simple) optimizations are possible and not done.

    Please stop the assumptions about how the code is inefficient or not optimised - we don't have access to the code to judge.

    It's true that loading can be very slow - though memory is not the issue, so I don't think this is forcing people to upgrade - but there does need to be a recognition that people have wanted to have multiple morph sets available, and to be able to add (or remove) sets and characters easily (which is why loading a second figure again does the property reading). I'm sure Daz is aware of the issue, but they are still bound by the laws of algorithmics and the need to maintain compatibility. We will have to see if Daz Studio 5 offers an alternative (and if it does, whether it requires new content to use it).

    Oftentimes one doesn't need to examine the food for staleness when the stench itself indicates the symptoms. I'm not disagreeing with the requirement that people have wanted morph sets to be made available; that requires the skill of managing conflicting requirements. But the cost of supporting that requirement is not scalable, or has not been implemented in a scalable way, to say the least. Algorithms are just tools or methods that can only provide benefit when applied to the appropriate requirements in the right context. It is not justified to blame the efficiency of an existing algorithm when alternate options/methods are available that are better suited to the task (context).

  • mrinal Posts: 641
    edited July 2021

    mrinal said:

    Richard Haseltine said:

    alainmerigot said:

    mrinal said:

    Richard Haseltine said:

    margrave said:

    Based on: it takes whole minutes to load a figure and even longer to start a new scene if you have a figure in memory; Daz's IK is riddled with lag but exported figures in Blender aren't; the timeline is a confusing mess with keyframes that disappear yet still affect your scene; docking UI panes is an exercise in frustration and can sometimes get panels stuck so they can't be removed without resetting the layout.

    There's probably more, but those are the ones that came to mind as giving me the most grief.

    Most of which, even if I agree to accept them as criticisms, have nothing to do with optimisation - and even the ones that might be affected by optimisation don't prove that that is the case. I've not run into the docking issue, and the timeline sounds like an issue with the filtering settings or not looking at the correct node.

    Optimisation or not, the problem is that DS's architecture/design and morph-loading approach have not evolved to support the size of the content library that users have today. When it comes to supporting a larger asset library, DS still uses the archaic method of loading assets that was deemed sufficient a decade ago, when users' content libraries weren't that vast. There are software design issues that can no longer be swept under the rug (without buying a larger rug, of course).

    An architecture rethink would obviously be required, but there is also a problem of basic code optimization. Even with this archaic method, it could easily be possible to have much faster loading times. If I load 2 G8Fs, I can understand that the first one takes a lot of time to load all the morphs. But what about the second G8F? Why is a full morph directory (re)scan required? Loading should reuse the results of the first scan, unless the DB has been refreshed. So just one full scan per char family per session, and the second load should be almost instant. (And that, alas, is not what I see...)

    It could also be possible to scan all the figure directories for their content in another thread while the user is idle. Many (simple) optimizations are possible and not done.

    Please stop the assumptions about how the code is inefficient or not optimised - we don't have access to the code to judge.

    It's true that loading can be very slow - though memory is not the issue, so I don't think this is forcing people to upgrade - but there does need to be a recognition that people have wanted to have multiple morph sets available, and to be able to add (or remove) sets and characters easily (which is why loading a second figure again does the property reading). I'm sure Daz is aware of the issue, but they are still bound by the laws of algorithmics and the need to maintain compatibility. We will have to see if Daz Studio 5 offers an alternative (and if it does, whether it requires new content to use it).

    As other users have observed and reported in other threads, character loading happens in a single thread, which is a clear and indisputable indication of inefficiency. I am not willing to rule out the necessity for more RAM just to present character morph dials as options. 16GB seems sufficient to load a character when there are only 5-10 characters in the library, but that memory tends to max out when I have 300+ characters in the library. Others with 64GB RAM have reported acceptable loading times with libraries of 300+ characters, which makes me doubt the scalability of the memory consumption. Is that 64GB going to remain sufficient when one's character library increases to 600+ or 900+ in the next few years?

    Oftentimes one doesn't need to examine the food for staleness when the stench itself indicates the symptoms. I'm not disagreeing with the requirement that people have wanted all morph sets to be made available upfront; that requires the skill of managing conflicting requirements. But the cost of supporting that requirement is not scalable, or has not been implemented in a scalable way, to say the least. Algorithms are just tools or methods that can only provide benefits when applied to the appropriate requirements in the right context. It is not justified to blame the efficiency of an existing algorithm when alternate options/methods are available that are better suited to the task (context). Though I have observed an alarmingly common pattern for people in this industry to blame the limitations of their tools to cover for their lack of resourcefulness in exploring alternate methods.

     

  • mrinal said:

    mrinal said:

    Richard Haseltine said:

    alainmerigot said:

    mrinal said:

    Richard Haseltine said:

    margrave said:

    Based on: it takes whole minutes to load a figure and even longer to start a new scene if you have a figure in memory; Daz's IK is riddled with lag but exported figures in Blender aren't; the timeline is a confusing mess with keyframes that disappear yet still affect your scene; docking UI panes is an exercise in frustration and can sometimes get panels stuck so they can't be removed without resetting the layout.

    There's probably more, but those are the ones that came to mind as giving me the most grief.

    Most of which, even if I agree to accept them as criticisms, have nothing to do with optimisation - and even the ones that might be affected by optimisation don't prove that that is the case. I've not run into the docking issue, and the timeline sounds like an issue with the filtering settings or not looking at the correct node.

    Optimisation or not, the problem is that DS's architecture/design and morph-loading approach have not evolved to support the size of the content library that users have today. When it comes to supporting a larger asset library, DS still uses the archaic method of loading assets that was deemed sufficient a decade ago, when users' content libraries weren't that vast. There are software design issues that can no longer be swept under the rug (without buying a larger rug, of course).

    An architecture rethink would obviously be required, but there is also a problem of basic code optimization. Even with this archaic method, it could easily be possible to have much faster loading times. If I load 2 G8Fs, I can understand that the first one takes a lot of time to load all the morphs. But what about the second G8F? Why is a full morph directory (re)scan required? Loading should reuse the results of the first scan, unless the DB has been refreshed. So just one full scan per char family per session, and the second load should be almost instant. (And that, alas, is not what I see...)

    It could also be possible to scan all the figure directories for their content in another thread while the user is idle. Many (simple) optimizations are possible and not done.

    Please stop the assumptions about how the code is inefficient or not optimised - we don't have access to the code to judge.

    It's true that loading can be very slow - though memory is not the issue, so I don't think this is forcing people to upgrade - but there does need to be a recognition that people have wanted to have multiple morph sets available, and to be able to add (or remove) sets and characters easily (which is why loading a second figure again does the property reading). I'm sure Daz is aware of the issue, but they are still bound by the laws of algorithmics and the need to maintain compatibility. We will have to see if Daz Studio 5 offers an alternative (and if it does, whether it requires new content to use it).

    As other users have observed and reported in other threads, character loading happens in a single thread, which is a clear and indisputable indication of inefficiency.

    Only if you know of a way to load the properties across multiple threads, given that the non-linear aspect is - as far as I am aware - created by the need to check relationships with the other available properties. If you are just assuming that single-threading is bad then you are not advancing a valid argument. As I said above, we need to stop pronouncing on how the process is deficient unless we actually understand how it is implemented and what other options there may be, with what consequences.

    I am not willing to rule out the necessity for more RAM just to present character morph dials as options. 16GB seems sufficient to load a character when there are only 5-10 characters in the library, but that memory tends to max out when I have 300+ characters in the library. Others with 64GB RAM have reported acceptable loading times with libraries of 300+ characters, which makes me doubt the scalability of the memory consumption. Is that 64GB going to remain sufficient when one's character library increases to 600+ or 900+ in the next few years?

    There may be other factors in play - I don't believe that character loading makes that heavy a demand on memory. I just loaded my standard Genesis 8 Female clothes-horse figure I use for testing; I have over 600 entries in Smart Content>Products for figures with the G8F selected, plus sundry morph packs, and the total memory usage increased by a little under 5GB going from an empty scene to the full figure, so unless it was heavily loaded with other activity I'd expect even a 16GB system to cope (8GB might have trouble).

    Oftentimes one doesn't need to examine the food for staleness when the stench itself indicates the symptoms. I'm not disagreeing with the requirement that people have wanted all morph sets to be made available upfront; that requires the skill of managing conflicting requirements. But the cost of supporting that requirement is not scalable, or has not been implemented in a scalable way, to say the least. Algorithms are just tools or methods that can only provide benefits when applied to the appropriate requirements in the right context. It is not justified to blame the efficiency of an existing algorithm when alternate options/methods are available that are better suited to the task (context). Though I have observed an alarmingly common pattern for people in this industry to blame the limitations of their tools to cover for their lack of resourcefulness in exploring alternate methods.

     

  • father1776 Posts: 982

    It looks like this kinda went off the rails a bit.

    The bug has nothing to do with optimization.

     

    Here is the issue: I merged 2 files - there was some data added

    that was not connected directly to the merged model. These files were

    merged again several times, causing that extra data to be multiplied geometrically each time

    the file was merged.

    The real problem - when most of the models were removed, all that extra data stayed.

    Even when all the models were removed, that extra data stayed - gigs of it.

    The bug has to do with the program not cleaning up data after the files are merged.

    Any data added to allow the files to be merged should be automatically removed once the

    operation is completed. The data might be tied to allowing the program to 'undo'

    the merge, but that is a mistake. When you elect to merge 2 scenes, at most it should

    do it and ask 'Do you want to finalize the merge?' If you like what you see, click yes

    and the 2 scenes are made one, and any extraneous files/data are removed - 'cleaned up' -

    so the new scene is no larger than it would have been if you had just made it that way.

    In other words

    If you took a scene with a 1m sphere, saved it, then saved it again under a new name

    and then merged the 2 scenes, the end result should be a scene with 2 spheres

    that is the same size as a scene just made with 2 spheres from the start.

    A scene that includes a merge (or merges) should be no larger than one that doesn't, if

    all other elements are the same.
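
    In pseudo-code terms, the expectation is something like this (a toy sketch, nothing to do with the actual DSON structure - every name in it is made up): a merge that keeps a single copy of each shared library entry stays the same size as a scene built from scratch, while a merge that blindly concatenates grows with every repeat.

        def naive_merge(a, b):
            # copies everything from both scenes, including identical library entries
            return {"library": a["library"] + b["library"], "nodes": a["nodes"] + b["nodes"]}

        def clean_merge(a, b):
            # keeps one copy of each library entry (keyed by id), so repeated merges don't balloon
            library = {entry["id"]: entry for entry in a["library"] + b["library"]}
            return {"library": list(library.values()), "nodes": a["nodes"] + b["nodes"]}

        scene_a = {"library": [{"id": "sphere_geometry"}], "nodes": ["Sphere 1"]}
        scene_b = {"library": [{"id": "sphere_geometry"}], "nodes": ["Sphere 2"]}

        print(len(naive_merge(scene_a, scene_b)["library"]))  # 2 - duplicate geometry kept
        print(len(clean_merge(scene_a, scene_b)["library"]))  # 1 - same as a from-scratch scene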

     

  • RL_Media Posts: 339
    edited July 2021

    I think the scene file is also saving history. I have been doing asset creation, and one of the ideas I had was to save and close the scene, then reopen it to get a cleared ERC Freeze list. It didn't work. Once I reopened it and made the change I wanted to ERC freeze, the list was still massive, populated with changes I had made before the last save and close. Also, things in the Figure Setup pane are still there, so it saves that too.

  • Richard Haseltine Posts: 96,863
    edited July 2021

    father1776 said:

    It looks like this kinda went off the rails a bit.

    The bug has nothing to do with optimization.

     

    Here is the issue: I merged 2 files - there was some data added

    that was not connected directly to the merged model. These files were

    merged again several times, causing that extra data to be multiplied geometrically each time

    the file was merged.

    The real problem - when most of the models were removed, all that extra data stayed.

    Even when all the models were removed, that extra data stayed - gigs of it.

    The bug has to do with the program not cleaning up data after the files are merged.

    Any data added to allow the files to be merged should be automatically removed once the

    operation is completed. The data might be tied to allowing the program to 'undo'

    the merge, but that is a mistake. When you elect to merge 2 scenes, at most it should

    do it and ask 'Do you want to finalize the merge?' If you like what you see, click yes

    and the 2 scenes are made one, and any extraneous files/data are removed - 'cleaned up' -

    so the new scene is no larger than it would have been if you had just made it that way.

    In other words

    If you took a scene with a 1m sphere, saved it, then saved it again under a new name

    and then merged the 2 scenes, the end result should be a scene with 2 spheres

    that is the same size as a scene just made with 2 spheres from the start.

    A scene that includes a merge (or merges) should be no larger than one that doesn't, if

    all other elements are the same.

    This is something in your copy of DS that is adding data. I don't see the reported behaviour with spheres -

    1. saving a scene with one sphere under two names, both are 16KB (sphere test one, sphere test two)
    2. merging the one with the other and saving as a scene I get 25KB (sphere test three)
    3. deleting the spheres and saving as a scene I get 3KB (sphere test four)
    4. merging all four of those I get 41KB (sphere test five)
    5. creating a new scene with two spheres I get 25KB, just as with the two single-sphere scenes merged (sphere test six)

     

    [Attached image: Spheres - merging and creating scenes.jpg]
  • RL_Media said:

    I think the scene file is also saving history. I have been doing asset creation, and one of the ideas I had was to save and close the scene, then reopen it to get a cleared ERC Freeze list. It didn't work. Once I reopened it and made the change I wanted to ERC freeze, the list was still massive, populated with changes I had made before the last save and close. Also, things in the Figure Setup pane are still there, so it saves that too.

    The scene does not save history, but it does save the current state of the content and global settings. If you had done ERC Freeze, that would have made changes to the content, which would have been saved in the scene (assuming you hadn't saved as an asset after doing the freeze).

  • father1776 Posts: 982

    I have a copy of one of the problem saves. > 3.2 gig <

    how would I get that to you?

    Not sure if email can handle a file that large (tells you how often I email)

     
