64-bit


Comments

  • Subtropic Pixel Posts: 2,388

    Reading this thread brought up bad old memories (in my brain) of running out of memory (in the Bryce address space) back in the early 2000's. And all I was trying to do was load tutorial projects! This was back when I was still running 32 bit Windows, and now I've been 64 bit since I installed Vista around 2007-2008. I am now on Windows 8 64 bit, and would never go back to those XP days.

    Here's my editorial comment: I think 64 bit must be the very highest priority for Bryce developers, especially for projects that contain a lot of objects and elements that require memory. Some functions may even be faster under 64 bit, because Windows 7/8 would no longer have to run the application through the WOW64 32-bit compatibility layer, and because 64 bit doubles the word size, which can reduce the number of memory operations for some workloads.
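
    To put that address-space argument into concrete terms, here is a minimal C++ sketch; it is purely illustrative and has nothing to do with Bryce's actual code:

        #include <cstdio>

        int main() {
            // Pointer size determines how much memory a single process can address.
            std::printf("pointer size: %zu bytes\n", sizeof(void*));  // 4 on a 32-bit build, 8 on 64-bit

            // Theoretical limits; a real 32-bit Windows process gets far less in practice.
            double gib = 1024.0 * 1024.0 * 1024.0;
            std::printf("32-bit address space: %.0f GiB\n", (double)(1ull << 32) / gib);  // 4 GiB
            std::printf("64-bit address space: 2^64 bytes (about 16 exbibytes)\n");
            return 0;
        }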

    Going 64 bit will not be free, of course. There is a labor expense, because all of the app's executables (the EXE and supporting DLLs) will need to be recompiled and linked for 64 bit compliance. If there are any problems with the source code (incompatible feature use, missing code because it was on John Doe Developer's PC and his hard drive crashed before a backup could be taken, etc.), they will need to be resolved before recompiling. If source code management was not done well, then rewriting portions of the product's code could take months. Either before or after 64 bit testing, the EXE and DLL modules will need to be recompiled again in 32 bit and retested there (assuming that you want to keep track of only one version of the source code and avoid "forks"). Issues found in either the 32 bit or the 64 bit build will need to be resolved and retested on the other.

    The upshot is that a migration to 64 bit is really a full project, with planning and a complete test cycle, and it should probably go through beta testing too.

    Another thing: if Bryce plays "host" to other code (such as in plugins), then DAZ would need to decide how to handle 32-bit plugins, especially those from other vendors. Either bridge them or require them to be 64 bit. The former comes with issues, but the latter could be viewed as unfriendly, especially if some of those plugins were written by and/or sold by community members.

    Finally, there would be an ongoing labor expense if DAZ wanted to maintain both 32 bit and 64 bit versions of the software. It's usually a good idea to give your user base a grace period before stopping 32 bit support. Not unlike Microsoft's current edict that security fixes for Windows XP will stop in early 2014.

    Based on all I've said above, it sure might sound like there are more things "con" than there are "pro". But Bryce can be a heavy memory user, so it must go 64 bit. I must say I was very surprised to learn that it is still 32 bit, because 64 bit now is the right thing to do.

    DAZ should have begun the planning stage the very moment they acquired Bryce. I hope they did and I hope they are actively working on it now.

  • G_smith Posts: 7

    Well, I really don't want to intrude on the board, but I've been a Bryce user since 5.0 and have been avidly looking for rendering improvements by trying different computers.
    I have been using AMD CPUs for a long time, because they were economical and cost effective up until the Bulldozer cores... I've created a personal benchmark scene in Bryce 5.0 that I use to measure test speeds. Now, I don't claim to be a pro, just a hobbyist; I know my poly count should be lower, but I'm lazy. I love Horo's tutorials, and David, you are awesome!!! I've followed many tutorials and your posts are very good (like the LAA program that I now use, and it makes my renders easier...)
    I want to share some observations that may help others to make the right choice of computers.
    So all I want to say is to warn whoever is about to buy a Bulldozer or Vishera type AMD CPU that they are just not worth the money. I think their new architecture is just not cut out for Bryce, since the 8-core lineup uses a 4/4 design: 4 cores with math units (FPUs) and 4 without. In light of that I sold my 8150 and dropped a 1045T into the socket.
    My test render with Bulldozer was 39 sec. With the six-core (true six-core) AMD, I OC'd to around 3.1 GHz from 2.7, and the memory will hold at 1442 MHz (rated 1333). The same test render is a solid 30 sec!!!
    And the CPU was only $100. Now, I know it's unwise to OC a computer that is already running at 100% utilization, but I am aware of the risks. I've had that setup since February and it has basically been running in my basement ever since. Usually I try to limit renders to 1-3 days max, but currently I have to render with the highest settings, so 5 seconds of animation takes 7 days at 720x480 resolution.

    At the same time I own an i7 3770S and the test render is 21 sec!!! I have test renders done both without HT and with it. I learned a lot about how multiprocessing works in Bryce.

    I also ventured into the Xeon world in the shape of an HP ProLiant ML350 G5. With 2x 2.33 GHz CPUs and 4 GB of ECC DDR2-667, it got me a respectable 35 sec on the same test render.

    The way I see it, Bryce really likes fast memory. AM3 AMD CPUs are equipped with onboard DDR2 and DDR3 memory controllers. I have an AMD 905e CPU: basically 4 cores at 2.5 GHz, Phenom II series. That was in an AM2+ mobo, with DDR2-800 memory.
    Test render hovered around 1 minute. (I gave it a generous 8% boost to the base FSB over the past 2 years.)
    When I dropped this CPU into the AM3 board that had held the Bulldozer 8150 and later the 1045T, and OC'd the FSB from 200 MHz to 220 MHz, the test render came down to 48 sec. Same processor, just moved to an AM3 board with DDR3-1333 (plus a little kick to 1442 MHz)...
    So the way I see it, the current generation of Intel CPUs is far superior to AMD's (unfortunately). You would have to double the cores of an AMD computer to match Intel's speed, but because of the 8-core limitation that really becomes an issue.
    Now of course you can get around it with VMware, but soon you'll end up hiring a secretary just to manage the chaos...

    Last but not least, the original test render file was made on an AMD Athlon MP 2600+ at 2098 MHz (512 KB L2 cache) with DDR-266 memory; with OC it ended up around 272 MHz, I think... The render time was slightly under 5 minutes.
    That machine still exists, now with dual 2800+ MP procs. Render time around 3:30. I checked last year. Not competitive anymore, but for a 12-year-old motherboard, heavily OC'd over the years, it's respectable.

  • Horo Posts: 10,633

    @G_smith - very interesting observations! Thank you also for the kind words about David and my efforts. You might be interested in looking at these CPU Benchmark lists (http://www.cpubenchmark.net).

    There is a difference between multi-core and multi-thread. I have an 8-way i7 which sports 4 cores, each of them hyper-threadable, and a 4-core i3, not hyper-threadable. Bryce uses half the CPUs available in normal priority and all of them, up to 8, in high priority. For the i7, normal priority takes 4 cores; for the i3, 2 cores.

    In high priority, the i7 throws in the multi-threading and the i3 the other 2 cores. The time advantage for the i7 between normal and high priority is around 15%; for the i3 it is 100%. Looking at the core temperature and the current supplied to the CPU, the 15% gain on the i7 comes at a high cost, while the i3 does the job without even having a CPU fan. Of course, the i3 at high priority only gives 60% of the speed the i7 does in normal priority.
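
    If it helps, here is a tiny C++ sketch of that core-usage rule as I observe it; the halving and the cap of 8 come from observation, not from any documentation:

        #include <algorithm>
        #include <cstdio>
        #include <thread>

        int main() {
            // Logical processors the OS reports (e.g. 8 on a 4-core/8-thread i7).
            unsigned logical = std::thread::hardware_concurrency();

            // Observed rule: normal priority uses half the logical CPUs,
            // high priority uses all of them, capped at 8 either way.
            unsigned normal = std::min(logical / 2u, 8u);
            unsigned high   = std::min(logical, 8u);

            std::printf("logical: %u, normal priority: %u, high priority: %u\n",
                        logical, normal, high);
            return 0;
        }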

    Since Bryce does everything in memory, memory speed is key for scene development, though it doesn't help much for rendering. It matters for development because there Bryce uses only 1 of the available cores.

    If you have several computers, it may be interesting to use Lightning to share the render job among several machines. The important thing here is that the priority setting of the host determines the priority setting of each client. There is no way to set the priority for the clients separately.

  • G_smith Posts: 7

    >@G_smith - very interesting observations! Thank you also for the kind words
    >about David and my efforts. You might be interested in looking at these CPU Benchmark lists (http://www.cpubenchmark.net).

    Thank you sir, I am familiar with it, but it helps as always to be reminded. :)

    >There is a difference between multi-core and multi-thread. I have an i7
    >8-way which sports 4 cores, each of them hyper-threadable. And an i3 4-core,
    > not hyperthreadable.

    I thought all of the i3s were 2/4 (2 physical cores plus two HT threads) and the majority of i5s were 4/4, except for one CPU I believe was 2/4, the 3470T. But I'm not so sure about laptops.

    > Bryce uses half the CPUs available in normal mode and
    >all, up to 8, in high priority. For the i7, normal priority takes 4 cores,
    >for the i3 2 cores.

    Those are indeed my observations as well.


    >In high priority the i7 throws the multi-threading in and the i3 the other 2
    >cores. The time advantage for the i7 between normal and high priority is
    >around 15%, for the i3 100%. Looking at the core temperature and the current
    >supplied to the CPU, the 15% gain in the i7 come very expensive while the i3
    >does the job without even having a CPU fan. Of course, the i3 at high
    >priority only gives 60% speed than the i7 does in normal priority.

    Your 15% is pretty accurate; I came up with 19% for my current project.
    I've read on another thread that Bryce is supposed to use the physical cores before going to HT, and since rendering is pretty CPU intensive there isn't much it can squeeze out of the virtual cores, so yeah. I think price/performance-wise the i5 is pretty much the way to go, but if rendering time is more important, then it will be a costly thing...
    Here are my test speeds for the Ivy Bridge i7 (3770S) with 2x4 GB DDR3-1600 (a quick scaling check on these numbers follows after the list):
    HT turned off:
    1 Core: 84 sec
    2 Cores: 47 sec
    4 Cores: 25 sec

    HT is on:

    1 Core: 85 sec
    4 Cores: 26 sec (now here, of course, this is all the physical cores in normal priority; pretty much the same as without HT at max priority)
    4+4Cores: 21 sec
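
    And here is the scaling check, just arithmetic on the times above; the efficiency figures are my own calculation:

        #include <cstdio>

        int main() {
            // i7-3770S test render times (seconds), HT off.
            const double t1 = 84.0, t2 = 47.0, t4 = 25.0;
            // With HT on: 4 physical cores vs 4 physical + 4 HT threads.
            const double t4_ht_on = 26.0, t8 = 21.0;

            std::printf("2-core speedup: %.2fx (%.0f%% efficiency)\n", t1 / t2, 100.0 * t1 / t2 / 2.0);
            std::printf("4-core speedup: %.2fx (%.0f%% efficiency)\n", t1 / t4, 100.0 * t1 / t4 / 4.0);
            std::printf("HT gain over 4 physical cores: %.0f%%\n", 100.0 * (t4_ht_on / t8 - 1.0));
            return 0;
        }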

    Because the test render is done so quickly, I am doing another set of test renders to...

    >Since Bryce does everything in memory, memory speed is key for development,
    >but it doesn't much help for rendering. But it is helpful, because for scene
    >development, Bryce uses only 1 of the available cores.
    Can you elaborate further on 'scene development'? Does that mean the fast render in the top left, or is it at each frame?

    >If you have several computers, it may be interesting to use Lightning to
    >share the render job among several machines. The important thing here is
    >that the priority setting of the host determines the priority setting of
    >each client. There is no way to set the priority for the clients separately.

    I think I've read somewhere that Lightning will only use one processor on the client side. Is that true, even though the host will use all of them?

    I gave up on network rendering after Bryce 5... And even then, because of the lag with too many slow computers, it was just not feasible...

  • Horo Posts: 10,633

    @G_smith - thank you for the clarifications. I thought I'd risk putting in some redundant comments, but one never knows. The 15% for HT is an average. Depending on the scene, I got as high as 20% and as low as 9%.

    Scene development: that means when you're working on a scene. All previews use only 1 core. What is worse is that all labs and functions also use only 1 core. If you, for instance, multi-replicate an object several hundred times, you're in for quite a wait. Though everything is done in memory, only 1 core works on a job involving a lot of math operations.

    Lightning: in Bryce 5 (Lightning 1), the host was automatically used as a client as well. This is not the case anymore with Lightning 2c in Bryce 7.1 (we went through 2, 2a, 2b). You have to start the client on the host machine if you want it to participate in the job, though it is not mandatory to do so. However, having to start the client explicitly on the host has the advantage that you can test network rendering without a network, using the loop-back address or the one assigned to the machine. This is particularly helpful for locating bugs (firewall settings and the like).

    A disadvantage of Lightning is that the host sends the source file to each client in sequence. This means that each client must have enough memory to hold the file, and that large files take quite a while to be distributed to all clients. The good news is that each client that has the file starts rendering while the others are still waiting for it. I've already recommended distributing the files via multicast. Hopefully, this will be introduced in the next development cycle.
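
    To illustrate why that matters, a little back-of-the-envelope sketch; the file size, link speed and client count are assumptions for the example, not measured values:

        #include <cstdio>

        int main() {
            // Purely hypothetical numbers to show why sequential distribution hurts.
            const double scene_mb  = 500.0;   // assumed scene file size
            const double link_mbps = 100.0;   // assumed LAN throughput, megabits/s
            const int    clients   = 6;       // assumed number of render clients

            double per_client_s = scene_mb * 8.0 / link_mbps;                       // ~40 s per client
            std::printf("sequential unicast: %.0f s total\n", per_client_s * clients);  // ~240 s
            std::printf("ideal multicast:    %.0f s total\n", per_client_s);            // one send reaches all
            return 0;
        }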

    The good news about Lightning 2c is that it is rock stable. You can pull out the LAN cable at a client while a render job runs and put it back a few minutes later. That client resumes the job, and the tile that got lost when you pulled the cable is rendered by another client.

    For animations, you should not use Tile Optimization, since each frame can be considered a tile. However, if you network render a still, using Tile Optimization is very helpful. In this case, each client gets a 100 x 100 pixel tile to chew on.
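
    As a small illustration of what Tile Optimization hands out; the resolution is just the 720x480 example mentioned earlier in the thread, the 100 x 100 tile size is as described above:

        #include <cstdio>

        int main() {
            // Example still resolution and the 100 x 100 tile size.
            const int width = 720, height = 480, tile = 100;

            int tiles_x = (width  + tile - 1) / tile;   // 8
            int tiles_y = (height + tile - 1) / tile;   // 5
            std::printf("%d x %d = %d tiles to hand out to clients\n",
                        tiles_x, tiles_y, tiles_x * tiles_y);   // 40
            return 0;
        }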

    The host should be the fastest machine; the slowest machine is always the one that sends in the last tile. It is possible to render two scenes at the same time over the same network. For example, if you have 6 computers on your network with a client running, you can assign 3 to work for host 1 and 3 for host 2.

    Network rendering is well worth the trouble, but only if you have a render job at hand that needs several hours or days, because of the overhead involved in sending the file to all clients. I do use my oldest laptop, Win2000 with 512 MB memory, for long jobs. Though it contributes only 10%, that's a 1-day saving on a 10-day render.

  • Subtropic Pixel Posts: 2,388

    Wow, my posts just keep getting longer and longer on this topic....sigh!

    I just did some render testing last weekend and I find Bryce to be surprisingly slow, even for a simple scene with few objects and none of the truly costly attributes such as transparency, refraction, and reflection. My i7 6-core/12-thread system simply would not engage more than 4 or 5 cores at the same time, and that many only for brief moments. Bryce mostly used ONLY one or two cores and less than 50% of each.

    I am dejected at this!

    Lightning was worse; unacceptably slow. A 4-minute animation with no textures and low effects took 30 minutes to render on the workstation (not so bad, considering that Bryce was only using about 20% of my available CPU...uh yeah...), but it was even worse via Lightning: it took about an hour and 15 minutes when I brought an i7 4C/8T laptop into the equation and ran the render with both systems set up as Lightning "clients".

    Both systems reported very low CPU usage during the Lightning renders, between 15% and 35% for each, with periodic dips down into the 10%-ish ranges. And the status information coming back to each client was very poor. The percentage-completion figures were not at all understandable, and I had no real idea when the render could be projected to finish.

    To put some of this in perspective: My i7 workstation has 64 GB RAM and spends its time running Stanford's F@H protein folding simulations 24/7 whenever I'm not using it for something else. And even though CPU is always at or near 100% for all twelve cores, I can do just about anything without shutting down folding anyway. Some games will crash if GPU folding is running at the same time, so I do have to make accommodations.

    But Bryce won't really use it even when I shut down folding and have nothing else running. It is most certainly capable of doing more work than I can seemingly get Bryce to ask for.

    So for the time being, I see two significant problems here. Not bugs. But they are definitely "problems of obsolescence."

    1. Bryce definitely needs to go 64 bit.

    2. We need to have a Bryce that can and will engage ALL CPU threads and even GPU cores, both locally and via network rendering. Preferably without fuss or nit-picky tweaking!

    I would really like to know whether or not DAZ's software products (Bryce, Hexagon, DAZ Studio, et al.) have a development roadmap that includes modernizing the render engines to do this, even at the higher cost of electricity.

    And of course, I am saddened at the realization that Bryce might no longer be for me; at least not in its current state of (non?) development.

  • Horo Posts: 10,633

    @Subtropic Pixel - there is sometimes a bit of confusion about the usage of the cores. Not every job can easily be split up across different cores. Sometimes they go to 100%, sometimes much less - it depends on which part of what sort of scene is being rendered at any moment. In network rendering, the host must be set to high priority if you want your multi-core clients to work at full throttle. It is mandatory that Bryce gets to 64-bit; there's no doubt about that.

  • Subtropic Pixel Posts: 2,388

    Hello Horo, and thank you for your explanation. I understand that Hyper-Threading (and AMD does a somewhat similar thing, sharing the floating point unit even though AMD processors have actual separate cores) might preclude the use of the so-called "virtual cores" on an Intel system, which on my machine would probably be numbered 1, 3, 5, 7, 9, and 11, while the real cores would be numbered 0, 2, 4, 6, 8, and 10.
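
    Out of curiosity I sketched how one could test that numbering on Windows by pinning a thread to the even-numbered logical processors; note that the even/odd mapping is my assumption and is not guaranteed on every system:

        #include <windows.h>
        #include <cstdio>

        int main() {
            // Hypothetical experiment: pin this thread to logical processors 0, 2, 4, ...
            // On many HT-enabled Intel boxes these map to one hardware thread per
            // physical core, but that numbering is NOT guaranteed; verify it first.
            SYSTEM_INFO si;
            GetSystemInfo(&si);

            DWORD_PTR mask = 0;
            for (DWORD i = 0; i < si.dwNumberOfProcessors; i += 2)
                mask |= (DWORD_PTR)1 << i;

            if (SetThreadAffinityMask(GetCurrentThread(), mask) == 0)
                std::printf("SetThreadAffinityMask failed\n");
            else
                std::printf("thread pinned to mask 0x%llx\n", (unsigned long long)mask);
            return 0;
        }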

    But I am disappointed that Bryce wouldn't even engage all six of the REAL cores of my system. And yet DS4 will use ALL TWELVE. In fact, with the few test renders I did under DS4 (it was just a simple face pose of a Genesis figure with hair and nothing else; not even a background or props), I noticed that these renders engaged all twelve threads and pegged all twelve at 100%.

    When you are able to dedicate meaningful resources to a project, as I want to do, it just chews up simple renders in no time flat and makes complicated renders more doable in what little free time lots of us have. I was pleased and heartened by DS4's performance, for sure.

    I get what you are saying, but I don't understand how DAZ Studio can use all cores if both Bryce and DS4 are obeying the same basic laws of physics and light ray behavior. After all, we are assembling polygons into meshes, laying textures over them, and bouncing light beams off of it all. It's an oversimplification, yes; but I am still not convinced that in this day and age Bryce should be given a pass on not using all available CPU cores, when available free software does this like a champ, and without any tweaking or fine-tuning.

    And I'll go one further and posit that in this day and age, rendering engine designers should find a way to leverage GPU cores too. Why not? After all, there is a lot of unused processing capacity in most people's GPUs, since those are bought for gaming and nobody really plays games while rendering art or animations. In some cases, people have more than one GPU or even dual-GPU graphic cards.

    I would like to see my hardware used to the max so that when it is time for me to gift or landfill it, I will know that I got all the work out of it that I possibly could have.

    Thank you for helping me to think through these concepts.

  • Horo Posts: 10,633

    @Subtropic Pixel - Bryce still runs on its legacy code base and it was "just" upgraded, though the coders DAZ 3D had engaged did a great job. Nevertheless, it's still 32-bit, and a lot of the code is written in the proprietary Axiom language, which has the advantage that it runs on both Mac and PC. Bryce 7.0 was supposed to become a 64-bit application, but it turned out to be more time intensive than anticipated and would have overrun the budget. After all, the Bryce 6.1 - 6.3 - 7.0 - 7.1 development cycle already took 2 years. Unfortunately, it is not as simple as just recompiling the code for 64-bit.

    With the memory restriction inherent to 32-bit (potentially 4 GB, practically 2 GB, around 3.4 GB with the LAA flag set), using more cores means trading memory for speed. Several of the 7.0 builds addressed up to 16 cores, but the memory penalty was considered too high and it was decided that 8 cores would be a good compromise. With a 64-bit program like Studio, there's no reason not to support 32 or 64 cores. Studio has the advantage that it is a relatively new program (I've been using it since a 0.8 version) and is not hampered by old legacy code written in large part in a proprietary programming language.
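
    To illustrate that trade-off with numbers, a tiny sketch; the per-thread memory figure is a made-up assumption for illustration, not a measured Bryce value:

        #include <cstdio>

        int main() {
            // Illustration only: the per-thread figure is an assumption, not a Bryce number.
            // The point is why more render threads cost memory under a 32-bit cap.
            const double scene_gb      = 1.5;    // assumed scene + working set
            const double per_thread_gb = 0.15;   // assumed extra buffers per render thread
            const double caps_gb[]     = { 2.0, 3.4 };   // plain 32-bit, 32-bit with LAA
            const char*  names[]       = { "32-bit (2 GB)", "32-bit + LAA (3.4 GB)" };

            for (int i = 0; i < 2; ++i) {
                int threads = (int)((caps_gb[i] - scene_gb) / per_thread_gb);
                if (threads < 0) threads = 0;
                std::printf("%s: room for ~%d render threads\n", names[i], threads);
            }
            std::printf("64-bit: the cap is effectively your installed RAM\n");
            return 0;
        }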

    Making full use of many cores requires a lot of rewriting. Obviously, there was enough time to implement it for rendering, but not for all the other tasks that could hugely profit from it.

    GPU rendering comes up from time to time. Games use the GPU to render, which saves CPU resources. There are also disadvantages. I'm not sure whether plop rendering could be supported or not, and I wouldn't want to miss that option. Additionally, Bryce renders internally in 48 bit (16 bits per colour, not just 8), and a render can be exported as a 48- or 96-bit image. Such an image contains much more information than a standard 24-bit one, and this can be exploited in post production. This is another option I wouldn't like to miss.
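
    To put the colour depth into numbers, a small illustrative snippet (nothing Bryce-specific):

        #include <cstdint>
        #include <cstdio>

        int main() {
            // 24-bit output: 8 bits per channel -> 256 levels.
            // 48-bit internal: 16 bits per channel -> 65536 levels, the headroom
            // that survives into a 48- or 96-bit export.
            std::printf("8-bit channel:  %d levels\n", 1 << 8);    // 256
            std::printf("16-bit channel: %d levels\n", 1 << 16);   // 65536

            // Collapsing 16-bit to 8-bit throws that headroom away:
            uint16_t hdr = 51234;                    // some 16-bit sample
            uint8_t  ldr = (uint8_t)(hdr >> 8);      // 200; 256 distinct inputs map here
            std::printf("16-bit %d -> 8-bit %d\n", (int)hdr, (int)ldr);
            return 0;
        }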

  • Subtropic Pixel Posts: 2,388

    Fascinating information, Horo. Thank you. And I believe DS4 already supports GPU rendering. I tried it out yesterday; works like a charm (but again, only simple images; nothing complex).

    But for Bryce, I would agree that 64 bit is the first step. From that, all other things will be possible, and I love Bryce, so I would definitely be in the market for an upgrade.

    In the meantime, I am thinking of having a gander at Vue, and not just for reasons of rendering. It flat-out looks intriguing and appears to have regular updates made to it. Vue Frontier looks like a reasonably priced entry-level option and is even sold in the DAZ store, so my PC membership would help reduce the price even further.

  • Chohole Posts: 33,604
    edited August 2013

    Investigate a lot further before you dive into Vue. You will find that you have to keep adding bits to it to get it to do what you want. Even importing models into it needs a paid-for add-on.

    PC membership would not reduce the price any, as this is not a Daz Original. Membership discounts and coupons only apply to Daz Originals, not PA items or resale items.

  • Subtropic Pixel Posts: 2,388

    chohole said:
    Investigate a lot further before you dive into Vue. You will find that you have to keep adding bits to it to get it to do what you want. Even importing models into it needs a paid-for add-on.

    Yep, I figured that out within 5 minutes of browsing their site. I am also looking at Carrara; it looks a lot less expensive, and I guess you can do landscapes and modeling with it.

    PC membership would not reduce the price any, as this is not a Daz Original. Membership discounts and coupons only apply to Daz Originals, not PA items or resale items.

    Hmmm, Frontier did show a discount on my DAZ store screens last week but today it's gone. I just assumed that it was due to my Platinum Club membership.
