Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2024 Daz Productions Inc. All Rights Reserved.
Comments
Optimization info is never enough, especially here. Very, very valuable tips. Saves me hours and rounds of mind-numbing tests. Thank you again.
I render all of my final animations using Daz content exported to Maxon Cinema 4D, so this is an important reminder for me to check for those ridiculous 4K x 4K texture maps and res them down in Photoshop CS.
I wish there was a script to batch process them by folder.
Yeah, the E7 rocks, but the price is eye-wateringly high.
My current system suffered an accident, and is now prone to crashes.
I was going to go for an octo-core i7, 64GB RAM, and plenty of lanes on the motherboard.
Cost-wise, it's about the same as two 8-core Xeons plus the motherboard and RAM, so I'm tempted, as some other stuff I do chews up and spits out CPU cores. My current i7 (4-core) cries like a baby.
Oh, that makes sense. Actually, I'm very relieved to hear I misunderstood the scenario. I was afraid for a moment that I was facing some sort of temporary window period where I would be able to network all my computers, and that at some future point Microsoft would shut it down and tell me I had to buy Windows Server (and it sounds like that's way too expensive for my tastes).
Boy oh boy, it's the only way to fly! I just can't believe I never took advantage of it before, as it's so much fun to see all those render cores racing. I have all my machines set up in one not-very-large room; I hadn't really thought about spreading them around the house. I have to say, though, even at full load everything runs fairly cool and quiet, but then again I suppose we're only talking about a few computers at this point, not like a real server farm with rows and rows of rack-mounted blade servers.
I own Bryce but haven't used it yet; it's good to hear that it also supports network rendering.
Does Studio allow for network rendering?
...and times that by two for a dual motherboard.
A 22-core Xeon is out: 44 threads, times 2 for a dual board.
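The core/thread math above can be sketched quickly. This assumes 2 hardware threads per core (Hyper-Threading), which is the usual Intel Xeon configuration but is stated here as an assumption:

```python
# Thread-count math for the dual-socket Xeon setups discussed above.
# Assumes 2 hardware threads per core (Hyper-Threading).

def total_threads(cores_per_cpu: int, sockets: int = 2, threads_per_core: int = 2) -> int:
    """Logical threads a renderer sees on a multi-socket board."""
    return cores_per_cpu * threads_per_core * sockets

print(total_threads(22))  # 22-core Xeon, dual board -> 88
print(total_threads(8))   # pair of 8-core E5-2670s -> 32
```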
Thanks to everyone for all this vital information!!!
Working now on a small farm based on LGA-2011 V1/V2 server boards with 2x E5-2670 V1 (8-core) + 64GB DDR3 server RAM. Used pairs of E5-2670 V1 are going for $160 vs $3100 in 2012, and E5-2650 pairs for as little as $88! Used server RAM is cheap too. This will allow me to run most any rendering engine, including VUE's RenderCows.
I will wait a bit (need to save up money anyway) to decide on graphics boards. Looks like VUE 2016 will be adding a hybrid GPU/CPU mode also.
Indeed. The one linked is the Xeon version of a Haswell Extreme and costs more than the E5 with 22 threads; the E7 is kind of like the i7 (as I understand it), and the E5 is like the consumer i5. Both are priced for those with deep pockets, or a deep need. Well, both, really.
Be careful with "used" server RAM. Many times it doesn't go up for sale until the ECC/EDAC units start reporting errors. If used in systems that don't use the Error Correcting features it will likely be "OK", but on systems with Error Correction on it could cause an ABEND that keeps the machine from booting or causes a freeze. BE VERY SURE that you can return it for tested RAM if it starts flagging errors.
Kendall
That's awesome. I've been drooling over the idea of going for an LGA-2011 setup, especially since I saw an article on building a 32-core render monster that would cost less than the price (at the time the article was written) of one single Haswell-E i7 8-core CPU: http://www.techspot.com/review/1155-affordable-dual-xeon-pc/ The setup pretty much destroyed every more modern consumer-grade machine they pitted it against.
I looked into building one of these for myself, but a) I've never put together a computer before; the most I've done is tear them down, clean out fans, and switch out CPUs, and I'm not sure of my ability to do this; and b) even though the prices are way low, it was still just too pricey for me at this point. Then I started looking around eBay, and there are lots of HP Z620 and Z820 workstations prebuilt with dual Xeon LGA 2011 (I'm sure there are also comparable Dell and Lenovo dual LGA 2011 workstations), and they seem to go for between $400 - $600, which is less expensive but still a bit too pricey for me at the moment, when I can get the prior-generation Xeon workstations, which are still great for rendering, for less. But if the trend continues and prices keep dropping, maybe in a couple of years those things will be going for $200 - $300 for a complete 32-render-core server, ready to go, and I will lunge at it.
I'm excited for your project; I hope you share as it progresses.
Forgive my ignorance, but if I end up with ECC RAM failing like this, will it hurt the motherboard or other components of the server, or is it just a matter of going out and buying some replacement ECC RAM and swapping it out? I'm engaging in all this totally blind; I'm kind of a tech ignoramus, and I want to steer clear of any unseen dangers I can.
The failure to POST is really a 'safety' measure to prevent damage to anything: the RAM doesn't pass the check, so the machine doesn't get to run. So, yeah, that's about all you need to do. But without it booting it's hard to know which stick to replace, so you may need to replace all of them.
Thanks, I really appreciate the guidance and tips throughout this thread, it really is a goldmine
You might be able to run a memory test on it, one stick at a time, if the worst comes to the worst, although I understand support for ECC memory has not been maintained; perhaps try booting with just one stick in (per CPU, perhaps). Not sure if this will work, or if it will work every time. Just make sure any of the tests you try aren't going to cause damage somewhere.
Some boards may require the memory to be installed in pairs... and if there are only 2 sticks, you can't test one at a time.
Some boards can turn ECC on/off...so that may be another option.
...Linux doesn't, but then Daz Carrara or Bryce don't work in Linux very well.
There are several different types of error correcting RAM. Some are more easily checked than others.
I don't think the average desktop user realizes just how often memory errors happen. In most cases it doesn't matter, since an error is more likely in "unused" sections of memory than sections that are constantly being read/written, and the desktop user never sees a difference (usually it would precipitate a crash if an error happened in active memory). However, servers and server OSes tend to use almost all of the available RAM, so errors are a much bigger concern. ECC RAM allows the system to "repair" or reset the error and continue -- most of the time. Sometimes the error is so bad that the system halts. If the chip develops a permanent error in either the RAM or the parity sections, the system will refuse to boot upon detection. There are times that POST doesn't detect the errors and it is left to the motherboard circuitry to catch the problem; in these cases, what occurs on a detected error is up to the OS.
Worst case is that you get a machine that crashes randomly (hard to detect if running Windows, since it crashes more often anyway, but very easy to see when running Linux). Best case is that POST detects the problem and tells you which chip is failing. Running a memory test on ECC RAM is costly, and normally it is less expensive to just throw out a suspected bad chip than to risk losing critical data. It is these "suspected" chips that end up on the used market, along with chips pulled for preventative maintenance or upgrades. Caveat emptor.
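On Linux, the "easy to see" part comes from the kernel's EDAC counters. A minimal sketch for reading them, assuming the standard EDAC sysfs layout (/sys/devices/system/edac/mc/mc*/ce_count and ue_count); the base path is a parameter so it can be pointed elsewhere:

```python
# Sketch: read Linux EDAC corrected/uncorrected ECC error counters,
# which is one way to spot a failing stick before it takes the box down.
from pathlib import Path

def edac_counts(base: str = "/sys/devices/system/edac/mc") -> dict:
    """Return {controller: (corrected, uncorrected)} error counts."""
    counts = {}
    for mc in sorted(Path(base).glob("mc*")):
        ce, ue = mc / "ce_count", mc / "ue_count"
        if ce.is_file() and ue.is_file():
            counts[mc.name] = (int(ce.read_text()), int(ue.read_text()))
    return counts

if __name__ == "__main__":
    for name, (ce, ue) in edac_counts().items():
        print(f"{name}: corrected={ce} uncorrected={ue}")
```

A steadily climbing corrected count on one controller is the usual early warning; any nonzero uncorrected count is a reason to pull the stick.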
Kendall
Indeed. If the error is in the ECC or EDAC portion of the chip and the board allows turning off the error checking, this can be a good workaround. If the system is using FB-DIMMs, though, things can get nebulous: the buffers on the chips cannot be turned off, and that is where the majority of failures tend to occur on those puppies. Luckily, FB-DIMM prices are going down at the moment (but will head back up as they become more niche).
In my experience, most of the "newer" boards no longer require that the chips be done in pairs or sets, so one can upgrade RAM on a processor-by-processor basis, but since most of my machines are 4x systems I try to add/replace chips in 4's. Several of my machines have 256GB of RAM in them so replacing RAM can be a costly proposition. In the last two years I've had two chips fail outright, and one that bothered/concerned me enough that I pulled it from service. Luckily they were all from different machines so I could migrate chips around to keep things about even.
Kendall
...the base price of memory has been continually falling; for example, I can get a 128 GB quad-channel memory kit today for pretty much what 24 GB of tri-channel memory cost four years ago.
Meanwhile, today that same 24 GB tri-channel memory kit is going for just over $100 (Newegg).
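Putting the comparison above in numbers (using only the figures from the post: same money buys 128 GB today versus 24 GB four years ago):

```python
# Memory-per-dollar improvement implied by the post above:
# the same spend buys 128 GB today vs 24 GB four years ago.
ratio = 128 / 24
print(f"~{ratio:.1f}x more GB per dollar")  # ~5.3x
```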
Jonstark, can I put two graphics cards in that build? Is it possible? Thanks.
I mean that build
Thanks!
I just finished reading the article. Yes, it's possible. Sorry about that, but I'm enthusiastic!
When doing the eBay thing, please keep in mind the age of the systems and whether the board can handle the throughput the card needs. Just because it has x16 slots DOES NOT mean it will do it well, nor does it mean that it can handle multiple GPUs saturating the bus concurrently. Most of these machines were designed BEFORE multiple GPUs were popular and are designed for CPU/Disk IO, not for gaming. This means that the PCI-e bus may not be able to handle the speeds on the lanes that the video cards need.
It would be a shame to spend the money and not be able to do what you want with GPUs on the PCIe bus. I recommend doing that research before jumping in.
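One way to do that research on a machine you already have access to is `lspci -vv`, which reports each slot's maximum link (LnkCap) versus what it is actually running at (LnkSta). A small sketch of parsing that output; the sample string below is illustrative, not taken from a real box:

```python
# Sketch: compare capable vs. actual PCIe link speed/width from
# `lspci -vv` output to spot a GPU stuck on a narrow or slow link.
import re

def parse_links(lspci_vv: str):
    """Return (capability, status) pairs of 'Speed ..., Width ...' strings."""
    caps = re.findall(r"LnkCap:.*?(Speed [^,]+, Width x\d+)", lspci_vv)
    stas = re.findall(r"LnkSta:.*?(Speed [^,]+, Width x\d+)", lspci_vv)
    return list(zip(caps, stas))

sample = """\
LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1
LnkSta: Speed 5GT/s, Width x8, TrErr- Train-
"""
for cap, sta in parse_links(sample):
    print("capable:", cap, "| running:", sta)
```

In the sample, a slot capable of 8GT/s x16 is running at 5GT/s x8, exactly the kind of silent downgrade the warning above is about.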
Kendall
Thanks, Kendall! I will!
And even diligent research doesn't always help. I had a Dell T7400 for 8 years; the specs listed 90 amps on the 12 volt output of the 1000 watt power supply. It wasn't until I had a power supply issue that I found out it had 5 separate 12 volt outputs, each rated at 18 amps . . . I never did find this listed on Dell's site or any of the (few) reviews I found.
That brings up a good point... most server-level machines DO NOT have PCIe 6- or 8-pin power plugs.
Kendall
Record your own action for the res-down part and then use it to batch process either a selection of images or a specific folder (File > Automate > Batch).
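For anyone without Photoshop, the same batch res-down can be sketched in a few lines of Python with Pillow. The folder paths, extension list, and 2048px cap are assumptions; point it at your own texture folders:

```python
# Minimal sketch of the batch texture downscale wished for above,
# using Pillow instead of a Photoshop action.
from pathlib import Path
from PIL import Image

def res_down_folder(src: str, dst: str, max_side: int = 2048) -> int:
    """Downscale every image in src so its longest side <= max_side."""
    out = Path(dst)
    out.mkdir(parents=True, exist_ok=True)
    done = 0
    for p in Path(src).iterdir():
        if p.suffix.lower() not in {".png", ".jpg", ".jpeg", ".tif", ".tiff"}:
            continue
        img = Image.open(p)
        img.thumbnail((max_side, max_side))  # keeps aspect ratio, only shrinks
        img.save(out / p.name)
        done += 1
    return done
```

`Image.thumbnail` never enlarges, so maps already under the cap pass through at their original size.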
Blender does indeed use CPU rendering when they have to render a movie (Sintel, Tears of Steel, and Cosmos Laundromat). Tom explained that scenes could easily outgrow the wildest combination of video cards; however, there are also studios that use 8 Titans to render stills (and movies) at a blistering pace, so it's not only hobbyists rendering on GPUs. It just depends on what you are creating. I have a private prop build that is an apartment building (outside only) in all its details. That prop alone eats 12 GB of RAM, so rendering is done on the processor cores (6 of them, to be exact), and that takes time. But sometimes you just have to wait. If all goes well, the next generation of Titan X could sport 24 GB of video RAM (if the 900 series is a guide to go by, extrapolating on the 1080:980). So I guess more and more will be handled by GPUs. Fact of the matter is that GPUs are better at handling complex single-precision calculations than ordinary processor cores.
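The "12 GB prop" point is easy to see with rough texture math: a handful of uncompressed 4K maps adds up fast, which is exactly why GPU VRAM limits bite. The bytes-per-channel figures are standard; the map count below is an illustrative assumption:

```python
# Rough VRAM footprint of uncompressed square textures, to show why
# big scenes outgrow video cards and fall back to CPU rendering.
def texture_mb(side_px: int, channels: int = 4, bytes_per_channel: int = 1) -> float:
    """Uncompressed size of one square texture, in megabytes."""
    return side_px * side_px * channels * bytes_per_channel / (1024 ** 2)

one_4k = texture_mb(4096)  # 64 MB per 8-bit RGBA 4K map
print(one_4k, "MB each;", 50 * one_4k / 1024, "GB for 50 of them")
```

Fifty such maps already exceed 3 GB before geometry, and that is why resing textures down (as discussed earlier in the thread) pays off.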
Greets, Artisan!