Computer Upgrades
After a bunch of years I finally decided to get myself a Christmas present, and bought a new high-powered computer. An i7-4790, 3.6GHz, 12GB RAM, 2TB hard drive....
It's nice, especially since it gives me 8 cores when rendering and stuff. I just wish they would use them for Bullet cloth.... :(
But it occurred to me, now that it's all set up and running, that computers haven't really changed a lot in all these years. Yeah, of course they have, technically, but after 5 or so years with an old 2GHz Core 2 Quad processor in my laptop, this new super computer isn't all that different in total performance from the old one. Nothing to really blow me away.
I always was under the assumption, after hearing about Moore's Law and other predictions of computer performance increases, that processors would double in speed or something every year or two. But heck, here we are in almost 2015 and there's no way that's been true. In my case I'm going from a 2GHz to a 3.6GHz after 5 or 6 years. Yeah, more cores for processing, but that only matters if the software can deal with it.
Clearly I don't understand the specifics, but I keep waiting for computers to blow me away, and between the hardware limitations and the software limitations, it always seems like things are just plugging along. Heck, even my graphics performance with Carrara on this new super computer can bog down with texture refreshes and stuff.
Anyway, for those who haven't upgraded your computers in a long time, don't feel too bad. :)
Comments
You're right. Moore's Law worked great, right up through the Pentium, but it kind of hit the end stops with the Intel Core ix processors, and now the enhancements seem to "go wide" rather than "go deep" (to borrow an American Football metaphor). Going from 2 cores to 4, or 4 cores to 8 (and sometimes those are not even "real" cores, with hyperthreading) is not the same as going from 2 GHz to 4GHz in terms of system performance. (because so much of the software we run simply isn't optimised to use multiple cores)
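To illustrate the "isn't optimised to use multiple cores" point, here is a toy C++ sketch (purely hypothetical, not from any real renderer): the same summing job run on one thread and then split across however many logical cores the machine reports. The extra cores only pay off in the second version, and only because the code was written to divide the work up.

    // Toy illustration: one core doing everything vs. the work split across threads.
    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    // A made-up unit of work: sum one slice of a big buffer.
    static void sumRange(const std::vector<double>& data,
                         std::size_t begin, std::size_t end, double& out) {
        out = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
    }

    int main() {
        std::vector<double> data(20000000, 1.0);

        // Single-threaded: one core works, the rest of the chip sits idle.
        double serialTotal = 0.0;
        sumRange(data, 0, data.size(), serialTotal);

        // Multi-threaded: split the buffer across all logical processors.
        unsigned n = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> workers;
        std::vector<double> partial(n, 0.0);
        std::size_t chunk = data.size() / n;
        for (unsigned i = 0; i < n; ++i) {
            std::size_t begin = i * chunk;
            std::size_t end = (i + 1 == n) ? data.size() : begin + chunk;
            workers.emplace_back(sumRange, std::cref(data), begin, end, std::ref(partial[i]));
        }
        for (auto& w : workers) w.join();
        double parallelTotal = std::accumulate(partial.begin(), partial.end(), 0.0);

        std::cout << serialTotal << " vs " << parallelTotal << " (" << n << " threads)\n";
    }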
Also, the advent of "green" hard drives (slow spindle speeds, running down the drive & parking the heads after 5 minutes etc) has done much to send overall computer performance heading back to the stone age.
I have a 5 year old 2.5GHz Centrino laptop that outperforms my 2013 3.8GHz i5 iMac in everything apart from direct cpu computations. And given that it's mostly used for writing code, it doesn't spend much time under high CPU loads. Sure it's old, scratched, the paint's rubbed off the plastic in places and there are at least 3 dead thunderflies trapped within the LCD sandwich, but it still does its job remarkably well. And I very much doubt anyone would nick it!
Now I'm not any kind of technology expert, but whereas 7 or 8 years ago we might have upgraded a computer to the latest thing for "better performance", it strikes me that nowadays, the year-on-year improvements are marginal at best, and the only real reason to buy a new computer is because the old one died, or you're switching to massively different tech. That applies to phones, tablets etc too, IMHO.
You are right, it depends on the software and the math that software uses to simulate whatever.
However, the newer video cards along with new processors do help with speed and with what one can actually do. The first example I can think of is 3DCoat. One will quickly run out of memory and computing power if too many unwieldy items are loaded in at once. ZBrush seems to do a better job with DynaMesh. I've found that if one doesn't have about 8 or so extra gigabytes of memory above and beyond the operating overhead, there will be lag and unpredictable results.
My older i7 920 with 8GB runs into this memory quandary quite often. Other computers that have 16GB and up do not have problems with the same software.
Cloth simulation is one application that takes a huge amount of computing time. Doesn't matter if you use the $16K Optitex or the free one in Blender.
But I digress.
I don't know where the thread is that I read this, Joe, but if you are using the 64-bit version of Carrara, DAZ_Spooky has said that you will get much faster performance out of Carrara if you turn off Texture Spooling. It is in Preferences, under Imaging and Scratch Disks, if I recall. After turning it off, you will need to quit and relaunch Carrara.
Yes, turn off texture spooling for 64-bit Carrara: it will spool textures even though it doesn't need to and wastes time accessing the hard disk - which is a "one thing at a time" operation unless you have a really fancy multi-channel RAID controller. In the worst cases, I've had Carrara sitting in the neighborhood of 20% CPU usage while the hard disk controller was pegged.
Thanks Fenric. I knew I wasn't crazy. At least about this. ;-)
Oh really??? I missed that...
And after all the issues we had early on with texture spooling, making sure you set it at exactly the right value, now it's better turned off.
Groovy. Thanks. Done.
Now all they have to do is make all the single processor functions multi-processor/core aware!
I knew the standard physics engine was single processor, but until you mentioned it above, I had just assumed that the bullet engine was multi-processor aware as it was added in C8, well after multi-processor machines were common. Weird.
I felt a much greater difference going from a Core2Duo dual core chip to my eight core AMD. Twas hugely noticeable.
Of course, 3D graphics software will always have the ability to bring a computer to its proverbial knees.
Something I read while deciding on my CPU chip is that the GHz of a modern processor makes a greater difference now than it did before, because the form factor of the chip itself has been greatly reduced in size, making the distance the information travels that much shorter. To me, being a bit numb in these regards, I didn't think that nanometers could make that much difference. But when it comes to delivering data to us, I guess it can be a huge boon. So now, even though I have eight buckets flying across my render window, each of those buckets is faster than those from my older Core2Duo machine.
The 16GB of cooled RAM hasn't been touched much by Carrara, which has rarely needed even 8GB for my renderings - but everybody's scene house-cleaning habits are different, so that may just be my own thing. I'm still glad to have it for other related software.
I notice a huge difference in Windows performance and multitasking with my Zambezi 8 compared to my dual core, however. Like night and day.
I have just built a system for 3D animation and I noticed the difference right away with the speed. I also went the extra mile and put 240 mm liquid cooling on it, and when it renders for a long time (24 hrs) the CPU never got above 59C, so I don't have to worry about it overheating and killing the CPU.
More threads for processing. Going from a Core2 Quad to an i7-4790 delivers you the same number of cores - 4.
The 4790 has hyper-threading, which (sort of) creates virtual cores that can make use of "spare" processor cycles. This usually adds a few percentage points of performance in multi-threaded applications.
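If anyone wants to see what their own machine reports, a couple of lines of C++ (just a sketch) will show it - keep in mind the standard library only exposes the logical count, so hyper-threads are included:

    // hardware_concurrency() reports *logical* processors, so a 4-core/8-thread
    // chip like the i7-4790 shows up as 8 here even though only 4 physical cores exist.
    #include <iostream>
    #include <thread>

    int main() {
        unsigned logical = std::thread::hardware_concurrency();  // may return 0 if unknown
        std::cout << "Logical processors reported: " << logical << '\n';
        // Counting physical cores needs an OS-specific call (e.g. GetLogicalProcessorInformation
        // on Windows); the standard library alone can't tell a hyper-thread from a real core.
    }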
If you provide more specifics about your system (and the performance differences between old and new) it should be possible to see if you're getting the most out of it.
I upgraded from an i7 quad (8 logical cores) to 2 Xeon CPUs (24 logical cores) on a dual-socket Asus motherboard with 32 GB of RAM and didn't notice very much difference in speed, except for rendering. THAT is much faster.
I wanted to install liquid cooling but at the store they told me the case wasn't big enough (it's a full tower) to install 2 liquid cooling systems for my 2 Xeon CPUs. I find that hard to believe.
It is very difficult to give any clear numbers when it comes to 3D graphics (or anything that uses loads of RAM). RAM has not gotten that much faster over the years; it is still 10-20 times slower than the CPU, so you use the cache memory in the CPU a lot, and as soon as you run out of that memory you are back to the slooooow RAM again - and that happens all the time with 3D graphics.
And most RAM implementations don't give every core full-speed access to memory; the cores have to wait for each other when they read/write to RAM.
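A rough way to see that cache-versus-RAM gap (just a toy benchmark, nothing to do with any particular renderer) is to walk the same big block of memory in cache-friendly and cache-hostile order; the additions are identical, only the access pattern changes, and the strided pass is typically several times slower:

    // Same arithmetic, different memory access order: once the working set no
    // longer fits in cache, strided access keeps falling back to slow RAM.
    #include <chrono>
    #include <iostream>
    #include <vector>

    int main() {
        const std::size_t N = 4096;               // 4096 x 4096 floats ~= 64 MB, far bigger than any CPU cache
        std::vector<float> m(N * N, 1.0f);

        auto run = [&](bool rowMajor) {
            auto start = std::chrono::steady_clock::now();
            double sum = 0.0;
            for (std::size_t i = 0; i < N; ++i)
                for (std::size_t j = 0; j < N; ++j)
                    sum += rowMajor ? m[i * N + j]   // walks memory sequentially: cache-friendly
                                    : m[j * N + i];  // jumps 16 KB every step: mostly cache misses
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                          std::chrono::steady_clock::now() - start).count();
            std::cout << (rowMajor ? "sequential: " : "strided:    ")
                      << ms << " ms (sum " << sum << ")\n";
        };

        run(true);
        run(false);
    }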
In terms of CPU cores, I think it is more a question of how the hardware floating point operations are handled. If each core can use its own floating point unit without locking up anything else, that will give you a good boost; the renderer will spend 90% of its time doing floating point operations.
And turn off all the junk you will have in the startup tab (Windows): Adobe background tasks, printer background tasks, and so on. Lenovo, Acer, P&B put lots of junk in there that runs in the background for no use at all and eats valuable CPU time.
And a hard disk with a good cache is not a bad idea either; it improves performance when you start to page to disk.
I got myself an i7-4770k for Christmas ;o) 32GB of RAM and an Nvidia GeForce GTX 760, and I am pleased with it so far. The most important part for me was the RAM, since I could not add any more to the old machine, but the OpenGL is much faster now too, and I got two monitors - I don't think I can ever use DS or Carrara on a single monitor again.
It depends on the code and how it is written. I'll spare you the details of that though -- but it is rather fascinating.
Semi-related; Pixologic is only just now making ZBrush a 64-bit application. I was stunned to learn it was only 32-bit...I haven't seen 32-bit hardware in a decade.
Since I bought new hardware myself recently, I noticed that the industry is using the generic term "threads" (Hyperthreading was trademarked by Intel, so AMD couldn't use the word for the same technology). Technically a thread is an internal concept in programming; it is what an OS uses to control code execution across all the running programs.
While these multicore CPUs do indeed allow multiple threads to run simultaneously, it still comes down to how the code is written. Bad design will run slowly no matter what your hardware.
They often have huge problems porting old C/C++ code to 64 bit because it was written back when no one thought it would ever be 64 bit one day, so it uses lots of hardcoded constants on pointers and assumes that integers and pointers are the same size, that integers are 32 bits, and all kinds of stuff. It can take ages to port bad old code to 64 bits, so it's not easy.
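A hypothetical example of the kind of pre-64-bit habit being described - stuffing a pointer into a 32-bit integer, which "works" right up until pointers become 8 bytes wide:

    // Hypothetical legacy-style code: assumes an int can hold a pointer.
    #include <cstdint>
    #include <cstdio>

    void legacyStyle(void* p) {
        // On a 32-bit build this round-trips fine; on a 64-bit build the top
        // half of the address is silently truncated away.
        unsigned int handle = (unsigned int)(std::uintptr_t)p;
        void* back = (void*)(std::uintptr_t)handle;
        std::printf("legacy:   %p -> %p\n", p, back);
    }

    void portableStyle(void* p) {
        // The fix: uintptr_t is guaranteed wide enough for a pointer on any platform.
        std::uintptr_t handle = reinterpret_cast<std::uintptr_t>(p);
        void* back = reinterpret_cast<void*>(handle);
        std::printf("portable: %p -> %p\n", p, back);
    }

    int main() {
        int x = 0;
        legacyStyle(&x);
        portableStyle(&x);
    }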
Yes, but I suspect that ZBrush is not that old.
For a good time, deal with SQL Server memory pressure on a 32-bit system. Those were the days! Admittedly, Microsoft has deep pockets, but they transitioned SQL Server and Exchange to 64-bit-only very quickly.