OT: First AMD Threadripper Review In..
Ghosty12
So the first reviews and testing of AMD's Threadripper CPUs are in, and looking at the productivity benchmarks, especially for rendering with Corona, Blender and POV-Ray, AMD did really well and in some cases stomped all over Intel.. When it came to gaming it was a different matter, but it still did relatively well..
Comments
Definitely going to be my next workstation backbone. 60 PCIe lanes, even with the forthcoming 8-core 1900X. It's looking better and better for a quad-GPU rig.
If they continue bringing the power requirements down for the CPU, GPU, and peripherals, then in 2-5 years' time a 4-GPU, 64-core-CPU, 256GB-RAM machine with a 4TB NVMe SSD is looking really good, and at a 'consumer gaming PC price' too.
...well once I can get the routine figured out for installing W7 Pro on a Ryzen based CPU system (have the tutorial), then I will look into this further.
180 watts, I was hoping these would be at the same level as the first generation.
First generation meaning Ryzen? These are still technically the first generation, just the first generation of Threadripper.
Anxious to see these pan out... I didn't get all the info I wanted in my building a beastly Daz machine thread...
https://www.daz3d.com/forums/discussion/179126/lots-of-exciting-hardware-coming-very-soon-building-a-beastly-daz-studio-machine
More specifically, I notice that with a bunch of Genesis 3 figures, my system takes a lot longer to load the scene. Some new scenes in particular have 8 figures. I dropped the texture size down to 1024 for the secondary figures, and SubD down too, but it still takes a long time to load. Geometry-wise, I don't think the figures add that much overall; there are lots of props too.
The morphs?? I have more characters and a lot more morph dial expressions (yay sales!). To get to the point, will a beastly multi-core system help with loading those scenes? (IF that's why it's slowing down: many figures with many morphs.)
Probably not, unless the DAZ programmers specifically intervene, or the compiler / machine-code optimization is smart enough that a separate thread can be forked for each character loaded into the scene, handling just that character and the objects parented to it. It will still be faster either way.
I have the same problem. Past 40 installed characters, loading characters and scenes is ridiculously slow. It's all single threaded. So a powerful machine with many cores won't help.
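For what it's worth, here is a minimal sketch of the kind of per-character parallel loading being described above. It is purely illustrative: load_character() and the figure names are invented, and DAZ Studio does not actually expose a loading API like this.

```python
# Illustrative sketch only: load_character() and the figure names are invented;
# DAZ Studio does not expose a per-character loading API like this.
from concurrent.futures import ThreadPoolExecutor

def load_character(name):
    # Stand-in for the real work: applying morphs, fitting clothing, parenting props.
    return f"{name} loaded"

figures = ["Figure_1", "Figure_2", "Figure_3", "Figure_4"]

# One worker per character, so loads can overlap instead of running strictly
# one after another (which is what the single-threaded behaviour above amounts to).
with ThreadPoolExecutor(max_workers=len(figures)) as pool:
    for result in pool.map(load_character, figures):
        print(result)
```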
As for Threadripper, those 64 PCIe lanes are looking really tempting. I'm hearing some boards support x16/x16/x8/x8 with 4 video cards, all Gen 3, which is insane. Some of those lanes are shared with extra M.2 slots, unfortunately, but some aren't.
Maybe, maybe not. I just saw a review with a GTX 1080 pitting the Threadripper 1950X against the Intel 6700K, and the 6700K beat it in some games at 1080p. I dunno, but it could be the faster single-core performance, and the Ryzen architecture is still fairly new, so there isn't much optimization for it yet.
From the Daz page on performance:
CPU: When you load content a fair amount of CPU calculations are made to apply morphs, fit clothing, smooth and deform. It is one of the factors involved in loading time (along with storage speed). The CPU is also 100% of your render performance (render time) when using non GPU accelerated renderers (like 3Delight in DAZ Studio). So if using DAZ Studio 4.7 or older your CPU determines how fast your renders happen as well as has an influence on how fast things load and deform.
Huh? Compared to what?
Because for workstation loads and 3D things like Cinebench, the benchmarks I saw show the Threadripper 1950X performing better or much better than the i9-7900X. Blender is about the same, but the AMD chip crushes the equally priced Intel chip. It is still weaker at gaming. With all CPUs sporting multiple cores nowadays, we need games to be better optimized for multi-core chips.
* I bring up games, as they typically favor single core performance and high clock rates.
This review had benchmarks for a lot of apps - not on top for all, but does better for many. Gaming appears to be the biggest weak spot.
https://arstechnica.com/gadgets/2017/08/amd-threadripper-review-1950x-1920x/
Single-thread performance for Intel is faster by about 12%-15% on average, but AMD opened up all those PCIe lanes for throughput and added many more cores, so it consistently outdoes the 10-core i9 'whatever' by about 15%-30%. That's Intel's 20 threads against AMD's 32, so AMD has about 60% more threads and more data throughput to offset its single-thread shortcomings.
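To put rough numbers on that, here is a back-of-the-envelope estimate only; the per-thread speed ratio and the perfect-scaling assumption are illustrative, not measured, and imperfect scaling is why real benchmarks land closer to 15%-30%.

```python
# Idealized throughput comparison; real scaling is never perfect, which is why
# measured results land around 15-30% rather than this upper bound.
intel_per_thread = 1.13   # assume Intel ~13% faster per thread (illustrative)
amd_per_thread = 1.00
intel_threads = 20        # i9-7900X: 10 cores / 20 threads
amd_threads = 32          # 1950X: 16 cores / 32 threads

intel_total = intel_per_thread * intel_threads   # 22.6
amd_total = amd_per_thread * amd_threads         # 32.0
print(f"AMD advantage (ideal scaling): {amd_total / intel_total - 1:.0%}")  # ~42%
```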
At the present moment, Threadripper is the undisputed fastest desktop CPU on the planet. It's on top of pretty much every productivity benchmark. The only one I saw where it was behind was 7-Zip compression. So saying that Threadripper is at the bottom is a completely false statement, considering that in most tasks on HEDT platforms it's not just a little faster, it's ridiculously faster. The Passmark CPU chart also has Threadripper in the #1 spot as I write this.
https://www.cpubenchmark.net/high_end_cpus.html
And if you want to go dual-socket server CPUs, EPYC has 32 cores / 64 threads per CPU, so you get 64 cores and 128 threads. There's also a single-socket version. You get 128 PCIe lanes on both single- and dual-socket platforms, as well as 8-channel memory if you're concerned about memory speed.
Another advantage of Threadripper is that you can also use ECC memory whereas on Intel, this is only available on server platforms.
For sure, AMD is in a good position to give more than serious competition to Nvidia and Intel now in both home & business. I can't see them losing this cost and functionality advantage anytime in the next few years. It looks like they will improve on and surpass Intel & Nvidia now.
AMD still has a ways to go against Nvidia. The new Vega 64 competes against the 1080 (so they don't have anything to compete against the 1080 Ti), and I'm hearing that Volta (the next-gen Nvidia GPU) is coming out earlier than anticipated. Navi (the next-gen AMD GPU) is at least a year out, as they have to wait for GloFo to perfect their 7nm process.
Having said that, AMD's video cards will be sold out for the foreseeable future because of cryptocurrencies like Ethereum. Also, AMD added new instructions on Vega to boost mining speeds. And if you have a freesync monitor, Vega is the only option for high end GPU.
...however, Epyc requires Linux, which most of the 3D software we use does not support, so that is pretty much moot.
Ryzen does not support OS versions prior to W10 unless you jump through a tonne of hoops. Apologies, but I'm not tying my rope to a bandwagon of something that is still fairly flawed.
For most 2D/3D software, Threadripper didn't post very inspiring numbers compared to Intel CPUs, including current-generation ones.
Again, great for business data management so it will still sell.
Trust me, I'm no Intel fan (particularly as Kaby Lake and newer-generation CPUs also dumped older versions of Windows). One of my first workstations back in the 90s was AMD-powered and served me very well.
...yeah, but will still be useless for CUDA based render engines like Iray and Octane.
Apologies for being the curmudgeon here, but unless we get real cross-platform compatibility, it won't matter if AMD comes out with a 64 GB GPU card when pretty much the main OpenCL choices are the buggy LuxRender or AMD's own ProRender (still a WIP).
The other interesting thing with Intel is that their new Coffee Lake CPUs will go back to socket 1151, but rumor has it that while Coffee Lake uses the 1151 socket, it will need the new 300-series chipsets to work and won't be backwards compatible with the 200-series chipsets..
But in the end, all I know is that when I am able to build a new system it is going to be a tough choice; whether I go AMD or Intel, it is all going to be interesting..
For a lot of "general" computing tasks, yes; not brilliant. But when it comes specifically to CPU 3D rendering:
http://www.tomshardware.com/reviews/amd-ryzen-threadripper-1950x-cpu,5167-11.html
...the new AMD is a bargain top performer.
Quick test loading a scene with 8 Genesis 8 characters. I too have a lot of characters... oof, just counted maybe 70? Some are almost dupes, like Valoria and Valoria Zombie. Anyhoo, a bunch. And a bunch of morphs (I bought a bunch of morph expressions lately). It seems to be using all 4 cores of a 4-core CPU. The big question, though, is how many cores Daz will use. 4 to 16 is a big jump.
For example, if Daz Studio will only use 8 cores, then for $1000 the i9-7900X would probably offer better performance. If it will use more, then the Threadripper 1950X will probably offer better performance.
Attached pic is about 2 seconds after scene finished loading.
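One way to reason about that 4-to-16-core question is Amdahl's law: if part of the scene load is single-threaded (as it seems to be), extra cores only speed up the parallel portion. A toy calculation, with the parallel fraction guessed purely for illustration:

```python
# Amdahl's-law estimate; the 0.5 parallel fraction is a guess for illustration,
# not a measurement of Daz Studio's actual scene-loading behaviour.
def speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (4, 8, 10, 16):
    print(f"{cores:>2} cores: {speedup(0.5, cores):.2f}x")
# With only half the work parallelizable, going from 4 cores (1.60x) to
# 16 cores (1.88x) buys far less than the raw core count suggests.
```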
It does matter, because although Iray is built in, there are other options; and things will improve and continue to do so.
Intel, Nvidia, and Autodesk are not the alpha & omega of all that is best; there is a whole other world out there that can do what they do, and increasingly do it better than they can.
My only trouble now is waiting for an entry-level consumer PC that is Ryzen-based and priced like the entry-level Intel consumer PCs from Asus & Acer (about $450 for the latest 4-core Intel i5 with 16GB RAM, WiFi, Bluetooth, & a 2 TB HDD - that's a good price). They'll come, but I must have a bit of patience. Also, the need for a discrete graphics card adds to the base PC price.
..agh, missed that page (I was reading the article on my phone at the time, which is very difficult when it comes to viewing charts).
OK, makes me wonder what marks for Carrara and Iray CPU mode would be like.
...the only other non-CUDA option for Studio and Carrara is Lux. I uninstalled both Reality (4.x) and Lux a while back after dealing with way too much instability. Also, unlike Iray, if the render process in Lux exceeds VRAM, it simply crashes. Since Iray was introduced two years ago it has been very stable in comparison, and hence I was able to become reasonably proficient with it. In CPU mode it is still faster than UE in 3DL, as well as having a more straightforward setup.
Frustration with buggy software tends to make me reluctant to want to work with it. For example, were it not for all the instabilities in Hexagon, I would most likely be much more proficient with modelling.
To start all over from square one again (particularly given the remaining years I have left) is not an option. If it means finding a way to afford the higher costs for Nvidia, I'll just have to deal with it. At this stage I need to draw the line and go with what best suits my workflow.