OT: Laptop Render Benchmark Results

Comments

  • ebergerlyebergerly Posts: 3,255
    edited November 2017

    As far as cost of laptops equivalent to desktops...

    The $2,000 laptop I mentioned (i7, 16GB RAM, 512GB M.2, GTX1080) is fairly equivalent in price to a desktop with the same specs at Newegg, although the CPU in the laptop is 2.8GHz and the ones I saw at Newegg are faster (4GHz?). The price for the desktop was $1,700, not including monitor. So if you add the monitor it's not really much different. 

    So if you're like me and don't need a high powered CPU (and let's be honest, there aren't many apps now that rely heavily on CPU...yeah there are some, but that's changing), a $2,000 laptop with those specs isn't much different from a desktop.  

    Post edited by ebergerly on
  • frank0314frank0314 Posts: 13,919

    Don't skimp too much on CPU, because DS will use some of it in Iray renders unless you shut it off.

  • ebergerlyebergerly Posts: 3,255
    frank0314 said:

    Don't skimp too much on CPU, because DS will use some of it in Iray renders unless you shut it off.

    Yeah I always shut it off. From what I've seen, CPU is pretty much useless compared to GPU for most stuff. Especially Iray. 

  • frank0314frank0314 Posts: 13,919
    edited November 2017

    I use a combo and it gives me fast render speeds, but as I said, I have a higher-end computer, since I make content for the store for a living.

    Post edited by frank0314 on
  • ebergerlyebergerly Posts: 3,255

    I have a Ryzen 7 1700 with 16 threads, along with a GTX 1080 Ti and a 1070, and even if I add all those threads it only improves render time by a few percent at best. And it locks up my computer so I can't do anything. I hate CPUs. :)

  • pdr0pdr0 Posts: 204
    ebergerly said:

    As far as cost of laptops equivalent to desktops...

    The $2,000 laptop I mentioned (i7, 16GB RAM, 512GB M.2, GTX1080) is fairly equivalent in price to a desktop with the same specs at Newegg, although the CPU in the laptop is 2.8GHz and the ones I saw at Newegg are faster (4GHz?). The price for the desktop was $1,700, not including monitor. So if you add the monitor it's not really much different. 

    So if you're like me and don't need a high powered CPU (and let's be honest, there aren't many apps now that rely heavily on CPU...yeah there are some, but that's changing), a $2,000 laptop with those specs isn't much different from a desktop.  


    For the CPU, 4GHz vs. 2.8GHz is significant. And it's not just the speed difference: laptop CPU SKUs are also slower because they have less cache than their desktop "brothers".

    Be careful: is the GTX 1080 the desktop variant, the laptop variant, or the Max-Q variant? It's about 10% slower with each step down.

  • ebergerlyebergerly Posts: 3,255
    edited November 2017
    pdr0 said:

    For the CPU, 4GHz vs. 2.8GHz is significant. And it's not just the speed difference: laptop CPU SKUs are also slower because they have less cache than their desktop "brothers".

    While that may be true, in my case it's pretty much irrelevant. I made a list of the apps I use regularly that use the CPU vs. my two GPUs. And almost nothing cares about the CPU: Blender, Substance Painter, DAZ Studio, some engineering software I use, Nuke....

    I think the only things that care about CPU are VWD (the author already has a working version for GPU, probably released early next year), DaVinci Resolve video editing, and, well, I think that's it. So 2.8GHz vs. 4GHz, and 8 cores vs. 2 cores, is mostly irrelevant. And getting more so.

    Post edited by ebergerly on
  • frank0314frank0314 Posts: 13,919

    Yeah, most apps depend heavily on RAM and GPU.

  • pdr0pdr0 Posts: 204

    You always pay more for equivalent hardware in a laptop form factor. You'd have to downgrade the desktop CPU and GPU to make it slower for an apples-to-apples comparison, but then you'd save even more money.

    With the money you save on an equivalently down-configured desktop you could get a better GPU, like a 1080 Ti with 11GB instead of 8GB. DS Iray has problems with large scenes.

    If you know how to build it yourself, you can save the labour markup too. It's more difficult to get white-box OEM laptop shells.

  • nonesuch00nonesuch00 Posts: 18,032

    If you skimp too much on CPU, you're trading away a lot of potential performance, since Windows 10 & DirectX 12 are much more efficient at graphics and thread-management parallelization. You saw earlier, when I rendered basically the same scene twice on the same version of Windows 10, that the i5-3230 took over twice as long to render the scene as the i7-3630. CPU speed matters in thread management, and DAZ Studio itself doesn't run on the GPU; it runs on the CPU.

  • JamesJABJamesJAB Posts: 1,760
    edited November 2017
    pdr0 said:
    ebergerly said:

    As far as cost of laptops equivalent to desktops...

    The $2,000 laptop I mentioned (i7, 16GB RAM, 512GB M.2, GTX1080) is fairly equivalent in price to a desktop with the same specs at Newegg, although the CPU in the laptop is 2.8GHz and the ones I saw at Newegg are faster (4GHz?). The price for the desktop was $1,700, not including monitor. So if you add the monitor it's not really much different. 

    So if you're like me and don't need a high powered CPU (and let's be honest, there aren't many apps now that rely heavily on CPU...yeah there are some, but that's changing), a $2,000 laptop with those specs isn't much different from a desktop.  


    For the CPU, 4GHz vs. 2.8GHz is significant. And it's not just the speed difference: laptop CPU SKUs are also slower because they have less cache than their desktop "brothers".

    Be careful: is the GTX 1080 the desktop variant, the laptop variant, or the Max-Q variant? It's about 10% slower with each step down.

    The only thing you need to watch out for is that Max-Q designator. The normal GTX 1080 for notebooks is the same as the desktop version but with a slightly lower base clock speed and 30W lower power usage (1556MHz/150W vs. 1607MHz/180W). The Max-Q version drops the base clock down to 1101-1290MHz and 90-110W, depending on the cooling setup in the notebook.
    Generally the Max-Q version is used for thin-and-light or smaller-screen gaming notebooks (Alienware puts the 1080 Max-Q in their 15" machine), and the card will usually be integrated onto the mainboard.

    I have found notebookcheck.com to have a very good set of tests in their reviews, including external temperatures (top and bottom), so you know if the keyboard will burn your fingers under full load...
    When you search on Google for any given notebook GPU, their page for that chip is usually at or near the top of the list. When you scroll down past the specs and game FPS ranges, there is a list of laptops they have reviewed with that GPU. Their reviews also hit one of my big "decision points" for notebook purchases: how easily accessible the heatsinks/fans are for removal and cleaning. They always show images that include what is directly under the service/user-upgrade panel.

     

    The external GPU dock can be a great option. As was stated above, even if you have a fairly high-end gaming notebook, the addition of an external GTX 1080 Ti will drop rendering times and keep all of that extra heat production away from the rest of the computer. Another advantage could be to use only the eGPU for rendering, freeing up the rest of the computer for other tasks (working on your next project, editing photos, watching movies, or browsing Daz forums). This setup is very Iray friendly, because once the render job is sent over to the GPU, the rest of the computer is mostly hands-off.

    Post edited by JamesJAB on
  • pdr0pdr0 Posts: 204
    JamesJAB said:

    The only thing you need to watch out for is that Max-Q designator. The normal GTX 1080 for notebooks is the same as the desktop version but with a slightly lower base clock speed and 30W lower power usage (1556MHz/150W vs. 1607MHz/180W). The Max-Q version drops the base clock down to 1101-1290MHz and 90-110W, depending on the cooling setup in the notebook.
     

    The "slightly lower clocks" result in a 10% difference (OK, it's 9%) if you look at the 3DMark benchmarks:

    https://www.notebookcheck.net/NVIDIA-GeForce-GTX-1080-Laptop.171212.0.html

    +34% 1080Ti Desktop

    +9% 1080 Desktop

    0%  1080 Laptop

    -17% 1080 Laptop MaxQ

    But is that applicable to 3D GPU rendering (Iray or otherwise)? Maybe, maybe not. But I bet the trend is at least similar.
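    For a rough sense of what those percentages could mean in practice, here's a quick sketch that converts them into estimated render times, assuming render time scales inversely with the 3DMark score (which, as noted, may or may not hold for Iray):

```python
# Relative 3DMark scores, normalized to the GTX 1080 Laptop (the 0% baseline)
scores = {
    "1080 Ti Desktop":   1.34,   # +34%
    "1080 Desktop":      1.09,   # +9%
    "1080 Laptop":       1.00,   # baseline
    "1080 Laptop Max-Q": 0.83,   # -17%
}

# Hypothetical 10-minute render on the laptop 1080; estimate the others,
# assuming time scales as 1/score (an assumption, not a measurement)
baseline_minutes = 10.0
for name, score in scores.items():
    print(f"{name}: ~{baseline_minutes / score:.1f} min")
```

    By that (rough) model the Max-Q would take about 12 minutes for the laptop 1080's 10, and the desktop 1080 Ti about 7.5.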

     

  • JamesJABJamesJAB Posts: 1,760

    If you skimp too much on CPU, you're trading away a lot of potential performance, since Windows 10 & DirectX 12 are much more efficient at graphics and thread-management parallelization. You saw earlier, when I rendered basically the same scene twice on the same version of Windows 10, that the i5-3230 took over twice as long to render the scene as the i7-3630. CPU speed matters in thread management, and DAZ Studio itself doesn't run on the GPU; it runs on the CPU.

    Iray is very good when it comes to scaling. I did a pretty extensive test using my dual-Xeon workstation (12 cores / 24 threads). When I started removing CPU cores from the render job, the time per iteration per CPU thread stayed very consistent at about 7.2 seconds for each thread to complete an iteration. As part of my experiment I decided to turn off all of the secondary Hyper-Threading cores and came up with a much better 5 seconds per thread to complete an iteration. ***CPU running full boost clock speed for the entire duration***

    This result might not seem very relevant until you consider the way notebook thermal restrictions and boost clocks are set up.
    Disabling all of your Hyper-Threading cores should allow your main cores to hold a higher clock speed while using less power (and generating less heat). On a notebook the effect should be much more pronounced, as the boost states are much higher compared to the base clock speed (vs. desktop CPUs).

    This can be achieved in two ways.
    The easiest is: load Daz Studio, open the Task Manager (Ctrl+Shift+Esc), click "More details", go to the "Details" tab, right-click "DAZStudio.exe", select "Set affinity", and uncheck all of the odd-numbered CPU cores. You will need to do this every time you open Daz Studio.
    The other way is in the system BIOS; it varies from OEM to OEM and may not even be an included option.
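    If you'd rather not click through Task Manager every launch, the same affinity change can be scripted. A minimal sketch that computes the affinity bitmask for the even-numbered logical processors and prints a PowerShell one-liner to apply it (the process name `DAZStudio` and the even/odd pairing of physical cores and Hyper-Threads are assumptions; check yours in Task Manager first):

```python
def affinity_mask(n_logical):
    """Bitmask selecting only the even-numbered logical processors (0, 2, 4, ...)."""
    return sum(1 << i for i in range(n_logical) if i % 2 == 0)

# 8 logical processors -> keep cores 0, 2, 4, 6 -> mask 0x55
mask = affinity_mask(8)
print(f"(Get-Process DAZStudio).ProcessorAffinity = 0x{mask:X}")
```

    Run the printed line in PowerShell while Daz Studio is open; it sets the same checkboxes as Task Manager's "Set affinity" dialog.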

  • nonesuch00nonesuch00 Posts: 18,032
    JamesJAB said:

    If you skimp too much on CPU, you're trading away a lot of potential performance, since Windows 10 & DirectX 12 are much more efficient at graphics and thread-management parallelization. You saw earlier, when I rendered basically the same scene twice on the same version of Windows 10, that the i5-3230 took over twice as long to render the scene as the i7-3630. CPU speed matters in thread management, and DAZ Studio itself doesn't run on the GPU; it runs on the CPU.

    Iray is very good when it comes to scaling. I did a pretty extensive test using my dual-Xeon workstation (12 cores / 24 threads). When I started removing CPU cores from the render job, the time per iteration per CPU thread stayed very consistent at about 7.2 seconds for each thread to complete an iteration. As part of my experiment I decided to turn off all of the secondary Hyper-Threading cores and came up with a much better 5 seconds per thread to complete an iteration. ***CPU running full boost clock speed for the entire duration***

    This result might not seem very relevant until you consider the way notebook thermal restrictions and boost clocks are set up.
    Disabling all of your Hyper-Threading cores should allow your main cores to hold a higher clock speed while using less power (and generating less heat). On a notebook the effect should be much more pronounced, as the boost states are much higher compared to the base clock speed (vs. desktop CPUs).

    This can be achieved in two ways.
    The easiest is: load Daz Studio, open the Task Manager (Ctrl+Shift+Esc), click "More details", go to the "Details" tab, right-click "DAZStudio.exe", select "Set affinity", and uncheck all of the odd-numbered CPU cores. You will need to do this every time you open Daz Studio.
    The other way is in the system BIOS; it varies from OEM to OEM and may not even be an included option.

    Yes, I agree that for something so CPU-intense, doubling up with a second thread on each core is just bookkeeping wheel-spinning.

  • JamesJABJamesJAB Posts: 1,760

    Testing on my Dell Precision M6700 Mobile Workstation.
    Core i7-3840QM
    4 cores / 8 threads
    2.8GHz base - 3.8GHz max turbo
    45W TDP
    105°C max temp
    As far as I can tell, the highest clock you can get on this CPU is 3.75GHz while only using 1 of the 4 cores. (3.59GHz seems to be the fastest it will boost with all 4 enabled.)

    Using everyone's favorite Iray benchmark scene (with one small change for time and ease of calculations): max samples set to 1000 iterations.

    Iray rendering, all cores enabled: CPU starts at 3.59GHz turbo and heats up to 95°C, then slows down to 3.31GHz and holds a stable temperature of 85°C for the duration of the render.
    13:39 to 1000 iterations: stable temp of 85°C with a core clock of 3.31GHz

    Iray rendering, Hyper-Threading cores disabled: CPU starts at 3.59GHz turbo and heats up to 90°C, then slows down to 3.50GHz and holds a stable temperature of 87°C for the duration of the render.
    17:13 to 1000 iterations: stable temp of 87°C with a core clock of 3.50GHz

    On this CPU, here's a bit of info:
    8 threads enabled: it takes 1 thread 6.5 seconds to render an iteration.
    4 primary threads enabled: it takes 1 thread 4.1 seconds to render an iteration.
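    One thing worth noticing in those numbers: disabling Hyper-Threading makes each thread faster, but total throughput still drops, because half the threads are gone. A quick check that the per-thread times above reproduce the measured wall-clock times:

```python
def iterations_per_second(threads, sec_per_iter_per_thread):
    # Each thread finishes one iteration every N seconds,
    # so total throughput is threads / N
    return threads / sec_per_iter_per_thread

ht_on = iterations_per_second(8, 6.5)    # all 8 threads
ht_off = iterations_per_second(4, 4.1)   # Hyper-Threading disabled

# Minutes to reach 1000 iterations
print(f"HT on:  {1000 / ht_on / 60:.1f} min")   # measured above: 13:39
print(f"HT off: {1000 / ht_off / 60:.1f} min")  # measured above: 17:13
```

    The model predicts about 13.5 and 17.1 minutes, which lines up closely with the measured 13:39 and 17:13.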

    Some fun facts to keep in mind:
    On newer mobile CPUs, the gap between base clock and turbo states is quite a bit larger.
    Most mobile Core i3 CPUs are either a single core with Hyper-Threading or a dual core without.
    Most mobile Core i5 CPUs are either a dual core with Hyper-Threading or a quad core without.
    Mobile Core i7 CPUs, on all but the newest generations, were dual core with Hyper-Threading unless there was a Q in the letters at the end of the model number. That Q designated it as a true quad core with Hyper-Threading.
    Most modern CPUs within the last few generations can have their TDP set by the computer manufacturer (usually 15W or 30W) based on what kind of laptop they're being installed into. This has a direct effect on how aggressively they turbo boost and what clock speeds they use under normal operation.
    Most thin-and-light notebooks will have a CPU with a maximum TDP of 15W or lower.

     

  • ebergerlyebergerly Posts: 3,255
    edited November 2017

    JamesJAB, have you considered trading in your mobile workstation for a laptop like the one I referenced with an internal GTX 1080? Or maybe one with a Thunderbolt interface, so you can hook up an external GPU box with a 1080 Ti? That would give you the mobility, but when you're at home you could split your workload between the two, or maybe share rendering across two 1080 Ti's (the existing one in your desktop and another in the laptop).

    I'm actually starting to consider getting a new laptop, and if I do it right I can get some great use out of it at home. I'm thinking, since I have 3 monitors on my main desktop, it would be easy to have a window open where I remote into the laptop on the same network. I can set up rendering on the laptop while I'm doing other stuff (video editing, Blender, whatever) on my main desktop. And when you have a separate machine doing the rendering, it's not that big a deal if it's a tad slower, since you're busy doing other stuff. Or I could just get a lower-end laptop with Thunderbolt and hook up an external 1080 Ti, and the desktop and laptop would perform about the same.

    And since Christmas is coming... :)

    Post edited by ebergerly on
  • JamesJABJamesJAB Posts: 1,760
    edited November 2017
    ebergerly said:

    JamesJAB, have you considered trading in your mobile workstation for a laptop like the one I referenced with an internal GTX 1080? Or maybe one with a Thunderbolt interface, so you can hook up an external GPU box with a 1080 Ti? That would give you the mobility, but when you're at home you could split your workload between the two, or maybe share rendering across two 1080 Ti's.

    I'm actually starting to consider getting a new laptop, and if I do it right I can get some great use out of it at home. I'm thinking, since I have 3 monitors on my main desktop, it would be easy to have a window open where I remote into the laptop on the same network. I can set up rendering on the laptop while I'm doing other stuff (video editing, Blender, whatever) on my main desktop. And when you have a separate machine doing the rendering, it's not that big a deal if it's a tad slower, since you're busy doing other stuff. Or I could just get a lower-end laptop with Thunderbolt and hook up an external 1080 Ti, and the desktop and laptop would perform about the same.

    And since Christmas is coming... :)

    I'm not all about having a 150W space heater (GPU) to lug around, or spending $2,000+ on a notebook.
    Personally I like the Precision M6x00 design. It's just the right balance between size, weight, price, and performance (and it fits perfectly in my laptop bag).
    I just ordered a GPU upgrade for my M6700.
    Primarily it gets used as a daily driver/gaming notebook when I'm not home, and a gaming machine when my wife and I play together.

    After some research and finding one at a reasonable price, I've ordered a GTX 980M 8GB. The only issue it will have is one that I don't care about: Nvidia Optimus will not work. This just means I'm stuck keeping the Nvidia GPU active all the time, the same way I have my current Quadro K5000M set.

    Post edited by JamesJAB on
  • ebergerlyebergerly Posts: 3,255
    edited November 2017

    By the way, for those who want to do remote access but not buy Windows 10 Pro....

    There's an awesome free app called TeamViewer. Install it on both machines, a window pops up with a code and password, enter those on the other machine, and BAM: a remote-access window opens and you're on the remote computer over your home network.

    Like I said, I have three monitors on my main desktop, and I can open the TeamViewer window on one monitor when I need to do something on the other desktop in the other room. If I want to transfer a file, I just drag and drop it into the remote window. And if I'm installing software or an upgrade on my main desktop, I can do the same on the remote machine at the same time, so my two desktops stay in sync. When I'm done, I just minimize the TeamViewer window and that's it.

    And since the other desktop has 2 monitors, I can even open up both in separate windows on my main desktop with just a quick menu selection.

    Awesome. 

    Post edited by ebergerly on
  • ebergerlyebergerly Posts: 3,255
    JamesJAB said:

    I'm not all about having a 150W space heater (GPU) to lug around, or spending $2000+ on a notebook.
     

    Come on, a light bulb can be around 100-150 watts; it's not that much. :)

    Heck, when you render with your 1080 Ti you're generating maybe 300-400 watts from the GPU alone.

    And I'm surprised people think that $2,000 is a lot for a powerful computer like that.  

  • JamesJABJamesJAB Posts: 1,760

    $2000+ is a lot when you need to negotiate large purchases with your spouse.  ($2000+ will quickly turn into $4000+ because you end up needing to buy two of them...)

  • nonesuch00nonesuch00 Posts: 18,032

    I just ran a render for an hour with only one thread on each core checked on my i7-3630QM, and unlike JamesJAB's results earlier, my render times more than doubled (part of that, though, is that a more complex scene is being rendered). So I went back to having 7 of 8 threads checked for use by DAZ Studio.

  • JamesJABJamesJAB Posts: 1,760

    I just ran a render for an hour with only one thread on each core checked on my i7-3630QM, and unlike JamesJAB's results earlier, my render times more than doubled (part of that, though, is that a more complex scene is being rendered). So I went back to having 7 of 8 threads checked for use by DAZ Studio.

    There should be some logical reason for that happening.
    Did you keep an eye on the Task Manager to see what clock speed your CPU was running at during both renders?
    What is the RAM configuration in your notebook (how many sticks, and what sizes/speeds)?

    Both of our CPUs are from the same generation and have the same turbo boost profile.
    You should be seeing similar clock-speed patterns on your CPU (see my post above and subtract 0.4GHz).

  • nonesuch00nonesuch00 Posts: 18,032
    edited November 2017
    JamesJAB said:

    I just ran a render for an hour with only one thread on each core checked on my i7-3630QM, and unlike JamesJAB's results earlier, my render times more than doubled (part of that, though, is that a more complex scene is being rendered). So I went back to having 7 of 8 threads checked for use by DAZ Studio.

    There should be some logical reason for that happening.
    Did you keep an eye on the Task Manager to see what clock speed your CPU was running at during both renders?
    What is the RAM configuration in your notebook (how many sticks, and what sizes/speeds)?

    Both of our CPUs are from the same generation and have the same turbo boost profile.
    You should be seeing similar clock-speed patterns on your CPU (see my post above and subtract 0.4GHz).

    No, but doing DAZ Iray renders or not, my CPU stays pretty much locked at 3.1GHz; the percentage load and wattage change instead under heavy load like a DAZ Iray render. Maybe that's a bug in Core Temp. The Intel Processor Identification Utility also shows 3.1GHz, but the Windows 10 system information claims it is an i7-3630QM @ 2.40GHz CPU. Not sure why it doesn't list all 4 cores. It is correct that the base speed of the CPU is supposed to be 2.40GHz, although I've not seen it run at that in any of the utilities that measure it.

    That said, adding back in the 3 core threads I took out only increases the render speed by about 30%-33%, not more than 50%, so a very large part of the extra render time is a new scene that is deceptively complex, despite looking no more complex than other scenes.

    16GB, PC-2300 I think, or maybe PC-1600; I'm not sure. Whichever was cheaper, likely.

    OK, I did a quick compare in the Task Manager:

    with threads 1, 3, 5, 7: 3.01-3.03GHz, 65%-72%

    with threads 0, 2, 4, 6: 3.01-3.28GHz, 65%-82% (there was a write to disk & render screen)

    with threads 0-7: 2.87-2.90GHz, 100% steady

    so 100% steady CPU use compared to ~66% CPU use matches the roughly 30%-33% increase I calculated using the increased number of Iray iterations reported in the DAZ Studio 'Rendering Image' dialog.
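    That back-of-the-envelope check can be written out. One rough model (an assumption, ignoring the small clock-speed difference between the runs): if iterations delivered scale with aggregate CPU utilization, the 4 extra Hyper-Threads take total utilization from roughly 66% to 100%, and the extra work as a share of the full load comes out close to the observed 30%-33%:

```python
util_4_threads = 0.66   # average CPU utilization with 4 threads (from Task Manager)
util_8_threads = 1.00   # with all 8 threads

# Extra iterations delivered by the Hyper-Threads, as a share of the full load
extra = (util_8_threads - util_4_threads) / util_8_threads
print(f"~{extra:.0%} more iterations with all 8 threads")
```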

    Too bad Task Manager doesn't gauge voltages and temperatures and allow a breakout by individual core & virtual thread core (what are they called? Hyper-Threads?).

    I also think Intel generation-three CPUs are designed to behave differently than your newer Intel CPU generations at generation 6, or the AMD CPUs too.

    Post edited by nonesuch00 on
  • JamesJABJamesJAB Posts: 1,760
    JamesJAB said:

    I just ran a render for an hour with only one thread on each core checked on my i7-3630QM, and unlike JamesJAB's results earlier, my render times more than doubled (part of that, though, is that a more complex scene is being rendered). So I went back to having 7 of 8 threads checked for use by DAZ Studio.

    There should be some logical reason for that happening.
    Did you keep an eye on the Task Manager to see what clock speed your CPU was running at during both renders?
    What is the RAM configuration in your notebook (how many sticks, and what sizes/speeds)?

    Both of our CPUs are from the same generation and have the same turbo boost profile.
    You should be seeing similar clock-speed patterns on your CPU (see my post above and subtract 0.4GHz).

    No, but doing DAZ Iray renders or not, my CPU stays pretty much locked at 3.1GHz; the percentage load and wattage change instead under heavy load like a DAZ Iray render. Maybe that's a bug in Core Temp. The Intel Processor Identification Utility also shows 3.1GHz, but the Windows 10 system information claims it is an i7-3630QM @ 2.40GHz CPU. Not sure why it doesn't list all 4 cores. It is correct that the base speed of the CPU is supposed to be 2.40GHz, although I've not seen it run at that in any of the utilities that measure it.

    That said, adding back in the 3 core threads I took out only increases the render speed by about 30%-33%, not more than 50%, so a very large part of the extra render time is a new scene that is deceptively complex, despite looking no more complex than other scenes.

    16GB, PC-2300 I think, or maybe PC-1600; I'm not sure. Whichever was cheaper, likely.

    OK, I did a quick compare in the Task Manager:

    with threads 1, 3, 5, 7: 3.01-3.03GHz, 65%-72%

    with threads 0, 2, 4, 6: 3.01-3.28GHz, 65%-82% (there was a write to disk & render screen)

    with threads 0-7: 2.87-2.90GHz, 100% steady

    so 100% steady CPU use compared to ~66% CPU use matches the roughly 30%-33% increase I calculated using the increased number of Iray iterations reported in the DAZ Studio 'Rendering Image' dialog.

    Too bad Task Manager doesn't gauge voltages and temperatures and allow a breakout by individual core & virtual thread core (what are they called? Hyper-Threads?).

    I also think Intel generation-three CPUs are designed to behave differently than your newer Intel CPU generations at generation 6, or the AMD CPUs too.

    Your CPU and my CPU are from the same generation; heck, they both hit the OEM market the same month.
    In your Task Manager: if you right-click on the CPU usage graph, you can select "Change graph to" -> "Logical processors". This will give a graph for each thread.

  • nonesuch00nonesuch00 Posts: 18,032
    JamesJAB said:
    JamesJAB said:

    I just ran a render for an hour with only one thread on each core checked on my i7-3630QM, and unlike JamesJAB's results earlier, my render times more than doubled (part of that, though, is that a more complex scene is being rendered). So I went back to having 7 of 8 threads checked for use by DAZ Studio.

    There should be some logical reason for that happening.
    Did you keep an eye on the Task Manager to see what clock speed your CPU was running at during both renders?
    What is the RAM configuration in your notebook (how many sticks, and what sizes/speeds)?

    Both of our CPUs are from the same generation and have the same turbo boost profile.
    You should be seeing similar clock-speed patterns on your CPU (see my post above and subtract 0.4GHz).

    No, but doing DAZ Iray renders or not, my CPU stays pretty much locked at 3.1GHz; the percentage load and wattage change instead under heavy load like a DAZ Iray render. Maybe that's a bug in Core Temp. The Intel Processor Identification Utility also shows 3.1GHz, but the Windows 10 system information claims it is an i7-3630QM @ 2.40GHz CPU. Not sure why it doesn't list all 4 cores. It is correct that the base speed of the CPU is supposed to be 2.40GHz, although I've not seen it run at that in any of the utilities that measure it.

    That said, adding back in the 3 core threads I took out only increases the render speed by about 30%-33%, not more than 50%, so a very large part of the extra render time is a new scene that is deceptively complex, despite looking no more complex than other scenes.

    16GB, PC-2300 I think, or maybe PC-1600; I'm not sure. Whichever was cheaper, likely.

    OK, I did a quick compare in the Task Manager:

    with threads 1, 3, 5, 7: 3.01-3.03GHz, 65%-72%

    with threads 0, 2, 4, 6: 3.01-3.28GHz, 65%-82% (there was a write to disk & render screen)

    with threads 0-7: 2.87-2.90GHz, 100% steady

    so 100% steady CPU use compared to ~66% CPU use matches the roughly 30%-33% increase I calculated using the increased number of Iray iterations reported in the DAZ Studio 'Rendering Image' dialog.

    Too bad Task Manager doesn't gauge voltages and temperatures and allow a breakout by individual core & virtual thread core (what are they called? Hyper-Threads?).

    I also think Intel generation-three CPUs are designed to behave differently than your newer Intel CPU generations at generation 6, or the AMD CPUs too.

    Your CPU and my CPU are from the same generation; heck, they both hit the OEM market the same month.
    In your Task Manager: if you right-click on the CPU usage graph, you can select "Change graph to" -> "Logical processors". This will give a graph for each thread.

    Thanks, much more convenient than Core Temp and more generally useful.
