GTX 1070 Ti Official!

Kevin Sanderson Posts: 1,643
edited October 2017 in The Commons

For the budget-wary -- 2,432 CUDA cores and 8GB of GDDR5 memory for $449

More info: http://www.tomshardware.com/news/nvidia-geforce-gtx-1070-ti,35773.html


Comments

  • ebergerly Posts: 3,255

    Has anyone figured out the purpose of this one? Sounds like the GPUs that didn't quite make it as a GTX 1080 get tossed into the 1070 Ti bin. But heck, the difference between a 1070 and a 1080 was already pretty slim, and now this comes in just under the 1080? I don't get it.

  • kyoto kid Posts: 40,937

    ...I think it's primarily the increased core count.  Was hoping it might have been an increase in memory (at least 10GB, or faster GDDR5X).

    Hopefully it will cause the standard 1070 to come down more in price, as the EVGA model is $469 at Newegg.

  • ebergerly said:

    Has anyone figured out the purpose of this one?

    It's for mining Ethereum.

  • JamesJAB Posts: 1,760

    No...
    The GTX 1070 Ti is positioned to take on AMD's new GPUs.
    It's plugging the performance/price hole that AMD is trying to take advantage of.

  • agent unawares Posts: 3,513
    edited October 2017
    JamesJAB said:

    No...
    The GTX 1070 Ti is positioned to take on AMD's new GPUs.
    It's plugging the performance/price hole that AMD is trying to take advantage of.

    1080s specifically never saw the price increases of all the "lesser" models because they use GDDR5X, which has poor performance mining Ethereum. The 1070 Ti is basically a 1080 with GDDR5, which is correct for mining Ethereum. This IS the hole in the market. ;)

  • kyoto kid Posts: 40,937

    ...well it looks as if some prices for standard 1070s are dropping. I'm able to find them at Newegg for as little as $404.  Still borderline for me, as my average finished scene size is around 7 GB.

  • JamesJAB Posts: 1,760
    edited October 2017
    agent unawares said:
    JamesJAB said:

    No...
    The GTX 1070 Ti is positioned to take on AMD's new GPUs.
    It's plugging the performance/price hole that AMD is trying to take advantage of.

    1080s specifically never saw the price increases of all the "lesser" models because they use GDDR5X, which has poor performance mining Ethereum. The 1070 Ti is basically a 1080 with GDDR5, which is correct for mining Ethereum. This IS the hole in the market. ;)

    From my understanding of the whole mining craze...
    It is all about power usage vs. hashes per second.  If the 1070 Ti's power usage is low enough, it will become the new "best" mining card and the price will skyrocket.  If the power usage is close to (or the same as) the GTX 1080's, then the card will be a gaming/rendering card and prices will hold stable.

     

    Just checked Nvidia's website. Power usage is listed as 180W, the same as the GTX 1080.  Hopefully that will keep it out of the mining craze.
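
    To put rough numbers on that power-vs-hashrate tradeoff, here's a quick back-of-the-envelope in Python. The hashrate and wattage figures are made-up placeholders for illustration, not benchmarks:

        # Mining "efficiency" is hashes per second per watt of power drawn.
        # All figures below are illustrative assumptions, not measured values.
        cards = {
            "GTX 1070":    {"mh_per_s": 26.0, "watts": 150},
            "GTX 1070 Ti": {"mh_per_s": 26.0, "watts": 180},  # assumed close to the 1070
            "GTX 1080":    {"mh_per_s": 20.0, "watts": 180},  # GDDR5X hurts Ethash
        }
        for name, c in cards.items():
            eff = c["mh_per_s"] / c["watts"]
            print(f"{name}: {eff:.3f} MH/s per watt")

    The higher that last number, the more attractive the card is to miners.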

  • ColinFrench Posts: 646
    edited October 2017
    kyoto kid said:

    ...well it looks as if some prices for standard 1070s are dropping.

    And the 1070 Ti seems to be pulling down prices for some 1080s. I guess the difference in performance is expected to be so small (slightly faster memory) that some manufacturers are worried they'll be stuck with stock of the more expensive 1080.

    Sadly, 1080 Ti pricing is still in the stratosphere. Would really like that 11GB of memory.

  • agent unawares Posts: 3,513
    edited October 2017
    JamesJAB said:
    agent unawares said:
    JamesJAB said:

    No...
    The GTX 1070 Ti is positioned to take on AMD's new GPUs.
    It's plugging the performance/price hole that AMD is trying to take advantage of.

    1080s specifically never saw the price increases of all the "lesser" models because they use GDDR5X, which has poor performance mining Ethereum. The 1070 Ti is basically a 1080 with GDDR5, which is correct for mining Ethereum. This IS the hole in the market. ;)

    From my understanding of the whole mining craze...
    It is all about power usage vs. hashes per second.  If the 1070 Ti's power usage is low enough, it will become the new "best" mining card and the price will skyrocket.  If the power usage is close to (or the same as) the GTX 1080's, then the card will be a gaming/rendering card and prices will hold stable.

     

    Just checked Nvidia's website. Power usage is listed as 180W, the same as the GTX 1080.  Hopefully that will keep it out of the mining craze.

    GDDR5X specifically has a problem with Ethereum mining. A GTX 1080 is way, way worse than a 1070, without even taking relative power into consideration. https://devtalk.nvidia.com/default/topic/962406/gtx-1080-very-bad-result-for-mining/

    Miners bought 1080s at first expecting to use them, and when it was found that GDDR5X was crap for the application, the 1080s escaped the craze. The 1070s use GDDR5, which works fine. So if you want a Ti version, grab it while it's at NVIDIA prices. The 1070 Ti is almost exactly "a 1080, but works for Ethereum mining," which is why I'm willing to bet this was the actual target market. It's incredibly weird to release a __70 Ti.

  • JamesJAB Posts: 1,760

    It probably would have been better for Nvidia to just lower the price on the GTX 1080, or release a 4GB GTX 1080 SE version for gamers.

  • ArtisanS Posts: 209

    Well, memory is nice, but money in my pocket is nice as well. I use a 6GB 980 Ti... and I must say my Iray renders sometimes swamp my card. But there are some tricks that can be performed to lighten the memory load and, more significantly, the render time...

    1) If you work on a scene for a long time (and change the materials often in the process), it's a good idea to save the scene and restart Daz Studio before rendering. Sometimes maps get "stuck" in memory, and that can lead to a scene rendering on the CPU that would run on the GPU after a save and restart.

    2) Background objects should be background objects... so a Vicky or Micky standing in the background should have their memory load reduced. Daz Studio does a lot, but offering 2K and 1K texture maps for background use isn't one of them. A cool script that is worth every penny is V3Digitimes' Scene Optimizer, which can selectively reduce the load a character or prop carries... until it fits snugly in memory. And since a character that takes up about 20,000 pixels in the final render does not need a 4K or 8K map, this works without harmful effects on the final render. A rough sketch of the idea follows below.
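
    To be clear about what a script like that is doing under the hood, here is a minimal sketch in Python with Pillow. The folder path and the 1K target size are placeholder assumptions; this is not the actual Scene Optimizer code:

        # Downscale every texture map in a folder so a background figure
        # carries 1K maps instead of 4K ones. Path and size are placeholders.
        from pathlib import Path
        from PIL import Image

        src = Path("textures/background_character")  # hypothetical folder
        for tex in src.glob("*.jpg"):
            img = Image.open(tex)
            if max(img.size) > 1024:
                img.thumbnail((1024, 1024))  # shrinks in place, keeps aspect ratio
                img.save(tex.with_name(tex.stem + "_1k" + tex.suffix))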

    Greets, ArtisanS

  • kyoto kid Posts: 40,937
    ColinFrench said:
    kyoto kid said:

    ...well it looks as if some prices for standard 1070s are dropping.

    And the 1070 Ti seems to be pulling down prices for some 1080s. I guess the difference in performance is expected to be so small (slightly faster memory) that some manufacturers are worried they'll be stuck with stock of the more expensive 1080.

    Sadly, 1080 Ti pricing is still in the stratosphere. Would really like that 11GB of memory.

    ...I'd prefer 16 GB of fast HBM2 such as on the Vega Frontier GPU, but then, AMD is useless for Iray and, as I understand it, the card will turn your system into a spare room heater for the winter while rendering.

    To get that type of horsepower from Nvidia you will have to dig deep, like $6,500 - $7,500 deep (Quadro GP100).

  • JamesJAB Posts: 1,760
    kyoto kid said:
    ColinFrench said:
    kyoto kid said:

    ...well it looks as if some prices for standard 1070s are dropping.

    And the 1070 Ti seems to be pulling down prices for some 1080s. I guess the difference in performance is expected to be so small (slightly faster memory) that some manufacturers are worried they'll be stuck with stock of the more expensive 1080.

    Sadly, 1080 Ti pricing is still in the stratosphere. Would really like that 11GB of memory.

    ...I'd prefer 16 GB of fast HBM2 such as on the Vega Frontier GPU, but then, AMD is useless for Iray and, as I understand it, the card will turn your system into a spare room heater for the winter while rendering.

    To get that type of horsepower from Nvidia you will have to dig deep, like $6,500 - $7,500 deep (Quadro GP100).

    The "Faster HBM2" on the Vega cards is not as fast as you might think it is...

    • RX Vega 56 - 8GB @ 410 GB/s (HBM2)
    • RX Vega 64 - 8GB @ 483 GB/s (HBM2)
    • GTX 1070 - 8GB @ 256 GB/s (GDDR5)
    • GTX 1070 Ti - 8GB @ 256 GB/s (GDDR5)
    • GTX 1080 - 8GB @ 320 GB/s (GDDR5X)
    • GTX 1080 Ti - 11GB @ 484 GB/s (GDDR5X)
    • TITAN Xp - 12GB @ 547 GB/s (GDDR5X)
    • Radeon Vega Frontier Edition - 16GB @ 483 GB/s (HBM2)
    • Quadro P6000 - 24GB @ 432 GB/s (GDDR5X)
    • Quadro GP100 - 16GB @ 720 GB/s (HBM2)
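
    For reference, all of those numbers fall out of the same formula: bandwidth = effective data rate per pin (Gbps) × bus width (bits) ÷ 8. A quick sanity check in Python (the clock and bus-width figures are the published specs as I understand them):

        # Memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8
        def bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
            return gbps_per_pin * bus_width_bits / 8

        print(bandwidth_gb_s(8.0, 256))     # GTX 1070, GDDR5:     256.0 GB/s
        print(bandwidth_gb_s(11.0, 352))    # GTX 1080 Ti, GDDR5X: 484.0 GB/s
        print(bandwidth_gb_s(1.89, 2048))   # RX Vega 64, HBM2:   ~483.8 GB/s

    HBM2 gets its bandwidth from a very wide bus at low clocks, which is also why it needs less power per GB/s than GDDR5.
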
  • kyoto kid Posts: 40,937

    ...I should have said more "efficient," in that HBM memory offers wider bandwidth to the GPU chip than conventional GDDR memory does. It also requires less power, which translates to less heat.

    True, as I mentioned, I wouldn't purchase a Vega, as AMD GPUs have always experienced heat/noise issues and it would be useless for Iray.

    Unfortunately, I do not have $6,500 (or more) for a GP100 either, and it will probably be some time before we see Nvidia bring HBM2 to consumer cards.

  • ebergerly Posts: 3,255
    edited October 2017

    I guess I'm not understanding the interest in fast (HBM) RAM in GPUs, or anywhere else.

    I mean, if you step back and look at the whole rendering process, the things that take time are:

    1. Bringing all the parts of the scene together (textures, OBJs, etc.) from different places on a relatively slow hard drive into system RAM (assuming most of us use SATA hard drives to store our content)
    2. Rendering, which depends on the GPU cores and speed

    For me, the dreadfully slow part is #1, because my "fast" SATA hard drive is still relatively slow. If you google "hard drive transfer rates", the first thing that pops up is this chart comparing an SSD with a SATA hard drive. And it says the MAXIMUM read speed on a Western Digital Black drive is 122 MEGAbytes per second (not GIGAbytes, like the VRAM transfer rates above), and because it's slow, it takes 34 seconds to transfer a 1.2 GB file. Heck, even if your content is on an SSD, you're still talking 456 MEGAbytes per second, not gigabytes. From my understanding, THAT is the bottleneck for most users doing renders. I've never understood why there's so much talk of RAM speed, either on the CPU or GPU.

    [Attached image: Hard Drive Transfer.PNG]
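
    The arithmetic behind that is just time = size ÷ sustained rate. A quick illustration in Python (the ~35 MB/s sustained figure is inferred from the chart's 34-second number; the others are the rates mentioned above):

        # Transfer time = file size / sustained transfer rate.
        size_mb = 1200  # a 1.2 GB file
        rates = [("SATA HDD (sustained)", 35), ("SATA SSD", 456), ("GDDR5 VRAM", 256_000)]
        for label, mb_per_s in rates:
            print(f"{label}: {size_mb / mb_per_s:.1f} seconds")
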
  • JamesJAB Posts: 1,760
    ebergerly said:

    I guess I'm not understanding the interest in fast (HBM) RAM in GPUs, or anywhere else.

    I mean, if you step back and look at the whole rendering process, the things that take time are:

    1. Bringing all the parts of the scene together (textures, OBJs, etc.) from different places on a relatively slow hard drive into system RAM (assuming most of us use SATA hard drives to store our content)
    2. Rendering, which depends on the GPU cores and speed

    For me, the dreadfully slow part is #1, because my "fast" SATA hard drive is still relatively slow. If you google "hard drive transfer rates", the first thing that pops up is this chart comparing an SSD with a SATA hard drive. And it says the MAXIMUM read speed on a Western Digital Black drive is 122 MEGAbytes per second (not GIGAbytes, like the VRAM transfer rates above), and because it's slow, it takes 34 seconds to transfer a 1.2 GB file. Heck, even if your content is on an SSD, you're still talking 456 MEGAbytes per second, not gigabytes. From my understanding, THAT is the bottleneck for most users doing renders. I've never understood why there's so much talk of RAM speed, either on the CPU or GPU.

    The biggest bottlenecks in getting your render to start drawing iterations are your PCIe bus and your CPU.  The scene you have in your preview window needs to be packaged up in an Iray-compatible format and sent from your system RAM, through the CPU, down the PCIe bus, and into your GPU before the render can start.  None of this involves your hard drive unless your swap file comes into play because of limited system RAM.

    Next time you're ready to start an Iray render, open Task Manager (Windows 10), click "More details", select the "Performance" tab, and watch the disk usage for the drive(s) where Daz Studio and your Windows swap file are located; then click Render in Daz Studio.  You will probably notice zero extra activity aside from the canvas temporary-file writes on your Windows drive.  A scripted version of the same check is sketched below.
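
    If you'd rather log it than eyeball it, here's a minimal Python sketch using psutil (assuming you have psutil installed; note it samples system-wide disk counters, not per-process):

        # Print disk throughput once a second -- a rough stand-in for
        # watching Task Manager's disk graph while a render starts.
        import time
        import psutil

        prev = psutil.disk_io_counters()
        while True:
            time.sleep(1)
            cur = psutil.disk_io_counters()
            read_mb = (cur.read_bytes - prev.read_bytes) / 1e6
            write_mb = (cur.write_bytes - prev.write_bytes) / 1e6
            print(f"read {read_mb:.1f} MB/s | write {write_mb:.1f} MB/s")
            prev = cur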

    @kyoto kid: I'm not sure how dependent Iray is on VRAM speed while rendering.  Actually... now that I think of it, there have been a few times with my old GTX 1060 where the GPU was handling a complex scene and the core temperature sat pretty close to idle (I'm guessing the "slow" 192-bit, 216 GB/s memory was bottlenecking the render).

  • ebergerly Posts: 3,255
    JamesJAB said:

    Next time you're ready to start an Iray render, open Task Manager (Windows 10), click "More details", select the "Performance" tab, and watch the disk usage for the drive(s) where Daz Studio and your Windows swap file are located; then click Render in Daz Studio.  You will probably notice zero extra activity aside from the canvas temporary-file writes on your Windows drive.

     

    I see a lot of disk activity, with transfer rates around 10-20 MB/sec, which is why I assumed that the various components of the scene (image textures, OBJs, etc.) need to be pulled off the Runtime where your content is stored. Otherwise, how would it be brought into memory? And it's not because I'm running out of RAM, since I have 64GB.

    Now, if you were already working on the scene and all that stuff was still in RAM, I suppose it wouldn't need to pull it from the drive. But how else can it pull everything from the Runtime if it doesn't access the drive where it's all stored?
