16GB RTX 4060ti Releasing in July

Comments

  • kyoto kid Posts: 41,058
    edited July 2023

    ...I went with the 3060 not just because I got a good deal from EVGA, but because it should have been a rather big step up in performance from my Maxwell Titan X (which is somewhat hamstrung by the RTX emulation for older cards). Unfortunately, as I've mentioned, the ancient BIOS of my MB wasn't compatible with the 3060, even though the card was supposedly backwards compatible with PCIe 2.0 slots.

    Of course, Windows 11 WDDM will take pretty much the same amount of VRAM from the 3060 (1 GB), so in that department there's no gain.

    W7 Pro WDDM is around 135 MB with no other programmes or processes running.

    Post edited by kyoto kid on
  • Kitsumo Posts: 1,216

    Things are starting to look good for hobbyists: https://nl.hardware.info/nieuws/85816/nvidias-16-gb-4060-ti-vier-dagen-na-release-al-e40-onder-msrp-in-duitsland

    from Google Translate:

    Nvidia's 16 GB 4060 Ti already €40 under MSRP in Germany four days after release

    Mark Klaver, 22 July 2023, 20:42

    RTX 4060 Ti video cards with 16 GB of memory are already being sold well below the suggested retail price in Germany. For MSI's Ventus 3X 16G version of the card, you will pay €519 at Mindfactory; €40 less than Nvidia's €559 MSRP.

    16 GB RTX 4060 Tis have been on sale since last Tuesday, but supply still seems limited. It is quite striking that the German webshop already has one on sale. Still, it seems to be winning over few customers, with only "more than 10" orders so far.


    MSI's RTX 4060 Ti Ventus 3X 16G.

    The previous two RTX 4060 cards also seemed to sell poorly, although that has not yet led to significant price reductions in the Netherlands. There you now pay €539 for the MSI Ventus 3X 16G; €20 below the suggested retail price, with Gigabyte's Gaming OC, Aero OC and MSI's Gaming Slim cards as more expensive alternatives.

    Source: Mindfactory (German)

    Man, this thing would be tempting at $450 US or less. I'd prefer $400, but I don't see it happening anytime soon.


  • daveso Posts: 7,024
    edited July 2023

    They're $499 and up at Newegg. I'm thinking of building a system instead of buying one outright. For $2k I can use this 4060 Ti 16GB and be pretty happy. I have a 2070 Super 8GB right now.

    Post edited by daveso on
  • outrider42 Posts: 3,679

    kyoto kid said:

    ...I went with the 3060 not just because I got a good deal from EVGA, but because it should have been a rather big step up in performance from my Maxwell Titan X (which is somewhat hamstrung by the RTX emulation for older cards). Unfortunately, as I've mentioned, the ancient BIOS of my MB wasn't compatible with the 3060, even though the card was supposedly backwards compatible with PCIe 2.0 slots.

    Of course, Windows 11 WDDM will take pretty much the same amount of VRAM from the 3060 (1 GB), so in that department there's no gain.

    W7 Pro WDDM is around 135 MB with no other programmes or processes running.

    All PCIe GPUs can run in older PCIe slots; they will just be capped at the older generation's bus speed, which, as has been pointed out, is not an issue for Iray. However, the motherboard BIOS may need updating, which is purely on the company that built the motherboard. They may have an updated BIOS that adds support for new cards, and it is ultimately on them to provide that update. This is not on Nvidia or the maker of the GPU.
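
    If you want to confirm what link a card has actually negotiated, the driver itself can report it. Here is a minimal sketch, assuming Python plus the nvidia-smi tool that ships with the Nvidia driver:

        # Ask the driver which PCIe generation the card is running at right now
        # versus the maximum it supports. Assumes nvidia-smi is on the PATH.
        import subprocess

        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=name,pcie.link.gen.current,pcie.link.gen.max,pcie.link.width.current",
             "--format=csv"],
            capture_output=True, text=True, check=True)
        print(out.stdout)
        # A reading like "NVIDIA GeForce RTX 3060, 2, 4, 16" would mean the card
        # negotiated down to PCIe 2.0 x16 - expected behavior in an older slot.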

    I have a friend who had a similar problem. I had sold him my old 1080ti for a nice discount so he could upgrade his 970, but he couldn't get it to work on his old motherboard. He was screwed because he was completely unable to access the BIOS on that board for some reason (apparently an issue with that particular mobo), so he could not update it. He sat on the 1080ti for a while until finally building a new PC. And he is so happy he did; he is kicking himself for not doing it sooner, lol. He is more of a gamer, though.

    That said, I think you would get a huge performance boost from a new PC. If your motherboard is that old, then your CPU must be quite old too, so you could be looking at possibly 10 generations' worth of CPU upgrades. That level of power absolutely will make Daz Studio smoother to run. It is hard to build a 12GB scene when DS itself starts chugging from the stuff you have in it.

    However, there is one last thing to consider before building new... your monitor. If your monitor is as vintage as your PC, it might lack modern ports for a new GPU, which is another issue my friend had. He couldn't get a newer GPU because his monitor has DVI and no HDMI or DisplayPort. Luckily the 1080ti still has DVI, but the 3060 does not, so if your monitor lacks the ports, you have yet another problem. It kind of sucks how fast tech can move sometimes. But at least monitors are cheap these days, and you can also get DVI adapters.

    Back to the 4060ti: I had a feeling it might get some discounts. But it is hard to say how often this will happen because we don't know what the stock situation is. If they really did not make many, then stock may never build up to the point where they have to put it on sale. At the same time, if it just sits on shelves, they may need to discount it to move it.

    There are arguments for and against the 4060ti. For gaming it is pretty much universally panned. But for something like Daz Iray, there are arguments to be made for it. This card will be a little faster than a 2080ti while packing 5GB more VRAM, and using half the electricity on top of that. That is actually pretty impressive when you think about it; the power draw shows true progress. The 3060 was pretty neat for Daz, yet even it uses more power than the 4060ti. If you just went by my last few sentences, you'd think this card is amazing, and it kind of is... until you see the price. Like much of the 4000 series, the price is holding it back. However, even at this price, it is still the cheapest 16GB GPU that Nvidia has ever made, which is kind of sad, but that is a fact.

  • Kitsumo Posts: 1,216
    edited July 2023

    Gamers are unhappy with Nvidia's pricing because they just want to buy a graphics card, but Nvidia transitioned away from making "graphics cards" years ago. Now they're a compute company - they make compute cards that can also do graphics. The GeForce division has a tough job: they have to offer good enough features/prices to stay ahead of AMD and Intel, but bad enough features/prices not to cannibalize the data center/workstation division.

    Universities/research centers that want tons of VRAM and VRAM pooling have to pay for a professional card - no more cheating and buying a 1080ti or Titan. Also, as more home users get into AI image generation, that market has become segmented too: you can run your own equipment at home and be limited to 24GB, or pay a subscription to a company using workstation cards and get 48GB, 96GB or higher.

    As long as AMD/Intel aren't anywhere close to catching up, Nvidia isn't going to bring back NVLink anytime soon. It's the same reason AMD has barely released any Threadrippers this past generation: the competition doesn't have anything to compete with it, and it would cut into data center sales. People take this stuff personally, but it's just business.

    I think pricing is bad all around because both Nvidia and AMD know that this is a lost generation. The market is flooded with last-gen cards and used cards from the crypto crash, plus a lot of people have upgraded since the pandemic, so nothing they introduce is going to look like a good deal anyway. They may as well shoot for the moon with pricing; that way they can at least keep their average selling price high and get a thumbs-up from investors. I'm not saying I like it, but that's the world we live in.

    Post edited by Kitsumo on
  • oddbob Posts: 396

    Kitsumo said:

    Gamers are unhappy with Nvidia's pricing because they just want to buy a graphics card, but Nvidia transitioned away from making "graphics cards" years ago. Now they're a compute company - they make compute cards that can also do graphics. The GeForce division has a tough job: they have to offer good enough features/prices to stay ahead of AMD and Intel, but bad enough features/prices not to cannibalize the data center/workstation division.

    I strongly suspect that the push for raytracing in games that started with the RTX series cards was a way to sell cards that were primarily designed for other workloads to gamers.

  • Kitsumo Posts: 1,216

    Yep, they're always going to have something new: PhysX, DLSS, raytracing, RTX Voice, RTX Video Super Resolution, etc. The most popular card on Steam is still the 1060 or some variant of it, so the majority of gamers can get by with 60-class graphics. And increasingly, people are getting into indie games, which can run on integrated graphics, to be honest.

    I think Nvidia saw this coming and started making the switch to compute back in '07. If they'd stuck to making graphics-only cards, there's a fair chance they wouldn't even be in business now. Out of the old club, only a few remain - Cirrus, PowerVR, and Matrox - and they all just do embedded/industrial stuff. So I think for Nvidia, it was a matter of creating a new market just to be able to survive. If they were still a graphics card company, they'd be getting hit from all sides today: consoles, iGPUs, handhelds, mobile, etc.

    I'll admit I'm not the typical gamer, but unless DLSS and RTX get added to Railroad Tycoon 3, they aren't that important to me. I have tried a few raytracing titles that work on Linux (Spider-Man, Quake, Quake II, Portal) and they were... nice, but not a must-have feature. I can't get DLSS to work with anything. AMD FSR works great, but on a 1080p monitor, there's not much use for it.

    I am actually looking forward to RTX video super resolution, if it ever comes to Linux. That's going to be the killer app for me. As an aging gen-X'er, I watch a lot of old content, and being able to watch a slightly improved version would be pretty neat.

  • outrider42 Posts: 3,679
    edited July 2023

    Compute has always been the goal. You can find a hint of this in Nvidia's code names: all of their architectures are named after historical scientists and mathematicians, every one from the very start. Nvidia targeted gaming first because gaming products sell. Founded in 1993, they released their first product in 1995. It would take them time to gather the resources to push into compute. After several years of research they released CUDA in 2007. CUDA is largely meaningless for video games, so it should be clear what Nvidia's goals were way back then. And the first Quadro cards were released years before CUDA even came out.

    The pro Quadro and A-series cards use the same chips as gaming products; they cross over a lot. So it should never have been a surprise when ray tracing and tensor cores came to gaming cards. It saves Nvidia money not having to design too many different products. The Quadros get the best-binned chips, gamers get the leftovers. This is an important thing to remember... gamers come after the rest. But gamers don't lose out on all the tech, they just get it filtered down to a degree.

    I can recall the arguments in this forum when I said gaming would get RT and Tensor cores. That actually irritated certain people who insisted there was no way gaming products would get something intended for science. They scoffed at the mere suggestion of such a thing. Ouch. Nvidia then had to find a way to work these cores into games, as initially games didn't really have a use for Tensor cores. Notice how poor the first version of DLSS was - and it truly was poor, it looked like somebody smeared Vaseline all over the image. That should be an indication of how DLSS was kind of an afterthought for gaming. They were just trying to find a use for these new Tensor cores.

    I can also recall Nvidia saying that RT and Tensor were intended to be used in tandem, with RT doing its ray tracing and Tensor denoising the image in real time. That was because ray tracing is still so demanding. However this use case has not actually panned out, as DLSS is being used instead. To me this is another sign of Nvidia not really having a plan for Tensor at first. DLSS is upscaling, not denoising.

    I tried RTX Video and was not impressed. Sometimes it looks OK, sometimes it looks pretty awful. I would be happy if RTX Video just handled the terrible compression we often get on low-quality video, and it shows some promise there. Also, using RTX Video made my GPU run surprisingly hard, and I do not want my GPU working like that just to watch a video. It also needs to be easier to enable and disable: you should be able to pop open a browser button and toggle it as you wish, as I certainly don't want it running on all my videos. For now you must dive deep into the Nvidia control panel to find the setting. However, I get the feeling this will be like DLSS, where the first iteration was not good. I bet RTX Video will get much better over time. But for right now, you are not missing much.

    And just to add, DLSS is quite a bit better than FSR at low resolutions like 1080p. FSR gets better as the resolution goes up, but DLSS still generally wins in most cases. My experiences with FSR have not been great, either. I would rather mod DLSS into a game that lacks it than use FSR, and that is quite easy to do for many games now. However, as I have a 3090, I don't really need DLSS much. I think the only time I used it was for Control, but I also tend to play at high refresh rates when possible. If I were sticking to 60 fps I wouldn't need it.

    As a fellow who has been around long enough to have played Atari, it is wild how far things have come. One memory I have is playing the first Metal Gear Solid on PS1. In that game you can find a bathroom, and a few scenes take place in one. What was wild to me was the mirror: it actually worked! You could see your "reflection". I don't recall ever seeing a working mirror before that, though I am sure there are earlier examples. MGS used a trick to make this happen: there was basically a second character in the mirror performing your actions in reverse. Crude, yet effective. Ever since then, I would look for my reflection in other games. I would even run to the bathroom to see if it had working mirrors - seriously, I am the person who runs to the bathroom just to see if they have a reflection... for 20 years, LOL. Pretty much every bathroom since cops out with either broken or foggy mirrors. So in a way I had been looking for real-time ray tracing in video games for a long time, and I was indeed excited to finally see it become a reality in modern games.

    Post edited by outrider42 on
  • kyoto kid Posts: 41,058
    edited July 2023

    outrider42 said:

    All PCIe GPUs can run in older PCIe slots; they will just be capped at the older generation's bus speed, which, as has been pointed out, is not an issue for Iray. However, the motherboard BIOS may need updating, which is purely on the company that built the motherboard. They may have an updated BIOS that adds support for new cards, and it is ultimately on them to provide that update. This is not on Nvidia or the maker of the GPU.

    I have a friend who had a similar problem. I had sold him my old 1080ti for a nice discount so he could upgrade his 970, but he couldn't get it to work on his old motherboard. He was screwed because he was completely unable to access the BIOS on that board for some reason (apparently an issue with that particular mobo), so he could not update it. He sat on the 1080ti for a while until finally building a new PC. And he is so happy he did; he is kicking himself for not doing it sooner, lol. He is more of a gamer, though.

    That said, I think you would get a huge performance boost from a new PC. If your motherboard is that old, then your CPU must be quite old too, so you could be looking at possibly 10 generations' worth of CPU upgrades. That level of power absolutely will make Daz Studio smoother to run. It is hard to build a 12GB scene when DS itself starts chugging from the stuff you have in it.

    However, there is one last thing to consider before building new... your monitor. If your monitor is as vintage as your PC, it might lack modern ports for a new GPU, which is another issue my friend had. He couldn't get a newer GPU because his monitor has DVI and no HDMI or DisplayPort. Luckily the 1080ti still has DVI, but the 3060 does not, so if your monitor lacks the ports, you have yet another problem. It kind of sucks how fast tech can move sometimes. But at least monitors are cheap these days, and you can also get DVI adapters.

    Back to the 4060ti: I had a feeling it might get some discounts. But it is hard to say how often this will happen because we don't know what the stock situation is. If they really did not make many, then stock may never build up to the point where they have to put it on sale. At the same time, if it just sits on shelves, they may need to discount it to move it.

    There are arguments for and against the 4060ti. For gaming it is pretty much universally panned. But for something like Daz Iray, there are arguments to be made for it. This card will be a little faster than a 2080ti while packing 5GB more VRAM, and using half the electricity on top of that. That is actually pretty impressive when you think about it; the power draw shows true progress. The 3060 was pretty neat for Daz, yet even it uses more power than the 4060ti. If you just went by my last few sentences, you'd think this card is amazing, and it kind of is... until you see the price. Like much of the 4000 series, the price is holding it back. However, even at this price, it is still the cheapest 16GB GPU that Nvidia has ever made, which is kind of sad, but that is a fact.

    ...thanks for the info.

    Yeah, the motherboard is an X58 chipset with 2 x PCIe 2.0 x16 expansion slots. I have the last BIOS update that was released by the manufacturer, over a decade ago; there have been no updates since. I'm not about to flash the BIOS with one that isn't directly from the manufacturer for that chipset, as I could kill the entire machine, and then I'd have nothing to work with for months until I scrape up what I need for the upgrade.

    The CPU is a 2.80 GHz 6-core/12-thread Xeon X5660 (Westmere) - no integrated graphics.

    System memory is 24 GB of triple-channel DDR3 1066 (the most the MB supports).

    So yeah, I'm running on vintage stuff here, save for the drives (2 SSDs and a 2 TB storage HDD), PSU, monitors, and additional fans.

    As to the monitors, they are much newer: ASUS 24" flat-screen IPS displays, 1920 x 1080 at a 75 Hz refresh rate. Both have a DisplayPort connection to the Titan X.

    Yeah, the upgrade for the 4 components + W11 Pro OEM is currently around $911. I could really use one of those stimulus cheques right now, as I am barely eking it out from month to month.

    As a fellow who has been around long enough to have played Atari, it is wild how far things have come. One memory I have is playing the first Metal Gear Solid on PS1. In that game you can find a bathroom, and a few scenes take place in one. What was wild to me was the mirror: it actually worked! You could see your "reflection". I don't recall ever seeing a working mirror before that, though I am sure there are earlier examples. MGS used a trick to make this happen: there was basically a second character in the mirror performing your actions in reverse. Crude, yet effective. Ever since then, I would look for my reflection in other games. I would even run to the bathroom to see if it had working mirrors - seriously, I am the person who runs to the bathroom just to see if they have a reflection... for 20 years, LOL. Pretty much every bathroom since cops out with either broken or foggy mirrors. So in a way I had been looking for real-time ray tracing in video games for a long time, and I was indeed excited to finally see it become a reality in modern games.

    ...I still remember playing Wizardry, Castle Wolfenstein, and Lunar Lander on a 48K Apple ][ with the dual "shoebox" single-sided floppy drives.

    Before that, it was Bob Leedom's Super Star Trek on the mainframe when I was at college in the 1970s.

    I'm sure LeatherGryphon has me beat though.

    Post edited by kyoto kid on
  • Kitsumo Posts: 1,216

    The first game I played with a reflective mirror was Quake. It wasn't enabled by default; you had to enter r_mirroralpha 1, and the only thing that was actually reflective was the stained glass window on the easy path of the opening map. I guess it wasn't feasible to include it throughout the game, but id put that one detail in just to show they could do it. You could also make the water partially transparent, but with no volumetrics it looked kind of fake. That game was hugely innovative for 1996. I think it was '99 or 2000 before I had a card that could reliably run GLQuake at 30fps.

    I laugh when the gamer-bros make comments like "OK, enjoy your 35fps". My first card was an S3 ViRGE/325 - after playing Quake at 14fps, anything over 20 feels like an IMAX movie.

    I miss those dial-up internet days of waiting for my PC Gamer Magazine to arrive (with the demo disc) every month. I still have my discs. One of these days, I'll convert them to ISOs and add them to archive.org.

  • outrider42 Posts: 3,679

    I have to say I am amused by some gamers' absolute obsession with getting the highest frame rates at all times. I can understand it to a point, but ridiculing someone for playing at 30 is nauseating. It is a preference, people. I am not going to make my game look like a potato just to hit some extra frames, either. I have a 144Hz screen and a 3090, so I can totally hit that in a lot of games; it is most visible to me in driving games. But I don't complain about not hitting that number, either. I grew up playing way worse. I didn't have PCs growing up, I had consoles.

    On the RTX VRAM question, I looked up my test notes and found that my 1080ti would use between 800MB and 1GB of extra VRAM for a given scene versus my 3060. In these tests I used the cards without any screen attached to avoid extra variables. I didn't test large scenes, as I was mainly testing for speed. It is possible the gap grows with larger scenes, but that is purely a guess without a proper test. I can say that my 3060 doesn't run out of VRAM as often as my 1080ti did, which is logical: with the RTX emulation on GTX, it is like having 2GB extra instead of just 1. While going from Windows 7 to 11 would alter the equation a little, I don't think 10 or 11 really reserve that much more than 7; going to 11 you would probably only see a few hundred MB more in use. So you would still have an overall uptick in VRAM capacity. Not much, but still more than nothing.
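
    If anyone wants to reproduce that kind of measurement, here is a minimal sketch using the nvidia-ml-py package (my choice for illustration; any NVML binding will do). Take one reading before loading the scene and one at render time, then diff them:

        # One-shot VRAM snapshot via NVML (pip install nvidia-ml-py).
        import pynvml

        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0; use 1 for a second card
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM used: {mem.used / 2**20:.0f} MiB of {mem.total / 2**20:.0f} MiB")
        pynvml.nvmlShutdown()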

  • PerttiA Posts: 10,024

    outrider42 said:

    While going from Windows 7 to 11 would alter the equation a little, I don't think 10 or 11 really reserve that much more than 7; going to 11 you would probably only see a few hundred MB more in use. So you would still have an overall uptick in VRAM capacity. Not much, but still more than nothing.

    W10 takes about 800MB more VRAM than W7; this can be seen in the DS logs, for example.

  • stefan.hums Posts: 132

    PerttiA said:

    outrider42 said:

    While going from Windows 7 to 11 would alter the equation a little, I don't think 10 or 11 really reserve that much more than 7; going to 11 you would probably only see a few hundred MB more in use. So you would still have an overall uptick in VRAM capacity. Not much, but still more than nothing.

    W10 takes about 800MB more VRAM than W7; this can be seen in the DS logs, for example.

    With respect, I'm really curious where you get this nonsense from. And what is seen in the DS logs is not the truth, either.

    Currently at my PC with Windows 10 Professional and an RTX 3060: Opera browser active, Daz Studio freshly opened with an empty scene, viewport set to Texture Shaded.

    DS log says: CUDA device 0 (NVIDIA GeForce RTX 3060): compute capability 8.6, 12.000 GiB total, 11.055 GiB available, display attached

    GPU-Z clearly says: Memory Used: 534 MB - it fluctuates between 507 and 550 MB, depending on which window is in the foreground. (Btw, I get the same VRAM usage from Aida64.)

    Now where are the other roughly 500 MB that the DS log claims to be in use? Who is right, the DS log or GPU-Z/Aida64? Even the Windows task manager says that only 0.4 of 12 GB dedicated video memory is used. The DS log is clearly wrong there.

    Also, earlier on W10 when I had the RTX 2060 Super (8 GB), DS always claimed only 7 GB were available, but according to GPU-Z or Aida64, less than 400 MB were in use (without a web browser open). This has not changed with the 3060.

    I assume that what is reported by the DS log (more exactly, by Iray) is just a static value and has not much to do with reality. But it is a fact that Windows 10 does NOT take much more VRAM than 7 did. And I trust GPU-Z and Aida64 more than the DS log; even back then the reported geometry and texture memory usage were only estimates, which is the reason why Nvidia removed them from the Iray log.
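
    Putting the two readings quoted above side by side makes the gap explicit:

        # The numbers from the post above, as plain arithmetic. The difference
        # suggests Iray subtracts a fixed reservation instead of live usage.
        total_gib = 12.000    # DS log: "12.000 GiB total"
        avail_gib = 11.055    # DS log: "11.055 GiB available"
        gpuz_used_mb = 534    # GPU-Z at the same moment

        withheld_mb = (total_gib - avail_gib) * 1024
        print(f"DS log treats {withheld_mb:.0f} MB as unavailable")   # ~968 MB
        print(f"GPU-Z sees {gpuz_used_mb} MB in use, so about "
              f"{withheld_mb - gpuz_used_mb:.0f} MB looks like a static reserve")  # ~434 MB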

  • PerttiA Posts: 10,024

    stefan.hums said:

    PerttiA said:

    outrider42 said:

    While going from Windows 7 to 11 would alter the equation a little, I don't think 10 or 11 really reserve that much more than 7; going to 11 you would probably only see a few hundred MB more in use. So you would still have an overall uptick in VRAM capacity. Not much, but still more than nothing.

    W10 takes about 800MB more VRAM than W7; this can be seen in the DS logs, for example.

    With respect, I'm really curious where you get this nonsense from. And what is seen in the DS logs is not the truth, either.

    Currently at my PC with Windows 10 Professional and an RTX 3060: Opera browser active, Daz Studio freshly opened with an empty scene, viewport set to Texture Shaded.

    DS log says: CUDA device 0 (NVIDIA GeForce RTX 3060): compute capability 8.6, 12.000 GiB total, 11.055 GiB available, display attached

    GPU-Z clearly says: Memory Used: 534 MB - it fluctuates between 507 and 550 MB, depending on which window is in the foreground. (Btw, I get the same VRAM usage from Aida64.)

    Now where are the other roughly 500 MB that the DS log claims to be in use? Who is right, the DS log or GPU-Z/Aida64? Even the Windows task manager says that only 0.4 of 12 GB dedicated video memory is used. The DS log is clearly wrong there.

    Also, earlier on W10 when I had the RTX 2060 Super (8 GB), DS always claimed only 7 GB were available, but according to GPU-Z or Aida64, less than 400 MB were in use (without a web browser open). This has not changed with the 3060.

    I assume that what is reported by the DS log (more exactly, by Iray) is just a static value and has not much to do with reality. But it is a fact that Windows 10 does NOT take much more VRAM than 7 did. And I trust GPU-Z and Aida64 more than the DS log; even back then the reported geometry and texture memory usage were only estimates, which is the reason why Nvidia removed them from the Iray log.

    Based on testing on W7 and countless discussions and comparisons here on the forums - but I do admit nobody has taken the time to check and report how much VRAM is really used and available at the different stages of the rendering process, and when the VRAM actually runs out.
    I have done it on W7, but I have not yet moved to W10, so I can't test it myself.

    Below is a test I made some time ago to see how much RAM and VRAM was used while rendering in Iray (on W7):

    Case a) just one lightweight G8 figure with lightweight clothing and hair
    Case b) four similar G8 characters with architecture
    Cases c and d) started increasing SubD on the characters to see at which point the rendering would drop to CPU

    "RAM/GB" and "VRAM/MB" taken from GPU-Z, "DS Log/GiB" taken from DS Log, no other programs were running but DS and GPU-Z
    The "DS Log/GiB" is the sum of Geometry usage, Texture usage and Working Space - After Geometry and Textures, there should still be at least a Gigabyte of VRAM available for the Working space => In my case, Geometry + Textures should not exceed 4.7GiB

    Note: Case c) was already using 38GB of RAM even though the rendering was done on the GPU; in Case d), rendering on CPU, the RAM usage almost went over my 64GB.

    Tests were made using an RTX 2070 Super (8GB), an i7-5820K, and 64GB of RAM on W7 Ultimate with DS 4.15.
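
    For anyone repeating this kind of test, here is a sketch of a logger that samples system RAM and VRAM once per second and tracks the peaks until you press Ctrl+C. It assumes the psutil and nvidia-ml-py packages are installed; the one-second interval is just illustrative.

        # Poll RAM and VRAM during a render; stop with Ctrl+C.
        import time
        import psutil
        import pynvml

        pynvml.nvmlInit()
        gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

        peak_ram = peak_vram = 0
        try:
            while True:
                ram = psutil.virtual_memory().used
                vram = pynvml.nvmlDeviceGetMemoryInfo(gpu).used
                peak_ram, peak_vram = max(peak_ram, ram), max(peak_vram, vram)
                print(f"RAM {ram / 2**30:5.1f} GB   VRAM {vram / 2**20:6.0f} MB   "
                      f"(peaks: {peak_ram / 2**30:.1f} GB / {peak_vram / 2**20:.0f} MB)")
                time.sleep(1)
        except KeyboardInterrupt:
            pynvml.nvmlShutdown()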

  • Richard Haseltine Posts: 101,002

    Note that, as far as I am aware, not all memory is available to DS (or anything else) - Windows caps the amount a single application can use.

  • kyoto kid Posts: 41,058

    ...I remember back in the 32-bit days the memory cap for any programme was 2 GB (there was a way to push that to 3 GB, but you needed 4 GB of memory [the maximum at the time] so the OS and system processes would still have memory available to them).

    Moving to 64 bit processing was supposed to eliminate those caps.

    As to the 1 GB of VRAM reserved by WDDM2, that's been corroborated on the Nvidia forums.

  • stefan.hums Posts: 132

    kyoto kid said:

    As to the 1 GB of VRAM reserved by WDDM2, that's been corroborated on the Nvidia forums.

    OK, this would explain why the DS log reports 1 GB less than the physical VRAM as available, even though only about half of that 1 GB is actually used. But I guess this 1 GB being reserved doesn't mean it can't be used at all if required.

    Btw: the Windows 10 task manager shows me on the GPU tab "Hardware reserved memory: 146 MB". Hmm... it would be interesting to know what is really reserved by the driver and Windows, and how much of the VRAM can then actually be used by applications at maximum.

    Highest VRAM usage I have seen on my PC while rendering on GPU was approximately 12150 of 12288 MB.

  • robert_poirot

    Just ordered the MSI RTX 4060 Ti 16GB; it will run together with the RTX 3060 that I have already had in use for a year. I'm excited to see what boost it will give. CPU: Ryzen 7 5800X, RAM: 32GB 4400MHz, MB: ASUS ROG B550M, PSU: MSI 850W.
  • robert_poirot said:

    Just ordered the MSI RTX 4060 Ti 16GB; it will run together with the RTX 3060 that I have already had in use for a year. I'm excited to see what boost it will give. CPU: Ryzen 7 5800X, RAM: 32GB 4400MHz, MB: ASUS ROG B550M, PSU: MSI 850W.


    Just curious, did you happen to get that set up and running yet?

    I'm planning on getting the 4060 Ti 16GB and doing the same.

