The more VRAM people have, the better as far as I’m concerned.
I mean, sure, the GPU should increase in performance as well, but game devs have already hit memory walls with current-gen games. Granted, a lot of PC games today are cross-platform and so are constrained more by console hardware than PC hardware, but techniques like texture streaming didn’t come about because devs were bored. There simply isn’t enough VRAM to go around to constantly maintain the level of visual fidelity that gamers expect, what with multiple render passes for shader effects/AA, procedurally generated in-game objects, physics-based deformable world geometry, and all the other data that has to be stored for polygonal raster graphics. Grand Theft Auto IV, shoddy PC port though it is, illustrates that if we want bigger in-game worlds without visibly artificial limits on the level of detail, we’re going to need either a vastly more efficient way to handle and render geometry or gobs and gobs of VRAM.
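To put rough numbers on the render-pass point, here’s a back-of-envelope sketch of how much VRAM screen-sized buffers alone can eat at a high resolution. The specific targets and formats are my own illustrative assumptions, not measurements from any real engine:

```python
# Back-of-envelope VRAM estimate for render targets alone.
# All target choices and formats below are illustrative assumptions,
# not figures from any actual game or engine.

def buffer_bytes(width, height, bytes_per_pixel, samples=1):
    """Memory for one screen-sized buffer, optionally multisampled."""
    return width * height * bytes_per_pixel * samples

W, H = 2560, 1600  # a common 30" display resolution of the era
MSAA = 4           # 4x multisample anti-aliasing

targets = {
    "back buffer (RGBA8, 4x MSAA)":     buffer_bytes(W, H, 4, MSAA),
    "depth/stencil (D24S8, 4x MSAA)":   buffer_bytes(W, H, 4, MSAA),
    "MSAA resolve buffer (RGBA8)":      buffer_bytes(W, H, 4),
    "HDR buffer (RGBA16F)":             buffer_bytes(W, H, 8),
    "two post-process buffers (RGBA8)": 2 * buffer_bytes(W, H, 4),
}

total = sum(targets.values())
for name, size in targets.items():
    print(f"{name}: {size / 2**20:.1f} MiB")
print(f"total: {total / 2**20:.1f} MiB")  # ~203 MiB
```

That’s over 200 MiB gone before a single texture, vertex buffer, or shadow map is loaded; on a 512MB card it’s a sizable chunk of the budget, and every extra full-screen pass or sample count bump makes it worse.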
I think in the current and future generations, VRAM is and will be just as important as GPU power. And it bothers me when people confidently claim that 1GB of VRAM is more than enough for a long time and that few if any games can take advantage of it. It’s not so much that game devs can’t write engines to use it all, but that doing so generally incites a shitstorm of whining and complaining about a poorly written game (see Crysis), and that devs target the most common hardware configurations in the potential market.