
NVIDIA GeForce GTX 680 2GB Review


SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,840
Location
Montreal
The Witcher 2 (DX9)


The Witcher 2 may be a DX9-based game, but its graphics quality is beyond reproach. For this benchmark we take an area from The Kayran mission and include one of the toughest effects the engine has in store for the GPU: rain. Rain plays a large part throughout this sequence, but explosions, combat and even some sun shafts make an appearance as well.

1920 x 1200

GTX-680-72.jpg


GTX-680-73.jpg


2560 x 1600

GTX-680-74.jpg


GTX-680-75.jpg
 

Taking Image Quality to the Next Level

In this section we revisit a number of games tested earlier in this review and push their in-game settings to the highest possible levels. All other methodologies remain the same.


Batman: Arkham City

GTX-680-36.jpg


Crysis 2

GTX-680-43.jpg


Dirt 3

GTX-680-52.jpg


Metro 2033

GTX-680-55.jpg
 

Taking Image Quality to the Next Level (pg.2)


Shogun 2: Total War

GTX-680-61.jpg


The Elder Scrolls: Skyrim

GTX-680-66.jpg


Wargame: European Escalation

GTX-680-71.jpg


The Witcher 2

GTX-680-76.jpg
 

Surround / Eyefinity Multi Monitor Performance

Both NVIDIA and AMD now have single GPU multi monitor output options for some truly immersive gaming. However, spanning a game across three or more monitors demands a serious amount of resources which makes this a perfect test for ultra high-end solutions.

While all solutions have the ability to implement bezel correction, we leave this feature disabled in order to ensure compatibility. The benchmarks run remain the same as in normal testing scenarios.



Batman: Arkham City

GTX-680-83.jpg


Battlefield 3

GTX-680-84.jpg


Crysis 2

GTX-680-85.jpg


Deus Ex: HR

GTX-680-86.jpg


Dirt 3

GTX-680-87.jpg


Our first batch of multi-monitor tests shows the HD 7970 capitalizing on its larger memory footprint. Even though the AMD card doesn't beat the GTX 680 in most cases, the two run much closer here than they did at single-monitor resolutions. Regardless, it is impressive to see two single-GPU cards returning playable framerates at such high resolutions.
 
Surround / Eyefinity Multi Monitor Performance (pg.2)




Metro 2033

GTX-680-88.jpg


Shogun 2: Total War

GTX-680-89.jpg


The Elder Scrolls: Skyrim

GTX-680-90.jpg


Wargame: European Escalation

GTX-680-91.jpg


The Witcher 2

GTX-680-92.jpg


Our last few ultra high resolution tests tell a somewhat conflicting story. On one hand, the HD 7970 is able to pull ahead in several of the most demanding games, but the GTX 680 offers better driver support. The reason we say this is the lack of Eyefinity support for The Witcher 2: the game used to run fine, but as of version 2.0's display changes it no longer works with AMD's solution. NVIDIA found a way around this, yet even now, months after the patch, AMD still hasn't added functional support.
 
New Overclocking Modes Explained

As we mentioned several dozen pages ago, the introduction of GPU Boost has led to some new challenges when trying to push clock speeds. Instead of dealing with the typical core, memory and shader clocks seen on previous NVIDIA cards, the shader domain clock has been eliminated since it now runs at a 1:1 ratio with the rest of the processing engine.

There are also several other new options that allow users to increase performance, such as control over GPU Boost and the ability to raise the power limit for some additional overhead. The old-fashioned way of overclocking has been thrown to the wind, so properly harnessing every tool at your disposal will help maximize clocks and ultimately lead to optimal performance. For this section we will be using EVGA's new Precision X tool, but expect many of these same options to make their way into MSI's Afterburner, Gigabyte's SOC Tuner and ASUS' GPU Tweak utility in some form or another.

GTX-680-123.gif

While Precision’s default interface has changed drastically, its basic high-level functionality remains the same: it still allows you to overclock, monitor and generally tweak the hell out of your graphics card. We won’t drill down into every part of Precision X, but there are three items you’ll want to familiarize yourself with: Power Target, GPU Clock Offset and Mem Clock Offset. These are the sliders that allow clock speeds to be modified.

GTX-680-112.gif

To begin with, there’s one major caveat when overclocking a GTX 680: the Base Clock can never be increased. Instead, you will be using the offsets to achieve higher GPU Boost frequencies while modifying the minimum level at which GPU Boost will kick in.

GTX-680-95.jpg

GTX-680-96.jpg

At reference speeds, Kepler will always strive to reach a certain power target by pushing clock speeds upwards via GPU Boost when the core isn’t fully utilized, provided the preset operating range is adhered to. To begin overclocking, it is always a good idea to set a higher Power Target so your clock speeds aren’t artificially constrained by the default TDP limit. Just by moving this setting upwards without modifying anything else, increased performance can be realized since GPU Boost is automatically given extra headroom in some cases (but not all). Conversely, lowering the Power Target is an option for those of you who don’t need ultra high framerates and want to conserve electricity and lower heat output.
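In effect, the Power Target slider just scales the power ceiling GPU Boost works against. A minimal sketch of that arithmetic follows; note that this section never quotes the GTX 680's board TDP, so the 195W figure below is an assumption on our part:

```python
def power_ceiling_watts(tdp_w, power_target_pct):
    """The Power Target slider scales the TDP ceiling that GPU Boost
    respects; Precision X exposes up to 132% on this card."""
    return tdp_w * power_target_pct / 100.0

# Assumed 195W board TDP at the maximum 132% Power Target:
print(power_ceiling_watts(195, 132))  # 257.4 W of headroom for GPU Boost
```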

Playing around with the GPU and Memory Offsets is where the real fun begins. Remember that the default GPU Boost clock is 1058MHz and the vast majority of applications will likely cause the GPU to run at that speed. Bumping up the Offset shifts GPU Boost’s entire range upward by the amount you set. For example, with an Offset of 125MHz, situations that saw the core running at 1058MHz will now Boost up to 1183MHz, while games that allowed for 1150MHz would now strive for 1275MHz. The Memory Offset behaves in the same way, except that it isn’t quite as constrained as the core.
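The offset arithmetic above can be sketched in a couple of lines. This is a simplification: the real boost behavior also depends on the power and thermal limits described below.

```python
def apply_gpu_offset(boost_mhz, offset_mhz):
    """The GPU Clock Offset shifts the entire GPU Boost range upward by a
    fixed amount; the Base Clock itself never moves."""
    return boost_mhz + offset_mhz

# The +125 MHz example from the text:
print(apply_gpu_offset(1058, 125))  # 1183 MHz
print(apply_gpu_offset(1150, 125))  # 1275 MHz
```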

Of course, all of these numbers are dependent upon the card operating within its TDP limits. This is also why setting a Power Target is so important since without changing it, there would be much less headroom to play with. In addition, keeping the card cool will also ensure that it can run at higher GPU Boost clock speeds without slamming head first into a power and thermal barrier.

GTX-680-125.jpg

The last item we wanted to look at is EVGA’s inclusion of a Framerate Target setting. With this enabled, the graphics card will try to attain a predetermined FPS without you having to enable VSync. Ultra high framerates increase power consumption and usually aren’t beneficial to the end user, so EVGA now allows an artificial cap to be placed upon the card. This really is a novel idea which could (in the long run at least) decrease rendering inefficiencies in a way that’s invisible to the end user.
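Conceptually, a framerate cap of this sort boils down to sleeping away whatever remains of each frame's time budget so the GPU idles instead of burning power. A minimal sketch of the idea (not EVGA's actual implementation, which lives at the driver level):

```python
import time

def render_capped(render_frame, target_fps, frames):
    """Render a fixed number of frames, sleeping after each one so the
    average framerate does not exceed target_fps."""
    budget = 1.0 / target_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        remaining = budget - (time.perf_counter() - start)
        if remaining > 0:
            time.sleep(remaining)  # the GPU sits idle instead of drawing power
```

Capping a trivial workload at 60 FPS this way means 30 frames take roughly half a second, no matter how fast the hardware is.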
 

Overclocking Results

After the previous page’s explanations, we’re ready to move on to the actual overclocking results we achieved with our reference sample. Please note that all of these results were obtained at the default voltage with EVGA Precision X, with the maximum Power Target of 132% applied to ensure the clock speed increases are used in as many situations as possible. As you can see, the results were quite impressive right across the board.

While the results may look a bit low compared to some achieved by other publications, we have tested these to be completely stable through our entire benchmarking suite.

GPU Clock Offset: +147MHz
Memory Clock Offset: +528MHz (QDR)
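Assuming the reference GTX 680 clocks (1058 MHz default GPU Boost and a 6008 MHz effective, QDR memory clock — figures not restated on this page), those offsets work out to:

```python
# Offsets found stable across our entire benchmarking suite:
gpu_offset_mhz = 147
mem_offset_mhz = 528  # quoted in QDR (effective) terms

# Assumed reference clocks: 1058 MHz default boost, 6008 MHz effective memory.
print(1058 + gpu_offset_mhz)  # typical GPU Boost target: 1205 MHz
print(6008 + mem_offset_mhz)  # effective memory clock: 6536 MHz
```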

GTX-680-97.jpg

GTX-680-98.jpg

GTX-680-99.jpg
 

Temperature & Acoustical Testing / Power Consumption

Temperature Analysis


For all temperature testing, the cards were placed on an open test bench with a single 120mm 1200RPM fan placed ~8” away from the heatsink. The ambient temperature was kept at a constant 22°C (+/- 0.5°C). If the ambient temperatures rose above 23°C at any time throughout the test, all benchmarking was stopped. For this test we use the 3DMark Batch Size test at its highest triangle count with 4xAA and 16xAF enabled and looped it for one hour to determine the peak load temperature as measured by GPU-Z.

For Idle tests, we let the system idle at the Windows 7 desktop for 15 minutes and recorded the peak temperature.


GTX-680-77.jpg

The GTX 680 is a relatively cool running card which never got above the 80 degree mark even though we were struggling to keep the heat and humidity down in our testing room (summer-like weather in spring can be a bitch without air conditioning).


Acoustical Testing


What you see below are the baseline idle dB(A) results attained for a relatively quiet open-case system (specs are in the Methodology section) sans GPU, along with the results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear level exactly 12” away from the GPU’s fan. For the load scenarios, a loop of Unigine Heaven 2.5 is used to generate a constant load on the GPU(s) over the course of 20 minutes.

GTX-680-94.jpg

It looks like NVIDIA sacrificed a bit on the temperature front in order to keep the GTX 680’s acoustical footprint to a minimum. Interestingly enough, these numbers aren’t quite as good as those posted by the near-mute GTX 580 but they are still enough to give the HD 7970 a thorough trouncing.


System Power Consumption


For this test we hooked up our power supply to a UPM power meter that will log the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we once again use the Batch Render test in 3DMark06 and let it run for 30 minutes to determine the peak power consumption while letting the card sit at a stable Windows desktop for 30 minutes to determine the peak idle power consumption. We have also included several other tests as well.

Please note that after extensive testing, we have found that simply plugging in a power meter to a wall outlet or UPS will NOT give you accurate power consumption numbers due to slight changes in the input voltage. Thus we use a Tripp-Lite 1800W line conditioner between the 120V outlet and the power meter.

GTX-680-78.jpg

Hold on a second while we allow you to pick yourself up off the floor. No, these numbers aren’t an error; we ran them half a dozen times just to ensure they had some foundation in reality. NVIDIA has taken a quantum leap forward in performance per watt and soundly beats the HD 7970 on the power consumption front. While the GTX 680 may not have AMD’s ZeroCore Power, there really isn’t anything to fault here considering how little time these cards should end up idling in gamers’ systems.
 
Conclusion

To set the stage for this conclusion, let’s take a quick trip down memory lane. Back in 2008 AMD introduced the HD 4870, a card that not only outperformed NVIDIA’s recently released GTX 260 but also came in at a significantly lower price point. That one product brought NVIDIA crashing back down to earth, and the resulting fallout marked a turning point where high-end Radeon cards became performance per watt leaders while the GeForce lineup gradually shifted towards powerful, yet inefficient designs. Well, the design mantra of yesteryear is a thing of the past and the good folks at AMD now have their own nightmare situation to deal with.

After years of large, inefficient and expensive NVIDIA GPUs, the GK104 core - and by association the GTX 680 - is not only smaller and less power hungry than Tahiti but also outperforms the best AMD can offer by a substantial amount. A price that doesn’t break the $500 mark and undercuts the HD 7970 by some fifty bucks is just the icing on the cake. In our opinion, it is a true game-changer for the GeForce lineup since it bucks past trends and allows NVIDIA to essentially offer more for less. The card couldn’t have been released at a better time either: AMD has not been able to ramp up HD 7970 production fast enough to meet demand, leaving many potential customers without cards and ready to embrace what NVIDIA is offering. In addition, the GTX 680 should force AMD’s hand into lowering the prices of its high-end lineup, making a whole generation of cards that much more affordable.

GTX-680-82.jpg

At lower image quality settings, the GTX 680 2GB doesn’t necessarily bring anything new to the table against its immediate competition. But let’s be honest here: enthusiasts don’t play games without anti-aliasing and other forms of IQ wizardry enabled. NVIDIA’s new architecture is able to stretch its legs in these instances, sometimes doubling the GTX 680’s performance lead over the HD 7970 when MSAA is turned on. Across nearly every test in our benchmark suite, the GTX 680 surpassed every other single-GPU card. That’s no small feat considering the significant gap AMD had managed to open up between its current generation cards and GF110-based designs.

As for the GTX 580, it will anchor the sub-$450 market for NVIDIA as its remaining stocks are gradually depleted, but it has been thoroughly manhandled by the Kepler architecture. A 36% generation-to-generation performance increase is a big step in the right direction, especially when you consider how many of Fermi’s basic architectural elements Kepler reuses.

Perhaps surprisingly to some, the GTX 680’s 2GB of memory really didn’t seem to hold it back in many of our tests, even when pushing maximum in-game image quality settings. There will always be fringe cases like Metro 2033 where the 1GB difference between the two cards plays out in AMD’s favor, but with details set so high, the GPU core more often than not becomes the limiting factor rather than the memory. One exception to this rule was multi-monitor testing, where the HD 7970 made up a ton of lost ground and drew even with or beat the GTX 680 in several cases.

GTX-680-126.jpg

The EVGA GTX 680 HydroCopper….Coming Soon

The real genre-defining improvements lie underneath the GTX 680’s skin, unseen to most gamers but they’re just as important as high framerates. By refining the basic design principles of Fermi and moving to the 28nm manufacturing process, NVIDIA has made performance improvements where it counts while minimizing die size (which leads to lower costs for end-users) and optimizing efficiency. While AMD is still very much alive in the graphics game, Kepler makes the Tahiti architecture look half a generation out-of-date and about nine months too late.

Beyond what can be considered the best graphics card released in the last year, there are several other technologies bundled into this launch, all of which help make the GTX 680 a cut above the competition. GPU Boost is an innovative, non-restrictive way to modulate performance based upon an application’s actual needs. Most importantly for end users who may have been worried about sample-to-sample variance, Boost doesn’t seem to be all that affected by standard temperature fluctuations, and it adds a new dimension to overclocking. Adaptive VSync needs to be experienced to be understood, since the in-game fluidity it offers will be priceless for many gamers. The advances to NVIDIA Surround will be appreciated by multi-monitor aficionados, even though we still feel AMD is a step ahead on the user interface front. Perhaps the biggest selling point is that none of the aforementioned features feels half-baked; every one of them is well developed and relatively bug free, and some, like Adaptive VSync, had a meaningful impact upon our gameplay experience.

We could prattle on and on extolling the GTX 680’s virtues but here’s what really matters: NVIDIA’s newest flagship card is superior to the HD 7970 in almost every way. Whether it is performance, power consumption, noise, features, price or launch day availability, it currently owns the road and won’t be looking over its shoulder for some time to come.


240463923aca1f6b.jpg
 