
The NVIDIA GeForce GTX 1080 Review

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
13,264
Location
Montreal
DX12 / 4K: Quantum Break / Rise of the Tomb Raider

Quantum Break


Years from now people likely won’t be asking if a GPU can play Crysis, they’ll be asking if it was up to the task of playing Quantum Break with all settings maxed out. This game was launched as a horribly broken mess but it has evolved into an amazing looking tour de force for graphics fidelity. It also happens to be a performance killer.

Though finding an area within Quantum Break to benchmark is challenging, we finally settled upon the first level where you exit the elevator and find dozens of SWAT team members frozen in time. It combines indoor and outdoor scenery along with some of the best lighting effects we’ve ever seen.






Rise of the Tomb Raider


The Hitman franchise has been around in one form or another for well over a decade and this latest version is arguably the best looking. Supporting both the DX11 and DX12 APIs, it offers a ton of graphics options, some of which are only available under DX12.

For our benchmark we avoid the built-in benchmark since it doesn’t represent actual gameplay situations. Instead, the second mission in Paris is used: we walk into the mansion, mingle with the crowds and eventually end up in the fashion show area.



 
Analyzing Temperatures & Frequencies Over Time



Modern graphics card designs make use of several advanced hardware- and software-level algorithms in an effort to hit an optimal balance between performance, acoustics, voltage, power and heat output. Traditionally this leads to maximized clock speeds within a given set of parameters. Conversely, if one of those last two metrics (heat and power consumption) steps into the equation in a negative manner, it is quite likely that voltages and the resulting core clocks will be reduced to ensure the GPU remains within design specifications. We’ve seen this happen quite aggressively on some AMD cards, while NVIDIA’s reference cards also tend to fluctuate their frequencies. To be clear, in most situations this is a feature by design rather than a problem.

In many cases clock speeds won’t be touched until the card in question reaches a preset temperature, whereupon the software and onboard hardware work in tandem to carefully regulate other areas such as fan speeds and voltages, ensuring maximum frequency output without an overly loud fan. Since this algorithm typically doesn’t kick into full force within the first few minutes of gaming, the “true” performance of many graphics cards won’t be revealed by a typical 1-3 minute benchmarking run. That’s why we use a 10-minute warm-up period before all of our benchmarks.
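To illustrate why that warm-up period matters, here is a minimal sketch of the idea: discard framerate samples collected before a card has reached steady state, then average the rest. The 600-second window mirrors the 10-minute period mentioned above, but the function name and sample log are hypothetical, not our actual tooling.

```python
# Sketch: average only the framerate samples captured after warm-up.
# The 600s window and the log data below are illustrative assumptions.

def steady_state_average(samples, warmup_s=600.0):
    """Average (timestamp_s, fps) samples taken after the warm-up window."""
    post = [fps for t, fps in samples if t >= warmup_s]
    if not post:
        raise ValueError("no samples recorded after the warm-up window")
    return sum(post) / len(post)

# Hypothetical log: the card starts fast, then settles once it heats up.
log = [(0, 78.0), (300, 76.0), (600, 72.0), (900, 72.5)]
print(steady_state_average(log))  # averages only the 600s+ samples
```

A 1-3 minute run would average the early, artificially high samples and overstate sustained performance; this is exactly the effect the warm-up period removes.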

For now, let’s see how these new algorithms are used when the card is running at default speeds.


Despite the fact that NVIDIA showed a GTX 1080 running at over 2GHz and under 70°C at their launch event, reality is quite different. Here we see core temperatures gradually rising to around 83°C before they are eventually tamed by a combination of the heatsink’s fan speed and voltage / clock speed manipulation. While this is far from the GTX 1080’s official throttle temperature of 90°C, the trickle-down effect it has upon clock speeds and performance is quite interesting to see.


While temperatures climb quite steadily, the chart above makes it quite apparent that NVIDIA’s fan speed profile doesn’t really react until the core reaches approximately 70°C, whereupon it begins a gradual ramp-up before hitting a plateau near 2,000 RPM. That stately and sedate RPM increase is extremely welcome given the noticeable noise caused by the GTX 980 Ti’s rapid ascent to its own fan speed plateau, but it isn’t quite as lethargic as the GTX 980’s profile either.
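The fan behaviour just described can be modelled as a simple piecewise curve: hold an idle RPM until roughly 70°C, then ramp linearly toward a plateau. Every number below (the idle RPM, ramp start and plateau values) is eyeballed from our charts purely for illustration and is not an NVIDIA specification.

```python
# Rough model of the fan profile described above: flat at idle, linear
# ramp from ~70C, plateau near 2,000 RPM. All constants are assumptions.

def fan_rpm(temp_c, idle_rpm=1000, ramp_start=70.0,
            plateau_temp=83.0, plateau_rpm=2000):
    if temp_c <= ramp_start:
        return idle_rpm
    if temp_c >= plateau_temp:
        return plateau_rpm
    # Linear ramp between the start of the fan response and the plateau.
    frac = (temp_c - ramp_start) / (plateau_temp - ramp_start)
    return idle_rpm + frac * (plateau_rpm - idle_rpm)

print(fan_rpm(60))  # below the ramp: 1000
print(fan_rpm(85))  # past the plateau: 2000
```

The gentle slope of the ramp segment is what makes the RPM increase sound "stately" instead of the 980 Ti's abrupt jump.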


Because GPU Boost ensures there’s a direct correlation between the core’s temperature and its maximum frequency, the GTX 1080’s speed starts fluctuating a bit as it gets hotter. The reduction in frequency is minor in the grand scheme of things (about 150MHz) but we have certainly seen better results from NVIDIA in this metric. To make matters even more interesting, it seems like the GTX 1080’s aggressive power draw limiter is behind this frequency drop-off rather than temperatures.


Naturally, with the frequency reduction there’s a corresponding framerate hit as well. In this case Rise of the Tomb Raider goes from running at 75FPS to hovering around 72FPS, so we’re looking at an approximate 4% reduction in real-world performance. To avoid this you will need to use a custom fan profile (just expect a corresponding noise increase as well) through an application like EVGA’s Precision, or wait for custom cooled GTX 1080 cards from NVIDIA’s board partners. For the record, this isn’t exactly a great result for a card that commands a $100 premium over what will likely be very well behaved alternate solutions from EVGA, ASUS, Gigabyte, MSI and others.
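For reference, the ~4% figure is just the relative framerate drop, computed from the numbers quoted above:

```python
# One-line check of the framerate-drop arithmetic (75FPS -> 72FPS).

def percent_drop(before_fps, after_fps):
    return (before_fps - after_fps) / before_fps * 100.0

print(round(percent_drop(75, 72), 1))  # → 4.0
```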
 
Thermal Imaging / Acoustics / Power Consumption

Thermal Imaging



As with all full-coverage coolers, there really isn’t much to see on the thermal imaging shots other than a small hot spot directly below the core on the backplate. Since all of the heat is effectively exhausted out the back, there should be very little worry about temperature problems with motherboard-mounted components.


Acoustical Testing


What you see below are the baseline idle dB(A) results for a relatively quiet open-case system (specs are in the Methodology section) sans GPU, along with the results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear level exactly 12” away from the GPU’s fan. For the load scenarios, Rise of the Tomb Raider is used to generate a constant load on the GPU(s) over the course of 15 minutes.


As we’ve already seen, the GTX 1080’s fan curve is somewhat lethargic, possibly to the point of not being aggressive enough. With that being said, this is a pretty quiet card which is only beaten out by a few other, substantially less powerful options. It should be interesting to see what board partners do with this thing since I’m sure they will have cooler designs that deliver lower temperatures and even quieter noise levels.


System Power Consumption


For this test we hooked our power supply up to a UPM power meter that logs the whole system’s power consumption twice every second. To stress the GPU as much as possible we used 15 minutes of Rise of the Tomb Raider, while letting the card sit at a stable Windows desktop for 15 minutes to determine peak idle power consumption.
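As a sketch of how peak figures fall out of a 2 Hz meter log like the one just described, the snippet below picks the highest reading inside an idle window and a load window. The sample data is entirely made up; only the peak-over-window logic is the point.

```python
# Sketch: extract peak idle/load power from (timestamp_s, watts) samples
# logged twice per second. The log values below are hypothetical.

def peak_watts(samples, start_s, end_s):
    """Return the highest reading (in watts) inside [start_s, end_s)."""
    window = [w for t, w in samples if start_s <= t < end_s]
    if not window:
        raise ValueError("no samples in the requested window")
    return max(window)

# 0-900s idle at the desktop, 900-1800s under a game load (invented).
log = [(0, 85.0), (450, 88.0), (899, 86.0), (900, 290.0), (1350, 310.0)]
print(peak_watts(log, 0, 900))     # peak idle
print(peak_watts(log, 900, 1800))  # peak load
```

Logging the whole system rather than the card alone means CPU load variations are included, which is why a long, constant in-game load is used rather than a short burst.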


With its 16nm manufacturing process, advanced power delivery system and relatively cool temperatures, it should be no surprise that the GTX 1080 is a power miser. When you take these results and combine them with the absolutely spectacular performance numbers, this becomes the absolute best performance per watt GPU available by a country mile. It really is incredible what NVIDIA has accomplished here.
 
Overclocking Results; Going Well Beyond Insanity

Overclocking Results


Overclocking the GTX 1080 and all upcoming Pascal cards will be a different affair from Maxwell, Kepler and other NVIDIA cards. While the Power, Voltage and Temperature limits are all present and accounted for once again, some additional features have been added in an effort to squeeze every ounce of extra clock speed out of these cores.


NVIDIA’s GTX 1080 takes this clock speed balancing act to the next level with GPU Boost 3.0, which utilizes all of the additional measurement points and event prediction from the 2.0 iteration and builds in even more granularity for overclocking. GPU Boost 2.0 utilized a fixed frequency offset which adjusted clock speeds and voltages based on a single voltage-based linear multiplier. While this did allow for good frequency scaling relative to the attainable maximum, some performance was still left on the cutting room floor.

GPU Boost 3.0 endeavors to rectify this situation by giving overclockers the ability to adjust clock speeds and their associated voltages across several points along a voltage path, thus (in theory) delivering better performance than would otherwise have been attained. Basically, since Pascal features many different voltage read points, they can be adjusted one at a time to ensure frequencies at each level are maximized.


EVGA’s Precision tool has been upgraded to take advantage of this new functionality by adding three different voltage scaling options: Basic, Linear and Manual. Within the new dialog box you will find 30 small bars, each of which indicates a separate voltage point which can be increased or decreased based on stability. In addition, each shows the final attainable voltage and Base clock speed if the overclock is tested as stable.

In Basic mode you simply click a point above one of the voltage bars and a green line will move upwards showing an approximate visual-based level for each point. This is extremely straightforward but it is simply a quick and easy fix for a no-nonsense overclock. In testing I found this to be the least exacting way of squeezing performance out of the GTX 1080.

I wasn’t exactly successful with the Linear option either since even though it offers a bit more flexibility, it still felt a bit limiting since the targets move in a very predetermined pattern.

The fully Manual area is where I achieved the best results since it grants a user complete control over the entire system and each point’s voltage level. Each point can be adjusted in such a way that stability can be virtually ensured if you spend enough time at it. With these in hand you can (and should!) slowly but surely manipulate the voltage increases (the “heights” of each bar) and clock speeds until the optimal combination is found. Boosting a single bar to its maximum value is a surefire path to failure though, since one voltage point cannot hope to remain stable with the weight of an entire overclock pressing down on it. Granted, this option requires a substantial investment in terms of effort, but the payoffs are certainly there as you will see in the results below.
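Conceptually, the curve editing described above boils down to a list of voltage/frequency pairs with either one flat frequency offset (Basic) or an independent offset per point (Manual). The 30-point count matches Precision's bars, but the base curve values below are invented purely for illustration and do not correspond to any real card's voltage/frequency table.

```python
# Toy model of GPU Boost 3.0 style per-point curve editing. The base
# curve numbers are assumptions; only the offset mechanics are shown.

NUM_POINTS = 30  # matches the 30 bars in Precision's dialog

def base_curve():
    # Hypothetical monotonically rising (voltage_v, frequency_mhz) pairs.
    return [(0.80 + 0.01 * i, 1600 + 15 * i) for i in range(NUM_POINTS)]

def apply_basic(curve, offset_mhz):
    """Basic mode: one flat frequency offset across every point."""
    return [(v, f + offset_mhz) for v, f in curve]

def apply_manual(curve, offsets_mhz):
    """Manual mode: an independent offset per voltage point."""
    return [(v, f + o) for (v, f), o in zip(curve, offsets_mhz)]

curve = base_curve()
basic = apply_basic(curve, 100)
manual = apply_manual(curve, [50 + i for i in range(NUM_POINTS)])
print(basic[0][1], manual[-1][1])  # → 1700 2114
```

The practical difference is exactly what the text describes: Basic shifts the whole curve at once, while Manual lets the upper points carry smaller (or larger) offsets than the lower ones, so each region of the curve can be pushed only as far as it remains stable.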

If you choose to skip using your own judgment in this respect, EVGA has a Scan tool in the Manual section which will slowly adjust each point, run their OC Scanner utility to test for stability and then move onto the next point. While it wasn’t completely working during our testing, it should be ready in time for launch.

So with all of this being said, what did I end up settling on as an overclock?


Well that isn’t all that shabby now is it? After more than two hours working on Precision the core was able to hit a constant Boost speed of 2126MHz, representing a pretty shocking 400MHz+ overclock when compared to the standard load results I achieved. The memory got in on the party as well with a decent showing at 5670MHz. This was all accomplished while maintaining a fan speed of 55% which proved to be quiet enough to not disturb gaming sessions.

Now, actually getting to this point wasn’t easy and it should be noted that even though NVIDIA’s cooler is quite capable, its default fan profile is absolute garbage if you intend on overclocking. I also smashed head first into a voltage and Power Limit wall, so even if I had been able to achieve higher theoretical frequencies, they would have been dragged back down to earth by NVIDIA’s Boost algorithms.

Performance scaling was extremely decent as well, as evidenced by the in-game results below. OK, now I’m being facetious….the resulting performance from these overclocks is mind-blowing. I want to put this into context for everyone: when overclocked, the GTX 1080 can achieve performance that’s close to TWO GTX 980 Ti’s in SLI.






 
Conclusion; Mission Accomplished & Then Some!



NVIDIA’s GTX 1080 represents something almost unique in today’s computer component market, a space that has been continually subjected to incremental improvements from one product generation to the next. I can’t remember the last time a product allowed me to write from the heart instead of trying to place some kind of positive spin on the latest yearly stutter that may have brought a bit more performance to the table. Pascal, and by extension the GTX 1080, have changed that in a big way by offering a leap forward in terms of graphical efficiency, overall performance and a top-to-bottom feature set. Not only am I excited about what this kind of launch does to the competitive landscape (they say challenges breed innovation) but I’m also anxious to see what developers will accomplish with this newfound horsepower.

To say the GTX 1080 exceeded expectations is understating things by an order of magnitude. While NVIDIA did spill some of the beans with their nebulous but nonetheless cheer-inducing launch event performance graphs, the full reality of the situation is still a bit awe-inspiring. What’s been accomplished here is a generational performance shift of a size not seen since Fermi launched and ushered in the DX11 age. And yet for a multitude of reasons Pascal is more impressive than Fermi ever was.


From a raw framerate standpoint it’s impossible not to be impressed with the GTX 1080. Rather than toeing the usual conservative inter-generational line of 15-20% increases that keep buyers of the last architecture’s flagships content, it demolishes preconceptions. Remember, like the GTX 780 before it, the GTX 980 offered performance that was about 30% better than its direct predecessor. With the GTX 1080 we are seeing a roughly 68% improvement over the GTX 980’s framerates at 1440P, and in some memory-limited scenarios the delta between these cards can reach much higher than that. For GTX 680 and GTX 780 users, this thing will act like a massive defibrillator shock for their systems’ flagging performance in today’s latest titles.

Against the GTX 980 Ti, a card that launched for $649 and until just recently was considered an ultra high-end option, the GTX 1080 actually looks like a viable upgrade path, particularly when overclocked. That’s something that could never have been said about the GTX 780 Ti to GTX 980 transition. Not only does it offer 35% higher framerates (on average) than NVIDIA's erstwhile flagship but it does so while consuming less power. While we couldn’t add it into these charts in time, the $999 TITAN X is about 3% faster than the GTX 980 Ti, so it would still be beaten like a lazy donkey by NVIDIA’s latest 104-series core. Looking at these results, I can’t help but be anxious for what the GTX 1070 could potentially bring to the table for more budget-conscious gamers.
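Those percentages also let us ballpark the GTX 1080 versus the TITAN X, using only the figures quoted above (roughly +35% over the GTX 980 Ti for the GTX 1080, +3% for the TITAN X); the result is an estimate derived from averages, not a measured benchmark:

```python
# Derive the 1080-vs-TITAN X gap from two cards' ratios to a common base.

def relative(a_over_base, b_over_base):
    """Percent by which card A beats card B, both normalized to a base."""
    return (a_over_base / b_over_base - 1.0) * 100.0

gtx1080 = 1.35   # vs GTX 980 Ti, from the text
titan_x = 1.03   # vs GTX 980 Ti, from the text
print(round(relative(gtx1080, titan_x), 1))  # → 31.1
```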

With all of this taken into account, the GTX 1080 is able to walk all over the R9 Fury X too, at least in DX11 situations. NVIDIA is obviously marching to the beat of a different drummer but don’t count AMD out of the fight just yet. Looking past the initial numbers against the GTX 1080, we can see AMD’s driver team has been able to leverage their architecture’s strengths, and the Fury X is now able to step ahead of NVIDIA’s GTX 980 Ti more often than not.


DX12 actually proved to be an interesting counterpoint, but one which proved absolutely detrimental to the GTX 980. In many applications its memory bandwidth was simply overwhelmed, leading to the GTX 1080 nearly doubling its results. There was some face-saving in 4K DX12 however, since we needed to turn down MSAA in Ashes to achieve somewhat playable framerates on all the cards. This bodes well for Pascal, but I find myself wondering how well GTX 980 users are prepared for gaming’s DX12 future because from these initial tests it seems like dire news indeed. The GTX 980 Ti versus GTX 1080 equation is virtually identical here to what it was in DX11, but the new core does show some flashes of vast superiority when asked to render bandwidth-hogging DX12 workloads.

With the GTX 1080 NVIDIA has thrown up a huge bulwark against whatever AMD has coming down the pipeline but it is numbers like this which should give Radeon users some real hope. Their DX12 performance in general is still very strong, making it evident the Fiji architecture is extremely forward thinking in the way it handles the new API’s draw calls and asynchronous workloads. Whether or not that continues into the future is anyone’s guess but at times, in very select benchmarks the Fury X can really power through these scenarios.

There is one major caveat with these DX12 results. Right now the API is still extremely immature and developers are still coming to grips with its various functionalities, which is why some benchmarks actually saw a performance reduction versus DX11. Both driver support and DX12’s integration into games still have a long way to go, so above all else don’t base your purchasing decision solely upon these early benchmarks, regardless of how much they favor NVIDIA’s new architecture.

So the GTX 1080 is wonderfully fast. Ludicrously fast even. It’ll make recent GTX 980 Ti buyers curse their impatience, cause GTX 980 owners to look into selling off various appendages just to buy one and it should keep AMD awake at night praying that Polaris is up to the task of competing. But regardless of how much I want to scream like a Justin Bieber fangirl at what the Pascal architecture is offering, the cynic in me realizes it isn’t perfect.

Let’s start with the already-infamous Founders Edition and its associated price, two things I haven’t really discussed until this juncture. Some items contained in this launch like the SLI Enthusiast key I can overlook as being potentially beneficial over the long term for certain niches but the FE is a head scratcher.

No matter how much NVIDIA wants to play up its premium design elements and carefully selected components, the $100 additional investment required by the Founders Edition will be extremely hard to justify over whatever $599 alternatives their partners are working on. This just highlights the extreme disconnect that comes along with this whole affair; I’m already comparing the Founders Edition to unreleased, unannounced, hypothetical cards and telling you to wait before jumping onto the GTX 1080 bandwagon. Why? Because they’ll hopefully offer better performance consistency than the “reference” heatsink you pay so dearly for, and at least by that time you'll know what the competitive landscape looks like. And let’s be introspective for a moment; unlike other Founders / Day One / Backers editions, this one doesn’t come with any extras or exclusive goodies. On the plus side, that blower setup will be an awesome addition for Small Form Factor systems that live or die by temperature levels inside the chassis.

Speaking of price, for all the GTX 1080’s impressive performance benefits I’m forced to evaluate this thing as a $699 graphics card because until we see otherwise, that’s exactly what it is. The Founders Edition may very well be the only SKU available in sufficient quantities come launch day on May 27th so early adopters will have to happily chow down on that $100 “blower tax” for the chance to own one. NVIDIA knows their customers will do exactly that and they’ve priced the reference card (no, I won’t stop calling it that!) accordingly.

The GTX 1080 is clearly a superior product that completely overturns the graphics card market as we know it. While $699 will be a bitter pill to swallow for some, and it may point towards a gradual uptick in the price we all pay for GPUs, there’s no denying that the GTX 1080 Founders Edition still offers phenomenal bang for your buck. Meanwhile, the $599 versions could end up being absolutely spectacular. Regardless of what you think about NVIDIA’s pricing structure you have to appreciate what they’ve accomplished: with a single finely crafted, high performance graphics core they’ve made us all lust after an inanimate but oh-so-sexy object.

 