The NVIDIA GTX 1080 Ti Performance Review


Author: SKYMTL
Date: March 8, 2017
Product Name: GTX 1080 Ti Founders Edition
Warranty: 3 Years

Last week NVIDIA pulled the covers off of their newest creation: the GTX 1080 Ti. Following in the footsteps of the surprisingly popular GTX 980 Ti, it is meant to push the high-cost TITAN X out of its flagship role while offering both better performance and a lower price point. Considering the complete lack of competition in the $500+ gaming GPU category, this move to lower prices may be surprising to some, but we can’t forget that AMD’s Vega is supposed to launch sometime this year and NVIDIA may be proactively adjusting their product stack accordingly.

There’s also the metric of time, which has been rapidly ticking away as NVIDIA waited for some response from their competition. It may not feel like a long time but the GTX 1080 was released 10 months ago, the TITAN X seven months ago, and the GTX 980 Ti has been around for nearly two years. That may be a blink of history’s eye, but an eternity in GPU years. Now with NVIDIA’s focus shifting to Volta and – if rumors are to be believed – the GTX 1080 Ti being held back since November as a hedge bet against Vega, it was high time for this new model to get launched.

The GTX 1080 Ti is basically a TITAN X which has been massaged in an effort to heighten yields and slightly lower end user costs. As such, it too uses a 16nm GP102 core with the Pascal architecture, with the full assortment of 3584 CUDA cores and 224 texture units. The differences lie in more nuanced areas since even though the primary core structure is fundamentally the same as the TITAN X’s GP102, this iteration has one less 32-bit memory controller, one less ROP partition, and a bit less L2 cache.

As with all of NVIDIA’s GPUs dating back to Kepler, those three elements need to be scaled in parallel to avoid possible performance issues down the road. To ensure optimal memory subsystem performance, each of the GTX 1080 Ti’s eleven memory controllers is paired with a single GDDR5X IC, granting it a total of 11GB of video memory. According to NVIDIA, this will become a key differentiating factor for future games as more buyers look towards 4K and even 5K screens.

At a raw specifications level, the GTX 1080 Ti is supposed to have enough horsepower to surpass the expensive TITAN X in gaming scenarios. To put that into context, NVIDIA’s latest flagship could nearly double the GTX 980 Ti’s performance while offering potentially 40% higher framerates than the GTX 1080. This might indeed be the most potent Ti version ever released.

Achieving TITAN X-beating performance meant increasing base / boost clocks, while also increasing the GDDR5X frequency to 11GHz to compensate for the narrower 352-bit bus. The end result is higher overall bandwidth and supposedly better in-game performance, while still maintaining a TDP of 250W. But make no mistake about it: this is still one hot, power hungry card which will require at least a 650W power supply and plenty of good ventilation within your case.

Pricing plays heavily into this equation as well. At $699 USD, the GTX 1080 Ti Founders Edition certainly won’t be affordable for the majority of gamers, and it is about $50 more expensive than the GTX 980 Ti was when it was released back in 2015. However, its inclusion into the NVIDIA lineup has pushed the GTX 1080 into a lower $499 bracket, making that versatile card a bit more affordable. Ironically, this $699 price point is exactly where the GTX 1080 FE was sitting not that long ago. It also goes without saying that the TITAN X will now move into EOL status.

NVIDIA is also doing away with their ill-advised practice of selling the Founders Edition for a premium, but come launch this will be the only version of this card available, with board partners’ iterations arriving a few weeks to a month afterwards. Our conversations with retailers point towards strong availability come launch, but given how long people have waited for the GTX 1080 Ti, there’s bound to be a lot of pent-up demand as well. It will be interesting to see what that means for the long-term stock situation.

Other than the logo emblazoned on its shroud, the GTX 1080 Ti itself is essentially indistinguishable from other Founders Edition cards. It is 11 ½” long, uses a milled aluminum shroud with an integrated illuminated GeForce logo, and there’s a full coverage backplate to dissipate additional heat.

The only major difference between this and other NVIDIA GPUs is the lack of a dedicated DVI connector on the rear I/O area. Instead, NVIDIA is using the space for additional ventilation, and the Founders Edition will ship with a DisplayPort to Dual-link DVI adapter. Board partners are free to design their cards with that “missing” connector, but I can understand where this is coming from; if you are spending $700 on the GPU, you’ll likely have a DisplayPort-equipped monitor.

What you can’t see with your bare eyes is that NVIDIA has thoroughly revised their internal heatsink design – it now has double the surface area – while also updating the PWM to a more advanced 7-phase all-digital layout. But will this lead to lower temperatures or lower fan speeds when compared against the TITAN X? Experience tells us that blower-style coolers aren’t known for their silence, but hopefully these revisions will help.

One of my biggest disappointments is the dearth of suitable competition for inclusion within this review. For the first time that I can remember, the charts in a launch-day GPU article will consist of products from a single manufacturer: NVIDIA. While this situation is certainly indicative of how far ahead the GeForce product lineup is right now, any healthy market needs competition. So while the GTX 1080 Ti is certainly exciting, and will deliver phenomenal performance across the board (seriously, do you expect anything less?), I find myself hoping that its doppelganger from AMD is right around the corner.

Test System & Setup

Processor: Intel i7 5960X @ 4.7GHz
Memory: G.Skill Trident X 32GB @ 3000MHz 15-16-16-35-1T
Motherboard: ASUS X99 Deluxe
Cooling: NH-U14S
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell U2713HM (1440P) / Acer XB280HK (4K)
OS: Windows 10 Pro

Drivers:
NVIDIA 378.14 Beta

*Notes:

– All games tested have been patched to their latest version

– The OS has had all the latest hotfixes and updates installed

– All scores you see are the averages after 3 benchmark runs

– All IQ settings were adjusted in-game and all GPU control panels were set to use application settings

The Methodology of Frame Testing, Distilled

How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values together gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
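The arithmetic described above is simple enough to sketch in a few lines. This is an illustrative Python sketch of the idea, not actual FRAPS code, and the per-second frame counts are invented for the example:

```python
# Illustrative sketch of FRAPS-style min/avg/max FPS reporting; the
# per-second frame counts below are invented, not real FRAPS output.

def fps_summary(per_second_counts, duration_s, total_frames):
    """Return (min, average, max) FPS the way a basic logger would."""
    average = total_frames / duration_s          # e.g. 4000 / 60 = 66.67
    return min(per_second_counts), average, max(per_second_counts)

# a hypothetical 60-second run rendering 4,000 frames in total
counts = [67] * 58 + [50, 64]                    # sums to 4,000 frames
lo, avg, hi = fps_summary(counts, 60, sum(counts))
print(lo, round(avg, 2), hi)                     # prints: 50 66.67 67
```

Note how the minimum and maximum are each a single one-second data point; everything between them is collapsed into the average, which is exactly why this snapshot is so narrow.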

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much more quickly than this method can report them. So what can actually happen within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool factors into the equation.
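The limitation of per-second averaging is easy to demonstrate: two seconds of gameplay can report the exact same FPS while feeling completely different to the player. A small Python sketch with made-up frame times illustrates the point:

```python
# Two hypothetical one-second intervals, each containing 60 frames,
# expressed as per-frame render times in milliseconds (invented values).
smooth  = [1000 / 60] * 60                      # every frame ~16.7 ms
stutter = [10.0] * 59 + [1000 - 59 * 10.0]      # one 410 ms hitch

for name, times in (("smooth", smooth), ("stutter", stutter)):
    fps   = len(times) / (sum(times) / 1000)    # per-second average FPS
    worst = max(times)                          # slowest single frame (ms)
    print(name, round(fps, 1), round(worst, 1)) # both report ~60 FPS
```

Both sequences average roughly 60FPS, yet the second one contains a single frame that hangs on screen for over 400ms – a stutter any player would notice, and one that a per-second log completely hides.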

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The larger the amount of time, the longer each frame takes to render. This detailed reporting just isn’t possible with standard benchmark methods.

We are now using FCAT for ALL benchmark results in DX11.

DX12 Benchmarking

For DX12 many of these same metrics can be utilized through a simple program called PresentMon. Not only does this program have the capability to log frame times at various stages throughout the rendering pipeline but it also grants a slightly more detailed look into how certain API and external elements can slow down rendering times.

Since PresentMon outputs massive amounts of frametime data, we have decided to distill the information down into slightly more easy-to-understand graphs. Within them, we have taken several thousand datapoints (in some cases tens of thousands), converted the frametime milliseconds over the course of each benchmark run to frames per second and then graphed the results. This gives us a straightforward framerate-over-time graph. Meanwhile, the typical bar graph averages out every data point as it’s presented.
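That distillation step can be approximated as follows. This is a simplified Python sketch of the bucketing idea, not our actual processing script, and the sample frame times are invented:

```python
# Convert a PresentMon-style list of per-frame times (ms) into an
# "FPS over time" series by bucketing frames into elapsed-time windows.

def frametimes_to_fps(frametimes_ms, window_s=1.0):
    buckets, elapsed, count = [], 0.0, 0
    window_ms = window_s * 1000
    for ft in frametimes_ms:
        elapsed += ft
        count += 1
        if elapsed >= window_ms:                      # window complete
            buckets.append(count / (elapsed / 1000))  # frames per second
            elapsed, count = 0.0, 0
    return buckets

# hypothetical run: one second at ~8 ms frames, then frames at ~20 ms
sample = [8.0] * 125 + [20.0] * 100
print([round(f) for f in frametimes_to_fps(sample)])  # [125, 50, 50]
```

Each bucket becomes one point on the framerate-over-time graph, so a slowdown partway through a run shows up as a visible dip rather than being absorbed into a single whole-run average.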

One thing to note is that our DX12 PresentMon results cannot and should not be directly compared to the FCAT-based DX11 results. They should be taken as a separate entity and discussed as such.

Call of Duty: Infinite Warfare

The latest iteration in the COD series may not drag out niceties like DX12 or particularly unique playing styles but it nonetheless is a great looking game that is quite popular.

This benchmark takes place during the campaign’s Operation Port Armor wherein we run through a sequence combining various indoor and outdoor elements along with some combat.


Fallout 4

The latest iteration of the Fallout franchise is a great looking game with all of its detailed turned to their highest levels but it also requires a huge amount of graphics horsepower to properly run. For this benchmark we complete a run-through from within a town, shoot up a vehicle to test performance when in combat and finally end atop a hill overlooking the town. Note that VSync has been forced off within the game’s .ini file. We are now also using the High Resolution Texture Pack add-on.


Grand Theft Auto V

In GTA V we take a simple approach to benchmarking: the in-game benchmark tool is used. However, due to the randomness within the game itself, only the last sequence is actually used since it best represents gameplay mechanics.


Overwatch

Overwatch happens to be one of the most popular games around right now and while it isn’t particularly stressful upon a system’s resources, its Epic setting can provide a decent workout for all but the highest end GPUs. In order to eliminate as much variability as possible, for this benchmark we use a simple “offline” Bot Match so performance isn’t affected by outside factors like ping times and network latency.


Titanfall 2

The original Titanfall met with some reasonable success and its successor tries to capitalize upon that foundation by including a single player campaign while expanding multiplayer options. It also happens to be one of the best looking games released in 2016.

This benchmark sequence takes place within the Trial By Fire mission, right after the gates of the main complex are breached. Due to the randomly generated enemies in this area, getting a completely identical runthrough is challenging, which is why we have increased the number of datapoints to four.


Witcher 3

Other than being one of 2015’s most highly regarded games, The Witcher 3 also happens to be one of the most visually stunning as well. This benchmark sequence has us riding through a town and running through the woods; two elements that will likely take up the vast majority of in-game time.


Battlefield 1

Battlefield 1 will likely become known as one of the most popular multiplayer games of its generation, but it also happens to be one of the best looking titles around. It is extremely well optimized as well, with even the lowest end cards able to run at high detail levels.

In this benchmark we use a run-through of The Runner level after the dreadnought barrage is complete and you need to storm the beach. This area includes all of the game’s hallmarks in one condensed area with fire, explosions, debris and numerous other elements layered over one another for some spectacular visual effects.


Deus Ex – Mankind Divided

Deus Ex titles have historically combined excellent storytelling elements with action-forward gameplay and Mankind Divided is no different. This run-through uses the streets and a few sewers of the main hub city Prague along with a short action sequence involving gunplay and grenades.


The Division

The Division has some of the best visuals of any game even though its graphics were supposedly downgraded right before launch. Unfortunately, actually benchmarking it is a challenge in and of itself. Due to the game’s dynamic day / night and weather cycle it is almost impossible to achieve a repeatable run within the game itself. With that taken into account we decided to use the in-game benchmark tool. In addition, we are now using the DX12 patch.


Doom (Vulkan)

Not many people saw a new Doom as a possible Game of the Year contender but that’s exactly what it has become. Not only is it one of the most intense games currently around but it looks great and is highly optimized. In this run-through we use Mission 6: Into the Fire since it features relatively predictable enemy spawn points and a combination of open air and interior gameplay.


Gears of War 4

Like many of the other exclusive DX12 games we have seen, Gears of War 4 looks absolutely stunning and seems to be highly optimized to run well on a variety of hardware. In this benchmark we use Act III, Chapter III The Doorstep, a level that uses wide open views along with several high fidelity environmental effects. While Gears does indeed include a built-in benchmark we didn’t find it to be indicative of real-world performance.


Hitman (2016)

The Hitman franchise has been around in one way or another for the better part of a decade and this latest version is arguably the best looking. Adjustable to both DX11 and DX12 APIs, it has a ton of graphics options, some of which are only available under DX12.

For our benchmark we avoid using the in-game benchmark since it doesn’t represent actual in-game situations. Instead the second mission in Paris is used. Here we walk into the mansion, mingle with the crowds and eventually end up within the fashion show area.


Quantum Break

Years from now people likely won’t be asking if a GPU can play Crysis, they’ll be asking if it was up to the task of playing Quantum Break with all settings maxed out. This game was launched as a horribly broken mess but it has evolved into an amazing looking tour de force for graphics fidelity. It also happens to be a performance killer.

Though finding an area within Quantum Break to benchmark is challenging, we finally settled upon the first level where you exit the elevator and find dozens of SWAT team members frozen in time. It combines indoor and outdoor scenery along with some of the best lighting effects we’ve ever seen.


Rise of the Tomb Raider

Another year and another Tomb Raider game. This time Lara’s journey continues through various beautifully rendered locales. Like Hitman, Rise of the Tomb Raider has both DX11 and DX12 API paths and incorporates a completely pointless built-in benchmark sequence.

The benchmark run we use is within the Soviet Installation level, where we start at about the midpoint, run through a burning warehouse area and then finish inside a fenced-in area during a snowstorm.



Analyzing Temperatures & Frequencies Over Time

Modern graphics card designs make use of several advanced hardware- and software-facing algorithms in an effort to hit an optimal balance between performance, acoustics, voltage, power and heat output. Traditionally, this leads to maximized clock speeds within a given set of parameters. Conversely, if one of those last two metrics (heat and power consumption) steps into the equation in a negative manner, it is quite likely that voltages and resulting core clocks will be reduced to ensure the GPU remains within design specifications. We’ve seen this happen quite aggressively on some AMD cards, while NVIDIA’s reference cards also tend to fluctuate their frequencies. To be clear, in most situations this is a feature by design rather than a problem.

In many cases clock speeds won’t be touched until the card in question reaches a preset temperature, whereupon the software and onboard hardware will work in tandem to carefully regulate other areas such as fan speeds and voltages in order to ensure maximum frequency output without an overly loud fan. Since this algorithm typically doesn’t kick into full force within the first few minutes of gaming, the “true” performance of many graphics cards won’t be realized through a typical 1-3 minute benchmarking run. Hence the 10-minute warm-up period we use before all of our benchmarks.

The GTX 1080 Ti is obviously a massively powerful card, but it also boasts a TDP of about 250W. That means it outputs a massive amount of heat despite using an upgraded heatsink. Does this spell trouble? Let’s find out.

Temperatures for these NVIDIA cards are typically capped at 84°C in an effort to give GPU Boost some maneuvering room, while ensuring the silicon doesn’t get excessively hot. That just so happens to be where this card tops out and the fan doesn’t allow it to go one iota above that point.

Fan speeds actually remain lower than the TITAN X’s, which could point towards some of those heatsink upgrades NVIDIA was talking about. This will also likely lead to lowered overall acoustics, but it is nonetheless interesting to see how even a blower-style fan is able to remain at respectable RPM levels provided the internal heatsink is up to the task.

Obviously, Boost will be working overtime to ensure clock speeds, temperatures and power consumption remain in line, but we can see that it works amazingly well. Not only do frequencies remain above those posted by the TITAN X, but they also stay consistent throughout testing.

Whereas the TITAN X tended to have a dip in framerates after a few minutes of testing due to load balancing, the GTX 1080 Ti’s output is quite steady. That points towards a well-tuned design that doesn’t need to sacrifice in one particular area to achieve its goals.

Acoustical Testing

What you see below are the baseline idle dB(A) results attained for a relatively quiet open-case system (specs are in the Methodology section) sans GPU along with the attained results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 12” away from the GPU’s fan. For the load scenarios, Hitman Absolution is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.

On the previous page, we saw an interesting trend of higher clock speeds and lower fan speeds versus the TITAN X. Obviously, that allows the GTX 1080 Ti to be quieter than NVIDIA’s previous generation flagship. Let’s be clear here though: this isn’t a quiet card by any stretch of the imagination, but if you want near-silence it may be worthwhile to wait for custom board partner designs… just be prepared for those cards to dump all of their heat back into your case.

System Power Consumption

For this test we hooked our power supply up to a UPM power meter that logs the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we ran 15 minutes of Unigine Valley on a loop, and to determine peak idle power consumption we let the card sit at a stable Windows desktop for 15 minutes.

The power consumption results are somewhat interesting. Even though the GTX 1080 Ti is the most power hungry NVIDIA card we’ve tested in quite some time, much like the TITAN X its performance per watt ratio is straight through the roof. This thing is pulling about 5% more wattage than a GTX 980 Ti, yet it offers nearly double the performance. That’s quite incredible and it shows the inherent efficiency of NVIDIA’s latest chip designs.

Overclocking Results – Pushing Past 2GHz

Back when NVIDIA first showed the GTX 1080 Ti, it was displayed running at a constant overclocked speed of 2GHz. Considering the card runs at 1733MHz on a regular basis even when under heavy load, 2GHz certainly seemed like a possible achievement and it was… but with a few sacrifices.

If you want to hit a 2GHz core speed, acoustics will have to be offered up as a sacrificial lamb on this Founders Edition card. While the heatsink and fan design is geared towards efficiency, higher voltage and heat output quickly push them to the breaking point. As a result you will need to boost fan speeds to 70% or so to achieve adequate cooling… but be prepared to lose a bit of sanity, since things get very loud, very fast.

With a few tweaks it was easy to get my sample running just over 2GHz and I expect board partners’ versions should achieve this frequency level with much less drama. Meanwhile, the memory exceeded expectations by running at 12GHz without any overt issues.

And what kind of performance should you expect from those speeds? Hold onto your hats…


Conclusion: The Fastest Just Got More “Affordable”

I thought I would go into this conclusion being completely unsurprised at what NVIDIA is offering with the GTX 1080 Ti. That didn’t happen. Tradition dictates that the “Ti” version of an architecture offers near-TITAN levels of performance while costing significantly less. This particular version accomplished that goal… and then it shattered preconceptions. Not only does it match the latest TITAN X but it surpasses it in many games as well, though not by a massive amount. That’s a notable accomplishment given how high Pascal’s now-replaced flagship set the high-water mark.

In DX11 there isn’t a game (without mods) in existence that can bring the GTX 1080 Ti to its knees, and that even goes for Fallout 4 with its new memory-gobbling High Resolution Texture Pack. From Call of Duty: Infinite Warfare to Titanfall 2, you’ll be able to power through pretty much everything, even with AA pushed to the limit. Not only does the GTX 1080 Ti do well in current games, but there’s more than enough juice left in its tank for ultra-smooth 4K performance well into the future.

Moving on to 4K, framerates do obviously suffer but this card is infinitely better positioned to tackle such a high resolution than something like the GTX 1080. However, there is something to say about the 11GB memory layout of this card; with many game engines becoming so much better at the way they handle memory allocation, it is completely overkill. And yet, sometimes overkill is a good thing since with it you will be prepared for that oddball memory muncher that comes along every now and then.

As more triple-A games transition to Microsoft’s new API, consistent performance in DX12 will become increasingly important. As a matter of fact, there are now more DX12 tests within our lineup than those using DX11. Here it is increasingly hard to judge how NVIDIA’s GPUs stack up overall since we’re simply comparing them to one another, without an alternate point of view from AMD.

With that being said, there’s really not much to complain about at 4K, but at that ultra-high resolution we start seeing the limits of this card’s power. In Deus Ex, Gears of War, and a few other titles I had to turn down anti-aliasing or other in-game settings to achieve playable framerates. Whether or not you actually need AA at such a high resolution is debatable, but this is nonetheless a hint that more GPU power could be needed for future 4K titles. That’s actually a scary thought.

It isn’t all champagne and roses though. The Founders Edition may look great when compared to the Steampunk-like direction some of NVIDIA’s board partners are taking with their custom designs, but there are notable sacrifices when using a blower-style cooler. First of all, the GTX 1080 Ti’s GP102 core gets nuclear hot and even NVIDIA’s upgraded cooling design can struggle to keep up, particularly when overclocking factors into the equation. The problem here is straightforward: an internal heatsink can be ultra-efficient at whisking heat away from the core, but that heat needs to be dissipated somehow and the single fan just isn’t up to the task.

Overclocking naturally exacerbates the thermal issue as well. Even though I was able to push this sample to a relatively constant core frequency of 2.05GHz, that achievement was only accomplished by running the fan at 75% of its stated maximum output. That resulted in awesome framerates, but an unbearably loud acoustical profile. Without the fan speed increase higher clocks were still possible, but they eventually fell back to stock speeds after being strangled by higher temperatures and of course NVIDIA’s Boost algorithms.

Back when I reviewed the TITAN X, I mentioned that anyone’s perception of that card hinged upon their willingness to slap down $1200 on it. The GTX 1080 Ti is fundamentally different since it may demand a princely sum of $700, but it is still infinitely more accessible than any TITAN card ever was. Much like its predecessor, this card stands alone without any competition, so its perceived value (or lack thereof) will ultimately rest with how much you’re willing to spend for the fastest GPU on the planet. Whereas a $1200 GPU is something many gamers could simply lust after, this one is well within reach for a much larger audience.

At the end of the TITAN X review I asked a simple question: would I buy it? Back then the answer was a resounding “yes, provided my girlfriend doesn’t kill me”. This time around I’m going to unequivocally say “yes, if you have the money”. However, while the GTX 1080 Ti is a technological achievement of pretty epic proportions, and it has staked its territory atop the performance mountain, I think it will be up to NVIDIA’s board partners to make it truly shine. Regardless of what you choose to do – run out and buy a Founders Edition or wait for the inevitable custom designs – the GTX 1080 Ti has set the bar so high that it will be very, very hard to dethrone.
