
NVIDIA GeForce RTX 2080 Ti & RTX 2080 Review

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,861
Location
Montreal
Overclocking Results - Pushing Framerate Possibilities


At this point it should be more than obvious that the RTX 2080 Ti and RTX 2080’s heatsinks are more than ready to tackle the heat put out by the TU102 and TU104 cores. However, we’ve seen time and again that even the best coolers at stock settings can fall flat when it comes to overclocking.

NVIDIA has made no small amount of claims when it comes to overclocking Turing-based GPUs. At their editors’ presentation at Gamescom, we saw the RTX 2080 operating around the 2.1GHz mark. That’s super impressive to say the least, but we didn’t know the test’s conditions. Plus, Pascal GPUs were severely voltage limited without some significant modifications, and I was wondering if that same limitation exists with the RTX 2080 and 2080 Ti.

But before we get too far into this section I wanted to mention I haven’t explored all of the options for squeezing every last bit of performance from the RTX cards. For example, NVIDIA’s board partners will be including a new scanning tool that NVIDIA developed alongside their GPU Boost 4.0 technology.


This is what the EVGA implementation looks like. Basically what it does is make overclocking easy for newcomers by launching a simulated load on the GPU while gradually increasing voltage and clock speeds. If you move the Power and Temperature Limit sliders to higher values, the scanner will automatically shift its targets as well. It will eventually settle on a final overclock that should be completely stable without the need for endless trial and error testing. But then again, I’m a sucker for punishment so I happen to like the inexact science of overclocking.
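As an aside, the scanner’s basic approach can be sketched in a few lines. This is purely a conceptual illustration, not NVIDIA’s actual algorithm: the stress test and the hidden stability limit below are simulated stand-ins for the real hardware.

```python
# Conceptual sketch of a scanner-style auto-overclock loop.
# The "GPU" here is simulated: assume the silicon is stable up to a
# hidden maximum core offset, which the scanner does not know upfront.

HIDDEN_STABLE_LIMIT = 145  # MHz; stands in for this card's silicon quality

def stress_test_passes(offset_mhz: int) -> bool:
    """Stand-in for launching a simulated load at the given core offset."""
    return offset_mhz <= HIDDEN_STABLE_LIMIT

def scan_for_stable_offset(step_mhz: int = 15, margin_mhz: int = 15) -> int:
    """Raise the core offset step by step until the load fails,
    then back off by a safety margin to land on a stable setting."""
    offset = 0
    while stress_test_passes(offset + step_mhz):
        offset += step_mhz
    return max(offset - margin_mhz, 0)

print(scan_for_stable_offset())  # settles safely below the hidden limit
```

The real tool works against live silicon rather than a known limit, which is why it needs a gradual ramp instead of jumping straight to a target.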

What I did was use the NVIDIA Scanner tool to set a baseline overclock on both cards and then dial in the clock speeds with manual inputs. I should also mention the automatic scanner doesn’t touch memory frequencies, so I needed to work on those too. Throughout all of this my goal was to hit the highest speeds possible that were also stable for long gaming sessions. That means the results show the on-the-fly load frequencies each GPU settled on after 30 minutes of gaming. The fans were set between 41% and 52% for all tests and even then, neither card came close to the assigned Temperature Target.


If you recall, the RTX 2080 Ti Founders Edition settled at 1680MHz in stock form, but with a bit of tuning it ended up continually running around 1980MHz. The memory didn’t get all that far though, going from 14GHz to just over 15.5GHz. One thing to note is that GDDR6 memory has error correction routines which cause it to throttle rather than show rendering errors most of the time, so detecting its true stable overclock is pretty challenging.


The amount of headroom shown by the RTX 2080 is pretty reasonable, with it going from a Boost speed of 1845MHz to a relatively constant 2085MHz. The memory on this particular card actually came close to hitting 16GHz, which is super impressive.
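For reference, the clocks quoted above work out to the following rough percentage gains. A quick sketch of the arithmetic (the RTX 2080 memory figure assumes it topped out right around 16GHz, since it only came close):

```python
# Rough percentage gains implied by the stock vs. overclocked speeds quoted above.
def pct_gain(stock: float, oc: float) -> float:
    """Percentage increase from stock to overclocked frequency."""
    return (oc - stock) / stock * 100

results = {
    "RTX 2080 Ti core":   pct_gain(1680, 1980),  # ~17.9%
    "RTX 2080 Ti memory": pct_gain(14.0, 15.5),  # ~10.7%
    "RTX 2080 core":      pct_gain(1845, 2085),  # ~13.0%
    "RTX 2080 memory":    pct_gain(14.0, 16.0),  # ~14.3% (approximate ceiling)
}
for name, gain in results.items():
    print(f"{name}: +{gain:.1f}%")
```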

After spending a few hours overclocking the RTX cards, it’s clear they do have some overhead if you actually need more performance. However, much like Maxwell and Pascal, they are strictly limited by the amount of additional voltage you can apply in current overclocking tools. The cores rarely reached the higher Power Limit and neither came close to the Temperature Target, so it’s more than obvious higher voltage settings would allow you to go even further. However, NVIDIA and their board partners won’t allow that.


In Battlefield 1, the RTX 2080 Ti and RTX 2080 both get a good increase of about 12%, but honestly, I would likely just keep the settings at stock speeds unless you absolutely need those few extra frames per second.


Far Cry 5 provides more interesting results. The RTX 2080 Ti gets a big bump but the RTX 2080 sees a smaller increase, likely due to a framebuffer or core architecture limitation rather than clock speeds.
 

Conclusion - The Fastest GPUs Ever, For A Price


With the RTX series, NVIDIA is trying to push the gaming industry towards an evolution that may lead to the titles we play taking one more step towards lifelike realism. They’re trying to sell an idea above all else, and that means putting forward cutting edge technologies that may not be fully realized for months or even years to come. However, with a vision of “build it and they will develop for it” being a cornerstone of this initiative, many gamers simply want to know what is being offered now, not years down the road.

After being asked to pony up huge amounts of money, many weren’t worried about the RTX-series cards’ ability to chew through next generation tasks. Performance in traditional games was the question mark. Ironically it turns out that NVIDIA’s RTX 2080 and RTX 2080 Ti are ridiculously capable GPUs in current titles without the need to flex their artificial intelligence or ray tracing muscles.


With the Turing architecture’s baseline improvements, the RTX 2080 Ti is the fastest GPU ever created for current games. On average it beats the GTX 1080 Ti by 35% at 1440P and an even more impressive 42% at 4K, and there were times when that 4K number edged closer to 60%. To put that into perspective, whereas Intel has been improving their CPUs by an average of 5% to 10% per year, it has taken NVIDIA a mere 18 months to leap forward by this amount. That isn’t just noteworthy, it is staggering.
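To make that comparison concrete, a generational gain over 18 months can be annualized by compounding. A quick back-of-the-envelope calculation: a 42% jump over 18 months works out to roughly 26% per year, comfortably above the 5-10% annual cadence mentioned for CPUs.

```python
# Annualize the quoted 18-month generational gain for an
# apples-to-apples comparison against per-year CPU improvements.
gain_18mo = 0.42          # RTX 2080 Ti over GTX 1080 Ti at 4K
months = 18

# Compound growth rate scaled to a 12-month period.
annualized = (1 + gain_18mo) ** (12 / months) - 1
print(f"annualized gain: {annualized:.1%}")  # roughly 26% per year
```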

Now granted, that gap does narrow in some titles due to either a lack of driver optimizations or game engine limitations. For example, Total War: Warhammer II is a great game but its DX12 mode needs more development despite being in “beta” since the original Warhammer was launched. All cards suffer in it. There are a few other titles as well, but even when those are factored into the equation, the RTX 2080 Ti still performs better than two GTX 1080 cards in SLI.

One thing to take note of is the top-end power of this card is obviously being hamstrung at 1440P as evidenced by the notable uptick in the gap between it and lesser cards when jumping up to 4K. The RTX 2080 Ti is hugely fast but it’s even more impressive in UHD scenarios.


The RTX 2080 feels like the sweetheart of this launch regardless of the raw performance offered by its bigger sibling. It still comes with the RT and Tensor cores which are supposed to benefit users sometime down the road but does so in a substantially more efficient and less expensive package. Basically at 1440P it ends up ahead of the GTX 1080 Ti by a good 11% and a massive 36% in front of its predecessor, the GTX 1080. Things do narrow substantially between it and the GTX 1080 Ti at 4K but that was to be expected given the 2080’s lower memory capacity and bandwidth.

That memory allotment does become a slight hindrance though. Whereas the 11GB RTX 2080 Ti and GTX 1080 Ti cards provided consistent performance across all games at 4K, the standard 2080’s smaller 8GB framebuffer proved to be a limitation in Wolfenstein II: The New Colossus. It only affects one title in our tests, but as new rendering methods and HDR increase games’ memory footprints, one has to wonder whether this situation won’t pop up more often for people running 4K monitors.

Balancing efficiency with raw performance obviously played a big role in Turing as well. Both cards feature performance per watt ratios that are impressive to say the least. They’re blissfully quiet as well, just be warned that my very expensive RTX 2080 Ti did exhibit some coil whine when framerates were pushed to 160 and beyond. Below that level, the card was quite well behaved though.

But what about the competition? Well folks, it doesn’t exist. Every single metric here just showed how far behind AMD really is. I’m not talking about months; this is now a factor of years. AMD has effectively ceded the high end GPU market to NVIDIA. As a result and with Intel’s currently-unnamed discrete GPU due sometime in 2020, the GeForce lineup essentially has a monopoly.

This provides a perfect segue into a deeply rooted issue with the new RTX cards: their price, or at least the perception of cost versus their predecessors. First of all, every one of the benchmarks you saw comes from the Founders Edition, which boasts a significant premium and (thankfully!) higher than stock clock speeds as well. So what many will be doing is comparing the launch prices of a $1,200 RTX 2080 Ti to a $700 GTX 1080 Ti and an $800 RTX 2080 to a $550 GTX 1080. That is a bitter pill to swallow despite the substantial performance bumps, particularly when you take into account the potential lack of “reference” $1,000 and $700 options.

So the question that begs answering is simple: are those premiums justifiable? Over the course of nearly two hours at Gamescom NVIDIA tried to convince us those next generation RTX features were worthy of such high prices. However, after spending the better part of a week with these cards my answer is a simple “no they aren’t quite worth it, yet”. Why? Because the RTX story feels like one we’ve heard before despite the impressive current-day performance results.

As PC enthusiasts we’ve been promised amazing things from the likes of DX10, DX11, DX12, Vulkan, PhysX, 3D Vision, TrueAudio, OpenCL, Heterogeneous System Architecture and countless other technologies. They all looked great at first but ultimately failed to reach broad scale deployment. This time is a bit different since gamers are being asked to essentially pony up the money for that future potential like some quasi-Kickstarter campaign. You pay for the prestige (and some additional performance) now in the hope there’ll be benefits down the road.

Granted DLSS seems to have legs since every one of the dozen or so developers I’ve spoken to in the last two weeks is excited about it and the simplicity by which it can be added to existing game engines. Ray tracing in games is obviously where the industry needs to go and some of the tech demos NVIDIA showed were indeed impressive. But before I go out and outright recommend a $1,200 RTX 2080 Ti, NVIDIA needs to prove they can maintain long term momentum for RTX features after the initial groundswell of sponsored titles.

You may have noticed I didn’t lump the vanilla RTX 2080 Founders Edition into those last statements and that’s because it actually looks like a pretty decent buy. Certainly not award worthy but still a card to recommend in this price range. With most custom GTX 1080 Ti’s still going for about $700, the $100 (~15%) jump seems very close to in line with the performance uplift without even taking RTX technologies into account. That means you can still dabble in next generation features without having to sell your dog, your first born and a kidney. Plus, in my opinion the Founders Edition card looks better than any board partner version shown to date.
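For anyone doing the math on that value argument, the figures quoted above work out to roughly a 14-15% price jump against an 11% average performance lead at 1440P. A quick sketch:

```python
# Price-versus-performance sanity check using the figures quoted above:
# ~$700 for a custom GTX 1080 Ti vs. $800 for the RTX 2080,
# against its ~11% average lead at 1440P.
price_1080ti, price_2080 = 700, 800
perf_lead = 0.11  # RTX 2080 over GTX 1080 Ti at 1440P

price_jump = (price_2080 - price_1080ti) / price_1080ti
print(f"price: +{price_jump:.0%}, performance: +{perf_lead:.0%}")
```

On those numbers the premium lands close to the raw performance uplift even before any RTX features are counted, which is the crux of the value case made above.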

NVIDIA is obviously gambling with Turing. The TU102 and TU104 are gigantic chips that are costly to produce, but if developers fail to buy into their new technologies, vast swaths of those very expensive cores will sit idle. That wouldn’t be a good situation for buyers either since, with the RTX 2080 Ti at least, they’re being asked to gamble with their own money right alongside NVIDIA.

With all of this being said, if you take time to really think about the situation, NVIDIA may be onto something here. Perhaps RTX is exactly what the industry needs to push more innovation into development and drag it beyond its cozy little safe zone. Now that’s an idea I’d get behind in an instant.
 
