
NVIDIA GeForce RTX 2080 Ti & RTX 2080 Review

SKYMTL

HardwareCanuck Review Editor
When NVIDIA first showed off their RTX 2080 Ti and RTX 2080 at Gamescom, many who watched the livestream or attended in person were left with more questions than answers. That brings us to today, since we’re finally able to talk about the one thing everyone wanted to know about: performance in today’s games.

By this point many of you are simply going to skip ahead to the benchmarks and eventual conclusion, but there’s a lot going on behind the scenes that is relevant not just for today’s titles but for tomorrow’s as well. That’s because NVIDIA has taken a completely different approach to the baseline architecture. Rather than using yet another evolution of a decade-old design, they’ve moved towards a more revolutionary approach which is being billed as the largest shift in the graphics pipeline since the GeForce 256.

That’s a big promise but beyond the hyperbole, marketing speak and plain old hype, I need to get into the specifics that buyers need to know about. Like the new architecture, this is a very different launch, one that brings some major concerns but also hope that the PC gaming market may be on the cusp of something amazing.

RTX2080-REVIEW-16.jpg

As their names suggest, these new cards utilize NVIDIA’s RTX ecosystem which combines real time ray tracing, GPU compute, deep learning AI and typical rasterization into a single unified architecture code-named Turing. In a nutshell (I’ll go into more detail on the next pages) these are graphics cards targeting the gaming market, but they’re also packed with features that haven’t yet been utilized in game development. NVIDIA is hoping developers are able and willing to use tools traditionally reserved for other industries to create the virtual worlds of tomorrow.

For the time being, NVIDIA will be launching two cards: the aforementioned RTX 2080 and RTX 2080 Ti. Personally I think the use of these names is pretty unfortunate since they immediately draw parallels with the previous generation. The similar naming has naturally caused potential buyers to treat the RTX 20-series as a direct follow-up to the GTX 10-series, and when put into that context, as spiritual successors these new cards look positively overpriced. But remember the Turing architecture is very, very different from Pascal and with its breadth of forward-looking features, the RTX generation is more akin to previous TITAN cards than anything in the GTX stable.

RTX2080-REVIEW-60.jpg

The RTX 2080 Ti sits at the very top of NVIDIA’s current lineup, offering significantly more cores and texture units than the GTX 1080 Ti. What hasn’t changed with this generation is the ROP and memory controller setup. Though there have been drastic improvements in the way these parts are addressed, the TU102 core still communicates with its 11GB of memory via a 352-bit bus while the ROP count still stands at 88.

One of the major challenges for the RTX 2080 Ti will be its price. It may be the most powerful gaming GPU on the planet but at $1,000 it is also one of the highest priced ever. Granted, dual-GPU cards and TITAN-series parts have set even higher MSRPs, but never before has a new architecture launched with a thousand dollar price tag.

RTX2080-REVIEW-10.PNG

NVIDIA’s RTX 2080 is not only significantly less power hungry but its $700 cost is also more palatable for most potential buyers. On paper it is supposed to, at the very least, match the GTX 1080 Ti’s overall performance, which could come as a surprise if you are only looking at its specifications. But Turing’s true power lies in the architecture’s significant departure from previous generations, so those visible specs are backstopped by a massive amount of additional rendering grunt. That’s been accomplished while also consuming less power than the GTX 1080 Ti.

You’ll also notice two new columns in the chart above, those being for RT cores and Tensor cores which will be (according to NVIDIA at least) key technological components in their efforts to move the gaming industry forward. In short, the RT cores are purpose-built stages that are engineered to handle ray tracing activities while the Tensor cores are utilized to accelerate artificial intelligence features. At this point these specs don’t really mean much but as games are launched with support for elements of the RTX ecosystem, we’ll be able to get a better handle on how they perform.

RTX2080-REVIEW-9.png

One of the major improvements within NVIDIA’s Turing architecture is its compatibility with Micron’s next generation GDDR6 memory. Thus, despite using exactly the same bus widths and capacities as their predecessors, the RTX 2080 Ti and RTX 2080 boast significantly more overall memory bandwidth at 616 GB/s and 448 GB/s respectively.
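If you want to sanity check those figures yourself, the math is simple: peak bandwidth is just the bus width (in bytes) multiplied by the per-pin data rate. The quick sketch below assumes 14 Gbps GDDR6 modules, which lines up with the totals quoted above.

Code:
# Rough sanity check of the bandwidth figures above (assumes 14 Gbps GDDR6).
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Bus width in bytes multiplied by the per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gb_s(352, 14))  # RTX 2080 Ti -> 616.0 GB/s
print(peak_bandwidth_gb_s(256, 14))  # RTX 2080    -> 448.0 GB/s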

NVIDIA decided to avoid High Bandwidth Memory for Turing due to the complications it adds to production, the limited availability of modules and the overall cost associated with HBM-based designs. All of these are points AMD became abundantly familiar with as first Fiji and then Vega struggled with availability. GDDR6, on the other hand, delivers more bandwidth than HBM2 did on Vega while also improving power efficiency by roughly 20% over the GDDR5X memory used in Pascal GPUs.

RTX2080-REVIEW-61.jpg

Much like NVIDIA’s last few launches, there’s a bit of a wrinkle in the way these new RTX-series cards are being presented. Instead of the Founders Editions representing so-called “reference spec” cards, this time they sport some of the best looking heatsinks I’ve ever seen, their maximum sustained Boost Clocks are increased and their on-PCB components are some of the best available on the market. As a result, prices are pushed even higher than the ones I mentioned previously: $1,200 for the RTX 2080 Ti and $800 for the standard RTX 2080.

From a personal standpoint I wouldn’t have an issue with these prices in relation to a reference card. But there are serious doubts we’ll see those so-called “reference” boards anytime soon, if at all. From everything I’ve heard, the Bill of Materials for the Turing core, cooling, memory and other components is so high that board partners have no incentive to create versions that hit $999 / $699. They wouldn’t be able to turn any sort of profit without NVIDIA somehow subsidizing their efforts to hit those mythical prices. As you can imagine, that’s a major problem and it obscures just how much RTX will actually cost buyers.

At this point what NVIDIA has to do is convince buyers that Turing can not only satisfy future gaming needs but also ensure their investment provides immediate dividends. While the RTX 2080 and RTX 2080 Ti’s long term success can’t be determined by this launch day review, at the very least it will show you how these new and very expensive cards perform right now.
 

The RTX 2080 & RTX 2080 Ti Founders Editions

A Closer Look at the RTX 2080 & RTX 2080 Ti Founders Edition


While many will look at the Founders Editions as high priced playthings for rich gamers, they are nonetheless spectacular looking graphics cards. As a matter of fact, these are two of the best-built cards that have ever passed through the HWC offices and it makes me wonder why anyone would buy one of the triple-slot monsters board partners are peddling.

RTX2080-REVIEW-3.png

Unlike previous Founders Edition cards that featured blower-style coolers, NVIDIA decided to go a different route with the RTX 2080-series. Their latest creation is a low-slung downdraft heatsink with a pair of axial fans to draw in cool air. While this does mean a lot of Turing’s copious heat output will remain within your case, the design leads to not only better cooling but also a much quieter setup.

The shroud itself is made of forged, milled aluminum in either a brushed anodized look or flat black. The effect is actually quite stunning.

RTX2080-REVIEW-11.png

The fans themselves look like a work of precision crafting and are powered by three-phase motors which, according to NVIDIA, are supposed to limit vibrational noise. They cover a full-length vapor chamber that spans the entire PCB and is topped by a massive aluminum fin array to ensure quick heat transfer between the components and the outside air.

RTX2080-REVIEW-17.jpg

By pulling off the heatsink and its accompanying base plate we can see that NVIDIA has spared no expense on the components. On the RTX 2080 Ti a 13-phase iMON DrMOS power supply is used, which provides a dynamic power management system featuring fine-grain current monitoring. This card’s 11GB of GDDR6 memory receives its own 3-phase setup as well. Meanwhile, the RTX 2080 needs quite a bit less juice so NVIDIA equipped it with an 8+2 layout which is still highly efficient but also well tailored to the leaner TU104 core.

RTX2080-REVIEW-1.png

Speaking of cooling, NVIDIA has really gone to town on the 2080-series’ backplates. The overall design is quite similar to those seen on previous generation cards but the all-silver coloring really makes them stand out. The actual heat dissipation potential of these backplates can be significant provided there’s enough air movement within the case, but they also get lava hot.

RTX2080-REVIEW-13.png

NVIDIA’s previous generation SLI implementation went through a few revisions, most recently with the “high bandwidth” interconnect option for their Pascal architecture. But due to the extremely high bandwidth requirements of Turing, there was a need for something else and that’s where NVLink gets factored into the equation. On the RTX 2080 and RTX 2080 Ti, this interface is covered with a machined aluminum cover that’s been padded with a rubber liner to preserve the backplate’s aesthetics.

Now there are a few things you need to know about NVLink, the most important of which is that NVIDIA is officially discontinuing three and four card setups. In its current form, Turing only supports dual card configurations with SLI over NVLink.

The real benefit of NVLink and its (up to) 100GB/s of bi-directional bandwidth is that it can not only balance workloads so the less-utilized GPU is properly leveraged, but it can also share memory capacity and other key resources across multiple cards. As a result, performance can scale in a nearly linear fashion when moving to higher resolution displays; NVIDIA gave an example of nearly 100% scaling when moving from 4K to 8K on current RTX-series GPUs.

RTX2080-REVIEW-15.png

Power connectors come in an expanded form when compared to flagship Pascal cards. The RTX 2080 Ti boasts dual 8-pin PCIe inputs while the RTX 2080 has a 6-pin and 8-pin layout. Make no mistake about it: while the RTX-series does focus on performance per watt, their benchmark numbers are so high that absolute power consumption has gone up as well. That means I wouldn’t recommend anything less than a 600W PSU for the RTX 2080 Ti and a 500W unit for the RTX 2080. If you are rocking a Threadripper or overclocked Skylake-X system, add 200W to each of those numbers.
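To put that guidance into something concrete, here is a hypothetical little helper; the names and numbers are just my reading of the recommendations above, not an official NVIDIA spec.

Code:
# Hypothetical PSU sizing helper based on the recommendations above.
BASE_PSU_WATTS = {"RTX 2080 Ti": 600, "RTX 2080": 500}

def recommended_psu_watts(card: str, power_hungry_cpu: bool = False) -> int:
    """Add 200W of headroom for Threadripper or heavily overclocked Skylake-X rigs."""
    return BASE_PSU_WATTS[card] + (200 if power_hungry_cpu else 0)

print(recommended_psu_watts("RTX 2080 Ti", power_hungry_cpu=True))  # 800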

RTX2080-REVIEW-2.png

The I/O area of NVIDIA’s RTX series (yes, the 2080 and 2080 Ti share the same layout) is actually quite unique. There’s a trio of DisplayPort 1.4a outputs that can support 8K at 60Hz or 4K at 120Hz alongside a single HDMI 2.0b port with HDCP 2.2 compatibility that can support 4K/60 HDR content.

The oddball here is the USB-C port, a first for any GPU. This isn’t your typical USB-C data connector but rather a VirtualLink interface that’s supposed to provide data and up to 27W of power to next generation VR headsets. Considering uptake of consumer VR headsets has decreased over the last year, it’s obvious this port won’t be used all that much, but there’s still a niche market that will take full advantage of it.
 

Meet the New Turing SM

Meet the New Turing SM


Much like with previous NVIDIA designs, the Turing architecture uses the Streaming Multiprocessor as the fundamental building block upon which the new GPUs are built. However, this time the SM’s internals have received a pretty significant makeover, to the point where some sections are barely recognizable when compared to the likes of Pascal and Maxwell.

The main reason for this shift is the incorporation of specific stages for ray tracing and deep learning functionality, two elements that were absent from previous designs. There have also been some substantial evolutions to standard rasterization stages to boost performance in more traditional rendering tasks.

According to NVIDIA, this design (the first evolution of which was seen in Volta) has taken nearly a decade to come to fruition. That should give you some idea of how far into the future technology companies have to look in order to be successful.

RTX2080-REVIEW-5.png

Starting right at the top, you’ll notice some noteworthy differences between this SM and the one found in Pascal. First and foremost, the number of FP32 / CUDA cores has been cut down from 128 to 64 in an effort to make the entire chip design slightly more scalable. NVIDIA has also taken that opportunity to include completely separate Integer and Floating Point units rather than combining all of those functions under the unified CUDA cores. There has also been a major rework to the caching and memory hierarchy within each SM so they’re better able to handle both shading and compute workloads. Meanwhile, there are still four texture units here but due to Turing’s revamped memory compression algorithms their efficiency has been drastically increased since less data needs to be transferred across their interface.

Perhaps the largest change this time around is the addition of the deep learning focused Tensor Cores and the ray tracing focused RT Core. These units play a huge part in NVIDIA’s plans for the future of gaming and as a result they take a preeminent place within the Turing architecture. I’ll cover their functions a bit more on an upcoming page.

RTX2080-REVIEW-6.png

These elements combine to make what NVIDIA claims is a brand new pipeline which is highly optimized for concurrent instead of serial rendering tasks. In plain English that means it can process multiple workloads at the same time rather than a part of the pipeline sitting idle while waiting for other tasks to finish.

A central part of this concurrency is the breakup of the previously inseparable integer and floating point tasks I talked about above. As shading evolved, so too did the instruction mix. The assumption used to be that the vast majority of instructions in modern games were floating point heavy, but it turns out almost one third of game instructions aren’t FP focused at all. During those stretches, the floating point pipeline would simply sit idle while waiting for other instructions to finish.

As a result, the CUDA cores often weren’t doing the thing they should have been doing, which is calculating floating point math; they were running non-FP operations while the FP pipeline sat idle. In an effort to remove this bottleneck NVIDIA moved towards a more scalable approach where functions are segregated into their own dedicated pipelines, so integer and FP ops can be issued at the same time.

RTX2080-REVIEW-16.png

This approach to concurrency is highlighted as we zoom in on what the Turing architecture can accomplish in a single rendered frame. The FP32 shading typically takes the longest time but instead of the INT32 shading ops being done before or after that task, they’re done in parallel, typically after the ray tracing work is complete. Speaking of the RT Cores, since they’re treated as a separate (but joined) pipeline, they don’t intrude on the workloads of other processes. Meanwhile, since most of the deep neural network work is currently focused on post processing elements like anti-aliasing, the Tensor Cores are brought into the action closer to the frame’s end.

RTX2080-REVIEW-7.png

As games have developed more and more complicated shading needs, the demands placed on the memory subsystem have increased as well. Hence the new caching design and higher end GDDR6 memory have been put to use in the Turing architecture to great effect.

Drilling down a bit further into the memory hierarchy differences between Pascal and Turing, there are some very notable changes. Each of the SM’s four processing blocks includes a new L0 instruction cache and a 64 KB register file, while all four blocks share a combined 96 KB L1 data cache/shared memory. This L1 cache can be divided a number of different ways. Traditional graphics workloads partition it as 64 KB of dedicated graphics shader RAM and 32 KB for texture caching. Compute workloads can divide the 96 KB into 32 KB shared memory and 64 KB L1 cache, or 64 KB shared memory and 32 KB L1 cache. Meanwhile, the shared L2 cache has been doubled to 6MB.
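To make those splits a little easier to visualize, here is a quick sketch of the partitioning options described above; the labels are mine, not NVIDIA’s.

Code:
# Per-SM splits of the 96 KB L1 data cache / shared memory pool (values in KB).
L1_SPLITS_KB = {
    "graphics":         {"shader_ram": 64, "texture_cache": 32},
    "compute_option_1": {"shared_memory": 32, "l1_cache": 64},
    "compute_option_2": {"shared_memory": 64, "l1_cache": 32},
}

# Every configuration carves up the same 96 KB pool.
assert all(sum(split.values()) == 96 for split in L1_SPLITS_KB.values())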

The result of Turing’s unique and rather evolutionary approach is a speedup of roughly 50% to overall shading performance.
 
A Dive into Turing - TU102, TU104 & TU106

A Dive into Turing - TU102, TU104 & TU106


By this point you are hopefully familiar with the Streaming Multiprocessor, the massively important cornerstone of the Turing architecture. But how does it fit into the grand scheme of things? Well, NVIDIA is in the process of launching not one but three separate cores based on the new Turing design. Supposedly the initial plan was to only launch the high end cards, the RTX 2080 Ti and RTX 2080, for the time being. However, the relatively quick ramp-up of the 12nm FFN manufacturing process meant the lower-end RTX 2070’s launch date was moved up a bit.

RTX2080-REVIEW-17.png

The big daddy of this lineup is the TU102 core which packs a whopping 18.6 billion transistors, 72 SMs, 4,608 CUDA cores, 288 texture units, 72 RT Cores and 576 Tensor Cores spread across six separate GPCs. At the other end of the spectrum it has an even dozen 32-bit GDDR6 memory controllers, 96 ROPs spread across twelve groups of eight and 6MB of L2 cache. However, in order to increase yields and lower power consumption, NVIDIA has modified the full core somewhat to create the RTX 2080 Ti.

In short, they’ve cut four Streaming Multiprocessors, resulting in a core with 4,352 CUDA cores and 272 TMUs. Then, since the L2 cache, ROPs and memory controllers are joined at the hip in this design, NVIDIA also eliminated a bank of eight ROPs, a single 32-bit GDDR6 memory controller and 512KB of the L2 cache. The end result is still an immensely powerful design that combines high clock speeds with more than adequate bandwidth.
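Those numbers all fall out of simple per-SM arithmetic; the quick check below assumes 64 CUDA cores and 4 texture units per SM, as described on the previous page.

Code:
# Back-of-the-envelope check of the full TU102 and the cut-down RTX 2080 Ti config.
CUDA_PER_SM, TMU_PER_SM = 64, 4

full_sms = 72
print(full_sms * CUDA_PER_SM, full_sms * TMU_PER_SM)   # 4608 CUDA cores, 288 TMUs

ti_sms = full_sms - 4                                  # four SMs disabled
print(ti_sms * CUDA_PER_SM, ti_sms * TMU_PER_SM)       # 4352 CUDA cores, 272 TMUs
print(96 - 8, "ROPs on a", (12 - 1) * 32, "bit bus")   # 88 ROPs, 352-bit bus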

RTX2080-REVIEW-19.png

Moving down the stack a bit we come to the 13.6 billion transistor TU104 core that’s being used in the RTX 2080, and there are some fundamental differences here. While there are still six GPCs, instead of each housing a dozen Streaming Multiprocessors these have eight apiece, resulting in 3,072 CUDA cores, 384 Tensor Cores and 48 RT Cores alongside 192 texture units. Meanwhile, back-of-house tasks are handled by 64 ROPs, 4MB of L2 cache and eight 32-bit memory controllers.

Unlike the RTX 2080 Ti which uses a heavily cut down core design, the TU104 is almost fully utilized on NVIDIA’s RTX 2080, with just two of its SMs disabled. The other major change (aside from the lower overall specs) is that the TU104’s NVLink implementation has a single x8 link, resulting in 50GB/s of aggregate bandwidth versus the 100GB/s of the TU102. That’s completely understandable since the overall performance output of this core doesn’t require a massive amount of link bandwidth for adequate communication with a second card.
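For what it’s worth, those figures are consistent with each x8 NVLink connection carrying roughly 25 GB/s in each direction; my assumption in the sketch below is simply that total bandwidth scales with the number of links.

Code:
# Assumed per-link NVLink bandwidth (25 GB/s each way = 50 GB/s bidirectional).
PER_LINK_BIDIRECTIONAL_GB_S = 50

print(1 * PER_LINK_BIDIRECTIONAL_GB_S)  # TU104: single x8 link -> 50 GB/s
print(2 * PER_LINK_BIDIRECTIONAL_GB_S)  # TU102: two x8 links   -> 100 GB/s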

RTX2080-REVIEW-18.png

The final core being announced in the Turing lineup, at least for the time being, is the TU106 which will grace NVIDIA’s RTX 2070. According to the information we have, this 10.8 billion transistor core has been architected for the perfect blend of efficiency, pricing and performance. It actually shares a lot in common with the TU102 since there are a dozen SMs per GPC, and in many ways it looks like NVIDIA took their higher end core and simply cut it in half.

There are actually some interesting things going on here too. Unlike the RTX 2080 Ti which has a ROP partition to GPC ratio of 2:1, this chip has eight ROP groupings for its three GPCs. There’s also 4MB of L2 cache, which is quite a bit considering how many SMs and texture units are feeding information into the memory subsystem. However, this ROP- and cache-heavy design was necessary to ensure the TU106 received the eight memory controllers required for a 256-bit GDDR6 bus.

All in all, the RTX 2070 will ship with 2,304 CUDA cores, 144 texture units and 64 ROPs, but it won’t have the ability to run SLI / NVLink so single card setups will be the only way to go. This chip will also likely be used for further cut-down versions of the RTX series if NVIDIA sees a market for them.
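Once again, the quoted figures line up with the same per-SM building blocks; a quick sketch:

Code:
# TU106 / RTX 2070 sanity check using the same per-SM building blocks.
CUDA_PER_SM, TMU_PER_SM = 64, 4

tu106_sms = 3 * 12                                         # three GPCs, a dozen SMs each
print(tu106_sms * CUDA_PER_SM, tu106_sms * TMU_PER_SM)     # 2304 CUDA cores, 144 TMUs
print(8 * 8, "ROPs and a", 8 * 32, "bit memory bus")       # 64 ROPs, 256-bit GDDR6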
 
Test System & Setup

Test System & Setup



Processor: Intel i9 7900X @ 4.81GHz
Memory: G.Skill Trident X 32GB @ 3600MHz 16-16-16-35-1T
Motherboard: ASUS X299-E STRIX
Cooling: NH-U14S
SSD: Intel 900P 480GB
Power Supply: Corsair AX1200
Monitor: Dell U2713HM (1440p) / Acer XB280HK (4K)
OS: Windows 10 Pro patched to latest version


Drivers:
NVIDIA 411.51 Beta (RTX series)
NVIDIA 399.24 WHQL
AMD 18.9.1 WHQL

*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within each second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over that time, the average result will be 66.67 FPS. The minimum and maximum values meanwhile are simply the two data points representing the single second intervals which took the longest and shortest amount of time to render. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
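As a trivial illustration of that math, using the 60 second / 4,000 frame example above:

Code:
# FRAPS-style average: total frames divided by benchmark length.
total_frames, benchmark_seconds = 4000, 60
print(round(total_frames / benchmark_seconds, 2))  # 66.67 FPS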

FRAPS also has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in FPS over time graphs. It does this by simply logging the reported framerate result once per second. However, in the real world a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing, which is where the Frame Time Analysis Tool and OCAT get factored into the equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation rather than averaging things out over a full second. The longer the frame time, the longer that frame takes to render and the lower the instantaneous framerate. This detailed reporting just isn’t possible with standard benchmark methods.
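Since frame times and framerates are just reciprocals of one another, converting between the two is a one-liner; the sketch below uses a couple of made-up frame times purely for illustration.

Code:
# Frame time in milliseconds <-> instantaneous framerate.
def frametime_to_fps(frametime_ms: float) -> float:
    return 1000.0 / frametime_ms

print(frametime_to_fps(16.7))  # ~60 FPS
print(frametime_to_fps(33.3))  # ~30 FPS, i.e. that frame lingered twice as long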

We are using OCAT or FCAT (depending on compatibility) for ALL benchmark results in DX11 and DX12.

Not only does OCAT have the capability to log frame times at various stages throughout the rendering pipeline but it also grants a slightly more detailed look into how certain API and external elements can slow down rendering times.

Since PresentMon and its offshoot OCAT throw out massive amounts of frametime data, we have decided to distill the information down into slightly more easy-to-understand graphs. Within them, we have taken several thousand data points (in some cases tens of thousands), converted the frametime milliseconds over the course of each benchmark run into frames per second and then graphed the results. That framerate-over-time data is then distilled down further into the typical bar graphs, which average out every data point as it’s presented.
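In practice the distillation looks something like the sketch below; the frame times are made up and the real captures obviously contain thousands of samples rather than five.

Code:
# Simplified version of our distillation: OCAT/PresentMon frame times (ms) -> FPS.
frametimes_ms = [16.2, 17.1, 15.8, 40.5, 16.4]               # dummy capture data

fps_over_time = [1000.0 / ft for ft in frametimes_ms]        # the line graphs
bar_chart_average = sum(fps_over_time) / len(fps_over_time)  # the bar graph value
print([round(f, 1) for f in fps_over_time], round(bar_chart_average, 1))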


Understanding the “Lowest 1%” Lines


In the past we had always focused on three performance metrics: performance over time, average framerate and pure minimum framerates. Each of these was processed from the FCAT or OCAT results and distilled down into a basic chart.

Unfortunately, as more tools have come of age we have decided to move away from the "minimum" framerate indication since it is a somewhat deceptive metric. Here is a great example:

RX580-REVIEW-55.jpg

In this example, which is a normalized framerate chart built from a 20,000 line log of frame time milliseconds from FCAT, our old "minimum" framerate would have simply picked out the single low spike in the chart above and reported that as the absolute minimum. Since we gave you the context of the entire timeline graph, it was easy to see how that point related to the overall benchmark run.

The problem with that minimum metric was that it was a simple snapshot that didn't capture how "smooth" a card's output was perceived to be. As we've explained in the past, it is easy for a GPU to have a high average framerate while throwing out a ton of interspersed higher latency frames. Those frames can be perceived as judder and while they may not dominate the gaming experience, their presence can seriously detract from your immersion.

HD7990-AMD-95.jpg

In the case above, there are a number of instances where frame times go through the roof, none of which would accurately be captured by our classic Minimum number. However, if you look closely enough, all of the higher frame latency occurs in the upper 1% of the graph. When translated to framerates, that's the lowest 1% (remember, high frame times = lower frame rate). This can be directly translated to the overall "smoothness" represented in a given game.

So this leads us to the "Lowest 1%" within our graphs. What this represents is an average of the lowest 1% of results from a given benchmark output. We basically take the thousands of lines within each benchmark capture, find the average frame time and then also parse out the worst 1% of those results (the highest frame times) as a representation of the worst case frame times, or smoothness. These frame time numbers are then converted to actual framerates for the sake of legibility within our charts.
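Expressed as code, the whole process boils down to something like this; it's a simplified sketch of the approach described above, not the exact script we use.

Code:
# "Lowest 1%" metric: average the worst 1% of frame times, then convert to FPS.
def lowest_one_percent_fps(frametimes_ms: list) -> float:
    worst_first = sorted(frametimes_ms, reverse=True)   # highest frame times = lowest framerates
    cutoff = max(1, len(worst_first) // 100)             # the worst 1% of all samples
    avg_worst_ms = sum(worst_first[:cutoff]) / cutoff
    return 1000.0 / avg_worst_ms                         # back to FPS for the charts

def average_fps(frametimes_ms: list) -> float:
    return 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))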
 
Battlefield 1 (DX12)

Battlefield 1 DX12 Performance


Battlefield 1 has become known as one of the most popular multiplayer games around but it also happens to be one of the best looking titles out there. It is extremely well optimized too, with even the lowest end cards having the ability to run at high detail levels.

In this benchmark we use a runthrough of The Runner level after the dreadnought barrage is complete and you need to storm the beach. This area includes all of the game’s hallmarks in one condensed space, with fire, explosions, debris and numerous other elements layered over one another for some spectacular visual effects.


RTX2080-REVIEW-30.jpg

RTX2080-REVIEW-43.jpg


The first taste of what these RTX-series cards have to offer is pretty impressive. The RTX 2080 Ti is way, way out in front and even the standard RTX 2080 is able to beat the GTX 1080 Ti by about 13%. The 4K results continue that trend too, but the RTX 2080 Ti stretches its legs even more, offering a nearly 50% improvement over the GTX 1080 Ti.
 
Call of Duty: World War II

Call of Duty: World War II Performance


The latest iteration of the COD series may not drag out niceties like DX12 or a particularly unique playing style, but it is nonetheless a great looking game with plenty of action and drama, not to mention a great single player storyline.

This benchmark takes place during the campaign’s Liberation storyline wherein we run through a sequence combining various indoor and outdoor elements along with some combat, explosions and set pieces.


RTX2080-REVIEW-31.jpg

RTX2080-REVIEW-44.jpg


Call of Duty continues the trend we saw with BF1. Both RTX cards are well ahead at 1440P, and the Ti posts some impressive numbers. However, turn up the stress at 4K and the RTX 2080 Ti’s lead widens even more while the gap between the RTX 2080 and GTX 1080 Ti narrows a bit. That’s likely due to the RTX 2080’s smaller memory footprint.
 
Destiny 2

Destiny 2 Performance


Destiny 2 is a game that continues to evolve to suit online gameplay and new game styles but it has always remained a good looking DX11-based title. For this benchmark we use the single player Riptide mission which combines environmental effects like rain, an open world setting and plenty of scripted combat.

Multiplayer maps may be this game’s most-recognized element but unfortunately performance in those is highly variable.


RTX2080-REVIEW-32.jpg

RTX2080-REVIEW-45.jpg


Destiny 2 is next on our list and the performance seen here is just blazing fast from every one of the cards. Something I do want to mention right now is the RTX 2080’s improvement over the original GTX 1080. It typically hovers around 50% or so which is why I’ve been comparing it to the GTX 1080 Ti instead up to this point. These really are some fast cards but remember, they also cost a small fortune.
 
Far Cry 5

Far Cry 5 Performance


With a beautiful open world but a day / night cycle that can play havoc with repeatable, accurate benchmarking, Far Cry 5 has a love / hate relationship with us around here. In this benchmark we use the area around the southwest region’s apple orchards for a basic run-through alongside touching off a small brushfire with explosives.

RTX2080-REVIEW-33.jpg

RTX2080-REVIEW-46.jpg


My results in Far Cry 5 at 1440P are interesting since, according to the system’s resource monitor, two cores on the overclocked 7900X were maxed out during the benchmark run with the RTX 2080 Ti. This could mean its performance was slightly limited by the CPU. Once again, at 4K the Ti surges further ahead while the RTX 2080 and GTX 1080 Ti are neck and neck.
 
Forza Motorsport 7 (DX12)

Forza Motorsport 7 DX12 Performance


Forza 7 is a racing game with a very similar heart to the legendary Gran Turismo series and it looks simply epic. It also happens to use a pretty efficient DX12 implementation. For this benchmark we use the Spa-Francorchamps track along with a full field of competing cars. In addition, the rain effects are turned on to put even more pressure on the GPUs.

RTX2080-REVIEW-34.jpg

RTX2080-REVIEW-47.jpg


Forza 7 was another odd one since it seemed to be CPU limited again at 1440P, but that didn’t stop the RTX cards from dominating. They pulled ahead even further at 4K and here even the RTX 2080 maintained a pretty big lead over the GTX 1080 Ti. That’s pretty impressive!
 
