When NVIDIA first showed off their RTX 2080 Ti and RTX 2080 at Gamescom, many who watched the livestream and attended in person were left with more questions than answers. That brings us to today, since we’re finally able to talk about the one thing everyone wanted to know about: performance in today’s games.
By this point many of you are simply going to skip ahead to the benchmarks and eventually conclusion but there’s a lot of stuff going on behind the scenes which is relevant not just for today’s titles but tomorrow’s as well. That’s because NVIDIA has taken a completely different approach to the baseline architecture. Rather than using yet another evolution of a decade-old architecture, they’ve moved towards a more revolutionary approach which is being billed as the largest shift in the graphics pipeline since GeForce 256.
That’s a big promise but beyond the hyperbole, marketing speak and plain old hype, I need to get into the specifics that buyers need to know about. Like the new architecture itself, this is a very different kind of launch, one that brings some major concerns but also hope that the PC gaming market may be on the cusp of something amazing.
As their names suggest, this new series of cards utilizes NVIDIA’s RTX ecosystem, which combines real time ray tracing, GPU compute, deep learning AI and typical rasterization into a single unified architecture code-named Turing. In a nutshell (I’ll go into more detail on the next pages) these are supposed to be graphics cards targeting the gaming market, but they’re also packed with features that haven’t been utilized in game development. At least not yet, but NVIDIA is hoping developers are able and willing to use tools traditionally reserved for other industries to create the virtual worlds of tomorrow.
For the time being, NVIDIA will be launching two cards: the aforementioned RTX 2080 and RTX 2080 Ti. Personally I think the use of these names is pretty unfortunate since they immediately draw parallels with the previous generation. The similar naming has naturally caused potential buyers to treat the RTX 20-series as a direct successor to the GTX 10-series. When put into that context, as spiritual successors these new cards look positively overpriced. But remember the Turing architecture is very, very different from Pascal, and with its breadth of forward-looking features, the RTX generation is more akin to previous TITAN cards than anything in the GTX stable.
The RTX 2080 Ti sits at the very top of NVIDIA’s current lineup, offering significantly more cores and texture units than the GTX 1080 Ti. What hasn’t changed with this generation is the ROP and memory controller setup. Though there have been drastic improvements in the way these parts are addressed, the TU102 core still communicates with its 11GB of memory via a 352-bit bus while the ROP count still stands at 88.
One of the major challenges for the RTX 2080 Ti will be its price. It may be the most powerful gaming GPU on the planet but at $1,000 it is also the highest priced of all time. Granted, there have been dual-GPU cards and TITAN series parts that have carried even higher MSRPs, but never before has a new architecture launched with a thousand dollar price tag.
NVIDIA’s RTX 2080 is not only significantly less power hungry, but its $700 cost is also more palatable for most potential buyers. It is supposed to, at the very least, match the GTX 1080 Ti’s overall performance, which could come as a surprise if you are only looking at its on-paper specifications. But Turing’s true power lies in the architecture’s significant departure from previous generations, so those visible specs are backstopped by a massive amount of additional rendering grunt. That’s been accomplished while also consuming less power than the GTX 1080 Ti.
You’ll also notice two new columns in the chart above, those being for RT cores and Tensor cores which will be (according to NVIDIA at least) key technological components in their efforts to move the gaming industry forward. In short, the RT cores are purpose-built stages that are engineered to handle ray tracing activities while the Tensor cores are utilized to accelerate artificial intelligence features. At this point these specs don’t really mean much but as games are launched with support for elements of the RTX ecosystem, we’ll be able to get a better handle on how they perform.
One of the major improvements within NVIDIA’s Turing architecture is its compatibility with Micron’s next generation GDDR6 memory. Thus, despite using exactly the same bus width and capacity allotments as their predecessors, the RTX 2080 Ti and RTX 2080 boast significantly more overall memory bandwidth at 616 GB/s and 448 GB/s respectively.
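Those bandwidth figures follow directly from bus width and per-pin data rate. As a quick sanity check, here is a minimal sketch that assumes a 14 Gbps effective GDDR6 data rate (that rate is my inference, but it is the only one consistent with the quoted 616 GB/s and 448 GB/s numbers):

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8 bits per byte)
# * per-pin data rate in Gbps.
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# RTX 2080 Ti: 352-bit bus at 14 Gbps
print(peak_bandwidth_gb_s(352, 14))  # 616.0
# RTX 2080: 256-bit bus at 14 Gbps
print(peak_bandwidth_gb_s(256, 14))  # 448.0
```

The same formula shows why GDDR5X at 11 Gbps on a 352-bit bus yielded the GTX 1080 Ti’s 484 GB/s: the Turing cards gain their bandwidth entirely from faster memory, not wider buses.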
NVIDIA decided to avoid High Bandwidth Memory for Turing due to the complications it adds to production, the limited availability of the modules and the overall cost associated with HBM-based designs. All of these are points AMD became abundantly familiar with as first Fiji and then Vega struggled with availability. GDDR6, on the other hand, delivers bandwidth above what HBM2 offered on Vega while also providing 20% better power efficiency than the GDDR5X memory used in Pascal GPUs.
Much like NVIDIA’s last few launches, there’s a bit of a wrinkle in the way these new RTX series cards are being presented. Instead of the Founders Edition representing a so-called “reference spec” card, this time they sport some of the best looking heatsinks I’ve ever seen, their maximum sustained Boost Clock is increased and on-PCB components are some of the best available on the market. As a result, their costs are increased to even higher levels than the ones I mentioned previously, to the tune of $1,200 for the RTX 2080 Ti and $800 for the standard RTX 2080.
From a personal standpoint I wouldn’t have an issue with these prices in relation to a reference card. But there are serious doubts we’ll see those so-called “reference” boards anytime soon, if at all. From everything I’ve heard the Bill of Materials for the Turing core, cooling, memory and other components is so high board partners have no incentive to create versions that hit $999 / $699. They wouldn’t be able to turn any sort of profit without NVIDIA somehow subsidizing their efforts to achieve those mythical prices. As you can imagine, that’s a major problem and it flies in the face of just how much RTX will actually cost buyers.
At this point what NVIDIA has to do is convince buyers that Turing can not only satisfy future gaming needs but also ensure their investment provides immediate dividends. While the RTX 2080 and RTX 2080 Ti’s long term success can’t be determined by this launch day review, at the very least it will show you how these new and very expensive cards perform right now.