AMD RX Vega 64 & Vega 56 Performance Review

Editor-in-Chief

AMD’s RX Vega 64 and RX Vega 56 graphics cards are finally here and, in all my time in this industry, I have never experienced anything quite like this launch. This architecture is AMD’s best hope to effectively compete in both the high-end CPU and GPU markets at the same time. Due to a string of disappointing processor architectures, even when they were paired with competitive GPUs, that never happened. The Threadripper series changed that equation in a big way by putting the screws to Intel’s HEDT lineup, so the ball is now in the Radeon Technologies Group’s court to complete the one-two punch.

The importance of Vega and its place in the AMD ecosystem cannot be denied since it is supposed to be around for the foreseeable future. It is also an attempt to once again compete against NVIDIA’s higher end graphics cards more than two years after the HBM-toting R9 Fury X (which felt more like a science experiment than anything else) came out. Radeon fans and people simply hoping for an alternative to NVIDIA’s GeForce cards alike have some pretty big hopes for Vega.

AMD is launching a pair of cards today: the Vega 64 and Vega 56. Meant to compete against the GTX 1080 (no, not the GTX 1080 Ti or Titan X), the Vega 64 represents the architecture’s uncut and unedited flagship. It will go for between $499 and $699 depending on the version you end up buying, with availability slated for today. The Vega 56, meanwhile, is a cut-down version of the 64 which will retail for $399, with availability slated for August 28th.

Now that you have some context it should be easier to understand why this launch is so bizarre. After merrily leading all of us down the Vega path for 18 months or so with sneak peeks at the feature sets and inner workings of Vega, AMD has given press less than three days to analyze their new cards. They’ve also ensured that two of those days land on the weekend, the one time many of us get to spend with our families. You can take this as just another entitled reviewer complaining, but the “18 months of cookie crumbs followed by three days of testing” approach benefits no one. It is insulting to everyone from readers to press to the folks who worked so hard to bring Vega to fruition. The launch has been so condensed that AMD actually requested we prioritize testing on the Vega 56, a card that will only be available at the end of this month. I’m guessing they feel the metrics on that one will look better.

Not allowing journalists and YouTube personalities enough time to properly analyze a product does have its benefits, at least for the company doing the launching. Give us a larger window and more testing will be done, raising the likelihood we end up finding some potentially damaging information. This isn’t a conspiracy theory of mine either but rather a well-honed marketing strategy that has been used by some of the world’s largest companies. Granted, no one is making us push this out on Day 0, but there are tangible benefits to being among the first to publish content. That’s why this review will be a fraction of its normal size: I focused on testing as many games as possible rather than going into our usual architectural deep dive.

So with all of that off my chest, let’s get on with the show and take a look at what’s under the bonnet of AMD’s Vega 64 and Vega 56.

The new 12.5 billion transistor Vega 10 core has more improvements baked into it than I could possibly get to in this shortened review, but there are nonetheless some key callouts that need to be discussed. First and foremost, this layout is nearly identical to the one used on AMD’s Fiji core from a block diagram standpoint, but that doesn’t mean this is a simple copy / paste of that design. Rather, AMD calls this the most sweeping change to their architecture since the introduction of Graphics Core Next about five years ago.

Underpinning this core is AMD’s ubiquitous Infinity Fabric which links the graphics core to the other on-chip logic blocks like the display engine, memory controllers and other components. This should allow Vega to be more easily scalable than past designs and be incorporated into other areas of AMD’s product stack with a minimum of reworking.

There has also been a fundamental change to the Compute Units themselves, necessitating a renaming to Next Generation Compute Units. Each of these 64 NCUs includes 64 stream processors, 4 texture units, 64KB of local data share and 16KB of L1 cache while being tuned for higher clock speeds and support for Rapid Packed Math. There is also a Next Generation Geometry engine (NGG) which offers a new, faster geometry pipeline for developers who want to take advantage of it.

The last addition is the High Bandwidth Cache subsystem which allows users to set aside some of their system memory for use if a game requires more than the 8GB of video memory supported by Vega. While Vega won’t be memory limited in any of today’s games and this feature won’t be beneficial right now, it is there for some future-proofing. We’ll go into the details of HBC on a separate page in this article.

Currently the RX Vega lineup is made up of three distinct cards. The Vega 64 Liquid Cooled takes the preeminent position with base and maximum clock speeds of 1406MHz and 1667MHz respectively. It also comes decked out with 4096 Stream Processors, 256 Texture Units and 64 ROPs. Those specifications are actually quite interesting since they align perfectly with the Fiji XT core in AMD’s previous flagship GPU, the R9 Fury X. However, the differences between those two graphics cards are like night and day from an architecture perspective, with Vega 10 being finely honed for higher clocks and increased overall engine efficiency.

Vega 10 also happens to be AMD’s first graphics core to use HBM2 memory, a feature which may have contributed to its extended delays. In this case the Vega 64 has 8GB running at 945MHz across a 2048-bit wide interface, which results in 483 GB/s of bandwidth. This is actually a bit lower than the Fiji series of cards, but according to AMD, Fiji never required that much bandwidth to begin with and the higher memory speed of HBM2 allowed them to push an optimal amount of data through a much narrower bus.
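
For anyone wondering where that 483 GB/s figure comes from, here is a quick back-of-the-envelope check. It assumes HBM2’s usual double data rate behavior (two transfers per clock), which is how the published numbers work out; this is just a sketch of the arithmetic, not anything from AMD’s own tools.

# Rough HBM2 bandwidth math (assumes double data rate, i.e. two transfers per clock)
def hbm2_bandwidth_gb_s(bus_width_bits, memory_clock_mhz, transfers_per_clock=2):
    bytes_per_transfer = bus_width_bits / 8                        # 2048-bit bus = 256 bytes
    transfers_per_second = memory_clock_mhz * 1_000_000 * transfers_per_clock
    return bytes_per_transfer * transfers_per_second / 1e9         # decimal GB/s

print(hbm2_bandwidth_gb_s(2048, 945))   # ~483.8 GB/s for the RX Vega 64
print(hbm2_bandwidth_gb_s(2048, 800))   # ~409.6 GB/s for the RX Vega 56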

AMD is hoping the RX Vega 64 Liquid Cooled will end up bridging the gap between NVIDIA’s GTX 1080 and GTX 1080 Ti, and its sky-high price of $699 reflects that. You also won’t be able to buy this card individually. Rather, it will only be purchasable in a so-called “Radeon Aqua Pack” which also includes a monitor, Ryzen 7 processor, an X370 motherboard and two “free” games. Basically, your $699 purchase will lead to spending over $2,000 on what amounts to a whole new setup. You can learn a bit more about those packages here.

Move a bit further down market to a more accessible price point and you get the RX Vega 64. This card boasts the exact same core specs as the Liquid Cooled version but comes with an air cooler and lower clock speeds. It starts at $499 when bought individually, but if purchased as part of the Radeon Black Pack the price goes up to $599. On the positive side, the first few hundred buyers of that package will get the oh-so-sexy Limited Edition RX Vega 64 with its brushed aluminum shroud. Nonetheless, this card is supposed to compete against NVIDIA’s extremely popular GTX 1080, a product that has sat at its lofty $549 position for the better part of five months now after launching at $599 last year.

The last but certainly not least card in AMD’s new lineup is the RX Vega 56. Priced at just $399 and not attached to any ridiculous bundle, I think this is the GPU many of you will look at most closely. Its competitor is the golden goose of NVIDIA’s lineup: the $379 GTX 1070. As the name suggests, the Vega 56 has 56 of its NCUs enabled, granting 3584 cores, 224 Texture Units and 64 ROPs, which also happen to be the exact same specs as the R9 Fury. Coincidence? Maybe. Clock speeds are pretty respectable as well with a maximum frequency of 1471MHz, while the base clock gets reduced to 1156MHz. Memory has been given a small cut too, but only on the clock side: there’s still 8GB of HBM2 on a 2048-bit interface, now running at 800MHz. That results in a fairly significant drop in bandwidth to 410 GB/s.

All of these cards look pretty good on paper, at least until we get to their comparative power consumption against NVIDIA’s opposites. The RX Vega 64 Liquid Cooled is supposed to split the difference between the GTX 1080 and GTX 1080 Ti and yet it seems to consume almost 100W more than the GTX 1080 Ti. Let that sink in for a moment; you could be running two GTX 1080s full tilt and those would only consume a few more watts than the single Vega 64 LC. To make matters even more interesting, remember that AMD uses typical board power in their specifications whereas NVIDIA lists maximum board power. With that in mind, these AMD cards may conceivably require even more juice than what’s listed.

The standard Vega 64 doesn’t fare all that much better, but instead of a small nuclear reactor to power it, you’ll only need a hydroelectric dam on your local river. All kidding aside, seeing a possible consumption figure just south of 300W on a card that’s supposed to compete against the 180W GTX 1080 is worrying to say the least. It looks like AMD pushed these cards to extreme levels to equal what NVIDIA has on tap.

Unfortunately, while the Vega 56’s cuts have brought its power envelope down to a much more reasonable 210W, it too is up against a very efficient card in the NVIDIA GTX 1070. The delta between the two is “just” 60W, but we’ll have to test that for ourselves.

So now that AMD has finally laid the new cards bare for all to see, there’s just a few questions (other than performance of course!) that still need answering. First of all, how much of a mess will availability be? This isn’t a question of “if” since availability problems with Vega are a foregone conclusion due to extremely limited core production numbers coupled with potential demand from miners. I’ve already seen some internal distributor listings with $50 to $100 markups on what amounts to phantom stock. I’d also like to know what AMD’s sudden rush is.

After slowly plodding along with this launch, the quick pedal to the metal mentality of this last week is highly questionable, if not suspicious. Could they have caught wind of something NVIDIA may or may not be planning? We’ll have to see. Until then, we have a bunch of benchmarks to get to. So without any more yammering on this page, let’s get on with the review.

A Closer Look at the RX Vega 64 & 56

I’m sure you thought the first image of the Vega cards would be one with a sexy brushed aluminum heatsink, but that wasn’t meant to happen. You see, the vast majority of buyers will receive a card that looks like the one above, since the one seen in most images will only be available in a Radeon Black Pack and will be a limited edition. The RX Vega 64 and 56 basically look like supersized RX 480s with a black dimpled fan shroud, a blower-style fan setup and a length of about 10.5”.

These cards require copious amounts of power and therefore AMD has chosen to utilize a pair of 8-pin power connector inputs. Remember that a power supply needs to have two native 8-pin PCIe cables for the Vega cards and you should not be using 6-pin to 8-pin adapters.

The back is covered by a full contact heat dissipation plate that’s also been painted black. The whole setup gives the Vega series a pretty stealthy look.

That backplate has a few cut outs, one of which is for the GPU Tach, an LED indicator that shows GPU usage in real time. Meanwhile, there’s also a pair of DIP switches that are used to either turn off the card’s main LED (on the Radeon logos) or change it from red to blue.

The I/O connectors are pretty straightforward as well with a trio of DisplayPort 1.4 outputs and a lone HDMI 2.0 plug.

I couldn’t finish this section without at least mentioning the Limited Edition Vega 64, now could I? The only real difference between this card and the more plebeian reference version is its shroud. Gone is the black plastic and in its place is a high quality brushed aluminum affair with the RX Vega logo front and center. It really does look incredible and it would have made a great visual alternative to NVIDIA’s Founders Edition.

It also has a small cubic Radeon logo that’s illuminated with a soft red glow.

The backplate also gives the card a finished look and carries on the card’s design to a clean conclusion.

Understanding Power Profiles & The HBCC

In an effort to give users finer-grained control over Vega’s voracious appetite for power, AMD has implemented a slider within their Wattman overclocking utility. Basically it allows you to modify the card’s behavior with a few clicks rather than going through the trial and error of overclocking or underclocking. The default position is Balanced, which evens out clock speeds while keeping Vega’s fan curve at relatively decent speeds. On either side of this, Power Saver is meant to deliver a quieter gaming experience by clocking the core to lower speeds and delivering an optimal performance per watt quotient, while Turbo is basically a balls-to-the-wall setting that sacrifices power consumption and acoustics for higher overall performance.

According to AMD, Power Saver can increase a Vega 64’s performance per watt ratio by up to 25% while the delta between Turbo and Balanced is somewhere in the neighborhood of 3% in terms of actual framerates. What this tells us is that Vega has been pushed to its absolute limit in an effort to compete with NVIDIA’s Pascal architecture. For the purposes of testing within this review, we’ve left the slider in its default Balanced position.

All Vega cards also have a small VBIOS switch located on their outside edge which toggles between two different power “modes”. They ship in the “primary” (or higher wattage) position whereas the secondary position allows for even lower power requirements. When used in conjunction with the Power Profile slider in Wattman, this can reduce power draw by up to 50W.

A Quick Mention about HBCC

AMD’s High Bandwidth Cache Controller was initially designed for the professional market to grant the GPU core direct access to high speed storage subsystems. It would then use those storage systems (be it an SSD array or onboard memory) as a partial caching partition for page files.

The HBCC implementation in Vega 10’s desktop version is a bit different. It allows you to dynamically utilize system memory as part of the GPU’s cache or memory partition. The 8GB of HBM2 video memory should be more than enough for any of today’s games, but there’s no guarantee 8GB will be enough for future applications. To compensate for this, AMD has added a slider within Radeon Settings that sets aside system memory for use if the software detects local video memory is running low. The controller will then move unused data to the slower system memory, freeing up space in the HBM2 for higher priority data. This is done seamlessly, as the game engine simply sees the total capacity of the HBM2 plus the allocated system memory.

For the time being at least, this setting won’t have any use unless you somehow find a game that gobbles up more than 8GB of video memory. However it could be a game changer for HBM GPUs which typically have a smaller memory footprint than their GDDR-equipped alternatives.

Test System & Setup

Processor: Intel Core i7-6900K @ 4.6GHz
Memory: G.Skill Trident X 32GB @ 3200MHz 15-16-16-35-1T
Motherboard: ASUS X99 Deluxe
Cooling: NH-U14S
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell U2713HM (1440P) / Acer XB280HK (4K)
OS: Windows 10 Pro Creators Update

Drivers:
AMD 17.30.1051-B4
NVIDIA 385.12 Beta

Notes:

– All games tested have been patched to their latest version

– The OS has had all the latest hotfixes and updates installed

– All scores you see are the averages after 3 benchmark runs

– All IQ settings were adjusted in-game and all GPU control panels were set to use application settings

The Methodology of Frame Testing, Distilled

How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement from FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing the single second intervals which took the longest and shortest amount of time to render. Combining these values gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
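
As a rough illustration of the math described above (this is just a sketch of the same idea, not FRAPS’ actual code), average, minimum and maximum FPS fall out of a simple list of per-second frame counts:

# frames_per_second[i] = number of frames rendered during second i of the benchmark run
def fraps_style_summary(frames_per_second):
    total_frames = sum(frames_per_second)
    duration_seconds = len(frames_per_second)
    return {
        "average_fps": total_frames / duration_seconds,  # e.g. 4,000 frames / 60 s = 66.67 FPS
        "minimum_fps": min(frames_per_second),           # the single slowest second
        "maximum_fps": max(frames_per_second),           # the single fastest second
    }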

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool factors into this equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The longer each frame takes to render, the lower the instantaneous framerate. This detailed reporting just isn’t possible with standard benchmark methods.

We are now using FCAT for ALL benchmark results in DX11.

DX12 Benchmarking

For DX12 many of these same metrics can be utilized through a simple program called PresentMon. Not only does this program have the capability to log frame times at various stages throughout the rendering pipeline but it also grants a slightly more detailed look into how certain API and external elements can slow down rendering times.

Since PresentMon and its offshoot OCAT throw out massive amounts of frametime data, we have decided to distill the information down into slightly more easy-to-understand graphs. Within them, we have taken several thousand datapoints (in some cases tens of thousands), converted the frametime milliseconds over the course of each benchmark run to frames per second and then graphed the results. This gives us a straightforward framerate over time graph. Meanwhile the typical bar graph averages out every data point as it’s presented.
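
To make that post-processing a little more concrete, here is a simplified sketch of the conversion. It assumes a PresentMon/OCAT style CSV where the per-frame interval lives in an “MsBetweenPresents” column; the exact header can differ between tool versions, so treat the column name as a placeholder rather than gospel.

import csv

# Turn a frame time log into per-second framerates, the basis of an "FPS over time" graph.
def fps_over_time(csv_path, frametime_column="MsBetweenPresents"):
    per_second_counts = []          # per_second_counts[i] = frames shown during second i
    elapsed_ms = 0.0
    with open(csv_path, newline="") as log:
        for row in csv.DictReader(log):
            elapsed_ms += float(row[frametime_column])   # frame time in milliseconds
            second = int(elapsed_ms // 1000)             # which second this frame falls into
            while len(per_second_counts) <= second:
                per_second_counts.append(0)
            per_second_counts[second] += 1
    return per_second_counts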

One thing to note is that our DX12 PresentMon results cannot and should not be directly compared to the FCAT-based DX11 results. They should be taken as a separate entity and discussed as such.

Understanding the “Lowest 1%” Lines

In the past we had always focused on three performance metrics: performance over time, average framerate and pure minimum framerates. Each of these was processed from the FCAT or PresentMon results and distilled down into a basic chart.

Unfortunately, as more tools have come of age we have decided to move away from the “minimum” framerate indication since it is a somewhat deceptive metric. Here is a great example:

In this example, which is a normalized framerate chart whose origin is a 20,000 line log of frame time milliseconds from FCAT, our old “minimum” framerate would have simply picked out the one point or low spike in the chart above and given that as an absolute minimum. Since we gave you context of the entire timeline graph, it was easy to see how that point related to the overall benchmark run.

The problem with that minimum metric was that it was a simple snapshot that didn’t capture how “smooth” a card’s output was perceived. It’s easy for a GPU to have a high average framerate while throwing out a ton of interspersed higher latency frames. Those frames can be perceived as judder and while they may not dominate a gaming experience, their presence can seriously detract from your immersion.

In the case above, there are a number of instances where frame times go through the roof, none of which would accurately be captured by our classic Minimum number. However, if you look closely enough, all of the higher frame latency occurs in the upper 1% of the graph. When translated to framerates, that’s the lowest 1% (remember, high frame times = lower frame rate). This can be directly translated to the overall “smoothness” represented in a given game.

So this leads us to the “Lowest 1%” within the graphs. What this represents is an average of the lowest 1% of results from a given benchmark output. We basically take the thousands of lines within each benchmark capture, find the average frame time and then also parse out the lowest 1% of those results as a representation of worst case frame times or smoothness. These frame time numbers are then converted to actual framerates for the sake of legibility within our charts.
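
Here is a minimal sketch of the idea (not our exact tooling): grab the worst 1% of frame times from a capture, average them, and convert that average back into a framerate.

# Average framerate plus a "lowest 1%" figure from a list of frame times in milliseconds.
def summarize_frame_times(frame_times_ms):
    average_ms = sum(frame_times_ms) / len(frame_times_ms)
    worst_first = sorted(frame_times_ms, reverse=True)     # longest (worst) frames first
    count = max(1, len(worst_first) // 100)                # worst 1% of all frames
    worst_avg_ms = sum(worst_first[:count]) / count
    return {
        "average_fps": 1000.0 / average_ms,
        "lowest_1pct_fps": 1000.0 / worst_avg_ms,          # high frame times = low framerate
    }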

Battlefield 1

Battlefield 1 will likely become known as one of the most popular multiplayer games around, but it also happens to be one of the best looking titles of this generation. It is also extremely well optimized, with even the lowest end cards having the ability to run at high detail levels.

In this benchmark we use a runthrough of The Runner level after the dreadnought barrage is complete and you need to storm the beach. This area includes all of the game’s hallmarks in one condensed area with fire, explosions, debris and numerous other elements layered over one another for some spectacular visual effects.

Call of Duty: Infinite Warfare

The latest iteration in the COD series may not drag out niceties like DX12 or particularly unique playing styles but it nonetheless is a great looking game that is quite popular.

This benchmark takes place during the campaign’s Operation Port Armor wherein we run through a sequence combining various indoor and outdoor elements along with some combat.

Deus Ex – Mankind Divided

Deus Ex titles have historically combined excellent storytelling elements with action-forward gameplay and Mankind Divided is no different. This run-through uses the streets and a few sewers of the main hub city of Prague along with a short action sequence involving gunplay and grenades.

The Division

The Division has some of the best visuals of any game available right now even though its graphics were supposedly downgraded right before launch. Unfortunately, actually benchmarking it is a challenge in and of itself. Due to the game’s dynamic day / night and weather cycle it is almost impossible to achieve a repeatable run within the game itself. With that taken into account we decided to use the in-game benchmark tool.

Doom (Vulkan)

Not many people saw a new Doom as a possible Game of the Year contender but that’s exactly what it has become. Not only is it one of the most intense games currently around but it looks great and is highly optimized. In this run-through we use Mission 6: Into the Fire since it features relatively predictable enemy spawn points and a combination of open air and interior gameplay.

Fallout 4

The latest iteration of the Fallout franchise is a great looking game with all of its details turned up to their highest levels, but it also requires a huge amount of graphics horsepower to run properly. For this benchmark we complete a run-through from within a town, shoot up a vehicle to test performance when in combat and finally end atop a hill overlooking the town. Note that VSync has been forced off within the game’s .ini file.

Gears of War 4

Like many of the other exclusive DX12 games we have seen, Gears of War 4 looks absolutely stunning and seems to be highly optimized to run well on a variety of hardware. In this benchmark we use Act III, Chapter III The Doorstep, a level that uses wide open views along with several high fidelity environmental effects. While Gears does indeed include a built-in benchmark we didn’t find it to be indicative of real-world performance.

Grand Theft Auto V

In GTA V we take a simple approach to benchmarking: the in-game benchmark tool is used. However, due to the randomness within the game itself, only the last sequence is actually used since it best represents gameplay mechanics.

Hitman (2016)

The Hitman franchise has been around in one way or another for the better part of a decade and this latest version is arguably the best looking. Adjustable to both DX11 and DX12 APIs, it has a ton of graphics options, some of which are only available under DX12.

For our benchmark we avoid using the in-game benchmark since it doesn’t represent actual in-game situations. Instead the second mission in Paris is used. Here we walk into the mansion, mingle with the crowds and eventually end up within the fashion show area.

Overwatch

Overwatch happens to be one of the most popular games around right now and while it isn’t particularly stressful upon a system’s resources, its Epic setting can provide a decent workout for all but the highest end GPUs. In order to eliminate as much variability as possible, for this benchmark we use a simple “offline” Bot Match so performance isn’t affected by outside factors like ping times and network latency.

Far Cry Primal

It seems like every generation of our GPU testing has a Far Cry title in it and this one is no different. Far Cry Primal is yet another great looking open-world game from Ubisoft, this time taking you back to prehistoric times. This 60 second runthrough comes from deep into the game, within a forest in the far south of the main map, and combines fire, substantial amounts of vegetation, animals and water into one benchmark. We have also noted that the in-game benchmark is highly inaccurate and does not give results that are properly aligned with actual gameplay performance.

Tom Clancy’s Ghost Recon Wildlands

While the latest Ghost Recon game isn’t compatible with DX12, it happens to be one of the best looking games released in quite some time. It also has an extensive set of in-game graphics options. This 90-second benchmark is based in the tropical jungle of Espiritu Santo as well as a vehicle drive into a slightly more arid zone. As with some other games, the in-game benchmark on this one is out to lunch and doesn’t give a good representation of what you can expect within gameplay.

Total War: Warhammer

Unlike some of the latest Total War games, the hotly anticipated Warhammer title has been relatively bug free, performs well on all systems and still incorporates the level of detail and graphics fidelity this series is known for. In this sequence, we use the in-game benchmarking tool to play back one of our own 40 second gameplay sessions, which features two maxed-out armies and includes all of the elements normally seen in standard gameplay. That means zooms and pans are used to pivot the camera and get a better view of the battlefield.

Witcher 3

Other than being one of 2015’s most highly regarded games, The Witcher 3 also happens to be one of the most visually stunning. This benchmark sequence has us riding through a town and running through the woods; two elements that will likely take up the vast majority of in-game time.

Temperatures

Testing the temperatures for these cards hasn’t been easy since AMD’s Vega series isn’t compatible with our most-used logging tools quite yet. As a result we needed to use a combination of HWInfo and AMD’s own Wattman. Expect additional Vega thermal and power testing as time moves on.

At first glance it may look like Vega actually runs quite cool since its temperatures seem to be well within normal boundaries. Unfortunately that’s misleading, since AMD needed to run the GPU fan at extremely high speeds to achieve numbers like this. The result is blast furnace-like heat being exhausted out of the rear of the card. To give you an idea of how extreme this is, the temperature in my 15’x15’ lab increased nearly 10 degrees Celsius after 30 minutes of continual testing. I’d call this a feature in the winter, but in the summer a room can become excessively hot because of it. The Vega 10 core is a monster and taming it with an air cooler seems to be a challenge.

Acoustical Testing

What you see below are the baseline idle dB(A) results for a relatively quiet open-case system (specs are in the Methodology section) sans GPU, along with the results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 18” away from the GPU’s fan. For the load scenarios, Tomb Raider is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.

I alluded to high fan speeds but this chart should put things into better context. Remember, decibels are measured on a logarithmic scale, so an increase of roughly 10 dB(A) is generally perceived as being about twice as loud. In this case the Vega cards approach ridiculous levels of noise, but thankfully neither card was accompanied by any coil whine. This is all fan noise you are seeing.
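
That rule of thumb translates into a simple relationship; the sketch below is a psychoacoustic approximation rather than a hard physical law, but it is a useful way to read the deltas in the chart.

# Approximate perceived loudness ratio for a dB(A) increase,
# using the "+10 dB sounds roughly twice as loud" rule of thumb.
def perceived_loudness_ratio(delta_db):
    return 2 ** (delta_db / 10.0)

print(perceived_loudness_ratio(5))    # ~1.4x as loud
print(perceived_loudness_ratio(10))   # ~2x as loud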

Simply put, the RX Vega 64 is excessively loud and when that’s combined with the heat being thrown out the back of its I/O shield, I can’t recommend buying this card. Well, you could buy it but make sure your room is well ventilated and you have very good noise cancelling headphones.

The RX Vega 56 is another matter altogether. While still loud, it is relatively tame when compared against its big brother. As a matter of fact, I have to credit AMD’s design team for effectively managing this thing’s heat in such a way that it’s infinitely easier to live with.

System Power Consumption

For this test we hooked up our power supply to a UPM power meter that logs the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we used 15 minutes of Unigine Valley running on a loop, and we also let the card sit at a stable Windows desktop for 15 minutes to determine peak idle power consumption.
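
For clarity, this is roughly how those twice-per-second samples get reduced to the single numbers we chart; the sample format and field names here are placeholders for illustration, not the meter’s actual output.

# Reduce logged (timestamp_s, watts) samples into the idle and load figures we report.
def summarize_power(idle_samples, load_samples):
    idle_watts = [watts for _, watts in idle_samples]
    load_watts = [watts for _, watts in load_samples]
    return {
        "peak_idle_w": max(idle_watts),                    # highest draw seen at the desktop
        "avg_load_w": sum(load_watts) / len(load_watts),   # sustained draw under Unigine Valley
        "peak_load_w": max(load_watts),                    # worst-case spike under load
    }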

Power consumption typically takes a back seat for enthusiasts, but when the deltas between competing solutions are this large, attention needs to be given to the culprit. Despite claims of efficiency improvements, AMD’s architecture seems to be falling further behind an NVIDIA design that is well over a year old already. It isn’t even a fair competition anymore. This ends up dragging Vega’s performance per watt ratio through the mud.

Typically we press folks have been more than willing to give AMD a pass on their lack of competitive power consumption numbers, but I can’t find myself making excuses anymore. In a time of rising utility costs – in the last five years alone electricity prices in Ontario have more than doubled on average – I simply don’t understand why potential buyers should need to accept higher power bills just to justify buying a piece of technology that merely competes rather than excels at its job.

Maybe I’m approaching this from the wrong perspective and editorializing too much but at this point the whole “game with headphones and hope you have great air conditioning” argument doesn’t fly anymore. Vega is power hungry, hot and noisy. There’s no escaping that and AMD needs to find a way out of this rut soon or NVIDIA will soon become even more dominant than they already are. If you somehow disagree with this standpoint, by all means leave a comment in our forums.

Conclusion: Value First & Some Brutal Truths

So there you have it. After nearly two years of false leads, junk tabloid journalism, an endless number of teases by AMD and a solid 48 hours of benchmarking, we now know where AMD’s RX Vega fits into things. For those who waited all this time in the hope Vega would compete with the best NVIDIA had to offer, this launch will likely end with a mix of regret and hope. But if you are someone who appreciates value, then AMD might just have something for you.

Let’s start this off with a very simple comparison between Zen and Vega. AMD’s Hawaii was a last hurrah for the old competitive ways of ATI, and the long wait for the follow-up Fiji architecture allowed NVIDIA to move ahead by (almost) a whole generation. Now the two years between Fiji and Vega have compounded the issue and NVIDIA’s GeForce lineup has moved even further ahead. A generational leap forward was needed to better align AMD’s GPU division with the impending release of Volta, not with a 16-month-old GTX 1080. The Zen architecture is indeed a saving grace on the x86 side, finally competing against the latest Intel CPUs, and Vega was seen as a similar knight in shining armor for the Radeon Technologies Group. From a gaming standpoint, it simply isn’t.

Taken in a bubble, the RX Vega 64 does post some very competitive scores in nearly every single game. Once everything is averaged out, it trades blows almost evenly with the GTX 1080 thanks to its superiority in DX12 applications. That goes for both our 1440P and 4K tests. One concern however is the dearth of DX12 and Vulkan titles launched as of late, since AMD’s best wins are narrowly focused in those two APIs. While there was an initial surge of compatible games last year, the list has really thinned out and it doesn’t look to be improving all that much through the rest of 2017. Hopefully that situation changes. If this were a year ago, I’d be singing AMD’s praises from the hilltop regardless of the 64’s obvious inability to land an absolute killing blow. But at this point I’m a bit more apprehensive about how the shift to DX12 and Vulkan will continue.

Vega 64 does perform admirably on the whole but it is also quite simply an unpleasant card to have in the same room as you. It has loud fan speeds, a voracious appetite for electricity and exudes blasts of hot air that feel like Lucifer’s own breath. This is a card pushed to the ragged edges of air cooling and it shows. Luckily some of these issues can be taken care of by AMD’s intrepid board partners and their custom cooling solutions. And yet if there’s $50 separating the reference Vega 64 and GTX 1080 at retail and you don’t have any brand preference, I’d recommend one of two routes: jump onto the NVIDIA bandwagon or save some money by buying an RX Vega 56 if an AMD card is a necessity.

You heard that right: my experience with the Vega 64 completely justified AMD’s last minute request for reviewers to focus on the Vega 56. This cut down version of Vega 10 is a pretty compelling graphics card if you can overlook its power consumption, heat and noise limitations. I’m not quite sure why you would want to overlook those things though. Against the GTX 1070, it won far more often than it lost and at times the RX Vega 56 even came close to putting down the GTX 1080. If you are looking for a slightly more future-proof $400 gaming solution than the GTX 1070 and you tend to game with headphones plugged in, this could be a good fit.

The RX Vega 56 is infinitely better behaved than its bigger, more obnoxious brother. So whereas the RX Vega 64 is thoroughly underwhelming in my books, the Vega 56 ended up being a pretty solid GPU from a value standpoint provided you’ll actually be able to find it at $400. But is the $20 premium over the GTX 1070 worthwhile? I’d say that $20 makes it a good value rather than a screaming deal, drawing the 56 even with its competitor.

I know it sounds like I am harping a lot on Vega’s power consumption in this conclusion but it points towards the reason why it lacks a competitive edge. When you have what amounts to an upper mid-tier GPU that consumes significantly more power than the competitor’s leading edge card, questions need to be raised. In my testing the deltas were extreme, with the 56 requiring almost 100W more than the 1070 and the 64 exceeding NVIDIA’s 1080 by a shocking 117W. There’s no justification other than to assume AMD realized their architecture was behind the times and, to ensure competitiveness, blasted it with voltage and cranked clock speeds. It feels like AMD tried to pull a page from NVIDIA’s book by designing a highly competitive compute architecture, adapted it to the mass market gaming side and failed to hit efficiency targets.

With all of this being said, what AMD has done here is nothing short of masterful; they’ve laid just enough breadcrumbs over the space of more than a year to keep potential buyers in a state of perpetual suspended animation. Then they used events to ensure interest remained at a slow, steady boil by allowing the press and online personalities to release tantalizing morsels of information and unboxings at strategic intervals. Folks waited, and waited, and waited some more. Vega is what they are left with and, looking back on this long, meandering path, I’m honestly not sure this strategy will work in the long run.

To be clear I’m not saying that people who are waiting for a particular piece of technology should be ridiculed. Rather, I hope after this protracted and painful launch potential buyers will be a lot more careful before waving the “I’m waiting for xxx!!” as a banner of pride or in the name of brand loyalty. Because the moral of this story is pretty simple: if you have the money and want a graphics card, stop daydreaming, buy the best you can afford NOW and get on with gaming.
