The AMD RX 580 8GB Performance Review

Editor-in-Chief

There’s no denying that in the year since its launch, AMD’s RX 480 and its positioning in the graphics card market have been hotly debated subjects. Some of that is due to the fact that its Polaris architecture effectively remains the only lifeline Radeon fans can cling to in the hopes that something more will eventually be around the corner. However, much of the consternation and hand wringing has been due to that card’s positioning against NVIDIA’s slightly newer GTX 1060. Well, today AMD is hoping to put that debate behind us with the RX 580, a spiritual encore to one of their most popular and best-received GPUs of the last few years.

In order to understand the RX 580’s path to inception you have to look back at the somewhat troubled but ultimately successful RX 480. When it was launched, I praised it for focusing on value; it was one of the first GPUs to cost less than $300 while providing excellent performance in both 1080P and 1440P scenarios. Unfortunately there were some speed bumps along the way since we soon found the reference versions drew excessive power from the PCI-E slot and, at least initially, availability was sketchy at best. Things quickly turned around with multiple driver releases and just a few months ago we discovered the RX 480 had become the card to buy, often beating out the GTX 1060 in key gaming benchmarks.

What the RX 580 strives to do is further capitalize upon the wave of positive press its now-discontinued sibling received by providing more performance per dollar. To accomplish this, AMD is utilizing the same Polaris 10 core we have all come to know and love, but through the use of a more mature 14nm FinFET manufacturing process they have been able to offer substantially increased core frequencies.

There is supposedly also more overclocking headroom for their board partners and end users to tap into. Not only could this positively impact overall value but it could also help AIBs expand their selections of pre-overclocked Polaris-based products.

The RX 580 isn’t the only GPU getting a brand new coat of paint either. Since it is based on the same, albeit slightly cut-down, version of the Polaris 10 core, the RX 470 will be making way for the new RX 570. This review will focus on the RX 580 since my RX 570 sample arrived a bit late and, with the Easter weekend upon me, there just wasn’t the time to properly execute.


The only other area where the newer Polaris products differentiate themselves from the outgoing cards is within their power consumption algorithms. The older Polaris-based GPUs featured a simple 2-state power system for their memory clocks wherein one was set for idle situations or extremely light workloads whereas the second state was reserved for higher performance situations.

Unfortunately this setup led to the memory entering its high performance mode when two displays were plugged in or when the video decode engine was required for simple online video playback. As a result power consumption in those two scenarios was much higher than it should have otherwise been.

To address this issue, AMD has now added a third power state for their onboard memory which sits between the idle and high performance “gears”. The byproduct of this move is lower power needs (and heat production) than before. It should also be noted that reduced single display idle power consumption numbers have been achieved through that refined 14nm manufacturing process. There’s some hope this will allow standard idle numbers in our tests to register lower as well.

Looking at these numbers should completely eliminate the term “rebrand” from anyone’s mind, even though some will likely be disappointed that we’re still stuck on the original Polaris 10 architecture. Let’s call this a refresh, a refurbishment or a refinement but let’s not come down like a ton of bricks on AMD for sticking with a good thing.

At the top of the current Radeon stack is the RX 580, a card that sticks with the exact same 2304 Stream Processor layout as its predecessor but whose Base and Boost frequencies receive a shot of adrenaline. That 1340MHz Boost clock should be easily achievable by the majority of cards since this time around AMD isn’t launching a reference version per se. Rather, board partners will be free to use their own heatsinks so we won’t have to worry about a poor-performing blower style cooler messing with results.

On the memory front all remains the same. Even though there are higher GDDR5 speed bins available these days, AMD has chosen to largely avoid them in an effort to reduce overall board costs. Another thing that should be mentioned is the return of a 4GB card. While the RX 570 will be pulling double duty by competing against the GTX 1060 3GB and GTX 1050Ti, AMD’s intent for the 4GB version this time around is for it to bridge the gap between NVIDIA’s 3GB and 6GB SKUs.

Moving a bit lower in the chart, we come up against two points that will likely prove to be the most controversial. The first of these is power consumption. While there are minor revisions to Polaris which improve bottom-end efficiency, there’s no escaping the fact that higher frequencies lead to increased power needs. As a result, the RX 580’s typical board power hovers around the 185W mark while board partners’ versions could top 200W. Given the RX 480’s original 150W rating was hotly debated from day one, this shouldn’t come as any surprise.

Pricing is also something I need to discuss since the RX 480’s $240 MSRP hasn’t budged since its launch. This time around I went to the source (that being board partners) since they’re ultimately the ones who set retail pricing whereas AMD’s numbers are simply guidance.

As it turns out, come launch you should be able to find reference-spec’d cards for anywhere between $230 and $240USD. Meanwhile, overclocked versions will range from $260 up to $280USD. Whether or not there will be many cards available at the $230 mark remains to be seen but I have my fingers crossed.

The RX 570 on the other hand is an interesting little card. Not only does it sport massively increased core frequencies over the RX 470 but it also receives a boost to memory frequencies. This should allow AMD’s new $170 GPU to open up a huge lead against the GTX 1050Ti, though it also costs about $20 more than the NVIDIA card. Also of note is that the 570 will be available in 8GB form for about $190.

Moving on to the subject of this review, I have a bit of a yin and yang situation for you. A bit before launch it came to our attention that AMD was seeding reviewers with cards from their highest speed bin. Typically that isn’t a problem since I would also be able to compare it against a typical reference-spec’d card to give a proper idea of what the overall competitive landscape looks like.

That wasn’t meant to be since the only sample I received was a Sapphire RX 580 Nitro+ Limited Edition which will retail for $280USD. Now granted, this card runs at a nominal speed of 1410MHz and has a secondary BIOS which pushes it even further to 1450MHz (for testing I used the default BIOS) but the optics of price become a huge issue. Remember, this is a whole $50 over AMD’s SRP. That may not sound like much but in this segment it is a significant premium.

I reached out to other board partners and in rode XFX at the last minute with their RX 580 GTS XXX Edition, a card that costs just $240 but boasts close-to-reference speeds (the core is overclocked just 26MHz to 1366MHz). With this we can actually get an almost-true apples to apples comparison against a bone stock RX 480 and GTX 1060 Founders Edition. My opinion wouldn’t be clouded by the specter of a $280 price either. For the fun of it I’m also throwing in an EVGA GTX 1060 6GB Superclocked card.

Right now it looks like the RX 580 will be an extremely appealing replacement for the RX 480. Not only does it bring more performance to the table but pricing hasn’t increased by any appreciable amount. Meanwhile, NVIDIA doesn’t have anything in their lineup that can effectively compete against this initiative other than their upcoming GTX 1060 6GB 9Gbps edition. So could this finally be a convincing segment win for AMD? Let’s find out.

Test System & Setup

Processor: Intel i7 5960X @ 4.4GHz
Memory: G.Skill Trident X 32GB @ 3200MHz 15-16-16-35-1T
Motherboard: ASUS X99 Deluxe
Cooling: NH-U14S
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell U2713HM (1440P) / Acer XB280HK (4K)
OS: Windows 10 Pro

Drivers:
AMD Crimson ReLive 17.10.1030 B8
NVIDIA 381.65 WHQL

Notes:

– All games tested have been patched to their latest version

– The OS has had all the latest hotfixes and updates installed

– All scores you see are the averages after 3 benchmark runs

– All IQ settings were adjusted in-game and all GPU control panels were set to use application settings

The Methodology of Frame Testing, Distilled

How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values together gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
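For clarity’s sake, here is a minimal sketch (in Python, with hypothetical inputs, not our actual tooling) of the arithmetic described above:

```python
# FRAPS-style math: average FPS is total frames divided by run length,
# while min/max come from counting frames inside each one-second bucket.

def fraps_style_summary(frame_timestamps, run_length_s=60):
    """frame_timestamps: seconds at which each frame finished rendering."""
    avg_fps = len(frame_timestamps) / run_length_s   # e.g. 4000 / 60 = 66.67

    per_second = [0] * run_length_s
    for t in frame_timestamps:
        if 0 <= t < run_length_s:
            per_second[int(t)] += 1                  # one bucket per second

    return avg_fps, min(per_second), max(per_second)
```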

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can actually report them. So what actually happens within each second of time? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool gets factored into this equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The larger the amount of time, the longer each frame takes to render. This detailed reporting just isn’t possible with standard benchmark methods.
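To illustrate why per-frame data matters, here is a small hypothetical example (the millisecond values are illustrative, not measured data) showing the conversion from frame times to instantaneous framerates:

```python
# Frame times are the millisecond gaps between consecutive frames.
# Converting each gap to an instantaneous framerate exposes spikes that
# a one-second average would smooth away.

def frame_times_ms(frame_timestamps):
    # Interval between each pair of consecutive frames, in milliseconds.
    return [(b - a) * 1000.0 for a, b in zip(frame_timestamps, frame_timestamps[1:])]

def instantaneous_fps(ms):
    return 1000.0 / ms  # a 25ms frame is momentarily running at 40 FPS

stutter_run = [16.7, 16.9, 58.3, 16.8]  # one 58ms spike hiding in a "60FPS" second
print([round(instantaneous_fps(ms), 1) for ms in stutter_run])  # [59.9, 59.2, 17.2, 59.5]
```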

We are now using FCAT for ALL benchmark results in DX11.

DX12 Benchmarking

For DX12 many of these same metrics can be utilized through a simple program called PresentMon. Not only does this program have the capability to log frame times at various stages throughout the rendering pipeline but it also grants a slightly more detailed look into how certain API and external elements can slow down rendering times.

Since PresentMon throws out massive amounts of frametime data, we have decided to distill the information down into slightly more easy-to-understand graphs. Within them, we have taken several thousand datapoints (in some cases tens of thousands), converted the frametime milliseconds over the course of each benchmark run to frames per second and then graphed the results. This gives us a straightforward framerate over time graph. Meanwhile the typical bar graph averages out every data point as it’s presented.
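For those curious, the distillation step amounts to something like the following sketch, which assumes PresentMon’s CSV output and its per-frame MsBetweenPresents column (adjust the column name if your build differs):

```python
import csv

# Read per-frame millisecond values, convert each to FPS and average
# them into one datapoint per second of the benchmark run.

def fps_over_time(csv_path, column="MsBetweenPresents"):
    points, bucket, elapsed_ms = [], [], 0.0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            ms = float(row[column])
            elapsed_ms += ms
            bucket.append(1000.0 / ms)                # per-frame framerate
            if elapsed_ms >= 1000.0:                  # close out one second
                points.append(sum(bucket) / len(bucket))
                bucket, elapsed_ms = [], 0.0
    return points                                     # one datapoint per second
```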

One thing to note is that our DX12 PresentMon results cannot and should not be directly compared to the FCAT-based DX11 results. They should be taken as a separate entity and discussed as such.

Understanding the “Lowest 1%” Lines

In the past we had always focused on three performance metrics: performance over time, average framerate and pure minimum framerates. Each of these was processed from the FCAT or PresentMon results and distilled down into a basic chart.

Unfortunately, as more tools have come of age we have decided to move away from the “minimum” framerate indication since it is a somewhat deceptive metric. Here is a great example:

In this example, which is a normalized framerate chart derived from a 20,000 line log of frame time milliseconds from FCAT, our old “minimum” framerate would have simply picked out the one low spike in the chart above and given that as an absolute minimum. Since we gave you the context of the entire timeline graph, it was easy to see how that point related to the overall benchmark run.

The problem with that minimum metric was that it was a simple snapshot that didn’t capture how “smooth” a card’s output was perceived. It is easy for a GPU to have a high average framerate while throwing out a ton of interspersed higher latency frames. Those frames can be perceived as judder and while they may not dominate a gaming experience, their presence can seriously detract from your immersion.

In the case above, there are a number of instances where frame times go through the roof, none of which would accurately be captured by our classic Minimum number. However, if you look closely enough, all of the higher frame latency occurs in the upper 1% of the graph. When translated to framerates, that’s the lowest 1% (remember, high frame times = lower frame rate). This can be directly translated to the overall “smoothness” represented in a given game.

So this leads us to our “Lowest 1%” within the graphs. What this represents is an average of the lowest 1% of results from a given benchmark output. We basically take thousands of lines within each benchmark capture, find the average frame time and then also parse out the lowest 1% of those results as a representation of the worst-case frame times or smoothness. These frame time numbers are then converted to actual framerates for the sake of legibility within our charts.
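In code form, the calculation boils down to something like this sketch (a simplified stand-in for our processing scripts):

```python
# "Lowest 1%" metric: average the slowest 1% of frame times in a
# capture, then convert that worst-case frame time back into a
# framerate for charting.

def lowest_one_percent_fps(frame_times_ms):
    slowest_first = sorted(frame_times_ms, reverse=True)
    count = max(1, len(slowest_first) // 100)        # worst 1% of all frames
    avg_worst_ms = sum(slowest_first[:count]) / count
    return 1000.0 / avg_worst_ms                     # report as a framerate
```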

There has also been some debate over whether or not we should include 0.1% (or 99.9th percentile) framerates as well. Our answer to that is a simple “NO”. The reason is straightforward: in GPU benchmarking an outside influence such as an SSD load or a game engine shader caching issue could interject a single very high latency frame into the mix. While the 1% lowest calculation would largely absorb that into its average, a 0.1% metric would highlight it. Since many of those 0.1% frames aren’t due to the GPU at all, they should be treated as a red herring and not valid for GPU comparisons.

Battlefield 1

Battlefield 1 will likely become known as one of the most popular multiplayer games around but it also happens to be one of the best looking titles available. It is also extremely well optimized, with even the lowest end cards having the ability to run at high detail levels.

In this benchmark we use a run-through of The Runner level after the dreadnought barrage is complete and you need to storm the beach. This area includes all of the game’s hallmarks in one condensed area with fire, explosions, debris and numerous other elements layered over one another for some spectacular visual effects.

Call of Duty: Infinite Warfare

The latest iteration in the COD series may not drag out niceties like DX12 or particularly unique playing styles but it nonetheless is a great looking game that is quite popular.

This benchmark takes place during the campaign’s Operation Port Armor wherein we run through a sequence combining various indoor and outdoor elements along with some combat.

Deus Ex – Mankind Divided

Deus Ex titles have historically combined excellent storytelling elements with action-forward gameplay and Mankind Divided is no different. This run-through uses the streets and a few sewers of the main hub city of Prague along with a short action sequence involving gunplay and grenades.

The Division

The Division has some of the best visuals of any game available right now even though its graphics were supposedly downgraded right before launch. Unfortunately, actually benchmarking it is a challenge in and of itself. Due to the game’s dynamic day / night and weather cycle it is almost impossible to achieve a repeatable run within the game itself. With that taken into account we decided to use the in-game benchmark tool.

Doom (Vulkan)

Not many people saw a new Doom as a possible Game of the Year contender but that’s exactly what it has become. Not only is it one of the most intense games currently around but it looks great and is highly optimized. In this run-through we use Mission 6: Into the Fire since it features relatively predictable enemy spawn points and a combination of open air and interior gameplay.

Fallout 4

The latest iteration of the Fallout franchise is a great looking game with all of its details turned up to their highest levels, but it also requires a huge amount of graphics horsepower to run properly. For this benchmark we complete a run-through from within a town, shoot up a vehicle to test performance in combat and finally end atop a hill overlooking the town. Note that VSync has been forced off within the game’s .ini file.

Gears of War 4

Like many of the other exclusive DX12 games we have seen, Gears of War 4 looks absolutely stunning and seems to be highly optimized to run well on a variety of hardware. In this benchmark we use Act III, Chapter III The Doorstep, a level that uses wide open views along with several high fidelity environmental effects. While Gears does indeed include a built-in benchmark we didn’t find it to be indicative of real-world performance.

Grand Theft Auto V

In GTA V we take a simple approach to benchmarking: the in-game benchmark tool is used. However, due to the randomness within the game itself, only the last sequence is actually used since it best represents gameplay mechanics.

Hitman (2016)

The Hitman franchise has been around in one way or another for the better part of a decade and this latest version is arguably the best looking. Adjustable to both DX11 and DX12 APIs, it has a ton of graphics options, some of which are only available under DX12.

For our benchmark we avoid using the in-game benchmark since it doesn’t represent actual in-game situations. Instead the second mission in Paris is used. Here we walk into the mansion, mingle with the crowds and eventually end up within the fashion show area.

Overwatch

Overwatch happens to be one of the most popular games around right now and while it isn’t particularly stressful upon a system’s resources, its Epic setting can provide a decent workout for all but the highest end GPUs. In order to eliminate as much variability as possible, for this benchmark we use a simple “offline” Bot Match so performance isn’t affected by outside factors like ping times and network latency.

Quantum Break

Years from now people likely won’t be asking if a GPU can play Crysis, they’ll be asking if it was up to the task of playing Quantum Break with all settings maxed out. This game was launched as a horribly broken mess but it has evolved into an amazing looking tour de force for graphics fidelity. It also happens to be a performance killer.

Though finding an area within Quantum Break to benchmark is challenging, we finally settled upon the first level where you exit the elevator and find dozens of SWAT team members frozen in time. It combines indoor and outdoor scenery along with some of the best lighting effects we’ve ever seen.

Titanfall 2

The original Titanfall met with some reasonable success and its successor tries to capitalize upon that foundation by including a single player campaign while expanding multiplayer options. It also happens to be one of the best looking games released in 2016.

This benchmark sequence takes place within the Trial By Fire mission, right after the gates of the main complex are breached. Due to the randomly generated enemies in this area, getting a completely identical run-through is challenging, which is why we have increased the number of datapoints to four.

Warhammer: Total War

Unlike some of the latest Total War games, the hotly anticipated Warhammer title has been relatively bug free, performs well on all systems and still incorporates the level of detail and graphics fidelity this series is known for. In this sequence, we use the in-game benchmarking tool to play back one of our own 40 second gameplay sessions, which includes two maxed-out armies and all of the elements normally seen in standard gameplay. That means zooms and pans are used to pivot the camera and get a better view of the battlefield.

Witcher 3

Other than being one of 2015’s most highly regarded games, The Witcher 3 also happens to be one of the most visually stunning. This benchmark sequence has us riding through a town and running through the woods; two elements that will likely take up the vast majority of in-game time.

Analyzing Temperatures & Frequencies Over Time

Modern graphics card designs make use of several advanced hardware and software-facing algorithms in an effort to hit an optimal balance between performance, acoustics, voltage, power and heat output. Traditionally this leads to maximized clock speeds within a given set of parameters. Conversely, if one of those last two metrics (those being heat and power consumption) steps into the equation in a negative manner, it is quite likely that voltages and resulting core clocks will be reduced to ensure the GPU remains within design specifications. We’ve seen this happen quite aggressively on some AMD cards while NVIDIA’s reference cards also tend to fluctuate their frequencies. To be clear, in most situations this is a feature by design rather than a problem.

In many cases clock speeds won’t be touched until the card in question reaches a preset temperature, whereupon the software and onboard hardware will work in tandem to carefully regulate other areas such as fan speeds and voltages to ensure maximum frequency output without an overly loud fan. Since this algorithm typically doesn’t kick into full force in the first few minutes of gaming, the “true” performance of many graphics cards won’t be realized through a typical 1-3 minute benchmarking run. This is why we use a 10-minute warm up period before all of our benchmarks.

I think the first thing to do here is to properly explain how AMD’s board partners are treating temperature this time around. Rather than letting their own nominal fan speed algorithms dictate temperatures, each card’s BIOS is loaded with a specific temperature and clock speed target. With those targets in mind, fan speed curves are adjusted to meet them.

In this instance you can see that the temperatures of both the XFX and Sapphire cards top out between 73°C and 75°C. Both of those are a long stretch away from the reference RX 480 but the results are to be expected from cards with extensive custom coolers like these.

Moving on to frequencies, we can see exactly where the RX 580 gets all of its performance from. Whereas the reference RX 480 struggled to maintain a continual 1250MHz clock rate in our tests, the XFX card topped out at 1366MHz while Sapphire’s Nitro+ pushed itself to 1411MHz. One thing to note is that unlike NVIDIA’s Boost algorithm, AMD’s PowerTune technology isn’t designed to take advantage of excess thermal headroom by offering “higher than boost” frequencies. Rather, PowerTune strives to hit a given clock and sits there regardless of how cool the silicon is running.

The actual performance differences between these three cards need to be discussed as well since the chart above relates directly back to all of the in-game performance benchmarks we just ran through. The delta between the near-reference XFX RX 580 GTS XXX and the reference RX 480 is about 7%, which is about where things lined up in all of the other tests.

Meanwhile, the Sapphire card’s $40 premium over the GTS XXX edition nets you a very marginal 4% increase. That result is particularly telling since it highlights why sometimes premium priced, pre-overclocked GPUs actually deliver less performance per dollar than less expensive models.
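A quick worked example with this review’s own numbers (using the RX 480 as a 100-point baseline, so roughly 107 points for the XFX card and 111 for the Nitro+) shows the math:

```python
# Performance-per-dollar comparison using the deltas and prices cited
# above; the point totals are approximations derived from those deltas.

cards = {"XFX RX 580 GTS XXX": (107, 240), "Sapphire RX 580 Nitro+": (111, 280)}
for name, (perf, price) in cards.items():
    print(f"{name}: {perf / price:.3f} performance points per dollar")
# XFX: 0.446 vs Nitro+: 0.396 -> the cheaper card delivers more per dollar
```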

Acoustical Testing

What you see below are the baseline idle dB(A) results attained for a relatively quiet open-case system (specs are in the Methodology section) sans GPU along with the attained results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 12” away from the GPU’s fan. For the load scenarios, Tomb Raider is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.

There’s no denying the benefits of going towards a custom cooled GPU if you are looking for low noise and both of these cards prove that. Not only do they benefit from fan start / stop technology which idles their fans when there’s a low-load situation but XFX and Sapphire seem to have mastered the art of cooling the Polaris 10 core.

But let’s be honest here for a moment. This version of Polaris 10 may be higher clocked than its predecessor but it’s still just a 185W core which doesn’t need all that much cooling capacity. As such, the massive heatsinks on these two cards could be considered overkill as well.

System Power Consumption

For this test we hooked up our power supply to a UPM power meter that will log the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we used 15 minutes of Unigine Valley running on a loop while letting the card sit at a stable Windows desktop for 15 minutes to determine the peak idle power consumption.
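Reducing that log to the two numbers we chart works out to something like the following sketch; the (seconds, watts) sample format here is hypothetical, since the real UPM export may differ:

```python
# Assumes samples logged twice per second: the first 15 minutes (900s)
# idle at the Windows desktop, the rest looping Unigine Valley.

def summarize_power(samples, idle_cutoff_s=900):
    idle = [w for t, w in samples if t < idle_cutoff_s]
    load = [w for t, w in samples if t >= idle_cutoff_s]
    peak_idle = max(idle)                  # worst-case desktop draw
    avg_load = sum(load) / len(load)       # sustained gaming draw
    return peak_idle, avg_load
```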

Over the last twenty or so pages, the benefits of AMD’s new RX 580 should have been abundantly evident. Its higher clock speeds bring about instantaneous benefits in every performance per dollar metric. But as they say “there’s no such thing as a free lunch” and those frequency increases go hand in hand with a significant power consumption increase. The numbers are actually quite shocking.

Whereas the reference RX 480 was power hungry in its own right, the two RX 580 samples I have in-house ended up putting that card to shame. The XFX GTS XXX (which is the lower clocked of the pair) ended up matching a GTX 1070 Founders Edition card’s input needs while the Nitro+ actually needed more juice than NVIDIA’s much higher performing $350 GPU.

Not only do numbers like these point towards a significant performance per watt shortfall against NVIDIA’s latest architecture (something which has always plagued Polaris) but they could also hamper these cards’ adoption into notebooks.

Overclocking Results – A Bucket of Frustration

With numerous voltage roadblocks and power walls being thrown up it seems like overclocking both graphics cards and CPUs has become a dead-end process as of late. No better example of this exists than the RX 580. With two samples in hand, I approached overclocking with a pretty open mind since I had double the chance of achieving something quite reasonable. As reality set in, so too did the frustration.


Get used to seeing a lot of these

So where do I start? First and foremost, every tool I used from Wattman to MSI’s Afterburner prevented the RX 580s from pushing past a few MHz of overclock. And no, you aren’t reading this incorrectly: a few MHz. Since AMD offers a strict 50mV voltage increase and what seems like an extremely hard cap on input power, there wasn’t anything more I could do.

To add insult to injury, the system would hard lock without exhibiting any rendering errors first. Typically a graphics card will throw a few telltale signs that an overclock isn’t stable before crashing. Not so with either of these cards. Things went from perfectly stable to a hard driver crash within 2MHz. This points towards a vendor-based hard lock on these cards.

The resulting overclocks were nothing short of embarrassing. The XFX card ended up hitting 1405MHz from its original 1366MHz while the Sapphire Nitro+ required its secondary 1450MHz BIOS to hit 1485MHz. Meanwhile, the memory did play along and I ended up hitting just over 8.7Gbps on both cards.

One word of warning to would-be overclockers though: even though the RX 580 may look stable at a given speed, it may not be. In the example below, I had boosted the XFX RX 580 to 1420MHz with an extra 50mV of voltage.

As you can see, while the system didn’t hard-lock, the overclock actually dialed itself back to 1399MHz. The only reason I caught this was GPU-Z’s ability to poll the clock rates at extremely short intervals. Meanwhile, AMD’s own Wattman was convinced the clock speed was sitting pretty at 1420MHz. With that in mind, make sure you test these cards with a proper tool to ensure their stability at whatever overclock you believe is dialed in.
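A rough sketch of that fast-polling sanity check looks like this; read_core_clock_mhz is a hypothetical stand-in for whatever sensor interface your monitoring tool exposes (GPU-Z’s sensor log served this role in our testing):

```python
import time

# Sample the core clock many times per second and flag dips below the
# requested overclock that a slow-polling tool would miss.

def watch_clock(read_core_clock_mhz, target_mhz, duration_s=60, interval_s=0.1):
    dips = []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        mhz = read_core_clock_mhz()
        if mhz < target_mhz:          # e.g. 1399MHz reported vs 1420MHz requested
            dips.append(mhz)
        time.sleep(interval_s)
    return dips                        # an empty list means the clock held steady
```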

Conclusion: One of the Best Just Got Better

AMD intends for the RX 580 to become a true spiritual successor to the popular RX 480 and there’s no doubt in my mind it accomplishes exactly that. Despite many things pointing to the contrary, this is much more than a rebrand since these cards are being offered with higher clock speeds and some notable updates to their power consumption profiles. Some may argue that a 1340MHz speed bin for Polaris was already widely available on board partners’ overclocked SKUs but now AMD is making those higher clocks the new norm.

The true impact of those new frequencies upon raw performance is pretty significant: there’s a delta between the RX 480 and RX 580 that fluctuates from 5% to 10% and points in between. Add in a card such as the Nitro+ and you’re looking at 7-13% in averages and the lowest 1% of framerates depending on the title being tested. While that might not seem like much, the RX 480 was already riding high after some key driver releases and this update pushes AMD even further ahead of the GTX 1060 6GB. From my perspective, NVIDIA needs to enact some quick price cuts before their mid-tier darling gets overwhelmed. As it stands right now, there isn’t a single scenario wherein I’d recommend the 1060 over the RX 580, especially with products like XFX’s RX 580 GTS XXX Edition sitting at about $240.

While it would be an understatement to say I’m stoked about what the RX 580 brings to the table, I have to temper my excitement with a quick call back to reality. There are two pretty serious (in my eyes at least) limitations that have to be discussed, both of which are a byproduct of pushing the Polaris 10 core to new heights. First and foremost amongst these is power consumption. With such high frequencies, this is one power hungry little bugger. In our tests it consumed just as much as a reference GTX 1070 and when overclocked to Sapphire’s levels it requires even more juice. For those keeping track at home, that’s a whole 70W higher than a GTX 1060 6GB Founders Edition. If you are building a small form factor system, this fact alone may eliminate the RX 580 from contention.

Then there’s overclocking and what can I say on that point? After hours of heartache and frustration I simply gave up trying to whip Wattman into line. The end result for both cards was less than a 60MHz (at best!) core clock increase. If I strayed even half a percent out of line, finer-grain monitoring tools like GPU-Z would show the core speed fluctuating madly even though Wattman registered the higher speed as being locked in. Go a bit further afield and a driver crash would necessitate a hard system reboot. Will more mature tools fix this? Perhaps, but I’m not going to promise things will change either. Doing so would be falling into the usual “wait and see” trap that doggedly follows on the heels of almost every launch these days.

I’m also going to throw up a flag of warning here too; not just for AMD but for NVIDIA as well. Had I been limited to the card that was originally sampled (the $280 Sapphire Nitro+ Limited Edition), my opinions about the RX 580’s overall value would have been very, very different. This highlights the danger of sampling premium cards at launch; they rarely, if ever, offer a good price / performance comparison. I understand the need to highlight a board partner’s offerings but there are plenty of other options that can and should be sampled before halo products.

So I’m going to wrap this up here and now. I think the RX 580 is the best possible drop-in upgrade solution that money can buy provided your PSU is up to the task of powering it and there’s full awareness of the very limited overclocking headroom. Not only can this card offer superlative 1080P performance but it has the chops to power through high detail level 1440P content as well. NVIDIA’s GTX 1060 6GB can’t even come close, even when EVGA’s Superclocked edition is thrown into the mix.

Personally I’d recommend looking at options like XFX’s GTS XXX Edition that hover around the $240 to $250 mark for the best bang for buck. However, if you want something a bit more enthusiast-oriented, Sapphire’s Nitro+ Limited Edition will certainly fit the bill nicely but it is simply too expensive for this segment.

In the case above, there are a number of instances where frame times go through the roof, none of which would accurately be captured by our classic Minimum number. However, if you look closely enough, all of the higher frame latency occurs in the upper 1% of the graph. When translated to framerates, that’s the lowest 1% (remember, high frame times = lower frame rate). This can be directly translated to the overall “smoothness” represented in a given game.

So this leads us to our “Lowest 1%” within the graphs. What this represents is an average of all the lowest 1% of results from a given benchmark output. We basically take thousands of lines within each benchmark capture, find the average frame time and then also parse out the lowest 1% of those results as a representation of the worse case frame time or smoothness. These frame time numbers are then converted to actual framerate for the sake of legibility within our charts.

There has also been some debate over whether or not we should include 0.1% (or 99.9th percentile) framerates as well. Our answer to that is a simple “NO”. The reason for this is simple: in GPU benchmarking an outside influence such as an SSD load or game engine shader caching issue could interject a single very high latency frame into the mix. While the 1% lowest calculation would include that in its average, 0.1% would display highlight it. However, since many of those 0.1% frames aren’t due to the GPU at all, they should also be perceived as a red herring and not valid for GPU comparisons.

Battlefield 1

Battlefield 1 will likely become known as one of the most popular multiplayer games around but it also happens to be one of the best looking titles around. It also happens to be extremely well optimized with even the lowest end cards having the ability to run at high detail levels.

In this benchmark we use a runthough of The Runner level after the dreadnought barrage is complete and you need to storm the beach. This area includes all of the game’s hallmarks in one condensed area with fire, explosions, debris and numerous other elements layered over one another for some spectacular visual effects.

Call of Duty: Infinite Warfare

The latest iteration in the COD series may not drag out niceties like DX12 or particularly unique playing styles but it nonetheless is a great looking game that is quite popular.

This benchmark takes place during the campaign’s Operation Port Armor wherein we run through a sequence combining various indoor and outdoor elements along with some combat.

Deus Ex – Mankind Divided

Deus Ex titles have historically combined excellent storytelling elements with action-forward gameplay and Mankind Divided is no difference. This run-through uses the streets and a few sewers of the main hub city Prague along with a short action sequence involving gunplay and grenades.

The Division

The Division has some of the best visuals of any game available right now even though its graphics were supposedly downgraded right before launch. Unfortunately, actually benchmarking it is a challenge in and of itself. Due to the game’s dynamic day / night and weather cycle it is almost impossible to achieve a repeatable run within the game itself. With that taken into account we decided to use the in-game benchmark tool.

Doom (Vulkan)

Not many people saw a new Doom as a possible Game of the Year contender but that’s exactly what it has become. Not only is it one of the most intense games currently around but it looks great and is highly optimized. In this run-through we use Mission 6: Into the Fire since it features relatively predictable enemy spawn points and a combination of open air and interior gameplay.

Fallout 4

The latest iteration of the Fallout franchise is a great looking game with all of its details turned up to their highest levels, but it also requires a huge amount of graphics horsepower to run properly. For this benchmark we complete a run-through from within a town, shoot up a vehicle to test performance when in combat and finally end atop a hill overlooking the town. Note that VSync has been forced off within the game’s .ini file.

Gears of War 4

Like many of the other exclusive DX12 games we have seen, Gears of War 4 looks absolutely stunning and seems to be highly optimized to run well on a variety of hardware. In this benchmark we use Act III, Chapter III The Doorstep, a level that uses wide open views along with several high fidelity environmental effects. While Gears does indeed include a built-in benchmark we didn’t find it to be indicative of real-world performance.

Grand Theft Auto V

In GTA V we take a simple approach to benchmarking: the in-game benchmark tool is used. However, due to the randomness within the game itself, only the last sequence is actually used since it best represents gameplay mechanics.

Hitman (2016)

The Hitman franchise has been around in one form or another since 2000 and this latest version is arguably the best looking. Playable under both the DX11 and DX12 APIs, it has a ton of graphics options, some of which are only available under DX12.

For our benchmark we avoid using the in-game benchmark since it doesn’t represent actual in-game situations. Instead the second mission in Paris is used. Here we walk into the mansion, mingle with the crowds and eventually end up within the fashion show area.

Overwatch

Overwatch happens to be one of the most popular games around right now and while it isn’t particularly stressful upon a system’s resources, its Epic setting can provide a decent workout for all but the highest end GPUs. In order to eliminate as much variability as possible, for this benchmark we use a simple “offline” Bot Match so performance isn’t affected by outside factors like ping times and network latency.

Quantum Break

Years from now people likely won’t be asking if a GPU can play Crysis, they’ll be asking if it was up to the task of playing Quantum Break with all settings maxed out. This game was launched as a horribly broken mess but it has evolved into an amazing looking tour de force for graphics fidelity. It also happens to be a performance killer.

Though finding an area within Quantum Break to benchmark is challenging, we finally settled upon the first level where you exit the elevator and find dozens of SWAT team members frozen in time. It combines indoor and outdoor scenery along with some of the best lighting effects we’ve ever seen.

Titanfall 2

The original Titanfall met with some reasonable success and its successor tries to capitalize upon that foundation by adding a single player campaign while expanding the multiplayer options. It also happens to be one of the best looking games released in 2016.

This benchmark sequence takes place within the Trial By Fire mission, right after the gates of the main complex are breached. Due to the randomly generated enemies in this area, getting a completely identical run-through is challenging, which is why we have increased the number of datapoints to four.

Total War: Warhammer

Unlike some of the latest Total War games, the hotly anticipated Warhammer title has been relatively bug free, performs well on a wide range of systems and still incorporates the level of detail and graphics fidelity this series is known for. In this sequence, we use the in-game benchmarking tool to play back one of our own 40 second gameplay sessions, which features two maxed-out armies and all of the elements normally seen in standard gameplay. That means zooms and pans are used to pivot the camera and get a better view of the battlefield.

Witcher 3

Other than being one of 2015’s most highly regarded games, The Witcher 3 also happens to be one of the most visually stunning. This benchmark sequence has us riding through a town and running through the woods; two elements that will likely take up the vast majority of in-game time.

Analyzing Temperatures & Frequencies Over Time

Modern graphics card designs make use of several advanced hardware- and software-based algorithms in an effort to hit an optimal balance between performance, acoustics, voltage, power and heat output. Traditionally this leads to maximized clock speeds within a given set of parameters. Conversely, if one of those last two metrics (heat and power consumption) enters the equation in a negative manner, it is quite likely that voltages and the resulting core clocks will be reduced to ensure the GPU remains within design specifications. We’ve seen this happen quite aggressively on some AMD cards, while NVIDIA’s reference cards also tend to fluctuate their frequencies. To be clear, in most situations this is a feature by design rather than a problem.

In many cases clock speeds won’t be touched until the card in question reaches a preset temperature, whereupon the software and onboard hardware work in tandem to carefully regulate other areas such as fan speeds and voltages, ensuring maximum frequency output without an overly loud fan. Since this algorithm typically doesn’t kick into full force within the first few minutes of gaming, the “true” performance of many graphics cards won’t be revealed by a typical 1-3 minute benchmarking run. This is why we use a 10-minute warm-up period before all of our benchmarks.
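To make the warm-up and monitoring process concrete, here is a minimal logging sketch. It assumes a Linux system with the amdgpu driver, where clocks and temperatures are exposed through sysfs (the exact paths below are illustrative and vary by machine; on a Windows test bench GPU-Z fills the same role).

```python
import re
import time

# Illustrative sysfs paths; adjust the card index and hwmon node per system.
SCLK_PATH = "/sys/class/drm/card0/device/pp_dpm_sclk"
TEMP_PATH = "/sys/class/drm/card0/device/hwmon/hwmon0/temp1_input"

def read_core_clock_mhz():
    # pp_dpm_sclk lists the DPM states; the active one is marked with '*'.
    with open(SCLK_PATH) as f:
        for line in f:
            if "*" in line:
                return int(re.search(r"(\d+)Mhz", line).group(1))
    return None

def read_temp_c():
    # The hwmon interface reports temperature in millidegrees Celsius.
    with open(TEMP_PATH) as f:
        return int(f.read()) / 1000.0

# Log once per second through a 10-minute warm-up plus a benchmark run,
# so any thermally induced clock drop shows up in the resulting CSV.
with open("gpu_log.csv", "w") as log:
    log.write("seconds,core_mhz,temp_c\n")
    start = time.time()
    while time.time() - start < 600 + 180:
        elapsed = time.time() - start
        log.write(f"{elapsed:.0f},{read_core_clock_mhz()},{read_temp_c()}\n")
        time.sleep(1.0)
```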

I think the first thing to do here is to properly explain how AMD’s board partners are treating temperature this time around. Rather than letting their own nominal fan speed algorithms dictate temperatures, each card’s BIOS is loaded with a specific temperature and clock speed target. With those targets in mind, fan speed curves are adjusted to meet them.

In this instance you can see that the temperatures of both the XFX and Sapphire cards top out between 73°C and 75°C. Both results are a long way off the reference RX 480’s, but that is to be expected from cards with extensive custom coolers like these.

Moving on to frequencies, we can see exactly where the RX 580 gets all of its performance from. Whereas the reference RX 480 struggled to maintain a continual 1250MHz clock rate in our tests, the XFX card topped out at 1366MHz while Sapphire’s Nitro+ pushed itself to 1411MHz. One thing to note is that unlike NVIDIA’s Boost algorithm, AMD’s PowerTune technology isn’t designed to take advantage of excess thermal headroom by offering “higher than boost” frequencies. Rather, PowerTune strives to hit a given clock and sits there regardless of how cool the silicon is running.

The actual performance differences between these three cards need to be discussed as well, since the chart above relates directly back to all of the in-game performance benchmarks we just ran through. The delta between the near-reference XFX RX 580 GTS XXX and the reference RX 480 is roughly 7%, which is about where things lined up in all of the other tests.

Meanwhile, the Sapphire card’s $40 premium over the GTS XXX edition nets you a very marginal 4% increase. That result is particularly telling since it highlights why premium-priced, pre-overclocked GPUs sometimes deliver less performance per dollar than less expensive models.
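To put actual numbers to that point, the performance per dollar arithmetic is trivial to check. The sketch below uses the approximate street prices and the relative performance deltas cited in this review; retail pricing will naturally shift the exact outcome.

```python
# Relative performance normalized to the reference RX 480 (= 1.00),
# using the ~7% (XFX) and ~11% (XFX plus 4%) deltas measured above.
cards = {
    "XFX RX 580 GTS XXX": (240, 1.07),  # (street price in $, relative perf)
    "Sapphire Nitro+ LE": (280, 1.11),
}

ppd = {name: perf / price for name, (price, perf) in cards.items()}

# The cheaper card comes out roughly 12% ahead in performance per dollar.
advantage = ppd["XFX RX 580 GTS XXX"] / ppd["Sapphire Nitro+ LE"] - 1
print(f"XFX performance-per-dollar advantage: {advantage:.1%}")
```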

Acoustical Testing

What you see below are the baseline idle dB(A) results attained for a relatively quiet open-case system (specs are in the Methodology section) sans GPU, along with the results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 12” away from the GPU’s fan. For the load scenarios, Tomb Raider is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.

There’s no denying the benefits of going with a custom-cooled GPU if you are looking for low noise, and both of these cards prove that. Not only do they benefit from fan start / stop technology, which idles their fans in low-load situations, but XFX and Sapphire seem to have mastered the art of cooling the Polaris 10 core.

But let’s be honest here for a moment. This version of Polaris 10 may be higher clocked than its predecessor but it’s still just a 185W core which doesn’t need all that much cooling capacity. As such, the massive heatsinks on these two cards could be considered overkill.

System Power Consumption

For this test we hooked our power supply up to a UPM power meter that logs the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we used 15 minutes of Unigine Valley running on a loop, while letting the card sit at a stable Windows desktop for 15 minutes to determine peak idle power consumption.
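For reference, boiling that 2Hz log down to the figures we chart looks something like the sketch below. It assumes the meter’s output has been exported to a simple CSV of timestamped wattage readings; the column names are illustrative, not the UPM software’s actual format.

```python
import csv

def summarize_power_log(path, idle_seconds=900):
    """Split a 2Hz whole-system power log into its idle phase (first
    15 minutes) and load phase, then report the numbers we chart."""
    readings = []
    with open(path) as f:
        for row in csv.DictReader(f):
            readings.append((float(row["seconds"]), float(row["watts"])))

    idle = [w for t, w in readings if t < idle_seconds]
    load = [w for t, w in readings if t >= idle_seconds]

    return {
        "peak_idle_w": max(idle),   # worst-case desktop draw
        "peak_load_w": max(load),   # highest spike under Unigine Valley
        "avg_load_w": sum(load) / len(load),
    }

# Example usage once a run has been captured:
# print(summarize_power_log("upm_log.csv"))
```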

Over the last twenty or so pages, the benefits of AMD’s new RX 580 should have been abundantly evident. Its higher clock speeds bring about immediate benefits in every performance per dollar metric. But as they say, “there’s no such thing as a free lunch” and those frequency increases go hand in hand with a significant power consumption increase. The numbers are actually quite shocking.

Whereas the reference RX 480 was power hungry in its own right, the two RX 580 samples I have in-house ended up putting that card to shame. The XFX GTS XXX, which is the lower clocked of the pair, ended up matching a GTX 1070 Founders Edition card’s input needs, while the Nitro+ actually needed more juice than NVIDIA’s much higher performing $350 GPU.

Not only do numbers like these point towards a significant performance per watt shortfall against NVIDIA’s latest architecture (something which has always plagued Polaris) but they could also hamper these cards’ adoption into notebooks.

Overclocking Results – A Bucket of Frustration

With numerous voltage roadblocks and power walls being thrown up it seems like overclocking both graphics cards and CPUs has become a dead-end process as of late. No better example of this exists than the RX 580. With two samples in hand, I approached overclocking with a pretty open mind since I had double the chance of achieving something quite reasonable. As reality set in, so too did the frustration.


Get used to seeing a lot of these

So where do I start? First and foremost, every tool I used, from Wattman to MSI’s Afterburner, prevented the RX 580s from pushing more than a few MHz past stock. And no, you aren’t reading that incorrectly… a few MHz. Since AMD offers a strict 50mV voltage increase and what seems like an extremely hard cap on input power, there wasn’t anything more I could do.

To add insult to injury, the system would hard lock without exhibiting any rendering errors first. Typically a graphics card will throw a few telltale signs that an overclock isn’t stable before crashing. Not so with either of these cards; things went from perfectly stable to a hard driver crash within 2MHz. This points towards a vendor-imposed hard limit on these cards.

The resulting overclocks were nothing short of embarrassing. The XFX card ended up hitting 1405MHz from its original 1366MHz, while the Sapphire Nitro+ required its second 1450MHz BIOS to hit 1485MHz. The memory, meanwhile, did play along and I ended up at just over 8.7Gbps on both cards.

One word of warning to would-be overclockers though: even though the RX 580 may look stable at a given speed, it may not be. In the example below, I had boosted the XFX RX 580 to 1420MHz with an extra 50mV of voltage.

As you can see, while the system didn’t hard-lock, the overclock actually dialed itself back even further to 1399MHz. The only reason I caught this was GPU-Z’s ability to poll the clock rates at extremely short intervals. Meanwhile, AMD’s own Wattman was convinced the clock speed was sitting pretty at 1420MHz. With that in mind, make sure you test these cards with a proper tool to ensure their stability at whatever overclock you believe is dialed in.
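For anyone who wants to automate that sanity check instead of eyeballing GPU-Z, the idea is simply to poll the real clock at short intervals under load and flag any sample that falls meaningfully below the clock you dialed in. Here is a rough sketch along those lines, again assuming a Linux/amdgpu setup with the illustrative sysfs path from the earlier logging example.

```python
import re
import time

SCLK_PATH = "/sys/class/drm/card0/device/pp_dpm_sclk"  # illustrative path

def read_core_clock_mhz():
    # The active DPM state in pp_dpm_sclk is marked with '*'.
    with open(SCLK_PATH) as f:
        for line in f:
            if "*" in line:
                return int(re.search(r"(\d+)Mhz", line).group(1))
    return None

def watch_overclock(target_mhz, tolerance_mhz=10, duration_s=300):
    """Poll rapidly under load and report any silent downclocking."""
    dips = 0
    start = time.time()
    while time.time() - start < duration_s:
        mhz = read_core_clock_mhz()
        if mhz is not None and mhz < target_mhz - tolerance_mhz:
            dips += 1
            print(f"dip to {mhz}MHz at {time.time() - start:.1f}s")
        time.sleep(0.1)  # roughly ten samples per second
    print(f"{dips} samples fell below target; zero means the clock held.")

# Example: verify that a supposed 1420MHz overclock actually sticks.
# watch_overclock(1420)
```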

Conclusion: One of the Best Just Got Better

AMD intends for the RX 580 to become a true spiritual successor to the popular RX 480 and there’s no doubt in my mind it accomplishes exactly that. Despite many things pointing to the contrary, this is much more than a rebrand since these cards are being offered with higher clock speeds and some notable updates to their power consumption profiles. Some may argue that a 1340MHz speed bin for Polaris was already widely available on board partners’ overclocked SKUs but now AMD is making those higher clocks the new norm.

The true impact of those new frequencies upon raw performance is significant: there’s a delta between the RX 480 and RX 580 that fluctuates from 5% to 10% and points in between. Add in a card such as the Nitro+ and you’re looking at 7-13% in both averages and the lowest 1% of framerates, depending on the title being tested. While that might not seem like much, the RX 480 was already riding high after some key driver releases and this update pushes AMD even further ahead of the GTX 1060 6GB. From my perspective, NVIDIA needs to enact some quick price cuts before their mid-tier darling gets overwhelmed. As it stands right now, there isn’t a single scenario wherein I’d recommend the 1060 over the RX 580, especially with products like XFX’s RX 580 GTS XXX Edition sitting at about $240.

While it would be an understatement to say I’m stoked about what the RX 580 brings to the table, I have to temper my excitement with a quick call back to reality. There are two pretty serious (in my eyes at least) limitations that have to be discussed, both of which are a byproduct of pushing the Polaris 10 core to new heights. First and foremost amongst these is power consumption. With such high frequencies, this is one power hungry little bugger. In our tests it consumed just as much power as a reference GTX 1070, and at Sapphire’s higher factory clocks it required even more juice. For those keeping track at home, that’s a whole 70W higher than a GTX 1060 6GB Founders Edition. If you are building a small form factor system, this fact alone may eliminate the RX 580 from contention.

Then there’s overclocking and what can I say on that point? After hours of heartache and frustration I simply gave up trying to whip Wattman into line. The end result for both cards was less than a 60MHz (at best!) core clock increase. If I strayed even a half percent out of line, finer-grain monitoring tools like GPU-Z would show the core speed fluctuating madly even though Wattman registered the higher speed as being locked in. Go a bit further afield and a driver crash would necessitate a hard system reboot. Will more mature tools fix this? Perhaps but I’m not going to say things will change either. Doing so would be falling into the usual “wait and see” trap that doggedly follows on the heels of almost every launch these days.

I’m also going to throw up a flag of warning here too, not just for AMD but for NVIDIA as well. Had I been limited to the card that was originally sampled (the $280 Sapphire Nitro+ Limited Edition) my opinions about the RX 580’s overall value would have been very, very different. This highlights the danger of sampling premium cards at launch; they rarely, if ever, offer a good price / performance comparison. I understand the need to highlight a board partner’s offerings but there are plenty of other options that can and should be sampled before halo products.

So I’m going to wrap this up here and now. I think the RX 580 is the best possible drop-in upgrade solution that money can buy provided your PSU is up to the task of powering it and there’s full awareness of the very limited overclocking headroom. Not only can this card offer superlative 1080P performance but it has the chops to power through high detail level 1440P content as well. NVIDIA’s GTX 1060 6GB can’t even come close, even when EVGA’s Superclocked edition is thrown into the mix.

Personally I’d recommend looking at options like XFX’s GTS XXX Edition that hover around the $240 to $250 mark for the best bang for your buck. However, if you want something a bit more enthusiast-oriented, Sapphire’s Nitro+ Limited Edition will certainly fit the bill, though it is simply too expensive for this segment.
