The AMD Radeon RX 470 4GB Review

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
13,264
Location
Montreal
If I said I wanted to have a frank discussion about entry-level graphics cards, many of you would likely respond with a resounding “Yeah, let’s not”. You’d then go on to read about the affordable yet high-flying RX 480 or the equally impressive GTX 1060 and call it a day. Historically the reason for this reaction was pretty straightforward: the GPUs within the $200 to $300 range are typically the ones which provide the most bang for your buck. With the RX 470, AMD is attempting to change that equation in a big way by providing a low-cost gaming solution without offering up performance like a sacrificial lamb.

The sub-$200 segment has always been an interesting one for graphics cards and I use the term “interesting” loosely. Truth be told, it’s been likened to the garbage bin of GPU performance since cards within it may be low priced but paying just a few bucks more usually netted you a much better gaming experience. Nonetheless, it is here where vendors achieve their volume shipments. These are the GPUs bought by the vast majority of budding gamers, are included in many pre-configured off the shelf systems, populate thousands of computers in Asia’s “PC Bang” facilities and are the darlings of eSports leagues. That’s a massive potential market for the AMD RX 470 and with its specifications and feature set, there’s a good chance it could become a dominating presence in short order.


I typically don’t get too excited about these budget-focused cards and as a matter of fact, we typically don’t review them. In my eyes at least, the RX 470 is different since with it AMD is trying to move the yardsticks forward by a country mile. It does this by utilizing a slightly cut-down 14nm Polaris 10 core with four SIMD arrays locked. This isn’t just a replacement for the R7 370, it could actually be a suitable step forward for R9 380 and R9 380X users too. It looks like the days of incremental upgrades for entry-level gamers are about to become a thing of the past. But could the RX 470 actually be considered an "entry level" card?

Speaking of incremental upgrades, I think a little bit of a history lesson is needed. Over the last three years the vast majority of AMD’s solutions within the $99 to $199 range have been rebrands. On one hand that points towards the GCN architecture’s staying power but it also means NVIDIA has been left nearly unchallenged when it comes to inter-generational performance and efficiency improvements. As such, a drastic change was needed.


The RX 470 steps into this world with 2048 Stream Processors, 128 Texture Units and 32 ROPs, which aligns perfectly with AMD’s outgoing R9 380X. Meanwhile, the Polaris architecture’s ability to hit substantially higher core frequencies than its predecessors will also prove to be a major point of differentiation, and we’ll likely see this card hitting 1206MHz. When you add all of these elements together and take into consideration the numerous baseline architecture-level improvements Polaris brings to the table, there’s a very good chance AMD’s RX 470 could nearly double the R7 370’s in-game framerates.

On the memory side of this equation, there’s 4GB of GDDR5 but the maturity of this memory technology has allowed for some additional speed bins which weren’t available when the 300-series was launched. In the RX 470’s case that means modules running at 6.6Gbps across a 256-bit wide bus.
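For those who like to check the math, those two specifications fully determine peak memory bandwidth. A quick back-of-the-napkin sketch in Python, using only the figures quoted above:

```python
# Peak GDDR5 bandwidth = effective data rate per pin (Gbps) * bus width (bits) / 8
data_rate_gbps = 6.6    # effective speed of the RX 470's memory modules
bus_width_bits = 256    # width of the memory interface

bandwidth_gb_per_s = data_rate_gbps * bus_width_bits / 8
print(f"Peak memory bandwidth: {bandwidth_gb_per_s:.1f} GB/s")  # 211.2 GB/s
```

That 211.2 GB/s sits a bit below the RX 480 8GB's 256 GB/s, which lines up with the cut-down positioning.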

All of this has been accomplished without any increases in power consumption versus the previous generation but that doesn’t necessarily mean the RX 470 will be a performance per watt leader. While the Polaris architecture’s 14nm FinFET manufacturing process has obviously allowed for vast improvements in efficiency, AMD is still facing an uphill battle on the TDP front. This card may have a 120W TDP but that’s equal to NVIDIA’s GTX 1060 6GB, a card which offers substantially more performance. However, within AMD’s present and past lineups the RX 470 seems to be a superstar in this regard.

Pricing is also a key component of the RX 470 but that’s a bit more of a moving target since AMD is considering this to be a “virtual” launch. As such, there won’t be a reference design thrown into the retail channels (though large system builders will likely receive cards directly from AMD) and board partners will be taking on responsibility for designing cards around that cut-down Polaris 10 PRO core. That also means pricing could vary quite wildly, with some cards hitting well north of AMD’s stated $179 price, like the MSI RX 470 4GB Gaming X we are reviewing today which comes in at a hefty $199.

So let’s discuss that price for a moment since it causes something of a logjam within a number of segments and it highlights what I was talking about earlier when I said this market is exceedingly cluttered. The RX 470 obviously does nothing to help that perception. At $179 it is a stone’s throw away from the RX 480 4GB which in itself offers nearly identical performance as its 8GB sibling at 1080P. Add in the premium for a card like the Gaming X and we’re talking about identical pricing despite the 470’s diminished specs. This is a real head scratcher for me.


The MSI RX 470 4GB Gaming X is obviously meant to be a premium solution and it follows closely in the footsteps of its predecessors, albeit with a pretty minimal core overclock. In this case its anemic 48MHz core bump and 100MHz memory increase (when its OC mode is enabled, which it was for this review) won’t allow for all that much differentiation from reference-clocked models but realistically, there will be some.

For the record, I'm still in the process of working with AMD's partners to get a $179, stock-clocked RX 470 into my hands so we can see what baseline performance looks like.


So what does that extra $20 pay for? In this case it’s MSI’s awesome Twin Frozr VI cooler and enhanced components. Naturally the Gaming X also includes LED lights around its heatsink’s “gills”, an 8-pin power input connector, supposedly-increased overclocking headroom and some striking good looks. This is all wrapped up into a package that isn’t all that compact at just over 10 ¾”.

To be perfectly candid with you, the first thought I had when AMD announced the $179 price tag was a shocked “well that can’t be right”. That was quickly followed by an assumption the card they sent – MSI’s obviously-upgraded Gaming X – was actually $179 but that proved to be wrong too. Unless that core overclock can have an impact or you value heavily upgraded cards, this one may end up being a hard sell in some people's eyes. All is not lost though since at the lower end of the market, every few dollars can have a significant impact upon the overall value quotient.
 

Test System & Setup



Processor: Intel i7 5960X @ 4.3GHz
Memory: G.Skill Trident X 32GB @ 3000MHz 15-16-16-35-1T
Motherboard: ASUS X99 Deluxe
Cooling: NH-U14S
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell U2713HM (1440P) / Acer XB280HK (4K)
OS: Windows 10 Pro


Drivers:
AMD Radeon Software 16.7.2
AMD Radeon Software 16.8.1 (RX 4xx series)
NVIDIA 368.14 WHQL
NVIDIA 368.146 Beta (GTX 1060)

Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values together gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
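The averaging method described above is simple enough to sketch in a few lines of Python (purely illustrative of the math; FRAPS does all of this internally):

```python
def average_fps(total_frames: int, duration_seconds: float) -> float:
    """Average FPS: total rendered frames divided by the benchmark length."""
    return total_frames / duration_seconds

# The example from the text: 4,000 frames over a 60 second sequence
print(round(average_fps(4000, 60), 2))  # 66.67

# Min/max are just the slowest and fastest one-second samples
# from a per-second frame log (hypothetical counts shown here)
per_second_frames = [58, 61, 70, 66, 72, 64]
print(min(per_second_frames), max(per_second_frames))  # 58 72
```

As the text says, three numbers out of a whole benchmark run is a very narrow snapshot.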

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can actually report them. So what can actually happen within each second of time? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool gets factored into this equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The larger the amount of time, the longer each frame takes to render. This detailed reporting just isn’t possible with standard benchmark methods.
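To see why this matters, here’s a small sketch using a hypothetical frame time log. A single 50ms hitch, which a player would feel as a stutter, barely dents the average at all:

```python
def frame_time_to_fps(frame_time_ms: float) -> float:
    """Instantaneous FPS equivalent of a single frame time in milliseconds."""
    return 1000.0 / frame_time_ms

# Hypothetical log (ms): one 50ms spike hiding inside an otherwise smooth run
frame_times = [16.7, 16.5, 17.0, 50.0, 16.8, 16.6]

avg_ms = sum(frame_times) / len(frame_times)
print(f"Average: {frame_time_to_fps(avg_ms):.0f} FPS")   # ~45 FPS, looks fine on paper
worst_ms = max(frame_times)
print(f"Worst frame: {worst_ms} ms "
      f"({frame_time_to_fps(worst_ms):.0f} FPS equivalent)")  # a momentary 20 FPS dip
```

The per-frame view exposes that 20 FPS-equivalent spike, which a plain second-by-second average would smooth right over.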

We are now using FCAT for ALL benchmark results in DX11.


DX12 Benchmarking


For DX12 many of these same metrics can be utilized through a simple program called PresentMon. Not only does this program have the capability to log frame times at various stages throughout the rendering pipeline but it also grants a slightly more detailed look into how certain API and external elements can slow down rendering times.

Since PresentMon throws out massive amounts of frametime data, we have decided to distill the information down into slightly more easy-to-understand graphs. Within them, we have taken several thousand datapoints (in some cases tens of thousands), converted the frametime milliseconds over the course of each benchmark run to frames per second and then graphed the results. This gives us a straightforward framerate over time graph. Meanwhile the typical bar graph averages out every data point as it’s presented.
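That frametime-to-framerate conversion can be sketched as follows. This is not PresentMon itself, just an illustration of the binning we describe, fed with a made-up log:

```python
from collections import defaultdict

def fps_over_time(frame_times_ms):
    """Bin a stream of frame times (ms) into per-second frame counts,
    the way a framerate-over-time graph presents its data."""
    buckets = defaultdict(int)
    elapsed_ms = 0.0
    for ft in frame_times_ms:
        elapsed_ms += ft
        buckets[int(elapsed_ms // 1000)] += 1  # which second this frame landed in
    return [buckets[second] for second in sorted(buckets)]

# Hypothetical log: ~2.5 seconds of steady 16.7ms (~60 FPS) frames
log = [16.7] * 150
print(fps_over_time(log))  # [59, 60, 31] - two full seconds plus a partial one
```

The partial final bucket is one reason the graphed line and the bar-graph average won’t always agree to the frame.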

One thing to note is that our DX12 PresentMon results cannot and should not be directly compared to the FCAT-based DX11 results. They should be taken as a separate entity and discussed as such.
 

Performance Consistency Over Time


Modern graphics card designs make use of several advanced hardware- and software-facing algorithms in an effort to hit an optimal balance between performance, acoustics, voltage, power and heat output. Traditionally this leads to maximized clock speeds within a given set of parameters. Conversely, if one of those last two metrics (heat and power consumption) steps into the equation in a negative manner, it is quite likely that voltages and resulting core clocks will be reduced to ensure the GPU remains within design specifications. We’ve seen this happen quite aggressively on some AMD cards while NVIDIA’s reference cards also tend to fluctuate their frequencies. To be clear, this is a feature by design rather than a problem in most situations.

In many cases clock speeds won’t be touched until the card in question reaches a preset temperature, whereupon the software and onboard hardware will work in tandem to carefully regulate other areas such as fan speeds and voltages to ensure maximum frequency output without an overly loud fan. Since this algorithm typically doesn’t kick into full force in the first few minutes of gaming, the “true” performance of many graphics cards won’t be realized through a typical 1-3 minute benchmarking run. That’s why we use a 10-minute warm-up period before all of our benchmarks.


The Polaris 10 core was never all that hot running, but when you cut down a few SIMD units and add MSI’s Twin Frozr VI heatsink on top of things, thermals obviously don’t pose any problems. Not even close.


These are honestly some of the lowest fan speeds I have ever seen from any graphics card. That bodes quite well for acoustics but were these ultra low rotational speeds achieved through frequency sacrifices instead?


The answer to that question is a resounding “absolutely not”. MSI states this card boosts to 1254MHz and that’s where it remains. One thing that is interesting: contrary to AMD’s claims that their new PowerTune algorithms are able to take advantage of additional thermal headroom to achieve higher clocks, that doesn’t seem to be happening here at all. That’s a bit odd and, after you see this card’s performance in the upcoming benchmarks, one possible explanation emerges: it could be artificially capped at a preset speed to ensure these cards don’t exceed the RX 480 4GB’s framerates.


With a constant clock speed we’re seeing no movement in the framerates… naturally.
 

DX11 / 1080P: Doom / Fallout 4

Doom (OpenGL)


Not many people saw a new Doom as a possible Game of the Year contender but that’s exactly what it has become. Not only is it one of the most intense games currently around but it looks great and is highly optimized. In this run-through we use Mission 6: Into the Fire since it features relatively predictable enemy spawn points and a combination of open air and interior gameplay.




Fallout 4


The latest iteration of the Fallout franchise is a great looking game with all of its details turned up to their highest levels, but it also requires a huge amount of graphics horsepower to properly run. For this benchmark we complete a run-through from within a town, shoot up a vehicle to test performance when in combat and finally end atop a hill overlooking the town. Note that VSync has been forced off within the game's .ini file.


 

DX11 / 1080P: Far Cry 4 / Grand Theft Auto V

Far Cry 4


This entry in Ubisoft’s Far Cry series picks up where the others left off by boasting some of the most impressive visuals we’ve seen. In order to emulate typical gameplay we run through the game’s main village, head out through an open area and then transition to the lower areas via a zipline.




Grand Theft Auto V


In GTA V we take a simple approach to benchmarking: the in-game benchmark tool is used. However, due to the randomness within the game itself, only the last sequence is actually used since it best represents gameplay mechanics.


 

DX11 / 1080P: Hitman / Overwatch

Hitman (2016)


The Hitman franchise has been around in one way or another for the better part of a decade and this latest version is arguably the best looking. Adjustable to both DX11 and DX12 APIs, it has a ton of graphics options, some of which are only available under DX12.

For our benchmark we avoid using the in-game benchmark since it doesn’t represent actual in-game situations. Instead the second mission in Paris is used. Here we walk into the mansion, mingle with the crowds and eventually end up within the fashion show area.





Overwatch


Overwatch happens to be one of the most popular games around right now and while it isn’t particularly stressful upon a system’s resources, its Epic setting can provide a decent workout for all but the highest end GPUs. In order to eliminate as much variability as possible, for this benchmark we use a simple “offline” Bot Match so performance isn’t affected by outside factors like ping times and network latency.


 

DX11 / 1080P: Rise of the Tomb Raider/ SW Battlefront

Rise of the Tomb Raider


Another year and another Tomb Raider game. This time Lara’s journey continues through various beautifully rendered locales. Like Hitman, Rise of the Tomb Raider has both DX11 and DX12 API paths and incorporates a completely pointless built-in benchmark sequence.

The benchmark run we use is within the Soviet Installation level where we start at about the midpoint, run through a warehouse with some burning items and then finish inside a fenced-in area during a snowstorm.




Star Wars Battlefront


Star Wars Battlefront may not be one of the most demanding games on the market but it is quite widely played. It also looks pretty good due to it being based upon DICE’s Frostbite engine and has been highly optimized.

The benchmark run in this game is pretty straightforward: we use the AT-ST single player level since it has predetermined events and it loads up on many in-game special effects.



 

DX11 / 1080P: The Division / Witcher 3

The Division


The Division has some of the best visuals of any game available right now even though its graphics were supposedly downgraded right before launch. Unfortunately, actually benchmarking it is a challenge in and of itself. Due to the game’s dynamic day / night and weather cycle it is almost impossible to achieve a repeatable run within the game itself. With that taken into account we decided to use the in-game benchmark tool.




Witcher 3


Other than being one of 2015’s most highly regarded games, The Witcher 3 also happens to be one of the most visually stunning. This benchmark sequence has us riding through a town and running through the woods; two elements that will likely take up the vast majority of in-game time.


 

DX12 / 1080P: Ashes of the Singularity / Hitman

Ashes of the Singularity


Ashes of the Singularity is a real time strategy game on a grand scale, very much in the vein of Supreme Commander. While this game is best known for its asynchronous compute workloads through the DX12 API, it also happens to be pretty fun to play. While Ashes has a built-in performance counter alongside its built-in benchmark utility, we found it to be highly unreliable, often posting substantial run-to-run variation. With that in mind we still used the onboard benchmark since it eliminates the randomness that arises when actually playing the game, but utilized the PresentMon utility to log performance.




Hitman (2016)


The Hitman franchise has been around in one way or another for the better part of a decade and this latest version is arguably the best looking. Adjustable to both DX11 and DX12 APIs, it has a ton of graphics options, some of which are only available under DX12.

For our benchmark we avoid using the in-game benchmark since it doesn’t represent actual in-game situations. Instead the second mission in Paris is used. Here we walk into the mansion, mingle with the crowds and eventually end up within the fashion show area.



 

DX12 / 1080P: Quantum Break / Rise of the Tomb Raider

Quantum Break


Years from now people likely won’t be asking if a GPU can play Crysis, they’ll be asking if it was up to the task of playing Quantum Break with all settings maxed out. This game was launched as a horribly broken mess but it has evolved into an amazing looking tour de force for graphics fidelity. It also happens to be a performance killer.

Though finding an area within Quantum Break to benchmark is challenging, we finally settled upon the first level where you exit the elevator and find dozens of SWAT team members frozen in time. It combines indoor and outdoor scenery along with some of the best lighting effects we’ve ever seen.





Rise of the Tomb Raider


Another year and another Tomb Raider game. This time Lara’s journey continues through various beautifully rendered locales. Like Hitman, Rise of the Tomb Raider has both DX11 and DX12 API paths and incorporates a completely pointless built-in benchmark sequence.

The benchmark run we use is within the Soviet Installation level where we start at about the midpoint, run through a warehouse with some burning items and then finish inside a fenced-in area during a snowstorm.


 
