
ASUS R9 Fury STRIX Review

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,857
Location
Montreal
When the R9 Fury was first launched, many gamers saw it as a near-perfect combination of performance, pricing and overall capabilities. Unfortunately, for the better part of six weeks after we initially reviewed it, actually finding one became a lesson in futility. Since then the stock situation has gradually improved and additional board partners have introduced their own designs, though retailers tell us that getting their hands on anything Fury-branded is a challenge. In this particular review, we will be looking at the ASUS R9 Fury STRIX.

The appeal surrounding AMD’s R9 Fury is well warranted since it rings in at $549 but can run alongside the more expensive R9 Fury X when overclocked. In addition, AMD’s board partners have been given the freedom to design custom boards with awesome cooling capabilities and upgraded component selections. This makes the Fury far more adaptable than the cumbersome R9 Fury X with its large water cooling solution.

The introduction of AMD’s R9 Nano may have stolen some of the R9 Fury’s thunder in the news but there’s little to no overlap between the two cards. While the Nano is an enticing yet expensive solution for small form factor builds, the Fury cards are typically massive due to their large cooling assemblies. We highly doubt the two would be cross-shopped but if they are, the ASUS R9 Fury STRIX would come out well ahead on the performance per dollar front.


With AMD tightly constraining the board partners’ allowable out-of-box overclocks, ASUS couldn’t do all that much on the performance front to distinguish their card from competitors’ solutions. If GPU Tweak isn’t installed, the R9 Fury STRIX will default to the reference core frequency of 1GHz. Meanwhile, utilizing the software’s built-in “OC Mode” bumps things up by a mere 20MHz, barely enough to warrant mention. The High Bandwidth Memory, on the other hand, receives no overclock at all, but that’s par for the course since AMD still doesn’t allow even manual changes to the onboard memory’s frequency.

Pricing on this card is a bit of a moving target since we’ve seen it range from $10 to $30 more than AMD’s $549 ($675 CAD) MSRP. For whatever reason, Sapphire’s competing Tri-X seems to be sticking to that $549 price point at most retailers. Some may call that the “ASUS premium” and it will be interesting to see if the extra money is really worthwhile.


The ASUS R9 Fury STRIX will likely look quite familiar to anyone who has been reading HWC lately since it uses the exact same DirectCU III cooler design as the GTX 980 Ti STRIX OC we reviewed a few weeks ago. Regardless of the similarities, the black, aluminum and red color design feels better suited to an AMD-branded card.

You may remember I said that this card likely won’t be used in too many SFF builds and that’s because of its length: at just over 12”, many AMD fans who want a high powered ITX system will likely go the Nano route. However, there’s a perfectly good explanation for its size: the DirectCU III heatsink has proven itself to be one of the very best on the market but it does take up a good amount of space.


Atop the card rests a trio of 92mm fans equipped with ASUS’ 0dB technology, allowing them to completely shut off when the GPU core is idling. They have also been designed with a canted outside edge which improves airflow and boosts static pressure so load speeds can be reduced for a quieter gaming experience.


Below the shroud is one of the most advanced coolers on the market today. It boasts a pair of gargantuan 10mm heatpipes alongside several smaller ones, all of which make direct contact with the Fury core. Supposedly, this design can easily handle up to 300W of thermal load which is significantly more than AMD’s flagship architecture can produce.

Hidden below all of this cooling goodness is a highly advanced all-digital 12-phase PWM loaded with ASUS’ Super Alloy Power II components. Super Alloy Power II is a relatively straightforward affair: ASUS picks components that enhance overall efficiency, have longer lifespans than previous designs and run at low thermal levels.

Another interesting addition to this generation of STRIX cards is what ASUS calls Auto-Extreme technology which is actually a whole group of additions to their board designs. This includes a completely automated manufacturing process which reduces human error, a new series of advanced quality control methods and flux-free production for increased component durability.


Around the back side of the R9 Fury STRIX is a full coverage anodized aluminum backplate with the usual owl logo and not much else. Since there aren’t any mission-critical components, ASUS isn’t providing any additional cooling here but it does give a great finished look to the whole affair.


Around the core is a red GPU Fortifier which is supposed to work with the backplate to reinforce the surrounding area. These days, PCB flex isn’t a huge concern but it is certainly a possibility when utilizing a large heatsink like ASUS’ DirectCU III.


The rear area also boasts a small PCB finger that sits proud of the backplate and incorporates a number of voltage read points. Considering how limited overclocking is on this card (more on that later) we doubt these would be put to good use unless a competitive overclocker was able to completely unlock the Fury’s voltage outputs.


Power input is handled by a pair of 8-pin connectors that are bordered by LEDs. These LEDs glow red when the power connector isn’t properly plugged in and white when a successful connection is made.


Lastly, there’s the usual assortment of display connectors which includes a trio of DisplayPort outputs, a single HDMI 1.4 and the ubiquitous DVI-D.

A Closer Look at ASUS' New GPU Tweak II



Note that the pictures below show the software running on a GTX 980 Ti.


The ASUS GPU Tweak software was initially conceived as a quick and easy way to overclock and monitor your graphics card. While the premise was straightforward, the initial version came off as a knock-off of better-established software like MSI’s Afterburner, RivaTuner and EVGA’s Precision. With GPU Tweak’s second iteration that similarity gets thrown out the window, replaced with a clean, modern and extremely usable design.

The primary landing page consists of three large dials indicating GPU speed (shown as a percentage of the reference card’s core frequency), temperature and GPU usage. There’s also a row of buttons listing the three preset modes of OC, Gaming and Silent. A fourth tab is provided wherein you can create your own clock speed and fan profile.

Tertiary buttons include a toggle for the 0dB fan technology, a link to the Professional Mode page which includes a bevy of overclocking options, a Game Booster function and a few additional areas that grant more control over the software and its functions.


The Professional Mode grants you full control over clock speeds, voltages, Power Limit, fan speeds and memory frequencies. There’s even an option to set up a completely custom fan speed curve. Once an overclock and other settings are dialed in, they can be saved to a predefined User Mode. It’s a really straightforward process that’s been rolled into an extremely approachable interface.
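For the curious, a custom fan curve like the one described above is conceptually just interpolation between user-set points. The sketch below is purely illustrative (the point values, function name and linear interpolation are our own assumptions, not ASUS’ actual implementation):

```python
def fan_duty(temp_c, curve):
    """Return a fan duty (%) for a core temperature, given a curve of
    (temperature, duty) points, using linear interpolation between them."""
    points = sorted(curve)
    if temp_c <= points[0][0]:
        return points[0][1]   # below the first point: clamp (fans can stay off)
    if temp_c >= points[-1][0]:
        return points[-1][1]  # above the last point: clamp to max duty
    for (t0, d0), (t1, d1) in zip(points, points[1:]):
        if t0 <= temp_c <= t1:
            # Linear interpolation between the two surrounding points
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

# Hypothetical curve: fans off below 55C (0dB-style), ramping to 80% at 90C
curve = [(55, 0), (65, 30), (80, 55), (90, 80)]
print(fan_duty(50, curve))    # 0 (fans stopped)
print(fan_duty(72.5, curve))  # 42.5 (halfway between the 65C and 80C points)
```

The flat region below the first point is what lets a 0dB-style mode keep the fans completely stopped at idle.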


GPU Tweak’s Monitor tab automatically opens when the application starts. As you might expect, this section gives readouts for every aspect of the card in question. Unfortunately, there’s no option to display “PerfCap” reasons like there is within EVGA’s Precision, and that’s a pretty major disappointment since those reasons let you see what’s holding back an overclock. ASUS also doesn’t give the option to log the readouts to a text file.


The Info tab loads an ASUS-branded version of GPU-Z along with a Live Update section. Live Update itself is absolutely brilliant since it allows you to download and install everything from a new version of GPU Tweak II to a new vBIOS with a minimum of hassle.


One of the more interesting additions to GPU Tweak II is its so-called Gaming Booster. It allows users to quickly modify and optimize their system for the best performance possible. Visual Effects can change the Windows default scheme from the typical Windows 8.1 theme to a theme without tab transparency. Meanwhile, System Services turns off non-critical Windows services and the Memory Defragmentation is supposed to re-arrange system memory resources without closing any processes.


ASUS also bundles a full year of the XSplit Gamecaster Premium service with many of their newest cards. That’s about $100 worth of value.
 

Performance Consistency & Temperatures Over Time



With such a well-endowed cooler there’s very little reason to worry about how the ASUS R9 Fury STRIX will handle the heat thrown out by AMD’s Fiji core. However, we decided to run these tests anyway since even the best heatsinks have been known to struggle in odd situations.


Actually seeing the DirectCU III heatsink in action is really something. It attacks temperatures with wild abandon, allowing the core to run at substantially lower temperatures than Sapphire’s Tri-X equipped card. That’s a mighty impressive result considering ASUS’ design is slightly more compact than Sapphire’s.


Clock speeds ended up exactly where we expected them to: at 1020MHz. Remember, this speed can only be achieved when using the new GPU Tweak’s so-called “OC Mode” while installing the card without accompanying software will result in a constant 1000MHz core frequency. We will use the OC Mode for all testing in this review.


Performance is within a few percentage points of Sapphire’s reference-speed card but the actual real-world framerate difference will be completely invisible to gamers.
 

Thermal Imaging / Acoustics / Power Consumption

Thermal Imaging



There’s really not all that much to see here. Since most of the Fury’s heat-generating components (minus the PWM) are clustered in close proximity to the core, ASUS has been able to focus their cooling efforts and the end result is nothing short of spectacular.


Acoustical Testing


What you see below are the baseline idle dB(A) results attained for a relatively quiet open-case system (specs are in the Methodology section) sans GPU along with the attained results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 12” away from the GPU’s fan. For the load scenarios, Hitman Absolution is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.


Acoustics are some of the best around, which is pretty much par for the course with ASUS’ DirectCU III cooler. The fans never really audibly ramp up, even after hours of testing. Meanwhile, the 0dB fan technology ensures the fans don’t spin at all in idle and low load scenarios.

One thing we do have to mention is coil / inductor whine. Our sample did suffer from it, but ONLY in situations where framerates were at stratospheric levels (200FPS and higher); in normal gaming situations of 150FPS and lower, the STRIX was as quiet as a church mouse.


System Power Consumption


For this test we hooked up our power supply to a UPM power meter that will log the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we used 15 minutes of Unigine Valley running on a loop while letting the card sit at a stable Windows desktop for 15 minutes to determine the peak idle power consumption.
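A 2Hz power log like the one described above reduces to the reported figures in a straightforward way; the sketch below shows one possible reduction (the function and field names are our own, not part of the UPM meter's software):

```python
def summarize_power_log(samples_watts, sample_rate_hz=2):
    """Reduce a list of power samples (watts) logged at a fixed rate
    to peak draw, average draw and the window's duration in seconds."""
    return {
        "peak_w": max(samples_watts),
        "avg_w": sum(samples_watts) / len(samples_watts),
        "duration_s": len(samples_watts) / sample_rate_hz,
    }

# Six idle samples at 2Hz = a 3 second window; peak idle draw is the max
idle_samples = [78, 80, 79, 81, 80, 79]
print(summarize_power_log(idle_samples)["peak_w"])  # 81
```

The same reduction applied to the 15-minute Unigine Valley window yields the load figures.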


This is a bit of an interesting result since it shows that low temperatures and perhaps even ASUS’ upgraded components do allow for potentially lower power consumption numbers. With that being said, the difference between the Sapphire and ASUS cards in this test was still well within the margin of error so we may be jumping at shadows by assuming the difference is anything other than sample to sample variance.
 

Test System & Setup




Processor: Intel i7 4930K @ 4.7GHz
Memory: G.Skill Trident 16GB @ 2133MHz 10-10-12-29-1T
Motherboard: ASUS P9X79-E WS
Cooling: NH-U14S
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell U2713HM (1440P) / ASUS PQ321Q (4K)
OS: Windows 8.1 Professional


Drivers:
AMD 15.201.1102 (R9 Nano)
AMD 15.7.1
NVIDIA 352.90


*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 2 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement from FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate is taken by dividing the total number of rendered frames by the length of the benchmark run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values, meanwhile, are simply the two data points representing the single-second intervals which took the longest and shortest amounts of time to render. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance, and it isn’t quite representative of what you’ll actually see on the screen.
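The averaging described above can be sketched in a few lines; given per-second frame counts logged FRAPS-style, the summary figures fall out directly (the function name is ours, purely for illustration):

```python
def fps_summary(frames_per_second):
    """Given a list of per-second rendered-frame counts, return the
    min/max/average FPS figures a FRAPS-style log would report."""
    total_frames = sum(frames_per_second)
    duration_s = len(frames_per_second)  # one sample per second
    return {
        "avg": total_frames / duration_s,
        "min": min(frames_per_second),
        "max": max(frames_per_second),
    }

# A 60-second run that renders 4,000 frames total averages 66.67 FPS,
# matching the worked example in the text
counts = [67] * 40 + [66] * 20
s = fps_summary(counts)
print(round(s["avg"], 2))  # 66.67
```

Note how narrow this view is: a run full of 16ms frames with occasional 100ms hitches can produce exactly the same three numbers as a perfectly smooth one.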

FCAT, on the other hand, has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what can actually happen within each second of time? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool gets factored into the equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond-by-millisecond evaluation rather than averaging things out over a full second. The longer a frame takes to render, the larger its frame time, and this detailed reporting just isn’t possible with standard benchmark methods.
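To make the contrast with per-second averaging concrete, here is a minimal sketch of frame time analysis: converting per-frame render times to an average FPS while flagging individual long frames. The threshold and names are our own illustrative choices, not those of any particular tool:

```python
def analyze_frame_times(frame_times_ms, stutter_ms=33.3):
    """Summarize a list of per-frame render times (milliseconds),
    flagging any frame longer than the stutter threshold (~30 FPS)."""
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    stutters = [t for t in frame_times_ms if t > stutter_ms]
    return {
        "avg_fps": 1000.0 / avg_ms,
        "worst_frame_ms": max(frame_times_ms),
        "stutter_count": len(stutters),
    }

# Ten smooth 16.7ms frames plus one 50ms hitch: average FPS still looks
# healthy, but the single long frame is a visible stutter that a
# min/max/average FPS summary would completely hide
result = analyze_frame_times([16.7] * 10 + [50.0])
print(result["stutter_count"])  # 1
```

That single flagged frame is exactly the kind of event the FPS-over-time graphs smooth away.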

We are now using FCAT for ALL benchmark results, other than 4K.
 

1440P: AC:Unity / Battlefield 4

Assassin’s Creed: Unity


<iframe width="640" height="360" src="https://www.youtube.com/embed/8V96SFIvFKg?rel=0" frameborder="0" allowfullscreen></iframe>

While it may not be the newest game around and it had its fair share of embarrassing hiccups at launch, Assassin's Creed: Unity is still one heck of a good looking DX11 title. In this benchmark we run through a typical gameplay sequence outside in Paris.




Battlefield 4


<iframe width="640" height="360" src="//www.youtube.com/embed/y9nwvLwltqk?rel=0" frameborder="0" allowfullscreen></iframe>

In this sequence, we use the Singapore level which combines three of the game’s major elements: a decayed urban environment, a water-inundated city and finally a forested area. We chose not to include multiplayer results simply because their inherent randomness makes apples-to-apples comparisons impossible.



1440P: Dragon Age: Inquisition / Dying Light

Dragon Age: Inquisition


<iframe width="640" height="360" src="https://www.youtube.com/embed/z7wRSmle-DY" frameborder="0" allowfullscreen></iframe>

Dragon Age: Inquisition is one of the most popular games around due to its engaging gameplay and open-world style. In our benchmark sequence we run through two typical areas: a busy town and through an outdoor environment.





Dying Light


<iframe width="640" height="360" src="https://www.youtube.com/embed/MHc6Vq-1ins" frameborder="0" allowfullscreen></iframe>

Dying Light is a relatively late addition to our benchmarking process but with good reason: it required multiple patches to optimize performance. While one of the patches handicapped viewing distance, this is still one of the most demanding games available.


 

1440P: Far Cry 4 / Grand Theft Auto V

Far Cry 4


<iframe width="640" height="360" src="https://www.youtube.com/embed/sC7-_Q1cSro" frameborder="0" allowfullscreen></iframe>

The latest game in Ubisoft’s Far Cry series takes up where the others left off by boasting some of the most impressive visuals we’ve seen. In order to emulate typical gameplay we run through the game’s main village, head out through an open area and then transition to the lower areas via a zipline.




Grand Theft Auto V


In GTA V we take a simple approach to benchmarking: the in-game benchmark tool is used. However, due to the randomness within the game itself, only the last sequence is actually used since it best represents gameplay mechanics.


 

1440P: Hitman Absolution / Middle Earth: Shadow of Mordor

Hitman Absolution


<iframe width="560" height="315" src="http://www.youtube.com/embed/8UXx0gbkUl0?rel=0" frameborder="0" allowfullscreen></iframe>

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around and this time Agent 47 goes rogue, so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level, one of the most graphically intensive areas of the entire game. It features an environment virtually bathed in rain and puddles, making for numerous reflections and complicated lighting effects.




Middle Earth: Shadow of Mordor


<iframe width="640" height="360" src="https://www.youtube.com/embed/U1MHjhIxTGE?rel=0" frameborder="0" allowfullscreen></iframe>

With its high resolution textures and several other visual tweaks, Shadow of Mordor’s open world is also one of the most detailed around. This means it puts massive load on graphics cards and should help point towards which GPUs will excel at next generation titles.


 

1440P: Thief / Tomb Raider

Thief


<iframe width="640" height="360" src="//www.youtube.com/embed/p-a-8mr00rY?rel=0" frameborder="0" allowfullscreen></iframe>

When it was released, Thief was arguably one of the most anticipated games around. From a graphics standpoint, it is something of a tour de force. Not only does it look great but the engine combines several advanced lighting and shading techniques that are among the best we’ve seen. One of the most demanding sections is actually within the first level where you must scale rooftops amidst a thunderstorm. The rain and lightning flashes add to the graphics load, though the flashes occur randomly so you will likely see interspersed dips in the charts below.




Tomb Raider


<iframe width="560" height="315" src="http://www.youtube.com/embed/okFRgtsbPWE" frameborder="0" allowfullscreen></iframe>

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. This happens to not only be one of the most popular games around but it is also one of the best looking by using the entire bag of DX11 tricks to properly deliver an atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.



 