
ASUS GTX 970 STRIX OC Review

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,857
Location
Montreal
The GTX 970 presents both a problem and an opportunity for NVIDIA and their board partners. On one hand, its low price of just $329 relative to its expected performance has resulted in a popularity surge of extreme proportions. At face value that's certainly a good thing, but actually finding one of these cards in stock is a lesson in futility. Nonetheless, from all the reports we've seen, retailers expect regular shipments, and with no AMD competitor on the horizon, that could allow NVIDIA to whittle some valuable market share away from their archrival.

If the GTX 980 showed us anything, it's that NVIDIA's GM204 core is nearly impossible to compete with in a pure performance per watt battle. However, the GTX 970 finds itself in a hotly contested area, even with its price of $329. This means it needs to eke out a performance win over AMD's competing cards before any price cuts hit the current Radeon lineup.


Now before we get any further in this review, it's important to understand a bit more about the GTX 970 and its place within the wider graphics market. Since our sample was delayed by the lovely folks at Canada Customs, this review will serve as our de facto introduction to the slightly lower-end Maxwell core. However, most of the particulars were already covered in our GTX 980 launch day article, so if you are looking for a more in-depth overview of the architecture and features that Maxwell brings to the table, head over there first.

The GTX 970 uses the same GM204 architecture as its bigger brother but has a trio of SMMs shaved off. As with every other architecture out there, this core design is one created through necessity since any dies that don't pass muster for inclusion in NVIDIA's GTX 980 get rolled into the GTX 970. The end result is a GPU with 1664 CUDA cores and 104 Texture Units. The GM204's back-end structure of 64 ROPs and a 256-bit memory interface remains intact, though.

NVIDIA’s target for this card should be evident from the get-go: it is meant to replace their GTX 770 (which has now been EOL’ed) while putting the screws to AMD’s R9 290, which still retails for around $350. Meanwhile, overclocked versions will reach higher levels in both performance and price.


Another interesting aspect of the GTX 970’s launch is the lack of a reference design. While NVIDIA has given board design guidelines and minimum clock speeds that need to be adhered to, board partners have been given a free hand to launch their own versions. This has led to the vast majority of GTX 970 cards boasting high end cooling designs and enhanced frequencies. It also means that designating a “standard” GTX 970 is almost impossible.

This all brings us to ASUS’ GTX 970 STRIX OC, one of the most popular GTX 970s around if forum chatter is any indication. With a Boost frequency of 1253MHz (though the memory remains at 7Gbps), it also happens to be one of the fastest examples on the market despite retailing for just $10 more than NVIDIA’s suggested base price.

With the GTX 970 being essentially sold out no matter where you look, this review may be a bit bittersweet right now but it won't remain that way. But what makes the ASUS STRIX OC unique and worthy of your attention? A whole lot it seems...

 
A Closer Look at the GTX 970 STRIX OC



The design for ASUS’ GTX 970 STRIX largely follows the same path as their GTX 780 STRIX. The card receives a large DirectCU II heatsink atop which a pair of large 92mm fans are attached. While the heatsink itself extends past the PCB’s edge and increases length to about 11”, its size has some ancillary benefits as well. Alongside ASUS’ 0dB Fan Technology, it can efficiently cool the core in lower load scenarios all by itself. This means the STRIX can operate completely silently provided there’s sufficient airflow and load remains at a manageable level.


Even when the fans do engage, ASUS’ STRIX remains one of the quietest cards around since their sheer size ensures that lower RPMs can be used to achieve adequate airflow. There’s also a massive 10mm heatpipe tucked away under the predominantly black shroud.


Moving around to the side, we can see that 10mm heatpipe make an appearance along with ASUS’ unique owl-based STRIX logo. This version of the DirectCU II design does tend to dump all of its hot air back into the card’s immediate confines, but the GTX 970 doesn’t output all that much heat so that concern has been partially nullified. It should be noted that the heatpipe protrudes from the card by a good inch, which means that slimmer cases may have some issues with it.


ASUS has also installed a secondary backplate heatsink for additional cooling and, in order to ensure easy access to the SLI connectors, they have been placed on a PCB outcropping. It may look odd but it actually works quite well when you’re trying to set up a dual card system.


Most of this card’s action happens below the shroud, so to speak. ASUS has equipped the GTX 970 STRIX with an all-digital 6-phase PWM that features their Super Alloy component selection. This means upgraded MOSFETs that can handle higher voltage thresholds and capacitors that feature 2.5x the lifespan of standard caps. For anyone who cares about silence, there are concrete-core chokes that are supposed to cut down on the telltale buzzing some cards exhibit when under load. Considering this card is only $10 more than NVIDIA’s stated “reference” cost, all of these things are just icing on the cake.


Since NVIDIA is giving their board partners freedom to design their cards as they see fit, ASUS has taken an interesting path. The STRIX is equipped with a single 8-pin power connector while the backplate uses a design that mirrors the GTX 7xx series rather than the GTX 980. With a single HDMI 2.0 port, a DisplayPort and a pair of DVI connectors, you’ll still be able to power a 4K display but running a trio of them isn’t possible.
 

Test System & Setup

Main Test System

Processor: Intel Core i7-4930K @ 4.7GHz
Memory: G.Skill Trident 16GB @ 2133MHz 10-10-12-29-1T
Motherboard: ASUS P9X79-E WS
Cooling: NH-U14S
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell 2412M (1440P) / ASUS PQ321Q (4K)
OS: Windows 8.1 Professional


Drivers:
AMD 14.7 Beta
NVIDIA 344.07 Beta


Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 2 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values, meanwhile, are simply two data points representing the single-second intervals during which the fewest and most frames were rendered. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance, and it isn’t quite representative of what you’ll actually see on the screen.
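The FPS math above can be sketched in a few lines of Python; the per-second frame counts here are invented sample data, not actual benchmark results.

```python
# FRAPS-style min/avg/max FPS math, as described above.
# The per-second frame counts are invented sample data.
frames_per_second = [70, 66, 60, 72, 65]  # frames rendered in each 1 s interval

total_frames = sum(frames_per_second)
duration_s = len(frames_per_second)

avg_fps = total_frames / duration_s        # total frames / benchmark length
min_fps = min(frames_per_second)           # slowest single-second interval
max_fps = max(frames_per_second)           # fastest single-second interval

print(f"avg: {avg_fps:.2f} FPS, min: {min_fps} FPS, max: {max_fps} FPS")
```

Note how narrow this snapshot is: a run could post a healthy-looking average while hiding severe stutter inside individual seconds.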

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what can actually happen within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool gets factored into the equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond-by-millisecond evaluation of performance rather than averaging things out over a full second. The larger the value, the longer that particular frame took to render. This detailed reporting just isn’t possible with standard benchmark methods.
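By way of contrast, here is a minimal sketch of the frame-time approach: measuring the interval between successive frame timestamps exposes a single 50ms hitch that a one-second FPS average would smooth right over. The timestamps are invented sample data.

```python
# Frame-time math: intervals between successive frame timestamps, in ms.
# Timestamps (in seconds) are invented sample data.
timestamps_s = [0.000, 0.016, 0.033, 0.083, 0.100]

frame_times_ms = [(b - a) * 1000.0
                  for a, b in zip(timestamps_s, timestamps_s[1:])]

# Anything slower than ~33 ms (under 30 FPS for that one frame) reads as a
# stutter, even though the overall average framerate may still look healthy.
spikes = [t for t in frame_times_ms if t > 33.3]

print([round(t) for t in frame_times_ms], "spikes:", len(spikes))
```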

We are now using FCAT for ALL benchmark results, other than 4K.
 

Assassin’s Creed IV: Black Flag / Battlefield 4

Assassin’s Creed IV: Black Flag


<iframe width="640" height="360" src="//www.youtube.com/embed/YFgGnFoRAXU?rel=0" frameborder="0" allowfullscreen></iframe>​

The fourth iteration of the Assassin’s Creed franchise is the first to make extensive use of DX11 graphics technology. In this benchmark sequence, we proceed through a run-through of the Havana area which features plenty of NPCs, distant views and high levels of detail.


2560 x 1440




Battlefield 4


<iframe width="640" height="360" src="//www.youtube.com/embed/y9nwvLwltqk?rel=0" frameborder="0" allowfullscreen></iframe>​

With its many teething problems since release, BF4 has been a bone of contention among gamers. In this sequence, we use the Singapore level, which combines three of the game’s major elements: a decayed urban environment, a water-inundated city and finally a forested area. We chose not to include multiplayer results simply due to the randomness they inject, which makes apples-to-apples comparisons impossible.

2560 x 1440


 

Call of Duty: Ghosts / Far Cry 3

Call of Duty: Ghosts


<iframe width="640" height="360" src="//www.youtube.com/embed/gzIdSAktyf4?rel=0" frameborder="0" allowfullscreen></iframe>​

The latest Call of Duty game may have been ridiculed for its lackluster gameplay but it remains one of the best-looking games out there. Unfortunately, due to mid-level asset loads, getting a “clean” runthrough without random slowdowns is nearly impossible, even with a dual SSD system like ours. As a result, any massive framerate dips should be ignored as they are anomalies caused by poor loading optimization. For this benchmark we used the first sequence of the fifth chapter, entitled Homecoming, as every event is scripted so runthroughs will be nearly identical.

2560 x 1440




Far Cry 3


<iframe width="560" height="315" src="http://www.youtube.com/embed/mGvwWHzn6qY?rel=0" frameborder="0" allowfullscreen></iframe>​

One of the best looking games in recent memory, Far Cry 3 has the capability to bring even the fastest systems to their knees. Its use of nearly the entire repertoire of DX11’s tricks may come at a high cost but with the proper GPU, the visuals will be absolutely stunning.

To benchmark Far Cry 3, we used a typical run-through which includes several in-game environments such as a jungle, in-vehicle and in-town areas.



2560 x 1440


 

Hitman Absolution / Metro: Last Light

Hitman Absolution


<iframe width="560" height="315" src="http://www.youtube.com/embed/8UXx0gbkUl0?rel=0" frameborder="0" allowfullscreen></iframe>​

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around, and this time Agent 47 goes rogue so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level, which is one of the most graphically-intensive areas of the entire game. It features an environment virtually bathed in rain and puddles, making for numerous reflections and complicated lighting effects.


2560 x 1440




Metro: Last Light


<iframe width="640" height="360" src="http://www.youtube.com/embed/40Rip9szroU" frameborder="0" allowfullscreen></iframe>​

The latest iteration of the Metro franchise once again sets high-water marks for graphics fidelity while making use of advanced DX11 features. In this benchmark, we use the Torchling level, which represents a scene you’ll be intimately familiar with after playing this game: a murky sewer underground.


2560 x 1440


 

Thief / Tomb Raider

Thief


<iframe width="640" height="360" src="//www.youtube.com/embed/p-a-8mr00rY?rel=0" frameborder="0" allowfullscreen></iframe>​

When it was released, Thief was arguably one of the most anticipated games around. From a graphics standpoint, it is something of a tour de force. Not only does it look great but the engine combines several advanced lighting and shading techniques that are among the best we’ve seen. One of the most demanding sections is actually within the first level, where you must scale rooftops amidst a thunderstorm. The rain and lightning flashes add to the graphics load, though the lightning flashes occur randomly so you will likely see interspersed dips in the charts below due to this.


2560 x 1440





Tomb Raider


<iframe width="560" height="315" src="http://www.youtube.com/embed/okFRgtsbPWE" frameborder="0" allowfullscreen></iframe>​

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. It happens to be not only one of the most popular games around but also one of the best looking, using the entire bag of DX11 tricks to deliver an atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.


2560 x 1440


 

Clock Speed Stability and the GTX 970 STRIX OC


When it comes to overclocked cards, the last thing enthusiasts want is for clock speeds to reach levels that are below those advertised. While there haven’t been all that many examples of downright throttling as of late, due to our experiences with AMD’s R9 290X and to a lesser extent their R9 290, we decided to implement this test. In it, the card is subjected to constant load for 30 minutes with the first 10 minutes being charted. Typically any throttling will occur within this amount of time since it is largely influenced by temperatures and the core’s proximity to its TDP limit.
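As a rough illustration of what this test boils down to, the sketch below reduces a per-second clock/temperature log (in the CSV shape produced by `nvidia-smi --query-gpu=clocks.gr,temperature.gpu --format=csv,noheader -l 1`) to a simple throttle check against the advertised Boost clock. The log lines here are invented sample data, not our actual results.

```python
# Reduce a per-second clock/temperature log to a throttling check.
# Log lines are invented sample data in nvidia-smi CSV shape.
log_lines = [
    "1303 MHz, 62",
    "1303 MHz, 64",
    "1290 MHz, 68",
    "1303 MHz, 64",
]

ADVERTISED_BOOST_MHZ = 1253  # ASUS' rated Boost clock for the STRIX OC

samples = []
for line in log_lines:
    clock_part, temp_part = line.split(",")
    clock_mhz = int(clock_part.strip().split()[0])  # "1303 MHz" -> 1303
    temp_c = int(temp_part.strip())
    samples.append((clock_mhz, temp_c))

# Throttling in the sense used above: clocks falling below the advertised Boost.
throttled = [c for c, _ in samples if c < ADVERTISED_BOOST_MHZ]
print(f"min clock: {min(c for c, _ in samples)} MHz, "
      f"samples below advertised Boost: {len(throttled)}")
```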


The ASUS GTX 970 STRIX exhibits some incredible results right off the bat, but they need some explanation since there are some interesting things going on here. First and foremost, temperatures are allowed to increase to roughly 68°C before the fans actually kick in, after which there is a drop and everything stabilizes around the 64°C mark.

Unfortunately, the temperature at which the fans are enabled can’t be modified by ASUS’ GPU Tweak utility quite yet, but hopefully the feature will be added since we could see this card operating completely passively up to around the 80°C mark provided there is sufficient case airflow.


ASUS’ own documents state that the STRIX OC has a Boost frequency of 1253MHz but in our extensive testing, it actually remained around the 1303MHz mark. That’s an incredible clock speed considering NVIDIA’s reference specs have it hovering at 1176MHz. This goes to show how effective the DirectCU II heatsink is at keeping temperatures (and therefore TDP) low, allowing NVIDIA’s Boost algorithms to take advantage of the expanded overhead.


The stable long term clock speeds lead to an extremely predictable performance curve without any deviation in framerate. All in all, ASUS’ card looks extremely impressive here and actually offers higher clock speeds than raw specifications would lead you to believe.
 

Thermal Imaging / Acoustics / Power Consumption

Thermal Imaging




Some may be concerned that the DirectCU II’s insistence on dumping heat back into the chassis would play havoc with internal ambient temperatures, but that doesn’t look to be the case: the GM204 core (particularly in cut-down form) simply doesn’t produce all that much heat.

Even when placed under the thermal camera’s watchful eye, actually finding an area that went above 70°C was nearly impossible. That’s absolutely incredible. Now we understand that internal components were likely running quite a bit hotter than that but these are some of the lowest temperatures we’ve seen. Ever.


Acoustical Testing


What you see below are the baseline idle dB(A) results attained for a relatively quiet open-case system (specs are in the Methodology section) sans GPU along with the attained results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 12” away from the GPU’s fan. For the load scenarios, Hitman Absolution is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.


As with all of ASUS’ custom heatsinks, this one delivers in every respect. Its temperatures are awesome, the fans don’t need to engage in low-load scenarios, it allows for extremely high frequencies and, best of all, acoustics are kept under strict control.


System Power Consumption


For this test we hooked up our power supply to a UPM power meter that will log the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we used 15 minutes of Unigine Valley running on a loop while letting the card sit at a stable Windows desktop for 15 minutes to determine the peak idle power consumption.
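Reducing that power-meter log to the numbers quoted in reviews is straightforward; the sketch below assumes a list of whole-system wattage samples (two per second, as described above) and uses invented values rather than our actual measurements.

```python
# Reduce whole-system power-meter samples (two per second) to the
# idle/load figures typically quoted. Wattage values are invented.
idle_samples_w = [92.0, 91.5, 93.0, 92.5]
load_samples_w = [310.0, 325.5, 318.0, 330.0]

peak_idle_w = max(idle_samples_w)                       # worst-case idle draw
peak_load_w = max(load_samples_w)                       # worst-case load draw
avg_load_w = sum(load_samples_w) / len(load_samples_w)  # sustained load draw

print(f"peak idle: {peak_idle_w} W, peak load: {peak_load_w} W, "
      f"avg load: {avg_load_w:.1f} W")
```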

Please note that after extensive testing, we have found that simply plugging in a power meter to a wall outlet or UPS will NOT give you accurate power consumption numbers due to slight changes in the input voltage. Thus we use a Tripp-Lite 1800W line conditioner between the 120V outlet and the power meter.


Power consumption for the GTX 970 is nothing short of game-changing. It requires some 100W less than an R9 290X and yet can compete with that card in most benchmarks. ASUS’ overclock is balanced out by upgraded components which contribute to slightly increased overall efficiency, but even knowing this, the STRIX is impossible to beat from a performance per watt perspective. The only card that comes remotely close is the reference GTX 980.
 

Overclocking Results



With the GM204 core running at a solid 1303MHz on this card, you’d think there wasn’t much room left in the tank and to a certain extent that’s a pretty solid assumption. Even with the Power and Voltage limits within ASUS’ GPU Tweak utility maxed out (the Power Limit topped out at just 120%) we were able to set 1341MHz which resulted in a constant speed of 1389MHz while retaining 24/7 stability. Any higher than that would result in the core throttling itself back to that exact same point as it smashed face first into its predetermined TDP wall.

Memory fared quite a bit better and topped out at 7811MHz. Unfortunately, ASUS doesn’t provide any memory voltage adjustments so pushing things further resulted in the GDDR5’s error correction routines kicking in.


 