
The AMD R9 390X 8GB Performance Review

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
13,410
Location
Montreal
After the introduction of the Fiji-based Fury cards, there hasn’t been all that much talk about the new R9 300 series other than criticism directed towards their rumored use of previous-generation architectures. That’s a shame since these cards, for the time being at least, represent AMD’s best hope to compete against the GTX 980, GTX 970 and GTX 960 while the Fury series targets substantially higher price points. In addition, the high-end market where AMD’s flagships play can never hope to match the volumes generated by lower priced products. Simply put, the Radeon lineup needs a good blend of enthusiast and regular mid-tier offerings if it hopes to make inroads against GeForce’s slice of the GPU market pie.

Headlining the R9 300 series is the R9 390X, a card that picks up where the R9 290X left off. This is also where the cries of “rebrand!” will likely be heard the loudest since its price / performance equation is supposed to compete against NVIDIA’s GTX 980, a card which boasts an arguably more efficient architectural design. Meanwhile, the 390X uses the same GCN 1.1-based architecture as the outgoing Hawaii-based cards but in an updated core which has been codenamed Grenada XT.


On the surface at least there’s absolutely nothing to distinguish Grenada from Hawaii but there’s supposedly more here than what first meets the eye. While the SIMD array, asynchronous compute engines and generalized compute units haven’t been touched, the 28nm manufacturing process used for these GPUs has come a long way since they were first introduced almost two years ago. The end result is higher overall efficiency and a general maturation of processing pathways. In theory this could lead to Grenada-based cores which reach higher Boost speeds than their predecessors while also exhibiting slightly higher performance when identically clocked. We’ll go into this a bit later in the article.

From a features perspective the R9 390X has all the hallmarks of AMD’s latest architectures, including Fiji. That means Virtual Super Resolution (though limited to Hawaii levels), FreeSync support and Frame Rate Target Control have all been rolled into one neat package. There may be some differences in feature level DX12 support between GCN 1.1 and the Fury parts but those details haven’t been made public yet. Expect to hear more about that in a week or so.


Despite there being no core layout differences, there have been several tertiary modifications rolled into the Grenada XT-based R9 390X. The core operates about 5% faster than the reference R9 290X did which isn’t all that much but AMD has still managed to lower typical board power to 275W, down from about 300W on its predecessor.

Memory is one area where this new SKU exhibits serious benefits over the 290X. While Hawaii launched with 4GB of GDDR5 memory operating at 5Gbps, Grenada XT is paired with 8GB running at 6Gbps, making it a veritable 4K powerhouse compared to similarly priced cards. Granted, it wasn’t all that hard to find R9 290X cards with an 8GB framebuffer but they were costly and their default memory clocks never hit the 6Gbps mark.
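The bandwidth uplift from that faster memory is easy to sketch. As a rough illustration only, assuming Grenada XT keeps the 512-bit memory interface Hawaii shipped with (the article implies the core layout is unchanged), peak bandwidth works out like this:

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate).
# The 512-bit bus width is an assumption carried over from Hawaii.

def peak_bandwidth_gbps(bus_width_bits: int, rate_gbps: float) -> float:
    """Return theoretical peak bandwidth in GB/s."""
    return bus_width_bits / 8 * rate_gbps

r9_290x = peak_bandwidth_gbps(512, 5.0)
r9_390x = peak_bandwidth_gbps(512, 6.0)
print(r9_290x, r9_390x)  # 320.0 384.0
```

Under those assumptions the jump from 5Gbps to 6Gbps is worth an extra 64GB/s of theoretical throughput, which goes a long way towards explaining the 4K ambitions.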

One area where buyers will likely have to take a cold shower is in the pricing department. While 4GB R9 290X cards could be found for as little as $350 just a few months ago, the R9 390X will start at $429. This is actually right in line with a few of the 8GB Hawaii-based products still available but the few that cost less (Sapphire’s Tri-X OC is just $389) tend to throw a wrench into the works. In addition, AMD hopes this price point will help their new card compete on a more level footing with NVIDIA’s GTX 980…at least on the price / performance front.

Rather than a rebrand or rebadge, the R9 390X is more like the end-of-life refreshes car models usually go through before they’re replaced in the next model year. There’s certainly not enough here to call the R9 390X a “new” card but branding consistency and a few very minor, almost transparent changes point towards this being something a bit more than AMD slapping a different name onto an old card and calling it a day.


Sapphire's R9 390X Tri-X OC


Finally we come to the star of this particular review: the R9 390X itself. Sapphire’s R9 390X Tri-X OC is a blast from the past since, other than slightly faster memory, it looks and feels like a virtual clone of their R9 290X 8GB Tri-X OC. Remember, that’s the one we mentioned above which goes for $40 less than their new card.

Beyond the obvious physical similarities, it receives a paltry 5MHz overclock over AMD’s reference clocks. With a custom cooled card it may be impossible to find out exactly what “baseline” R9 390X performance is like but consider this: Sapphire is selling this thing for $429. That’s not a dime more than AMD’s SRP for reference-based examples.


This is one good looking card. The Tri-X has an extensive, triple-section heatsink which uses a stunning black / orange color scheme but with a length of 11.75”, it shouldn’t have too many issues fitting into most standard issue ATX cases. Underneath that heatsink is a custom designed 6-phase PWM alongside a dual BIOS button.

With its three 80mm cooling fans, an extensive aluminum fin array and a vapor chamber contact plate, the cooler itself is an impressive piece of engineering. The build quality here is immaculate even though the shroud is manufactured out of plastic. Sapphire hasn’t seen the need to place a secondary heatsink over their memory modules, though the VRMs do get some attention with the inclusion of a form-fitting aluminum plate that’s actively cooled by the Tri-X fans.


The PCB’s back is left bare, though we can see how large the cooler is; it exceeds the PCB by a good 1.5”. Without any memory modules back here there really isn’t any point in adding a backplate, though the added cleanliness would be nice.


In order to accommodate as much overclocking headroom as possible, Sapphire has added a dual 8-pin power connector layout. Meanwhile, the rear I/O port houses a trio of DisplayPort outputs as well as HDMI and DVI connectors.
 

Performance Consistency & Temperatures Over Time



Being based upon the hot-running and throttle-prone R9 290X, the R9 390X could very well run into the same problems as its predecessor. However, Sapphire has added a mammoth heatsink which should contribute to lower temperatures and better clock speed consistency despite the Grenada core operating at higher speeds.


This is a ringing endorsement of Sapphire’s heatsink engineering if there ever was one. Temperatures remained tightly controlled throughout our test, never rising above 72°C. Compare that to the reference R9 290X and the difference is like night and day.


The results here once again speak for themselves. While the reference R9 290X needed to reduce its frequencies in order to properly balance thermal loads and power consumption, this particular R9 390X does not. It consistently runs at 1055MHz. Since all of these new cards will use custom designs, we’re willing to bet these numbers remain consistent regardless of the SKU.


Performance numbers are nothing short of spectacular with extremely linear frame delivery whereas the older R9 290X (in reference form at least) drops like a stone as temperatures increase.
 

Thermal Imaging / Acoustics / Power Consumption

Thermal Imaging



The Grenada XT core may have some optimizations built into it for lower power consumption and reduced thermal output but it still runs extremely hot. Nonetheless, it looks like Sapphire’s heatsink is able to handle it without a problem, exhibiting no outward signs of stress.


Acoustical Testing


What you see below are the baseline idle dB(A) results for a relatively quiet open-case system (specs are in the Methodology section) sans GPU, along with the results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear level exactly 12” away from the GPU’s fan. For the load scenarios, Hitman Absolution is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.
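Since the methodology above records both a GPU-less baseline and a combined reading, it’s worth noting how a card’s own contribution could in principle be isolated: decibel levels add on a logarithmic scale, so you subtract sound powers, not dB values. This is a hedged sketch with hypothetical figures, not part of our published methodology:

```python
import math

# Sound levels combine logarithmically, so a source's level is
# recovered by subtracting the baseline's sound power from the total.
# Both readings below are hypothetical examples.

def isolate_source_db(total_db: float, baseline_db: float) -> float:
    """Level of a single source given combined and baseline readings."""
    return 10 * math.log10(10 ** (total_db / 10) - 10 ** (baseline_db / 10))

# e.g. a 38 dB(A) load reading against a 34 dB(A) baseline system:
print(round(isolate_source_db(38.0, 34.0), 1))  # 35.8
```

This is also why a card only a few dB above the noise floor is effectively inaudible once the case panels go back on.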


Near-silence is the name of the game here with the Tri-X cooler really flexing its muscles. This may not be one of the quietest cards we have ever tested but it will be almost impossible to hear this 390X when it is installed into a case.


System Power Consumption


For this test we hooked up our power supply to a UPM power meter that logs the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we used 15 minutes of Unigine Valley running on a loop, while peak idle power consumption was determined by letting the card sit at a stable Windows desktop for 15 minutes.
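Distilled to its essentials, that logging scheme produces 1,800 samples per 15-minute run (two per second), from which peak and average draw fall out directly. A minimal sketch with hypothetical wattage readings:

```python
# Peak and mean system draw from a list of wattage samples. A real
# 15-minute run at two samples per second yields 1,800 readings;
# the truncated log below is hypothetical example data.

def peak_and_average(samples):
    """Return (peak, mean) system draw in watts."""
    return max(samples), sum(samples) / len(samples)

idle_log = [98, 97, 99, 98]   # example readings, in watts
peak, avg = peak_and_average(idle_log)
print(peak, avg)  # 99 98.0
```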


This is an interesting metric since we are comparing a hot-running reference-clocked R9 290X to a custom cooled yet higher clocked R9 390X. In addition, AMD’s newer card also has double the amount of memory that’s operating at substantially higher speeds.

The end result actually puts credibility behind AMD’s claims about their Grenada core’s efficiency improvements. As you will see in the results on the following pages, the R9 390X is a good 12% to 20% faster than the R9 290X and yet according to the numbers above, it consumes less than 5% more power. It certainly looks like something has been done behind the scenes to enhance Hawaii’s performance per watt ratio.
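The performance-per-watt implication of those two figures can be made concrete with some quick arithmetic. Taking the article’s own numbers (12-20% faster, under 5% more power) at face value:

```python
# Relative perf/W gain = performance ratio divided by power ratio.
# Ratios below come straight from the figures quoted in the text.

def perf_per_watt_gain(perf_ratio: float, power_ratio: float) -> float:
    """Fractional perf/W improvement of the new card over the old."""
    return perf_ratio / power_ratio - 1.0

low  = perf_per_watt_gain(1.12, 1.05)   # 12% faster, 5% more power
high = perf_per_watt_gain(1.20, 1.05)   # 20% faster, 5% more power
print(f"{low:.1%} to {high:.1%}")       # 6.7% to 14.3%
```

In other words, even at the pessimistic end of the spread Grenada delivers a high single-digit perf/W improvement over reference Hawaii.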
 

Test System & Setup

Main Test System

Processor: Intel i7 4930K @ 4.7GHz
Memory: G.Skill Trident 16GB @ 2133MHz 10-10-12-29-1T
Motherboard: ASUS P9X79-E WS
Cooling: NH-U14S
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell U2713HM (1440P) / ASUS PQ321Q (4K)
OS: Windows 8.1 Professional


Drivers:
AMD 15.15 Beta
NVIDIA 352.90


*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 2 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values together gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
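The FRAPS method described above can be expressed in a few lines. Using the article’s own 4,000-frames-over-60-seconds example (the per-second split below is hypothetical, chosen only so the counts sum correctly):

```python
# FRAPS-style summary: average FPS is total frames over total time,
# while min/max are just the worst and best single-second intervals.

def fraps_summary(per_second_frames):
    """per_second_frames: frames rendered in each one-second interval."""
    total = sum(per_second_frames)
    avg = total / len(per_second_frames)
    return avg, min(per_second_frames), max(per_second_frames)

# A 60-second run totalling 4,000 frames (40*66 + 20*68 = 4,000):
per_second = [66] * 40 + [68] * 20
avg, lo, hi = fraps_summary(per_second)
print(round(avg, 2), lo, hi)  # 66.67 66 68
```

Note how the summary collapses a minute of data into three numbers, which is exactly the “narrow snapshot” problem described above.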

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool factors into the equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The larger the value, the longer that frame took to render. This detailed reporting just isn’t possible with standard benchmark methods.
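A short sketch shows why frame times catch what per-second averaging hides. A steady 60 FPS works out to roughly 16.7ms per frame; the trace and the 33.3ms stutter threshold below are illustrative assumptions, not part of any tool’s actual output:

```python
# A single long frame is a visible hitch, yet barely dents the
# one-second FPS average. Trace values are hypothetical.

def stutters(frame_times_ms, threshold_ms=33.3):
    """Frames that blew past roughly double a 60 FPS frame budget."""
    return [t for t in frame_times_ms if t > threshold_ms]

trace = [16.7] * 58 + [50.0, 16.3]   # ~1 second of play, one bad frame
print(stutters(trace))                # [50.0]

# Averaged FPS over the same span barely registers the hitch:
avg_fps = 1000 * len(trace) / sum(trace)
print(round(avg_fps, 1))              # ~58 FPS despite the 50ms frame
```

That single 50ms frame would be perceived as a stutter onscreen, while a min/max/average summary of the same second looks perfectly healthy.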

We are now using FCAT for ALL benchmark results, other than 4K.
 

1440P: AC:Unity / Battlefield 4

Assassin’s Creed: Unity





Battlefield 4


<iframe width="640" height="360" src="//www.youtube.com/embed/y9nwvLwltqk?rel=0" frameborder="0" allowfullscreen></iframe>​

In this sequence, we use the Singapore level which combines three of the game’s major elements: a decayed urban environment, a water-inundated city and finally a forested area. We chose not to include multiplayer results simply due to their inherent randomness, which makes apples-to-apples comparisons impossible.


 

1440P: Dragon Age: Inquisition / Dying Light

Dragon Age: Inquisition


<iframe width="640" height="360" src="https://www.youtube.com/embed/z7wRSmle-DY" frameborder="0" allowfullscreen></iframe>

Dragon Age: Inquisition is one of the most popular games around due to its engaging gameplay and open-world style. In our benchmark sequence we run through two typical areas: a busy town and through an outdoor environment.





Dying Light


<iframe width="640" height="360" src="https://www.youtube.com/embed/MHc6Vq-1ins" frameborder="0" allowfullscreen></iframe>​

Dying Light is a relatively late addition to our benchmarking process but with good reason: it required multiple patches to optimize performance. While one of the patches handicapped viewing distance, this is still one of the most demanding games available.


 

1440P: Far Cry 4 / Grand Theft Auto V

Far Cry 4


<iframe width="640" height="360" src="https://www.youtube.com/embed/sC7-_Q1cSro" frameborder="0" allowfullscreen></iframe>​

The latest game in Ubisoft’s Far Cry series takes up where the others left off by boasting some of the most impressive visuals we’ve seen. In order to emulate typical gameplay we run through the game’s main village, head out through an open area and then transition to the lower areas via a zipline.




Grand Theft Auto V



 

1440P: Hitman Absolution / Middle Earth: Shadow of Mordor

Hitman Absolution


<iframe width="560" height="315" src="http://www.youtube.com/embed/8UXx0gbkUl0?rel=0" frameborder="0" allowfullscreen></iframe>​

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around and this time around Agent 47 goes rogue so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level which is one of the most graphically-intensive areas of the entire game. It features an environment virtually bathed in rain and puddles making for numerous reflections and complicated lighting effects.




Middle Earth: Shadow of Mordor


With its high resolution textures and several other visual tweaks, Shadow of Mordor’s open world is also one of the most detailed around. This means it puts massive load on graphics cards and should help point towards which GPUs will excel at next generation titles.


 

1440P: Thief / Tomb Raider

Thief


<iframe width="640" height="360" src="//www.youtube.com/embed/p-a-8mr00rY?rel=0" frameborder="0" allowfullscreen></iframe>​

When it was released, Thief was arguably one of the most anticipated games around. From a graphics standpoint, it is something of a tour de force. Not only does it look great but the engine combines several advanced lighting and shading techniques that are among the best we’ve seen. One of the most demanding sections is actually within the first level where you must scale rooftops amidst a thunderstorm. The rain and lightning flashes add to the graphics load, though the lightning flashes occur randomly so you will likely see interspersed dips in the charts below due to this.




Tomb Raider


<iframe width="560" height="315" src="http://www.youtube.com/embed/okFRgtsbPWE" frameborder="0" allowfullscreen></iframe>​

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. It is not only one of the most popular games around but also one of the best looking, using the entire bag of DX11 tricks to properly deliver an atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.


 

1440P: Total War: Attila / Witcher 3

Total War: Attila





Witcher 3



 