
AMD Radeon R9 295X2 Performance Review

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
13,421
Location
Montreal
“This card isn’t for everyone” is AMD’s byline of choice when talking about the Radeon R9 295X2, code named Hydra. That may come off as ominous and slightly off-putting in some circles but it’s a well-placed warning: only a few dedicated gamers typically see the value in these ultra expensive, power hungry dual GPU graphics cards. The R9 295X2 takes things to another level, which may limit its appeal in certain cases, but at the same time its unique design could open the door to broader acceptance.

To understand a bit more about the R9 295X2 and the decisions AMD made during its gestation, we’ve got to look back in time. The HD 4870X2, HD 6990 and HD 7990 were all excellent cards in their own right but were ultimately crippled by poor driver support, constant in-game stuttering and extremely loud fan profiles. With the more recent HD 7990, AMD overcame the first two issues months after launch but by that time its luster had worn off and gamers had moved on to discussing the upcoming Hawaii architecture. The R9 295X2 hopes to do things differently, with strong driver support from day one and an advanced cooling solution meant to drastically cut down its noise footprint.

With a pair of fully enabled Hawaii XT cores (the same core found in the hot running R9 290X) on a single PCB, the R9 295X2 makes no apologies about targeting a very narrow enthusiast subset. However, even with some advanced engineering backstopping already impressive technical specs, there were still limitations AMD’s engineers had to contend with. Due to PowerTune’s updated algorithms, reference R9 290Xs tended to throttle their clock speeds when trying to balance thermal characteristics against performance. To overcome rampant heat production, the R9 295X2 uses a water cooler, a solution that has been bandied about for several generations but is only now seeing the light of day.

On the driver front, AMD has made some great strides towards better support by leveraging Gaming Evolved's developer relationships for improved in-engine optimizations. Crossfire scaling through the new XDMA interface should also work towards eliminating that horrible stuttering that used to occur with Radeon-based multi GPU configurations. Features like Mantle and TrueAudio will likely prove to be key differentiators here as well.


We typically see dual GPU cards making concessions to attain a given TDP target but in this case AMD threw caution to the wind. Both cores operate at full tilt and with a high end cooling solution backing them up, throttling never becomes a problem. This means performance scaling should be quite linear between a single R9 290X and the R9 295X2, with the exception of latency introduced by the included PLX PCI-E bridge chip. More importantly, other than the $3000 TITAN Z, NVIDIA doesn’t have anything to respond with for now.

Speaking of price, the R9 295X2 is the most expensive Radeon product to date with an MSRP of $1500. While some of you will likely be taken aback by that number, the total cost isn’t actually that bad. Two R9 290Xs go for about $1100 and we can figure another $150 for the Asetek water cooling system, resulting in a relatively minimal upcharge of $250. To us this is actually a pretty fair proposition considering NVIDIA wants a $1000 premium for the TITAN Z compared to two TITAN Blacks.


Typically, power consumption can be conveniently overlooked by enthusiasts who want the best of the best, but not this time. Why? To feed its epic 500W requirement, the Radeon R9 295X2 needs a power supply with at least 50A on a single +12V rail, or 28A per +12V rail on a multi rail PSU. That may not be a hindrance for most modern 1000W PSUs with massive +12V capacity but slightly older high wattage units may require some coaxing to ensure compatibility. Take a very close look at rail distribution before assuming your PSU can handle this card.
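To put those amperage figures in perspective, here is a quick sanity check you can run against your own PSU's label. The thresholds come straight from the requirement above; the function itself is purely illustrative.

```python
# Sanity check of the PSU requirement quoted above: at least 50A on a
# single +12V rail, or at least 28A on each +12V rail feeding the card.
def psu_ok(single_rail_amps=None, per_rail_amps=()):
    if single_rail_amps is not None:
        return single_rail_amps >= 50
    return bool(per_rail_amps) and all(a >= 28 for a in per_rail_amps)

print(psu_ok(single_rail_amps=62))      # True: typical modern 1000W unit
print(psu_ok(per_rail_amps=[30, 30]))   # True: both rails clear 28A
print(psu_ok(per_rail_amps=[25, 30]))   # False: one rail falls short
print(50 * 12, "W vs", 28 * 12, "W")    # watt equivalents: 600 W vs 336 W
```

The watt equivalents show why rail distribution matters: a single 50A rail can deliver 600W at +12V, while two 28A rails provide 336W of headroom each.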

Availability really is the fifteen hundred dollar question here since once again AMD is in the midst of a soft launch. We don’t expect to see any R9 295X2s for sale until the week of April 21st and even then stock should be relatively thin. This is considered a limited edition card so broad-scale shipments just aren’t possible.
 

A Closer Look at the Radeon R9 295X2




Outside of ASUS’ Poseidon series, the R9 295X2 is likely one of the most unique looking graphics cards around. While its 80mm axial-style fan and horizontal heatsink shroud have a solid foundation in typical air cooled designs, they are mostly for show, hiding the two water blocks and additional tubing. The real star of the show is of course the external 120mm radiator, which rounds out an all-in-one liquid cooling loop designed by Asetek. Without it, it’s doubtful AMD’s flagship could attain the performance it does.

Case compatibility shouldn’t be too much of a problem since the R9 295X2 is 12” long so it will fit into most modern cases. Even some mATX and ITX enclosures have 12” GPU allowances which really opens up a whole new world for this card.


Both the inlet and outlet tubes enter through the R9 295X2’s side, straddling an LED-illuminated Radeon logo. One of the tubes also acts as a carrier for the fan’s power wire. For those wondering, the two pumps and fan receive power through the card itself rather than external connectors while fluid temperature is monitored through an inline thermal probe.


The radiator is a relatively simple 120mm affair which is broadly compatible with every case fan mount on the market, provided it retains the standard 120mm offset. Meanwhile, the fan being used is a Power Logic slide bearing unit rated for 1000 to 1800 RPM at a maximum of 60CFM, with its rotational speed tied directly to fluid temperatures. It can be easily upgraded since the connection is a simple 3-pin header. Unfortunately, there’s no direct control over the RPM range, though you can add a second motherboard-controlled fan if lower temperatures are needed.


Moving around to the card’s side, AMD has once again gone the dual BIOS route, though this time both are populated with the same file. This is an open invitation for anyone looking to upload their own custom BIOS.


Power gets to the R9 295X2 through a pair of 8-pin connectors but there’s a bit more to it than that. According to PCI-SIG specifications, an 8-pin input should not exceed 150W, which means that along with the 75W from the PCI-E slot, the total input power of this card would be just 375W. AMD’s 500W rating means each of these connections supplies upwards of 212W, exceeding the PCI-E limit by a huge amount, which is why each connection should be fed by at least 28A of current. This allows for typical running conditions as well as some overhead for overclocking.
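The arithmetic behind those numbers is worth spelling out. This back-of-envelope sketch uses only the figures quoted in the paragraph above: 75W from the slot, 150W per 8-pin by spec and AMD's 500W board power.

```python
# Back-of-envelope power budget using the figures from the paragraph above.
PCIE_SLOT_W = 75        # PCI-E x16 slot limit per the spec
EIGHT_PIN_SPEC_W = 150  # PCI-SIG limit per 8-pin connector
BOARD_POWER_W = 500     # AMD's stated board power

spec_total = PCIE_SLOT_W + 2 * EIGHT_PIN_SPEC_W    # what the spec allows
per_connector = (BOARD_POWER_W - PCIE_SLOT_W) / 2  # actual draw per 8-pin
amps_at_12v = per_connector / 12                   # current per connector

print(spec_total)             # 375
print(per_connector)          # 212.5
print(round(amps_at_12v, 1))  # 17.7
```

Note that 212W per connector works out to roughly 17.7A at +12V, so the 28A recommendation leaves about 10A of headroom per connector for overclocking and transient spikes.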

The rear I/O connectors consist of four DisplayPort outputs as well as a single DVI. If running a 4K monitor off of the R9 295X2, you’ll need to use a DisplayPort connector since it doesn’t support 4K at 60Hz over HDMI adaptors.


There’s not much to see on the backplate which covers the rear memory modules. One thing we did notice was how easily the powder coated finish chips. After less than a week in our system, the backplate had several areas of missing paint. Hopefully the retail cards won’t exhibit this.


The R9 295X2’s design breaks down into five major components: the main PCB, a pair of water cooling blocks, the full-coverage heatsink they attach to, a secondary copper heatsink below the main fan for cooling the VRM components and the PLX chip, and finally the fan shroud. It’s an interesting design which ensures the card doesn’t exceed double slot height while still delivering optimal cooling performance.


With a pair of Hawaii XT cores and 8GB of memory, AMD needed one hell of a VRM design and decided to go with an all-digital 11-phase layout, with four additional phases dedicated to the GDDR5. This is all backstopped by a PCI-E 3.0 bridge chip which handles inter-chip communication over the XDMA interface.
 

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
13,421
Location
Montreal
Test System & Setup; Many Changes

Main Test System

Processor: Intel i7 4930K @ 4.7GHz
Memory: G.Skill Trident 16GB @ 2133MHz 10-10-12-29-1T
Motherboard: ASUS P9X79-E WS
Cooling: NH-U14S
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell 2412M (1440P) / ASUS PQ321Q (4K)
OS: Windows 8.1 Professional


Drivers:
AMD 14.4 Beta
NVIDIA 337.50 Beta


*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages of 2 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings

- Every card undergoes a 10 minute warm-up cycle before each benchmark


The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement from FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing the single second intervals which took the longest and shortest amounts of time to render. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance, and it isn’t quite representative of what you’ll actually see on the screen.
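The method described above is easy to express in a few lines. This sketch builds a hypothetical per-second frame count log matching the article's 4,000-frames-in-60-seconds example and derives the min/avg/max values from it.

```python
# Sketch of how FRAPS-style min/avg/max values fall out of a per-second
# frame count log. The log below is hypothetical, constructed to match the
# example of 4,000 frames rendered over a 60 second sequence.
def fraps_summary(frames_each_second):
    total = sum(frames_each_second)
    return {
        "avg": total / len(frames_each_second),
        "min": min(frames_each_second),  # slowest single second
        "max": max(frames_each_second),  # fastest single second
    }

log = [66] * 58 + [84, 88]           # 4,000 frames across 60 seconds
result = fraps_summary(log)
print(round(result["avg"], 2))       # 66.67
print(result["min"], result["max"])  # 66 88
```

Notice how the summary hides everything interesting: the log could contain wild within-second variation and these three numbers would never show it, which is exactly the limitation frame time testing addresses.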

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what actually happens within each second of time? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool factors into the equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The higher the frame time, the longer that individual frame took to render. This detailed reporting just isn’t possible with standard benchmark methods.

We are now using FCAT for ALL benchmark results, other than 4K.


Frame Time Testing & FCAT

To put a meaningful spin on frame times, we can equate them directly to framerates. A constant 60 frames across a single second would lead to an individual frame time of 1/60th of a second or about 17 milliseconds, 33ms equals 30 FPS, 50ms is about 20FPS and so on. Contrary to framerate evaluation results, in this case higher frame times are actually worse since they would represent a longer interim “waiting” period between each frame.
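The conversions above follow from a single formula, since frame time and framerate are reciprocals. A minimal sketch:

```python
# Frame time (ms) <-> framerate conversions; higher frame times mean longer
# waits between frames, so bigger numbers are worse on this scale.
def ms_to_fps(frame_time_ms):
    return 1000.0 / frame_time_ms

def fps_to_ms(fps):
    return 1000.0 / fps

print(round(fps_to_ms(60)))  # 17 -> the ~17ms figure for 60 FPS
print(round(ms_to_fps(33)))  # 30 FPS
print(round(ms_to_fps(50)))  # 20 FPS
```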

With the milliseconds to frames per second conversion in mind, the “magical” maximum number we’re looking for is 28ms, or about 35FPS. If too much time is spent above that point, performance suffers and the in-game experience begins to degrade.

Consistency is a major factor here as well. Too much variation in adjacent frames could induce stutter or slowdowns. For example, spiking up and down from 13ms (75 FPS) to 28ms (35 FPS) several times over the course of a second would lead to an experience which is anything but fluid. However, even though deviations between slightly lower frame times (say 10ms and 25ms) wouldn’t be as noticeable, some sensitive individuals may still pick up a slight amount of stuttering. As such, the less variation the better the experience.
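Both failure modes described above (time spent past the ceiling, and large swings between adjacent frames) are simple to count in a frame time trace. This is an illustrative sketch, not an official FCAT metric; the 10ms swing threshold is our own assumption.

```python
# Illustrative stutter check on a frame time trace (milliseconds), using the
# 28ms ceiling from the text plus an assumed adjacent-frame swing threshold.
STUTTER_CEILING_MS = 28.0  # ~35 FPS; time above this degrades the experience
SWING_LIMIT_MS = 10.0      # assumed jump size large enough to feel jarring

def analyze(frame_times_ms):
    slow = sum(1 for t in frame_times_ms if t > STUTTER_CEILING_MS)
    swings = sum(1 for a, b in zip(frame_times_ms, frame_times_ms[1:])
                 if abs(b - a) > SWING_LIMIT_MS)
    return {"slow_frames": slow, "big_swings": swings}

# The text's example: bouncing between 13ms (75 FPS) and 28ms (35 FPS).
print(analyze([13, 28, 13, 28, 13, 28]))  # {'slow_frames': 0, 'big_swings': 5}
```

The example trace never actually exceeds the 28ms ceiling, yet every adjacent pair swings by 15ms, which is why a sequence like this feels anything but fluid despite a healthy average framerate.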

In order to determine accurate onscreen frame times, a decision has been made to move away from FRAPS and instead implement real-time frame capture into our testing. This involves a secondary system with a capture card and an ultra-fast storage subsystem (in our case five SanDisk Extreme 240GB drives on an internal PCI-E RAID card), connected to our primary test rig via a DVI splitter. Essentially, the capture card records a high bitrate video of whatever the primary system’s graphics card outputs, giving us a real-time snapshot of what would normally be sent directly to the monitor. Using NVIDIA’s Frame Capture Analysis Tool (FCAT), each and every frame is dissected and processed in an effort to accurately determine latencies, framerates and other aspects.

We've also now transitioned all testing to FCAT which means standard frame rates are also being logged and charted through the tool. This means all of our frame rate (FPS) charts use onscreen data rather than the software-centric data from FRAPS, ensuring dropped frames are taken into account in our global equation.
 
1440P: Assassin's Creed Black Flag / Battlefield 4

Assassin’s Creed IV: Black Flag


<iframe width="640" height="360" src="//www.youtube.com/embed/YFgGnFoRAXU?rel=0" frameborder="0" allowfullscreen></iframe>​

The fourth iteration of the Assassin’s Creed franchise is the first to make extensive use of DX11 graphics technology. In this benchmark sequence, we proceed through a run-through of the Havana area which features plenty of NPCs, distant views and high levels of detail.


2560 x 1440




Battlefield 4


<iframe width="640" height="360" src="//www.youtube.com/embed/y9nwvLwltqk?rel=0" frameborder="0" allowfullscreen></iframe>​

Thanks to its teething problems since release, BF4 has been a bone of contention among gamers. In this sequence, we use the Singapore level, which combines three of the game’s major elements: a decayed urban environment, a water-inundated city and a forested area. We chose not to include multiplayer results simply because their randomness makes apples to apples comparisons impossible.

2560 x 1440


 

1440P: CoD Ghosts / Crysis 3

Call of Duty: Ghosts


<iframe width="640" height="360" src="//www.youtube.com/embed/gzIdSAktyf4?rel=0" frameborder="0" allowfullscreen></iframe>​

The latest Call of Duty game may have been ridiculed for its lackluster gameplay but it remains one of the best-looking games out there. Unfortunately, due to mid-level loads, getting a “clean” runthrough without random slowdowns is nearly impossible, even with a dual SSD system like ours, so any massive framerate dips should be ignored as anomalies of poor loading optimization. For this benchmark we used the first sequence of the fifth chapter, Homecoming, as every event is scripted so runthroughs are nearly identical.

2560 x 1440





Crysis 3


<iframe width="560" height="315" src="http://www.youtube.com/embed/zENXVbmroNo?rel=0" frameborder="0" allowfullscreen></iframe>​

Simply put, Crysis 3 is one of the best looking PC games of all time and it demands a heavy system investment before even trying to enable higher detail settings. Our benchmark sequence for this one replicates a typical gameplay condition within the New York dome and consists of a run-through interspersed with a few explosions for good measure.


2560 x 1440


 

1440P: Far Cry 3 / Hitman Absolution

Far Cry 3


<iframe width="560" height="315" src="http://www.youtube.com/embed/mGvwWHzn6qY?rel=0" frameborder="0" allowfullscreen></iframe>​

One of the best looking games in recent memory, Far Cry 3 has the capability to bring even the fastest systems to their knees. Its use of nearly the entire repertoire of DX11’s tricks may come at a high cost but with the proper GPU, the visuals will be absolutely stunning.

To benchmark Far Cry 3, we used a typical run-through which includes several in-game environments such as a jungle, in-vehicle and in-town areas.



2560 x 1440





Hitman Absolution


<iframe width="560" height="315" src="http://www.youtube.com/embed/8UXx0gbkUl0?rel=0" frameborder="0" allowfullscreen></iframe>​

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around, and this time Agent 47 goes rogue so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level, one of the most graphically intensive areas of the entire game. It features an environment virtually bathed in rain and puddles, making for numerous reflections and complicated lighting effects.


2560 x 1440


 

1440P: Metro Last Light / Thief

Metro: Last Light


<iframe width="640" height="360" src="http://www.youtube.com/embed/40Rip9szroU" frameborder="0" allowfullscreen></iframe>​

The latest iteration of the Metro franchise once again sets a high-water mark for graphics fidelity, making use of advanced DX11 features. In this benchmark, we use the Torchling level, which represents a scene you’ll be intimately familiar with after playing this game: a murky underground sewer.


2560 x 1440




Thief


<iframe width="640" height="360" src="//www.youtube.com/embed/p-a-8mr00rY?rel=0" frameborder="0" allowfullscreen></iframe>​

When it was released, Thief was arguably one of the most anticipated games around. From a graphics standpoint, it is something of a tour de force: not only does it look great but the engine combines several advanced lighting and shading techniques that are among the best we’ve seen. One of the most demanding sections is actually within the first level, where you must scale rooftops amidst a thunderstorm. The rain and lightning flashes add to the graphics load, and since the flashes occur randomly you will likely see interspersed dips in the charts below.


2560 x 1440


 

1440P: Tomb Raider

Tomb Raider


<iframe width="560" height="315" src="http://www.youtube.com/embed/okFRgtsbPWE" frameborder="0" allowfullscreen></iframe>​

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. Not only is it one of the most popular games around, it is also one of the best looking, using the entire bag of DX11 tricks to deliver an atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.


2560 x 1440


 

4K: Assassin's Creed Black Flag / Battlefield 4

Please note that due to the R9 295X2’s inability to output 4K over HDMI at 60Hz, it is incompatible with our current DVI-based FCAT setup (which requires HDMI dongles be used). As such, we have switched to FRAPS for these tests.

Assassin’s Creed IV: Black Flag


3840 x 2160




Battlefield 4


3840 x 2160


 

4K: CoD Ghosts / Crysis 3


Call of Duty: Ghosts


3840 x 2160





Crysis 3


3840 x 2160




 
