
AMD Radeon R9 280X 3GB Review

SKYMTL
HardwareCanuck Review Editor
AMD’s R9 280X has been a long time coming. Ever since the HD 7970 was introduced, rumors have been swirling about what the next batch of Radeon cards would look like, and now we finally have an answer. Today, the R9 280X is being launched alongside the R9 270X, R7 260X and a pair of lower end cards while the flagship R9 290X and R9 290 will see the light of day before month’s end.

This new series of cards has a lot of expectations to live up to since its predecessor, the HD 7000-series, houses cards which have been around for nearly two years but continue to lead the pack in the price / performance category. The R-series is meant to continue this tradition, but in a way some may not have been expecting: by leveraging the same GCN architecture we’ve come to know and love.

R9280X-REVIEW-8.jpg

Much like NVIDIA did with the GTX 700-series, AMD has enhanced several aspects of its architecture by increasing clock speeds and adding a smattering of new features to create these new cards. While that may not sound too exciting, it has allowed the R9 280X to hit an extremely competitive price point, which is essential as the Radeon lineup evolves to meet new market realities.

The R9 280X uses AMD’s standard 28nm Tahiti XT core, which formed the backbone of the HD 7970 and subsequently the HD 7970 GHz Edition. In this form, it houses 2048 stream processors, 32 ROPs and 128 texture units, and interfaces with the GDDR5 memory through a 384-bit wide bus. Basically, there isn’t anything here we haven’t seen before; the Tahiti XT already featured a fully enabled core, so it’s not like AMD had additional SIMD units just waiting to be unlocked. The real question is whether or not the 280X will provide enough differentiation from current products to make a real impact in a highly competitive segment.

R9280X-REVIEW-57.jpg

Much like the GTX 770, the R9 280X may initially represent a rebadging of current technology, but performance has been augmented by higher clock speeds than the HD 7970 3GB’s. AMD has also stated there will be no reference design, so this will be a board partner-focused ASIC. Expect plenty of variety as MSI, Gigabyte, XFX, ASUS, HIS, PowerColor and others launch custom cooled, overclocked versions that are loosely based on their current HD 7970 designs.

The R9 280X’s core layout mirrors that of the HD 7970 3GB and its GHz Edition evolution, but its memory and engine frequencies fall between the two. In this iteration the Tahiti core operates at a constant speed of 1GHz, though according to the BIOS, there’s a rarely-used Base frequency of 950MHz. The 3GB of GDDR5 memory follows much the same pattern but sticks to the 6Gbps speed pioneered by AMD’s GHz Edition.

TDP remains unchanged despite the higher clock speeds. According to board partners, the amount of voltage going to the core has been fractionally reduced, which may negatively affect overclocking. This is particularly bad news for enthusiasts who have long championed the overclocking headroom of AMD’s Radeon series.

R7260X-REVIEW-26.jpg

For the time being at least, AMD isn’t planning on discontinuing any of its current products since there’s plenty of stock still left in the channels. You’ll likely see substantially fewer vanilla HD 7970 3GB cards in the channel as AIBs transfer their production to the R9 280X, so scoop up those great deals while they’re still around. Nonetheless, there’s plenty of space between the R9 280X and the new R9 270X, so these 7000-series parts will likely stick around for some time.

From a competitive pricing perspective, AMD is aiming for very distinct segments. With this in mind, the $299 R9 280X is extremely competitive and strikes directly at two of NVIDIA’s most successful cards: the $399 GTX 770 and the $249 GTX 760. Unfortunately for NVIDIA, neither was touched by the latest round of price cuts and, with performance that should almost hit HD 7970 GHz Edition levels on tap, the 280X will likely be a dominant force despite its lack of a Never Settle game bundle.

MANTLE-2.jpg

Features typically take a back seat to raw performance but the R9 280X has a trick up its sleeve. We’ve covered AMD’s API / driver combination called Mantle previously and while it can’t be tested just yet, this may be a key differentiator for GCN-based cards going forward. However, other than Mantle, DX11.2 compatibility and clock speeds, there really isn’t anything that can distinguish the R9 280X from its predecessors (which are also Mantle / DX11.2 compatible).

The R9 280X should bring up the value quotient in AMD’s lineup, which is an important factor for the gamers it is targeting: those who are still holding onto an HD 6900 or HD 5800-series card. Now may be the perfect time for these users to upgrade since this new product may offer that ideal combination of price and performance.

Due to the refreshed nature of this card, expect a hard launch with stock in the retail channels this week.
 
More Display Possibilities for Eyefinity


While Eyefinity may not be used by the majority of gamers, the few who use three or even six monitors make up a demanding group of enthusiasts. Unfortunately, in the past, using a single card for Eyefinity limited output options since the mini DisplayPort connector needed to be part of any monitor grouping. This meant either using a DisplayPort-equipped panel, buying an HDMI / DVI to DisplayPort adaptor or waiting for an Eyefinity version of the card to be released.

R7260X-REVIEW-18.jpg

On all new R7 and R9 series cards, a user will be able to utilize any combination of display connectors when hooking their card up to an Eyefinity grouping. While most cards won’t have the horsepower necessary to power games across three 1080P screens (the beastly R9 290X and R9 290 will likely be the exceptions to this), the feature will surely come in handy for anyone who wants additional desktop real estate.

These possibilities have been applied to lower-end R7 series cards as well. This is particularly important for content creation purposes where 3D gaming may not be required but workspace efficiency can be greatly increased by using multiple monitors.

R7260X-REVIEW-19.jpg

Most R-series cards will come equipped with two DVI connectors, a single HDMI 1.4 port and a DisplayPort 1.2 output. While there are a number of different display configurations available, most R9 280X cards will come with a slightly different layout: two DVIs, a single HDMI port and two mini DisplayPorts. In those cases, AMD’s newfound display flexibility will certainly come in handy.

While we’re on the subject of connectors, it should also be mentioned that the R9 290X and R9 290 lack the necessary pin-outs for VGA adaptors. It looks like the graphics market will finally see this legacy support slowly dwindle, with only lower-end cards featuring the necessary connections.

R7260X-REVIEW-20.jpg

For six monitor Eyefinity, the newer cards’ DisplayPort 1.2 connector supports multi-stream transport which allows for multiple display streams to be carried across a single connector. This will allow daisy-chaining up to three monitors together with another three being supported by the card’s other connectors.

The DP 1.2 connector’s multi-streaming ability also allows MST hubs to be used. These hubs essentially take the three streams and break them up into individual outputs, facilitating connections for monitors that don’t support daisy-chaining. After years of talk, there’s finally one available from Club3D but its rollout in North America isn’t guaranteed.
 
A Closer Look at the XFX R9 280X 3GB


The R9 280X is a board partner-focused product so there won’t be a reference design released into the retail channels. With that in mind, we were sent an XFX R9 280X Double Dissipation for review, which should be representative of what will be available to day-one buyers.

R9280X-REVIEW-1.jpg

While we’ve already reviewed numerous XFX Double Dissipation cards over the years, their R9 280X version takes a different approach. The sleek, futuristic black shroud looks phenomenal and moves away from the more utilitarian designs of yesteryear, while additional efforts have been made to extend the heatsink’s overall size without adding to the card’s double slot height. This has led to an increase in length to 11.25” from the PCB’s nominal 10.5”, but from the outside, this looks like a simple HD 7970 rebrand in many ways.

According to our sources, this card will go for AMD’s stated SRP of just $299.


The Double Dissipation heatsink itself houses a pair of 92mm fans which sit atop an extensive aluminum fin array that’s fed by numerous copper heatpipes. The overall quality of this design actually feels better than XFX’s past attempts at custom heatsinks but, due to the large fans, the entire affair is wider than the PCB, which may cause issues for anyone using a smaller case.

R9280X-REVIEW-5.jpg

After talking to board partners, it seemed that many would be using the same card designs for the R-series as they did for custom HD 7000 products but that isn’t the case here. XFX is using an entirely different PCB and component layout, though the actual PWM phase allocation has remained the same.

R9280X-REVIEW-6.jpg
R9280X-REVIEW-7.jpg

The output connector layout has been pulled directly from XFX’s HD 7970 cards with a single DVI, two mini DisplayPorts and a single full-sized HDMI port. Power input is handled by a 6+8 pin combination and, like its predecessors, the R9 280X uses a pair of physical Crossfire connectors allowing for up to three cards to be used in tandem.
 

Test System & Setup

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: Gigabyte Z68X-UD3H-B3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 331.40 Beta
AMD 13.11 Beta4



*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement from FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing the single second intervals which took the longest and shortest amount of time to render. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance that isn’t quite representative of what you’ll actually see on the screen.
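
To make that arithmetic concrete, here’s a minimal Python sketch of the min/max/average method described above; it’s our own illustration with made-up sample numbers, not FRAPS’ actual code:

```python
# Sketch of FRAPS-style min/max/average FPS reporting (illustrative only).
# Each entry is the number of frames rendered during one second of a benchmark.
frames_each_second = [70, 66, 72, 58, 67, 67]  # hypothetical sample data

average_fps = sum(frames_each_second) / len(frames_each_second)
minimum_fps = min(frames_each_second)  # the slowest one-second interval
maximum_fps = max(frames_each_second)  # the fastest one-second interval

# Note how a momentary stutter inside any one of these seconds is invisible here.
print(f"avg: {average_fps:.2f} FPS  min: {minimum_fps} FPS  max: {maximum_fps} FPS")
```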

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world terms a single second is actually a long period of time; the human eye can pick up on onscreen deviations much quicker than this method can report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds of frames (if your graphics card is fast enough). This brings us to frame time testing and where the Frame Time Analysis Tool factors into the equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond-by-millisecond evaluation of frame times rather than averaging things out over a full second. The higher the frame time, the longer that frame took to render. This detailed reporting just isn’t possible with standard benchmark methods.
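
In code terms, this is nothing more than taking the deltas between successive frame timestamps. A quick hypothetical sketch (the timestamps below are invented for illustration):

```python
# Hypothetical display timestamps (in milliseconds) for six consecutive frames.
timestamps_ms = [0.0, 16.9, 33.4, 50.1, 94.3, 111.0]

# Frame time = interval between each frame and the one before it.
frame_times_ms = [later - earlier
                  for earlier, later in zip(timestamps_ms, timestamps_ms[1:])]

print(frame_times_ms)  # the 44.2ms spike stands out immediately here,
                       # yet a per-second FPS average would smooth it away
```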

We are now using FCAT for ALL benchmark results.


Frame Time Testing & FCAT

To put a meaningful spin on frame times, we can equate them directly to framerates. A constant 60 frames across a single second works out to an individual frame time of 1/60th of a second, or about 17 milliseconds; 33ms equals 30FPS, 50ms is about 20FPS and so on. Contrary to framerate evaluation results, in this case higher frame times are actually worse since they represent a longer “waiting” period between each frame.

With the milliseconds to frames per second conversion in mind, the “magical” maximum number we’re looking for is 28ms, or about 35FPS. If too much time is spent above that point, performance suffers and the in-game experience will begin to degrade.
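
The conversion itself is simply the reciprocal of the frame time, as this short sketch (our own, using the 28ms cutoff mentioned above) shows:

```python
def frame_time_to_fps(frame_time_ms: float) -> float:
    """Convert a frame time in milliseconds to its equivalent framerate."""
    return 1000.0 / frame_time_ms

# 17ms ~= 60FPS, 33ms ~= 30FPS, 50ms ~= 20FPS, and our 28ms cutoff ~= 35FPS.
for ms in (17, 28, 33, 50):
    marker = "  <-- our 28ms cutoff" if ms == 28 else ""
    print(f"{ms}ms -> {frame_time_to_fps(ms):.1f} FPS{marker}")
```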

Consistency is a major factor here as well. Too much variation between adjacent frames can induce stutter or slowdowns. For example, spiking up and down from 13ms (75 FPS) to 28ms (35 FPS) several times over the course of a second would lead to an experience which is anything but fluid. However, even though deviations between slightly lower frame times (say 10ms and 25ms) wouldn’t be as noticeable, some sensitive individuals may still pick up a slight amount of stuttering. As such, the less variation, the better the experience.
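
To show what that kind of swing looks like in data, here’s a small hypothetical sketch that flags large deviations between adjacent frames; the 10ms threshold is our own arbitrary choice for illustration, not an FCAT setting:

```python
# Hypothetical frame times (ms) bouncing between ~13ms (75 FPS) and ~28ms (35 FPS).
frame_times_ms = [13, 28, 13, 27, 14, 28, 13]

# Flag any adjacent pair of frames whose times differ by more than 10ms.
for i, (previous, current) in enumerate(zip(frame_times_ms, frame_times_ms[1:])):
    if abs(current - previous) > 10:
        print(f"frames {i}->{i + 1}: {previous}ms -> {current}ms (possible stutter)")
```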

In order to determine accurate onscreen frame times, we have moved away from FRAPS and instead implemented real-time frame capture into our testing. This involves a secondary system with a capture card and an ultra-fast storage subsystem (in our case, five SanDisk Extreme 240GB drives connected to an internal PCI-E RAID card) hooked up to our primary test rig via a DVI splitter. Essentially, the capture card records a high bitrate video of whatever is output by the primary system’s graphics card, allowing us to get a real-time snapshot of what would normally be sent directly to the monitor. By using NVIDIA’s Frame Capture Analysis Tool (FCAT), each and every frame is dissected and then processed in an effort to accurately determine latencies, framerates and other aspects.

We've also now transitioned all testing to FCAT which means standard frame rates are also being logged and charted through the tool. This means all of our frame rate (FPS) charts use onscreen data rather than the software-centric data from FRAPS, ensuring dropped frames are taken into account in our global equation.
 
Assassin’s Creed III / Crysis 3

Assassin’s Creed III (DX11)


Video: http://www.youtube.com/embed/RvFXKwDCpBI?rel=0

The third iteration of the Assassin’s Creed franchise is the first to make extensive use of DX11 graphics technology. In this benchmark sequence, we proceed through a run-through of the Boston area which features plenty of NPCs, distant views and high levels of detail.


2560 x 1440

R9280X-REVIEW-38.jpg

R9280X-REVIEW-30.jpg


Crysis 3 (DX11)


Video: http://www.youtube.com/embed/zENXVbmroNo?rel=0

Simply put, Crysis 3 is one of the best looking PC games of all time and it demands a heavy system investment before even trying to enable higher detail settings. Our benchmark sequence for this one replicates a typical gameplay condition within the New York dome and consists of a run-through interspersed with a few explosions for good measure. Due to the hefty system resource needs of this game, post-process FXAA was used in place of MSAA.


2560 x 1440

R9280X-REVIEW-39.jpg

R9280X-REVIEW-31.jpg
 

Dirt: Showdown / Far Cry 3

Dirt: Showdown (DX11)


Video: http://www.youtube.com/embed/IFeuOhk14h0?rel=0

Among racing games, Dirt: Showdown is somewhat unique since it deals with demolition-derby type racing where the player is actually rewarded for wrecking other cars. It is also one of the many titles which falls under the Gaming Evolved umbrella so the development team has worked hard with AMD to implement DX11 features. In this case, we set up a custom 1-lap circuit using the in-game benchmark tool within the Nevada level.


2560 x 1440

R9280X-REVIEW-40.jpg

R9280X-REVIEW-32.jpg



Far Cry 3 (DX11)


Video: http://www.youtube.com/embed/mGvwWHzn6qY?rel=0

One of the best looking games in recent memory, Far Cry 3 has the capability to bring even the fastest systems to their knees. Its use of nearly the entire repertoire of DX11’s tricks may come at a high cost but with the proper GPU, the visuals will be absolutely stunning.

To benchmark Far Cry 3, we used a typical run-through which includes several in-game environments such as a jungle, in-vehicle and in-town areas.



2560 x 1440

R9280X-REVIEW-41.jpg

R9280X-REVIEW-33.jpg
 

Hitman Absolution / Max Payne 3

Hitman Absolution (DX11)


Video: http://www.youtube.com/embed/8UXx0gbkUl0?rel=0

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around and this time Agent 47 goes rogue, so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level, which is one of the most graphically-intensive areas of the entire game. It features an environment virtually bathed in rain and puddles, making for numerous reflections and complicated lighting effects.


2560 x 1440

R9280X-REVIEW-42.jpg

R9280X-REVIEW-34.jpg



Max Payne 3 (DX11)


Video: http://www.youtube.com/embed/ZdiYTGHhG-k?rel=0

When Rockstar released Max Payne 3, it quickly became known as a resource hog and that isn’t surprising considering its top-shelf graphics quality. This benchmark sequence is taken from Chapter 2, Scene 14 and includes a run-through of a rooftop level featuring expansive views. Due to its random nature, combat is kept to a minimum so as to not overly impact the final result.


2560 x 1440

R9280X-REVIEW-43.jpg

R9280X-REVIEW-35.jpg
 

Metro: Last Light / Tomb Raider

Metro: Last Light (DX11)


Video: http://www.youtube.com/embed/40Rip9szroU

The latest iteration of the Metro franchise once again sets high water marks for graphics fidelity by making use of advanced DX11 features. In this benchmark, we use the Torchling level, which represents a scene you’ll be intimately familiar with after playing this game: a murky underground sewer.


2560 x 1440

R9280X-REVIEW-44.jpg

R9280X-REVIEW-36.jpg


Tomb Raider (DX11)


Video: http://www.youtube.com/embed/okFRgtsbPWE

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. Not only is this one of the most popular games around, it is also one of the best looking, using the entire bag of DX11 tricks to deliver a properly atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.


2560 x 1440

R9280X-REVIEW-45.jpg

R9280X-REVIEW-37.jpg
 

Onscreen Frame Times w/FCAT


When capturing output frames in real-time, there are a number of eccentricities which wouldn’t normally be picked up by FRAPS but are nonetheless important to take into account. For example, some graphics solutions can either partially display a frame or drop it altogether. While both situations may sound horrible, these so-called “runts” and dropped frames will be completely invisible to someone sitting in front of a monitor. However, since FRAPS counts them as full frames, it factors them into the equation nonetheless, potentially giving results that don’t reflect what’s actually being displayed.

With certain frame types being non-threatening to the overall gaming experience, we’re presented with a simple question: should the fine-grain details of these invisible runts and dropped frames be displayed outright, or should we show a more realistic representation of what you’ll see on the screen? Since Hardware Canucks strives to evaluate cards based upon the end-user experience rather than from a purely scientific standpoint, we decided on the latter of these two methods.

With this in mind, we’ve used the FCAT tools to add the timing of partially rendered frames to the latency of successive frames. Dropped frames meanwhile are ignored as their value is zero. This provides a more realistic snapshot of visible fluidity.
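
As a simplified sketch of that post-processing step (our own approximation with a made-up runt threshold, not the actual FCAT scripts), a runt’s display time gets folded into the frame that follows it, while dropped frames contribute nothing:

```python
RUNT_THRESHOLD_MS = 1.0  # hypothetical cutoff for a "partially displayed" frame

# Hypothetical per-frame display times in ms; 0.0 marks a dropped frame.
raw_frame_times = [16.7, 0.4, 16.3, 0.0, 17.1, 16.9]

adjusted, carry = [], 0.0
for t in raw_frame_times:
    if t < RUNT_THRESHOLD_MS:       # runt or dropped frame: invisible to the viewer
        carry += t                  # a drop adds zero; a runt adds its sliver of time
    else:
        adjusted.append(t + carry)  # the next full frame absorbs the carried latency
        carry = 0.0

print(adjusted)  # [16.7, 16.7, 17.1, 16.9] -- what's actually perceived onscreen
```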


R9280X-REVIEW-46.jpg

R9280X-REVIEW-47.jpg

R9280X-REVIEW-48.jpg

R9280X-REVIEW-49.jpg


This first batch of frame time results shows that AMD has pretty much fixed its past issues in single card configurations, with the exception of Far Cry 3 which continues to stutter every now and then. All in all, you'd be hard pressed to tell these two cards apart when comparing in-game "smoothness".
 
Onscreen Frame Times w/FCAT (pg.2)


As on the previous page, runt frames have their timing added to the latency of successive frames while dropped frames are ignored, providing a more realistic snapshot of visible fluidity.


R9280X-REVIEW-50.jpg

R9280X-REVIEW-51.jpg

R9280X-REVIEW-52.jpg

R9280X-REVIEW-53.jpg


We actually found that the NVIDIA card displayed a few rare instances of hesitation in some of these games but, as with AMD's results on the previous page, they're almost impossible to pick up unless you're actively looking for problems. Plus, they happen at odd intervals, which could point towards system loads or other factors being the culprit.
 