
AMD Radeon R7 265 Review

SKYMTL

HardwareCanuck Review Editor
The R7-series cards may not be the first thing enthusiasts think of when looking for a capable gaming GPU but they certainly occupy a prominent space in AMD’s lineup. While higher end products typically get all the attention, the $125 to $175 market occupied by these low priced yet extremely capable graphics cards happens to be the most popular. In order to round out their offerings in this crowded volume-focused segment, AMD is launching the R7 265 2GB.

The R7 265 may represent the highest performance iteration of AMD’s R7-series but in many ways it’s more akin to the R9 270 and R9 270X. With this in mind, AMD is hoping that it will go toe to toe against NVIDIA’s GTX 660 2GB and GTX 650 Ti Boost while also treading carefully between the R9 270 and R7 260X. Like we said, this is an insanely cluttered corner of the GPU market.

R7-265-1.png

In order to achieve their objectives of combining good 1080P performance with a value-focused design, AMD turned to one of their older architectures: Curacao, the artist formerly known as Pitcairn. Truth be told, other than some extremely minor optimizations and the addition of better PowerTune Boost algorithms, there really isn’t much to distinguish one core from the other. However, this is the first R7-series card to make use of the Curacao core so its pedigree is quite a bit higher than what we’ve come to expect from mid-level graphics cards.

To create the R7 265, a quartet of Compute Units was removed from the Pitcairn…er…Curacao core resulting in 1024 Stream Processors and 64 Texture Units. Meanwhile the render backends, L2 cache hierarchy and memory interface haven’t been touched so despite its middle-of-the-pack targeting, this card still has some relatively impressive bandwidth at its fingertips. Unfortunately, by utilizing an older core, AMD has cut out TrueAudio support which is a feature that may distinguish other R7-series cards in the future.

R7-265-49.jpg

It goes without saying that by using the Curacao core, AMD has some lofty expectations that the R7 265 will maintain a significant gap between itself and the R7 260X. Other than the large uptick in ROPs and Stream Processors, one of the main differentiating factors will be the 256-bit memory bus despite the 265’s slightly lower GDDR5 speeds. Essentially, the R7 265 is a rebadged HD 7850 (a card that’s nearly two years old we might add) with faster memory and some software additions to bring its overall power consumption down but by no means should that be taken as a negative point.

While it may look thoroughly outclassed by the new addition, the R7 260X still has its place in this picture. It consumes just 115W, supports TrueAudio, comes in some great-looking passive configurations and its price has just been cut to $119. This makes it a highly adaptable, affordable card for HTPC and small form factor users who need a blend of acceptable 1080P gaming framerates, a low acoustical footprint and plenty of audio / video features. The R7 260 on the other hand remains at $119 which simply means there aren't any announced price cuts at this point.

With the R7 265 sitting at $149, NVIDIA really doesn’t have much to compete against it right now. The GTX 650 Ti Boost has been all but discontinued with its stocks quickly running out as budget-minded gamers look for a capable GeForce option in the $149 bracket. Meanwhile, the GTX 660 currently retails for about $189 and the $250 GTX 760 is in another dimension from a price and performance perspective. The standard $120 GTX 650 Ti isn’t even in the picture for obvious reasons so NVIDIA may need to plug this gaping hole in their product stack sooner rather than later.

AMD does however run the very real risk of market saturation and sowing confusion within one of their most important markets. They now have five (yes, FIVE) different SKUs within just $100 of one another, while just $50 separates the R9 270X and R7 265 with the R9 270 sitting between them. This has been AMD’s modus operandi for the last few generations and we can somewhat understand the need to plug perceived gaps. However, there’s a chance that one precision-targeted strike from NVIDIA may undo AMD’s carpet bombing initiative in one swift stroke.

R7-265.jpg

AMD sent us a custom Sapphire R7 265 for our testing which carries an oversized heatsink, a single 6-pin power connector and a pair of cooling fans. Supposedly, the vast majority of board partners will be using their own designs this time around but how that will affect AMD’s stated price of $149 is anyone’s guess. For those wondering, we didn’t have to flash the BIOS on this card since it runs at reference frequencies.

Another big question mark here is availability. AMD is simply “introducing” (read: soft launching) the R7 265 right now with a retail rollout happening sometime before month’s end. With new NVIDIA cards based on the Maxwell architecture rumored to be in the pipeline, the R7 265 is a preemptive blow which may give potential customers pause before jumping on the GeForce bandwagon. It just remains to be seen how effective this card will actually be in the long term.
 

Test System & Setup

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: Gigabyte Z68X-UD3H-B3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 334.69 Beta
AMD 14.1 Beta 6



*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
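
To illustrate the arithmetic, here’s a minimal Python sketch (not FRAPS itself, and the per-second frame counts below are purely hypothetical) showing how the average, minimum and maximum figures fall out of one frame count per second:

```python
# Hypothetical per-second frame counts logged over a 10 second benchmark run.
frames_per_second = [72, 66, 69, 58, 63, 71, 64, 60, 67, 70]

total_frames = sum(frames_per_second)
run_length = len(frames_per_second)            # benchmark length in seconds

average_fps = total_frames / run_length        # total rendered frames / run length
minimum_fps = min(frames_per_second)           # the single second with the fewest frames
maximum_fps = max(frames_per_second)           # the single second with the most frames

print(f"Average: {average_fps:.2f} FPS, Min: {minimum_fps} FPS, Max: {maximum_fps} FPS")
```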

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications, a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can actually report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds of frames (if your graphics card is fast enough). This brings us to frame time testing and where the Frame Time Analysis Tool gets factored into this equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The higher the frame time, the longer that individual frame took to render. This detailed reporting just isn’t possible with standard benchmark methods.

We are now using FCAT for ALL benchmark results.


Frame Time Testing & FCAT

To put a meaningful spin on frame times, we can equate them directly to framerates. A constant 60 frames across a single second would lead to an individual frame time of 1/60th of a second or about 17 milliseconds, 33ms equals 30 FPS, 50ms is about 20FPS and so on. Contrary to framerate evaluation results, in this case higher frame times are actually worse since they would represent a longer interim “waiting” period between each frame.

With the milliseconds to frames per second conversion in mind, the “magical” maximum number we’re looking for is 28ms or about 35FPS. If too much time is spent above that point, performance suffers and the in game experience will begin to degrade.

Consistency is a major factor here as well. Too much variation in adjacent frames could induce stutter or slowdowns. For example, spiking up and down from 13ms (75 FPS) to 28ms (35 FPS) several times over the course of a second would lead to an experience which is anything but fluid. However, even though deviations between slightly lower frame times (say 10ms and 25ms) wouldn’t be as noticeable, some sensitive individuals may still pick up a slight amount of stuttering. As such, the less variation the better the experience.
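
As a worked example, here’s a minimal Python sketch (using purely hypothetical frame times) of the two checks described above: how much of the run is spent above the ~28ms threshold and how much adjacent frames swing from one to the next:

```python
# Hypothetical per-frame render times in milliseconds.
frame_times_ms = [13.2, 14.1, 27.8, 13.5, 29.4, 14.0, 13.8, 41.0, 13.9, 14.2]

THRESHOLD_MS = 28.0  # ~35 FPS; frames slower than this start to degrade the experience

def to_fps(frame_time_ms: float) -> float:
    """Convert a single frame time in milliseconds to its equivalent framerate."""
    return 1000.0 / frame_time_ms

# How much of the total run is spent above the 28ms / 35 FPS threshold.
slow_time = sum(t for t in frame_times_ms if t > THRESHOLD_MS)
total_time = sum(frame_times_ms)
print(f"Time above {THRESHOLD_MS}ms: {slow_time:.1f}ms out of {total_time:.1f}ms")

# Frame-to-frame consistency: large swings between adjacent frames read as stutter.
deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
worst_frame = max(frame_times_ms)
print(f"Largest adjacent swing: {max(deltas):.1f}ms "
      f"(slowest frame equals {to_fps(worst_frame):.0f} FPS)")
```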

In order to determine accurate onscreen frame times, a decision has been made to move away from FRAPS and instead implement real-time frame capture into our testing. This involves the use of a secondary system with a capture card and an ultra-fast storage subsystem (in our case five SanDisk Extreme 240GB drives hooked up to an internal PCI-E RAID card) connected to our primary test rig via a DVI splitter. Essentially, the capture card records a high bitrate video of whatever is displayed from the primary system’s graphics card, allowing us to get a real-time snapshot of what would normally be sent directly to the monitor. By using NVIDIA’s Frame Capture Analysis Tool (FCAT), each and every frame is dissected and then processed in an effort to accurately determine latencies, frame rates and other aspects.

We've also now transitioned all testing to FCAT which means standard frame rates are also being logged and charted through the tool. This means all of our frame rate (FPS) charts use onscreen data rather than the software-centric data from FRAPS, ensuring dropped frames are taken into account in our global equation.
 

Assassin’s Creed III / Crysis 3

Assassin’s Creed III (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/RvFXKwDCpBI?rel=0" frameborder="0" allowfullscreen></iframe>​

The third iteration of the Assassin’s Creed franchise is the first to make extensive use of DX11 graphics technology. In this benchmark sequence, we complete a run-through of the Boston area which features plenty of NPCs, distant views and high levels of detail.


1920 x 1080

R7-265-38.jpg

R7-265-30.jpg


Crysis 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/zENXVbmroNo?rel=0" frameborder="0" allowfullscreen></iframe>​

Simply put, Crysis 3 is one of the best looking PC games of all time and it demands a heavy system investment before even trying to enable higher detail settings. Our benchmark sequence for this one replicates a typical gameplay condition within the New York dome and consists of a run-through interspersed with a few explosions for good measure. Due to the hefty system resource needs of this game, post-process FXAA was used in place of MSAA.


1920 x 1080

R7-265-39.jpg

R7-265-31.jpg
 

Dirt: Showdown / Far Cry 3

Dirt: Showdown (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/IFeuOhk14h0?rel=0" frameborder="0" allowfullscreen></iframe>​

Among racing games, Dirt: Showdown is somewhat unique since it deals with demolition-derby type racing where the player is actually rewarded for wrecking other cars. It is also one of the many titles which falls under the Gaming Evolved umbrella so the development team has worked hard with AMD to implement DX11 features. In this case, we set up a custom 1-lap circuit using the in-game benchmark tool within the Nevada level.


1920 x 1080

R7-265-40.jpg

R7-265-32.jpg



Far Cry 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/mGvwWHzn6qY?rel=0" frameborder="0" allowfullscreen></iframe>​

One of the best looking games in recent memory, Far Cry 3 has the capability to bring even the fastest systems to their knees. Its use of nearly the entire repertoire of DX11’s tricks may come at a high cost but with the proper GPU, the visuals will be absolutely stunning.

To benchmark Far Cry 3, we used a typical run-through which includes several in-game environments such as a jungle, in-vehicle and in-town areas.



1920 x 1080

R7-265-41.jpg

R7-265-33.jpg
 

Hitman Absolution / Max Payne 3

Hitman Absolution (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/8UXx0gbkUl0?rel=0" frameborder="0" allowfullscreen></iframe>​

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around and this time Agent 47 goes rogue, so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level which is one of the most graphically-intensive areas of the entire game. It features an environment virtually bathed in rain and puddles making for numerous reflections and complicated lighting effects.


1920 x 1080

R7-265-42.jpg

R7-265-34.jpg



Max Payne 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/ZdiYTGHhG-k?rel=0" frameborder="0" allowfullscreen></iframe>​

When Rockstar released Max Payne 3, it quickly became known as a resource hog and that isn’t surprising considering its top-shelf graphics quality. This benchmark sequence is taken from Chapter 2, Scene 14 and includes a run-through of a rooftop level featuring expansive views. Due to its random nature, combat is kept to a minimum so as to not overly impact the final result.


1920 x 1080

R7-265-43.jpg

R7-265-35.jpg
 

Metro: Last Light / Tomb Raider

Metro: Last Light (DX11)


<iframe width="640" height="360" src="http://www.youtube.com/embed/40Rip9szroU" frameborder="0" allowfullscreen></iframe>​

The latest iteration of the Metro franchise once again sets high water marks for graphics fidelity by making use of advanced DX11 features. In this benchmark, we use the Torchlight level which represents a scene you’ll be intimately familiar with after playing this game: a murky sewer underground.


1920 x 1080

R7-265-44.jpg

R7-265-36.jpg


Tomb Raider (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/okFRgtsbPWE" frameborder="0" allowfullscreen></iframe>​

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. This happens to be not only one of the most popular games around but also one of the best looking, using the entire bag of DX11 tricks to properly deliver an atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.


1920 x 1080

R7-265-45.jpg

R7-265-37.jpg
 
Temperatures & Acoustics / Power Consumption

Temperature Analysis


For all temperature testing, the cards were placed on an open test bench with a single 120mm 1200RPM fan placed ~8” away from the heatsink. The ambient temperature was kept at a constant 22°C (+/- 0.5°C). If the ambient temperatures rose above 23°C at any time throughout the test, all benchmarking was stopped.

For Idle tests, we let the system idle at the Windows 7 desktop for 15 minutes and recorded the peak temperature.


R7-265-47.jpg

While the R7 265 will naturally pump out more heat than the Bonaire-based R7 260X, Sapphire’s custom heatsink keeps things under strict control. This should bode well for overclocking but as you will see in the next section, this particular card has some serious shortcomings in that area.


Acoustical Testing


What you see below are the baseline idle dB(A) results attained for a relatively quiet open-case system (specs are in the Methodology section) sans GPU along with the attained results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 12” away from the GPU’s fan. For the load scenarios, a loop of Unigine Valley is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.

R7-265-46.jpg

Cool and quiet are the names of the game here but that shouldn’t come as any surprise considering Sapphire’s history in designing top-tier heatsinks.


System Power Consumption


For this test we hooked up our power supply to a UPM power meter that will log the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we used 15 minutes of Unigine Valley running on a loop, while the peak idle power consumption was determined by letting the card sit at a stable Windows desktop for 15 minutes.

Please note that after extensive testing, we have found that simply plugging in a power meter to a wall outlet or UPS will NOT give you accurate power consumption numbers due to slight changes in the input voltage. Thus we use a Tripp-Lite 1800W line conditioner between the 120V outlet and the power meter.
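
For anyone curious how the logged data turns into the single figures charted below, here’s a minimal Python sketch. It assumes a hypothetical two-column (timestamp, watts) CSV export sampled twice per second; the UPM meter’s real output format isn’t shown here, so treat the file names and layout as illustrative only:

```python
import csv

def peak_watts(csv_path: str) -> float:
    """Return the highest system wattage reading in a (timestamp, watts) CSV log."""
    with open(csv_path, newline="") as f:
        return max(float(row[1]) for row in csv.reader(f))

# Hypothetical log files covering the 15-minute idle and Unigine Valley load periods.
print("Peak idle power:", peak_watts("idle_log.csv"), "W")
print("Peak load power:", peak_watts("valley_loop_log.csv"), "W")
```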

R7-265-48.jpg

Considering the R7 265 uses a cut-down Curacao / Pitcairn core, these results shouldn’t come as any surprise. Its power consumption lies directly between the R9 270 and R7 260X, nearly equaling the GTX 660 2GB’s numbers. One thing to remember is that AMD is using an older core with a few software and BIOS-based modifications, so extreme improvements weren’t expected.
 

Overclocking Results

Overclocking Results


Going into the overclocking section, we had high expectations for AMD’s R7 265. It is based off of a cut-down Pitcairn core and the last card with a similar design, the HD 7850, tended to have plenty of overhead when paired up with adequate cooling. The Sapphire card we received had the cooling part of that equation down to a science but ultimately failed to deliver much in the way of frequency overhead.

Before we go on, these results have to be prefaced by saying overclocking will vary widely from one sample to another. We obviously received one that was uncooperative but upon first glance, stability was achieved all the way up to 1065MHz. That’s an impressive feat considering the R7 265 ships with a default engine clock of just 925MHz. However, after a few minutes of gaming, it crashed every time. To achieve a “pass” from us, an overclock has to be stable for at least 45 minutes of constant load.

With all of that being said, the highest fully stable core frequency we achieved was just 979MHz while the memory fared quite a bit better and hit a ceiling at 6044MHz. Let’s hope other samples prove this one to be a dud.

R7-265-51.jpg

R7-265-52.jpg
 

Conclusion

Conclusion


AMD’s R7 265 has been parachuted into a cluttered segment in the hope that it will achieve relevance against upcoming launches from NVIDIA. While we can’t comment about its placement against rumored alternatives, within the current market, the 265 performs quite well despite the fact that it uses a massaged, slightly optimized two year old platform. This is a testament to the staying power of AMD’s first generation GCN cores but it also has us hoping there’s something more than rebrands on the Radeon lineup’s horizon.

With AMD literally carpet bombing the mid-range market with so many different cards, they ran a real chance of ostracizing certain parts of their lineup. For the most part that hasn’t happened since the R7 265 fits perfectly between the R9 270 and R7 260X without stepping on any toes or being beaten in the price / performance area by its higher-end sibling. The sole casualty of this approach was the R7 260X, which had its price cut to a highly competitive $119, making access to TrueAudio (something the R7 265 lacks) all that much more affordable.

R7-265-50.jpg

Compared against other AMD cards within close price brackets, the R7 265 is right where it needs to be: 11% slower than the R9 270 and substantially faster than the next closest R7-series part. Whether that muddies the waters or helps with purchasing remains to be seen, but those who want a significant bump in framerates should look towards the R9 270 instead of the 265 with its borderline performance and somewhat limited overclocking headroom.

Things start to get interesting when we turn to NVIDIA’s $99 to $199 product stack. Frankly, they don’t have much of one right now. Although we tested it in this review, the GTX 650 Ti Boost has been discontinued with only a few cards available here and there. Meanwhile, the GTX 660 2GB -which happens to be the R7 265’s closest GeForce branded competitor- currently sits at a higher $189. That causes a bit of an issue for NVIDIA since their GTX 660 just can’t compete from a price / performance perspective and the only other card, the lowly GTX 650 Ti, is in another league altogether. While we can critique AMD for an overly ambitious mid-range lineup, NVIDIA has demonstrated why additional coverage is sometimes necessary.

There is one big question mark about this launch: pricing. While AMD claims the R7 265 will hit that magical $149 target, outside factors like the cryptocurrency craze have begun pushing Radeon cards’ cost through the stratosphere. If that happens here, we’d recommend looking elsewhere for your budget-focused gaming fix.

On paper the R7 265 is certainly a competitive product. However, due to its positioning in a narrow no man’s land between the R9 270 and R7 260X, it feels like a hastily prepared, paper-launched effort to head off NVIDIA’s rumored Maxwell launches. Those who want some additional gaming grunt and the capability to achieve consistently playable framerates at high detail 1080P settings will likely gravitate towards the slightly more expensive R9 270. Meanwhile, anyone who wants an efficient, HTPC or SFF focused GPU should look to the TrueAudio-equipped, very affordable R7 260X. The R7 265 is nonetheless very appealing for anyone who doesn't want to step up to the R9-series or finds the other R7 offerings underpowered so it should do well provided we actually see it hit $149.
 