
AMD Radeon R9 270X & R7 260X Review

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
13,264
Location
Montreal
AMD’s R9 280X is meant to target the “gamers’ sweet spot” at $299 but there are two other, highly important cards launching today as well: the R9 270X and R7 260X. They each target a very specific segment but, as with the R9 280X, their primary goal is to up the performance quotient in their respective price categories. With that being said, don’t expect either to provide enthusiast-level framerates at high detail settings; these are mid-range cards with some punch, but competing with higher-level GPUs isn’t part of their mandate.


At $199, the R9 270X aims to give budget-minded gamers on 1080P monitors the best possible performance. To accomplish this, AMD has equipped it with a core derived from Pitcairn XT (the HD 7870 GHz Edition). Named Curacao, its core layout consists of 1280 stream processors, 32 ROPs and 80 texture units alongside a 256-bit GDDR5 memory interface, mirroring the HD 7870 GHz's layout. On paper, there isn’t anything to distinguish one core from another other than the latter’s new name. So like the R9 280X, the R9 270X uses a repurposed Graphics Core Next architecture which has been updated to better compete with NVIDIA’s current product stack.

Some may not agree with architectural refreshes but the mid-range market is a perfect proving ground for these initiatives. Not only has AMD been able to hit lower prices through a continuation of core designs but the 28nm manufacturing processes’ relative maturity has allowed for a few clock speed and efficiency optimizations as well. All of those aspects are key points for gamers on a limited budget and NVIDIA followed this same path when creating their GTX 700-series.


Speaking of clock speeds, this is where the R9 270X truly differentiates itself from its HD 7870 GHz predecessor. The core frequency gets an increase of 50MHz while memory goes from 4.8GHz to 5.6GHz, a substantial boost. In addition, the R9 270X will be available in a 4GB version, though the actual benefit of this additional memory may be minimal when paired with a mid-range core. Due to various manufacturing process improvements, despite these higher frequencies, the R9 270X should only draw about 5W to 10W more than the HD 7870 GHz Edition.

Prior to the R-series launch AMD quietly reduced the price of their mid-tier offerings so the R9 270X’s $199 price allows it to parachute directly into the gap between the $269 HD 7950 Boost and the $189 HD 7870 GHz. However, AMD’s own board partners may have thrown a small wrench into this equation since the Boost can currently be found for as little as $210 after rebates. That’s just $10 more for what could be a significantly faster graphics card.

Against NVIDIA’s product stack, the R9 270X finds itself in an enviable position. With the discontinuation of the GTX 660 Ti, NVIDIA no longer has a part that can compete in this price range since the GTX 660 is now priced at $179 after some aggressive price adjustments (and was never able to compete against the HD 7870 GHz anyway). Meanwhile the GTX 760 still sits at a much more expensive $249. So, without any direct competition, the R9 270X is well poised as a possible upgrade for anyone still hanging onto an HD 5850, HD 6850 or GTX 460.


The $139 R7 260X is the lowest-priced R-series part we will be reviewing and it is based on the venerable Bonaire core. However, there’s a bit of a twist in this story: as we theorized in the HD 7790 review, it turns out that Bonaire was the proving ground for many of the technologies incorporated into AMD’s revised GCN architecture. Unlike Tahiti, Pitcairn and Cape Verde, Bonaire represents a slight evolution with an incorporated TrueAudio DSP, making it the only card in AMD’s lineup (other than the R9 290X and R9 290) with native TrueAudio support. There are a few other improvements but those will be kept under wraps until AMD’s flagship parts are fully unveiled sometime later this month.

Using the Bonaire core grants the R7 260X access to 896 stream processors, 16 ROPs and 56 TMUs. However, in a departure from the Cape Verde design, it utilizes a pair of Geometry and Asynchronous Compute Engines which significantly improves theoretical performance and IPC throughput. The 128-bit memory interface is in line with similarly priced products and it shouldn’t prove to be too much of a hindrance at 1080P provided detail levels are well managed.


One of Bonaire’s main selling points is the design changes AMD instituted in an effort to optimize power consumption while maximizing clock speeds. Like the Richland APU, it utilizes AMD’s new embedded on-die microcontroller which has the ability to accurately monitor temperatures, voltages and clock speeds along with their relation to ASIC TDP in an effort to ensure optimal performance. These calculations and their respective clock and voltage changes are done at 10ms intervals which is a huge improvement over the previous generation design’s switch rate of 50ms.

In order to make the most out of what this controller brings to the table, AMD has instituted additional P-States within PowerTune. As before, discrete DPM states dictate voltages and clock speeds based upon proximity to power limit but there are now more of them (eight versus four to be exact) so additional granularity can be inserted into the equation. The result is higher sustained engine clocks in all circumstances rather than select cases.
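The benefit of finer DPM granularity can be illustrated with a small sketch. This is purely our own illustration of the concept (not AMD's firmware logic, and the function name and mapping are hypothetical): with more discrete states between idle and boost clocks, the controller can settle closer to the power limit instead of jumping coarsely between distant operating points.

```python
# Purely illustrative sketch of DPM state granularity (hypothetical mapping,
# not AMD's actual firmware): map remaining power headroom to a clock state.

def pick_dpm_state(headroom_fraction, num_states):
    """Map remaining power headroom (0.0 to 1.0) to the highest safe state index."""
    return min(int(headroom_fraction * num_states), num_states - 1)

# With 4 states, 60% headroom lands in state 2 of 0-3; with 8 states the same
# headroom reaches state 4 of 0-7, a proportionally higher sustained clock.
print(pick_dpm_state(0.6, 4))  # -> 2
print(pick_dpm_state(0.6, 8))  # -> 4
```

The same headroom thus yields a finer-matched (and often higher) operating point when eight states are available instead of four, which is the "higher sustained engine clocks" behavior described above.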


Like the other R-series cards, the R7 260X was created through the judicious application of higher clock speeds to an existing core architecture. In this situation Bonaire’s dials have been turned to the max with a 100MHz core frequency boost and GDDR5 modules that operate at 6.5Gbps on a 128-bit wide bus. Those specifications should help differentiate this new card from the HD 7790 and allow it to compete better against NVIDIA’s alternatives. Unfortunately, this focus on maximizing frequencies has led to a significant increase in board power as the R7 260X requires some 30W more than its predecessor.

With a price of $139 there shouldn’t be any doubt as to the R7 260X’s value but the presence of AMD’s HD 7850 2GB at $159 (or less with rebates) will likely cause hesitation in some potential buyers. In the lower end of the midrange market, a few bucks can make a huge difference so it will be interesting to see how these two competing solutions line up against one another.

The real roadblock for AMD’s R7 260X is NVIDIA’s recently announced price reductions. With the GTX 650 Ti Boost 2GB now sitting at $149 and the GTX 650 Ti Boost 1GB at $129, AMD may have learned that announcing pricing too far before launch can act as a double-edged sword. Both of these GeForce cards are now well placed from a pricing perspective.

Much like other cards in AMD’s new lineup, the R9 270X and R7 260X will incorporate support for AMD’s new Mantle API / driver combination alongside DX 11.2 and OpenGL 4.3.

 
AMD’s R7 250 & R7 240; Budget Cards for the Masses


While most of the press materials from AMD focused on the three other cards we are reviewing today, two additional SKUs will be launching as well: the R7 250 and R7 240. These target the entry level and OEM segments respectively with updated / refreshed architectures at low price points. Like the other cards in the R-series lineup, both of them feature higher clock speeds than their predecessors in an effort to add some much-needed value to the equation.

Once again, Mantle, DX 11.2 and OpenGL 4.3 are featured quite prominently. These cards’ incorporation of Mantle could become a key metric since, if AMD’s goals are achieved, it will allow for significantly better performance with fewer on-die resources. For low end cards like the R7 250 and R7 240, this could be a game-changer.


The R7 250 uses the same Oland core with 384 stream processors as the OEM-only HD 8670. Clock speeds have seen a small boost to 1.05GHz but the memory speeds are a bit of a question mark. While the 1GB GDDR5 version will feature modules operating at 4.6Gbps, there will also be a 2GB DDR3 part with undisclosed memory frequencies. Naturally, these speed bumps result in a board power increase of about 20W.


A real mystery in this lineup is the R7 240, a card which will likely be a darling in the OEM and system builder market. With 320 stream processors, this looks like a rebranded HD 7510 which was based on the older VLIW5 Northern Islands architecture and its associated Turks LE core. However, Northern Islands doesn’t support DX11.2 or AMD’s Mantle so this card could be using an Oland core derivative which has been cut down to a mere five GCN SIMD units.

AMD wasn’t able to give us any additional information about the R7 240 but since its target is the OEM market, very few, if any, will end up in the cases of DIYers.
 
More Display Possibilities for Eyefinity


While Eyefinity may not be used by the majority of gamers, the few who use three or even six monitors compose a demanding group of enthusiasts. Unfortunately, in the past, using a single card for Eyefinity limited output options since the mini DisplayPort connector needed to be part of any monitor grouping. This meant either using a DisplayPort-equipped panel, buying an HDMI / DVI to DisplayPort adaptor or waiting for an Eyefinity version of the card to be released.


On all new R7 and R9 series cards, a user will be able to utilize any combination of display connectors when hooking their card up to an Eyefinity grouping. While most cards won’t have the horsepower necessary to power games across three 1080P screens (the beastly R9 290X and R9 290 will likely be the exceptions to this), the feature will surely come in handy for anyone who wants additional desktop real estate.

These possibilities have been applied to lower-end R7 series cards as well. This is particularly important for content creation purposes where 3D gaming may not be required but workspace efficiency can be greatly increased by using multiple monitors.


Most R-series cards will come equipped with two DVI connectors, a single HDMI 1.4 port and a DisplayPort 1.2 output. While there are a number of different display configurations available, most R9 280X cards will come with a slightly different layout: two DVIs, a single HDMI port and two mini DisplayPorts. In those cases, AMD’s newfound display flexibility will certainly come in handy.

While we’re on the subject of connectors, it should also be mentioned that the R9 290X and R9 290 lack the necessary pin-outs for VGA adaptors. It looks like the graphics market will finally see this legacy support slowly dwindle, with only lower-end cards featuring the necessary connections.


For six monitor Eyefinity, the newer cards’ DisplayPort 1.2 connector supports multi-stream transport which allows for multiple display streams to be carried across a single connector. This will allow daisy-chaining up to three monitors together with another three being supported by the card’s other connectors.

The DP 1.2 connector’s multi streaming ability also allows MST hubs to be used. These hubs essentially take the three streams and then break them up into individual outputs, facilitating connections on monitors that don’t support daisy-chaining. After years of talk, there’s finally one available from Club3D but its rollout in North America isn’t guaranteed.
 
TrueAudio; A Revolution in Audio Technology?


When we think of gaming in relation to graphics cards, the first thing that likely comes to mind will be in-game image fidelity and how quickly a given solution can process high graphical detail levels. Realism and player immersion is only partially determined by how “good” a game looks and there are many other factors that contribute to how engaged a player will be in a game. Unfortunately, in the grand scheme of game design and a push towards higher end graphics, the soundstage is often overlooked despite its ability to define an environment and truly draw a gamer in.

Multi-channel positional audio goes a long way towards player immersion but the actual quality produced by current solutions isn’t usually up to the standards most expect. We’ve all heard it time and again: a multitude of sounds which get jumbled together, or a simple lack of ambient sound with the sole focus being put on the player’s gunshots or footsteps. Basically, it’s almost impossible to find a game with the high definition, visceral audio tracks found in today’s Hollywood blockbusters despite the fact that developers sink hundreds of millions into their titles.


High quality, developer-generated audio isn’t absent for lack of trying. Indeed, the middleware software and facilitators are already present in the marketplace but developers have a finite amount of CPU resources to work with. Typically those CPU cycles have to be shared with primary tasks such as game world building, compute, A.I., physics and simply running the game’s main programming. As you might expect, audio processing is relatively low in the pecking order and rarely gets the reserved CPU bandwidth many think it deserves. This is where AMD’s TrueAudio gets factored into the equation.

While sound cards and other forms of external audio renderers can take some load off the processor’s shoulders, they don’t actually handle the lion’s share of actual processing and sound production. TrueAudio on the other hand remains in the background, acting as a facilitator for audio processing and sound creation and allows for ease-of-use from a development perspective, thus freeing up CPU resources for other tasks.

TrueAudio’s stack provides a highly programmable audio pipeline and allows for decoding, mixing and other features to be done within a versatile environment. This frees programmers from the constraints typically placed upon audio processing during the game creation process.

In order to give TrueAudio some context, let’s compare it to graphics engine development. Audio engineers and programmers usually record real-world sounds and then mix them down or modify layers to create a given effect. Does the player need to hear a gunshot at some point? Record a gunshot and mix accordingly. There is very little ground-up environmental modeling like game designers do with triangles and other graphics tools.

TrueAudio on the other hand allows audio teams to get a head start on the sound development process by creating custom algorithms without having to worry about CPU overhead. As a result, it could allow for more audio detailing without running headfirst into a limited allocation of processor cycles.


According to AMD, one of the best features of TrueAudio is its transparency to developers since it can be accessed through the exact same means as the current audio stack. There aren’t any new languages to learn since it can be utilized through current third party middleware programs, making life for audio programmers easier and allowing for enhanced artistic freedom.

TrueAudio’s position within the audio stack enhances its perception as a facilitator since it runs behind the scenes, rather than attempting to run the show. Supporting game audio tracks are passed to TrueAudio, processed and then sent back to the main Windows Audio stack so it can be output as normal towards the sound card, USB audio driver or via the graphics processor’s HDMI / DisplayPort. It doesn’t take the place of a sound card but rather expands the possibilities for developers and works alongside the standard pipeline to ensure audio fidelity remains high.


TrueAudio is implemented directly within supporting Radeon graphics cards (the R7 260X, R9 290 and R9 290X) via a set of dedicated Tensilica HiFi EP audio DSP cores housed within the GPU die. These cores will be dedicated to in-game audio processing and feature floating point as well as fixed point sound processing which gives game studios significantly more freedom than they currently have. It also allows for offloading the processing part of audio rather than remaining tied at the hip to CPU cycles.

In order to ensure quick, seamless access to routing and bridging, the DSPs have rapid access to local-level memory via onboard cache and RAM. There’s also shared instruction data for the streaming DMA engine and other secondary audio processing stages. More importantly, the main bus interface plugs directly into the high speed display pipeline with its frame buffer memory for guaranteed memory access at all times.

While TrueAudio ensures that processing can be done on dedicated DSP cores rather than on the main graphics cores, there can still be a CPU component here as well since TrueAudio is simply supplementing what the main processor is already tasked with doing. In some cases, these CPU algorithms can build upon the TrueAudio platform, enhancing audio immersion even more.


One of the primary challenges for audio engineers has always been the creation of a three dimensional audio space through stereo headphones. In a typical setup, the in-game engine does the preliminary processing and then mixes down the tracks to simple stereo sound. Additional secondary DSPs (typically located on a USB headphone amp) then render the track into a virtual surround signal across a pair of channels, adding in the necessary reverberations, separation and other features to effectively “trick” a user into hearing a directionally-enhanced soundstage. The end result is typically less than stellar since the sounds tend to get jumbled up due to a lack of definition.

TrueAudio helps virtual surround sound along by offering a quick pathway for its processing. It uses a high quality DSP which ensures individual channels can be separated and addressed with their own dedicated, primary pipeline. AMD has teamed up with GenAudio to get this figured out and from the presentations we’ve seen, it seems like they’ve made some incredible headway thus far.


While nothing has to be changed from a developer standpoint since all third party applications and runtimes can work with TrueAudio, this new addition can be leveraged for more than just optimizing CPU utilization. Advanced effects, a richer soundstage, clearer voice tracks and more can all be enabled due to its lower overhead and broad-ranging application support. In addition, mastering limiters can allow for individual sounds to come through without distortion.

Unlike some applications, TrueAudio isn’t an end-all-be-all solution since it can be used to target select, high bandwidth streams so not all sounds have to be processed through it. AMD isn’t cutting the CPU out of this equation and that’s important as they move towards a heterogeneous computing environment.


As with all new initiatives, the failure or success of TrueAudio will largely depend on the willingness of developers to support it. While it feels like we've been down this road before with HD3D, Bullet Physics and other AMD marketing points from years past that never really got off the ground, we feel TrueAudio can shine. Developers are already onboard and AMD has gone to great pains to make its development process easy.

Audio is one of the last frontiers that hasn’t been already addressed. Anything that improves the PC audio experience is welcome but don’t expect TrueAudio to work miracles. It will still only be as good as the end point hardware (in this case your headphones and associated sound card) but it should allow better speaker setups to shine, taking immersion to the next level. Will it fundamentally redefine what the PC graphics card can provide? We certainly hope so.
 
A Closer Look at the R9 270X 2GB



Unlike the R9 280X, the R9 270X actually uses AMD’s revised reference design. When compared to the outgoing cards it is more aggressive with a matte black heatsink shroud and slim Radeon Red highlights. It looks great but we have to wonder how many board partners will actually take this approach instead of using their own coolers. For those wondering, the R9 270X retains the same 9.5” length as AMD’s HD 7870 GHz.


One of the most notable changes is the extra-large intake fan which is supposed to enhance airflow and also reduce the sometimes loud acoustical profile of its predecessor. According to AMD the internal heatsink has also been thoroughly revised with a new design that significantly reduces core temperatures.

Airflow towards the fan seems to be key here since AMD added several strategically placed intakes. The integration of these into the shroud design is absolutely brilliant.


As with the HD 7870 GHz, the R9 270X incorporates a pair of 6-pin power connectors and supports dual-card Crossfire. Only AMD’s high end cards will be compatible with their bridgeless Crossfire solution and we’ll talk more about that in an upcoming review.


As we mentioned in the previous pages, the R9 270X is natively compatible with triple monitor Eyefinity or 6-monitor setups via daisy chaining or an MST hub. This means it incorporates a single full-sized DP 1.2 output, one HDMI 1.4 connector and a pair of DVIs. AMD has also maximized airflow here with a well-ventilated backplate.

 

A Closer Look at the R7 260X 2GB



As with most mid-range cards, the R7 260X’s cool-running core allowed AMD to move towards an axial-style heatsink rather than the centrifugal approach taken by the R9 270X. While this does mean the vast majority of heat will get dumped back into the card’s immediate vicinity, these designs are typically quieter than their counterparts and also offer better temperatures. From our perspective, this card’s compact 6.75" length and small acoustical footprint could make it perfect for HTPC environments.


There really isn’t much to see on this card since its low cost dictates a simple design. However, much like the HD 7790, it requires a single 6-pin PCI-E power input and is Crossfire compatible.


Much like its bigger brother, the R9 270X, the R7 260X natively supports triple monitor Eyefinity or six-monitor Eyefinity provided an MST hub is used. It houses a single full-sized DP 1.2 output, one HDMI 1.4 connector and a pair of DVIs.
 

Test System & Setup

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: Gigabyte Z68X-UD3H-B3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 331.40 Beta
AMD 13.11 Beta4



*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values together gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
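The arithmetic above can be sketched in a few lines. This is our own minimal illustration of the averaging method described (the function name and data are hypothetical, not FRAPS' actual code):

```python
# Sketch of FRAPS-style FPS statistics from a hypothetical benchmark log.
# frame_counts holds the number of frames rendered in each one-second interval.

def fps_summary(frame_counts):
    """Return (average, minimum, maximum) FPS from per-second frame counts."""
    total_frames = sum(frame_counts)
    average = total_frames / len(frame_counts)  # total frames / benchmark length
    return average, min(frame_counts), max(frame_counts)

avg, lo, hi = fps_summary([60, 70, 80])
print(avg, lo, hi)  # -> 70.0 60 80
```

Note that a 60-second run rendering 4,000 frames averages 4000 / 60 ≈ 66.67 FPS no matter how unevenly those frames were distributed, which is exactly why the min/max/average summary gives such a narrow snapshot.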

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications, a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can actually report them. So what actually happens within each second of time? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool gets factored into this equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The larger the amount of time, the longer each frame takes to render. This detailed reporting just isn’t possible with standard benchmark methods.

We are now using FCAT for ALL benchmark results.


Frame Time Testing & FCAT

To put a meaningful spin on frame times, we can equate them directly to framerates. A constant 60 frames across a single second would lead to an individual frame time of 1/60th of a second or about 17 milliseconds, 33ms equals 30 FPS, 50ms is about 20FPS and so on. Contrary to framerate evaluation results, in this case higher frame times are actually worse since they would represent a longer interim “waiting” period between each frame.
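The conversion is simple reciprocal arithmetic; a minimal sketch (our own illustration, with hypothetical function names):

```python
# Convert between per-frame render time (milliseconds) and equivalent FPS:
# a 1000 ms second divided by the per-frame interval gives the framerate.

def frametime_to_fps(ms):
    return 1000.0 / ms

def fps_to_frametime(fps):
    return 1000.0 / fps

print(frametime_to_fps(50.0))   # -> 20.0 (50 ms per frame is 20 FPS)
print(round(fps_to_frametime(60.0), 1))  # -> 16.7 (60 FPS is ~17 ms per frame)
```

As the paragraph notes, the relationship is inverted relative to framerates: a larger frame time means a slower, less fluid result.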

With the milliseconds to frames per second conversion in mind, the “magical” maximum number we’re looking for is 28ms or about 35FPS. If too much time is spent above that point, performance suffers and the in-game experience will begin to degrade.

Consistency is a major factor here as well. Too much variation in adjacent frames could induce stutter or slowdowns. For example, spiking up and down from 13ms (75 FPS) to 28ms (35 FPS) several times over the course of a second would lead to an experience which is anything but fluid. However, even though deviations between slightly lower frame times (say 10ms and 25ms) wouldn’t be as noticeable, some sensitive individuals may still pick up a slight amount of stuttering. As such, the less variation the better the experience.

In order to determine accurate onscreen frame times, a decision has been made to move away from FRAPS and instead implement real-time frame capture into our testing. This involves the use of a secondary system with a capture card and an ultra-fast storage subsystem (in our case five SanDisk Extreme 240GB drives hooked up to an internal PCI-E RAID card) hooked up to our primary test rig via a DVI splitter. Essentially, the capture card records a high bitrate video of whatever is displayed from the primary system’s graphics card, allowing us to get a real-time snapshot of what would normally be sent directly to the monitor. By using NVIDIA’s Frame Capture Analysis Tool (FCAT), each and every frame is dissected and then processed in an effort to accurately determine latencies, frame rates and other aspects.

We've also now transitioned all testing to FCAT which means standard frame rates are also being logged and charted through the tool. This means all of our frame rate (FPS) charts use onscreen data rather than the software-centric data from FRAPS, ensuring dropped frames are taken into account in our global equation.
 
R9 270X; Assassin’s Creed III / Crysis 3

Assassin’s Creed III (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/RvFXKwDCpBI?rel=0" frameborder="0" allowfullscreen></iframe>​

The third iteration of the Assassin’s Creed franchise is the first to make extensive use of DX11 graphics technology. In this benchmark sequence, we proceed through a run-through of the Boston area which features plenty of NPCs, distant views and high levels of detail.


1920 x 1080





Crysis 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/zENXVbmroNo?rel=0" frameborder="0" allowfullscreen></iframe>​

Simply put, Crysis 3 is one of the best looking PC games of all time and it demands a heavy system investment before even trying to enable higher detail settings. Our benchmark sequence for this one replicates a typical gameplay condition within the New York dome and consists of a run-through interspersed with a few explosions for good measure. Due to the hefty system resource needs of this game, post-process FXAA was used in place of MSAA.


1920 x 1080


 

R9 270X; Dirt: Showdown / Far Cry 3

Dirt: Showdown (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/IFeuOhk14h0?rel=0" frameborder="0" allowfullscreen></iframe>​

Among racing games, Dirt: Showdown is somewhat unique since it deals with demolition-derby type racing where the player is actually rewarded for wrecking other cars. It is also one of the many titles which falls under the Gaming Evolved umbrella so the development team has worked hard with AMD to implement DX11 features. In this case, we set up a custom 1-lap circuit using the in-game benchmark tool within the Nevada level.


1920 x 1080





Far Cry 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/mGvwWHzn6qY?rel=0" frameborder="0" allowfullscreen></iframe>​

One of the best looking games in recent memory, Far Cry 3 has the capability to bring even the fastest systems to their knees. Its use of nearly the entire repertoire of DX11’s tricks may come at a high cost but with the proper GPU, the visuals will be absolutely stunning.

To benchmark Far Cry 3, we used a typical run-through which includes several in-game environments such as a jungle, in-vehicle and in-town areas.



1920 x 1080


 

R9 270X; Hitman Absolution / Max Payne 3

Hitman Absolution (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/8UXx0gbkUl0?rel=0" frameborder="0" allowfullscreen></iframe>​

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around and this time around Agent 47 goes rogue so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level which is one of the most graphically-intensive areas of the entire game. It features an environment virtually bathed in rain and puddles making for numerous reflections and complicated lighting effects.


1920 x 1080





Max Payne 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/ZdiYTGHhG-k?rel=0" frameborder="0" allowfullscreen></iframe>​

When Rockstar released Max Payne 3, it quickly became known as a resource hog and that isn’t surprising considering its top-shelf graphics quality. This benchmark sequence is taken from Chapter 2, Scene 14 and includes a run-through of a rooftop level featuring expansive views. Due to its random nature, combat is kept to a minimum so as to not overly impact the final result.


1920 x 1080


 
