
AMD HD 7990 Review; Malta Arrives

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,840
Location
Montreal
The story behind AMD’s HD 7990 has been anything but straightforward. What was originally a dual GPU product code named New Zealand has gradually morphed into the card we see today. Originally, it was to be released in the first quarter of 2012, but various delays and technical hurdles pushed the release of the card – now named “Malta” – back to today.

This isn’t the first HD 7990 around either. In order to combat NVIDIA’s GTX 690, AMD gave the HD 7990’s naming rights to their board partners and PowerColor eventually came out with their HD 7990 Devil 13. That card paired up two standard HD 7970 cores but also offered a secondary BIOS which pushed core speeds to 1GHz. As one might have expected, the Devil 13’s price was stratospheric at a cool $1000 but that was eventually reduced to $899 before its discontinuation prior to AMD’s own launch.

HD7990-AMD-76.jpg

In order to create Malta, AMD started with two standard HD 7970 cores, each with 2048 stream processors and 3GB of GDDR5 memory. Due to heat and power consumption concerns, each Tahiti XT GPU doesn’t quite hit GHz Edition speeds but they do improve upon reference HD 7970 specifications. While the 950MHz core clock (1GHz during its Boost phase) may not bring the HD 7990 in line with the performance of two HD 7970 GHz Edition cards, the 6Gbps memory rate on a 384-bit bus should eliminate any memory bottlenecks.

For the time being, AMD hasn’t been forthcoming about the actual TDP of their latest creation but expect it to be somewhere around the 450W to 500W mark. That’s a massive amount considering NVIDIA’s GTX 690 pumps out around 300W of heat but the HD 7990 is expected to outperform it by a significant amount in some situations.

Pricing is typically one of the biggest question marks whenever dual GPU cards are on the table and make no mistake about it, the HD 7990 is expensive by any measure. At $1000 it matches the GTX 690 and newer GTX TITAN while also being some $100 more expensive than two individual HD 7970 GHz Editions. However, this will be the world’s fastest graphics card so there’s a premium associated with that as well.

HD7990-AMD-90.jpg

While a grand may sound like a ton of money to spend on a single graphics card, AMD softens the blow by adding a massive gaming bundle. Every HD 7990 purchaser will receive download codes for eight Gaming Evolved titles: Far Cry 3 Blood Dragon, Crysis 3, BioShock Infinite, Tomb Raider, Far Cry 3, Hitman: Absolution, Sleeping Dogs and Deus Ex: Human Revolution. All told, that’s just over $300 worth of freebies.

HD7990-AMD-1.jpg

The HD 7990 is one of the more unique looking reference graphics cards on the market. While its plastic fan shroud doesn’t use the same high end materials as the GTX 690’s, the design is built primarily for cooling performance. This isn’t unique either since AMD has used this heatsink on their FirePro S10000, though there have been a few minor changes to improve airflow.

The two cores communicate with one another via a high bandwidth 48 lane PCI-E 3.0 PLX bridge chip which has the ability to push over 96GB/s of information. In addition, the HD 7990 can take advantage of AMD’s innovative Zero Core technology, essentially allowing one core to be shut down in idle scenarios to further reduce its power signature.
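For some quick context on that figure: PCI-E 3.0 provides roughly 1GB/s of bandwidth per lane in each direction, so 48 lanes × 1GB/s × 2 directions works out to approximately 96GB/s of aggregate throughput across the bridge.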


AMD’s approach to the HD 7990’s heatsink engineering really boils down to brute force. They’ve equipped it with a trio of large, low RPM axial fans which push cool air down onto a massive copper / aluminum fin array, making this one of the quietest high performance cards on the market. Unfortunately, the layout virtually ensures exhaust air is blown into your case but it should lead to some great core temperature results.

HD7990-AMD-5.jpg
HD7990-AMD-7.jpg

In order to cope with the power input required by the two Tahiti XT cores, a pair of 8-pin connectors has been included and we’d recommend you use nothing less than an 850W PSU when using the HD 7990. AMD also added their standard two-position BIOS switch in case an enthusiast wants to flash a secondary overclocked BIOS to the card.

HD7990-AMD-9.jpg

AMD is predicting the HD 7990 will be primarily used in Eyefinity environments so they’ve equipped it with a single DVI output as well as four mini DisplayPort connectors. We’d expect most board partners to include mini DP to DP adaptors with their cards for native 3x1 Eyefinity support but 3x2 setups aren’t natively supported unless using a daisy-chain layout or secondary hub.

HD7990-AMD-8.jpg

To better distribute the massive amounts of heat generated by two cores and 6GB worth of GDDR5, AMD has installed a complete coverage heat spreader on the rear quarters. There are a few openings here and there but for the most part, the heatsink covers all of the components and memory modules behind the GPUs.

HD7990-AMD-2.jpg
HD7990-AMD-10.jpg

The HD 7990 may be one of the fastest and quietest dual GPU cards available but it is also one of the longest. At 12” it is longer than the GTX 690 and HD 7970 GHz Edition and will have problems fitting into certain cases.

Another thing to take into account is availability. Following in the footsteps of AMD’s last few releases, the HD 7990 will only be available in about two weeks and even then, quantities will be severely limited.
 

Testing Methodologies Explained & FCAT Gets a Preview

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: Gigabyte Z68X-UD3H-B3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 320.00 Beta
AMD 13.5 Beta 2



*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Time Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values together gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
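To make that concrete, here is a minimal sketch (our own Python with made-up per-second frame counts, not real FRAPS output) of how those average, minimum and maximum figures are derived:

```python
# A minimal sketch of how FRAPS-style figures are derived. The list below is
# hypothetical sample data, not actual FRAPS output.
frames_per_second = [68, 71, 59, 66, 70, 66]  # frames rendered in each 1s interval

total_frames = sum(frames_per_second)          # 400 frames in this toy example
duration_s = len(frames_per_second)            # 6 seconds

average_fps = total_frames / duration_s        # e.g. 4,000 frames / 60 s = 66.67 FPS
minimum_fps = min(frames_per_second)           # the slowest one-second interval
maximum_fps = max(frames_per_second)           # the fastest one-second interval

print(f"Avg: {average_fps:.2f} FPS  Min: {minimum_fps} FPS  Max: {maximum_fps} FPS")
```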

FRAPS also has the capability to log average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs we use in the first part of this review. It does this by simply logging the reported framerate result once per second. However, in real world applications, a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can actually report them. So what actually happens within each second of time? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Capture Analysis Tool gets factored into this equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The larger the amount of time, the longer each frame takes to render. This detailed reporting just isn’t possible with standard benchmark methods.
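As a quick illustration, the snippet below (again our own Python, using hypothetical timestamps rather than a real capture log) shows how per-frame timestamps translate into frame times and their equivalent framerates:

```python
# Hypothetical per-frame display timestamps in seconds; values are illustrative only.
timestamps_s = [0.000, 0.017, 0.034, 0.084, 0.101]

# Frame time = interval between successive frames, expressed in milliseconds.
frame_times_ms = [(b - a) * 1000.0 for a, b in zip(timestamps_s, timestamps_s[1:])]

for ft in frame_times_ms:
    print(f"{ft:5.1f} ms  (~{1000.0 / ft:.0f} FPS equivalent)")
```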


Frame Time Testing & FCAT

To put a meaningful spin on frame times, we can equate them directly to framerates. A constant 60 frames across a single second would lead to an individual frame time of 1/60th of a second or about 17 milliseconds, 33ms equals 30 FPS, 50ms is about 20FPS and so on. Contrary to framerate evaluation results, in this case higher frame times are actually worse since they represent a longer “waiting” period between each frame.

With the milliseconds to frames per second conversion in mind, the “magical” maximum number we’re looking for is 28ms or about 35FPS. If too much time is spent above that point, performance suffers and the in-game experience will begin to degrade.

Consistency is a major factor here as well. Too much variation in adjacent frames could induce stutter or slowdowns. For example, spiking up and down from 13ms (75 FPS) to 28ms (35 FPS) several times over the course of a second would lead to an experience which is anything but fluid. However, even though deviations between slightly lower frame times (say 10ms and 25ms) wouldn’t be as noticeable, some sensitive individuals may still pick up a slight amount of stuttering. As such, the less variation the better the experience.
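The sketch below (Python, with invented sample data and a swing cutoff chosen purely for illustration) shows one simple way to quantify both issues: time spent above the 28ms mark and large swings between adjacent frames.

```python
# A rough sketch of the two checks described above: time spent beyond the
# ~28 ms (35 FPS) mark and large swings between adjacent frames. The sample
# data and the 10 ms swing cutoff are our own assumptions, not a standard.
frame_times_ms = [13, 28, 13, 27, 14, 29, 13, 16]

THRESHOLD_MS = 28.0   # beyond this point the experience starts to degrade
SWING_MS = 10.0       # adjacent-frame variation large enough to feel like stutter

slow_frames = [ft for ft in frame_times_ms if ft > THRESHOLD_MS]
swings = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
stutter_events = [s for s in swings if s > SWING_MS]

print(f"Frames over {THRESHOLD_MS:.0f} ms: {len(slow_frames)}")
print(f"Adjacent swings over {SWING_MS:.0f} ms: {len(stutter_events)}")
```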

In order to determine accurate onscreen frame times, a decision has been made to move away from FRAPS and instead implement real-time frame capture into our testing. This involves the use of a secondary system with a capture card and an ultra-fast storage subsystem (in our case five SanDisk Extreme 240GB drives on an internal PCI-E RAID card), connected to our primary test rig via a DVI splitter. Essentially, the capture card records a high bitrate video of whatever is displayed from the primary system’s graphics card, allowing us to get a real-time snapshot of what would normally be sent directly to the monitor. By using NVIDIA’s Frame Capture Analysis Tool (FCAT), each and every frame is dissected and then processed in an effort to accurately determine latencies, frame rates and other aspects.

As you might expect, this is an overly simplified explanation of FCAT but expect our full FCAT article and analysis to be posted sometime in June. In the meantime, you can consider this article a hybrid of sorts with standard FRAPS testing combined with FCAT results in two areas the latter solution excels in: onscreen (or visible) frame times and true displayed frame rates or FPS. This will actually give this review a beginning, middle and end so to speak as we go through the motions from FRAPS to FCAT.
 

Assassin’s Creed III / Crysis 3

Assassin’s Creed III (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/RvFXKwDCpBI?rel=0" frameborder="0" allowfullscreen></iframe>​

The third iteration of the Assassin’s Creed franchise is the first to make extensive use of DX11 graphics technology. In this benchmark sequence, we proceed through a run-through of the Boston area which features plenty of NPCs, distant views and high levels of detail.

2560x1600

HD7990-AMD-37.jpg

HD7990-AMD-30.jpg


5760x1080

HD7990-AMD-51.jpg

HD7990-AMD-44.jpg

For whatever reason, AMD still struggles with Assassin’s Creed III, where Crossfire profiles are still missing. This means NVIDIA’s GTX 690 easily wins here. The AMD solutions also fail miserably in Eyefinity since they lack multi monitor profiles for this game.


Crysis 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/zENXVbmroNo?rel=0" frameborder="0" allowfullscreen></iframe>​

Simply put, Crysis 3 is one of the best looking PC games of all time and it demands a heavy system investment before even trying to enable higher detail settings. Our benchmark sequence for this one replicates a typical gameplay condition within the New York dome and consists of a run-through interspersed with a few explosions for good measure. Due to the hefty system resource needs of this game, post-process FXAA was used in place of MSAA.

2560x1600

HD7990-AMD-38.jpg

HD7990-AMD-31.jpg

5760x1080

HD7990-AMD-52.jpg

HD7990-AMD-45.jpg

Crysis 3 shows better results for the HD 7990 but its excess memory bandwidth seems to be doing very little, as the GTX 690 is able to maintain a slim lead, though not in minimum framerates. In addition, the HD 7990’s lower clock speeds lead to it being beaten by two HD 7970 GHz Editions.
 

Dirt: Showdown / Far Cry 3

Dirt: Showdown (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/IFeuOhk14h0?rel=0" frameborder="0" allowfullscreen></iframe>​

Among racing games, Dirt: Showdown is somewhat unique since it deals with demolition-derby type racing where the player is actually rewarded for wrecking other cars. It is also one of the many titles which falls under the Gaming Evolved umbrella so the development team has worked hard with AMD to implement DX11 features. In this case, we set up a custom 1-lap circuit using the in-game benchmark tool within the Nevada level.


2560x1600

HD7990-AMD-39.jpg

HD7990-AMD-32.jpg


5760x1080

HD7990-AMD-53.jpg

HD7990-AMD-46.jpg

Dirt: Showdown provides an interesting counterpoint to AMD’s performance (or lack thereof) in Assassin’s Creed. Here, NVIDIA’s solutions don’t come close to the HD 7990 when it comes to delivering raw framerates.


Far Cry 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/mGvwWHzn6qY?rel=0" frameborder="0" allowfullscreen></iframe>​

One of the best looking games in recent memory, Far Cry 3 has the capability to bring even the fastest systems to their knees. Its use of nearly the entire repertoire of DX11’s tricks may come at a high cost but with the proper GPU, the visuals will be absolutely stunning.

To benchmark Far Cry 3, we used a typical run-through which includes several in-game environments such as a jungle, in-vehicle and in-town areas.


2560x1600

HD7990-AMD-40.jpg

HD7990-AMD-33.jpg


5760x1080

HD7990-AMD-54.jpg

HD7990-AMD-47.jpg

In the last month or so, both AMD and NVIDIA have rolled out subsequent driver revisions which improved performance in Far Cry 3. It looks like NVIDIA remains slightly ahead with the GTX 690 easily overcoming the HD 7990.
 

Hitman Absolution / Max Payne 3

Hitman Absolution (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/8UXx0gbkUl0?rel=0" frameborder="0" allowfullscreen></iframe>​

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around and this time Agent 47 goes rogue, so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level, which is one of the most graphically intensive areas of the entire game. It features an environment virtually bathed in rain and puddles, making for numerous reflections and complicated lighting effects.


2560x1600

HD7990-AMD-41.jpg

HD7990-AMD-34.jpg


5760x1080

HD7990-AMD-55.jpg

HD7990-AMD-48.jpg

AMD’s performance here is simply excellent as the GTX 690 seems to run out of steam as the resolution increases. With that being said, it does look like NVIDIA has some issues with “cyclical” performance which swings up and down every 20 or so seconds. It will be interesting to see if this repeats itself during frame time testing.


Max Payne 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/ZdiYTGHhG-k?rel=0" frameborder="0" allowfullscreen></iframe>​

When Rockstar released Max Payne 3, it quickly became known as a resource hog and that isn’t surprising considering its top-shelf graphics quality. This benchmark sequence is taken from Chapter 2, Scene 14 and includes a run-through of a rooftop level featuring expansive views. Due to its random nature, combat is kept to a minimum so as to not overly impact the final result.


2560x1600

HD7990-AMD-42.jpg

HD7990-AMD-35.jpg


5760x1080

HD7990-AMD-56.jpg

HD7990-AMD-49.jpg

Max Payne once again sees the GTX 690’s memory bandwidth holding back its raw framerates, especially at multi monitor resolutions. This allows the HD 7990 to post a clear win but the $100 less expensive HD 7970 GHz Edition Crossfire setup still manages to beat it by a good 10-12% in every run.
 

Tomb Raider

Tomb Raider (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/okFRgtsbPWE" frameborder="0" allowfullscreen></iframe>​

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. This happens to not only be one of the most popular games around but also one of the best looking, using the entire bag of DX11 tricks to deliver an atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.


2560x1600

HD7990-AMD-43.jpg

HD7990-AMD-36.jpg


5760x1080

HD7990-AMD-57.jpg

HD7990-AMD-50.jpg

The Tomb Raider tests are certainly interesting since it looks like both AMD and NVIDIA have a good handle on performance, with all of the graphics card setups scoring within a few percentage points of one another. The HD 7990 is in a statistical dead heat with the GTX 690, a clear indicator that in some games at least these two are quite evenly matched.

Once again though, the HD 7970 GHz Crossfire setup is able to maintain a clear lead over all other entrants.
 

Onscreen Frame Times w/FCAT

Onscreen Frame Times w/FCAT


When capturing output frames in real-time, there are a number of eccentricities which wouldn’t normally be picked up by FRAPS but are nonetheless important to take into account. For example, some graphics solutions can either partially display a frame or drop it altogether. While both situations may sound horrible, these so-called “runts” and dropped frames will be completely invisible to someone sitting in front of a monitor. However, since FRAPS counts them as full frames, it factors them into the equation nonetheless, potentially giving results that don’t reflect what’s actually being displayed.

With certain frame types being non-threatening to the overall gaming experience, we’re presented with a simple question: should the fine-grain details of these invisible runts and dropped frames be displayed outright or should we show a more realistic representation of what you’ll see on the screen? Since Hardware Canucks is striving to evaluate cards based upon the end-user experience rather than from a purely scientific standpoint, we decided on the latter of these two methods.

With this in mind, we’ve used the FCAT tools to add the timing of partially rendered frames to the latency of successive frames. Dropped frames meanwhile are ignored as their value is zero. This provides a more realistic snapshot of visible fluidity.
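For clarity, here is a simplified sketch of that adjustment in Python; the field names and runt cutoff are assumptions of ours and this is not the actual FCAT script, merely an illustration of the approach:

```python
# A simplified sketch of the post-processing described above. The field names
# and the runt cutoff are assumptions for illustration; they are not the exact
# output format of the FCAT scripts.
RUNT_SCANLINES = 21   # assumed cutoff: below this a frame is treated as a runt

def visible_frame_times(frames):
    """frames: list of (frame_time_ms, visible_scanlines) per captured frame."""
    visible = []
    carry_ms = 0.0
    for frame_time_ms, scanlines in frames:
        if scanlines == 0:
            continue                      # dropped frame: its value is zero, ignore it
        if scanlines < RUNT_SCANLINES:
            carry_ms += frame_time_ms     # runt: fold its timing into the next full frame
        else:
            visible.append(frame_time_ms + carry_ms)
            carry_ms = 0.0
    return visible

print(visible_frame_times([(16.7, 1080), (2.1, 12), (31.2, 1080), (0.0, 0), (17.0, 1080)]))
```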


HD7990-AMD-65.jpg

HD7990-AMD-66.jpg

HD7990-AMD-67.jpg

HD7990-AMD-68.jpg

The first round of onscreen frame time results shouldn’t come as any surprise to anyone familiar with AMD’s struggles in this area. Not all that much has changed, with the HD 7990 providing a wholly unacceptable gaming experience in Assassin’s Creed III, Crysis 3 and Far Cry 3. In each of these instances, the stuttering was extremely noticeable.

The GTX 690 didn’t escape unscathed either, with a few hiccups at certain points, but there wasn’t anything which approached the HD 7990’s problems.

According to AMD, they’re well on their way towards fixing this problem but they’ve been unable to commit to a timeframe for a driver release which will put things right.
 

Onscreen Frame Times w/FCAT (pg.2)

Onscreen Frame Times w/FCAT (pg.2)


As explained on the previous page, runts and dropped frames are effectively invisible to someone sitting in front of a monitor, so we’ve used the FCAT tools to add the timing of partially rendered frames to the latency of successive frames while ignoring dropped frames since their value is zero. This provides a more realistic snapshot of visible fluidity.


HD7990-AMD-69.jpg

HD7990-AMD-70.jpg

HD7990-AMD-71.jpg

This last trio of games holds some faint rays of hope for AMD’s frame latencies. In Hitman Absolution the HD 7990 performed marginally better than NVIDIA’s GTX 690 and it didn’t display any large, distracting 35ms+ spikes.

Tomb Raider and Max Payne had two more “almost good” results since they only displayed a few instances where the HD 7990 noticeably stuttered. However, in both situations (particularly in Tomb Raider), the GTX 690 provided a noticeably smoother experience.
 

Reported vs. Actual Framerate

Reported vs. Actual Framerate


All of our previous testing has led to these final results. So far, we have shown the framerate results from FRAPS and the displayed frame latencies as determined by capturing real-time information from the graphics card. With these two items in hand, determining visible onscreen performance is possible.

While runts (partially rendered frames) and dropped frames are effectively invisible, software based tools like FRAPS tend to count them anyway, artificially inflating framerate results. Basically, this means FRAPS may be reporting a certain framerate but once runted and dropped frames are taken into account, the actual displayed framerate can be significantly lower.

In the charts below, we have taken the raw framerate from FRAPS (listed as Reported) and then used FCAT to subtract the invisible runt and dropped frames in order to arrive at an Actual Framerate. If a graphics solution is operating as it should, the Reported and Actual results should be perfectly aligned, thus indicating it isn’t partially rendering any frames (or discarding them altogether). If the lines are misaligned, performance as determined by FRAPS is being artificially inflated and a gamer will never see the performance it reports.
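Conceptually, the Actual figure is just the Reported figure with the invisible frames subtracted out; a minimal sketch (with invented counts) looks like this:

```python
# A sketch of the Reported vs. Actual comparison described above. The counts
# are invented for illustration; in practice the runt/dropped classification
# comes from the FCAT capture analysis rather than from FRAPS.
def actual_fps(reported_frames, runt_frames, dropped_frames, interval_s=1.0):
    """Subtract the invisible frames from the software-reported count."""
    return (reported_frames - runt_frames - dropped_frames) / interval_s

reported = 82            # frames FRAPS logged during a one-second interval
runts, dropped = 19, 6   # frames that never meaningfully reached the screen

print(f"Reported: {reported} FPS  Actual: {actual_fps(reported, runts, dropped):.0f} FPS")
```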

With all of these things taken into consideration, we believe these tests represent what is ACTUALLY being shown onscreen at a given moment and thus give a definitive portrait of true performance. Please note however this is done on a 2560x1440 screen so the results can’t be directly compared to the tests shown on the initial pages of this review.


HD7990-AMD-58.jpg

HD7990-AMD-59.jpg

HD7990-AMD-60.jpg

HD7990-AMD-61.jpg

These results are simply detrimental to AMD’s hopes of laying claim to the title of fastest graphics card in the world. It seems like the HD 7990 is dropping or partially rendering a ton of frames in Crysis 3, Far Cry 3 and even in some areas of Dirt Showdown. Ironically, in Assassin’s Creed III, it is relatively well behaved.

Compare and contrast the HD 7990 to the numbers posted by the GTX 690 and you’ll find a clear win in favor of NVIDIA. This goes to prove that while the HD 7990 is a fast solution according to FRAPS, a good portion of its rendered frames never make it to the screen, thus eliminating what was a considerable advantage.
 

Reported vs. Actual Framerate (pg.2)

Reported vs. Actual Framerate


HD7990-AMD-62.jpg

HD7990-AMD-63.jpg

HD7990-AMD-64.jpg

The last few tests tend to reflect the results we saw previously but not to such a spectacular degree. Both Hitman and Max Payne show very little deviation but Tomb Raider once again displays a significant number of either runted or dropped frames, noticeably lowering the HD 7990’s performance.

With this section done, we can clearly see the difference between myth and reality when it comes to reporting framerates. FRAPS is a great tool but it is obviously not able to distinguish between information being passed at the software level and what is actually being displayed on the screen. This causes the HD 7990 to go from the fastest graphics card around to a solution that routinely trails NVIDIA’s GTX 690. Hopefully AMD will have a fix for this sooner rather than later.
 