
AMD Radeon HD 7970 3GB Review


SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,857
Location
Montreal
There has been over a year of rumors leading up to this day, but after countless false leads and misdirection, AMD is finally set to launch the first salvo of their Southern Islands architecture. Unlike nearly every series since the days of R600, this isn’t simply another refreshed core layout with a few bits added on for additional functionality. Rather, what we are looking at today is the culmination of lessons learned over the last few generations of DX10 and DX11 parts, all brought together within a 4.3 billion transistor die. Think of this architecture as AMD’s version of Fermi but without the rampant power consumption; it has been designed from the ground up for class leading performance in compute and DX11 environments.

As we saw with Cypress, AMD had some issues achieving optimal results in current generation applications from an architecture that had largely been around since 2006. Cayman changed things around a bit by moving away from VLIW5 and instituting a more efficient VLIW4 core instruction set while including some additional features meant to speed up DX11 rendering. Southern Islands meanwhile kicks the VLIW architecture to the curb once and for all and goes down a path less travelled with a ground-up core redesign called Graphics Core Next. We’ll talk more about the changes in upcoming sections but for the sake of brevity, let’s just say that the differences are significant.


The Tahiti core...all 4.3 billion transistors

The initial cards being launched will carry the Tahiti XT and Tahiti Pro codenames and will be equipped with the fullest implementation of the new core design. However, they only represent the tip of AMD’s Southern Islands iceberg since we’re expecting a full top to bottom refreshed lineup before too long.

Naturally, the powers that be wanted to introduce gamers to their flagship single GPU product first, which is why today marks the official unveiling of AMD’s HD 7970 3GB. This graphics card sports the 28nm Tahiti XT core and is supposed to set new standards in both gaming and compute performance while maintaining almost the same power consumption as an HD 6970. Sounds impossible, doesn’t it? Wait until you see some of the results we were able to squeeze out of this thing. The HD 7970 also includes some never before seen features like PCI-E 3.0 and DirectX 11.1 support for those who want a bit of future proofing in their purchases.


With all of that out of the way, it’s time for a bit of transparency. AMD was originally set to launch the HD 7970 alongside its little (yet still very capable) Tahiti Pro sibling in early January but right before we started gearing up for Christmas, things changed. The launch was pulled forward to today and instead of having product in the channel when this review goes live, we’re looking at yet another paper launch from the folks over at AMD. The official date of availability remains January 9th but whether or not there will actually be sufficient cards on shelves remains the million dollar question. In terms of pricing, a number of $549 was thrown out there but we’ll see what retailers end up doing with a flagship card that may be in short supply for the foreseeable future.
 

Graphics Core Next: From Evolution to Revolution



Much like the outgoing Cayman series of cards, Tahiti is focused upon improving AMD’s position within a highly competitive (and lucrative) DX11 market. Though previous generations like the HD 5000 and HD 6000 relied largely upon a core architecture that had existed since 2006, the next iteration of parts will have a new design that has been engineered from the ground up for DX11 and compute environments. However, in order to see where AMD is going, you have to understand where they’re coming from.


AMD’s graphics cores have always had fairly long lifespans and that says a lot about how they have usually designed the best possible architecture for a given generation. While this approach certainly has benefits from a financial and planning perspective, introducing the wrong architectural design can have long term consequences.

The first era of modern GPUs ran from 1998 through 2002 and introduced us to fixed function rendering, which worked well for the time but limited the ways geometry, lighting and texturing could be handled. Even though modern graphics architectures still have a fixed function stage containing the geometry processing elements, these have now been incorporated into a much larger rendering picture.

AMD’s second round of designs ushered in the revolutionary DX9 era along with its accompanying generation of products. It featured the beginning of programmable rendering pipelines and new pixel rendering functionality while laying groundwork for the DX10 and DX11 products to come. Meanwhile, the release of DX10 in 2007 meant the introduction of unified shader units and the VLIW (Very Long Instruction Word) architecture for parallel core operations. AMD has adhered to the VLIW approach for a while now and as DX10 gave way to DX11, additional functionality and minor modifications were gradually built in.

In a roundabout way, this brings us to AMD’s new take on both graphics and parallel computing called Graphics Core Next, or GCN. While this may not carry the most unique of names, it outlines what this new architecture means for AMD: a true next generation approach. Simply put, it was high time for a change away from VLIW in order to bring intergenerational performance up to the industry’s expectations. GCN also represents the first steps towards a truly heterogeneous environment between the CPU and GPU since it will eventually be an integral part of AMD’s upcoming APUs.


The fundamental building block for all things GCN is called the Compute Unit. In layman’s terms this is a compact, self contained building block of sorts that was designed to increase on-die content flow efficiency by keeping much of the data local rather than handing it off to a global shared stream. For example, the previous generation’s SIMD array, compute unit, registers and cache all fed off the same thread sequencer and had to share resources in a complex dance of information. Each Compute Unit is treated independently and allows for the SIMD communications, sequencing and scheduling to be run in a single cohesive structure before handing it off.

From a thread processing standpoint, a single Compute Unit has 4 sub Vector Units (or SIMDs) made up of 16 Stream Processors each for a total of 64 cores per CU. This layout is backstopped by a quartet of Texture Units and 16KB of dedicated read / write L1 cache. This doubles the L1 cache found in previous architectures, so instead of reading textures and exporting raster functions to an external memory buffer, these instructions can now be sent to the local cache.
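To make the layout concrete, here is a quick sanity check of the per-CU figures quoted above (a sketch in Python; the constants come straight from this section):

```python
# Per-Compute-Unit layout for GCN, using the figures quoted above.
VECTOR_UNITS_PER_CU = 4       # four SIMDs per Compute Unit
SPS_PER_VECTOR_UNIT = 16      # 16 Stream Processors per SIMD
L1_CACHE_KB = 16              # dedicated read/write L1 per CU

sps_per_cu = VECTOR_UNITS_PER_CU * SPS_PER_VECTOR_UNIT
print(sps_per_cu)  # 64 cores per CU
```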

With the Vector Units producing their own independent streams, it was important to include a high bandwidth scheduler. The Scheduler works alongside the unified cache and the 64KB of local data share to facilitate the information flow between data lanes. In the Queen’s English, it acts like a traffic light to direct data towards a set location.

Another important part of the Compute Unit’s hierarchy is the inclusion of a dedicated Scalar Unit with its own registers. This unit acts like a general purpose programmable core that can issue its own instructions and can take part of the workload off of the other areas of the Compute Unit, or it can work independently if need be. Think of it as a central processing unit within each CU.


As you can see above, the move away from a VLIW4 SIMD architecture towards Stream Processors contained within four distinct Vector Units significantly increases on-die efficiency. The “Quad SIMD” approach is able to process information on a parallel basis without any potential conflicts in the data stream, thus speeding up data hand-offs and increasing overall performance per square millimeter.


Backing up the Compute Units is an expanded and very robust caching design that is linked together by the Global Data Share. While the Global Data Share is the glue that binds communications between CUs together, it can also take some heat off the L2 cache by managing all on-chip data sharing services.

In addition to the aforementioned L1 cache, each quartet of Compute Units has access to 16KB of instruction cache and 32KB of scalar data cache which are both helped out by the L2 cache units. Speaking of the L2 cache, AMD has upped the ante here as well with twelve partitions of 64KB for a total of 768KB.
 
Increased Geometry Processing & ROP Efficiency




Look familiar? Upon first glance there really isn’t all that much different between the geometry processing engines in the current and next generation architectures but there are several optimizations built in for increased efficiency and throughput.

Let’s start with the obvious first. Much like Cayman, Tahiti uses two distinct geometry processing engines that are accessed through a common Command Processor which takes care of load balancing and scheduling. The fixed function stages are broken up into the two engines that work in parallel and contain what AMD calls their “ninth generation” tessellators. Alongside other small changes, these new tessellation units still feature off-chip buffering which allows geometry data from tessellated workloads to be stored in the DRAM if the on-chip cache becomes saturated. However, due to the large amount of fast L2 cache available in the Tahiti core, tessellation performance has been increased by an order of magnitude over Cayman.


The result of these changes to the tessellation engine is a vast improvement over the HD 6900-series at higher levels of tessellation. Many people may clue into the seemingly lackluster increase at lower levels but we have to remember that the previous architecture already brought a ton of potential to the table in exactly these situations. Once everything is taken into account, Tahiti should offer more balanced performance in DX11 games that demand all levels of geometry processing.


Once again there really doesn’t seem to be much in the way of changes to the ROP layout either with partitions of four ROPs and 16 z-stencil units throughout the core. However, AMD makes better use of these ROPs by leveraging Tahiti’s increased memory bandwidth for a 50% theoretical fillrate increase over the previous generation.
 

The Tahiti Core Uncovered




Once we bring together the items we have seen on the last few pages, a clearer picture of the Tahiti core begins to emerge. From a high level standpoint there are quite a few similarities between the outgoing and incoming core layouts but the functionality introduced by the Graphics Core Next architecture makes this a whole new ballgame.

Let’s start with the basic Graphics Core Next design elements since that is where most of the advances lie. The “core” of the Tahiti XT houses 32 Compute Units broken up into two engines of 16CUs each. If you remember our previous discussions, each one of those CUs houses four SIMDs with 64 cores and four texture units for a total of 2048 Stream Processors and 128 TMUs in a fully enabled Tahiti XT core. When this ~500 SP and 32 TMU increase over Cayman is combined with GCN’s new Compute Unit processing features, AMD claims a 40% increase in compute and texture fillrate performance from one generation to the next.
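For reference, the full-chip totals work out as follows (a quick sketch; the Cayman comparison figures of 1536 SPs and 96 TMUs are from the HD 6970’s published specifications):

```python
# Tahiti XT totals derived from the per-CU figures above.
CUS = 32             # Compute Units, arranged as two engines of 16
SPS_PER_CU = 64      # 4 SIMDs x 16 Stream Processors each
TMUS_PER_CU = 4

stream_processors = CUS * SPS_PER_CU   # 2048 in a fully enabled core
texture_units = CUS * TMUS_PER_CU      # 128

# Versus Cayman (HD 6970): 1536 SPs and 96 TMUs.
sp_increase = stream_processors - 1536   # 512, the "~500 SP" gain
tmu_increase = texture_units - 96        # the 32 additional TMUs
print(stream_processors, texture_units, sp_increase, tmu_increase)
```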

While the main core elements have changed drastically, items like the Geometry Engines and render backends haven’t seen much in the way of architectural changes and some may even think they have been overlooked. There are still eight combined Render Output Units which hold four ROPs each, giving the Tahiti core a maximum of 32 ROPs, or exactly the same number as Cayman. Granted, the shared L2 cache and additional memory bandwidth do help these attain an approximate 5% real world increase in pixel fillrate but that’s not much considering the improvements apparent elsewhere.

The Geometry Engines house the most critical parts of any DX11 architecture and while it looks like AMD hasn’t done much here, we can’t forget that Cayman already incorporated several key advances in DX11 processing. Nonetheless, there have been some fancy moves going on behind the scenes with the two tessellators being upgraded, increasing their theoretical throughput.


Moving down to the “lower” part of the Tahiti block diagram, we come to the L2 cache and memory controllers, both of which have seen a fundamental evolution away from previous designs. Instead of being incorporated into four distinct blocks and being tied to the Render Backends, the full amount of L2 cache is now shared throughout the core and scales independently from the ROPs and memory controllers. It has also been increased in size to 768KB, ensuring there is enough for storing information on the fly.

The GDDR5 memory controllers don’t feature any behavioral differences from the ones found on Cayman but two additional 64-bit units have been added to make a 384-bit interface which powers up to a dozen modules. As we already mentioned, they have been decoupled from the rest of the architecture so in theory we could see a 384-bit card with fewer ROPs than the Tahiti XT.
 

Tahiti as a Compute Powerhouse



Graphics processing may be what most of us will use AMD’s new architecture for but it has also undergone a thorough revision on the compute side as well. Once again the Compute Units sit at the heart of the equation when running computational algorithms but with the rendering pipeline out of the way, things are done quite a bit differently. We should also mention that AMD has built in native support for OpenCL 1.2, DirectCompute 11.1 and C++ AMP as well.


The Tahiti core makes use of two Asynchronous Compute Engines which can run parallel compute pipelines independently from the graphics rendering pipeline. In short this means compute and graphics applications can be run at the same time, though with reduced resources for both. A new DMA engine has also been implemented which is designed to take advantage of the massive amount of bandwidth PCI-E 3.0 offers between the GPU and the CPU. According to AMD, the dual DMA engines can essentially saturate the full bandwidth of a Gen 3 x16 slot.

By adding ECC support on both internal and external memory AMD has also increased this core’s appeal for the HPC crowd. We’ll be covering the full benefits of the new GPGPU processing engine in a future article.


The Tahiti core may boast 4.31 billion transistors but one of AMD’s main focuses has been to fully utilize their core at all times. Since every aspect of the core architecture can process parallel data flows with independent scheduling, AMD has realized vastly improved performance per square millimeter without blowing power consumption out of the park.
 

A New Benchmark in GPU Efficiency

PowerTune Technology


One of the largest challenges GPU manufacturers face is the rapid increase in the power consumption of their higher-end ASICs. NVIDIA’s solution to cut consumption and TDP in their GTX 500-series was a combination of input current monitoring and upgraded heatsinks along with application detection. AMD meanwhile took a different path with their PowerTune technology. It uses a complex set of concurrent calculations to determine on-the-fly TDP levels so clock speeds can be adjusted once the card reaches a pre-determined maximum thermal design power level.

The entire point of PowerTune is to allow AMD to strike a delicate balance between power consumption, thermals and clock speeds. If such a middle-man didn’t exist, the clock speeds of many AMD GPUs would have been significantly lower since there would have been nothing to keep TDP in check. As one might expect, PowerTune makes a comeback for the Tahiti cores and it still behaves in the same way as before.


A typical GPU will likely be used in any number of applications but its primary focus will usually be upon one thing: entertainment. While there are several synthetic benchmarks which cause a graphics card to consume copious amounts of power, most typical games will never even begin to approach these levels. As such, AMD has focused their PowerTune technology upon scenarios which put unrealistic loads upon the GPU rather than games. Since most of us don’t sit around all day benchmarking with 3DMark, this is good news.

Unfortunately, depending on their rendering methods there may still be the odd game which will be caught up in the crossfire and have its performance capped but we will be tackling this potential issue in a later section. It is just important to remember that AMD has tuned this technology to deliver the best gaming performance while weeding out potential power viruses.


As AMD describes it, this new technology is simply used to contain power consumption in such a way that the actual TDP of a given product will in effect determine clock speeds. Instead of letting the card run amok for the few seconds of absolute peak consumption that will likely occur every now and then, PowerTune caps power draw through clock speed modification. After the peak periods are concluded, clock speeds along with performance will return to normal.


This may all sound like doom and gloom for overall performance but PowerTune is actually designed for a worst-case scenario rather than a typical usage pattern. The algorithm to determine implied power consumption is based upon an extremely high leakage ASIC operating with a 45°C inlet temperature. Remember that high temperatures increase power draw in transistors so this ensures products are not artificially capped in lower temperature scenarios. Since TDP is the determining factor here, if you keep your card cool within a well ventilated case you should in theory never see PowerTune kick in while gaming. According to AMD, it has also allowed them to drastically increase the clock speed of their cards since PowerTune allows for better TDP predictability.


ZeroCore Power



Another feature AMD is introducing with the Tahiti core is called ZeroCore Power. If you are someone who leaves their computer on for long periods of time or intend on running a multi card setup, pay special attention to this section.

One of the issues with most modern graphics cards is their power consumption when not actively driving 3D graphics content or accelerating certain applications. Even if your monitor is turned off, the only way of conserving electricity is to put the system to sleep or allow it to hibernate. Granted, when in idle mode a GPU doesn’t consume all that much power but a constant 30W over long periods of time can sure add up on a monthly power bill. This is where ZeroCore Power steps into the equation.

The basic idea behind ZeroCore Power is to effectively shut down the card during periods when the GPU isn’t outputting an onscreen image. These “long idle situations” are determined by Windows which is programmed to shut off your display after a preset amount of time (you can access it by going into the Display Power Options and choosing when Windows can turn off your display) in order to optimize full system efficiency. AMD’s driver will detect this and put the graphics card into a suspended sleep mode by shutting off the fan and powering down non-essential onboard components. It will then wake back up the moment Windows detects an input and activates the display again.

According to AMD, ZeroCore Power allows a Tahiti-based card to drop down to about 3W during these long idle situations which is a vast improvement over Cayman’s ~30W consumption.


Where ZeroCore Power technology really comes into its own is in Crossfire setups. Since only one GPU is driving the display at all times, any additional cards are automatically put into ZeroCore mode, even when in standard idle conditions. The result is drastically lower idle power consumption numbers for systems with more than one GPU. Meanwhile, in long idle situations, even the primary graphics card is put into a suspended sleep mode as well.

AMD hasn’t stopped there either. Tahiti also includes power saving features like engine clock deep sleep and a DRAM stutter mode (which compresses any residual contents within the framebuffer) in order to further reduce standard 2D power consumption to a mere 15W. If you add this all up, a triple Crossfire setup will consume just 21W when in idle 2D mode (15W for the primary card and 3W for each additional card) compared to about 90W for a 3x HD 6970 configuration. In our opinion, this could be a game changer for any holdouts who couldn’t justify more than one GPU due to excess power consumption and heat production when not gaming.
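The arithmetic behind that triple-card claim is simple enough to sketch out (using AMD’s quoted figures from this section):

```python
# Idle power for a triple Crossfire setup, per AMD's quoted figures.
PRIMARY_2D_IDLE_W = 15   # primary HD 7970 with deep sleep + DRAM stutter mode
ZEROCORE_W = 3           # each secondary card parked in ZeroCore mode
HD6970_IDLE_W = 30       # approximate Cayman idle draw, for comparison

tri_fire_hd7970_w = PRIMARY_2D_IDLE_W + 2 * ZEROCORE_W  # 21W
tri_fire_hd6970_w = 3 * HD6970_IDLE_W                   # ~90W
print(tri_fire_hd7970_w, tri_fire_hd6970_w)
```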
 
Advanced Image Quality & Partially Resident Textures

Advanced Image Quality


Much has been said about AMD’s claims of leading edge texture filtering quality on the HD 6000-series but for the most part, it was an improvement over previous generations. Whether it was up to expectations is still open for debate but the Southern Islands family is once again claiming to have virtually eliminated the flickering and artifacts that sometimes appear in games.


In order to hit the high note in terms of texture filtering, Southern Islands cards feature an improved anisotropic filtering algorithm that’s designed to virtually eliminate shimmering in high resolution textures. This may sound like a tall order to fulfill but after seeing it in action, we’re confident AMD can deliver this time around.

One of the beauties of this new filtering algorithm is its ability to run without additional buffering so there is no drain on system resources. In addition, it is automatically enabled, so gamers should see vastly improved image quality without having to dive into the Catalyst Control Panel.


Introducing PRT (Partially Resident Textures)


One of the main challenges for today’s GPUs is how to handle large amounts of high resolution textures when moving through a scene. Presently, when a player moves through a game environment the texture information in upcoming frames is constantly loaded between the disk, CPU and the graphics card. Usually the effect of this preloading is seamless but as larger amounts of information are loaded, stuttering can occur.


AMD’s solution to this somewhat complex problem is to leverage the local memory on the GPU and allow it to act as a true texture caching system. Essentially, upcoming textures are prefetched from the CPU and disk and stored locally on the GPU until they are ready to be used by the application. In a way this can almost be considered a form of texture “streaming” and should help eliminate the stutter normally associated with scene loading.


In addition to preloading, PRT can also dynamically load selected textures based on when they will be needed instead of loading every bandwidth-hogging texture all the time. This should help minimize the memory footprint the feature requires.

Unfortunately for gamers Partially Resident Textures technology is application controlled so it has to be built into a game engine before it can be utilized. Supposedly, AMD’s development team is working with game developers to include this feature in upcoming releases but there aren’t any titles on the horizon that will put it to good use.
 

Eyefinity 2.0, UVD & AMD’s VCE

Eyefinity 2.0


When Eyefinity was first introduced, it caused some serious waves in the industry since it was the first standard to properly support multi monitor setups. Since then it has gradually evolved alongside NVIDIA’s Surround technology to become a must have for gamers who want the most immersive experience possible. The last few months have brought about a number of advancements for Eyefinity users and the beginning of 2012 will also hold some great steps forward as AMD transitions to Eyefinity 2.0.


October 2011 showed us the first steps towards the “2.0” ecosystem as a number of new features were rolled out. Support for 5x1 portrait and landscape setups saw the light of day for those of you with truly massive desks, and the much requested flexible bezel compensation was included as well. Finally, very well-heeled gamers received full 16K resolution support.


Catalyst 11.12 didn’t roll out anything revolutionary for Eyefinity other than support for full stereoscopic images over multiple panels via AMD’s open HD3D standard. Meanwhile, the 12.1 software stack should include Crossfire profiles for Eyefinity + HD3D setups.


The February 2012 drivers will also herald some additional features like custom resolution support and improvements to Catalyst’s Eyefinity preset manager. Last but not least we should also see the first implementation of automatic taskbar repositioning which will place your desktop icons and Windows taskbar on the center monitor for a more convenient setup.


UVD3 & VCE



When the HD 6900-series was first shown off to the world, it included AMD’s third generation Universal Video Decoder. One of its main features was its ability to decode videos which use MVC encoding. As part of the H.264 / MPEG-4 AVC codec, MVC is responsible for creating the dual video bitstreams which are essential for stereoscopic 3D output. Supporting this standard brought AMD’s GPUs the ability to process Blu-ray 3D movies through an HDMI 1.4a connector. MPEG-4 Part 2 hardware acceleration for DivX and Xvid codecs was also added. With all of that being said, the Southern Islands-based cards continue to use the UVD3 standard as its base functionality was forward looking enough that additional features weren’t needed.


When compared to UVD3, AMD’s new Video Codec Engine is a different beast altogether. The VCE is essentially a one stop shop for hardware encoding via the GPU’s compute engine and provides a highly parallel, scalable pipeline for many high definition tasks. It can also provide additional benefits for transcoding and output tasks. We’ll be taking a more in-depth look at the VCE in a future article.
 
AMD’s Current and Upcoming Lineup under the Microscope





Now that the sticky and complicated architectural lecture is out of the way, it’s time to concentrate on the HD 7970’s position in AMD’s current lineup. Let’s start with the obvious part first: Tahiti is supposed to steamroll Cayman in every way possible. At 4.3 billion transistors, this is by far the most complex core AMD has engineered but due to TSMC’s 28nm manufacturing process, power consumption has remained at approximately the same level as the previous generation. These new cores also come with full compatibility for DX11.1 (which will be available with Windows 8) and the bandwidth provided by PCI-E 3.0.

From a purely specifications standpoint, the HD 7970 has about 33% more Stream Processors and 32 additional TMUs when compared against its predecessor while the engine clock gets a boost to 925MHz. Once again, AMD is trumpeting the overclocking capabilities of their core with claims of 1GHz and higher clock speeds being easily attainable. While the ROP count doesn’t receive much (if any) attention, the higher core speed means throughput has nonetheless been increased from one generation to the next. Memory speeds also remain the same as Cayman XT at 5.5 Gbps but a 3GB layout is standard and the modules are now linked to a massive 384-bit memory bus which results in about 264 GB/s of bandwidth. We should also mention that AMD has equipped all HD 7970 boards with 6 Gbps GDDR5 ICs which should easily overclock to the 6.5 Gbps mark.
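That 264 GB/s figure falls straight out of the bus width and memory speed (a quick sketch using the numbers above):

```python
# Theoretical memory bandwidth for the HD 7970's 384-bit interface.
BUS_WIDTH_BITS = 384
EFFECTIVE_RATE_GTPS = 5.5   # GDDR5 effective data rate per pin

# Bytes moved per second = (bus width in bytes) x (effective rate)
bandwidth_gb_s = (BUS_WIDTH_BITS / 8) * EFFECTIVE_RATE_GTPS
print(bandwidth_gb_s)  # 264.0 GB/s
```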

4.3 billion transistors, 3GB of GDDR5 and an advanced manufacturing process certainly don’t come cheap though, making the HD 7970 the most expensive single GPU card from AMD in recent memory. However, while the Tahiti XT-based card may look pricey when compared to the rest of AMD’s lineup, it actually compares quite well to NVIDIA’s current single GPU flagship: the GTX 580.


While the Tahiti XT based HD 7970 may be the subject of this review, AMD certainly won’t be stopping with just this one card. There is a whole lineup of next generation parts in the pipeline that begins with the HD 7970 and its little brother, a product with the Tahiti Pro core that’s due out early next year. Slightly further down the product stack is Pitcairn, an architecture which should play a role near and dear to most gamers’ hearts since it will take over the highly popular $199 to $299 market from the HD 6800 and HD 6950 cards. Finally, there’s Cape Verde. This small, efficient GPU is billed as the spiritual successor to the highly successful HD 5700 and HD 6700 cards.

With Tahiti, Pitcairn and Cape Verde all on their way, AMD surely has their hands full but there’s one other wrinkle in the fabric of this story: they’ll all be launching alongside a dual GPU product… before the end of Q1 2012. That means by the time April hits we could conceivably see four new families of AMD GPUs hitting a minimum of seven different price points. Whether or not they’ll all be hard launches is anyone’s guess but regardless of availability, the first half of 2012 could prove to be a great time for graphics card shopping.
 
A Closer Look at the Radeon HD 7970 3GB




The HD 7970 is a relatively compact 11” card that should fit in almost any ATX enclosure and carries on AMD’s tradition of black / red branding. In this case, AMD has moved away from the matte black design of the HD 6970 by using a high gloss heatsink shroud which is accented by large swaths of red highlights.


Image supplied by AMD

While the Tahiti XT core is based off of an efficient 28nm manufacturing process, the sheer number of transistors and large 365mm² die size means that it produces a significant amount of heat. In order to keep temperatures in check, a new multi step vapor chamber with an increased thermal mass was designed while the fan was upsized to provide additional airflow at lower RPMs. A new generation of “phase change” thermal compound was also used so it is recommended that the heatsink not be removed unless it is being replaced by an aftermarket unit. According to AMD, these three additions should provide better operational temperatures while decreasing the HD 7970’s acoustical profile.


Considering this card’s price, it is only natural that AMD has chosen to go the full monty with display connectors but there have been some significant changes over the previous generation. Gone is the diminutive and restrictive fan grille from the HD 6970 and in its place is a more traditional full length opening for optimal airflow. This change necessitated a bit of rethinking in terms of output connectors and as you can see, one of the DVI connectors paid the ultimate price. That doesn’t mean dual DVI compatibility is thrown out the door since every board will ship with an HDMI to dual link DVI adaptor as well as an active mini DisplayPort to DVI connector which allows for native 3x1 Eyefinity support.

Input power is handled by an 8-pin / 6-pin setup while the two Crossfire bridges allow for triple card setups. The dual BIOS switch also makes a comeback and allows for quick and easy switching between the board’s default BIOS and a user uploaded version.


Even though the HD 7970 houses more memory than any other reference AMD card, the PCB’s underside isn’t utilized for additional ICs so AMD didn’t feel the need to include a secondary heatsink a la HD 6970. Instead, the GDDR5 modules are arrayed in a 12 x 256MB pattern and make direct contact with the card’s upper vapor chamber.
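That memory layout also lines up neatly with the 384-bit bus described earlier (a quick sketch; standard GDDR5 ICs present a 32-bit interface, so each module gets its own channel):

```python
# Frame buffer layout: twelve 256MB GDDR5 ICs on a 384-bit bus.
MODULES = 12
MODULE_SIZE_MB = 256
BUS_WIDTH_BITS = 384

total_mb = MODULES * MODULE_SIZE_MB          # 3072MB, i.e. the card's 3GB
bits_per_module = BUS_WIDTH_BITS // MODULES  # one 32-bit channel per IC
print(total_mb, bits_per_module)
```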


The HD 7970’s 11” length lines up perfectly with the HD 6970 while being about ¼” longer than NVIDIA’s GTX 580 due to a slight overlap in the AMD card’s heatsink shroud.
 