
EVGA GeForce GTX 280 1GB Superclocked Review


SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,900
Location
Montreal



EVGA GeForce GTX 280 1GB Superclocked Edition Review




Manufacturer Product Page: TBD
Product Number: 01G-P3-1282-AR
Availability: Now
Warranty: Lifetime
Price: Approx. $700 CAD




Nvidia would like to take you on a journey the likes of which we have yet to see in the consumer graphics card world. This is a journey through the minefields that are today’s games, the peaks of processing power that come with parallel computing and the nearly endless fields of applications that will benefit from the GPU doing something other than rendering games. The brave new world we are talking about is brought to us by Nvidia’s new GTX 200-series graphics processing units which are launching today.

In recent months we have seen a gradual evolution of the G80 architecture into the G90 series, with a move to 65nm manufacturing and various tweaks that have given rise to cards like the 9800 GTX. These cards have succeeded in appealing to a broad range of consumers, not only gamers, due to their competitive prices and relatively high performance. If we think of the 9800 GTX as a technological evolution, then what we see here today with the GTX 200 processors can be considered a revolution akin to the first flight of the Wright brothers. This is what we have all been waiting for: a new architecture (still based on the unified shader concept) from Nvidia which will usher in a new age of computing for consumers. You may notice that we say “consumers” since, even though they are priced well into the enthusiast spectrum, these new cards will appeal not only to hard-core gamers but also to people looking for a bit more horsepower behind their Folding@home clients and a bevy of other uses.

All of these advances have been made possible through the standardization of Nvidia’s CUDA architecture into something that is accessible to the everyday user. With this technology, a regular 8-series or 9-series graphics processor, along with the new 200-series, morphs into a number-crunching powerhouse the likes of which cannot be matched by today’s CPUs. What makes the GTX 200 cards different in this respect is that they have been engineered from the ground up to take advantage of CUDA and, as we will see later, the possibilities are nearly endless. Thus, “parallel processing” has quickly become the catch phrase Nvidia uses to describe the potential of their new graphics cards. Nvidia hopes that while the GTX 200 series has considerable graphics capabilities, its other uses are numerous enough to appeal to a much broader customer base rather than being regarded as a pure game-playing solution.

That being said, today Nvidia is launching two new cards: the GeForce GTX 280 and the GTX 260. Gone are the four-digit numerical names given to so many Nvidia cards of years past. Even though today is the “official” launch date for both cards, the GTX 280 will only be available from retailers the day after this review is published, while the GTX 260’s availability has been pushed to June 26th. As with all launches, availability will initially be quite tight, but as production ramps up so too will allocations to retailers, so hopefully any shortages will be rectified in short order. As for pricing, Nvidia has quoted us $649 USD for the GTX 280 and $399 USD for the GTX 260. These prices should be a bit higher here in Canada, with a non-overclocked GTX 280 starting around $690; we have not yet heard any scuttlebutt about Canadian GTX 260 pricing.

While we usually try to focus on stock-clocked cards on release date, today we have something a little different for you. In this review we will be looking at EVGA's GeForce GTX 280 1GB Superclocked card which is of course a reference design with a touch of adrenalin pumped into it for increased clock speeds. EVGA has long been known not only for their excellent customer service, Step Up program and lifetime warranty but also for their habit of releasing overclocked versions of nearly every graphics card known to man. We have been told that this Superclocked version will cost about $10 to $20 more than their stock clocked cards but as we will see later, “Superclocked” no longer means what it once did.

With all of this said and done, let’s delve a bit deeper into the new Nvidia lineup, the architecture behind this card and the applications in which it can be used.


 

Nvidia Lineup / EVGA GTX 280 Superclocked Specs

The New Nvidia Lineup



Here it is, the new Nvidia lineup in all its glory, and there are some pretty significant changes that we can see right off the bat. The most notable of these is the discontinuation of the short-lived 9800 GX2 as Nvidia’s flagship product, now replaced by the GeForce GTX 280 and, to a lesser extent, the GTX 260. The rest of Nvidia’s high- to mid-range lineup stays pretty much the same, with cards like the 8800 GT and 9600 GT receiving further price cuts over the last few weeks.


Sitting at the top of this new lineup is the GTX 280, which is equipped with 1GB of GDDR3 memory working at 2214MHz (DDR), basically on par with what we saw on the GX2. Also gone are the days where we see a 256-bit memory interface on something deemed a “high-end” product: the GTX 280 now uses a 512-bit interface. This should eliminate many of the claimed bottlenecks of the narrower interface used on cards like the 9800 GTX. The core (which includes the ROPs and TMUs) operates at a somewhat mundane 602MHz, which is quite interesting since many pundits claimed that the switch to a 65nm manufacturing process would bring a rapid increase in clock speeds. With the core of the G200 series, it seems, this has not happened.
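The 512-bit bus and the 2214MHz effective data rate together determine the card's peak memory bandwidth. Here is a quick back-of-the-envelope sketch (our own arithmetic, not a quoted spec), with the 9800 GTX's 256-bit, 2200MHz reference setup included for comparison:

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate).
def bandwidth_gbs(bus_width_bits, effective_mhz):
    return (bus_width_bits // 8) * effective_mhz / 1000  # GB/s

gtx280 = bandwidth_gbs(512, 2214)   # ~141.7 GB/s
gtx9800 = bandwidth_gbs(256, 2200)  # ~70.4 GB/s

print(f"GTX 280: {gtx280:.1f} GB/s, 9800 GTX: {gtx9800:.1f} GB/s")
```

Roughly double the bandwidth of the 9800 GTX, which is where the claim about eliminating bottlenecks comes from.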

To keep with their new parallel processing mentality, Nvidia has renamed their Stream Processors (or shader processors, depending on your mood) “processor cores”. There are 240 of these so-called processor cores in the GTX 280’s G200 core, operating at 1296MHz. This speed is once again quite a bit less than what we are used to seeing from past Nvidia products but, considering the number of processors, we can consider this a brute-force approach rather than the finesse which comes with faster clocks.

Even though this review isn’t focusing on it, the GTX 260 also sees the light of day albeit via a paper launch. This card is basically a stripped-down version of the 280 at a much, much lower price. If upon its release the 260 can hit the $399 price point, it may end up appealing to a much broader range of customers who don’t need the power of the GTX 280 but want a head start with this new architecture. We will take a look at the GTX 260 in an upcoming review closer to its availability date.


EVGA GeForce GTX 280 Superclocked Specifications



The EVGA Superclocked edition comes with a very minor clock speed increase which may be considered nothing but window dressing by quite a few people. That being said, an increase of 19MHz on the core and 54MHz on both the processor cores and the memory isn’t anything to scoff at, since this overclock is fully covered by the warranty.
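Applied to the stock clocks listed earlier (602MHz core, 1296MHz shaders, 2214MHz memory), those deltas work out as follows; a quick sketch of our own:

```python
# EVGA Superclocked clocks = stock GTX 280 clocks + the advertised bumps.
stock = {"core": 602, "shader": 1296, "memory": 2214}  # MHz
bump  = {"core": 19,  "shader": 54,   "memory": 54}

superclocked = {k: stock[k] + bump[k] for k in stock}
for name in stock:
    pct = 100 * bump[name] / stock[name]
    print(f"{name}: {superclocked[name]}MHz (+{pct:.1f}%)")
```

In other words, the “Superclocked” premium buys roughly a 3% core bump and a 4% shader bump.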

In the past we have been used to EVGA using the Superclocked name to denote some of their higher-clocked cards but this is no longer the case. They have completely revamped their lineup to the point where their Superclocked version is now one slight step above the stock-clocked version and the SSC is now clocked a bit higher than the Superclocked version. Confused yet? Well, EVGA isn’t done yet since they are now releasing a FTW version which will carry with it the highest overclocks of their lineup in addition to a Hydro Copper 16 version which carries a pre-installed water block. The KO version hasn’t seen the light of day in the information we have but we wouldn’t be surprised if it makes a debut sometime down the road.

Here are the product numbers for each of EVGA’s products:

Reference: 01G-P3-1280-AR
Superclocked: 01G-P3-1282-AR
SSC: 01G-P3-1284-AR
FTW: 01G-P3-1286-AR
Hydro Copper 16: 01G-P3-1289-AR
 

The G200-series Architecture


The G200 series represents Nvidia’s first brand new architecture since the G80 launched all the way back in November of 2006. In human years this 19-month timeframe may not have seemed like a long time, but in computer years it was an eternity.

Even though these new cards are still considered graphics cards, the G200 architecture has been built from the ground up in order to make use of emerging applications which can use parallel processing. These applications are specifically designed to take advantage of the massive potential that comes with the inherently parallel nature of a graphics card’s floating point vector processors. To accomplish this, Nvidia has released CUDA which we will be talking about in the next section.

On the graphics side of things, the G200 series are second-generation DX10 chips which do not support DX10.1 like some ATI cards do, while promising to open a whole new realm of graphics capabilities. Nvidia’s mantra in the graphics arena is to move us away from the photo-realism of the last generation of graphics cards into something they call Dynamic Realism. For Nvidia, Dynamic Realism means that not only is a character rendered in photo-real definition, but that character also interacts realistically with a photo-real environment.

To accomplish all of this, Nvidia knew that they needed a serious amount of horsepower and to this end have released what is effectively the largest, most complex GPU to date with 1.4 billion transistors. To put this into perspective, the original G80 core had about 686 million transistors. Let’s take a look at how this all fits together.
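For the arithmetic-minded, that transistor comparison works out to roughly a doubling:

```python
# G200 vs. G80 transistor budget, per the figures above.
g200_transistors = 1_400_000_000
g80_transistors = 686_000_000

print(f"{g200_transistors / g80_transistors:.2f}x")  # just over 2x
```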


Here we have a basic die shot of the G200 core which shows the layout of its different areas. Four sets of processor cores are clustered into the corners of the die, each with separate texture units and shared frame buffers. The processor core areas hold the individual Texture Processing Clusters (or TPCs) along with their local memory. This layout is used for both parallel computing and graphics rendering, so to put things into better context, let’s have a look at what one of these TPCs looks like.


Each individual TPC consists of 24 stream (or thread) processors which are broken into three groups of eight. Combine eight SPs plus shared memory into one unit and you get what Nvidia calls a Streaming Multiprocessor. Basically, a GTX 280 has ten texture processing clusters, each with 24 stream processors, for a grand total of 240 processors. The GTX 260, on the other hand, has two clusters disabled, which brings its total to 192 processor “cores”. Got all of that? I hope so, since we are now moving on to the different ways in which this architecture can be used.
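That hierarchy (8 SPs per SM, 3 SMs per TPC) makes the core counts easy to verify with a quick sketch:

```python
# Core counts from the TPC/SM/SP hierarchy described above.
SPS_PER_SM = 8    # stream processors per Streaming Multiprocessor
SMS_PER_TPC = 3   # SM groups inside each Texture Processing Cluster

def total_cores(tpcs):
    return tpcs * SMS_PER_TPC * SPS_PER_SM

print(total_cores(10))  # GTX 280: 240
print(total_cores(8))   # GTX 260 (two TPCs disabled): 192
```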


Parallel Processing


At the top of the architecture shot above is the hardware-level thread scheduler that manages which threads are set across the texture processing clusters. You will also see that each “node” has its own texture cache, which is used to combine memory accesses for more efficient, higher-bandwidth memory read/write operations. The “atomic” nodes work in conjunction with the texture cache to speed up memory access when the G200 is being used for parallel processing; basically, “atomic” refers to the ability to perform atomic read-modify-write operations to memory. In this mode all 240 processors can be used for high-level calculations such as a Folding@home client or video transcoding.


Graphics Processing


This architecture is primarily used for graphics processing, and when it is being used as such there is dedicated shader thread dispatch logic which controls data to the processor cores as well as the setup and raster units. Other than that and the lack of atomic processing, the layout is pretty much identical to the parallel computing configuration. Overall, Nvidia claims this is an extremely efficient architecture which should usher in a new dawn of innovative games and applications.
 

Of Parallel Processing and CUDA


While we could make this a huge section, we will only go over the very tip of the iceberg that is CUDA and how it relates to the G200 architecture. It should be mentioned up front that, due to the unified architecture of the G80 and G90 cores, CUDA applications will work on them as well, just not at the speed they will achieve on the G200-series cards. Unfortunately, due to the time constraints of this review and the very beta nature of some of the applications, we were not able to benchmark the parallel processing features of the GTX 280. In the weeks to come, expect a full article dealing with CUDA, the applications which benefit from it and benchmarks. Until then, here is a little primer.


What is CUDA?

Nvidia has this to say about their CUDA architecture:

CUDA is a software and GPU architecture that makes it possible to use the many processor cores (and eventually thousands of cores) in a GPU to perform general-purpose mathematical calculations. CUDA is accessible to all programmers through an extension to the C and C++ programming languages for parallel computing.

To put that into layman’s terms, it means we will now be able to take advantage of the massive potential offered by current GPU architectures in order to speed up certain tasks. In essence, CUDA should be able to take a task like video transcoding, which takes hours on a quad core CPU, and perform that same operation in a matter of minutes on a GPU. Not all applications can be transferred to the GPU, but those that can will supposedly see an amazing jump in performance.
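To give a feel for the data-parallel model CUDA exposes, here is a toy sketch in plain Python (our own illustration, not Nvidia's API): instead of one loop walking an array, each element is conceptually handled by its own thread, exactly the kind of decomposition that maps onto hundreds of processor cores.

```python
# Toy data-parallel "kernel": each worker handles one element
# independently, standing in for one GPU thread. A real CUDA kernel
# would be written in the C extension Nvidia describes.
from concurrent.futures import ThreadPoolExecutor

def kernel(i, a, b, out):
    out[i] = a[i] + b[i]  # one element per "thread", no cross-dependencies

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * len(a)

with ThreadPoolExecutor() as pool:
    list(pool.map(lambda i: kernel(i, a, b, out), range(len(a))))

print(out)  # [11.0, 22.0, 33.0, 44.0]
```

Because no element depends on any other, the work scales with however many processors you throw at it, which is the whole point of the architecture described above.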

We could go on and on about CUDA but before we go into some of the applications it can be used in, we invite you to visit Nvidia’s CUDA site: CUDA Zone - resource for C developers of applications that solve computing problems


Folding @ Home


By now, many of you know what Stanford University’s Folding@home is, since it is the most widely used distributed computing program around right now. While in the past only ATI graphics cards were able to fold on the GPU, Nvidia has taken up the flag as well and will be using the CUDA architecture to make this application available to their customers. From the information we have from Nvidia, a single GTX 280 graphics card could potentially take the place of an entire farm of CPUs in terms of folding capability.


Video Transcoding


In today’s high-tech world, mobile devices have given users the ability to bring their movie collections with them on the go. To this end, consumers need a quick and efficient way of transferring their movies from one device to another. From my experience, this can be a pain in the butt since it seems like every device from a Cowon D2 to an iPod needs a different resolution, bitrate and compression format to look its best. Even a quad core processor can take hours to transcode a movie, and that just isn’t an option for many of us who are on the go.

To streamline this process, Nvidia has teamed up with Elemental Technologies to offer a video transcoding solution which harnesses the power of the GTX 280’s 240 processors. The BadaBOOM Media Converter they will be releasing can take a transcoding job which took up to six hours on a quad core CPU and finish it in under 40 minutes. This also frees up your CPU to work on other tasks.
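Taking those quoted times at face value, the claimed speedup is easy to put a number on:

```python
# Claimed transcode times: up to six hours on a quad-core CPU,
# under 40 minutes via BadaBOOM on the GPU.
cpu_minutes = 6 * 60
gpu_minutes = 40

speedup = cpu_minutes / gpu_minutes
print(f"roughly {speedup:.0f}x faster")  # roughly 9x faster
```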

If these promises are kept, this may be one of the most-used CUDA applications even though it will need to be purchased (pricing is not determined at this point).


PhysX Technology


About two years ago there were many industry insiders who predicted that physics implementation would be the next Big Thing when it came to new games. With the release of their PhysX PPU, Ageia brought to market a stand-alone physics processor which had the potential to redefine gaming. However, the idea of buying a $200 physics card never appealed to many people and the unit never became very popular with either consumers or game developers. Fast forward to the present and Nvidia now has control over Ageia’s PhysX technology, and will be putting it to good use in all their cards featuring a unified architecture. This means PhysX suddenly has an installed base numbering in the tens of millions instead of the tiny group who bought the original PPU. Usually, a larger number of potential customers means developers will use a technology more often, which should lead to more titles being developed for PhysX.

Since physics calculations are inherently parallel, the thread dispatcher in the unified shader architecture is able to shunt these calculations to the appropriate texture processing cluster. This means a fine balancing act must be performed since, in theory, running physics calculations can decrease the rendering performance of the GPU. However, it seems Nvidia is working long and hard to get things balanced properly so that turning up in-game physics will have a minimal effect on overall graphics performance.


Our Initial Thoughts Regarding CUDA

We have quickly run through three of the emerging uses for CUDA technology on the parallel processing architecture of modern Nvidia GPUs, and we must say it looks promising. Even though the technology may be new from a consumer’s standpoint, the potential of CUDA is virtually limitless. What needs to be done is to clearly define support for this architecture through the use of both pay-for-use and free applications. Without home-brew programs that use CUDA to speed up everyday tasks, we can see this technology passing us all by without any widespread adoption.

At this point, all we have are Nvidia’s claims regarding performance and ease of use with CUDA in today’s tasks, and only time will tell if these claims become a reality. What is needed is long-term support from Nvidia for this architecture and, from what we have seen, the boys in green are ready.
 

Additional Features of the G200 Architecture


Yes, there is more than what we have already mentioned in the last few sections when it comes to the new GTX 280 and GTX 260 cards. Nvidia has packed their new flagships with more features than you can shake a stick at so let’s go over a few of them which may impact you.


3-Way SLI


As multi-GPU solutions become more and more popular, Nvidia is moving towards giving consumers the option to run as many as three graphics cards together in order to increase performance to insane levels. Before the release of the 9800 GTX, the only cards available for 3-way SLI were the 8800 GTX and 8800 Ultra, so the GTX 280 and GTX 260 now become the fourth and fifth cards to use this technology. Just be prepared to fork over some megabucks for the privilege, since not only would you need God’s Own CPU but also about $2000 for a trio of 280 cards or $1200 for three 260 cards. That is a pretty bitter pill for just about anyone to swallow.
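Those ballpark figures come straight from the quoted USD launch prices; a quick sanity check:

```python
# Tri-SLI cost at Nvidia's quoted USD launch prices.
gtx280_usd = 649
gtx260_usd = 399

print(3 * gtx280_usd)  # 1947, i.e. "about $2000" for three GTX 280s
print(3 * gtx260_usd)  # 1197, i.e. roughly $1200 for three GTX 260s
```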


Optional Full HDMI Output


All GTX 280 and GTX 260 cards come with the option for full HDMI output through a DVI-to-HDMI adaptor. Notice we said “option”? While G200 cards will come with an SPDIF input connector on the card itself, the board partner has to choose whether or not to include a DVI-to-HDMI dongle so the card can output both sound and images through an HDMI cable. Coupled with the fact that the new GTXes fully support HDCP, this feature can turn the card into a multimedia powerhouse. Unfortunately, in order to keep costs down, we are sure there will be quite a few manufacturers who will see fit not to include the necessary hardware for HDMI support. With this in mind, keep a close eye on the accessories offered with the card of your choice if you want full HDMI support without having to buy a separate dongle.

To be honest with you, this strikes us as a tad odd since if we are paying upwards of $650 for a card, we would expect there to be an integrated HDMI connector a la GX2. Making the DVI to HDMI dongle optional smacks of some serious penny-pinching.


Purevideo HD


To put it in a nutshell, PureVideo HD is Nvidia’s video processing technology that offloads up to 100% of high-definition video decoding tasks from your CPU onto your GPU. In theory, this results in lower power consumption, better feature support for Blu-ray and HD DVD, and better picture quality.


In addition to dynamic contrast enhancement, Purevideo HD has a new feature called Color Tone Enhancement. This feature will dynamically increase the realism and vibrancy for green and blue colors as well as skin tones.


HybridPower


By far one of the most interesting features supported by the 200 series is Nvidia’s new HybridPower, which works with HybridPower-equipped motherboards like the upcoming 780a and 750a units for AMD AM2 and AM2+ processors. It allows you to shift rendering between the integrated GPU and your discrete card, so if you aren’t gaming you can switch to integrated graphics to save on power, noise and heat.


While we have not seen whether this works, it is definitely an interesting concept since it should allow for quite a bit of flexibility between gaming and less GPU-intensive tasks. There has been more than one summer day when I have been working in Word and wished my machine would produce less heat so I wouldn’t be roasting like a stuffed turkey. If this technology can deliver on its promises, it would be great for people who want a high-powered graphics card by night and a word processing station by day.


This technology even works if you have GTX 280 or 260 cards working in SLI and once again you should (in theory) be able to shut down the two high-powered cards when you don’t need them.


All HybridPower-equipped motherboards come with both DVI and VGA output connectors since all video signals from both the on-board GPU and any additional graphics cards go through the integrated GPU. This means you will not have to switch the connector when turning on and off the power-hungry add-in graphics cards. All in all, this looks to be great on paper but we will have to see in the near future if it can actually work as well as it claims to. In terms of power savings, this could be a huge innovation.


Additional Power Saving Methods


Other than the aforementioned HybridPower, the G200-series cards have some other very interesting power saving features. With dynamic clock and voltage settings, Nvidia has been able to further reduce power consumption when the system is at idle, so if you are using a program that doesn’t require the GPU to work, you don’t have to worry about it consuming copious amounts of power. The same goes for heat since, as power consumption decreases, so does the heat output from the core. I don’t know about you, but I hate sweating like a pig while using Photoshop just because my GPU wants to dump hot air back into the room; with this feature, hopefully these sweat sessions will be a thing of the past.

Additionally, Nvidia has added a power saving feature for HD decoding as well. Since the card doesn’t need full power to decode a high definition movie, voltages will be decreased from what they would be in full 3D mode which will once again result in less power draw and heat.
 

Why Buy an EVGA Card?


Many of us know EVGA by name since their cards are usually some of the best priced on the market. Other than that, there are several things EVGA has done to differentiate their business model from that of the competition. Not only do they have an excellent support forum and an open, friendly staff, but they also seem to have a love for their products you just can’t find in many other places. Passion for one’s products goes a long way in this industry but, without a good backbone of customer support, it would all be for nothing. Let’s take a look at what EVGA has to offer the customer AFTER they buy the product.


Lifetime Warranty


Every consumer wants peace of mind when buying a new computer component, especially when that component costs over $600. In order to protect your investment, EVGA offers their customers a lifetime warranty program which is in effect from the day you register the card until…well…the end of time. The only caveat is that you must register your card within 30 days of purchase or you will only be eligible for their new 1+1 warranty. So as long as you don’t get lazy or forget, consider yourself covered, even if you remove the heatsink. The only thing this warranty doesn’t cover is physical damage done to your card. For more information about the lifetime warranty you can go here: EVGA | Product Warranty

Even if you forget to register your card within the 30 days necessary to receive the lifetime warranty, EVGA still offers you a 1+1 warranty which covers your product for two years. For more information about this warranty, you can go here: EVGA | EVGA 1+1 Limited Warranty


Step-Up Program


While some competitors like BFG now offer trade-up programs as well, EVGA will always be known for having the first program of this type. It allows a customer with an EVGA card to “step up” to a different model within 90 days of purchase. Naturally, the difference between the value of your old card and the price of the new one has to be paid but, other than that, it is a pretty simple process which gives EVGA’s customers access to newer cards. As usual, certain conditions apply, such as the card being in stock with EVGA and the necessity of registering your card, but the process is pretty straightforward. Check out all the details here: EVGA | Step-Up Program


24 / 7 Tech Support


Don’t you hate it when things go ass-up in the middle of the night with no tech support around for the next dozen hours? Luckily for EVGA purchasers, there is a dedicated tech support line which is open 24 hours a day, 7 days a week. As far as we can tell, this isn’t tech support farmed out halfway around the world either, since every rep we have spoken to over the last few years has had impeccable English.
 

Packaging and Accessories




Even though the EVGA GTX 280 Superclocked Edition will retail for nearly $700, its box doesn’t get any preferential treatment over standard EVGA products. The front carries the usual oversized EVGA logo along with an indication of the 1GB of memory and an emblem showing them to be the #1 seller in the US.

Meanwhile, the back of the box has a picture of the card, a list of features and included accessories and a small window showing the serial number on the back of the card. Conspicuous by its absence is information about the clock speeds of this overclocked card.


The back of the package also has an area advertising the inclusion of EVGA’s Precision overclocking and monitoring tool. Overall, this is a great tool for beginners to the world of overclocking but many of us prefer more advanced tools like Rivatuner.


Instead of using the usual plastic clamshell-type package to protect the precious card within the box, EVGA has gone full on with the Styrofoam treatment. This is one of the best protected cards we have seen in quite some time so you shouldn’t be worried at all about it getting damaged on its way to you.

Once the two “covers” are flipped open we see that the GTX 280 is partitioned off from the accessories so everything stays nice and secure.


The accessories that are included are basic at best and include a pair of DVI to VGA adaptors, a “HDTV” RGB cable, a Molex to 6-pin adaptor and a 2x 6-pin PCI-E to 8-pin connector. That is it folks; there is no optional SPDIF cable or DVI to HDMI dongle included…on a card which will retail for nearly $700. Really, what more is there to say?


The included CD contains beta drivers which should be updated the second you install the card along with the EVGA Precision overclocking utility. The rest of the documentation mostly centers on basic installation, the necessity of plugging in the 8-pin PCI-E power connector and warranty information.
 

A Closer Look at the EVGA GTX 280 Superclocked




After all this talk about features, architecture and packaging, we finally get our first look at the EVGA GeForce GTX 280 Superclocked. The first thing that will stand out for you is probably the fact that the heatsink assembly looks very much like the one found on the 9800 GTX. However, there is more to it than what first meets the eye since the shroud hides a heatsink that has some serious cooling potential.

The fan shroud is decked out with EVGA’s usual panache for stylish graphics without going overboard into gaudy territory: a predominantly black color scheme accented by a yellow and orange “spark” effect.


Many have been wondering about the size of this card and true to Nvidia tradition, the length stays the same as the 9800 GTX, 9800 GX2 and 8800 GTX at about 10.5”. Since the power connectors are pushed out to the side, the GTX 280 should have no trouble fitting into most ATX cases on the market.


Here we have something a little different from your standard, run-of-the-mill graphics card. Since the 1GB of onboard RAM necessitates placing memory modules on the back of the card, Nvidia has decided to cover the whole thing with one large heatsink. Without a doubt, this looks damn cool, and kudos to EVGA for putting some graphics here as well so all your friends will know exactly what kind of badass graphics card you are running in your system.


Even though we mentioned that the heatsink looks similar to that of a 9800 GTX, the first hint that there is a different beast lying just below the surface comes with a look at the much larger fan which is used to draw in fresh air. Much like the assembly on the 8800 GTS 512MB, this rear portion of the GTX 280 is slightly vaulted so the fan clears the rearmost capacitors on the card.


While they may or may not have a functional use, the “shark fin” design on some portions of the GTX 280 looks great. It also seems to give a bit of flair to what is an otherwise very boxy looking card.


Since the power envelope of this card is about 236W when in stock form and quite a bit more when overclocked, it needs both the 8-pin and 6-pin power connectors plugged in. Luckily, this time Nvidia made it so 6+2 pin connectors would fit into the openings in the heatsink shroud.

Next to these power connectors is a small rubber-covered SPDIF port. This is used for the optional audio connector which brings audio from your sound card to the GTX 280, which then outputs the sound through the DVI-to-HDMI adaptor. Unfortunately, neither of these optional connectors was provided with this EVGA card.


The dual SLI connectors which are used for Tri-SLI are located below a plastic pull-out tab which pops off the side of the card. If you are running a pair of GTX 280 cards in SLI you will only need to use one of these connectors.

Much to our disappointment, the back of the GTX 280 is much like every other Nvidia card from the last several years, with a pair of DVI-D connectors as well as the usual TV-out connector. There is no sign of the HDMI connector we were hoping to see, but I guess we can’t always get what we want…
 


Under the Heatsink


Before we go through the motions here, it should be mentioned that taking apart the heatsink assembly on this card is not easy since the top and bottom portions are attached to one another with a clip system. You should be extremely careful when doing this since any slip-up carries a real danger of damaging your card. Do this at your own risk; Hardware Canucks takes no responsibility if you damage your card.



After removing the necessary screws and gently (and we mean VERY gently) prying the bottom heatsink off the card we are greeted with eight of the sixteen memory modules. Each of them has a thermal pad placed over it so the heat they generate is easily transferred to the aluminum heat spreader.

To continue removing the upper heatsink you will need to take out the two screws located right next to the RAM modules.


The first thing you will notice is that the IHS placed on top of the G200 core is absolutely immense. It is important to note that what you are seeing here is NOT the actual core of the G200 but rather an integrated heat spreader, much like we see on modern CPUs. It spreads the heat from the core over a larger area so the cooler can dissipate it more easily.

The power distribution end of this card is a confusing maze of voltage regulators and transistors with surprisingly few capacitors considering the amount of power the GTX 280 needs when under load.


The IHS of the GTX 280 is actually slightly larger than that of the last IHS-equipped Nvidia card, the 8800 GTX. We guess this is what you get when packing 1.4 billion transistors into one core.

The memory used on this card consists of two banks of eight 64MB Hynix H5RS5223CFR-N2C GDDR3 modules for a total of 16 ICs. This memory is rated to run at 1200MHz (2400MHz effective) at 2.0V with an access time of 0.8ns.
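Those figures line up neatly with the card’s 1GB frame buffer and the GTX 280’s published 512-bit memory bus; a quick sketch (Python) of the arithmetic, assuming that bus width:

```python
# Back-of-envelope figures for the GTX 280's memory subsystem.
# Chip count and density come from the ICs identified above; the
# 512-bit bus width is the GTX 280's published specification.
ICS = 16
MB_PER_IC = 64
BUS_WIDTH_BITS = 512
EFFECTIVE_MHZ = 2400  # GDDR3 transfers data on both clock edges

capacity_mb = ICS * MB_PER_IC  # total frame buffer in MB
# bytes moved per second at the effective data rate, in GB/s
bandwidth_gbs = EFFECTIVE_MHZ * 1e6 * (BUS_WIDTH_BITS / 8) / 1e9

print(capacity_mb)    # 1024 -> the card's 1GB of memory
print(bandwidth_gbs)  # 153.6 -> theoretical peak bandwidth in GB/s
```

That theoretical peak of roughly 153.6 GB/s is a healthy step up over the 9800 GTX’s 256-bit bus, even with GDDR3 rather than GDDR4 or GDDR5.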


In the first pictures you may have noticed a small chip close to the backplate. Even though HD decoding is now done on the GPU itself, the NVIO chip from the G80 series of cards makes a comeback on the GTX 280. This chip is basically responsible for all input and output signals that don’t pass through the PCI Express interface.
 


The GTX 280 Heatsink



The underside of the GTX 280 heatsink is reminiscent of the older G80 style units with a large copper base plate and an aluminum heat spreader which makes direct contact with the ram modules, VRMs and NVIO chip. In true Nvidia fashion, there is a copious amount of thermal compound between the IHS and the copper base which is actually quite sticky as well.

Unlike other Nvidia coolers we have seen in the past, the copper contact plate on this one is polished to a shine and even though it shows some minor tooling marks it is of very good quality.


Once the shroud is removed, it is apparent that the G200 core needs some serious cooling. The copper base is attached to three large 8mm heatpipes which run up to the aluminum fins in order to quickly dissipate the heat generated by the core. The end of one heatpipe is also attached to the black metal rail running the length of the card to shed yet more heat. It is actually quite amazing to see the amount of engineering that has gone into this heatsink.


The fan used on this card is an 80mm Protechnic Magic unit for which we could not find official ratings. As already mentioned, it is angled downwards towards the base of the cooler in order to clear the capacitors and move more air across the bottom of the aluminum fins.


Aftermarket Heatsink Installation


Usually we show you some aftermarket heatsink installation and performance in our reviews, but unfortunately you will have to do without anything that interesting this time around. This is due to the fact that every GPU heatsink we have on hand refused to fit around the massive area the core takes up.


In our collection of coolers we have a Thermalright HR03 GT (for G92 and RV670 cards) and an HR03 PLUS (for G80 cards). These pretty much cover the widest heatsink mounting hole offsets, and unfortunately neither would fit in any set of holes around the core. Above and to the left you can see the HR03 GT, which is far too small to clear much of anything around the IHS, while the mounting plate for the PLUS (above right) was only slightly off, but no amount of coaxing could get it to fit.

So, what does this tell us? I’ll go out on a limb here and say that there isn’t a single air cooler on the market today which will fit this card and offer adequate cooling. Water blocks are another matter, but judging from the extremely wide offset of the holes (wider than that of a G80), I would venture that none of these will fit either without new retention brackets being made available. Hopefully the likes of Swiftech with their universal MCW60 and D-TEK with their Fuzion GFX will soon offer affordable mounting plates for this card.
 