
BFG GTX 280 OCX 1GB Video Card Review


SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,861
Location
Montreal






Manufacturer Product Page: BFG Tech - BFG NVIDIA GeForce GTX 280 OCX 1GB PCIe 2.0
Product Number: BFGRGTX2801024OCXE
Availability: Now
Warranty: Lifetime
Price: Approx. $450



No matter how many video card reviews we do, we know that there will always be something new and exciting just over the horizon. Just imagine: the now-legendary 8800GT was released almost a year ago and between then and now we have seen a flurry of releases from both Nvidia and ATI. While Team Red has progressed from their 3800-series directly to the new 4800-series, Nvidia has gone down a somewhat more winding road. The first 8800-series was augmented by the 8800GT and 8800GTS 512MB, which were shortly joined by the 9800GTX, 9600GT and eventually the 9800 GX2. Most of these cards are still in play but have now been joined by Nvidia’s new assault on the high-end range: the GeForce GTX 280 and GTX 260.

Almost since their release, the two new GTX 200-series cards have faced extremely tough competition from ATI in the form of the HD4800-series cards. Consumers have rejoiced to see the renewed performance war take a significant toll on Nvidia’s pricing structure: cards that retailed for over $600 a few weeks prior now sometimes go for under $450. With last week’s formal introduction of the HD 4870 X2, Nvidia has officially lost the performance crown to a card that costs about $100 less than the GTX 280 did when it was first introduced. However, even though they no longer have the top dog on the block, Nvidia is hanging tough with their current cards while cutting prices a bit further, so not all is lost…not by a long shot.

As the GTX 280 matures, Nvidia’s board partners have been able to eke a bit more performance out of their cards and have released products which carry higher and higher overclocks. While many enthusiasts may scoff at pre-overclocked cards, they hold an allure for many people out there since they offer increased performance right out of the box without having to go through the trial and error process of overclocking themselves. Over the last few years, BFG has always been at the forefront of the pre-overclocked craze and with their OCX cards, they take things to the next level. We should mention now that in our conversations with BFG, they have stated that creating a highly overclocked GTX 280 isn’t as easy as it seems due to the massive amounts of heat generated by the core directly influencing the final overclock. That being said, in this review we will be looking at their GTX 280 OCX, which is the highest-clocked GTX 280 in their lineup that keeps the stock cooler. The only higher-clocked 280 sports a copper waterblock, so it will be interesting to see how this particular air-cooled card copes with the increased heat output of the overclocked core.

While availability of this card seems extremely limited here in Canada, our friends south of the border have things a bit better with availability at several large retailers. Believe it or not, where this card was once retailing for somewhere north of $650, it seems that prices have come down enough that the GTX 280 OCX can be had for as little as $450. Coupled with BFG’s lifetime warranty and newly-implemented Trade-Up program, $450 represents a surprising value in the grand scheme of things.

If the BFG GTX 280 OCX can perform up to our expectations, it may be a real winner for those of you who want some pre-overclocked goodness. Its performance, however, has yet to be shown, so let’s get this review under way!



The Current Nvidia Lineup



Here it is: the new Nvidia lineup in all its glory and there are some pretty significant changes that we can see right off the bat. The most notable of these is the discontinuation of the short-lived 9800 GX2 as Nvidia’s flagship product, which is now replaced by the GeForce GTX 280 and, to a lesser extent, the GTX 260. The rest of the Nvidia high to mid-range lineup stays pretty much the same, with cards like the 8800GT and 9600GT receiving some pretty steep price cuts of late. There has also been the addition of the 9800 GTX+ and the 9800GT, the former of which uses the new 55nm manufacturing process. The 9800GT, on the other hand, is basically an 8800GT with a few features thrown in for good measure and uses either a 65nm or a new 55nm core.

You may all have seen a trend within the last few weeks of rapidly falling GT200-series prices in the face of rising competition from ATI’s new cards, and because of this these cards have actually become somewhat affordable. Granted, nearly $450 for a single GTX 280 is no small chunk of change but it sure beats the astronomical $680 it was released at. The same goes for the GTX 260, though to a somewhat lesser extent, with price cuts bringing it in at a shade over $300, putting it in direct competition with the HD4870 from ATI.

Sitting at the top of this new lineup is the GTX 280, which is equipped with 1GB of GDDR3 memory working at 2214Mhz (DDR), basically on par with what we saw with the GX2. Also gone are the days when we saw a 256-bit memory interface on something deemed a “high-end” product, since the GTX 280 now uses a 512-bit interface. This should eliminate many of the claimed bottlenecks of the narrower interface used on cards like the 9800 GTX. The core speed (which includes the ROPs and TMUs) operates at 602Mhz, which is quite interesting since many pundits claimed that with the switch to a 65nm manufacturing process we would see a rapid increase in clock speeds. That has not happened with the core of the GT200 series, it seems.
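To put some numbers behind that wider interface, here is a rough sketch of peak memory bandwidth using the clocks quoted above; the 9800 GTX figure assumes its roughly 2200Mhz effective memory clock on a 256-bit bus, so treat both as theoretical peaks rather than measured results.

```python
# Rough peak-bandwidth estimate for a GDDR memory subsystem.
# bandwidth = effective transfer rate x bus width in bytes.

def peak_bandwidth_gbps(effective_mhz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s (decimal gigabytes)."""
    bytes_per_transfer = bus_width_bits / 8        # 512 bits -> 64 bytes
    return effective_mhz * 1e6 * bytes_per_transfer / 1e9

gtx_280 = peak_bandwidth_gbps(2214, 512)   # ~141.7 GB/s on the 512-bit bus
gtx_9800 = peak_bandwidth_gbps(2200, 256)  # ~70.4 GB/s on the narrower bus
print(f"GTX 280:  {gtx_280:.1f} GB/s")
print(f"9800 GTX: {gtx_9800:.1f} GB/s")
```

Roughly double the bandwidth of the 9800 GTX, which is where the claimed bottleneck relief comes from.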

Looking at the “little brother” GTX 260, it seems there was quite a bit of pruning going on, with lower clock speeds and less memory being the flavour of the day, combined with fewer processor cores. This lowers its price and makes it easier to produce in volume, but at the same time it could mean significant performance decreases when compared with the GTX 280.

To keep with their new parallel processing mentality, Nvidia has changed the name of their Stream Processors (or shader processors, depending on your mood) to “processor cores”. There are 240 of these so-called processor cores in the GTX 280’s GT200 core, operating at 1296Mhz, while those on the GTX 260 operate at a more mundane 1242Mhz. This speed is once again quite a bit less than what we are used to seeing from past Nvidia products, but considering the number of processors, we can consider this a brute force approach rather than the finesse which comes with faster clock speeds.
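The brute force approach can be put in rough numbers. The sketch below uses the core counts and shader clocks above, with Nvidia's usual peak-FLOPS accounting for this generation (a MAD plus a MUL, i.e. three floating point operations per core per clock); these are theoretical peaks, not real-world throughput.

```python
# Back-of-the-envelope shader throughput for the GT200 cards.

def peak_gflops(cores: int, shader_mhz: float, flops_per_clock: int = 3) -> float:
    """Theoretical peak shader throughput in GFLOPS."""
    return cores * shader_mhz * flops_per_clock / 1000

print(f"GTX 280: {peak_gflops(240, 1296):.0f} GFLOPS")  # ~933
print(f"GTX 260: {peak_gflops(192, 1242):.0f} GFLOPS")  # ~715
```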


BFG GTX 280 OCX Specifications



Looking at the specifications of the BFG GTX 280 OCX, it is quite evident that this is one of the fastest GTX 280 cards available on the market today. While a 63Mhz overclock on the core may not seem like much to many of you, remember that the core on this card produces a massive amount of heat and isn’t all that efficient, which means it is not well suited to higher clocks. It is in the memory department that this card gets its balls, receiving a nearly 200Mhz overclock over stock speeds. All in all, this should result in a card which performs quite a bit better than the reference design, but will that difference actually translate into a noticeable increase in framerates? We will find out in the testing section…
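In relative terms, those factory overclocks work out as follows, assuming the reference clocks from the lineup table (602Mhz core, 2214Mhz effective memory) and the deltas mentioned above:

```python
# Relative size of BFG's factory overclocks versus reference clocks.

def uplift_pct(stock_mhz: float, overclocked_mhz: float) -> float:
    """Percentage increase of an overclock over the stock clock."""
    return (overclocked_mhz / stock_mhz - 1) * 100

core_oc = uplift_pct(602, 602 + 63)      # ~10.5% on the core
memory_oc = uplift_pct(2214, 2214 + 200)  # ~9.0% on the effective memory clock
print(f"Core:   +{core_oc:.1f}%")
print(f"Memory: +{memory_oc:.1f}%")
```

Around a 10% bump on both fronts, which is why a measurable (if not dramatic) framerate increase over the reference card is a reasonable expectation.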
 


The GT200-series Architecture


The GT200-series represents Nvidia’s first brand new architecture since the G80 launched all the way back in November of 2006. In human years this 19 month timeframe may not have seemed like a long time but in computer years it was an eternity.

Even though these new cards are still considered graphics cards, the GT200 architecture has been built from the ground up in order to make use of emerging applications which can use parallel processing. These applications are specifically designed to take advantage of the massive potential that comes with the inherently parallel nature of a graphics card’s floating point vector processors. To accomplish this, Nvidia has released CUDA which we will be talking about in the next section.

On the graphics processing side of things, the GT200 series are second-generation DX10 chips which do not support DX10.1 like some ATI cards do, while promising to open a whole new realm of graphics capabilities. Nvidia’s mantra in the graphics processing arena is to move us away from the photo-realism of the last generation of graphics cards into something they call Dynamic Realism. For Nvidia, Dynamic Realism means that not only is the character rendered in photo-real definition but said character also interacts realistically with a photo-real environment.

To accomplish all of this, Nvidia knew that they needed a serious amount of horsepower and to this end have released what is effectively the largest, most complex GPU to date with 1.4 billion transistors. To put this into perspective, the original G80 core had about 686 million transistors. Let’s take a look at how this all fits together.


Here we have a basic die shot of the GT200 core which shows the layout of the different areas. There are four sets of processor cores clustered into each of the four corners which have separate texture units and shared frame buffers. The processor core areas hold the individual Texture Processing Clusters (or TPCs) along with their local memory. This layout is used for both Parallel Computing and graphics rendering so to put things into a bit better context, let’s have a look at what one of these TPCs looks like.


Each individual TPC consists of 24 stream (or thread) processors which are broken into three groups of eight. Combine eight SPs plus shared memory into one unit and you get what Nvidia calls a Streaming Multiprocessor. Basically, a GTX 280 has ten texture processing clusters, each with 24 stream processors, for a grand total of 240 processors. A GTX 260, on the other hand, has two clusters disabled, which brings its total to 192 processor “cores”. Got all of that? I hope so, since we are now moving on to the different ways in which this architecture can be used.
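The arithmetic above can be condensed into a few lines: each TPC holds three Streaming Multiprocessors of eight stream processors apiece, and the two cards differ only in how many TPCs are enabled.

```python
# Shader-count arithmetic for the GT200 architecture.

SPS_PER_SM = 8    # stream processors per Streaming Multiprocessor
SMS_PER_TPC = 3   # Streaming Multiprocessors per Texture Processing Cluster

def total_cores(enabled_tpcs: int) -> int:
    """Total 'processor cores' for a given number of enabled TPCs."""
    return enabled_tpcs * SMS_PER_TPC * SPS_PER_SM

print(total_cores(10))  # GTX 280: 240 processor cores
print(total_cores(8))   # GTX 260: 192 processor cores
```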


Parallel Processing


At the top of the architecture shot above is the hardware-level thread scheduler that manages which threads are set across the texture processing clusters. You will also see that each “node” has its own texture cache which is used to combine memory accesses for more efficient and higher bandwidth memory read/write operations. The “atomic” nodes work in conjunction with the texture cache to speed up memory access when the GT200 is being used for parallel processing. Basically, atomic refers to the ability to perform atomic read-modify-write operations to memory. In this mode all 240 processors can be used for high-level calculations such as a Folding @ Home client or video transcoding.


Graphics Processing


This architecture is primarily used for graphics processing and when it is being used as such, there is dedicated shader thread dispatch logic which controls data to the processor cores as well as setup and raster units. Other than that and the lack of atomic processing, the layout is pretty much identical to the parallel computing architecture. Overall, Nvidia claims that this is an extremely efficient architecture which should usher in a new dawn of innovative games and applications.
 


Of Parallel Processing and CUDA



What is CUDA?

Nvidia has this to say about their CUDA architecture:

CUDA is a software and GPU architecture that makes it possible to use the many processor cores (and eventually thousands of cores) in a GPU to perform general-purpose mathematical calculations. CUDA is accessible to all programmers through an extension to the C and C++ programming languages for parallel computing.

To put that into layman’s terms, it means that we will now be able to take advantage of the massive potential offered by current GPU architectures in order to speed up certain tasks. In essence, CUDA should be able to take a task like video transcoding, which takes hours on a quad core CPU, and perform that same operation in a matter of minutes on a GPU. Not all applications can be transferred to the GPU, but those that can will supposedly see an amazing jump in performance.

We could go on and on about CUDA but before we go into some of the applications it can be used in, we invite you to visit Nvidia’s CUDA site: CUDA Zone - resource for C developers of applications that solve computing problems


Folding @ Home


By now, many of you know what Stanford University’s Folding @ Home is since it is the most widely used distributed computing program around right now. While in the past it was only ATI graphics cards that were able to fold, Nvidia has taken up the flag as well and will be using the CUDA architecture to make this application available to their customers. From the information we have from Nvidia, a single GTX 280 graphics card could potentially take the place of an entire folding farm of CPUs in terms of folding capabilities.


Video Transcoding


In today’s high tech world, mobile devices have given users the capability to bring their movie collections with them on the go. To this end, consumers need a quick and efficient way of transferring their movies from one device to another. From my experience, this can be a pain in the butt since it seems like every device from a Cowon D2 to an iPod needs a different resolution, bitrate and compression to look its best. Even a quad core processor can take hours to transcode a movie and that just isn’t an option for many of us who are on the go.

To streamline this process for us, Nvidia has teamed up with Elemental Technologies to offer a video transcoding solution which harnesses the power available from the GTX’s 240 processors. The BadaBOOM Media Converter they will be releasing can take a transcoding process which took up to six hours on a quad core CPU and streamline it into a sub-40 minute timeframe. This also frees up your CPU to work on other tasks.

If these promises are kept, this may be one of the most-used CUDA applications even though it will need to be purchased (pricing is not determined at this point).


PhysX Technology


About two years ago there were many industry insiders who predicted that physics implementation would be the next Big Thing when it came to new games. With the release of their PhysX PPU, Ageia brought to the market a stand-alone physics processor which had the potential to redefine gaming. However, the idea of buying a $200 physics card never appealed to many people and the unit never became very popular with either consumers or game developers. Fast forward to the present and Nvidia now has control over Ageia’s PhysX technology and will be putting it to good use in all of their cards featuring a unified architecture. This means that PhysX suddenly has an installed base numbering in the tens of millions instead of the tiny portion who bought the original PPU. Usually, a larger number of potential customers means that developers will use a technology more often, which will lead to more titles being developed for PhysX.

Since physics calculations are inherently parallel, the thread dispatcher in the unified shader architecture is able to shunt these calculations to the appropriate texture processing cluster. This means a fine balancing act must be performed since, in theory, running physics calculations can decrease the rendering performance of the GPU. However, it seems Nvidia is working long and hard to get things balanced properly so turning up in-game physics will have a minimal effect on overall graphics performance.
 


Additional Features of the GT200 Architecture


Yes, there is more than what we have already mentioned in the last few sections when it comes to the new GTX 280 and GTX 260 cards. Nvidia has packed their new flagships with more features than you can shake a stick at so let’s go over a few of them which may impact you.


3-Way SLI


As multi-GPU solutions become more and more popular, Nvidia is giving consumers the option to run as many as three graphics cards together in order to increase performance to insane levels. Before the release of the 9800GTX, the only cards available for 3-way SLI were the 8800GTX and 8800 Ultra, so the GTX 280 and GTX 260 have now become the fourth and fifth cards to use this technology. Just be prepared to fork over some megabucks for this privilege: not only would you need God’s Own CPU, but also about $1500 for a trio of 280 cards or $1000 for three 260 cards. That is a pretty bitter pill for just about anyone to swallow.


Optional Full HDMI Output


All GTX 280 and GTX 260 cards come with the option for full HDMI output over a DVI to HDMI adaptor. Notice we said “option”? While GT200 cards will come with an SPDIF input connector on the card itself, the board partner has to choose whether or not to include a DVI to HDMI dongle so the card can output both sound and images through an HDMI cable. Coupled with the fact that the new GTXes fully support HDCP, this feature can make the card into a multimedia powerhouse. Unfortunately, in order to keep costs down, we are sure that there will be quite a few manufacturers who will see fit not to include the necessary hardware for HDMI support. With this in mind, make sure you keep a close eye on the accessories offered with the card of your choice if you want full HDMI support without having to buy a separate dongle.

To be honest with you, this strikes us as a tad odd since if we are paying upwards of $400 for a card, we would expect there to be an integrated HDMI connector a la GX2. Making the DVI to HDMI dongle optional smacks of some serious penny-pinching.


Purevideo HD


To put it into a nutshell, Purevideo HD is Nvidia’s video processing software that offloads up to 100% of the high definition video encoding tasks from your CPU onto your GPU. In theory, this will result in lower power consumption, better feature support for Blu-ray and HD-DVD and better picture quality.


In addition to dynamic contrast enhancement, Purevideo HD has a new feature called Color Tone Enhancement. This feature will dynamically increase the realism and vibrancy for green and blue colors as well as skin tones.


HybridPower


By far, one of the most interesting features supported by the 200-series is Nvidia’s new HybridPower, which is compatible with HybridPower-equipped motherboards like the upcoming 780a and 750a units for AMD AM2 and AM2+ processors. It allows you to shift power between the integrated GPU and your card, so if you aren’t gaming you can switch to integrated graphics to save on power, noise and heat.


While we have not seen whether this works, it is definitely an interesting concept since it should allow for quite a bit of flexibility between gaming and less GPU-intensive tasks. There has been more than one occasion in the summer when I was working in Word and wished my machine would produce less heat so I wouldn’t be roasting like a stuffed turkey. If it can deliver on what it promises, this technology would be great for people who want a high-powered graphics card by night and a word processing station by day.


This technology even works if you have GTX 280 or 260 cards working in SLI and once again you should (in theory) be able to shut down the two high-powered cards when you don’t need them.


All HybridPower-equipped motherboards come with both DVI and VGA output connectors since all video signals from both the on-board GPU and any additional graphics cards go through the integrated GPU. This means you will not have to switch the connector when turning on and off the power-hungry add-in graphics cards. All in all, this looks to be great on paper but we will have to see in the near future if it can actually work as well as it claims to. In terms of power savings, this could be a huge innovation.


Additional Power Saving Methods


Other than the aforementioned HybridPower, the GT200-series of cards has some other very interesting power saving features. With dynamic clock and voltage settings, Nvidia has been able to further reduce power consumption when the system is at idle, so if you are using a program that doesn’t require the GPU to work, you don’t have to worry about it consuming copious amounts of power. The same goes for heat: as power consumption decreases, so does the heat output from the core. I don’t know about you, but I hate sweating like a pig while using Photoshop just because my GPU wants to dump hot air back into the room, and with this feature hopefully those sweat sessions will be a thing of the past.

Additionally, Nvidia has added a power saving feature for HD decoding as well. Since the card doesn’t need full power to decode a high definition movie, voltages will be decreased from what they would be in full 3D mode which will once again result in less power draw and heat.
 


The BFG Advantage: Lifetime Warranty & Trade-Up

With dozens of manufacturers vying for your attention in the highly competitive graphics card market, companies are always looking for ways to distinguish themselves from their competition. Some have gone the route of offering highly overclocked cards while others tend to focus on the customer satisfaction aspect of their business before thinking about increasing the performance of their products. BFG has been making a name for themselves by offering the best of both worlds by releasing both overclocked versions of their cards while giving a customer service experience that is second to none. Two of the major aspects of BFG’s commitment to their customers are their Lifetime Warranty and newly-introduced Trade-Up program.


Lifetime Warranty

One of the longtime hallmarks of BFG has been their Lifetime Warranty on all their graphics cards sold here in North America. From personal experience, all someone has to do is call BFG’s 24/7 customer support hotline, troubleshoot with the representative and, if nothing comes of it, an RMA number will be issued. This may seem too easy to be true, but numerous posts across several tech-centric forums bear nothing but praise for BFG and the way they handle their customers. Indeed, our own http://www.hardwarecanucks.com/forum/troubleshooting/1829-canadian-rma-experience-3.html thread has several posts about good experiences with BFG’s Lifetime Warranty. Just remember: in order to be eligible for the lifetime warranty you must register your card with BFG within 30 days of purchase.

Unfortunately, some manufacturers have one-upped BFG by offering their own lifetime warranties which, unlike BFG’s, also cover aftermarket cooler installation and overclocking.

For more information about BFG’s Lifetime Warranty, please visit their website here: BFG Tech - Warranty


Trade-Up Program


BFG has recently introduced their Trade-Up program, which is in effect for 100 days after the purchase of a new BFG graphics card. This program gives a BFG customer peace of mind by offering them the opportunity to trade in their graphics card for a newer model within 100 days and pay the difference in cost. The worth of the card you trade in is based on the pre-determined MSRP of the card in question at the time you apply for the trade-up, so this value will probably be quite a bit less after a few months. For now, there are only a few graphics cards listed on the Trade-Up page with their current trade-in value, but that will change as more come out: BFG Tech - tradeupmatrix.

This means that if you purchase the card we are reviewing here today, you will be able to trade it in for a better card if one is released within 100 calendar days of your invoice date. The only caveat is that your card’s value will be based on the pre-determined BFG price at whatever time you choose to trade it in. In addition, you must register your card within 30 days to be eligible to trade it in for something better.
 


BFG GTX 280 OCX Packaging & Accessories




As with all other BFG boxes, this one keeps the understated black packaging design but adds something a bit different. BFG routinely releases two different box designs: one destined for brick and mortar retailers while the other gets sent to etailers. Since no one buying online will actually see the box, a bit of money is saved by using a sticker on the etail boxes instead of printing a completely new box design every time a new card is released. As you can see, the package we received is decked out in etail regalia since the GTX 280 OCX will probably only be available online.


When BFG releases an OCX-branded card, they pull out all the stops when it comes to accessories and the GTX 280 OCX is no different. This includes all of the things we wished we had seen in the packaging of reference GTX 280s. You get the following:

- DVI to HDMI dongle
- 1ft S/PDIF connector
- 6ft HDMI cable
- DVI to VGA dongle
- 6-pin to 8-pin PCI-E adaptor
- Molex to 6-pin adaptor
- TV-Out connector


The S/PDIF cable is a great addition since it allows you to connect your GTX 280 to a sound card or motherboard in order to pass audio to the graphics card and out through an HDMI cable. This is where the DVI to HDMI dongle comes into play, since it effectively allows both video and audio signals to be carried over an HDMI cable.


The included HDMI cable is of surprisingly good quality with gold plated connectors and is about six feet long. While discerning consumers will probably want to upgrade it for their 1080P TVs, the rest of us will find the video quality it provides to be perfect without having to spend a ridiculous amount of money on a professional-grade unit.


There is also a fair bit of documentation which comes with the GTX 280 OCX, but most of it covers things many of us already know, like connecting the 8-pin power connector and how to run an S/PDIF cable to the card. There is also a warranty card and a pair of case stickers.
 


A Closer Look at the BFG GTX 280 OCX



The BFG GTX 280 keeps to the exact same reference design as all of the other 280 cards which have been released. It is about 10.5” long, which means it should fit into most ATX-sized cases, and has a massive heatsink assembly covering the entire card. This heatsink design has been an Nvidia hallmark for some time now; it not only cools the card quite well but also looks damn good.


The continuous monotonous black expanse of the heatsink shroud has been broken up somewhat by BFG’s sticker depicting one of their usual mascots pulling a little Human Torch number with flames shooting out of his hands. To us this is much better than seeing some sort of manga character toting an oversized gun, but then again you might enjoy that type of stuff. They have also applied a small OCX sticker to the central fan hub.


The backside of every GTX 280 card is covered by a continuous piece of aluminum which is supposed to help dissipate the heat generated by the additional memory modules on this side. There is also a small pull-out tab which covers the SLI connector on the card. Unlike the lower-end Nvidia cards, the GTX 280 is Tri-SLI capable.


One side of the BFG GTX 280 OCX holds yet another sticker as well as a number of power and input connectors. This card requires you to use an 8-pin PCI-E power connector in addition to the 6-pin due to its high power requirements but luckily their positioning on the 280 is perfect. There is also an S/PDIF connector which is covered by a plastic tab which needs to be removed if you want to hook up the included audio cable.


The backplate holds the usual output connectors which includes a pair of DVI outputs as well as the TV-out connector. As we said in our original GTX 280 review, it would have been great to have seen a built-in HDMI connector but that was not meant to be.
 


Under the Heatsink


Please note that this section has been copied from our original GTX 280 review since both cards are of the reference design.

Before we go through the motions here, it should be mentioned that taking apart the heatsink assembly on this card is not easy since the top and bottom portions are attached to one another with a clip system. You should be extremely careful when doing this since any slip-up means a real danger of damaging your card. Do this at your own risk and note that Hardware Canucks takes no responsibility if you damage your card.



After removing the necessary screws and gently (and we mean VERY gently) prying the bottom heatsink off the card, we are greeted by eight of the sixteen memory modules. Each of them has a thermal pad placed over it so the heat they generate is easily transferred to the aluminum heat spreader.

To continue the removal of the upper heatsink you will need to remove the two screws which are placed right next to the ram modules.


The first thing you will notice is that the IHS placed on top of the GT200 core is absolutely immense. It is important to note that what you are seeing here is NOT the actual core of the GT200 but rather an integrated heat spreader, much like we see on modern CPUs. It spreads the heat from the core over a larger area so it can be more easily dissipated by the cooler.

The power distribution end of this card is a confusing maze of voltage regulators and transistors with surprisingly few capacitors considering the amount of power the GTX 280 needs when under load.


The IHS of the GTX 280 is actually slightly larger than that of the last IHS-equipped Nvidia card, the 8800GTX. We guess this is what you get when packing 1.4 billion transistors into one core.

The memory used on this card consists of two banks of eight 64MB Hynix H5RS5223CFR-N2C GDDR3 modules, for a total of 16 ICs. This memory is rated to run at 1200Mhz (2400Mhz effective) at 2.0V with a latency of 0.8ns.
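As a quick sanity check on the frame buffer, sixteen 512Mbit (64MB) ICs do indeed add up to the card's advertised 1GB:

```python
# Frame buffer arithmetic for the GTX 280's memory configuration.

BANKS = 2
ICS_PER_BANK = 8
MB_PER_IC = 64            # each 512Mbit GDDR3 chip holds 64MB

total_mb = BANKS * ICS_PER_BANK * MB_PER_IC
effective_mhz = 1200 * 2  # GDDR3 transfers data twice per clock

print(f"{total_mb}MB total, {effective_mhz}Mhz effective at rated speed")
```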


In the first pictures you may have noticed a small chip close to the backplate. Even though HD decoding is now done on the GPU itself, the NVIO chip from the G80 series of cards makes a comeback on the GTX 280. This chip is basically responsible for all input and output data that doesn’t originate from or go through the PCI Express interface.
 


The GTX 280 Heatsink



The underside of the GTX 280 heatsink is reminiscent of the older G80 style units with a large copper base plate and an aluminum heat spreader which makes direct contact with the ram modules, VRMs and NVIO chip. In true Nvidia fashion, there is a copious amount of thermal compound between the IHS and the copper base which is actually quite sticky as well.

Unlike other Nvidia coolers we have seen in the past, the copper contact plate on this one is polished to a shine and even though it shows some minor tooling marks it is of very good quality.


Once the shroud is removed it is apparent that the GT200 core needs some serious cooling. The copper base is attached to three large 8mm heatpipes which run up to the aluminum fins in order to quickly disperse the heat generated by the core. The end of one of the heatpipes is also attached to the black metal runner that runs the length of the card to shed yet more heat. It is actually quite amazing to see the amount of engineering that has gone into this heatsink.


The fan used on this card is an 80mm Protechnic Magic unit which we could not find official ratings for. As has already been mentioned, it is directed downwards towards the base of the cooler in order to clear the capacitors and move more air across the bottom of the aluminum fins.
 