
BFG GeForce GTX 260 896MB Video Card Review


SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
13,421
Location
Montreal


BFG GeForce GTX 260 896MB Video Card Review





Manufacturer Product Page: BFG Tech - BFG NVIDIA GeForce GTX 260 896MB PCIe 2.0
Product Number: BFGEGTX260896E
Availability: Now
Warranty: Lifetime
Price: Click Here to Compare Prices



It really is a good time to be looking for a new graphics card, isn’t it? Prices are falling, overall performance per dollar is rising, and many holdouts are gradually being swayed into either ATI’s or Nvidia’s camp. Whether some people realize it or not, competition is a good thing; no matter which manufacturer you support, it is their competitor that helps keep prices within reason. There was a time within the last two years when high-performance products from Team Red trailed off into the distance and, because of this, prices for high-end cards stagnated at and above the $500 range. Not only that, but very little forward progress was made in the development of GPU architecture and we saw rehashes of the same cards over and over again. While we may still be seeing clones of the G80 cards (born again in the 65nm G9x-series), their prices are amazingly affordable for the performance they offer after being confronted with the ATI HD4800-series. However, there are a couple of new kids on the block in the Nvidia stables which were built from the ground up and are raring to go.

Both the GTX 260 and the GTX 280 were launched a little over a month ago and while the higher-end 280 was available immediately at launch, the less expensive 260 was greeted with a paper launch. Since that time, Nvidia seems to have been surprised by ATI’s launch of the HD4800-series of cards, which turned out to be extremely successful and performed above what many expected. In order to combat these lower-priced cards, Nvidia has sent the prices of the GTX 280 and GTX 260 into freefall. Once retailing for a bit over $400, GTX 260 cards can now be had for a shade under $300 after rebates here in Canada in some rare cases. Say it with me again: “competition is good”.

The GTX 260 resides at the lower end of the GT200 series but is nonetheless pegged to go head to head against ATI’s HD4870. It has all the features of its big brother but has a cut-down amount of memory, slightly lower clock speeds and fewer Stream Processors. All of these changes contribute to making this a much more affordable card while offering performance that was unheard of less than a year ago.

Unfortunately, up until now we have not been able to get our hands on one of these cards but BFG has finally stepped up to the plate and provided us with one of their stock-clocked cards. This GTX 260 holds a bit of a dubious place in BFG’s lineup by being (believe it or not) less widely available than their overclocked “OC” version while actually costing a bit more as well. We can’t really fault BFG for this since we asked for a stock-clocked card and that is what they delivered. That being said, being backed by BFG’s warranty and new Trade-Up program definitely has its benefits, which is why customers usually take a very good look at their cards.

 

The Current Nvidia Lineup



Here it is; the new Nvidia lineup in all its glory and there are some pretty significant changes that we can see right off the bat. The most notable of these changes is the discontinuation of the short-lived 9800 GX2 as Nvidia’s flagship product which is now replaced by the GeForce GTX 280 and to a lesser extent the GTX 260 as well. The rest of the Nvidia high to mid-range lineup stays pretty much the same with cards like the 8800GT and 9600GT receiving some pretty steep price cuts of late.

You may have seen a trend within the last few weeks of rapidly falling GT200-series prices in the face of rising competition from ATI’s new cards and, because of this, these cards have actually become somewhat affordable. Granted, nearly $450 for a single GTX 280 is no small chunk of change but it sure beats the astronomical $680 it was released at. The same goes for the GTX 260, though to a somewhat lesser extent, with price cuts bringing it in at a shade over $300 and putting it in direct competition with the HD4870 from ATI.


Sitting at the top of this new lineup is the GTX 280 which is equipped with 1GB of GDDR3 memory working at 2214Mhz (DDR) and is basically on par with what we saw with the GX2. Also gone are the days when we would see a 256-bit memory interface on something deemed a “high-end” product, since the GTX 280 now uses a 512-bit interface. This should eliminate many of the claimed bottlenecks of the narrower interface used on cards like the 9800 GTX. The core speed (which includes the ROPs and TMUs) operates at 602Mhz which is quite interesting since many pundits claimed that with the switch to a 65nm manufacturing process we would see a rapid increase in clock speeds. This has not happened with the core of the GT200 series it seems.
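To put those interface widths in perspective, peak memory bandwidth is simply the bus width (in bytes) multiplied by the effective data rate. A quick sketch using the figures above (the 2200Mhz effective rate for the 9800 GTX is our assumption, added purely for comparison):

```python
# Peak memory bandwidth = bus width (bytes) * effective data rate.
def peak_bandwidth_gbps(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9  # bytes/s -> GB/s

# GTX 280: 512-bit bus, GDDR3 at an effective 2214Mhz (DDR)
print(round(peak_bandwidth_gbps(512, 2214), 1))  # 141.7 GB/s
# 9800 GTX: 256-bit bus (2200Mhz effective assumed for illustration)
print(round(peak_bandwidth_gbps(256, 2200), 1))  # 70.4 GB/s
```

Roughly doubling the available bandwidth is exactly the kind of headroom high resolutions with anti-aliasing tend to demand.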

Looking at the “little brother” GTX 260, it seems that there was quite a bit of pruning going on, with lower clock speeds and less memory being the flavour of the day, combined with fewer processor cores. This in effect lowers its price and makes it easier to produce in volume but at the same time it could mean significant performance decreases when compared with the GTX 280.

To keep with their new parallel processing mentality, Nvidia has changed the name of their Stream Processors (or shader processors depending on your mood) to “processor cores”. There are 240 of these so-called processor cores in the GTX 280’s GT200 core which operate at 1296Mhz, while those on the GTX 260 operate at a more mundane 1242Mhz. This speed is once again quite a bit less than what we are used to seeing with past Nvidia products but considering the number of processors, we can consider this a brute force approach rather than the finesse which comes with faster speeds.
 


The GT200-series Architecture


The GT200-series represents Nvidia’s first brand new architecture since the G80 launched all the way back in November of 2006. In human years this 19 month timeframe may not have seemed like a long time but in computer years it was an eternity.

Even though these new cards are still considered graphics cards, the GT200 architecture has been built from the ground up in order to make use of emerging applications which can use parallel processing. These applications are specifically designed to take advantage of the massive potential that comes with the inherently parallel nature of a graphics card’s floating point vector processors. To accomplish this, Nvidia has released CUDA which we will be talking about in the next section.

On the graphics processing side of things, the GT200 series are second generation DX10 chips which do not support DX10.1 like some ATI cards do, while promising to open a whole new realm in graphics capabilities. Nvidia’s mantra in the graphics processing arena is to move us away from the photo-realism of the last generation of graphics cards into something they call Dynamic Realism. For Nvidia, Dynamic Realism means that not only is the character rendered in photo-real definition but said character interacts realistically with a photo-real environment as well.

To accomplish all of this, Nvidia knew that they needed a serious amount of horsepower and to this end have released what is effectively the largest, most complex GPU to date with 1.4 billion transistors. To put this into perspective, the original G80 core had about 686 million transistors. Let’s take a look at how this all fits together.


Here we have a basic die shot of the GT200 core which shows the layout of the different areas. There are four sets of processor cores clustered into each of the four corners which have separate texture units and shared frame buffers. The processor core areas hold the individual Texture Processing Clusters (or TPCs) along with their local memory. This layout is used for both Parallel Computing and graphics rendering so to put things into a bit better context, let’s have a look at what one of these TPCs looks like.


Each individual TPC consists of 24 stream (or thread) processors which are broken into three groups of eight. When you combine eight SPs plus shared memory into one unit you get what Nvidia calls a Streaming Multiprocessor. Basically, a GTX 280 has ten texture processing clusters, each with 24 stream processors, for a grand total of 240 processors. On the other hand, a GTX 260 has two clusters disabled which brings its total to 192 processor “cores”. Got all of that? I hope so, since we are now moving on to the different ways in which this architecture can be used.
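The processor-core arithmetic above can be sketched out explicitly:

```python
# Core counts from the TPC breakdown: each TPC holds three Streaming
# Multiprocessors of eight SPs apiece.
SPS_PER_SM, SMS_PER_TPC, TOTAL_TPCS = 8, 3, 10

sps_per_tpc = SPS_PER_SM * SMS_PER_TPC   # 24 stream processors per TPC
gtx280 = TOTAL_TPCS * sps_per_tpc        # all ten TPCs enabled
gtx260 = (TOTAL_TPCS - 2) * sps_per_tpc  # two TPCs disabled

print(gtx280, gtx260)  # 240 192
```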


Parallel Processing


At the top of the architecture shot above is the hardware-level thread scheduler that distributes threads across the texture processing clusters. You will also see that each “node” has its own texture cache which is used to combine memory accesses for more efficient and higher-bandwidth memory read/write operations. The “atomic” nodes work in conjunction with the texture cache to speed up memory access when the GT200 is being used for parallel processing. Basically, atomic refers to the ability to perform atomic read-modify-write operations to memory. In this mode all 240 processors can be used for high-level calculations such as a Folding @ Home client or video transcoding.
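As a rough analogy for why those atomic units matter, here is a Python sketch with CPU threads standing in for GPU threads and a lock standing in for the hardware atomic unit; the point is that the read-modify-write happens as one indivisible step, so no updates are lost when many threads hit the same memory location:

```python
import threading

# Without atomicity, concurrent "read, add one, write back" sequences
# can interleave and lose updates; serializing them fixes that.
counter = 0
lock = threading.Lock()

def atomic_add(n):
    global counter
    for _ in range(n):
        with lock:       # read-modify-write performed as one unit
            counter += 1

threads = [threading.Thread(target=atomic_add, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 -- every increment survives
```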


Graphics Processing


This architecture is primarily used for graphics processing and when it is being used as such, there is dedicated shader thread dispatch logic which controls data to the processor cores as well as setup and raster units. Other than that and the lack of atomic processing, the layout is pretty much identical to the parallel computing architecture. Overall, Nvidia claims that this is an extremely efficient architecture which should usher in a new dawn of innovative games and applications.
 


Of Parallel Processing and CUDA



What is CUDA?

Nvidia has this to say about their CUDA architecture:

CUDA is a software and GPU architecture that makes it possible to use the many processor cores (and eventually thousands of cores) in a GPU to perform general-purpose mathematical calculations. CUDA is accessible to all programmers through an extension to the C and C++ programming languages for parallel computing.

To put that into layman’s terms it means that we will now be able to take advantage of the massive potential offered by current GPU architectures in order to speed up certain tasks. In essence, CUDA should be able to take a task like video transcoding which takes hours on a quad core CPU and perform that same operation in a matter of minutes on a GPU. Not all applications can be transferred to the GPU but those that do will supposedly see an amazing jump in performance.
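Real CUDA kernels are written in Nvidia’s extension of C; purely as an illustration of the data-parallel model described above, here is a Python sketch in which the body of a loop becomes a “kernel” that every (virtual) processor core would run on a different index at once:

```python
# Data-parallel model, illustrated in Python: one kernel invocation
# per element. On a GPU, the hardware launches these concurrently
# across its processor cores instead of looping serially.
def kernel(i, a, b, out):
    out[i] = a[i] + b[i]   # each (virtual) core handles one element

n = 8
a, b, out = list(range(n)), list(range(n)), [0] * n
for i in range(n):         # stands in for a parallel kernel launch
    kernel(i, a, b, out)

print(out)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Tasks that decompose into many independent per-element operations like this are exactly the ones that map well to CUDA; tasks with heavy serial dependencies do not.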

We could go on and on about CUDA but before we go into some of the applications it can be used in, we invite you to visit Nvidia’s CUDA site: CUDA Zone - resource for C developers of applications that solve computing problems


Folding @ Home


By now, many of you know what Stanford University’s Folding @ Home is since it is the most widely used distributed computing program around right now. While in the past it was only ATI graphics cards that were able to fold, Nvidia has taken up the flag as well and will be using the CUDA architecture to make this application available to their customers. From the information we have from Nvidia, a single GTX 280 graphics card could potentially take the place of an entire folding farm of CPUs in terms of folding capabilities. However, even now, we have not received this kind of performance from our cards.


Video Transcoding


In today’s high tech world mobile devices have given users the capability to bring their movie collections with them on the go. To this end, consumers need to have a quick and efficient way of transferring their movies from one device to another. From my experience, this can be a pain in the butt since it seems like every device from a Cowon D2 to an iPod needs a different resolution, bitrate and compression to look the best possible. Even a quad core processor can take hours to transcode a movie and that just isn’t an option for many of us who are on the go.

To streamline this process for us, Nvidia has teamed up with Elemental Technologies to offer a video transcoding solution which harnesses the power available from the GTX’s 240 processors. The BadaBOOM Media Converter they will be releasing can take a transcoding process which took up to six hours on a quad core CPU and streamline it into a sub-40 minute timeframe. This also frees up your CPU to work on other tasks.
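The claimed speed-up works out as follows, using the six-hour and sub-40-minute figures quoted above:

```python
# Speed-up implied by the quoted transcoding times.
cpu_minutes = 6 * 60   # ~6 hours on a quad-core CPU
gpu_minutes = 40       # sub-40-minute claim for BadaBOOM on the GPU

print(cpu_minutes / gpu_minutes)  # 9.0 -- roughly a nine-fold speed-up
```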

If these promises are kept, this may be one of the most-used CUDA applications even though it will need to be purchased (pricing is not determined at this point).


PhysX Technology


About two years ago there were many industry insiders who predicted that physics implementation would be the next Big Thing when it came to new games. With the release of their PhysX PPU, Ageia brought to market a stand-alone physics processor which had the potential to redefine gaming. However, the idea of buying a $200 physics card never appealed to many people and the unit never became very popular with either consumers or game developers. Fast forward to the present and Nvidia now has control over Ageia’s PhysX technology and will be putting it to good use in all of their cards featuring a unified architecture. This means that PhysX suddenly has an installed base numbering in the tens of millions instead of the tiny portion who bought the original PPU. Usually, a larger number of potential customers means that developers will use a technology more often, which will lead to more titles being developed for PhysX.

Since physics calculations are inherently parallel, the thread dispatcher in the unified shader architecture is able to shunt these calculations to the appropriate texture processing cluster. This means a fine balancing act must be done since, in theory, running physics calculations can decrease the rendering performance of the GPU. However, it seems like Nvidia is working long and hard to get things balanced out properly so turning up in-game physics will have a minimal effect on overall graphics performance.


Our Initial Thoughts Regarding CUDA

We have quickly run through three of the emerging uses for the CUDA technology on the parallel processing architecture of modern Nvidia GPUs and we must say it looks promising. Even though the technology may be new from a consumer’s standpoint, the potential of CUDA is virtually limitless. What needs to be done is to clearly define support for this architecture through the use of both pay-for-use and free applications. Without home-brew programs for CUDA to speed up everyday tasks, we can see this technology passing us all by without any widespread adoption.

At this point, all we have are Nvidia’s claims regarding performance and ease of use with CUDA and today’s tasks, but only time will tell if these claims can become a reality. What is needed is long-term support from Nvidia for this architecture and from what we have seen, the boys in green are ready.
 


Additional Features of the GT200 Architecture


Yes, there is more than what we have already mentioned in the last few sections when it comes to the new GTX 280 and GTX 260 cards. Nvidia has packed their new flagships with more features than you can shake a stick at so let’s go over a few of them which may impact you.


3-Way SLI


As multi-GPU solutions become more and more popular, Nvidia is moving towards giving consumers the option to run as many as three graphics cards together in order to increase performance to insane levels. Before the release of the 9800GTX, the only cards available for 3-way SLI were the 8800GTX and 8800 Ultra, so the GTX 280 and GTX 260 cards have now become the fourth and fifth cards to use this technology. Just be prepared to fork over some megabucks for this privilege since not only would you need God’s Own CPU, but also about $1500 for a trio of 280 cards or $1000 for three 260 cards. That is a pretty bitter pill for just about anyone to swallow.


Optional Full HDMI Output


All GTX 280 and GTX 260 cards come with the option for full HDMI output over a DVI to HDMI adaptor. Notice we said “option”? While GT200 cards will come with an SPDIF input connector on the card itself, the board partner has to choose whether or not to include a DVI to HDMI dongle so the card can output both sound and images through a HDMI cable. Coupled with the fact that the new GTXes fully support HDCP, this feature can make this card into a multimedia powerhouse. Unfortunately, in order to keep costs down we are sure that there will be quite a few manufacturers who will see fit not to include the necessary hardware for HDMI support. With this in mind, make sure you keep a close eye on the accessories offered with the card of your choice if you want full HDMI support without having to buy a separate dongle.

To be honest with you, this strikes us as a tad odd since if we are paying upwards of $400 for a card, we would expect there to be an integrated HDMI connector a la GX2. Making the DVI to HDMI dongle optional smacks of some serious penny-pinching.


Purevideo HD


To put it into a nutshell, Purevideo HD is Nvidia’s video processing software that offloads up to 100% of the high definition video encoding tasks from your CPU onto your GPU. In theory, this will result in lower power consumption, better feature support for Blu-ray and HD-DVD and better picture quality.


In addition to dynamic contrast enhancement, Purevideo HD has a new feature called Color Tone Enhancement. This feature will dynamically increase the realism and vibrancy for green and blue colors as well as skin tones.


HybridPower


By far one of the most interesting features supported by the 200-series is Nvidia’s new HybridPower, which is compatible with HybridPower-equipped motherboards like the upcoming 780a and 750a units for AMD AM2 and AM2+ processors. It allows you to shift power between the integrated GPU and your card so if you aren’t gaming, you can switch to integrated graphics to save on power, noise and heat.


While we have not seen if this works, it is definitely an interesting concept since it should allow for quite a bit of flexibility between gaming and less GPU-intensive tasks. There has been more than one occasion when I have been working in Word in the summer and wished my machine would produce less heat so I wouldn’t be roasting like a stuffed turkey. If this technology can deliver on what it promises, it would be great for people who want a high-powered graphics card by night and a word processing station by day.


This technology even works if you have GTX 280 or 260 cards working in SLI and once again you should (in theory) be able to shut down the two high-powered cards when you don’t need them.


All HybridPower-equipped motherboards come with both DVI and VGA output connectors since all video signals from both the on-board GPU and any additional graphics cards go through the integrated GPU. This means you will not have to switch the connector when turning on and off the power-hungry add-in graphics cards. All in all, this looks to be great on paper but we will have to see in the near future if it can actually work as well as it claims to. In terms of power savings, this could be a huge innovation.


Additional Power Saving Methods


Other than the aforementioned HybridPower, the GT200-series of cards has some other very interesting power saving features. With dynamic clock and voltage settings, Nvidia has further been able to reduce power consumption when the system is at idle, so if you are using a program that doesn’t require the GPU to work, you don’t have to worry about it consuming copious amounts of power. The same goes for heat, since as power consumption decreases, so does the heat output from the core. I don’t know about you but I hate sweating like a pig while using Photoshop just because my GPU wants to dump hot air back into the room and with this feature, hopefully these sweat sessions will be a thing of the past.

Additionally, Nvidia has added a power saving feature for HD decoding as well. Since the card doesn’t need full power to decode a high definition movie, voltages will be decreased from what they would be in full 3D mode which will once again result in less power draw and heat.
 


The BFG Advantage: Lifetime Warranty & Trade-Up

With dozens of manufacturers vying for your attention in the highly competitive graphics card market, companies are always looking for ways to distinguish themselves from their competition. Some have gone the route of offering highly overclocked cards while others tend to focus on the customer satisfaction aspect of their business before thinking about increasing the performance of their products. BFG has been making a name for themselves by offering the best of both worlds by releasing both overclocked versions of their cards while giving a customer service experience that is second to none. Two of the major aspects of BFG’s commitment to their customers are their Lifetime Warranty and newly-introduced Trade-Up program.


Lifetime Warranty

One of the longtime marquees of BFG has been their Lifetime Warranty on all their graphics cards sold here in North America. From personal experience, all someone has to do is call BFG’s 24/7 customer support hotline, troubleshoot with the representative and if nothing comes of it, an RMA number will be issued. This may seem too easy to be true but numerous posts across several tech-centric forums bear nothing but praise for BFG and the way they handle their customers. Indeed, our own Canadian RMA Experience thread (http://www.hardwarecanucks.com/forum/troubleshooting/1829-canadian-rma-experience-3.html) has several posts about good experiences with BFG’s Lifetime Warranty. Just remember: in order to be eligible for the lifetime warranty you must register your card with BFG within 30 days of purchase.

Unfortunately for BFG, some manufacturers have one-upped them by offering their own lifetime warranties which, unlike BFG’s, also cover aftermarket cooler installation and overclocking.

For more information about BFG’s Lifetime Warranty, please visit their website here: BFG Tech - Warranty


Trade-Up Program


BFG has recently introduced their Trade-Up program which is in effect for 100 days after the purchase of a new BFG graphics card. This program gives a BFG customer peace of mind by offering them the opportunity to trade in their graphics card for a newer model within 100 days and pay the difference in cost. The worth of the BFG graphics card you trade in is based off of the pre-determined MSRP of the card in question at the time you apply for the trade-up, so this price will probably be quite a bit less after a few months. For now, there are only a few graphics cards listed on the Trade-Up page with their current trade-in value but that will change as more come out: BFG Tech - tradeupmatrix.

This means if you purchase either card we are reviewing here today, you will be able to trade it in for a better card if one is released within 100 calendar days of your invoice date. The only caveat about this is that your card’s value will be based off of the pre-determined BFG price whenever it is you choose to trade it in. In addition, you must register your card within 30 days to have a chance at trading it in for something better.
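The Trade-Up math itself is simple; here is a sketch with hypothetical dollar figures (BFG’s actual trade-in matrix sets the real values):

```python
# Trade-Up cost: you pay the difference between the new card's price
# and the pre-determined trade-in value of your current card.
# The $399 and $249 figures below are hypothetical examples.
def trade_up_cost(new_card_price, trade_in_value):
    return new_card_price - trade_in_value

print(trade_up_cost(399, 249))  # 150 -- the difference you would pay
```

Since the trade-in value tracks the card’s falling MSRP, waiting longer within the 100-day window generally means paying a larger difference.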
 

BFG GTX 260 Packaging & Accessories




The packaging for this BFG card is no different than any other we have seen with the usual stylized character on the front with additional information on the back. Unlike some cards we have encountered in the past, this box also contains specification information which is great for those of you buying it from a brick and mortar retailer.


The card is protected with BFG’s usual packaging method of first wrapping the card in bubble-wrap and then enclosing the whole affair with a cardboard façade. There is some additional Styrofoam thrown in for good measure which makes this an extremely durable way to protect the cards from any damage.


There are also the usual pieces of documentation included plus a driver CD (no, it doesn’t have a label for some reason) and a couple of BFG case badges. Other than the blazing yellow warranty information card there is also a pamphlet that has information about connecting the proper power connectors to your new card.


The accessory bundle is definitely nothing to write home about since to be honest, there really isn’t much included. All that you get is a DVI to VGA dongle, a Molex to 6-pin adaptor and a TV-out cable. That’s it folks; no DVI to HDMI connector and no SPDIF cable even though this card is equipped to handle both HDMI and high-def audio.
 

A Closer Look at the BFG GTX 260



When looking at the GTX 260 for the first time you will probably be stunned by its resemblance to the more powerful GTX 280. It has the same heatsink, is 10.5” long and in this case is clad in BFG’s usual kick ass color scheme. We must say that BFG really knows how to keep things wonderfully simple without any dancing pixies or overblown graphics on their decals.


One of the ways to differentiate this card from the GTX 280 is the fact that it has a pair of 6-pin PCI-E power connectors instead of the 280’s 6-pin and 8-pin. Nvidia was able to get away with this configuration since this card is rated to consume a whopping 54W less than its big brother. Next to the power connectors is a small audio cover which hides the SPDIF connector on the card.

BFG has added a few additional stickers on the exterior of their GTX 260 but not enough of them to be considered tasteless or gaudy. The one we have pictured above will be right-side up when the card is installed into a standard ATX case.


Since there are ram modules on the back of the GTX 260 that need cooling, Nvidia has designed the heatsink so it wraps around both the top and the bottom of the card. Unlike some other GTX-series cards on the market, BFG has taken the path less travelled by not putting a decal on the backside of the card.

There is also a small cover on the side of the heatsink which can be removed to expose the dual SLI connector. With less-expensive cards there will be a single SLI connector which will allow you to link up to two cards together for increased performance, but with the GTX 260 you are able to daisy-chain up to three cards together on supporting motherboards.


The fan on the BFG GTX 260 is a large, 80mm affair which is designed to suck in cool air and blow it over the internal heatsink assembly while being exhausted out the back. It also has a small opening at the back in order to keep the exposed capacitors from overheating.


The backplate is no different from many other cards we have seen in the past since it has the usual two DVI connectors, a single TV-out connector and an exhaust grille for the hot air generated by the core.
 


Under the Heatsink


Please note that removing the heatsink on this card will void your warranty.

Let me put this quite honestly for you: if you have no idea what you are doing, DO NOT attempt to remove the heatsink on one of these cards. Because the front and back halves of the heatsink interlock at several places, getting the two apart can quickly seem like a lesson in futility. If you are determined, make sure you take your time, don’t force anything apart and make sure you don’t damage anything.


Once the heatsink is removed we can see that this card has a wonderful black PCB with a massive GPU core IHS smack in the middle, surrounded by the ram modules. There is also a small NVIO input / output chip closer to the backplate which is used for handling all the signals that go to and from the card. Meanwhile, the back of the card is a bit uneventful other than the fact that it carries additional ram modules.


Due to the extremely high heat generated by the core, Nvidia decided to place an IHS (integrated heat spreader) over the GT200 core in order to evenly distribute the heat it generates before it hits the underside of the actual cooler. While the benefits or lack thereof of this approach can be discussed until the cows come home, one thing is quite apparent: this IHS is absolutely massive.

The memory used on this card consists of two banks of 64MB x 7 Hynix H5RS5223CFR-N0C GDDR3 memory for a total of 14 ICs. This memory is rated to run at 1000Mhz (2000Mhz effective) at 2.0V with a latency of 0.8ns. This is a bit different from the 280’s 2400Mhz-rated N2C modules but they should overclock quite well.
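That configuration works out as follows; each GDDR3 IC sits on its own 32-bit channel, which is how 14 chips add up to the GTX 260’s 448-bit aggregate memory bus:

```python
# Memory configuration from the IC count above.
ics, mb_per_ic, bits_per_ic = 14, 64, 32

print(ics * mb_per_ic)    # 896 MB total -- the card's full frame buffer
print(ics * bits_per_ic)  # 448-bit aggregate memory interface
```

Compare this with the GTX 280’s 16 ICs, which give it 1GB of memory on a 512-bit bus; disabling one bank of two chips is what trims both figures on the 260.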


When comparing the 260 on the left and the 280 on the right, it is quite obvious that there are quite a few differences between the power distribution sections. The 260 has a few of the inductor soldering points left blank and some VRMs missing as well, since its core doesn’t have the same insane power requirements as its big brother and there isn’t as much ram installed either.
 


Test System & Setup

System Used

Processor: Intel Core 2 Quad Extreme QX9770 @ 3.852Ghz
Memory: G.Skill 2x 2GB DDR2-1000 @ 1052Mhz DDR
Motherboard: DFI LanParty DK X38 T2R
Disk Drive: Pioneer DVD Writer
Hard Drive: Hitachi Deskstar 320GB SATAII
Fans: 2X Yate Loon 120mm @ 1200RPM
Power Supply: Corsair HX1000W
Monitor: Samsung 305T 30” widescreen LCD
OS: Windows Vista Ultimate x64 SP1


Graphics Cards:

BFG GTX 260
Sapphire Radeon HD4850 512MB (Stock)
Palit Radeon HD4870 512MB (Stock)
EVGA Geforce GTX 280 (stock)
EVGA 8800GT 512MB (stock)
XFX 8800GTS 512MB (stock)
BFG 9800 GTX (stock)
HIS HD3870 (stock)


Drivers:

Nvidia 177.41 WHQL (GTX 280 / 260)
Nvidia 175.19 WHQL
ATI Catalyst 8.7
ATI Catalyst 8.6 (HD3870)

Due to the unpredictability of some beta drivers in Windows Vista x64, we have decided to only use WHQL drivers for all graphics cards other than the one being tested.


Applications Used:

3DMark06 Professional
3DMark Vantage
Enemy Territory: Quake Wars
Devil May Cry 4 Demo
Crysis
Call of Duty 4: Modern Warfare
Prey
Unreal Tournament III
World in Conflict


*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 4 benchmark runs

- If the game did not support 2560 x 1600 resolution, the closest resolution to that was used
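For reference, the averaging mentioned in the notes is a simple mean over the four benchmark runs (the run data below is hypothetical):

```python
# Each reported score is the mean of four benchmark runs.
def average_fps(runs):
    return sum(runs) / len(runs)

# Hypothetical frame-rate figures from four runs of one game:
print(round(average_fps([58.2, 59.1, 57.8, 58.9]), 1))  # 58.5
```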
 