
EVGA GTX 260 Core 216 55nm Superclocked Edition Video Card Review


SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,861
Location
Montreal






Price: Click here to compare prices
Model Number: 896-P3-1257-AR
TechWiki Info: EVGA GTX 260 Core 216 55nm Superclocked Edition
Availability: Now
Warranty: Lifetime



By now we all know that Nvidia is trying to transition their graphics cards to the 55nm manufacturing process. The transition began with the 9800 GTX+ and has made its way into a few hand-picked 9800 GT models which have yet to find their way to retail. Many of us have been waiting for the shrunken-down cores to eventually trickle down into the newer cards; namely the GTX 200 series. Let’s be honest here for a second: the GTX 280 and 260 cards are power-sucking monsters which are not only expensive for Nvidia to produce but also don’t fit very well with the environmentally friendly aspect of today’s marketplace. Efficiency, both energy-wise and production-wise, is the name of the game these days, and with ATI already having had 55nm parts on the shelves for the better part of a year now, it was high time Nvidia made the transition as well. Consumers, and Nvidia’s bottom line, demanded it.

Has anyone else noticed that we have been seeing a large number of GTX 200-series cards on sale as of late? The reason is that Nvidia is trying to clear out their 65nm cores, and starting immediately we should see 55nm GTX 260 cards make their way to retailers. Just remember, distinguishing a 55nm card from a 65nm one can be a daunting task since we have heard that some board partners will not be advertising the new core on their packaging or PR materials. Even the name hasn’t changed; this is still the GTX 260. There are a number of reasons for keeping the name, but first and foremost among them is that the 55nm cores will not offer any performance increase over the older ones. Granted, all 55nm GTX 260 cards will feature 216 shaders, but other than that this is still the same card we have come to know and love.

While I can promise you that in the future we will see quite a few new products from Nvidia with 55nm cores, today we will be looking at a simple respin of the GTX 260. You may remember that a few months ago we took a look at the EVGA GTX 260 Core 216 Superclocked Edition card and found it to be excellent competition for the HD 4870 cards. Once again before Christmas, this same card performed extremely well in our Games of Christmas ’08 article. Why are we talking about the Core 216 Superclocked? Well, the first EVGA card out of the paddock with the 55nm core just happens to be the GTX 260 Core 216 Superclocked Edition, sporting the exact same specs as the card we have been running for the past few months. While it may be a bit counter-intuitive to review a card which is nearly a mirror image of a previous one, this particular example should hopefully provide some pleasant surprises in terms of heat production and power consumption. To this end we will be seriously beefing up those two sections of the review, while the commentary accompanying the general gaming benchmark results has been cut out.

As usual, EVGA offers their Lifetime Warranty and Trade Up program with the GTX 260 Core 216 Superclocked Edition, but they also bundle in a full version of Far Cry 2 for a bit of added value. Pricing for this card hasn’t quite settled yet and as such is actually slightly higher than that of other 216-shader GTX 260s. Is the added cost worth it? We are about to find out.


 

The GT200-series Architecture


The GT200-series represents Nvidia’s first brand new architecture since the G80 launched all the way back in November of 2006. In human years this timeframe may not have seemed like a long time, but in computer years it was an eternity.

Even though these new cards are still considered graphics cards, the GT200 architecture has been built from the ground up to serve emerging applications that can make use of parallel processing. These applications are specifically designed to take advantage of the massive potential that comes with the inherently parallel nature of a graphics card’s floating point vector processors. To accomplish this, Nvidia has released CUDA, which we will be talking about in the next section.

On the graphics processing side of things, the GT200-series chips are second-generation DX10 parts which, unlike some ATI cards, do not support DX10.1, yet they promise to open a whole new realm of graphics capabilities. Nvidia’s mantra in the graphics processing arena is to move us away from the photo-realism of the last generation of graphics cards into something they call Dynamic Realism. For Nvidia, Dynamic Realism means that not only is a character rendered in photo-real definition, but said character also interacts realistically with a photo-real environment.

To accomplish all of this, Nvidia knew that they needed a serious amount of horsepower and to this end have released what is effectively the largest, most complex GPU to date with 1.4 billion transistors. To put this into perspective, the original G80 core had about 686 million transistors. Let’s take a look at how this all fits together.


Here we have a basic die shot of the GT200 core which shows the layout of the different areas. There are four sets of processor cores clustered into each of the four corners which have separate texture units and shared frame buffers. The processor core areas hold the individual Texture Processing Clusters (or TPCs) along with their local memory. This layout is used for both Parallel Computing and graphics rendering so to put things into a bit better context, let’s have a look at what one of these TPCs looks like.


Each individual TPC consists of 24 stream (or thread) processors which are broken into three groups of eight. Combine eight SPs plus shared memory into one unit and you get what Nvidia calls a Streaming Multiprocessor. Basically, a GTX 280 has ten texture processing clusters, each with 24 stream processors, for a grand total of 240 processors. On the other hand, a GTX 260 has two clusters disabled, which brings its total to 192 processor “cores”, while the Core 216 cards like the one reviewed here have only one cluster disabled for a total of 216. Got all of that? I hope so, since we are now moving on to the different ways in which this architecture can be used.
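The cluster arithmetic above can be sanity-checked with a few lines of Python; this is just a quick sketch of the numbers in this section, nothing more:

```python
# Stream processor counts follow directly from the cluster layout:
# each Texture Processing Cluster (TPC) holds three Streaming
# Multiprocessors of eight stream processors each.
SPS_PER_SM = 8
SMS_PER_TPC = 3
sps_per_tpc = SPS_PER_SM * SMS_PER_TPC  # 24 SPs per cluster

gtx_280 = 10 * sps_per_tpc   # all ten clusters enabled -> 240
gtx_260 = 8 * sps_per_tpc    # two clusters disabled    -> 192
core_216 = 9 * sps_per_tpc   # one cluster disabled     -> 216

print(gtx_280, gtx_260, core_216)  # 240 192 216
```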


Parallel Processing


At the top of the architecture shot above is the hardware-level thread scheduler that manages which threads are sent across the texture processing clusters. You will also see that each “node” has its own texture cache, which is used to combine memory accesses for more efficient and higher bandwidth memory read/write operations. The “atomic” nodes work in conjunction with the texture cache to speed up memory access when the GT200 is being used for parallel processing. Basically, atomic refers to the ability to perform atomic read-modify-write operations to memory. In this mode, all 240 processors can be used for high-level calculations such as a Folding@Home client or video transcoding.


Graphics Processing


This architecture is primarily used for graphics processing, and when it is being used as such there is dedicated shader thread dispatch logic which feeds data to the processor cores as well as the setup and raster units. Other than that and the lack of atomic processing, the layout is pretty much identical to the parallel computing architecture. Overall, Nvidia claims that this is an extremely efficient architecture which should usher in a new dawn of innovative games and applications.
 

Of Parallel Processing and CUDA



What is CUDA?

Nvidia has this to say about their CUDA architecture:

CUDA is a software and GPU architecture that makes it possible to use the many processor cores (and eventually thousands of cores) in a GPU to perform general-purpose mathematical calculations. CUDA is accessible to all programmers through an extension to the C and C++ programming languages for parallel computing.

To put that into layman’s terms, it means that we will now be able to take advantage of the massive potential offered by current GPU architectures in order to speed up certain tasks. In essence, CUDA should be able to take a task like video transcoding, which takes hours on a quad core CPU, and perform that same operation in a matter of minutes on a GPU. Not all applications can be transferred to the GPU, but those that can should see an amazing jump in performance.
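To illustrate the kind of workload CUDA targets, here is a sketch in plain Python. Real CUDA code is written in extended C, and the per-element function below would run as one GPU thread per pixel instead of a serial loop; the brighten example is purely hypothetical, not something from Nvidia’s materials:

```python
# An embarrassingly parallel task: adjust the brightness of every
# pixel in a frame. On a CPU this runs as a serial loop; under CUDA
# the same per-element function would be launched as thousands of
# independent GPU threads, one per pixel.
def brighten(pixel, amount):
    return min(255, pixel + amount)  # clamp to the 8-bit maximum

frame = [10, 120, 250, 60]           # toy 4-pixel "frame"
result = [brighten(p, 20) for p in frame]
print(result)  # [30, 140, 255, 80]
```

Every element is independent of every other, which is exactly why a GPU with hundreds of processors can chew through this sort of job so much faster than a quad core CPU.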

We could go on and on about CUDA but before we go into some of the applications it can be used in, we invite you to visit Nvidia’s CUDA site: CUDA Zone - resource for C developers of applications that solve computing problems


Folding @ Home


By now, many of you know what Stanford University’s Folding @ Home is since it is the most widely used distributed computing program around right now. While in the past it was only ATI graphics cards that were able to fold, Nvidia has taken up the flag as well and will be using the CUDA architecture to make this application available to their customers. From the information we have from Nvidia, a single GTX 280 graphics card could potentially take the place of an entire folding farm of CPUs in terms of folding capabilities.


Video Transcoding


In today’s high tech world mobile devices have given users the capability to bring their movie collections with them on the go. To this end, consumers need to have a quick and efficient way of transferring their movies from one device to another. From my experience, this can be a pain in the butt since it seems like every device from a Cowon D2 to an iPod needs a different resolution, bitrate and compression to look the best possible. Even a quad core processor can take hours to transcode a movie and that just isn’t an option for many of us who are on the go.

To streamline this process for us, Nvidia has teamed up with Elemental Technologies to offer a video transcoding solution which harnesses the power available from the GTX’s 240 processors. The BadaBOOM Media Converter they will be releasing can take a transcoding process which took up to six hours on a quad core CPU and streamline it into a sub-40 minute timeframe. This also frees up your CPU to work on other tasks.
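Taking those figures at face value, the claimed improvement works out to roughly a nine-fold speedup (our own back-of-envelope math, not a number from Nvidia or Elemental):

```python
# Compare the quoted CPU and GPU transcoding times for one movie.
cpu_time_min = 6 * 60   # up to six hours on a quad core CPU
gpu_time_min = 40       # the sub-40-minute BadaBOOM claim
speedup = cpu_time_min / gpu_time_min
print(round(speedup, 1))  # 9.0, i.e. roughly a 9x improvement
```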

If these promises are kept, this may be one of the most-used CUDA applications even though it will need to be purchased (pricing is not determined at this point).


PhysX Technology


About two years ago there were many industry insiders who predicted that physics implementation would be the next Big Thing when it came to new games. With the release of their PhysX PPU, Ageia brought to market a stand-alone physics processor which had the potential to redefine gaming. However, the idea of buying a $200 physics card never appealed to many people and the unit never became very popular with either consumers or game developers. Fast forward to the present and Nvidia now has control over Ageia’s PhysX technology and will be putting it to good use in all of their cards featuring a unified architecture. This means that PhysX suddenly has an installed base numbering in the tens of millions instead of the tiny portion who bought the original PPU. Usually, a larger number of potential customers means that developers will use a technology more often, which should lead to more titles being developed for PhysX.

Since physics calculations are inherently parallel, the thread dispatcher in the unified shader architecture is able to shunt these calculations to the appropriate texture processing cluster. This means a fine balancing act must be performed since, in theory, running physics calculations can degrade the rendering performance of the GPU. However, it seems Nvidia is working long and hard to get things balanced out properly so that turning up in-game physics will have a minimal effect on overall graphics performance.
 

Additional Features of the GT200 Architecture


Yes, there is more than what we have already mentioned in the last few sections when it comes to the new GTX 280 and GTX 260 cards. Nvidia has packed their new flagships with more features than you can shake a stick at so let’s go over a few of them which may impact you.


3-Way SLI


As multi-GPU solutions become more and more popular, Nvidia is giving consumers the option to run as many as three graphics cards together in order to increase performance to insane levels. Before the release of the 9800 GTX, the only cards available for 3-way SLI were the 8800 GTX and 8800 Ultra, so the GTX 280 and GTX 260 have now become the fourth and fifth cards to use this technology. Just be prepared to fork over some megabucks for this privilege: not only would you need God’s Own CPU, but you would also be looking at about $1500 for a trio of 280 cards or $1000 for three 260 cards. That is a pretty bitter pill for just about anyone to swallow.


Optional Full HDMI Output


All GTX 280 and GTX 260 cards come with the option for full HDMI output over a DVI to HDMI adaptor. Notice we said “option”? While GT200 cards will come with an SPDIF input connector on the card itself, the board partner has to choose whether or not to include a DVI to HDMI dongle so the card can output both sound and images through a HDMI cable. Coupled with the fact that the new GTXes fully support HDCP, this feature can make this card into a multimedia powerhouse. Unfortunately, in order to keep costs down we are sure that there will be quite a few manufacturers who will see fit not to include the necessary hardware for HDMI support. With this in mind, make sure you keep a close eye on the accessories offered with the card of your choice if you want full HDMI support without having to buy a separate dongle.

To be honest with you, this strikes us as a tad odd since if we are paying upwards of $400 for a card, we would expect there to be an integrated HDMI connector a la GX2. Making the DVI to HDMI dongle optional smacks of some serious penny-pinching.


Purevideo HD


To put it into a nutshell, Purevideo HD is Nvidia’s video processing software that offloads up to 100% of the high definition video encoding tasks from your CPU onto your GPU. In theory, this will result in lower power consumption, better feature support for Blu-ray and HD-DVD and better picture quality.


In addition to dynamic contrast enhancement, Purevideo HD has a new feature called Color Tone Enhancement. This feature will dynamically increase the realism and vibrancy for green and blue colors as well as skin tones.


HybridPower


By far one of the most interesting features supported by the 200-series is Nvidia’s new HybridPower, which is compatible with HybridPower-equipped motherboards like the upcoming 780a and 750a units for AMD AM2 and AM2+ processors. It allows you to shift power between the integrated GPU and your card, so if you aren’t gaming you can switch to integrated graphics to save on power, noise and heat.


While we have not seen whether this works, it is definitely an interesting concept since it should allow for quite a bit of flexibility between gaming and less GPU-intensive tasks. More than once I have found myself working in Word in the summer, wishing my machine would produce less heat so I wouldn’t be roasting like a stuffed turkey. If it can deliver on its promises, this technology would be great for people who want a high-powered graphics card by night and a word processing station by day.


This technology even works if you have GTX 280 or 260 cards working in SLI and once again you should (in theory) be able to shut down the two high-powered cards when you don’t need them.


All HybridPower-equipped motherboards come with both DVI and VGA output connectors since all video signals from both the on-board GPU and any additional graphics cards go through the integrated GPU. This means you will not have to switch the connector when turning on and off the power-hungry add-in graphics cards. All in all, this looks to be great on paper but we will have to see in the near future if it can actually work as well as it claims to. In terms of power savings, this could be a huge innovation.


Additional Power Saving Methods


Other than the aforementioned HybridPower, the GT200-series cards have some other very interesting power saving features. With dynamic clock and voltage settings, Nvidia has been able to further reduce power consumption when the system is at idle, so if you are using a program that doesn’t require the GPU to work, you don’t have to worry about it consuming copious amounts of power. The same goes for heat since as power consumption decreases, so does the heat output from the core. I don’t know about you, but I hate sweating like a pig while using Photoshop just because my GPU wants to dump hot air back into the room; with this feature, hopefully those sweat sessions will be a thing of the past.

Additionally, Nvidia has added a power saving feature for HD decoding as well. Since the card doesn’t need full power to decode a high definition movie, voltages will be decreased from what they would be in full 3D mode which will once again result in less power draw and heat.
 

Why Buy an EVGA Card?


Many of us know EVGA by name since their cards are usually some of the best priced on the market. Other than that, there are several things which EVGA has done to try to differentiate their business model from that of their competition. Not only do they have an excellent support forum and an open, friendly staff but it also seems like they have a love for their products you just can’t find many other places. Passion for one’s products goes a long way in this industry but without a good backbone of customer support, it would all be for nothing. Let’s take a look at what EVGA has to offer the customer AFTER they buy the product.


Lifetime Warranty


Every consumer wants peace of mind when it comes to buying a new computer component, especially when that component can cost you over $600. In order to protect your investment, EVGA offers their customers a lifetime warranty program which is in effect from the day you register the card until…well…the end of time. The only caveat is that you must register your card within 30 days of purchase or you will only be eligible for their new 1+1 warranty. So as long as you don’t get lazy or forget, consider yourself covered even if you remove the heatsink. The only thing this warranty doesn’t cover is physical damage done to your card. For more information about the lifetime warranty you can go here: EVGA | Product Warranty

Even if you forget to register your card within the 30 days necessary to receive the lifetime warranty, EVGA still offers you a 1 year warranty.


Step-Up Program


While some competitors like BFG now offer trade-up programs as well, EVGA will always be known for having the first program of this type. It allows a customer with an EVGA card to “step up” to a different model within 90 days of purchase. Naturally, the difference in price between the value of your old card and that of the new card will have to be paid, but other than that it is a pretty simple process which gives EVGA’s customers access to newer cards. As usual, certain conditions apply, such as the cards being in stock with EVGA and the necessity to register your card, but beyond that it is pretty straightforward. Check out all the details here: EVGA | Step-Up Program


24 / 7 Tech Support


Don’t you hate it when things go ass-up in the middle of the night without tech support around for the next dozen hours or so? Luckily for you EVGA purchasers, there is a dedicated tech support line which is open 24 hours a day, 7 days a week. As far as we could tell, this isn’t farmed out tech support to the nether regions of Pakistan either since every rep we have spoken to over the last few years has had impeccable English. Well, we say that but maybe EVGA hunted down the last dozen or so expats living in Karachi.
 

Core 216 Superclocked Specs / Packaging & Accessories

EVGA GTX 260 Core 216 Superclocked Edition Specifications



The clock frequencies of the 55nm Core 216 Superclocked card mirror those of its 65nm predecessor to a tee, but it is important to note that its speeds are not that far removed from a stock card’s. As such, its main performance boost comes from the additional 24 shaders it is afforded when compared to the older GTX 260 with 192 SPs. A while ago EVGA changed its naming scheme and the Superclocked Edition cards suddenly went from being some of the higher-clocked versions in the lineup to those which are only moderately overclocked. It would not surprise us to see the SSC or FTW monikers make their way to these new 55nm cards, but only time will tell.


Packaging and Accessories



The outer box art for this card is very similar to that of every other EVGA GTX 200-series card we have reviewed to date, but there is a bit of red added to the usual orange band across the front. Meanwhile, the back of the box holds a bare minimum of information, which includes a brief glance at the card we will be seeing a bit more of in this review.


Unlike some past EVGA cards which have used dense foam for protection, the 55nm card is encased in a plastic clamshell to make sure it gets to you in one piece.


EVGA goes all out with the accessories for the 55nm Superclocked card. In addition to the usual Molex to 6-pin power adaptors and DVI to VGA dongle, there is also a DVI to HDMI connector as well as an S/PDIF cable for audio pass-through. Finally, a full version of Far Cry 2 is included, which adds a fair bit of value to this bundle; just remember that you only have a limited number of installs with this game, so use them wisely.
 

A Closer Look at the EVGA GTX 260 Core 216 Superclocked



This sight will probably be familiar to many of you since the basic design of the 55nm GTX 260 is very similar to the older card at first glance. The length stays unchanged at 10.5” but EVGA seems to have added a few different touches in terms of decals and whatnot. One thing is for sure: it is definitely interesting to see red of all colors on an EVGA Nvidia card, though this color scheme will not appeal to everyone.


The underside of the 55nm GTX 260 is where there is a large departure from past 200-series cards. Since there are no longer any memory ICs on the back of the PCB, some money could be saved by not installing a heatsink here.


Even though the GPU core takes less power than the older version, there is still a pair of 6-pin power connectors used along with the S/PDIF input. Originally, the side of the GTX 260 heatsink was slightly corrugated to allow for a greater surface area for dispersing heat brought to it by a single heatpipe. As we will see in the next section, this heatpipe is MIA so the design here now has an aesthetic value as opposed to a functional one.


The output connectors on the EVGA GTX 260 Core 216 Superclocked Edition are the same as we have seen on nearly every other stock 200-series card to date: a single HDTV connector and a pair of DVI outputs.
 

Under the Heatsink / A Look at a Changed Heatsink

Under the Heatsink



Once the heatsink is removed from the card, we are able to get a clear look at both the IHS of the 55nm G200 B2 core and the 14 ICs that make up the 896MB of memory.


The memory modules EVGA used are Samsung K4J52324QH-HJ1A units rated at 1.0ns, which works out to 1000MHz. This should give a bit of overclocking room, but not much considering the memory is already factory-clocked a bit above 2100MHz effective.
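For those curious how a 1.0ns rating relates to those clock speeds, the standard GDDR3 double-data-rate arithmetic looks like this (our own back-of-envelope numbers, not EVGA's):

```python
# A 1.0ns cycle time means the modules are rated for a 1000MHz
# actual clock; GDDR3 transfers data twice per clock, so the
# effective (marketing) figure is double that.
cycle_time_ns = 1.0
rated_mhz = 1000 / cycle_time_ns   # 1.0ns -> 1000MHz actual
rated_effective = rated_mhz * 2    # -> 2000MHz effective

card_effective = 2100              # EVGA's factory memory clock
card_actual = card_effective / 2   # -> 1050MHz actual

print(rated_effective, card_actual)  # 2000.0 1050.0
```

In other words, the card already runs about 50MHz past the modules' rated actual clock, which is why we expect only limited extra headroom.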


65nm card top / 55nm card bottom

When comparing one card to the other, the differences become abundantly apparent; not only does the 55nm card have a shim around the IHS but the PCB itself is slightly longer by a few millimeters and there is a distinct “bulge” in it closer to the rear. There are also noticeable differences in the power distribution and voltage regulation sections due to the different electrical needs of the new core.

For those of you wondering: the mounting holes around the core are the same on the new card as on the outgoing product. Therefore, any compatible heatsinks or non-full-coverage water blocks should fit without a problem. However, full-cover water blocks will not be compatible due to the different layout of the VRMs.


A Look at a Changed Heatsink



65nm card bottom / 55nm card top

It seems that even though the EVGA GTX 260 Core 216 Superclocked Edition has a 55nm core, the heatsink is still designed to cope with a hell of a lot of heat. This really drives home the point that the huge amounts of heat generated by the 65nm G200 core were not due to its manufacturing process but rather the fact that Nvidia packed a massive number of transistors into one small space.

Basically, this design uses copper heatpipes to move the heat away from the core towards a bank of aluminum fins which are used to disperse the heat. This process is aided with the airflow from an 80mm fan sucking in cool air and pushing it over these fins.


65nm card bottom / 55nm card top

Now comes the interesting part: while the heatsinks from the 65nm GTX 260 and the 55nm card may look the same from the outside, they are quite a bit different from one another, with the 55nm card’s heatsink getting the short end of the stick. The most noticeable difference is the copper contact plate, which is quite a bit smaller for the 55nm core even though the IHS of the new card is the same size as the outgoing card’s. In the upper left corner near the contact plate, you can also see that the new heatsink leaves a piece of copper heatpipe exposed.

Most of the differences can be seen in the top-down shot. As you can see, the heatpipe which touches the side of the heatsink and continues through the bottom of the cooling fins and then makes its way up and under the fan is MIA on the 55nm card. The aluminum cooling fin assembly is also smaller on the new card since it does not extend as far past the heatpipes as the one cooling the 65nm card. All in all it seems that the new heatsink is simply a castrated version of the older one which should prove to be interesting when we run our temperature testing.
 

Test System & Setup

System Used

Processor: Intel Core 2 Quad Extreme QX9770 @ 3.852GHz
Memory: G.Skill 2x 2GB DDR2-1000 @ 1052MHz DDR
Motherboard: ASUS P5E Deluxe X48
Disk Drive: Pioneer DVD Writer
Hard Drive: Hitachi Deskstar 320GB SATAII
Fans: 2X Yate Loon 120mm @ 1200RPM
Power Supply: Corsair HX1000W
Monitor: Samsung 305T 30” widescreen LCD
OS: Windows Vista Ultimate x64 SP1


Graphics Cards:

Palit HD 4870 X2
Sapphire HD 4870 1GB
Palit HD 4870 512MB
EVGA GTX 260 Core 216 Superclocked 55nm
EVGA GTX 280
EVGA GTX 260 Core 216 Superclocked
BFG GTX 260
EVGA 9800 GTX+


Drivers:

Nvidia 180.48 WHQL
ATI 8.12 WHQL


Applications Used:

Call of Duty: World at War
Crysis: Warhead
X3: Terran Conflict
Dead Space
Left 4 Dead
Far Cry 2
Fallout 3
Need for Speed Undercover


*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 4 benchmark runs

- All game-specific methodologies are explained above the graphs for each game
 

Call of Duty: World at War


To benchmark this game, we played through 10 minutes of the second mission (Little Resistance) starting from right after the player calls in the rocket strike on the enemy positions on the beach. This was benchmarked using FRAPS.

1680 X 1050





2560 X 1600



 
