
Nvidia GeForce GTX 275 896MB Review


SKYMTL
HardwareCanuck Review Editor








When an unmarked box shows up at my door it is usually one of two things: my neighbour’s new copy of Debbie Does Dallas: The Ultimate Edition or a brand new computer component that hasn’t been released yet. While receiving either is usually a welcome surprise, we will have to put a discussion about Debbie’s impressive exploits on the back burner since this time the box’s cargo was a new video card from Nvidia: the GTX 275.

The history behind Nvidia’s answer to the ATI HD 4890 has understandably attracted much less rumour-mongering than past launches. This is mostly due to the fact that the card itself seems to be a quick knee-jerk reaction to ATI’s assault on the relatively underappreciated market between the $225 and $300 (USD) price brackets. With all of the price cutting of late, both the GTX 260 216 and the HD 4870 1GB have fallen below the $200 USD mark, which means a substantial gap has suddenly opened up between these cards and the $350 GTX 285. ATI chose to attack this market with what is essentially an overclocked version of their HD 4870 1GB which has been renamed the HD 4890. Meanwhile, Nvidia couldn’t be seen as resting on their backsides so the GTX 275 896MB was quickly hatched.

What we have here is an engineering version of the GTX 275 896MB but from what we have heard, Nvidia will allow their board partners to design their own cards from the ground up. This will probably result in quite a few variations when it comes to framebuffer size, clock speeds, power consumption and even thermal characteristics. Unfortunately, this can also lead to some corner cutting (read: penny pinching) so Nvidia will hopefully implement a certification system whereby these “custom” cards are checked to see if they meet specifications. According to various manufacturers we spoke to, they will also be releasing reference-based cards with some (EVGA, XFX and BFG) introducing overclocked versions as well. Stay tuned since we already have some of these in the review pipeline.

In a nutshell, the GTX 275 is basically one half of a GTX 295 with slightly higher core, shader and memory clocks. Performance-wise it is supposed to sit somewhere between the GTX 260 216 and the GTX 285, even though looking at the specifications on a later page will give you the distinct impression that framerates may be closer to those of the GTX 285. If that turns out to be true, its price of around $250 USD could prove to be the undoing of Nvidia’s flagship single-chip card.

Enough marketing-speak; let’s talk reality for a second or two. First of all, even though Nvidia will only announce their “official” price a day before launch, figures of $250 USD have been passed around. While that price is highly competitive against the HD 4890, we are anxious to see whether it holds up in reality considering Canadian pricing will probably be between $360 and $380 according to sources. This may change as cards become available so stay tuned for more news on that front. Also, contrary to some reports, we have been told by both AIBs and various local retailers that they expect stock of these cards to be available at North American retail by April 9th with smaller shipments trickling in ahead of that.

We hate paper launches as much as the next guy but it looks like the GTX 275 could be a serious contender against the ATI HD 4890. Let’s stop the intro now so we can dive into this review and have a good look at Nvidia’s new card.

GTX275-12.jpg
 


Nvidia GTX 275 896MB Specifications


GTX275-11.jpg

We mentioned in the introduction that the GTX 275 is nothing more than a single-GPU version of the popular GTX 295. To a certain extent that is true considering it uses the same framebuffer size, bus width and 55nm core found on each individual PCB of the GTX 295. On the other hand, the actual clock speeds are increased since Nvidia didn’t have to worry about the thermal load of two GPUs as they did with the GTX 295. When seen in relation to the GTX 260 216, it is pretty evident that the GTX 275’s performance should greatly outstrip even overclocked versions of that card.

However, the more interesting aspect of the specifications is how they relate to the GTX 280 and GTX 285. With the same number of Stream Processors as the higher-end cards plus higher speeds than the outgoing GTX 280, the potential for class-leading performance definitely seems to be there in spades. The only minor details that could hold this card back are its narrower memory bus, lower ROP count and smaller framebuffer. It should be interesting to see if these aspects show performance differences across the entire spectrum or only at the higher end of the IQ settings we use in our testing.
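Of those three handicaps, the narrower bus is the easiest to put a number on. Below is a rough back-of-the-envelope sketch (plain C++ that also compiles as CUDA host code): the GTX 275’s 448-bit / 1134Mhz memory setup is the reference figure we examine later in this review, while the GTX 285’s 512-bit / 1242Mhz configuration is its commonly quoted specification, so treat both as assumptions rather than official numbers.

```cpp
#include <cstdio>

// GDDR3 is double data rate: bytes/sec = (bus width / 8) * clock * 2
double gddr3_bandwidth_gbs(int bus_width_bits, double clock_mhz) {
    return (bus_width_bits / 8.0) * (clock_mhz * 1e6) * 2.0 / 1e9;
}

int main() {
    // Clocks below are assumed reference figures, not official specs
    printf("GTX 275 (448-bit @ 1134MHz): %.0f GB/s\n",
           gddr3_bandwidth_gbs(448, 1134.0));   // ~127 GB/s
    printf("GTX 285 (512-bit @ 1242MHz): %.0f GB/s\n",
           gddr3_bandwidth_gbs(512, 1242.0));   // ~159 GB/s
    return 0;
}
```

In other words, the GTX 275 gives up roughly a fifth of the GTX 285’s raw memory bandwidth, which is why we expect any gap to show up mostly at high resolutions and IQ settings.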

GTX275-10.jpg

Honestly, the GTX 275’s placement is a bit odd and it looks like Nvidia may have outdone themselves when it came to competing with the HD 4890. If you look closely at the specifications, it is pretty apparent that this card is within spitting distance of the GTX 285 1GB yet far out in front of the GTX 260 216. This could mean some overlap when it comes to performance but more importantly, will the GTX 285 suffer in the long run? Don’t get me wrong, if the GTX 275 can keep up with the GTX 285 then customers will benefit in the short term, but where does that leave the rest of Nvidia’s lineup? Only time will tell but we will have to wait until we get to the benchmarks for the real lowdown.
 


The GT200-series Architecture


The GT200-series represents Nvidia’s first brand new architecture since the G80 launched all the way back in November of 2006. In human years that may not seem like a long time but in computer years it was an eternity.

Even though these new cards are still considered graphics cards, the GT200 architecture has been built from the ground up to serve emerging applications which can use parallel processing. These applications are specifically designed to take advantage of the massive potential that comes with the inherently parallel nature of a graphics card’s floating point vector processors. To accomplish this, Nvidia has released CUDA, which we will be talking about in the next section.

On the graphics processing side of things, the GT200-series chips are second-generation DX10 parts which, unlike some ATI cards, do not support DX10.1, yet they still promise to open a whole new realm of graphics capabilities. Nvidia’s mantra in the graphics processing arena is to move us away from the photo-realism of the last generation of graphics cards into something they call Dynamic Realism. For Nvidia, Dynamic Realism means that not only is a character rendered in photo-real definition but said character interacts realistically with a photo-real environment as well.

To accomplish all of this, Nvidia knew that they needed a serious amount of horsepower and to this end have released what is effectively the largest, most complex GPU to date with 1.4 billion transistors. To put this into perspective, the original G80 core had about 686 million transistors. Let’s take a look at how this all fits together.

GTX280-80.jpg

Here we have a basic die shot of the GT200 core which shows the layout of the different areas. There are four sets of processor cores clustered into each of the four corners which have separate texture units and shared frame buffers. The processor core areas hold the individual Texture Processing Clusters (or TPCs) along with their local memory. This layout is used for both Parallel Computing and graphics rendering so to put things into a bit better context, let’s have a look at what one of these TPCs looks like.

GTX280-90.jpg

Each individual TPC consists of 24 stream (or thread) processors which are broken into three groups of eight. When you combine eight SPs plus shared memory into one unit you get what Nvidia calls a Streaming Multiprocessor (SM). Basically, a GTX 280 / 285 has ten texture processing clusters of 24 stream processors apiece for a grand total of 240 processors. On the other hand, a GTX 260 has two clusters disabled which brings its total to 192 processor “cores”. Got all of that? I hope so since we are now moving on to the different ways in which this architecture can be used.
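If the hierarchy is hard to keep straight, the arithmetic behind the shader counts is simple enough to sanity-check in a few lines. This is purely an illustrative sketch of the counts described above; the GTX 260 216’s name falls out of the same math with nine TPCs enabled.

```cpp
#include <cstdio>

int main() {
    const int sms_per_tpc = 3;  // Streaming Multiprocessors per TPC
    const int sps_per_sm  = 8;  // stream processors per SM

    printf("GTX 280/285 (10 TPCs): %d SPs\n", 10 * sms_per_tpc * sps_per_sm);  // 240
    printf("GTX 260 216 (9 TPCs):  %d SPs\n",  9 * sms_per_tpc * sps_per_sm);  // 216
    printf("GTX 260     (8 TPCs):  %d SPs\n",  8 * sms_per_tpc * sps_per_sm);  // 192
    return 0;
}
```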


Parallel Processing

GTX280-85.jpg

At the top of the architecture shot above is the hardware-level thread scheduler that manages how threads are distributed across the texture processing clusters. You will also see that each “node” has its own texture cache which is used to combine memory accesses for more efficient and higher bandwidth memory read/write operations. The “atomic” nodes work in conjunction with the texture cache to speed up memory access when the GT200 is being used for parallel processing. Basically, atomic refers to the ability to perform atomic read-modify-write operations to memory. In this mode all 240 processors can be used for high-level calculations such as a Folding @ Home client or video transcoding.
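To make “atomic read-modify-write” a little more concrete, here is a minimal, hypothetical CUDA sketch (our own illustration, not Nvidia test code) in which thousands of threads safely increment a set of shared counters. Without the atomic operation, concurrent updates to the same bin would clobber each other.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Each thread atomically bumps one of 16 bins in global memory.
__global__ void histogram(const unsigned char *data, int n, unsigned int *bins) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(&bins[data[i] % 16], 1u);  // atomic read-modify-write
}

int main() {
    const int n = 1 << 20;
    unsigned char *d_data;
    unsigned int  *d_bins;
    cudaMalloc(&d_data, n);
    cudaMalloc(&d_bins, 16 * sizeof(unsigned int));
    cudaMemset(d_data, 7, n);  // dummy input: every byte is 7
    cudaMemset(d_bins, 0, 16 * sizeof(unsigned int));

    histogram<<<(n + 255) / 256, 256>>>(d_data, n, d_bins);

    unsigned int bins[16];
    cudaMemcpy(bins, d_bins, sizeof(bins), cudaMemcpyDeviceToHost);
    printf("bin 7 = %u (expected %d)\n", bins[7], n);

    cudaFree(d_data);
    cudaFree(d_bins);
    return 0;
}
```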


Graphics Processing

GTX280-83.jpg

This architecture is primarily used for graphics processing and when it is being used as such, there is dedicated shader thread dispatch logic which controls data to the processor cores as well as setup and raster units. Other than that and the lack of Atomic processing, the layout is pretty much identical to the parallel computing architecture. Overall, Nvidia claims that this is an extremely efficient architecture which should usher in a new dawn of innovative games and applications.
 


Of Parallel Processing and CUDA



What is CUDA?

Nvidia has this to say about their CUDA architecture:

CUDA is a software and GPU architecture that makes it possible to use the many processor cores (and eventually thousands of cores) in a GPU to perform general-purpose mathematical calculations. CUDA is accessible to all programmers through an extension to the C and C++ programming languages for parallel computing.

To put that into layman’s terms, it means that we will now be able to take advantage of the massive potential offered by current GPU architectures in order to speed up certain tasks. In essence, CUDA should be able to take a task like video transcoding, which takes hours on a quad core CPU, and perform that same operation in a matter of minutes on a GPU. Not all applications can be transferred to the GPU but those that can will supposedly see an amazing jump in performance.
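For the curious, the “extension to C” really is as thin as it sounds. The hypothetical snippet below shows the two additions that matter: the __global__ keyword marks a function that runs on the GPU, and the <<<blocks, threads>>> syntax launches one copy of it per data element.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Runs on the GPU: each of the n threads scales one array element.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main() {
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);  // GPU kernel launch

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    printf("host[10] = %.1f (expected 20.0)\n", host[10]);
    return 0;
}
```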

We could go on and on about CUDA but before we go into some of the applications it can be used in, we invite you to visit Nvidia’s CUDA site: CUDA Zone - resource for C developers of applications that solve computing problems


Folding @ Home

GTX280-82.jpg

By now, many of you know what Stanford University’s Folding @ Home is since it is the most widely used distributed computing program around right now. While in the past it was only ATI graphics cards that were able to fold, Nvidia has taken up the flag as well and will be using the CUDA architecture to make this application available to their customers. From the information we have from Nvidia, a single GTX 280 graphics card could potentially take the place of an entire folding farm of CPUs.


Video Transcoding

GTX280-91.jpg

In today’s high tech world, mobile devices have given users the ability to bring their movie collections with them on the go. To this end, consumers need a quick and efficient way of transferring their movies from one device to another. From my experience, this can be a pain in the butt since it seems like every device from a Cowon D2 to an iPod needs a different resolution, bitrate and compression to look its best. Even a quad core processor can take hours to transcode a movie and that just isn’t an option for many of us who are on the go.

To streamline this process for us, Nvidia has teamed up with Elemental Technologies to offer a video transcoding solution which harnesses the power available from the GTX’s 240 processors. The BadaBOOM Media Converter they will be releasing can take a transcoding process which took up to six hours on a quad core CPU and streamline it into a sub-40 minute timeframe. This also frees up your CPU to work on other tasks.

If these promises are kept, this may be one of the most-used CUDA applications even though it will need to be purchased (pricing is not determined at this point).


PhysX Technology

GTX280-86.jpg

About two years ago there were many industry insiders who predicted that physics implementation would be the next Big Thing when it came to new games. With the release of their PhysX PPU, Ageia brought to market a stand-alone physics processor which had the potential to redefine gaming. However, the idea of buying a $200 physics card never appealed to many people and the unit never became very popular with either consumers or game developers. Fast forward to the present and Nvidia now has control over Ageia’s PhysX technology and will be putting it to good use in all their cards featuring a unified architecture. This means that PhysX suddenly has an installed base numbering in the tens of millions instead of the tiny portion who bought the original PPU. Usually, a larger number of potential customers means that developers will use a technology more often, which should lead to more titles being developed for PhysX.

Since physics calculations are inherently parallel, the thread dispatcher in the unified shader architecture is able to shunt these calculations to the appropriate texture processing cluster. This means a fine balancing act must be done since, in theory, running physics calculations can decrease the rendering performance of the GPU. However, it seems like Nvidia is working long and hard to get things balanced out properly so that turning up in-game physics will have a minimal effect on overall graphics performance.
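To see why physics is called “inherently parallel”, consider the sketch below. It is generic CUDA of our own devising (emphatically not the PhysX API): every particle in a simulation can be integrated independently, so each one maps naturally onto its own shader thread.

```cpp
#include <cuda_runtime.h>

// One GPU thread per particle: a simple Euler integration step.
// Illustrative only; real physics engines are far more involved.
__global__ void integrate(float3 *pos, const float3 *vel, float dt, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        pos[i].x += vel[i].x * dt;
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }
}

int main() {
    const int n = 100000;  // 100k particles, one thread each
    float3 *pos, *vel;
    cudaMalloc(&pos, n * sizeof(float3));
    cudaMalloc(&vel, n * sizeof(float3));
    cudaMemset(pos, 0, n * sizeof(float3));
    cudaMemset(vel, 0, n * sizeof(float3));

    integrate<<<(n + 255) / 256, 256>>>(pos, vel, 1.0f / 60.0f, n);
    cudaDeviceSynchronize();

    cudaFree(pos);
    cudaFree(vel);
    return 0;
}
```

The balancing act mentioned above comes from the fact that every TPC busy integrating particles is a TPC not shading pixels.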
 


Additional Features of the GT200 Architecture


Yes, there is more than what we have already mentioned in the last few sections when it comes to the new GTX 280 / 285 and GTX 260 cards. Nvidia has packed their new flagships with more features than you can shake a stick at so let’s go over a few of them which may impact you.


3-Way SLI

GTX-35.jpg

As multi-GPU solutions become more and more popular, Nvidia is moving towards giving consumers the option to run as many as three graphics cards together in order to increase performance to insane levels. Before the release of the 9800GTX, the only cards available for 3-way SLI were the 8800GTX and 8800 Ultra, so the GTX 280 / 285 and GTX 260 cards have now become the fourth and fifth cards to use this technology. Just be prepared to fork over some megabucks for this privilege since not only would you need God’s Own CPU but also about $1500 for a trio of 280 cards or $1000 for three 260 cards. That is a pretty bitter pill for just about anyone to swallow.


Optional Full HDMI Output

GTX-38.jpg

All GTX 280 and GTX 260 cards come with the option for full HDMI output over a DVI to HDMI adaptor. Notice we said “option”? While GT200 cards will come with an SPDIF input connector on the card itself, the board partner has to choose whether or not to include a DVI to HDMI dongle so the card can output both sound and images through an HDMI cable. Coupled with the fact that the new GTXes fully support HDCP, this feature can make the card into a multimedia powerhouse. Unfortunately, in order to keep costs down we are sure that there will be quite a few manufacturers who will see fit not to include the necessary hardware for HDMI support. With this in mind, make sure you keep a close eye on the accessories offered with the card of your choice if you want full HDMI support without having to buy a separate dongle.

To be honest with you, this strikes us as a tad odd since if we are paying upwards of $400 for a card, we would expect there to be an integrated HDMI connector a la GX2. Making the DVI to HDMI dongle optional smacks of some serious penny-pinching.


Purevideo HD

GTX-44.jpg

To put it in a nutshell, Purevideo HD is Nvidia’s video processing software that offloads up to 100% of the high definition video decoding tasks from your CPU onto your GPU. In theory, this will result in lower power consumption, better feature support for Blu-ray and HD-DVD and better picture quality.

GTX-46.jpg

In addition to dynamic contrast enhancement, Purevideo HD has a new feature called Color Tone Enhancement. This feature will dynamically increase the realism and vibrancy for green and blue colors as well as skin tones.


HybridPower

GTX-37.jpg

By far one of the most interesting features supported by the 200-series is Nvidia’s new HybridPower, which is compatible with HybridPower-equipped motherboards like the upcoming 780a and 750a units for AMD AM2 and AM2+ processors. It allows you to shift power between the integrated GPU and your card, so if you aren’t gaming you can switch to integrated graphics to save on power, noise and heat.

GTX-40.jpg
GTX-41.jpg

While we have not seen if this works, it is definitely an interesting concept since it should allow for quite a bit of flexibility between gaming and less GPU-intensive tasks. More than once I have been working in Word in the summer wishing my machine would produce less heat so I wouldn’t be roasting like a stuffed turkey. If this technology can deliver on what it promises, it would be great for people who want a high-powered graphics card by night and a word processing station by day.

GTX-42.jpg

This technology even works if you have GTX 280 / 285 or 260 cards working in SLI and once again you should (in theory) be able to shut down the two high-powered cards when you don’t need them.

GTX-43.jpg

All HybridPower-equipped motherboards come with both DVI and VGA output connectors since all video signals from both the on-board GPU and any additional graphics cards go through the integrated GPU. This means you will not have to switch the connector when turning on and off the power-hungry add-in graphics cards. All in all, this looks to be great on paper but we will have to see in the near future if it can actually work as well as it claims to. In terms of power savings, this could be a huge innovation.


Additional Power Saving Methods

GTX280-87.jpg

Other than the aforementioned HybridPower, the GT200-series cards have some other very interesting power-saving features. With dynamic clock and voltage settings, Nvidia has further been able to reduce power consumption when the system is at idle, so if you are using a program that doesn’t require the GPU to work, you don’t have to worry about it consuming copious amounts of power. The same goes for heat since as power consumption decreases so does the heat output from the core. I don’t know about you but I hate sweating like a pig while using Photoshop just because my GPU wants to dump hot air back into the room and with this feature hopefully those sweat sessions will be a thing of the past.

Additionally, Nvidia has added a power saving feature for HD decoding as well. Since the card doesn’t need full power to decode a high definition movie, voltages will be decreased from what they would be in full 3D mode which will once again result in less power draw and heat.
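Scaling clocks and voltages together pays off so handsomely because, to a first order, CMOS dynamic power scales with voltage squared times frequency. The sketch below runs that arithmetic with purely hypothetical voltage and clock numbers; they are placeholders for illustration, not measured GT200 figures.

```cpp
#include <cstdio>

// First-order CMOS dynamic power model: P is proportional to V^2 * f.
double relative_power(double volts, double mhz, double volts_3d, double mhz_3d) {
    return (volts * volts * mhz) / (volts_3d * volts_3d * mhz_3d);
}

int main() {
    // Hypothetical example: full 3D mode at 1.15V / 633MHz core versus
    // a downclocked, undervolted idle state at 0.95V / 300MHz.
    printf("idle vs 3D dynamic power: %.0f%%\n",
           100.0 * relative_power(0.95, 300.0, 1.15, 633.0));  // ~32%
    return 0;
}
```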
 


A Closer Look at the GTX 275 896MB


GTX275-1.jpg
GTX275-9.jpg

The reference sample we received came in the aforementioned brown paper box so there really isn’t too much to show other than the card itself. At 10.5”, it is the same length as other high-end Nvidia cards yet also a good inch longer than the competing ATI HD 4890.

This is a bone-stock Nvidia reference sample with what looks like the GTX 260 / GTX 280 black heatsink and it has a limited number of connectors due to its early production date.

GTX275-3.jpg
GTX275-4.jpg

The cooling system is typical of all GTX 200-series cards with a single fan being used to suck in and move cool air over an internal fin assembly before it is exhausted out the backplate. Meanwhile, the rear of the card shows us that the fan assembly is slightly vaulted in order to allow air to pass over the VRMs and capacitors.

GTX275-2.jpg

It seems like the heatsink shroud has been designed in such a way that a portion of the hot exhaust air will make its way into your case. Why Nvidia has added these small vent slots is beyond us other than the fact that they add a bit of design flair to the whole setup.

GTX275-5.jpg
GTX275-6.jpg

Connectors are identical to all single chip 55nm GTX 200-series cards with a pair of 6-pin power connectors and an S/PDIF header for audio pass through. There is also the usual GeForce logo on the side of the card as well as a double head SLI connector for triple SLI.

GTX275-8.jpg

When compared to a GTX 285, we can see that there is literally not one lick of difference between it and the GTX 275. They have the same length, same fan and same black PCB. Is there any difference under the hood? Let’s find out on the next page.
 


Under the Heatsink


GTX275-14.jpg

Once what seems like a million screws are removed from the card, the heatsink slips free and we are able to get our first look at what makes the GTX 275 tick. What we have is the usual large grey IHS placed directly over the GT200 core to better dissipate its heat, along with fourteen GDDR3 memory ICs in the typical 14 x 64MB layout that adds up to the card’s 896MB framebuffer.

GTX275-15.jpg
GTX275-17.jpg

As with all 55nm cores on GTX-200 series cards, this one has a metal retention plate placed around it to provide additional stiffness. Interestingly, there aren’t any identifiable markings on the core which normally note a product number.

The memory ICs used are Samsung’s K4J52324QH-HJ08 modules, which are rated for a 0.833ns cycle time. That works out to 1200Mhz, somewhat less than the 1250Mhz (0.8ns) parts we are used to seeing, but considering they are "only" running at 1134Mhz here, we still expect a good amount of overclocking headroom.
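For anyone wondering where those figures come from, a memory IC’s rated clock is simply the reciprocal of its cycle time. A quick sketch of the arithmetic:

```cpp
#include <cstdio>

// Rated memory clock from cycle time: f = 1 / t
int main() {
    double rated_mhz = 1.0 / 0.833e-9 / 1e6;  // 0.833ns parts -> ~1200MHz
    double stock_mhz = 1134.0;                // GTX 275 reference clock
    printf("rated %.0fMHz vs stock %.0fMHz: ~%.0f%% headroom\n",
           rated_mhz, stock_mhz, 100.0 * (rated_mhz / stock_mhz - 1.0));
    return 0;
}
```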

GTX275-18.jpg
GTX275-16.jpg

In the picture above we have the GTX 275 on top and a GTX 285 below it. Other than the differences in the memory layouts, it seems like the new GTX 275 is using a lower-end 6-phase PWM design rather than the 8-phase design used on the higher-end card. There are also additional capacitors placed near the SLI connector and a generally simpler PCB design on the new card.
 


Test System & Setup

Processor: Intel Core i7 920(ES) @ 4.0Ghz (Turbo Mode Enabled)
Memory: Corsair 3x2GB Dominator DDR3 1600Mhz
Motherboard: Gigabyte EX58-UD5
Cooling: CoolIT Boreas mTEC + Scythe Fan Controller
Disk Drive: Pioneer DVD Writer
Hard Drive: Western Digital Caviar Black 640GB
Power Supply: Corsair HX1000W
Monitor: Samsung 305T 30” widescreen LCD
OS: Windows Vista Ultimate x64 SP1


Graphics Cards:

Sapphire HD 4890 1GB (Stock)
EVGA GTX 285 (Stock)
EVGA GTX 260 216 (Stock)
GTX 275 896MB (Stock)
Sapphire HD 4870 1GB (Stock)



Drivers:

ATI 9.3 WHQL
Nvidia 182.08 WHQL
ATI 8.592.1 RC1 (HD 4890)
Nvidia 185.65 (GTX 275)


Applications Used:

3DMark Vantage
Call of Duty: World at War
Crysis: Warhead
Fallout 3
Far Cry 2
Grand Theft Auto IV
Left 4 Dead
Tom Clancy’s Hawx


*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 4 benchmark runs

- All game-specific methodologies are explained above the graphs for each game
 


3DMark Vantage


HD4890-25.jpg


HD4890-78.jpg


HD4890-76.jpg


3DMark Vantage sees Nvidia’s cards take a commanding lead in the Overall category due to their use of PhysX within the CPU tests. However, when the GPU scores are calculated, the GTX 275 is still quite a ways ahead of ATI’s HD 4890 1GB.
 


Call of Duty: World at War


HD4890-27.jpg

To benchmark this game, we played through 10 minutes of the third mission (Hard Landing) starting from when the player first enters the swamp, through the first bunker until the final push onto the airfield. This was benchmarked using FRAPS.

1680 x 1050

HD4890-34.jpg


HD4890-45.jpg


1920 x 1200

HD4890-56.jpg


HD4890-67.jpg


2560 x 1600

HD4890-77.jpg


HD4890-88.jpg


In some tests the GTX 275 falters a bit under pressure from the HD 4890 1GB but it remains very competitive, especially when you look at how it performs against the much more expensive GTX 285. Even though ATI claims to have more efficient AA processing, it seems that the GTX 275 is able to surge ahead whenever AA is turned on in this game. All in all though, the race in this game is just too close to call.
 
