
EVGA GeForce GTX 465 1GB Superclocked Review


By SKYMTL, HardwareCanuck Review Editor
The months following NVIDIA’s release of the GTX 400 series have been interesting to say the least. At first, retail channels were starved of both the GTX 470 and GTX 480 due to either popularity or supply issues (we’ll probably never know which) but the situation today is vastly improved, with both cards available at most retailers. Pricing hasn’t changed much either, with both NVIDIA and ATI sticking to their guns and continuing on the same course as before. This is actually quite interesting since many expected ATI to drop prices on the HD 5870 and HD 5850 once the competition launched, but from where we are standing their cards remain extremely popular even without price cuts. All that was left was for NVIDIA to flesh out the rest of their Fermi-based lineup.

Today’s release of the GTX 465 is actually a huge step in the right direction for NVIDIA because they finally have a DX11 part that can compete in the highly popular sub-$300 market. We mentioned in the past that ATI is extremely competitive in the high-end $300+ category and at nearly every one of the sub-$200 price points. However, the decidedly lackluster $240 HD 5830 is the only card bridging the veritable chasm between $200 and $300. Granted, some lower-end HD 5850 cards have been known to retail for slightly less than $300 when on sale, but for the most part NVIDIA had a $100 window to work with. With a suggested price of around $279, the GTX 465 is trying to thread the needle between the HD 5830 and the HD 5850.

Interestingly enough, NVIDIA isn’t “officially” launching this card themselves but has instead decided to let their board partners take point and design their own versions of the GTX 465 around a basic set of specifications. This recipe worked with varying degrees of success for ATI and their partners on the HD 5830 launch but we have to remember that things can go awry if an AIB decides to “go cheap” on components. For the most part however, the GTX 465s we have seen all seem to be using reference GTX 470 PCBs. In our opinion this will add to the cost of what should be a value-oriented product but at least the platform itself is known to be more than durable enough. We’ll get into the specs a bit further in the next section.

We know that many of you are scratching your heads over the name of this card since NVIDIA usually reserves the “xx5” moniker for any updated cards that may be launched after the first series has run its course. In this case they have decided to skip the obvious GTX 460 name which does seem to imply there is some other card waiting in the wings. We’ll let the usual conspiracy theorists chew on that one for a little while longer though.

In this particular review we will be looking at the EVGA GTX 465 1GB Superclocked Edition which not only packs increased clock speeds but also comes with EVGA’s lifetime warranty and 90-day Step-Up program. As you read this, you should be able to find this card in stock at retailers for $15 to $20 more than a reference GTX 465.

For the time being, the GTX 465 should whet the appetites of NVIDIA’s faithful who couldn’t stomach paying $350 for a GTX 470. Whether or not it will satisfy this craving and successfully trounce not one but two ATI products is exactly what we are about to find out.

EVGA-GTX465-15.jpg
 

In-Depth GF100 Architecture Analysis (Core Layout)

The first stop on the whirlwind tour of the GeForce GF100 is an in-depth look at what makes the GPU tick as we peer into the core layout and how NVIDIA has designed it.

Many people incorrectly believed that the Fermi architecture was primarily designed for GPU computing applications and very little thought was given to the graphics processing capabilities. This couldn’t be further from the truth since the computing and graphics capabilities were determined in parallel and the result is a brand new architecture tailor made to live in a DX11 environment. Basically, NVIDIA needed to apply what they had learned from past generations (G80 & GT200) to the GF100.

GF100-5.jpg

What you are looking at above is the heart and soul of any GF100 card: the core layout. While we will go into each section in a little more detail below, from the overall view we can see that the main functions are broken up into four distinct groups called Graphics Processing Clusters (GPCs), which are in turn broken down into individual Streaming Multiprocessors (SMs), raster engines and so on. To make matters simple, think of it this way: in its highest-end form, a GF100 has four GPCs, each equipped with four SMs, for a total of 16 Streaming Multiprocessors. Within each of these SMs are 32 CUDA cores (or shader processors from past generations) for a grand total of 512 cores. However, the current GTX 480 and GTX 470 cards make do with slightly fewer cores (480 and 448 respectively) while the GTX 465 cuts things down even more.
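Since the hierarchy above boils down to simple multiplication, here is a quick Python sketch of the counts. The per-card SM figures are just the quoted core counts divided by 32; the GTX 465’s configuration is covered in detail later in this review.

```python
# Sanity check of the GF100 hierarchy: GPCs -> SMs -> CUDA cores.
CORES_PER_SM = 32
SMS_PER_GPC = 4
GPCS = 4

full_sms = GPCS * SMS_PER_GPC                                  # 16 SMs in a full GF100
print("Full GF100:", full_sms * CORES_PER_SM, "CUDA cores")    # 512

for name, enabled_sms in (("GTX 480", 15), ("GTX 470", 14), ("GTX 465", 11)):
    print(f"{name}: {enabled_sms} SMs -> {enabled_sms * CORES_PER_SM} CUDA cores")
# GTX 480: 480 cores, GTX 470: 448 cores, GTX 465: 352 cores
```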

On the periphery of the die are the GigaThread Engine and the memory controllers. The GigaThread Engine performs the somewhat thankless duty of reading the CPU’s commands over the host interface and fetching data from the system’s main memory. The data is then copied into the graphics card’s framebuffer before being passed along to the designated engine within the core. Meanwhile, in its fullest form the GF100 incorporates six 64-bit GDDR5 memory controllers for a combined 384-bit interface. The massive amount of bandwidth created by a 384-bit GDDR5 interface provides extremely fast access to the card’s memory and should eliminate the bottlenecks seen in past generations.

GF100-6.jpg

Each Streaming Multiprocessor holds 32 CUDA cores along with 16 load / store units, allowing addresses to be calculated for 16 threads per clock. Above these sit the warp schedulers and their associated dispatch units, which issue groups of 32 concurrent threads (called warps) to the cores.

Finally, closer to the bottom of the SM are the shared memory / L1 cache, the PolyMorph Engine and the four texture units. In total, the maximum number of texture units in this architecture is 64, which may come as a surprise considering the older GT200 architecture supported up to 80 TMUs. However, NVIDIA has implemented a number of improvements in the way the architecture handles textures, which we will go into in a later section. Suffice it to say that the texture units are now integrated into each SM rather than having multiple SMs address a common texture cache.

GF100-7.jpg

Independent of the SM structure are six dedicated partitions of eight ROP units each, for a total of 48 ROPs as opposed to the 32 units of the GT200 architecture. Also unlike the GT200 layout, instead of tying directly into the memory bus the ROPs interface with the shared L2 cache, which gives them quick access to stored data.
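To tie the per-unit figures from this page together, a quick back-of-the-envelope check using the article’s own numbers:

```python
# Fixed-function totals for a full GF100, from the per-SM and per-partition figures above.
FULL_SMS = 16
TMUS_PER_SM = 4
ROP_PARTITIONS = 6
ROPS_PER_PARTITION = 8
CONTROLLER_WIDTH_BITS = 64       # one 64-bit GDDR5 controller per partition

print("Texture units:", FULL_SMS * TMUS_PER_SM)                        # 64 (GT200 had up to 80)
print("ROPs:", ROP_PARTITIONS * ROPS_PER_PARTITION)                    # 48 (GT200 had 32)
print("Memory bus:", ROP_PARTITIONS * CONTROLLER_WIDTH_BITS, "bits")   # 384-bit
```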
 

Efficiency Through Caching

There are benefits to having dedicated L1 and L2 caches: the approach not only helps with GPGPU computing but also keeps draw call data on-chip instead of passing it off to the memory on the graphics card. This is supposed to drastically streamline rendering efficiency, especially in situations with a lot of higher-level geometry.

GF100-9.jpg

Above we have an enlarged section of the cache and memory layout within each SM. To put things into perspective, a Streaming Multiprocessor has 64KB of shared, programmable on-chip memory that can be configured in one of two ways. It can either be laid out as 48 KB of shared memory with 16 KB of L1 cache, or as 16 KB of Shared memory with 48 KB of L1 cache. However, when used for graphics processing as opposed to GPGPU functions, the SM will make use of the 16 KB L1 cache configuration. This L1 cache is supposed to help with access to the L2 cache as well as streamlining functions like stack operations and global loads / stores.

In addition, each texture unit now has its own high efficiency cache as well which helps with rendering speed.

GF100-8.jpg

Through the L2 cache, NVIDIA is able to keep most rendering data, such as tessellation, shading and rasterization results, on-die instead of going out to the framebuffer (DRAM), which would slow down the process. The cache acts as a bandwidth amplifier and alleviates the memory bottlenecks that normally occur when doing multiple reads and writes to the framebuffer. In total, the GF100 has 768KB of L2 cache which is dynamically load balanced for peak efficiency.
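For reference, here is the on-chip memory picture in a few lines of Python. The per-partition L2 figure is our own division of the 768KB total across the six ROP / memory partitions and assumes an even split.

```python
# Per-SM configurable memory (64KB total) and the chip-wide L2 pool.
SM_ON_CHIP_KB = 64
graphics_split = {"shared": 48, "L1": 16}   # configuration used for graphics workloads
compute_split  = {"shared": 16, "L1": 48}   # alternative GPGPU configuration
assert sum(graphics_split.values()) == sum(compute_split.values()) == SM_ON_CHIP_KB

L2_TOTAL_KB = 768
PARTITIONS = 6
print("L2 per ROP/memory partition:", L2_TOTAL_KB // PARTITIONS, "KB")   # 128 KB, assuming an even split
```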

The L1 and L2 caches can also service loads and stores to memory and pass data from engine to engine so that nothing has to move off-chip. The trade-off is that doing geometry processing in a parallel, scalable way without touching DRAM bandwidth takes up a significant amount of die area.

GF100-11.jpg

Compared with the new GF100, the previous architecture’s caching is inferior in every way. The GT200 only used its cache for textures and featured a read-only L2 structure, whereas the new GPU’s L2 is fully writable and caches everything from vertex data to textures to ROP data and nearly everything in between.

By contrast, with their Radeon HD 5000-series, ATI dumps all of the data from the geometry shaders to the memory and then pulls it back into the core for rasterization before output. This causes a drop in efficiency and therefore performance. Meanwhile, as we discussed before, NVIDIA is able to keep all of their functions on-die in the cache without having to introduce memory latency into the equation and hogging bandwidth.

So what does all of this mean for the end-user? Basically, it means vastly improved memory efficiency since less bandwidth is being taken up by unnecessary read and write calls. This can and will benefit the GF100 in high resolution, high IQ situations where lesser graphics cards’ framebuffers can easily become saturated.
 

A Closer Look at the Raster & PolyMorph Engines

In the last few pages you may have noticed mention of the PolyMorph and Raster Engines, which are used for highly parallel geometry processing. NVIDIA has effectively grouped all of the fixed function stages into these two engines, which is one of the main reasons drastically improved geometry rendering is being touted for GF100 cards. In previous generations these functions sat outside the core processing stages (SMs); NVIDIA has now brought them inside the core to ensure proper load balancing. This should help considerably with tessellated scenes that feature extremely high triangle counts.

We should also note here and now that the GTX 400 series’ “core” clock numbers refer to the speed at which these fixed function stages run.

GF100-12.jpg

Within the PolyMorph engine there are five stages from Vertex Fetch to the Stream Output which each process data from the Streaming Multiprocessor they are associated with. The data then gets output to the Raster Engine. Contrary to past architectures which featured all of these stages in a single pipeline, the GF100 architecture does all of the calculations in a completely parallel fashion. According to our conversations with NVIDIA, their approach vastly improves triangle, tessellation, and Stream Out performance across a wide variety of applications.

In order to further speed up operations, data goes from one of 16 PolyMorph engines to another and uses the on-die cache structure for increased communication speed.

GF100-13.jpg

After the PolyMorph engine is done processing data, it is handed off to the Raster Engine’s three pipeline stages that pass off data from one to the next. These Raster Engines are set up to work in a completely parallel fashion across the GPU for quick processing.

GF100-14.jpg

Both the PolyMorph and Raster Engines are distributed throughout the architecture to increase parallelism, though each is distributed in a different way. In total there are 16 PolyMorph Engines, one incorporated into each SM, while the four Raster Engines are placed one per GPC. This makes each of the four Graphics Processing Clusters essentially a dedicated, individual GPU within the core, allowing for highly parallel geometry rendering.
 

Image Quality Improvements

Even though additional geometry could end up adding to the overall look and “feel” of a given scene, methods like tessellation and HDR lighting still require accurate filtering and sampling to achieve high rendering fidelity. For that, you need custom anti-aliasing (AA) modes as well as vendor-specific anisotropic filtering (AF) and everything in between. As the power of GPUs rapidly outpaces the ability of DX9 and even DX10 games to feed them with information, a new focus has been turned to image quality adjustments. These adjustments do tend to impact upon framerates but with GPUs like the GTX 400 series there is much less of a chance that increasing IQ will result in a game becoming unplayable.


Quicker Jittered Sampling Techniques

Many of you are probably scratching your head and wondering what in the world jittered sampling is. Basically, it is a shadow sampling method that has been around since the DX9 days which allows for realistic, soft shadows to be mapped by the graphics hardware. Unfortunately, this method is extremely resource hungry so it hasn’t been used very often regardless of how good the shadows it produces may look.

GF100-20.jpg

In the picture above you can see what happens with shadows which don’t use this method of mapping. Basically, for a shadow to look good it shouldn’t have a hard, serrated edge.

GF100-21.jpg

Soft shadows are the way to go and while past generations of hardware were able to do jittered sampling, they just didn’t have the resources to do it efficiently. Their performance was adequate with one light source in a scene but when asked to produce soft shadows from multiple light sources (in a night scene for example), the framerate would take an unacceptably large hit. With the GF100, NVIDIA had the opportunity to vastly improve shadow rendering and they did just that.

GF100-22.jpg

To do quicker, more efficient jittered sampling, NVIDIA worked with Microsoft to implement hardware support for Gather4 in DX11. Instead of issuing four separate texture fetches, the hardware can now specify a single coordinate plus an offset and pull in four texels with one instruction. This significantly improves the shadow rendering efficiency of the hardware while still working as a standard Gather4 instruction if need be.
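A small illustration of why this matters for jittered shadow sampling. The 16-sample kernel below is purely hypothetical; the point is simply that four texel reads collapse into one fetch instruction.

```python
# Hypothetical jittered shadow kernel: compare fetch counts with and without Gather4.
jittered_samples_per_pixel = 16                          # hypothetical kernel size
fetches_without_gather4 = jittered_samples_per_pixel     # one fetch per texel
fetches_with_gather4 = jittered_samples_per_pixel // 4   # four texels per fetch

print("Texture fetches per shaded pixel:",
      fetches_without_gather4, "->", fetches_with_gather4)   # 16 -> 4
```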

With this feature turned on, NVIDIA expects a 200% improvement in shadow rendering performance when compared to the same scene being rendered with their hardware Gather4 turned off.
 

Image Quality Improvements (pg.2)


32x CSAA Mode for Improved AA

In our opinion, the differences between the AA modes above 8x are minimal at best unless you are rendering thin items such as grass, a chain-link fence or a distant railing. With the efficiency of the DX11 API and the increased horsepower of cards like the GTX 400 series, it is now possible to model vegetation and the like with actual geometry. However, many developers will continue using the billboarding and alpha texturing methods from the DX9 days, which allow for dynamic vegetation but leave it looking jagged and under-rendered. In such cases anti-aliasing can be applied, but high levels of AA are needed in order to properly render these items. This is why NVIDIA has implemented their new 32x Coverage Sample AA.

GF100-23.jpg

In order to accurately apply AA, three things are needed: coverage samples, color samples and levels of transparency. To put this into context, the GT200 had 8 color samples and 8 coverage samples for a total of 16 samples on edges, yet this allowed for only 9 levels of transparency. The result was edges that still looked jagged and poorly blended, so dithering was used to mask the banding.

The GF100 on the other hand features 24 coverage samples and 8 color samples for a total of 32 samples (hence the 32x CSAA moniker). This layout also offers 33 levels of transparency for much smoother blending of the anti-aliased edges into the background and increased performance as well.
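The arithmetic behind these two configurations, using the figures quoted above (note that 33 is one more than GF100’s 32 total samples, while GT200’s 9 is one more than its 8 color samples):

```python
# Sample counts and transparency levels for the two CSAA configurations described above.
modes = {
    "GT200 (8 color + 8 coverage)":           {"color": 8, "coverage": 8,  "levels": 9},
    "GF100 32x CSAA (8 color + 24 coverage)": {"color": 8, "coverage": 24, "levels": 33},
}
for name, m in modes.items():
    total = m["color"] + m["coverage"]
    print(f"{name}: {total} samples on edges, {m['levels']} levels of transparency")
```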

GF100-24.jpg

With increased efficiency comes decreased overhead when running complex AA routines and NVIDIA specifically designed the GF100 to cope with high IQ settings. Indeed, on average this new architecture only loses about 7% of its performance when going from 8x AA to 32x CSAA.
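As a worked example of that ~7% average penalty (the 60 fps starting point here is hypothetical):

```python
# Hypothetical framerate hit when stepping up from 8x AA to 32x CSAA on GF100.
fps_8x = 60.0
fps_32x = fps_8x * (1 - 0.07)
print(f"{fps_8x:.1f} fps at 8x AA -> ~{fps_32x:.1f} fps at 32x CSAA")   # ~55.8 fps
```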


TMAA and CSAA: Hand in Hand

No matter how much AA you apply in DX9, there will still invariably be some issues with distant, thin objects that are less than a pixel wide due to the method older APIs use to render these. Transparency Multisample AA (TMAA) allows the DX9 API to convert shader code to effectively use alpha to coverage routines when rendering a scene. This, combined with CSAA, can greatly increase the overall image quality.

GF100-25.jpg

It may be hard to see in the image above but without TMAA, the railing in the distance would have its lines shimmer in and out of existence due to the fact that the DX9 API doesn’t have the tools necessary to properly process sub-single pixel items. It may not impact upon gaming but it is noticeable when moving through a level.

GF100-26.jpg

Since coverage samples are used as part of GF100’s TMAA evaluation, much smoother gradients are produced. TMAA will help in instances such as this railing and even with the vegetation examples we used in the last section.
 

Touching on NVIDIA Surround / 3D Vision Surround

During CES, NVIDIA unveiled their answer to ATI’s Eyefinity multi-display capability: 3D Vision Surround and NVIDIA Surround. These two “surround” technologies share common ground, but their prerequisites and capabilities sit at opposite ends of the spectrum. We should also mention straight away that both will become available soon and will support bezel correction from the outset.


NVIDIA Surround

GF100-45.jpg

Not to be confused with NVIDIA’s 3D Vision Surround, the standard Surround moniker allows three displays to be fed concurrently via an SLI setup. Yes, you need an SLI system in order to run three displays at the same time, but the good news is that NVIDIA Surround works on older GTX 200-series cards as well as all current GTX 400-series parts, including the GTX 465. This method can display information across three 2560 x 1600 screens and allows a mixture of monitors to be used as long as they all support the same resolution.

The reason SLI is needed is that both the GT200-series and the GF100 / 400-series cards can only have a pair of display outputs active at the same time. In addition, if you want to drive three monitors at reasonably high detail levels you’ll need some serious horsepower, and that’s exactly what a dual or triple card system gives you.

This does tend to leave out the people who may want to use three displays for professional applications but that’s where NVIDIA’s Quadro series comes into play.


3D Vision Surround

GF100-46.jpg

We all know by now that immersive gaming has been taken to new levels by both ATI, with their HD 5000-series’ ability to game on up to three monitors at once, and NVIDIA’s own 3D Vision which offers stereoscopic viewing within games. What has now happened is a combining of these two techniques under the 3D Vision Surround banner, which brings stereo 3D to surround gaming.

This is the mac-daddy of display technologies and it is compatible with SLI setups of GTX 400 cards and older GT200-series. The reasoning behind this is pretty straightforward: you need a massively powerful system for rendering and outputting what amounts to six high resolution 1920 x 1080 images (two to each of the three 120Hz monitors). Another thing you should be aware of is the fact that all three monitors MUST be of the same make and model in order to ensure uniformity.
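Some rough pixel-count arithmetic puts the horsepower requirement in perspective. The resolutions and image counts come from the paragraphs above; the 60 updates per second per eye (half of the 120Hz refresh) is our assumption.

```python
# Pixel throughput for the two surround modes described above.
surround_pixels = 3 * 2560 * 1600        # NVIDIA Surround: three 2560x1600 displays
stereo_pixels = 6 * 1920 * 1080          # 3D Vision Surround: two 1080p images per monitor
stereo_pixels_per_second = stereo_pixels * 60   # assumed 60 updates per eye per second

print(f"NVIDIA Surround: {surround_pixels / 1e6:.1f} Mpixels per frame")                 # ~12.3
print(f"3D Vision Surround: {stereo_pixels / 1e6:.1f} Mpixels per stereo frame")         # ~12.4
print(f"3D Vision Surround: ~{stereo_pixels_per_second / 1e6:.0f} Mpixels per second")   # ~746
```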

All in all, we saw NVIDIA’s 3D Vision Surround in action and while it was extremely impressive to say the least, we can't give any more thoughts about it since more testing on our part must be done.
 

NVIDIA’s GTX 465; Specs, Background & Market Positioning

The premise behind the GTX 465 is actually twofold: to increase NVIDIA’s stable of marketable DX11 graphics cards and to ensure that cores which don’t pass the binning necessary to make it into the GTX 470 or GTX 480 are still put to use. If rumors are to be believed, NVIDIA pays their chip foundry (TSMC) by the wafer regardless of how many usable cores each piece of silicon holds. This means there’s a need to make the most out of each wafer by using as many of the dies as possible. Since each wafer will always hold some dies with only a portion of their CUDA cores working, those dies are used for lower-end cards like the GTX 470 and GTX 465. Even the flagship GTX 480 was released with one Streaming Multiprocessor (32 CUDA cores) disabled because yields of 512-core parts weren’t high enough for a widespread launch. Knowing this, let’s check out what the GTX 465 is saddled with.
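A purely hypothetical illustration of that salvage-binning logic; none of the per-wafer or cost numbers below are real. The point is simply that once the foundry is paid per wafer, every die that can be sold in some bin lowers the effective cost per sellable GPU.

```python
# Hypothetical wafer economics: more sellable bins -> lower cost per sellable die.
wafer_cost = 5000.0                        # hypothetical cost paid to the foundry
bins = {"GTX 480 (480 cores)": 25,         # hypothetical dies per wafer in each bin
        "GTX 470 (448 cores)": 20,
        "GTX 465 (352 cores)": 25}

without_salvage = bins["GTX 480 (480 cores)"]      # only the top bin is sold
with_salvage = sum(bins.values())                  # partially disabled dies are sold too
print(f"Cost per sellable die: ${wafer_cost / without_salvage:.0f} -> ${wafer_cost / with_salvage:.0f}")
```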

EVGA-GTX465-68.jpg

When it comes to core and graphics clock speeds, the GTX 465 is a spitting image of its bigger brother, the GTX 470. However, that’s where the similarities stop since the newest NVIDIA card makes do with 1GB of GDDR5 clocked at a mere 3.206GHz (QDR). Coupled with a 256-bit memory bus that is narrower than the 320-bit one gracing the 470, the GTX 465 is only able to muster 102.6GB/s of memory bandwidth. To give you some perspective, the older GTX 275 with its GDDR3 interface was good for 127GB/s while ATI’s HD 5850 and HD 5830 have 128GB/s of bandwidth thanks to their GDDR5 clocked at 4GHz. The result could be anything from low framerates in high resolution scenarios to lackluster performance in games requiring large amounts of quickly accessed texture memory. On the other hand, most of the gamers this card targets will likely never play on anything above a 24” monitor anyway.
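The bandwidth figures above fall out of a simple formula: effective transfer rate multiplied by bus width in bytes.

```python
# Memory bandwidth = effective transfer rate (GT/s) x bus width (bytes).
def bandwidth_gbs(effective_gtps, bus_width_bits):
    return effective_gtps * bus_width_bits / 8

print(f"GTX 465 (3.206 GT/s, 256-bit): {bandwidth_gbs(3.206, 256):.1f} GB/s")        # ~102.6
print(f"HD 5850 / HD 5830 (4.0 GT/s, 256-bit): {bandwidth_gbs(4.0, 256):.1f} GB/s")  # 128.0
```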

EVGA-GTX465-12.jpg

Graphical representation of the GTX 465 core architecture.

Back when we first previewed the architecture, we stated that NVIDIA would likely disable two Streaming Multiprocessors (64 CUDA cores and 8 TMUs) every time they wanted to create a new product in order to keep some performance differentiation between market segments. This hasn’t happened and the core layout of the GTX 465 illustrates how you can go from a full 512 core GPU to something altogether different. Instead of going with the predicted 384 CUDA cores and 48 TMUs for this new card, NVIDIA took out the proverbial trimmers and cut out an additional SM or 32 cores. The result is 11 SMs spread over three Graphics Processing Clusters for a total of 352 cores and 44 Texture Units. Luckily, the Raster Engine features dynamic load balancing so this odd number on the third GPC won’t be an issue.

Since the ROP, Cache and memory controller array scales separately from the SMs and GPCs, NVIDIA was a bit less liberal in their cutting here. What we are left with are four 64-bit GDDR5 memory controllers for a 256-bit interface along with 32 ROPs and 256KB of L2 Cache. The cache itself should partially alleviate some of the performance drop-off this card experiences because of its relatively low memory bandwidth.
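The scaling described in the last two paragraphs is easy to verify from the per-unit figures quoted earlier in the review:

```python
# GTX 465 configuration derived from per-SM and per-partition figures.
CORES_PER_SM, TMUS_PER_SM = 32, 4
enabled_sms = 11
print("CUDA cores:", enabled_sms * CORES_PER_SM)    # 352
print("Texture units:", enabled_sms * TMUS_PER_SM)  # 44

controllers = 4                                     # of the full GF100's six
print("Memory bus:", controllers * 64, "bits")      # 256-bit
print("ROPs:", controllers * 8)                     # 32
```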

EVGA-GTX465-14.jpg

Earlier, we mentioned that disabling cores can be beneficial when it comes to maximizing the returns from a single wafer. Unfortunately, this can also be a double edged sword due to the law of diminishing returns that governs die size, performance and power consumption. It is feasible to disable the vast majority of cores on a given die in order to make a lower-end product, but what you are left with is a massive number of transistors that still consume power (albeit a fraction of what they would if all the cores were enabled), which makes such a proposition unappealing to consumers and board partners alike. This is one of the main reasons why the GTX 465 has a similar TDP to that of a GTX 470 while offering significantly less horsepower. What’s the solution? A cut-down die like ATI has been using for the HD 5770 and lower-end cards, but for the time being NVIDIA hasn’t released any details about what such a chip might look like.
 

EVGA’s Warranty, Step-Up, Etc.

Many of us know EVGA by name since their cards are usually some of the best priced on the market. Other than that, there are several things which EVGA has done to try to differentiate their business model from that of their competition. Not only do they have an excellent support forum and an open, friendly staff but it also seems like they have a love for their products you just can’t find many other places. Passion for one’s products goes a long way in this industry but without a good backbone of customer support, it would all be for nothing. Let’s take a look at what EVGA has to offer the customer AFTER they buy the product.


Lifetime Warranty

GTX280-92.jpg

Every consumer wants peace of mind when buying a new computer component, especially when that component costs hundreds of dollars. In order to protect your investment, EVGA offers their customers a lifetime warranty program which is in effect from the day you register the card until…well…the end of time. The only caveat is that you must register your card within 30 days of purchase or you will only be eligible for their new 1+1 warranty. So as long as you don’t get lazy or forget, consider yourself covered even if you remove the heatsink. The only thing this warranty doesn’t cover is physical damage to your card. For more information about the lifetime warranty you can go here: EVGA | Product Warranty

Even if you forget to register your card within the 30 days necessary to receive the lifetime warranty, EVGA still offers you a 1 year warranty.


Step-Up Program

EVGA-GTX465-13.jpg

While some competitors like BFG now offer trade-up programs as well, EVGA will always be known for having the first program of this type. It allows a customer with an EVGA card to “step up” to a different model within 90 days of purchase. Naturally, the difference in price between the value of your old card and that of the new card has to be paid, but other than that it is a pretty simple process that gives EVGA’s customers access to newer cards. As usual, certain conditions apply, such as the new card being in stock with EVGA and the need to register your card, but otherwise it is pretty straightforward. Check out all the details here: EVGA | Step-Up Program


24 / 7 Tech Support

GTX280-89.jpg

Don’t you hate it when things go ass-up in the middle of the night without tech support around for the next dozen hours or so? Luckily for you EVGA purchasers, there is a dedicated tech support line which is open 24 hours a day, 7 days a week. As far as we could tell, this isn’t farmed out tech support to the nether regions of Pakistan either since every rep we have spoken to over the last few years has had impeccable English. Well, we say that but maybe EVGA hunted down the last dozen or so expats living in Karachi.
 

EVGA GTX 465 Superclocked Specs / Packaging & Accessories

EVGA GTX 465 Superclocked Specs


EVGA-GTX465-78.jpg

As with all of EVGA’s “Superclocked” edition cards, this one doesn’t come with any massive overclocks. There was a day before the advent of EVGA’s SSC and FTW models when the “Superclocked” designation meant you were getting significantly higher clock speeds but those days are long over.

Unfortunately, neither the 18MHz core overclock nor the 34MHz memory speed increase will do anything noticeable for gaming, though in our charts there will be a small difference between this card and the reference version. We are told that EVGA has additional SSC and FTW cards in the pipeline, so we’ll hold out some hope for higher clock speeds in the future.
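To put the bump in perspective, a rough percentage calculation follows. The 607MHz reference core clock is an assumption based on NVIDIA’s reference specification, and we are assuming the 34MHz applies to the effective (QDR) memory rate quoted on the specifications page, so treat the percentages as approximate.

```python
# Approximate gains from EVGA's factory overclock (reference clocks assumed, see note above).
ref_core, ref_mem = 607, 3206            # MHz; memory figure is the effective (QDR) rate
oc_core, oc_mem = ref_core + 18, ref_mem + 34

print(f"Core:   {ref_core} -> {oc_core} MHz (+{18 / ref_core:.1%})")   # about +3.0%
print(f"Memory: {ref_mem} -> {oc_mem} MHz (+{34 / ref_mem:.1%})")      # about +1.1%
```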


Packaging and Accessories


EVGA-GTX465-1.jpg
EVGA-GTX465-2.jpg

EVGA’s box is…well…typical EVGA to be honest. With a black background, a highlighted “SC” designating this as the Superclocked version and a listing of features on the back, the only thing that is really missing is any mention of actual clock speeds.

EVGA-GTX465-3.jpg

Within the box is a plastic sleeve which protects the card from damage as well as the accessories.

EVGA-GTX465-4.jpg
EVGA-GTX465-5.jpg

The accessories that come with EVGA’s Superclocked version of the GTX 465 actually buck the trend put forward by other board partners of including only the basics. In EVGA’s case we still have the usual dual Molex to 6-pin adaptors, driver CD and DVI to VGA dongle but supplementing this is a six foot mini HDMI to HDMI cable, a case badge and a massive case sticker. Not too shabby at all.
 