NVIDIA GTX 660 2GB Review

SKYMTL, HardwareCanuck Review Editor
In today’s gaming market, there’s a very simple rule of thumb: everyone loves reading about high end graphics cards, but with affordability dictating purchase decisions, most gamers will forego expensive flagship products. The $399 to $299 segments take up most of the slack but the sub-$250 market is where AMD and NVIDIA typically hit their price / performance stride. Looking back at previous and current generations, this is where the ultra popular 8800 GT, GTX 460, HD 6870 and HD 7850 cut their teeth before inevitable price drops brought them to even lower price points.

NVIDIA calls this narrow yet popular slice of the overall GPU pie the “gamers’ sweet spot” but up until this point, they have been more than content to let the GTX 560 Ti and its ilk compete against the Pitcairn-based HD 7800-series. The battle has been somewhat lopsided but the entry of the new GeForce GTX 660 2GB is meant to effectively turn things around by utilizing an efficient GK106 core and setting a new benchmark in high performance affordability with a price of just $229.

GTX-660-19.jpg

NVIDIA’s GTX 660 Ti used a scaled down version of the GK104, a core which was used to great effect within two other high end cards: the GTX 670 and GTX 680. Since only so much can be cut from a given microarchitecture before it becomes inefficient and cost prohibitive for lower spec’d SKUs, another route was taken for the GTX 660. Even though it carries the same name as its larger sibling, the GTX 660 2GB uses a new core code-named GK106.

While it may still use the Kepler architecture championed by NVIDIA this generation, the GK106’s core layout is slightly different and more streamlined than the GK104’s 3.54 billion transistor payload. In its current form, the GK106 uses five SMX blocks, each with 192 CUDA cores and 16 texture units, resulting in a die with 960 CUDA cores and 80 TMUs. Four of these Streaming Multiprocessor blocks are paired up into two Graphics Processing Clusters (with their associated Raster Engines), which leaves one lonely SMX for the final GPC. Naturally, this somewhat unbalanced approach has us wondering whether the GK106 had a single 192-core SMX cut out in order to ensure the necessary performance segmentation between the GTX 660 Ti and GTX 660.

Aside from the typical core rendering functionality, the GK106 also incorporates the usual pairing of ROP, GDDR5 memory controller and L2 cache partitions. As such, it uses three partitions of eight ROPs (for a total of 24), three 64-bit memory controllers and 384KB of L2 cache. With a 192-bit memory interface, the mixed memory sizes from past generations have been carried over as well, allowing GK106-based cards to use 1.5GB, 2GB or 3GB of memory. In its reference form, the GTX 660 will utilize a 2GB framebuffer but expect the board partners to introduce custom 3GB cards in short order.

All of the changes wrapped into this core have resulted in a substantial die size savings which in effect lowers per unit costs and should hopefully improve yields as well. At 2.54 billion transistors, the GK106 is about 28% smaller than the GK104.
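To put the unit counts above in one place, here is a quick back-of-the-envelope tally; it simply restates the figures quoted in this article (five SMXs, three ROP / memory / L2 partitions and the two transistor counts) as arithmetic:

```python
# Quick tally of the GK106 layout described above (figures from this article).
smx_count = 5                        # four SMXs split across two GPCs, plus one on its own
cuda_cores = smx_count * 192         # 5 x 192 = 960 CUDA cores
texture_units = smx_count * 16       # 5 x 16  = 80 TMUs

partitions = 3
rops = partitions * 8                # 3 x 8     = 24 ROPs
bus_width_bits = partitions * 64     # 3 x 64    = 192-bit memory interface
l2_cache_kb = partitions * 128       # 3 x 128KB = 384KB of L2

gk104_transistors = 3.54e9
gk106_transistors = 2.54e9
reduction = 1 - gk106_transistors / gk104_transistors
print(cuda_cores, texture_units, rops, bus_width_bits, l2_cache_kb)
print(f"GK106 carries roughly {reduction:.0%} fewer transistors than GK104")  # ~28%
```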

GTX-660-79.jpg

In terms of specifications, the GTX 660 is a combination of freshness coupled with a few instances of déjà vu. Its 960 cores, 24 ROPs and 80 TMUs are far enough removed from the Ti’s offerings that there should be little to no performance overlap, even with pre-overclocked cards from NVIDIA’s board partners. Meanwhile, the GK106’s low transistor count allows the GTX 660 to run at a blistering 980MHz –or 1033MHz and higher with Boost- without guzzling down too much power. As a matter of fact, even though its TDP is listed as 140W, the card shouldn’t consume more than 117W in typical gaming scenarios. This frugality may not seem like a big deal but it allows the GTX 660 to become a simple drop-in upgrade for users with older systems and slightly outdated power supplies.

The memory layout of this card hasn’t changed one iota from the GTX 660 Ti. Like its $299 sibling, the GTX 660 uses 2GB of GDDR5 operating at 6Gbps on a 192-bit bus, which is good for 144GB/s. This puts it within spitting distance of the 153.6GB/s offered by the HD 7870 and HD 7850 so expect very close performance in most bandwidth limited situations.
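The 144GB/s figure follows directly from the quoted memory speed and bus width; a one-line sanity check:

```python
# Peak memory bandwidth: effective data rate per pin x bus width.
data_rate_gbps = 6            # 6Gbps effective GDDR5 speed
bus_width_bits = 192
bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(bandwidth_gb_s)         # 144.0 GB/s
```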

At $229, the GTX 660 slides neatly into NVIDIA’s current lineup between the GTX 660 Ti and the newly introduced GTX 650. For those of you wondering, the $109 GTX 650 is essentially a clone of the OEM-only GT 640 GDDR5 with higher clock speeds but the same GK107 core. We’ll be covering this card in depth soon but for the time being, it is a simple footnote next to the GTX 660 launch.

GTX-660-17.jpg

Within today’s market, NVIDIA didn’t really need all that much crystal ball gazing to determine where the GTX 660 2GB should end up. Currently, they have AMD’s HD 7950 and the virtually non-existent HD 7950 Boost Edition covered with the GTX 660 Ti. However, the $259 HD 7870 GHz Edition and $199 to $209 HD 7850 2GB have been wreaking havoc upon the mid range since their release more than half a year ago. Their judicious price cuts have brought pressure upon NVIDIA but that hasn’t stopped the GTX 660 from sliding neatly into the space between these two AMD competitors.

In essence, this new card has been parachuted into a minefield of strong products in the hope that it will offer a balance between two extremely popular alternative solutions. It’s also got some big shoes to fill since the GTX 560 Ti and GTX 460 were both considered leaders of their respective generations.
 

EVGA GTX 660 SC & MSI GTX 660 Twin Frozr OC


GTX-660-80.jpg

Within this review, we will actually be covering three cards in parallel instead of the usual reference-only approach. Naturally, the GTX 660 2GB reference card shows up but this time it is in the form of an EVGA GTX 660 Superclocked with a reflashed stock BIOS. Unfortunately, our testing has shown that simply downclocking overclocked Kepler cards does not result in reference-matching performance so we needed to take the BIOS flashing route in order to ensure complete accuracy.

The Superclocked will also be represented at its default speeds which, while not that much higher than NVIDIA’s base specifications, should still grant a few extra FPS here and there. The real selling point of this card will likely be its transferable warranty and a price of just $229. This is the first time this generation we are seeing EVGA come to the table with a pre-overclocked card that doesn’t cost any more than the base MSRP.

MSI also makes an appearance in this review with their GTX 660 Twin Frozr OC. While it may not have the Power Edition’s upgraded component stack or clock speeds as high as EVGA’s, the OC still has higher than reference speeds and a great heatsink design. Unlike the EVGA Superclocked, this card will be priced at a $10 premium but according to MSI, a mail-in rebate program running at launch will bring the Twin Frozr OC’s cost down by $10.


EVGA GTX 660 2GB Superclocked


Part Number: 02G-P4-2662-KR
Warranty: 3 Years (transferable)

GTX-660-1.jpg

EVGA has based their GTX 660 Superclocked off of the reference design and, barring a few extremely minor differences, it could be the GTX 660 Ti’s doppelganger. The GTX 660 uses a classic blower-style heatsink design that exhausts hot air outside of a case’s confines and, at just 9 ½” long, EVGA’s card should have no issue fitting into most enclosures. We also happen to like the understated tone-on-tone design used here since it should blend seamlessly into any environment.

GTX-660-2.jpg
GTX-660-4.jpg

With a typical power draw of only 117W and a TDP of 140W, NVIDIA didn’t need to get fancy with the GTX 660’s power connectors, so this card only has a single 6-pin input; between the PCI-E slot’s 75W and the 6-pin connector’s 75W, there is enough on tap to cover the 140W TDP. This should make it compatible with most existing power supplies but should your PSU come up short, EVGA has included a Molex to 6-pin adaptor. Meanwhile, only a single SLI connector is included which means the GK106 is compatible with 2-way SLI, a step down from the GTX 660 Ti’s Tri-SLI certification.

GTX-660-3.jpg

As one might expect, the real differences between the GTX 660 and its bigger brother –the GTX 660 Ti- aren’t evident by just looking at the PCB’s underside. However, we can see that NVIDIA’s reference PCB comes in at an extremely short 6 ¾” while the remainder of this card’s length is taken up by the heatsink / fan shroud’s overhang.

GTX-660-5.jpg

The backplate connectors are a carbon copy of those found on most other Kepler-based cards: lone DisplayPort and HDMI outputs and dual DVIs. However, EVGA has changed things up a bit by using their custom high airflow bracket which is supposed to allow for increased exhaust capacity and lower internal temperatures.


MSI GTX 660 2GB Twin Frozr OC


Part Number: N660-TF-2GD5/OC
Warranty: 3 Years

GTX-660-11.jpg

MSI’s GTX 660 Twin Frozr uses the iconic heatsink its name is derived from but, with a length of 8 ¾”, it is actually shorter than the reference design. Unlike the GTX 660 Ti Power Edition, this card doesn’t incorporate a revised and upgraded PWM design but it does feature slightly higher end components than the reference card.


GTX-660-12.jpg
GTX-660-13.jpg

The Twin Frozr heatsink uses a pair of 70mm fans, a large internal aluminum fin array and three large heatpipes which are attached to a nickel plated base plate. MSI has also included a secondary aluminum stiffener in order to maintain PCB rigidity.

GTX-660-14.jpg
GTX-660-15.jpg

The card’s single 6-pin power connector is all that’s needed in order to keep the Twin Frozr OC fed with current, despite its higher clock speeds and ability to overclock even further. This power connector is attached to a large blank area of the PCB which is completely devoid of components as the PWM section has been pushed in closer to the GK106 core.

GTX-660-16.jpg

As with nearly every GTX 660 we are bound to see, the MSI card uses a reference I/O panel with two DVI connectors and outputs for HDMI and DisplayPort, allowing for native NVIDIA Surround + accessory display compatibility.
 

Under the GTX 660’s Heatsink


Note: we used EVGA’s GTX 660 SC for this section since it uses a reference PCB design. However, the heatsink on the reference card will be different from the one you see below.

GTX-660-6.jpg

Removing the heatsink from this card is extremely easy, which bodes well for anyone installing an aftermarket cooler. Just be aware that there are a few small overhangs that tend to snag the PCB so don’t force anything or you’ll likely crack the plastic casing.

GTX-660-9.jpg
GTX-660-10.jpg

Instead of a somewhat cheap looking heatsink, EVGA’s card uses a large cast aluminum affair with a copper contact plate and secondary cooling areas for the VRM modules and GDDR5 memory. This is all tied to a dense fin array that’s directly fed by a copper heatpipe, making it much more substantial than anything we’ve seen on reference GTX 660 Ti and GTX 670 cards. EVGA tells us this is a custom design which is supposed to evenly distribute heat while ensuring all of the components stay at optimal temperatures.

GTX-660-8.jpg
GTX-660-7.jpg

The component layout on the GTX 660 is different from previous cards as well since its 4+2 PWM is pushed to the rearmost area of the PCB while the memory modules are loosely spaced around the core’s periphery. Speaking of the GK106 core, at 214mm² it is substantially smaller than the 294mm² of NVIDIA’s GK104 and it does without the large IHS from previous generations.
 

The SMX: Kepler’s Building Block


GTX-680-116.jpg

Much like Fermi, Kepler uses a modular architecture which is structured into dedicated, self contained compute / graphics units called Streaming Multiprocessors or in this case Extreme Streaming Multiprocessors. While the basic design and implementation principles may be the same as the previous generation (other than doubling up the parallel threading capacity that is), several changes have been built into this version that help it further maximize performance and consume less power than its predecessor.

Due to die space limitations on the 40nm manufacturing process, the Fermi architecture had to cope with fewer CUDA cores, but NVIDIA offset this shortcoming by running those cores at a higher speed than the rest of the processing stages. The result was a 1:2 graphics to shader clock ratio that led to excellent performance but unfortunately high power consumption numbers.

As we already mentioned, the inherent efficiencies of TSMC’s 28nm manufacturing process have allowed Kepler’s SMX to take a different path by offering six times the number of processors while running their clocks at a 1:1 ratio with the rest of the core. So essentially we are left with core components that run at slower speeds, but in this case sheer volume makes up for and indeed surpasses any limitation. In theory this should lead to an increase in raw processing power for graphics intensive workloads and higher performance per watt even though the CUDA cores’ basic functionality and throughput hasn’t changed.

Each SMX holds 192 CUDA cores along with 32 load / store units, which allow source and destination addresses to be calculated for 32 threads per clock. Alongside these core blocks sit the Warp Schedulers and their associated dispatch units, which keep up to 64 concurrent groups of threads (called warps) in flight, while the primary register file currently sits at 65,536 x 32-bit entries. All of these numbers have been increased twofold over the previous generation to avoid causing bottlenecks now that each SMX’s CUDA core count is so high.
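As a rough way of quantifying the last two paragraphs, the comparison below assumes a GF110-style Fermi SM (32 CUDA cores with shaders running at twice the graphics clock) against a Kepler SMX at a 1:1 ratio, and also converts the register file entry count into bytes. It is an illustration of the scaling argument, not a benchmark:

```python
# Per-SM shader throughput at the same graphics clock (simplified comparison).
fermi_cores_per_sm = 32        # GF110-style SM (assumption for this illustration)
fermi_shader_ratio = 2         # Fermi shaders ran at 2x the graphics clock
kepler_cores_per_smx = 192     # Kepler SMX, shaders at a 1:1 clock ratio

throughput_ratio = kepler_cores_per_smx / (fermi_cores_per_sm * fermi_shader_ratio)
print(f"{throughput_ratio:.1f}x the per-SM shader work per graphics clock")  # 3.0x

# Register file per SMX: 65,536 entries x 32 bits each.
register_file_kb = 65536 * 32 / 8 / 1024
print(f"{register_file_kb:.0f} KB register file per SMX")                    # 256 KB
```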

GTX-680-117.jpg

NVIDIA’s ubiquitous PolyMorph geometry engine has gone through a redesign as well. Each engine still contains five stages from Vertex Fetch to the Stream Output which process data from the SMX they are associated with. The data then gets output to the Raster Engine within each Graphics Processing Cluster. In order to further speed up operations, data is dynamically load balanced and goes from one of eight PolyMorph engines to another through the on-die caching infrastructure for increased communication speed.

The main difference between the current and past generation PolyMorph engines boils down to data stream efficiency. The new “2.0” version in the Kepler core boasts primitive rates that are two times higher and, along with other improvements throughout the architecture, offers a fourfold increase in tessellation performance over the Fermi-based cores.

GTX-680-118.jpg

The SMX plays host to a dedicated caching network which runs parallel to the primary core stages in order to help store draw calls so they are not passed off through the card’s memory controllers, taking up valuable storage space. Not only does this help with geometry processing efficiency but GPGPU performance can also be drastically increased provided an API can take full advantage of the caching hierarchy.

As with Fermi, each one of Kepler’s SMX blocks has 64KB of shared, programmable on-chip memory that can be configured in one of three ways. It can either be laid out as 48KB of shared memory with 16KB of L1 cache, or as 16KB of shared memory with 48KB of L1 cache. Kepler adds another 32KB / 32KB mode which balances out the configuration for situations where the core may be processing graphics in parallel with compute tasks. This L1 cache is supposed to help with access to the on-die L2 cache as well as streamlining functions like stack operations and global loads / stores. However, in total, the GK104 and GK106 have fewer SMXs than their Fermi predecessors, which results in significantly less on-die memory. This could negatively impact compute performance in some instances.
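A quick tally puts the on-die memory point into perspective, assuming the commonly quoted 16 SMs for Fermi’s GF110 and the SMX counts used in this article (8 for GK104, 5 for GK106):

```python
# Total configurable shared memory / L1 cache per chip (64KB per SM or SMX).
per_block_kb = 64
gf110_kb = 16 * per_block_kb   # Fermi GF110: 16 SMs  -> 1024 KB
gk104_kb = 8 * per_block_kb    # Kepler GK104: 8 SMXs ->  512 KB
gk106_kb = 5 * per_block_kb    # Kepler GK106: 5 SMXs ->  320 KB
print(gf110_kb, gk104_kb, gk106_kb)
```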

Even though there haven’t been any fundamental changes in the way textures are handled across the Kepler architecture, each SMX receives a huge influx of texture units: 16, up from Fermi’s four.
 

GPU Boost: Dynamic Clocking Comes to Graphics Cards


Turbo Boost was first introduced into Intel’s CPUs years ago and, through a successive number of revisions, it has become the de facto standard for situation-dependent processing performance. In layman’s terms, Turbo Boost allows Intel’s processors to dynamically fluctuate their clock speeds based upon operational conditions, power targets and the demands of certain programs. For example, if a program only demanded a pair of a CPU’s six cores, the monitoring algorithms would increase the clock speeds of the two utilized cores while the others would sit idle. This sets the stage for NVIDIA’s feature called GPU Boost.

GTX-680-110.gif

Before we go on, let’s explain one of the most important factors in determining how high a modern high end graphics card can clock: a power target. Typically, vendors like AMD and NVIDIA set this in such a way that ensures an ASIC doesn’t overshoot a given TDP value, putting undue stress upon its included components. Without this, board partners would have one hell of a time designing their cards so they wouldn’t overheat, pull too much power from the PWM or overload a PSU’s rails.

While every game typically strives to take advantage of as many GPU resources as possible, many don’t fully utilize every element of a given architecture. As such, some processing stages may sit idle while others are left to do the majority of rendering, post processing and other tasks. As in our Intel Turbo Boost example, this situation results in lower heat production and reduced power consumption, and it will ultimately cause the GPU core to fall well short of its predetermined power (or TDP) target.

In order to take advantage of this, NVIDIA has set their “base clock” –or reference clock- in line with a worst case scenario, which allows for a significant amount of overhead in typical games. This is where the so-called GPU Boost gets worked into the equation. Through a combination of software and hardware monitoring, GPU Boost fluctuates clock speeds in an effort to run as close as possible to each card’s power target: the GTX 680’s TDP of 195W, the GTX 670’s TDP of 170W, the 150W allotted to the GTX 660 Ti and finally the GTX 660’s 140W. When gaming, this monitoring algorithm will typically result in a core speed that is higher than the stated base clock.

GTX-680-111.gif

Unfortunately, things do get a bit complicated since we are now talking about two clock speeds, one of which may vary from one application to another. The “Base Clock” is the minimum speed at which the core is guaranteed to run, regardless of the application being used. Granted, there may be some power viruses out there which will push the card beyond even these limits but the lion’s share of games and even most synthetic applications will have no issue running at or above the Base Clock.

The “Boost Clock” meanwhile is the typical speed at which the core will run in non-TDP limited applications. As you can imagine, depending on the core’s operational proximity to the power target this value will fluctuate to higher and lower levels. However, NVIDIA likens the Boost Clock rating to a happy medium that nearly every game will achieve, at a minimum. For those of you wondering, both the Base Clock and the Boost Clock will be advertised on all Kepler-based cards; on the GTX 680 the values are 1006MHz and 1058MHz respectively, the GTX 670 and GTX 660 Ti run at 915MHz / 980MHz and the GTX 660 hits 980MHz / 1033MHz.

GPU Boost differs from AMD’s PowerTune in a number of ways. While AMD sets their base clock off of a typical in-game TDP scenario and throttles performance if an application exceeds these predetermined limits, NVIDIA has taken a more conservative approach to clock speeds. Their base clock is the minimum level at which the architecture will run under worst case conditions, which allows for a clock speed increase in most games rather than throttling.
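To make the base clock / Boost clock relationship a bit more concrete, the sketch below models GPU Boost as a trivially simple control loop that nudges the clock every polling interval depending on whether measured board power sits above or below the target. It is only an illustration of the behaviour described above; the step size is an assumption and NVIDIA’s actual algorithm is considerably more sophisticated:

```python
def boost_step(clock_mhz, measured_power_w, power_target_w,
               base_clock_mhz=980, step_mhz=13):
    """One polling interval (~100ms) of a toy GPU Boost-style controller.

    Raise the clock while board power sits under the target, back it off when
    the target is exceeded, and never drop below the advertised base clock.
    """
    if measured_power_w < power_target_w:
        return clock_mhz + step_mhz                      # headroom left: boost
    return max(base_clock_mhz, clock_mhz - step_mhz)     # over target: back off

# Example: a GTX 660-like card with a 140W power target.
clock = 980
for power in (110, 118, 125, 132, 139, 142, 138):
    clock = boost_step(clock, power, power_target_w=140)
    print(clock)   # climbs past the 1033MHz Boost clock, then settles back
```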

In order to better give you an idea of how GPU Boost operates, we logged clock speeds and power use in Dirt 3 and 3DMark11 using EVGA’s new Precision X utility.

GTX-680-121.gif

GTX-680-122.gif

Example w/GTX 680

In both of the situations above the clock speeds tend to fluctuate as the core moves closer to and further away from its maximum power limit. Since the reaction time of the GPU Boost algorithm is about 100ms, there are situations when clock speeds don’t line up with power use, causing a minor peak or valley but for the most part both run in perfect harmony. This is most evident in the 3DMark11 tests where we see Kepler’s ability to run slightly above the base clock in a GPU intensive test and then boost up to even higher levels in the Combined Test which doesn’t stress the architecture nearly as much.

GTX-680-93.jpg

Example w/GTX 680

According to NVIDIA, lower temperatures could promote higher GPU Boost clocks but even by increasing our sample’s fan speed to 100%, we couldn’t achieve higher Boost speeds. We’re guessing that high end forms of water cooling would be needed to give this feature more headroom and, according to some board partners, benefits could be seen once temperatures drop below 70 degrees Celsius. However, the default GPU Boost / power offset NVIDIA built into their core seems to leave more than enough wiggle room to ensure that all reference-based cards should behave in the same manner.

There may be a bit of variance from the highest to the lowest leakage parts but the resulting dropoff in Boost clocks will never be noticeable in-game. This is why the boost clock is so conservative; it strives to stay as close as possible to a given point so power consumption shouldn’t fluctuate wildly from one application to another. But will this cause performance differences from one reference card to another? Absolutely not unless they are running at abnormally hot or very cool temperatures.
 

Introducing TXAA


As with every other new graphics architecture that has launched in the last few years, NVIDIA will be launching a few new features alongside Kepler. In order to improve image quality in a wide variety of scenarios, FXAA has been added as an option to NVIDIA’s control panel, making it applicable to every game. For those of you who haven’t used it, FXAA is a form of post processing anti-aliasing which offers image quality that’s comparable to MSAA but at a fraction of the performance cost.

GTX-680-103.jpg

Another item that has been added is a new anti-aliasing mode called TXAA. TXAA uses hardware multisampling alongside a custom software-based resolve filter for a sense of smoothness and adds an optional temporal component for even higher in-game image quality.

According to NVIDIA, their TXAA 1 mode offers comparable performance to 2xMSAA but results in much higher edge quality than 8xMSAA. TXAA 2 meanwhile steps things up to the next level by offering image enhancements that can’t be equaled by MSAA but once again the performance impact is negligible when compared against higher levels of multisampling.

GTX-680-113.gif

From the demos we were shown, TXAA has the ability to significantly decrease the aliasing in scenes. Indeed, it looks like the developer market is trying to move away from inefficient implementations of multi-sample anti-aliasing and has instead started gravitating towards higher performance alternatives like MLAA, FXAA and now possibly TXAA.

There is however one catch: TXAA cannot be enabled in NVIDIA’s control panel. Instead, game engines have to support it and developers will be implementing it within the in-game options. Presently the only title that supports it is The Secret World MMORPG but additional titles should become available soon.
 

Smoother Gaming Through Adaptive VSync


In a market that seems eternally obsessed with high framerates, artificially capping performance at a certain level by enabling Vertical Synchronization (or VSync) may seem like a cardinal sin. In simplified terms, VSync sets the framerate within games to the refresh rate of the monitor, which means games running on 60Hz monitors will achieve framerates of no higher than 60FPS. 120Hz panels raise this limit to 120FPS but monitors sporting the technology are few in number and they usually come with astronomical price points.

GTX-680-101.jpg

With today’s graphics cards pushing boundaries that weren’t even dreamed of a few years ago, gamers usually want to harness every last drop of their latest purchase. Because of this, alongside the possible input lag issues VSync causes, many gamers choose to disable it altogether. However, there are some noteworthy issues associated with running games at high framerates, asynchronously to the vertical refresh rate of most monitors.

Without VSync enabled, games will flow more naturally, average framerates are substantially higher and commands will be registered onscreen in short order. However, as the framerate runs outside of the monitor’s refresh rate, tearing begins to occur, decreasing image quality and potentially leading to unwanted distractions. Tearing happens when fragments of multiple frames are displayed on the screen at once because the monitor can’t keep up with the massive amount of rendered information being pushed through.

GTX-680-100.jpg

For some, VSync can be a saving grace since it eliminates the horizontal tearing but other than the aforementioned input lag, there is one other major drawback: stuttering. Remember, syncing up the monitor with your game holds both the refresh rate and framerate at 60. However, some scenes can cause framerates to droop well below the optimal 60 mark, which leads to some frames being “missed” by the 60Hz monitor refresh and thus causes a slight stuttering effect. Basically, as the monitor is refreshing itself 60 times every second, the lower framerate causes it to momentarily display 30, 20, 15 (or another whole fraction of 60) frames per second.
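The 30 / 20 / 15 FPS behaviour falls straight out of the refresh interval arithmetic: with VSync on, any frame that misses the roughly 16.7ms window is held on screen for two, three or more full refreshes. A small sketch of that relationship:

```python
import math

REFRESH_HZ = 60
REFRESH_INTERVAL_MS = 1000 / REFRESH_HZ          # ~16.7ms per refresh

def effective_fps(frame_time_ms):
    """Displayed framerate with VSync: each frame occupies whole refreshes."""
    refreshes_held = math.ceil(frame_time_ms / REFRESH_INTERVAL_MS)
    return REFRESH_HZ / refreshes_held

for frame_time in (16.0, 18.0, 35.0, 55.0):
    print(frame_time, effective_fps(frame_time))  # 60, 30, 20 and 15 FPS
```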

GTX-680-102.jpg

Through Adaptive VSync, NVIDIA now gives users the best of both worlds: framerates are still capped at the same level as the screen’s refresh rate but when framerate droops are detected, synchronization is temporarily disabled. This boosts framerates for as long as needed before once again enabling VSync when performance climbs back to optimal levels. It is supposed to virtually eliminate visible stutter –even though some will still occur as the algorithm switches over- and improve overall framerates while still maintaining the tear-free experience normally associated with VSync.

GTX-680-120.jpg

Adaptive VSync can be enabled in the driver’s control panel but will only be available starting with the 300.xx-series driver stack. For this technology to be effective, all VSync changes should be done in the control panel while VSync needs to be disabled within any in-game graphics menu. There’s also the option for a “Half Refresh Rate” sync that can be used to lock framerates to 30 FPS for highly demanding games or for graphics cards that can’t quite hit the 60 FPS mark.
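In very rough pseudologic, the per-interval decision Adaptive VSync makes looks something like the sketch below, including the optional Half Refresh Rate mode; this is an illustrative approximation of the behaviour described above rather than NVIDIA’s actual driver logic:

```python
def adaptive_vsync(measured_fps, refresh_hz=60, half_refresh=False):
    """Toy approximation of Adaptive VSync's decision for a given interval.

    Sync to the (half) refresh rate while the GPU can keep up; temporarily
    drop synchronization when the framerate falls below it to avoid stutter.
    """
    target = refresh_hz // 2 if half_refresh else refresh_hz
    if measured_fps >= target:
        return "vsync on", target        # capped at the refresh rate, no tearing
    return "vsync off", measured_fps     # let frames flow to reduce stutter

print(adaptive_vsync(75))                        # ('vsync on', 60)
print(adaptive_vsync(48))                        # ('vsync off', 48)
print(adaptive_vsync(48, half_refresh=True))     # ('vsync on', 30)
```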
 

Test System & Setup / Benchmark Sequences

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: Gigabyte EX58-UD5
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 306.23 Beta
AMD 12.8

***Note that the GTX 660 Ti and GTX 660 used in the following tests are EVGA Superclocked versions that have received a BIOS flash with reference specifications. Unfortunately, downclocking a Kepler-based pre-overclocked card WILL NEVER result in performance that replicates a reference design as the only way of modifying the base clock is through a modified BIOS.


Application Benchmark Information:
Note: In all instances, in-game sequences were used. The videos of the benchmark sequences have been uploaded below.


Batman: Arkham City

http://www.youtube.com/v/Oia84huCvLI?version=3&hl=en_US&rel=0


Battlefield 3

http://www.youtube.com/v/i6ncTGlBoAw?version=3&hl=en_US


Crysis 2

http://www.youtube.com/v/Bc7_IAKmAsQ?version=3&hl=en_US


Deus Ex Human Revolution

http://www.youtube.com/v/GixMX3nK9l8?version=3&hl=en_US


Dirt 3

http://www.youtube.com/v/g5FaVwmLzUw?version=3&hl=en_US


Metro 2033

http://www.youtube.com/v/8aZA5f8l-9E?version=3&hl=en_US


Shogun 2: Total War

http://www.youtube.com/v/oDp29bJPCBQ?version=3&hl=en_US


Skyrim

http://www.youtube.com/v/HQGfH5sjDEk?version=3&hl=en_US&rel=0


Wargame: European Escalation

http://www.youtube.com/v/ztXmjZnWdmk?version=3&hl=en_US&rel=0


Witcher 2 v2.0

http://www.youtube.com/v/tyCIuFtlSJU?version=3&hl=en_US

*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings
 

3DMark 11 (DX11)


3DMark 11 is the latest in a long line of synthetic benchmarking programs from the Futuremark Corporation. This is their first foray into the DX11 rendering field and the result is a program that incorporates all of the latest techniques into a stunning display of imagery. Tessellation, depth of field, HDR, OpenCL physics and many others are on display here. In the benchmarks below we have included the results (at default settings) for both the Performance and Extreme presets.


Performance Preset

GTX-660-30.jpg


Extreme Preset

GTX-660-31.jpg
 

Batman: Arkham City (DX11)


Batman: Arkham City is a great looking game when all of its detail levels are maxed out but it also takes a fearsome toll on your system. In this benchmark we use a simple walkthrough that displays several in game elements. The built-in benchmark was avoided like the plague simply because the results it generates do not accurately reflect in-game performance.

1920 x 1200

GTX-660-32.jpg


GTX-660-33.jpg


2560 x 1600

GTX-660-34.jpg


GTX-660-35.jpg
 
