
NVIDIA GeForce GTX 660 Ti Review

SKYMTL, HardwareCanuck Review Editor
In the months since its release, NVIDIA’s Kepler has reached impressive levels of popularity. The GTX 680 had waiting lists as long as we’ve ever seen, the GTX 670 remains a price / performance leader and the astronomically expensive GTX 690 is the fastest single card on the market. Unfortunately, there really hasn’t been anything geared towards sub-$350 price points but that’s all about to change with the GTX 660 Ti 2GB.

The GTX 660 Ti continues NVIDIA’s march down-market into lower pricing segments by targeting owners of GTX 460 and HD 6870 series products who are looking for a significant performance boost in today’s more demanding games. Traditionally, cards within the $199 to $349 price brackets are hot commodities and the GTX 660 Ti looks to be no different. This sweet spot segment has housed cards like the GTX 560 Ti and GTX 460, and now that their prices have been cut, the HD 7870 and HD 7950 as well. For those of you who need a quick recounting of history, every one of those products defined its generation and is still sought after today.

GTX-660-TI-18.jpg

So what differentiates a GTX 660 Ti from its more expensive and more capable siblings? The answer is twofold: at first glance not much has changed, but the few differences that were made will have a profound impact on overall performance. Simply put, this card still uses the 3.54 billion transistor GK104 core we’ve come to know from other Kepler-based cards but reduces capabilities in certain key areas.

Moving to this version of the GK104 seems to have been relatively simple, and NVIDIA’s engineers deserve some credit for designing what appears to be a very scalable architecture. Instead of creating a whole new core that shares some base elements with the flagship parts (the GF114 and GF104 used this philosophy to great effect), NVIDIA simply optimized the existing GK104 by cutting back on the memory controllers, ROPs and L2 cache.

Since the memory controllers are tied at the hip to L2 cache and the ROP arrays, all three had to be cut back in order to achieve a balanced, adaptable solution for lower price brackets. Naturally, this will scale back performance when compared against a GTX 670 but these changes have also led to a significantly lower power and heat envelope than other Kepler-based cards.

GTX-660-TI-79.jpg

The end result of NVIDIA’s cutting is a core with 1344 CUDA cores and 112 texture units spread across seven SMX engines, with an operating frequency of up to 980MHz with GPU Boost, mirroring the specifications of a GTX 670. The similarities stop there since the GTX 660 Ti will come with 24 ROPs and a 192-bit memory interface in order to distinguish it from other Kepler-based products. In plain English this means only 1/8th of the cores have been disabled, but 1/4 of the ROP / cache pipeline and one of the four memory controllers will be unavailable to the remaining cores. Will this cause a bottleneck? With the GTX 660 Ti’s operating frequency being equal to that of the GTX 670, likely not in most situations, but the performance hit in certain scenarios could be significant.
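For those keeping score, the math is straightforward, assuming the same 192 cores and 16 texture units per SMX and 8 ROPs per 64-bit memory partition as on the rest of the GK104 lineup:

\[
7 \times 192 = 1344 \ \text{cores}, \qquad 7 \times 16 = 112 \ \text{texture units}, \qquad 3 \times 64\,\text{bit} = 192\,\text{bit}, \qquad 3 \times 8 = 24 \ \text{ROPs}
\]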

To stay competitive from a memory capacity standpoint, NVIDIA needed to use 2GB of GDDR5 but the 192-bit interface presented a challenge. Had a standard layout been implemented, each of the three available controllers would have been populated with 512MB of memory, resulting in a somewhat paltry 1.5GB. Instead of going this route, the GTX 660 Ti uses a technology pioneered by NVIDIA’s GF116 and by extension the GTX 550 Ti: mixed memory allotments. This allows each channel to be populated with a different memory density and, through the use of proprietary dynamic load balancing, the three memory controllers on the GTX 660 Ti’s core can adapt their throughput according to the architecture’s needs. In this case, NVIDIA has equipped two controllers with the standard 2x 256MB allotment while the remaining controller receives a quartet of 256MB modules.
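Assuming the module counts above, the asymmetric arrangement still adds up to the full framebuffer:

\[
2 \times (2 \times 256\,\text{MB}) + 1 \times (4 \times 256\,\text{MB}) = 2048\,\text{MB}
\]

versus the 3 x 512MB = 1536MB a conventional layout would have provided.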

At first, the 192-bit spec may be a turn-off for some in comparison to the HD 7870’s 256-bit layout, but it has been paired up with the same 6Gbps GDDR5 modules found on the higher end card, partially mitigating the bandwidth loss from a narrower memory interface. Even with ultra fast GDDR5 memory, the GTX 660 Ti’s bandwidth is about 10% less than that of an HD 7870, so you can clearly see the importance of that extra 64-bit memory controller.
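As a quick sanity check, assuming an effective 6 Gbps data rate across all three 64-bit channels:

\[
\frac{6\,\text{Gbps} \times 192\,\text{bit}}{8\,\text{bits/byte}} = 144\,\text{GB/s}
\]

whereas the same memory on a full 256-bit bus would have delivered 192 GB/s.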

GTX-660-TI-17.jpg

Pricing is an extremely important aspect of any card with mass gaming market ambitions and the GTX 660 Ti continues the $100 step-down trend set by NVIDIA and AMD in this generation. As a solution that supposedly bridges the narrow gap between the HD 7870 and HD 7950, it receives an SRP of $299, which may force AMD to lower their prices again. This is slightly above the $249 launch price of NVIDIA’s own GTX 560 Ti but it lines up nearly perfectly with the often forgotten GTX 560 Ti 448, so the GTX 660 Ti should still present a great price / performance solution for gamers on a tighter budget. NVIDIA’s board partners will also be introducing pre-overclocked, custom designed SKUs with upgraded cooling solutions right alongside reference designs. These upgraded models should range in price from $309 to $329 and above depending upon features and clock speeds. However, for the purposes of this review, we will be looking at a reference clocked model.

While some feel they have been waiting overly long for the GTX 660 Ti to hit the market, the timing of its release is perfect for the all-important back to school buying season. We’ve also been told by numerous board partners that stock will be ready for purchase on launch day and, barring any massive upsurge in demand, there should be more than enough cards to go around. So for anyone who has been on the sidelines, waiting for NVIDIA’s capable Kepler architecture to reach more affordable levels, this could be the day you’ve been waiting for.
 
A Closer Look at the GeForce GTX 660 Ti 2GB

GTX-660-TI-8.jpg

Due to the similarity of the cores used in the GTX 670 and GTX 660 Ti, many of NVIDIA’s board partners will be interchanging one reference design with the other. In this case, we used EVGA’s Superclocked model as a starting point since it uses a standard layout with a few small differences such as slightly rounded edges on the heatsink shroud. Even with these minor changes, it still retains the reference length of 9 ½” and sets the template that other entry-level models will follow.

GTX-660-TI-10.jpg
GTX-660-TI-12.jpg

Even though the GTX 660 Ti’s TDP level is just 150W and it draws about 134W under typical load scenarios, NVIDIA has equipped it with a pair of 6-pin power connectors in order to provide adequate power overhead for overclocking. With a recommended power supply rating of 450W, this card should be an easy drop-in upgrade for the vast majority of systems out there.
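Some quick back-of-the-envelope math shows why two 6-pin connectors make sense here, assuming the standard 75W from the PCI-E slot and 75W per 6-pin connector:

\[
75\,\text{W} + 2 \times 75\,\text{W} = 225\,\text{W}
\]

which leaves roughly 75W of headroom above the card's 150W TDP for overclocking.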

As with the more expensive cards in NVIDIA’s current Kepler-based lineup, the GTX 660 Ti houses a pair of SLI connectors which allow up to three cards to run in tandem.

GTX-660-TI-15.jpg

Much like its predecessors, the GTX 660 Ti uses a standard Kepler backplate configuration, though EVGA has chosen to modify it somewhat with larger exhaust openings for increased airflow. A pair of DVI connectors plus single full sized DisplayPort and HDMI outputs are included, making it compatible with 3+1 NVIDIA Surround setups.

GTX-660-TI-11.jpg

Flipping the card over reveals a sight that should be familiar to anyone who has read our GTX 670 review. For the sake of cost savings and production simplicity, reference GTX 660 Ti cards will use the same PCB (with a few slight modifications) as NVIDIA’s GTX 670.

While the heatsink shroud continues out to this card’s full length, the actual PCB is much, much shorter. We should note however that many board partners will be eschewing this short PCB design and will instead be using their own, slightly longer versions. Now is also a great time to mention that we’ve already seen some preliminary designs of single slot GTX 660 Ti cards so stay tuned for our reviews of those in the coming months.

GTX-660-TI-13.jpg
GTX-660-TI-14.jpg

With the differences between EVGA’s version and a reference design in mind, there really isn’t much to distinguish the GTX 660 Ti from its big brother. However, the underside of this card’s PCB does have a single noticeable change: the memory IC placement has been rationalized for the GTX 660 Ti’s trio of controllers.
 

Under the GTX 660 Ti’s Heatsink

GTX-660-TI-1.jpg

Removing the heatsink on this card is actually a straightforward process that involves a few screws and some wiggling. This should be good news for anyone that wants to install an aftermarket cooler or a water block. As can be seen, the plastic shroud is designed to provide optimal airflow towards the fin assembly and then out through the primary exhaust port.

GTX-660-TI-2.jpg

Unfortunately, this isn’t what the reference GTX 660 Ti’s heatsink will actually look like. This is EVGA’s expanded design which is used for higher spec’d versions like the Superclocked so expect the one included on $299 cards to look identical to the slightly flimsy one found on the GTX 670.


We may be running the risk of sounding like a broken record here but even with the main cooling assembly removed, NVIDIA’s GTX 660 Ti looks much like the GTX 670. It uses a simple 4+2 phase PWM but has a slightly different Hynix branded 8-module memory layout which aligns with the cut down controller design. We can expect to see upgrades in this area from various board partners, some of which will release cards with 5+2 and even 6+2 PWM sections.
 

The SMX: Kepler’s Building Block

GTX-680-116.jpg

Much like Fermi, Kepler uses a modular architecture which is structured into dedicated, self-contained compute / graphics units called Streaming Multiprocessors, or in this case Extreme Streaming Multiprocessors. While the basic design and implementation principles may be the same as the previous generation’s (other than doubling up the parallel threading capacity, that is), several changes have been built into this version that help it further maximize performance and consume less power than its predecessor.

Due to die space limitations on the 40nm manufacturing process, the Fermi architecture had to cope with fewer CUDA cores, but NVIDIA offset this shortcoming by running those cores at a higher speed than the rest of the processing stages. The result was a 1:2 graphics-to-shader clock ratio that led to excellent performance but unfortunately high power consumption numbers.

As we already mentioned, the inherent efficiencies of TSMC’s 28nm manufacturing process have allowed Kepler’s SMX to take a different path by offering six times the number of processors while running their clocks at a 1:1 ratio with the rest of the core. So essentially we are left with core components that run at slower speeds, but in this case sheer volume makes up for, and indeed surpasses, any limitation. In theory this should lead to an increase in raw processing power for graphics intensive workloads and higher performance per watt, even though the CUDA cores’ basic functionality and throughput haven’t changed.
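To put that trade-off into rough numbers, assuming Fermi’s 32 cores per SM running at twice the core clock (as on the GF110):

\[
\frac{192 \times 1}{32 \times 2} = 3
\]

so, per core clock, each Kepler SMX offers roughly three times the shader throughput of a Fermi SM despite its lower shader frequency.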

Each SMX holds 192 CUDA cores along with 32 load / store units, which allows a total of 32 threads per clock to be processed. Alongside these core blocks are the warp schedulers and their associated dispatch units, which handle up to 64 concurrent warps (groups of 32 threads), while the primary register file currently sits at 65,536 x 32-bit. All of these numbers have been doubled over the previous generation to avoid causing bottlenecks now that each SMX’s CUDA core count is so high.
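In absolute terms, that register file is sizeable:

\[
65{,}536 \times 32\,\text{bit} = 256\,\text{KB per SMX}
\]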

GTX-680-117.jpg

NVIDIA’s ubiquitous PolyMorph geometry engine has gone through a redesign as well. Each engine still contains five stages from Vertex Fetch to the Stream Output which process data from the SMX they are associated with. The data then gets output to the Raster Engine within each Graphics Processing Cluster. In order to further speed up operations, data is dynamically load balanced and goes from one of eight PolyMorph engines to another through the on-die caching infrastructure for increased communication speed.

The main difference between the current and past generation PolyMorph engines boils down to data stream efficiency. The new “2.0” version in the Kepler core boasts primitive rates that are two times higher and, along with other improvements throughout the architecture, offers a fourfold increase in tessellation performance over the Fermi-based cores.

GTX-680-118.jpg

The SMX plays host to a dedicated caching network which runs parallel to the primary core stages in order to help store draw calls so they are not passed off through the card’s memory controllers, taking up valuable storage space. Not only does this help with geometry processing efficiency but GPGPU performance can also be drastically increased provided an API can take full advantage of the caching hierarchy.

As with Fermi, each one of Kepler’s SMX blocks has 64KB of shared, programmable on-chip memory that can be configured in one of three ways. It can either be laid out as 48KB of shared memory with 16KB of L1 cache, or as 16KB of shared memory with 48KB of L1 cache. Kepler adds another 32KB / 32KB mode which balances out the configuration for situations where the core may be processing graphics in parallel with compute tasks. This L1 cache is supposed to help with access to the on-die L2 cache as well as streamlining functions like stack operations and global loads / stores. However, in total, the GK104 has fewer SMXs than Fermi had SMs, which results in significantly less on-die memory. This could negatively impact compute performance in some instances.
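For developers, this split is exposed through CUDA’s standard cache configuration hints. The snippet below is a minimal sketch rather than production code (the scale kernel is a placeholder of our own making), but the runtime calls shown are the ones used to request each of the three layouts:

#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel: scales an array in place.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    // Request one of the three shared memory / L1 splits for this kernel:
    //   cudaFuncCachePreferShared -> 48KB shared memory, 16KB L1
    //   cudaFuncCachePreferL1     -> 16KB shared memory, 48KB L1
    //   cudaFuncCachePreferEqual  -> 32KB shared memory, 32KB L1 (the new Kepler mode)
    cudaFuncSetCacheConfig(scale, cudaFuncCachePreferEqual);

    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));

    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    printf("kernel finished\n");
    return 0;
}

Note that the runtime treats these settings as preferences rather than hard guarantees.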

Even though there haven’t been any fundamental changes in the way textures are handled across the Kepler architecture, each SMX receives a huge influx of texture units: 16, up from Fermi’s four per SM.
 

GPU Boost: Dynamic Clocking Comes to Graphics Cards

Turbo Boost was first introduced into Intel’s CPUs years ago and through a successive number of revisions, it has become the de facto standard for situation dependent processing performance. In layman’s terms Turbo Boost allows Intel’s processors to dynamically fluctuate their clock speeds based upon operational conditions, power targets and the demands of certain programs. For example, if a program only demanded a pair of a CPU’s six cores the monitoring algorithms would increase the clock speeds of the two utilized cores while the others would sit idle. This sets the stage for NVIDIA’s feature called GPU Boost.

GTX-680-110.gif

Before we go on, let’s explain one of the most important factors in determining how high a modern high end graphics card can clock: a power target. Typically, vendors like AMD and NVIDIA set this in such a way that ensures an ASIC doesn’t overshoot a given TDP value, putting undue stress upon its included components. Without this, board partners would have one hell of a time designing their cards so they wouldn’t overheat, pull too much power from the PWM or overload a PSU’s rails.

While every game typically strives to take advantage of as many GPU resources as possible, many don’t fully utilize every element of a given architecture. As such, some processing stages may sit idle while others are left to do the majority of rendering, post processing and other tasks. As in our Intel Turbo Boost example, this situation results in lower heat production and reduced power consumption, and will ultimately cause the GPU core to fall well short of its predetermined power (or TDP) target.

In order to take advantage of this, NVIDIA has set the “base clock” (or reference clock) in line with a worst case scenario, which allows for a significant amount of overhead in typical games. This is where GPU Boost gets worked into the equation. Through a combination of software and hardware monitoring, GPU Boost fluctuates clock speeds in an effort to run as close as possible to the power target: the GTX 680’s TDP of 195W, the GTX 670’s TDP of 170W and the 150W allotted to the GTX 660 Ti. When gaming, this monitoring algorithm will typically result in a core speed that is higher than the stated base clock.

GTX-680-111.gif

Unfortunately, things do get a bit complicated since we are now talking about two clock speeds, one of which may vary from one application to another. The “Base Clock” is the minimum speed at which the core is guaranteed to run, regardless of the application being used. Granted, there may be some power viruses out there which will push the card beyond even these limits but the lion’s share of games and even most synthetic applications will have no issue running at or above the Base Clock.

The “Boost Clock” meanwhile is the typical speed at which the core will run in non-TDP limited applications. As you can imagine, depending on the core’s operational proximity to the power target, this value will fluctuate to higher and lower levels. However, NVIDIA likens the Boost Clock rating to a happy medium that nearly every game will achieve, at a minimum. For those of you wondering, both the Base Clock and the Boost Clock will be advertised on all Kepler-based cards; on the GTX 680 the values are 1006MHz and 1058MHz respectively, while the GTX 670 and GTX 660 Ti run at 915MHz / 980MHz.
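In percentage terms, the default offsets are quite modest:

\[
\frac{1058}{1006} \approx 1.05, \qquad \frac{980}{915} \approx 1.07
\]

so the GTX 680 carries roughly a 5% typical boost over its base clock while the GTX 670 and GTX 660 Ti are rated for about 7%.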

GPU Boost differs from AMD’s PowerTune in a number of ways. While AMD sets their base clock off of a typical in-game TDP scenario and throttles performance if an application exceeds these predetermined limits, NVIDIA has taken a more conservative approach to clock speeds. Their base clock is the minimum level at which their architecture will run under worst case conditions, and this allows for a clock speed increase in most games rather than throttling.
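Conceptually, the behaviour described above boils down to a simple feedback loop: measure board power, compare it against the target and nudge the clock up or down by one bin. The sketch below is purely illustrative; the 13MHz step size, clock ceiling and structure are our own assumptions for the example, not NVIDIA’s actual firmware logic:

#include <algorithm>

// Illustrative GPU Boost-style control loop (not NVIDIA's actual algorithm).
// Assumed values: 150W power target (GTX 660 Ti TDP), 915MHz base clock,
// 13MHz adjustment bins and an evaluation interval of roughly 100ms.
struct BoostController {
    double power_target_w = 150.0;
    int    base_clock_mhz = 915;
    int    max_clock_mhz  = 1100;   // arbitrary ceiling for the example
    int    step_mhz       = 13;
    int    clock_mhz      = 915;

    // Called every ~100ms with the latest measured board power.
    void update(double measured_power_w) {
        if (measured_power_w < power_target_w)
            clock_mhz += step_mhz;   // headroom left: boost one bin
        else
            clock_mhz -= step_mhz;   // over the target: back off one bin

        // Never drop below the advertised base clock or exceed the ceiling.
        clock_mhz = std::clamp(clock_mhz, base_clock_mhz, max_clock_mhz);
    }
};

The real controller obviously also weighs temperature and voltage, which is why lower temperatures can buy a few extra bins, as we discuss below.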

In order to give you a better idea of how GPU Boost operates, we logged clock speeds and power use in Dirt 3 and 3DMark11 using EVGA’s new Precision X utility.

GTX-680-121.gif

GTX-680-122.gif

Example w/GTX 680

In both of the situations above, the clock speeds tend to fluctuate as the core moves closer to and further away from its maximum power limit. Since the reaction time of the GPU Boost algorithm is about 100ms, there are situations when clock speeds don’t line up with power use, causing a minor peak or valley, but for the most part both run in perfect harmony. This is most evident in the 3DMark11 tests where we see the GK104’s ability to run slightly above the base clock in a GPU intensive test and then boost up to even higher levels in the Combined Test, which doesn’t stress the architecture nearly as much.

GTX-680-93.jpg

Example w/GTX 680

According to NVIDIA, lower temperatures could promote higher GPU Boost clocks but even by increasing our sample’s fan speed to 100%, we couldn’t achieve higher Boost speeds. We’re guessing that high end forms of water cooling would be needed to give this feature more headroom; according to some board partners, benefits could be seen once temperatures drop below 70 degrees Celsius. However, the default GPU Boost / power offset NVIDIA built into their core seems to leave more than enough wiggle room to ensure that all reference-based cards should behave in the same manner.

There may be a bit of variance from the highest to the lowest leakage parts but the resulting drop-off in Boost clocks will never be noticeable in-game. This is why the Boost Clock is so conservative; it strives to stay as close as possible to a given point so power consumption shouldn’t fluctuate wildly from one application to another. But will this cause performance differences from one reference card to another? Absolutely not, unless they are running at abnormally hot or very cool temperatures.
 

Introducing TXAA

As with every other new graphics architecture that has launched in the last few years, NVIDIA will be launching a few new features alongside Kepler. In order to improve image quality in a wide variety of scenarios, FXAA has been added as an option to NVIDIA’s control panel, making it applicable to every game. For those of you who haven’t used it, FXAA is a form of post processing anti aliasing which offers image quality that’s comparable to MSAA but at a fraction of the performance cost.

GTX-680-103.jpg

Another item that has been added is a new anti aliasing mode called TXAA. TXAA uses hardware multisampling alongside a custom software-based resolve filter for a smoother overall image, and adds an optional temporal component for even higher in-game image quality.

According to NVIDIA, their TXAA 1 mode offers comparable performance to 2xMSAA but results in much higher edge quality than 8xMSAA. TXAA2 meanwhile steps things up to the next level by offering image enhancements that can’t be equaled by MSAA but once again the performance impact is negligible when compared against higher levels of multisampling.

GTX-680-113.gif

From the demos we were shown, TXAA has the ability to significantly decrease the aliasing in scenes. Indeed, it looks like the developer market is trying to move away from inefficient implementations of multi sample anti aliasing and has instead started gravitating towards higher performance alternatives like MLAA, FXAA and now possibly TXAA.

There is however one catch: TXAA cannot be enabled in NVIDIA’s control panel. Instead, game engines have to support it and developers will be implementing it within the in-game options. Presently the only title that supports it is The Secret World MMORPG, but additional titles should become available soon.
 

Test System & Setup / Benchmark Sequences

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: ASUS P8Z68-V PRO Gen3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 305.37 Beta
AMD 12.7 Beta

***IMPORTANT NOTE: The GTX 660 Ti used in the following tests is an EVGA Superclocked version that has received a BIOS flash with reference specifications. Unfortunately, simply downclocking a Kepler-based pre-overclocked card MAY NOT result in performance that replicates a reference design, as the only surefire way of modifying the power limits of pre-overclocked cards is through a modified BIOS.

Application Benchmark Information:
Note: In all instances, in-game sequences were used. The videos of the benchmark sequences have been uploaded below.


Batman: Arkham City

<object width="640" height="480"><param name="movie" value="http://www.youtube.com/v/Oia84huCvLI?version=3&hl=en_US&rel=0"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/Oia84huCvLI?version=3&hl=en_US&rel=0" type="application/x-shockwave-flash" width="640" height="480" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Battlefield 3

<object width="640" height="480"><param name="movie" value="http://www.youtube.com/v/i6ncTGlBoAw?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/i6ncTGlBoAw?version=3&hl=en_US" type="application/x-shockwave-flash" width="640" height="480" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Crysis 2

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/Bc7_IAKmAsQ?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/Bc7_IAKmAsQ?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Deus Ex Human Revolution

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/GixMX3nK9l8?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/GixMX3nK9l8?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Dirt 3

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/g5FaVwmLzUw?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/g5FaVwmLzUw?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Metro 2033

<object width="480" height="360"><param name="movie" value="http://www.youtube.com/v/8aZA5f8l-9E?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/8aZA5f8l-9E?version=3&hl=en_US" type="application/x-shockwave-flash" width="480" height="360" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Shogun 2: Total War

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/oDp29bJPCBQ?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/oDp29bJPCBQ?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Skyrim

<object width="640" height="480"><param name="movie" value="http://www.youtube.com/v/HQGfH5sjDEk?version=3&hl=en_US&rel=0"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/HQGfH5sjDEk?version=3&hl=en_US&rel=0" type="application/x-shockwave-flash" width="640" height="480" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Wargame: European Escalation

<object width="640" height="480"><param name="movie" value="http://www.youtube.com/v/ztXmjZnWdmk?version=3&hl=en_US&rel=0"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/ztXmjZnWdmk?version=3&hl=en_US&rel=0" type="application/x-shockwave-flash" width="640" height="480" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Witcher 2 v2.0

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/tyCIuFtlSJU?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/tyCIuFtlSJU?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​

*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings
 
3DMark 11 (DX11)

3DMark 11 is the latest in a long line of synthetic benchmarking programs from the Futuremark Corporation. This is their first foray into the DX11 rendering field and the result is a program that incorporates all of the latest techniques into a stunning display of imagery. Tessellation, depth of field, HDR, OpenCL physics and many others are on display here. In the benchmarks below we have included the results (at default settings) for both the Performance and Extreme presets.


Performance Preset

GTX-660-TI-30.jpg


Extreme Preset

GTX-660-TI-31.jpg
 

Batman: Arkham City (DX11)

Batman: Arkham City is a great looking game when all of its detail levels are maxed out but it also takes a fearsome toll on your system. In this benchmark we use a simple walkthrough that displays several in game elements. The built-in benchmark was avoided like the plague simply because the results it generates do not accurately reflect in-game performance.

1920 x 1200

GTX-660-TI-32.jpg


GTX-660-TI-33.jpg


2560 x 1600

GTX-660-TI-34.jpg


GTX-660-TI-35.jpg
 

Battlefield 3 (DX11)

For this benchmark, we used a sequence from the Rock and a Hard Place mission. The results may seem lower than normal and this is due to the fact that, after playing through the game multiple times, this one area was found to be the most demanding on the GPU. As with all of the tests, we try to find a worst case scenario in order to ensure a given card can properly play through the whole game instead of just a “typical” section.

1920 x 1200

GTX-660-TI-37.jpg


GTX-660-TI-38.jpg


2560 x 1600

GTX-660-TI-39.jpg


GTX-660-TI-40.jpg
 
