
NVIDIA GeForce GTX 690 Review

SKYMTL
HardwareCanuck Review Editor

Up to this point, the GTX 690’s story has played out like a James Bond movie. Its specifications and final design were so secret that NVIDIA employees intimately involved in the project couldn’t exchange emails about some of its details. Instead, many aspects of the GTX 690 had to be discussed in person or not at all, lest a stray memo get sent to the wrong external contacts. Board partners (some of whom tend to leak more than a fifty year old rowboat) only had a name to go by and didn’t even know what the card looked like. In this industry, secrecy is paramount and, to their credit, NVIDIA kept everyone guessing about what their next Kepler-based card would be.

This past Sunday, the rumors were put to rest as NVIDIA introduced the GTX 690. Sporting a pair of GK104 cores and more rendering power than any other card, it made one thing abundantly clear: this is how NVIDIA will take back the crown from AMD after more than two years of playing second fiddle. Granted, the GTX 590 offered great framerates and blazed a quieter, more refined trail for dual GPU cards, but it couldn’t consistently beat the HD 6990, a graphics card that has held the title of world’s fastest for thirteen months.

Believe it or not, the reasoning behind this lack of a top end halo product was straightforward: while the previous generation Fermi-based chips were impressively powerful, they were also power hungry and could heat a small house if given the chance. This limited NVIDIA’s options when trying to incorporate a pair of relatively inefficient cores onto a single PCB and posed a challenge that AMD is surely facing now with the GCN architecture. Kepler, on the other hand, focuses primarily upon architectural efficiency through the use of TSMC’s 28nm manufacturing process and by cutting away certain elements that were built into Fermi but aren’t of use to gamers buying GeForce-branded graphics cards.

NV-GTX-690-89.jpg

The GTX 690 has benefited from NVIDIA’s new engineering approach since the Kepler-based GK104 core uses a fraction of the power and produces significantly less heat than its predecessor. As a result, the GTX 690 plays host to a pair of fully enabled GK104 cores, each with 1536 CUDA cores, 128 texture units and 32 ROPs, topped off by 4GB of GDDR5 (2GB per GPU) operating at 6Gbps through two 256-bit wide interfaces. These specifications should look familiar since they mirror those found on the GTX 680, making this new card one of the only dual GPU solutions to use fully enabled cores.
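
To put those memory numbers in perspective, here is a minimal back-of-the-envelope sketch using the standard GDDR5 bandwidth formula; the 6Gbps and 256-bit figures come straight from the specifications above.

```python
# Per-GPU memory bandwidth implied by the stated specs:
# effective data rate (Gbps per pin) x bus width (bits) / 8 bits per byte.
data_rate_gbps = 6        # effective GDDR5 data rate
bus_width_bits = 256      # memory interface width per GPU

per_gpu_gbs = data_rate_gbps * bus_width_bits / 8
print(f"Per GPU: {per_gpu_gbs:.0f} GB/s")          # 192 GB/s
print(f"Card total: {2 * per_gpu_gbs:.0f} GB/s")   # 384 GB/s across both GPUs
```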

While the GTX 690 may use two GK104 cores, NVIDIA has made some minor clock speed adjustments in order to meet a reasonable TDP value. The 1006MHz Base Clock (the core’s lowest frequency when running a 3D application) on the GTX 680 is now scaled back to a more modest 915MHz and the Boost Clock is down to 1015MHz from 1058MHz. The memory speeds have remained unchanged though. However, since the cores have enough TDP overhead and most games won’t push them to the limit, they will likely operate at or above the Boost Clock in most applications, resulting in performance that closely mirrors two GTX 680s.
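
For a rough sense of how modest those cuts actually are, the percentages work out as follows (simple arithmetic on the clocks quoted above):

```python
# Clock speed deltas between the GTX 680 and the GTX 690 (per GPU), in MHz.
gtx680 = {"base": 1006, "boost": 1058}
gtx690 = {"base": 915,  "boost": 1015}

for clock in ("base", "boost"):
    reduction = 100 * (1 - gtx690[clock] / gtx680[clock])
    print(f"{clock} clock: {gtx690[clock]} MHz ({reduction:.1f}% lower than the GTX 680)")
# base clock: ~9.0% lower, boost clock: ~4.1% lower
```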

NV-GTX-690-19.jpg

Power saving measures on the GTX 690 may not be all that extreme but, when added to the card’s 10-phase all digital PWM and copper-infused PCB, they have a significant impact upon overall power consumption and heat production. Instead of doubling up the GTX 680’s TDP of 195W (a number that’s already quite low in today’s high end GPU market), the GTX 690 boasts a TDP of just 300W. To give you an idea of where this stands, a single GTX 295 drew 295W while the GTX 590 sucked down an impressive 365W and the HD 6990 required about 300W.

NVIDIA hasn’t stopped at GTX 680 SLI-like performance either. Magnesium alloy, a complete lack of plastic, an LED illuminated logo and other details give the GTX 690 a build quality that befits an ultra high end product.

With leading edge framerates and a design that’s bound to turn heads, this card pushes the limit in nearly every way, particularly from a pricing standpoint. At a stratospheric $999 the GTX 690 certainly isn’t an impulse buy and yet, once you see its performance, you may think twice about dismissing it based upon price alone.

NV-GTX-690-18.jpg
 
A Closer Look at the GeForce GTX 690

NV-GTX-690-1.jpg

When we posted our preview of the GTX 690, the general feedback on the HWC forums was straightforward: this is one sexy card. The combination of silver and matte black along with a centrally mounted fan and a few subtle touches of NVIDIA green seems to have struck a chord with enthusiasts. But the care and subtle craftsmanship that have gone into this card need to be seen firsthand to be believed.


The exterior shroud above the two heatsinks is made out of cast aluminum and plated with a matte silver trivalent chromium finish which allows self-healing of minor scratches to prevent oxidation. Since the cast aluminum is quite thick, it can effectively act as a secondary heatsink without causing a potentially dangerous spike in exterior casing temperatures.

Since NVIDIA wanted to kick cheap, vibration prone plastic to the curb, the GTX 690’s fan housing has been fabricated out of injection molded magnesium alloy. This not only provides a high degree of durability, but the blackened alloy also acts as an acoustic dampening material, further reducing fan noise.

To top off an already impressive design, a pair of heat resistant polycarbonate shrouds has been installed to give you a picture window view into the inner workings of the internal heatsinks. In addition, NVIDIA and GTX 690 logos have been laser etched into the card’s topmost cast aluminum pieces.

NVIDIA claims their design for the GTX 690 is groundbreaking and meticulous in its approach. We have to agree. The material gaps are surgically thin, minor rattles normally associated with plastic shrouds are completely absent and the whole design feels like it was engineered by a group of scientists in a dark, secret room in Lockheed Martin’s Skunk Works. The GTX 690 costs a grand and with this kind of design, it’s hard not to see why.

NV-GTX-690-6.jpg
NV-GTX-690-17.jpg

NVIDIA has also incorporated a glowing logo into the GTX 690’s side which has been laser cut into the fan housing and uses an LED backlight for illumination. Supposedly, software like EVGA’s Precision and MSI’s AfterBurner will be able to control this LED and how it behaves. One way or another, it looks great and should be the centerpiece of any windowed case.

NV-GTX-690-8.jpg
NV-GTX-690-7.jpg

The GTX 690 is equipped with two 8-pin connectors, providing more than enough current without drawing excess power through the PCI-E slot. This layout should also allow for some overhead just in case the end user wants to overclock their card for even more performance.
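
As a rough sketch of the power budget this layout provides, assuming the standard PCI-E limits of 75W from the slot and 150W per 8-pin connector:

```python
# Available board power versus the GTX 690's 300W TDP.
slot_w = 75              # PCI-E x16 slot
eight_pin_w = 150        # per 8-pin PCI-E connector
connectors = 2

available_w = slot_w + connectors * eight_pin_w
tdp_w = 300
print(f"Available: {available_w}W, TDP: {tdp_w}W, headroom: {available_w - tdp_w}W")
# Available: 375W, TDP: 300W, headroom: 75W
```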

With two GPU cores already sitting at the heart of this card, a single SLI connector has been included, enabling Quad SLI should you feel a burning need to burn two grand.

NV-GTX-690-11.jpg

The rear panel connectors are a departure from the GTX 680’s layout since this card natively supports three primary dual link DVI-equipped displays. There is also a mini DisplayPort connector for a fourth accessory display if need be. Something not discussed in NVIDIA’s documentation is the possibility of this card running up to SIX displays (this technically isn’t supported through the drivers at this point but could be included sometime in the future) via the combination of a DisplayPort hub coupled with the three DVI outputs.


With a length of about 11”, the GTX 690 should have no issue fitting in any ATX case currently on the market. This does make it longer than the GTX 680 but it still looks downright tiny when placed next to an HD 6990.
 
Diving Under the Heatsink

NV-GTX-690-15.jpg

Popping off the GTX 690’s shroud, we’re greeted by a relatively straightforward design consisting of two vapor chamber heatsinks topped with large fin arrays. These fins are designed so they funnel and accelerate the fan’s airflow while also eliminating noise-causing areas of turbulence. NVIDIA has added a specially formed secondary heatsink that covers the memory modules, VRM and other components.

The heatsink layout does tend to throw half of its hot exhaust air into your case but we can’t see this being too much of an issue for enthusiasts with high end enclosures.

NV-GTX-690-16.jpg

Below the heatsinks is the pair of GK104 cores, which are surrounded by metal stiffening plates for added durability. Each of these is coupled up with eight 256MB GDDR5 memory modules and a 5-phase high efficiency, high output PWM.

The real star of this show is the PLX chip sitting between the two GPUs. On past dual GPU cards, NVIDIA used their own NF200 bridge but it was only compatible with a PCI-E 2.0 interface. The PLX PEX 8747 meanwhile is equipped with 48 PCI-E 3.0 lanes, making it a perfect way for NVIDIA to squeeze every last drop of performance out of this design. Of those 48 lanes, 32 are split evenly between the two graphics cores while the remaining 16 form the primary interconnect between the GTX 690 and the motherboard’s PCI-E slot. Naturally, the bridge chip is fully compatible with previous PCI-E certifications as well, so you won’t have any problem using the GTX 690 on a PCI-E 2.0 or (heaven forbid) a Gen 1.1 compatible motherboard.
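
A simple sketch of that lane allocation follows; the roughly 1 GB/s per lane, per direction figure is an assumption drawn from the published PCI-E 3.0 spec, not a number NVIDIA quotes.

```python
# How the PEX 8747's 48 PCI-E 3.0 lanes are divided on the GTX 690.
lanes = {"GPU 0": 16, "GPU 1": 16, "host uplink": 16}
assert sum(lanes.values()) == 48

per_lane_gbs = 0.985   # approx. usable PCI-E 3.0 bandwidth per lane, per direction
for port, width in lanes.items():
    print(f"{port}: x{width} -> ~{width * per_lane_gbs:.1f} GB/s each direction")
```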

One of the most important aspects of this bridge chip is its speed. Since it takes the place of the sometimes inefficient, latency-laden bus / CPU handoff process when dual individual cards are installed, the GTX 690’s internal SLI connection is able to benefit from a substantially quicker interconnect speed. This is why NVIDIA claims the underclocked card can match up almost evenly against two GTX 680s.

NV-GTX-690-9.jpg

The GTX 690’s underside doesn’t show us anything out of the ordinary since all of the memory modules and primary componentry are on the card’s other side. There are, however, sixteen secondary power components mounted directly opposite the VRM modules on this side of the PCB.
 
GPU Boost: Dynamic Clocking Comes to Graphics Cards

Turbo Boost was first introduced into Intel’s CPUs years ago and through a successive number of revisions, it has become the de facto standard for situation dependent processing performance. In layman’s terms Turbo Boost allows Intel’s processors to dynamically fluctuate their clock speeds based upon operational conditions, power targets and the demands of certain programs. For example, if a program only demanded a pair of a CPU’s six cores the monitoring algorithms would increase the clock speeds of the two utilized cores while the others would sit idle. This sets the stage for NVIDIA’s new feature called GPU Boost.

GTX-680-110.gif

Before we go on, let’s explain one of the most important factors in determining how high a modern high end graphics card can clock: a power target. Typically, vendors like AMD and NVIDIA set this in such a way that ensures an ASIC doesn’t overshoot a given TDP value, putting undue stress upon its included components. Without this, board partners would have one hell of a time designing their cards so they wouldn’t overheat, pull too much power from the PWM or overload a PSU’s rails.

While every game typically strives to take advantage of as many GPU resources as possible, many don’t fully utilize every element of a given architecture. As such, some processing stages may sit idle while others are left to do the majority of rendering, post processing and other tasks. As in our Intel Turbo Boost example, this situation results in lower heat production and reduced power consumption, and ultimately causes the GPU core to fall well short of its predetermined power (or TDP) target.

In order to take advantage of this, NVIDIA has set their “base clock” (or reference clock) in line with a worst case scenario, which allows for a significant amount of overhead in typical games. This is where the so-called GPU Boost gets worked into the equation. Through a combination of software and hardware monitoring, GPU Boost fluctuates clock speeds in an effort to run as close as possible to the GK104’s TDP of 195W. When gaming, this monitoring algorithm will typically result in a core speed that is higher than the stated base clock.
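
As a highly simplified sketch of the kind of monitoring loop described above, using the GTX 680’s numbers: the 13MHz step size and the read_board_power() / set_core_clock() helpers are illustrative placeholders, not NVIDIA’s actual implementation.

```python
import time

BASE_CLOCK_MHZ = 1006    # guaranteed minimum under load (GTX 680)
POWER_TARGET_W = 195     # GK104 TDP / power target
STEP_MHZ = 13            # boost bin granularity -- an assumption for illustration

def boost_loop(read_board_power, set_core_clock):
    """Nudge the clock up while power stays under target, back off when it overshoots."""
    clock = BASE_CLOCK_MHZ
    while True:
        if read_board_power() < POWER_TARGET_W:
            clock += STEP_MHZ
        elif clock > BASE_CLOCK_MHZ:
            clock -= STEP_MHZ
        set_core_clock(clock)
        time.sleep(0.1)   # roughly the ~100ms reaction time noted further down
```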

GTX-680-111.gif

Unfortunately, things do get a bit complicated since we are now talking about two clock speeds, one of which may vary from one application to another. The “Base Clock” is the minimum speed at which the core is guaranteed to run, regardless of the application being used. Granted, there may be some power viruses out there which will push the card beyond even these limits but the lion’s share of games and even most synthetic applications will have no issue running at or above the Base Clock.

The “Boost Clock” meanwhile is the typical speed at which the core will run in non-TDP limited applications. As you can imagine, depending on the core’s operational proximity to the power target, this value will fluctuate to higher and lower levels. However, NVIDIA likens the Boost Clock rating to a happy medium that nearly every game will achieve, at a minimum. For those of you wondering, both the Base Clock and the Boost Clock will be advertised on all Kepler-based cards and on the GTX 680 the values are 1006MHz and 1058MHz respectively.

GPU Boost differs from AMD’s PowerTune in a number of ways. While AMD sets their base clock off of a typical in-game TDP scenario and throttles performance if an application exceeds these predetermined limits, NVIDIA has taken a more conservative approach to clock speeds. Their base clock is the minimum level at which their architecture will run under worst case conditions, and this allows for a clock speed increase in most games rather than throttling.

In order to better give you an idea of how GPU Boost operates, we logged clock speeds and Power Use in Dirt 3 and 3DMark11 using EVGA’s new Precision X utility.

GTX-680-121.gif

GTX-680-122.gif

Example w/GTX 680

In both of the situations above the clock speeds tend to fluctuate as the core moves closer to and further away from its maximum power limit. Since the reaction time of the GPU Boost algorithm is about 100ms, there are situations when clock speeds don’t line up with power use, causing a minor peak or valley but for the most part both run in perfect harmony. This is most evident in the 3DMark11 tests where we see the GK104’s ability to run slightly above the base clock in a GPU intensive test and then boost up to even higher levels in the Combined Test which doesn’t stress the architecture nearly as much.

GTX-680-93.jpg

Example w/GTX 680

According to NVIDIA, lower temperatures could promote higher GPU Boost clocks but even by increasing our sample’s fan speed to 100%, we couldn’t achieve higher Boost speeds. We’re guessing that high end forms of water cooling would be needed to give this feature more headroom and, according to some board partners, benefits could be seen once temperatures drop below 70 degrees Celsius. However, the default GPU Boost / power offset NVIDIA built into their core seems to leave more than enough wiggle room to ensure that all reference-based cards should behave in the same manner.

There may be a bit of variance from the highest to the lowest leakage parts but the resulting dropoff in Boost clocks will never be noticeable in-game. This is why the boost clock is so conservative; it strives to stay as close as possible to a given point so power consumption shouldn’t fluctuate wildly from one application to another. But will this cause performance differences from one reference card to another? Absolutely not unless they are running at abnormally hot or very cool temperatures.
 

Smoother Gaming Through Adaptive VSync

In a market that seems eternally obsessed with high framerates, artificially capping performance by enabling Vertical Synchronization (or VSync) may seem like a cardinal sin. In simplified terms, VSync sets the framerate within games to the refresh rate of the monitor, which means games running on 60Hz monitors will achieve framerates no higher than 60 FPS. 120Hz panels raise this cap to 120 FPS but monitors sporting the technology are few in number and they usually come with astronomical price points.

GTX-680-101.jpg

With today’s graphics cards pushing boundaries that weren’t even dreamed of a few years ago, gamers usually want to harness every last drop of their latest purchase. Because of this, and due to the input lag issues VSync can cause, many gamers choose to disable it altogether. However, there are some noteworthy issues associated with running games at high framerates, asynchronously to the vertical refresh rate of most monitors.

Without VSync enabled, games will flow more naturally, average framerates are substantially higher and commands will be registered onscreen in short order. However, as the framerates run outside of the monitor’s refresh rate tearing begins to occur, decreasing image quality and potentially leading to unwanted distractions. Tearing happens when fragments of multiple frames are displayed on the screen as the monitor can’t keep up with the massive amount of rendered information being pushed through at once.

GTX-680-100.jpg

For some, VSync can be a saving grace since it eliminates horizontal tearing but, other than the aforementioned input lag, there is one other major drawback: stuttering. Remember, syncing up the monitor with your game holds both the refresh rate and the framerate at 60. However, some scenes can cause framerates to droop well below the optimal 60 mark, which leads to some frames being “missed” by the 60Hz monitor refresh and causes a slight stuttering effect. Basically, as the monitor refreshes itself 60 times every second, the lower framerate causes it to momentarily display 30, 20, 15 (or another whole fraction of 60) frames per second.
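
A minimal illustration of that quantization effect, assuming a strict double-buffered 60Hz swap (the specific framerates below are just examples):

```python
import math

REFRESH_HZ = 60
refresh_interval_ms = 1000 / REFRESH_HZ    # ~16.7ms per refresh

for render_fps in (60, 55, 45, 25):
    frame_time_ms = 1000 / render_fps
    # A frame can only be shown on a refresh boundary, so it occupies a whole
    # number of refresh intervals.
    intervals = math.ceil(frame_time_ms / refresh_interval_ms)
    displayed_fps = REFRESH_HZ / intervals
    print(f"rendered at {render_fps} FPS -> displayed at {displayed_fps:.0f} FPS")
# 55 and 45 FPS both collapse to 30 FPS on screen; 25 FPS collapses to 20 FPS.
```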

GTX-680-102.jpg

Through Adaptive VSync, NVIDIA now gives users the best of both worlds by capping framerates at the screen’s refresh rate but temporarily disabling synchronization whenever framerate droops are detected. This boosts framerates for as long as needed before once again enabling VSync when performance climbs back to optimal levels. It should virtually eliminate visible stutter (even though some will still occur as the algorithm switches over) and improve overall framerates while still maintaining the tear-free experience normally associated with VSync.
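
In rough pseudocode terms, the per-frame decision boils down to something like the sketch below; the set_vsync() call and the framerate measurement are hypothetical placeholders rather than NVIDIA’s driver internals.

```python
REFRESH_HZ = 60

def adaptive_vsync(measured_fps, set_vsync):
    """Keep VSync on while the game can sustain the refresh rate; release it otherwise."""
    if measured_fps >= REFRESH_HZ:
        set_vsync(True)    # cap at 60 FPS, no tearing
    else:
        set_vsync(False)   # present frames immediately instead of dropping to 30 FPS
```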

GTX-680-120.jpg

Adaptive VSync can be enabled in the driver’s control panel but will only be available starting with the 300.xx-series driver stack. For this technology to be effective, all VSync changes should be made in the control panel while VSync needs to be disabled within any in-game graphics menu. There is also a “Half Refresh Rate” option that locks framerates to 30 FPS for highly demanding games or for graphics cards that can’t quite hit the 60 FPS mark.

So what kind of effect does Adaptive VSync have upon a typical gaming experience? We used it in Batman: Arkham City and Dirt 3 to find out.

GTX-680-80.jpg

GTX-680-81.jpg

Examples w/GTX 680

The results were certainly definitive, at least in the case of Batman: Arkham City. In it, the framerates were significantly more consistent, with Adaptive VSync effectively eliminating the peaks and valleys normally associated with a highly demanding game. Dirt 3, on the other hand, doesn’t benefit from this technology in as overt a manner but there were still plenty of instances where the framerates were smoothed out so they didn’t dip quite as far into negative territory.

The graphs above only tell half the story though since the real impact of Adaptive VSync can only be experienced when actually playing a game live. Stuttering will become nearly nonexistent and the difference between it being enabled and disabled really is like night and day. The term “smooth as a baby’s bottom” comes to mind. Unfortunately, NVIDIA hasn’t quite found a way to eliminate VSync’s usual input lag issues but turning on Triple Buffering within the control panel can help mask these problems.
 

Introducing TXAA

As with every other new graphics architecture that has launched in the last few years, NVIDIA is introducing a few new features alongside Kepler. In order to improve image quality in a wide variety of scenarios, FXAA has been added as an option to NVIDIA’s control panel, making it applicable to every game. For those of you who haven’t used it, FXAA is a form of post processing anti aliasing which offers image quality that’s comparable to MSAA but at a fraction of the performance cost.

GTX-680-103.jpg

Another addition is a new anti aliasing mode called TXAA. TXAA uses hardware multisampling alongside a custom software-based resolve filter for a sense of smoothness, and adds an optional temporal component for even higher in-game image quality.

According to NVIDIA, their TXAA 1 mode offers comparable performance to 2xMSAA but results in much higher edge quality than 8xMSAA. TXAA 2 meanwhile steps things up to the next level by offering image enhancements that can’t be equaled by MSAA but, once again, the performance impact is negligible when compared against higher levels of multisampling.

GTX-680-113.gif

From the demos we were shown, TXAA has the ability to significantly decrease the aliasing in a scene. Indeed, it looks like the developer market is trying to move away from inefficient implementations of multi sample anti aliasing and has instead started gravitating towards higher performance alternatives like MLAA, FXAA and now possibly TXAA.

There is however one catch: TXAA cannot be enabled in NVIDIA’s control panel. Instead, game engines have to support it and developers will be implementing it within the in-game options. Presently there isn’t a single title on the market that supports TXAA but that should change over the next 12 months. Once available, it will be backwards compatible with the GTX 400 and GTX 500 series GPUs as well.
 

NVIDIA Surround Improvements

When it was first released, many thought of NVIDIA’s Surround multi monitor technology as nothing more than a way to copy AMD’s competing Eyefinity. Since then it has become much more, with NVIDIA rolling out near seamless support for stereoscopic 3D Vision Surround while gradually improving performance and compatibility through constant driver updates. The one thing they were missing was the ability to run more than two monitors off of a single core graphics card. Well, Kepler is about to change that.

GTX-680-104.jpg

NVIDIA has thoroughly revised their display engine so it can output signals to four monitors simultaneously. This means a trio of monitors can be used alongside a fourth “accessory” display, allowing you to game in Surround on the three primary screens while the fourth acts as a location for email, instant messaging and anything else you may want to keep track of. There are some Windows-related limitations when running 3D Vision since the fourth panel won’t be able to display a 2D image in parallel with stereoscopic content, but running your game in windowed mode should alleviate this issue.

NVIDIA has also added a simple yet handy feature to Surround: confining the Windows taskbar to the center panel. This means all of your core functionality can stay in one area without having to move the cursor across three monitors to interact with certain items. Unfortunately, for the time being there isn’t any way to move the taskbar, but NVIDIA may implement this as an option in a later release and the ability to span the taskbar across all monitors is still available.

GTX-680-105.jpg

Bezel correction is an integral part of the Surround experience since it provides continuous, break-free images from one screen to the next. However, it does tend to hide portions of the image as it compensates for the bezels’ thickness, sometimes leading to in-game menus getting cut off. The new Bezel Peeking feature allows gamers to temporarily disable bezel correction by pressing CTRL+ALT+B in order to see and interact with anything being hidden, and the corrective measures can be enabled again without exiting the application.
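
For reference, here is a rough sketch of the arithmetic behind bezel correction on a hypothetical 3x1920x1080 Surround setup; the 60-pixel figure per bezel gap is purely illustrative.

```python
# Bezel correction widens the effective desktop by the pixels "hidden"
# behind each pair of adjoining bezels.
panels = 3
panel_width_px = 1920
bezel_gap_px = 60            # illustrative value for one pair of adjoining bezels

uncorrected = panels * panel_width_px
corrected = uncorrected + (panels - 1) * bezel_gap_px
print(f"Uncorrected width: {uncorrected} px")     # 5760 px
print(f"Bezel corrected width: {corrected} px")   # 5880 px
```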

GTX-680-106.jpg

One major complaint from gamers who use Surround is the wide array of unused and sometimes unusable resolutions that Windows exposes to games. NVIDIA has addressed this by adding a Custom Resolutions option to their control panel so users can select only the resolutions they want displayed in games.
 

Test System & Setup / Benchmark Sequences

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: ASUS P8Z68-V PRO Gen3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 301.33 Beta (GTX 680 & GTX 690)
AMD 12.4 WHQL + CAP 12.3
NVIDIA 295.73 WHQL

Application Benchmark Information:
Note: In all instances, in-game sequences were used. The videos of the benchmark sequences have been uploaded below.


Battlefield 3

<object width="640" height="480"><param name="movie" value="http://www.youtube.com/v/i6ncTGlBoAw?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/i6ncTGlBoAw?version=3&hl=en_US" type="application/x-shockwave-flash" width="640" height="480" allowscriptaccess="always" allowfullscreen="true"></embed></object>​

Crysis 2

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/Bc7_IAKmAsQ?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/Bc7_IAKmAsQ?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Deus Ex Human Revolution

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/GixMX3nK9l8?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/GixMX3nK9l8?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Dirt 3

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/g5FaVwmLzUw?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/g5FaVwmLzUw?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Metro 2033

<object width="480" height="360"><param name="movie" value="http://www.youtube.com/v/8aZA5f8l-9E?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/8aZA5f8l-9E?version=3&hl=en_US" type="application/x-shockwave-flash" width="480" height="360" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Shogun 2: Total War

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/oDp29bJPCBQ?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/oDp29bJPCBQ?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Witcher 2 v2.0

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/tyCIuFtlSJU?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/tyCIuFtlSJU?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings
 
3DMark 11 (DX11)

3DMark 11 is the latest in a long line of synthetic benchmarking programs from the Futuremark Corporation. This is their first foray into the DX11 rendering field and the result is a program that incorporates all of the latest techniques into a stunning display of imagery. Tessellation, depth of field, HDR, OpenCL physics and many others are on display here. In the benchmarks below we have included the results (at default settings) for both the Performance and Extreme presets.


Performance Preset

NV-GTX-690-30.jpg


Extreme Preset

NV-GTX-690-31.jpg
 

Batman: Arkham City (DX11)

Batman: Arkham City is a great looking game when all of its detail levels are maxed out but it also takes a fearsome toll on your system. In this benchmark we use a simple walkthrough that displays several in game elements. The built-in benchmark was avoided like the plague simply because the results it generates do not accurately reflect in-game performance.

1920 x 1200

NV-GTX-690-32.jpg


NV-GTX-690-33.jpg


2560 x 1600

NV-GTX-690-34.jpg


NV-GTX-690-35.jpg
 