
Gigabyte GTX 780 WindForce 3X OC Review

SKYMTL, HardwareCanuck Review Editor
NVIDIA’s GTX 780 is by far the fastest single-GPU card currently available, but it comes with a high price. At $649 it may not be as costly as a GTX TITAN or GTX 690, and even though actually finding one is becoming increasingly difficult, the GTX 780 is nonetheless quite popular. To capitalize on this, board partners like Gigabyte are striving to add more value to the equation by creating their own custom versions with better cooling, higher clock speeds and upgraded components. More importantly, with a slight frequency increase the GTX 780 is capable of running head to head against the $999 TITAN.

GTX-780-GB-47.jpg

Gigabyte’s entries into the GTX 780 market go down two separate paths. They have a reference version and the SKU we’ll be covering in this review: the GTX 780 WindForce 3X OC. As with many of Gigabyte’s “OC” edition cards, the core’s Base frequency has received a significant bump to 954MHz while the Boost clock will likely hit an average of 1006MHz, representing roughly a 12% increase over a base model GTX 780. This has been achieved without modifying the card’s voltage range, so power consumption should be the same as or lower than a reference design. Unfortunately, the GDDR5 once again remains at its default speed, but there should be some extra gas left in the tank if manual overclocking is your thing.
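For those who want to see the math behind that uplift spelled out, here is a quick sketch. It assumes the reference GTX 780’s 863MHz Base and 900MHz rated Boost clocks, figures not quoted above, so treat them as our assumption:

```python
# Rough uplift math for Gigabyte's factory overclock, assuming the
# reference GTX 780's 863MHz Base and 900MHz rated Boost clocks.
REF_BASE, REF_BOOST = 863, 900      # MHz, reference GTX 780 (assumed)
OC_BASE, OC_BOOST = 954, 1006       # MHz, WindForce 3X OC

base_gain = (OC_BASE / REF_BASE - 1) * 100
boost_gain = (OC_BOOST / REF_BOOST - 1) * 100

print(f"Base clock gain:  {base_gain:.1f}%")   # ~10.5%
print(f"Boost clock gain: {boost_gain:.1f}%")  # ~11.8%, hence the "roughly 12%"
```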

With higher clock speeds and an upgraded heatsink design, Gigabyte has tacked a premium of about $30 onto this card. While that pushes the WindForce OC’s $679 price a bit closer to the TITAN’s, there are more than enough improvements here to justify a mere 5% increase.

GTX-780-GB-2.jpg

There really isn’t much to distinguish Gigabyte’s new GTX 780 from its WindForce-branded GTX 680 predecessor. It still uses the same large triple-fan, dual-slot heatsink design that’s been so successful in the past, though this adds about an inch to the overall length. At 11.5” the GTX 780 WindForce OC won’t have any issues fitting into an ATX case, but with the recent boom in the mATX and SFF markets, be sure to measure before taking the plunge.

GTX-780-GB-3.jpg
GTX-780-GB-5.jpg

The most important aspect of this card is actually its heatsink. The WindForce 3X design consists of three 80mm fans which are angled at a slight incline to ensure airflow is directed down towards the fin array without excess turbulence. Gigabyte has also equipped this cooler with six heatpipes (two 8mm and four 6mm), making it capable of handling up to 450W of heat.

This heatsink’s importance goes beyond merely remaining quiet; its lower temperatures also lead to higher clock speeds courtesy of NVIDIA’s GPU Boost 2.0. As we explained in the original launch-day GTX 780 review, clock speeds were often limited by GPU Boost’s reaction to higher temperatures, which the reference heatsink capped at 80°C. With lower temperatures comes an increased frequency range, so while the reference card consistently hit a clock speed of 927MHz, we found the GTX 780 WindForce OC remained at a very constant 1071MHz. Naturally, voltage and power limit constraints still cap these clocks quite aggressively, but at least Gigabyte’s card offers a more consistent performance envelope.
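To illustrate why cooling feeds so directly into sustained clocks, here is a deliberately simplified toy model of temperature-limited boosting. This is not NVIDIA’s actual GPU Boost 2.0 algorithm; the 80°C target mirrors the number above, but the per-degree throttling step is purely an illustrative assumption:

```python
# Toy model of temperature-limited boost behaviour (illustrative only; the
# real GPU Boost 2.0 algorithm also weighs voltage and power limits).
def boost_clock(temp_c, base_mhz, max_boost_mhz, temp_target_c=80):
    """Return a sustained clock: full boost below the temperature target,
    progressively throttled back toward the base clock above it."""
    if temp_c <= temp_target_c:
        return max_boost_mhz
    # Assume ~13MHz shed per degree over target (illustrative), never below base.
    throttled = max_boost_mhz - 13 * (temp_c - temp_target_c)
    return max(base_mhz, throttled)

# A cooler pinned at its temperature cap vs. one running well below it:
print(boost_clock(82, 863, 1006))   # throttled below its maximum boost bin
print(boost_clock(65, 954, 1071))   # sustains its full boost bin
```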

GTX-780-GB-4.jpg
GTX-780-GB-6.jpg

The connector layout on Gigabyte’s card is typical of the reference design with two DVIs, one HDMI and a single DisplayPort. Power connectors also follow standard lines with an 8-pin / 6-pin combo.

GTX-780-GB-7.jpg

For our sample, Gigabyte used the reference PCB, though in the coming months we’ll likely see a Super Overclock version which boasts a top-to-bottom custom design.
 
Test System & Setup

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: Gigabyte Z68X-UD3H-B3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 320.18 Beta
NVIDIA 320.14 Beta
AMD 13.5 Beta 2



*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings




The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values together gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
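In code form, the FRAPS-style calculation described above boils down to something like this minimal sketch (the per-second frame counts are made-up sample data):

```python
# FRAPS-style average/min/max FPS from per-second rendered-frame counts.
# frames_per_second: one entry per elapsed second of the benchmark run.
frames_per_second = [72, 66, 61, 70, 64, 67]  # hypothetical 6-second sample

avg_fps = sum(frames_per_second) / len(frames_per_second)
min_fps = min(frames_per_second)  # the slowest single-second interval
max_fps = max(frames_per_second)  # the fastest single-second interval

print(f"Avg: {avg_fps:.2f}  Min: {min_fps}  Max: {max_fps}")
# The 60-second / 4,000-frame example from the text: 4000 / 60 = 66.67 FPS average.
```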

FCAT, on the other hand, has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much more quickly than this method can report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool gets factored into the equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond-by-millisecond evaluation of frame times rather than averaging things out over a full second. The larger the value, the longer that particular frame took to render. This level of detail just isn’t possible with standard benchmark methods.
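As a rough sketch, deriving frame times from a list of per-frame display timestamps is simply a matter of taking successive differences; the timestamps below are hypothetical capture data, not output from any specific tool:

```python
# Frame times (ms) are the intervals between successive frame timestamps.
timestamps_ms = [0.0, 16.5, 33.9, 62.1, 78.4, 95.0]  # hypothetical capture data

frame_times_ms = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
print(frame_times_ms)  # [16.5, 17.4, 28.2, 16.3, 16.6]; the 28.2ms frame is the laggard
```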

We are now using FCAT for ALL benchmark results.


Frame Time Testing & FCAT

To put a meaningful spin on frame times, we can equate them directly to framerates. A constant 60 frames across a single second would lead to an individual frame time of 1/60th of a second or about 17 milliseconds, 33ms equals 30 FPS, 50ms is about 20FPS and so on. Contrary to framerate evaluation results, in this case higher frame times are actually worse since they would represent a longer interim “waiting” period between each frame.
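The conversion itself is just a reciprocal; a quick sketch:

```python
# Frame time (ms) <-> instantaneous FPS conversion.
def ms_to_fps(frame_time_ms):
    return 1000.0 / frame_time_ms

def fps_to_ms(fps):
    return 1000.0 / fps

print(fps_to_ms(60))     # ~16.7ms
print(ms_to_fps(33.3))   # ~30 FPS
print(ms_to_fps(50.0))   # 20 FPS
```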

With the milliseconds to frames per second conversion in mind, the “magical” maximum number we’re looking for is 28ms, or about 35FPS. If too much time is spent above that point, performance suffers and the in-game experience will begin to degrade.

Consistency is a major factor here as well. Too much variation in adjacent frames could induce stutter or slowdowns. For example, spiking up and down from 13ms (75 FPS) to 28ms (35 FPS) several times over the course of a second would lead to an experience which is anything but fluid. However, even though deviations between slightly lower frame times (say 10ms and 25ms) wouldn’t be as noticeable, some sensitive individuals may still pick up a slight amount of stuttering. As such, the less variation the better the experience.
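Putting the 28ms ceiling and the consistency argument together, here is a hedged sketch of how problem frames could be flagged in a captured frame-time list. The thresholds mirror the numbers above, but the flagging logic itself is our own simplification rather than FCAT’s methodology:

```python
# Flag frames that breach the ~28ms (35 FPS) ceiling, plus large frame-to-frame
# swings that could be perceived as stutter. Thresholds follow the text above;
# the flagging approach itself is a simplification for illustration.
FRAME_TIME_CEILING_MS = 28.0
SWING_THRESHOLD_MS = 15.0  # e.g. a 13ms -> 28ms jump between adjacent frames

def analyze(frame_times_ms):
    slow = [t for t in frame_times_ms if t > FRAME_TIME_CEILING_MS]
    swings = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])
              if abs(b - a) > SWING_THRESHOLD_MS]
    return {"slow_frames": len(slow), "large_swings": len(swings)}

print(analyze([13.0, 29.0, 13.0, 28.5, 13.5, 16.0]))
# -> {'slow_frames': 2, 'large_swings': 3}: a spiky, stutter-prone second
```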

In order to determine accurate onscreen frame times, a decision has been made to move away from FRAPS and instead implement real-time frame capture into our testing. This involves the use of a secondary system with a capture card and an ultra-fast storage subsystem (in our case five SanDisk Extreme 240GB drives hooked up to an internal PCI-E RAID card) hooked up to our primary test rig via a DVI splitter. Essentially, the capture card records a high bitrate video of whatever is displayed from the primary system’s graphics card, allowing us to get a real-time snapshot of what would normally be sent directly to the monitor. By using NVIDIA’s Frame Capture Analysis Tool (FCAT), each and every frame is dissected and then processed in an effort to accurately determine latencies, frame rates and other aspects.

We've also now transitioned all testing to FCAT which means standard frame rates are also being logged and charted through the tool. This means all of our frame rate (FPS) charts use onscreen data rather than the software-centric data from FRAPS, ensuring dropped frames are taken into account in our global equation.
 
Assassin’s Creed III / Crysis 3

Assassin’s Creed III (DX11)


Video: http://www.youtube.com/embed/RvFXKwDCpBI?rel=0

The third iteration of the Assassin’s Creed franchise is the first to make extensive use of DX11 graphics technology. In this benchmark sequence, we proceed through a run-through of the Boston area which features plenty of NPCs, distant views and high levels of detail.


2560x1440

GTX-780-GB-37.jpg

GTX-780-GB-30.jpg


Crysis 3 (DX11)


Video: http://www.youtube.com/embed/zENXVbmroNo?rel=0

Simply put, Crysis 3 is one of the best looking PC games of all time and it demands a heavy system investment before even trying to enable higher detail settings. Our benchmark sequence for this one replicates a typical gameplay condition within the New York dome and consists of a run-through interspersed with a few explosions for good measure. Due to the hefty system resource needs of this game, post-process FXAA was used in place of MSAA.


2560x1440

GTX-780-GB-38.jpg

GTX-780-GB-31.jpg
 

Dirt: Showdown / Far Cry 3

Dirt: Showdown (DX11)


Video: http://www.youtube.com/embed/IFeuOhk14h0?rel=0

Among racing games, Dirt: Showdown is somewhat unique since it deals with demolition-derby type racing where the player is actually rewarded for wrecking other cars. It is also one of the many titles which falls under the Gaming Evolved umbrella so the development team has worked hard with AMD to implement DX11 features. In this case, we set up a custom 1-lap circuit using the in-game benchmark tool within the Nevada level.


2560x1440

GTX-780-GB-39.jpg

GTX-780-GB-32.jpg



Far Cry 3 (DX11)


Video: http://www.youtube.com/embed/mGvwWHzn6qY?rel=0

One of the best looking games in recent memory, Far Cry 3 has the capability to bring even the fastest systems to their knees. Its use of nearly the entire repertoire of DX11’s tricks may come at a high cost but with the proper GPU, the visuals will be absolutely stunning.

To benchmark Far Cry 3, we used a typical run-through which includes several in-game environments such as a jungle, in-vehicle and in-town areas.



2560x1440

GTX-780-GB-40.jpg

GTX-780-GB-33.jpg
 

Hitman Absolution / Max Payne 3

Hitman Absolution (DX11)


Video: http://www.youtube.com/embed/8UXx0gbkUl0?rel=0

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around, and this time Agent 47 goes rogue, so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level, one of the most graphically intensive areas of the entire game. It features an environment virtually bathed in rain and puddles, making for numerous reflections and complicated lighting effects.


2560x1440

GTX-780-GB-41.jpg

GTX-780-GB-34.jpg


Max Payne 3 (DX11)


Video: http://www.youtube.com/embed/ZdiYTGHhG-k?rel=0

When Rockstar released Max Payne 3, it quickly became known as a resource hog and that isn’t surprising considering its top-shelf graphics quality. This benchmark sequence is taken from Chapter 2, Scene 14 and includes a run-through of a rooftop level featuring expansive views. Due to its random nature, combat is kept to a minimum so as to not overly impact the final result.


2560x1440

GTX-780-GB-42.jpg

GTX-780-GB-35.jpg
 

Tomb Raider

Tomb Raider (DX11)


Video: http://www.youtube.com/embed/okFRgtsbPWE

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. It is not only one of the most popular games around but also one of the best looking, using the entire bag of DX11 tricks to properly deliver an atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.


2560x1440

GTX-780-GB-43.jpg

GTX-780-GB-36.jpg
 
Temperatures & Acoustics / Power Consumption

Temperature Analysis


For all temperature testing, the cards were placed on an open test bench with a single 120mm 1200RPM fan placed ~8” away from the heatsink. The ambient temperature was kept at a constant 22°C (+/- 0.5°C). If the ambient temperature rose above 23°C at any time throughout the test, all benchmarking was stopped.

For Idle tests, we let the system idle at the Windows 7 desktop for 15 minutes and recorded the peak temperature.


GTX-780-GB-45.jpg

The GK110 is a relatively hot running core but Gigabyte’s WindForce heatsink has no issue getting temperatures under control. As we said in the introduction, this is particularly important since it allows this card to leverage GPU Boost 2.0 for higher frequencies.


Acoustical Testing


What you see below are the baseline idle dB(A) results attained for a relatively quiet open-case system (specs are in the Methodology section) sans GPU along with the attained results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 12” away from the GPU’s fan. For the load scenarios, a loop of Unigine Valley is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.

GTX-780-GB-44.jpg

The WindForce coolers never cease to amaze on the acoustics front either, and once again we see a Gigabyte card winning by a large margin. What’s particularly impressive is that the reference GTX 780’s heatsink is already built for quiet operation, yet it’s doubtful you’ll even hear the GTX 780 WindForce when gaming, even without any other fans running.


System Power Consumption


For this test we hooked our power supply up to a UPM power meter that logs the power consumption of the whole system twice every second. To stress the GPU as much as possible we used a 15-minute loop of Unigine Valley, while peak idle power consumption was determined by letting the card sit at a stable Windows desktop for 15 minutes.

Please note that after extensive testing, we have found that simply plugging in a power meter to a wall outlet or UPS will NOT give you accurate power consumption numbers due to slight changes in the input voltage. Thus we use a Tripp-Lite 1800W line conditioner between the 120V outlet and the power meter.
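For reference, here is a minimal sketch of how a 2Hz power log could be reduced to the peak idle and load figures we chart. The CSV column names are hypothetical placeholders, not the UPM meter’s actual export format:

```python
import csv

# Reduce a 2Hz power-meter log to peak system draw per test phase.
# The column names ("phase", "watts") are hypothetical; real meter exports differ.
def peak_draw(log_path):
    peaks = {"idle": 0.0, "load": 0.0}
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            phase = row["phase"]  # "idle" or "load"
            peaks[phase] = max(peaks.get(phase, 0.0), float(row["watts"]))
    return peaks

# Example usage (hypothetical file name):
# print(peak_draw("gtx780_windforce_power.csv"))
```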

GTX-780-GB-46.jpg

Since Gigabyte hasn’t changed the power or voltage limits of their card, the only thing that will drag up power consumption is the clock speeds. However, due to the better cooling, it remains quite efficient, barely requiring more power than a reference GTX 780.
 
Overclocking Results

Overclocking Results


Thus far, our overclocking experience with the GTX 780 has been relatively successful with the reference sample hitting a constant 1122MHz before smashing head first into a power and voltage limit wall. The Gigabyte GTX 780 OC on the other hand comes with higher frequencies out of the box and consistently delivered a Boost speed of about 1070MHz. That doesn’t leave much room for improvement.

As with the reference card, Gigabyte has retained NVIDIA’s voltage limits, meaning this card only has 35mV of overhead, while the maximum Power Limit of 106% has also been carried over.

So what does this mean for overclocking? Not a whole lot, since the default clock speeds of the WindForce OC put it very close to the imposed limits to begin with. However, there is some additional headroom lurking below the surface: we achieved a constant Boost clock of 1153MHz while the memory purred along at 6856MHz. That’s not bad for a card based on the reference PCB, but Gigabyte’s offering was quite obviously being held back by other factors.
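For context, here is the simple math on those gains, assuming the GTX 780’s stock 6008MHz effective GDDR5 speed (Gigabyte leaves memory at reference):

```python
# Relative gains from the manual overclock, assuming a 6008MHz effective
# stock GDDR5 speed for the GTX 780 (memory is left at reference).
stock_boost, oc_boost = 1071, 1153   # MHz, observed sustained Boost clocks
stock_mem, oc_mem = 6008, 6856       # MHz effective GDDR5 (stock value assumed)

print(f"Core:   +{(oc_boost / stock_boost - 1) * 100:.1f}%")   # ~7.7%
print(f"Memory: +{(oc_mem / stock_mem - 1) * 100:.1f}%")       # ~14.1%
```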

In our opinion, you’re almost better off just keeping this card at its default speeds since the overclock we achieved really doesn’t make much of a difference. This will likely hold true for other GTX 780 cards as well, or at least until a board partner breaks ranks and releases a card that has the ability to go past NVIDIA’s presets.

GTX-780-GB-48.jpg

GTX-780-GB-49.jpg
 
Conclusion

Conclusion


When the GTX 780 was originally reviewed some weeks back, we found it to be an extremely impressive graphics card which effectively bridged the gap between NVIDIA’s GTX 680 and GTX TITAN. While the GTX 770 has since taken over the GTX 680’s mantle, this GK110-based card still offers a great price / performance ratio, particularly when compared against the TITAN. Gigabyte’s WindForce OC helps things along even more by placing their GTX 780 right alongside NVIDIA’s $1,000 monster while costing about $320 less.

As we mentioned in the introduction, the WindForce OC’s most important feature is its heatsink. The technology and engineering Gigabyte put into its design help it achieve some noteworthy performance increases over the reference card.

In years past, better heatsink designs tended to help overclocking, acoustics and not much else. However, with the advent of NVIDIA’s GPU Boost, GeForce cards are now able to take advantage of excess thermal overhead by dynamically increasing their core clocks, provided voltage and power limits don’t ruin the show. Gigabyte has been able to leverage this through the lower temperatures provided by the WindForce heatsink. As a result, their card’s core frequency remains far above that of the reference design without consuming significantly more power.

With the GTX 780 WindForce OC’s core humming along at a constant 1071MHz or thereabouts, it is able to outpace a stock card by 10% to 15% on average. This results in noticeable in-game improvements, and the consistency it brings to the table really helps even out framerates.

To accomplish these higher clock speeds, Gigabyte hasn’t sacrificed anything in other areas either. The WindForce heatsink remains one of the quietest on the market and the temperatures it delivers are nothing short of spectacular.

Unfortunately, due to the strict limits NVIDIA puts on voltage adjustments, there’s not much overclocking headroom here. We kept running head first into the power and voltage limits within a few dozen MHz of the OC’s stock Boost frequency. Ironically, this also prevented Gigabyte’s GTX 780 from totally spanking the TITAN.

In a market that’s quickly becoming saturated by board partners vying for a segment shrinking in popularity, everyone has to bring their A-game to the table. That’s exactly what Gigabyte has done with the GTX 780 WindForce OC. It is an excellent all-round graphics card that offers TITAN-like performance and quiet operation without requiring a significant premium over the reference version. To us, that’s a winning combination that renders NVIDIA’s flagship nearly redundant at certain resolutions.

240463923aca1f6b.jpg
 