
EVGA GTX 780 Ti SC ACX Review

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,841
Location
Montreal
When NVIDIA launched the GTX 780 Ti, they were aiming to take the wind out of AMD’s sails. While their new card cost significantly more than the R9 290X, it offered better performance, a significantly lower acoustical profile and lower power consumption. Those are all first-level differentiators but NVIDIA also added their Holiday Gaming Bundle and a $100 coupon for SHIELD and when combined with those elements, the GTX 780 Ti suddenly started to look quite appealing for gamers in search of the best money could buy.

This brings us to the subject of today’s review: EVGA’s GTX 780 Ti Superclocked. For all intents and purposes, custom GTX 780 Ti cards will hit the market weeks ahead of similar R9 290X efforts from AMD’s board partners and EVGA’s latest demonstrates this perfectly. It offers higher clock speeds, a custom cooler and potentially more overclocking headroom than the reference card for a minor $20 premium. We do have to mention that it isn’t widely available yet but you’ll start seeing it at retailers in the coming weeks.

GTX-780-TI-EVGA-123-49.jpg

Higher core clocks are the name of the game with the GTX 780 Ti Superclocked, largely because its upgraded heatsink keeps the core at much lower operating temperatures. This improved thermal overhead in turn allows NVIDIA’s GPU Boost to run it at increased frequencies. Naturally, EVGA’s built-in overclock helps things along as well but, as is usually the case, they have erred on the side of caution by keeping the Base clock under the 1GHz mark. Expect that to change as additional cards are released in their lineup.

With the engine getting a healthy dose of adrenaline, EVGA has pointedly ignored the memory speeds. This is quite simply due to the fact that 7Gbps modules operating across a 384-bit interface already provide a titanic amount of bandwidth. Plus, getting such highly clocked memory to overclock consistently is a nightmare for the folks binning the ICs.

GTX-780-TI-EVGA-123-1.jpg

So how has EVGA been able to achieve what seems to be an inhumanly short turn-around time between the GTX 780 Ti’s initial launch and their card’s introduction? They’ve simply taken elements already used on their GTX 780 Superclocked ACX and transposed them over onto this new card. This isn’t too much of a stretch since, for all intents and purposes, the GTX 780 and GTX 780 Ti are the same card with a few minor variances.

The defining feature of this particular card is of course EVGA’s excellent ACX cooler which houses a pair of 80mm fans and a massive internal heatsink. As we’ve already mentioned, it is this impressive piece of engineering that keeps the GK110 running at the temperatures necessary to achieve higher performance metrics. You can learn more about the ACX and the design behind it here.

Other than the distinctive heatsink design, the GTX 780 Ti Superclocked is simply a reference NVIDIA card. It uses a simple 6+8 pin power connector layout and a back panel payload consisting of two DVI outputs and connectors for HDMI and DisplayPort. This allows it to natively support up to four monitors (three for Surround and an accessory display).

EVGA’s Superclocked series has an enviable track record and we expect nothing less from this one. However, at $720 (or $780 here in Canada), it isn’t inexpensive, though some of the initial sting is taken away by the aforementioned game bundle. But is this particular GTX 780 Ti really worth almost $200 more than a bone stock R9 290X?
 

Test System & Setup

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: Gigabyte Z68X-UD3H-B3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 331.70 Beta
AMD 13.11 v8 Beta



*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values, meanwhile, are simply the two data points representing the single-second intervals in which the fewest and most frames were rendered. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
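The FRAPS-style min/avg/max calculation described above can be sketched in a few lines of Python (the function name and structure here are purely illustrative, not FRAPS’ actual output format):

```python
def fraps_style_stats(frames_per_second):
    """Min/avg/max FPS from a list of per-second frame counts,
    the way a FRAPS-style logger reports them."""
    return {
        "min": min(frames_per_second),  # the slowest one-second interval
        "max": max(frames_per_second),  # the fastest one-second interval
        "avg": sum(frames_per_second) / len(frames_per_second),
    }

# The article's example: 4,000 frames over a 60 second run
# works out to 4000 / 60 = 66.67 FPS on average.
print(round(4000 / 60, 2))  # 66.67
```

Note how the average hides everything that happens inside each one-second bucket, which is exactly the limitation the following paragraphs address.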

FCAT, on the other hand, has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world terms a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much more quickly than this method can report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool factors into the equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The larger the amount of time, the longer each frame takes to render. This detailed reporting just isn’t possible with standard benchmark methods.
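Deriving frame times from raw frame data is straightforward: take the timestamp at which each frame appears onscreen and difference consecutive entries. A minimal sketch (the timestamps are hypothetical, not captured data):

```python
def frame_times_ms(timestamps_s):
    """Interval between consecutive displayed frames, in milliseconds,
    given each frame's display timestamp in seconds."""
    return [(b - a) * 1000.0 for a, b in zip(timestamps_s, timestamps_s[1:])]

# Three frames at a steady ~60 FPS cadence, then one noticeably late frame:
times = frame_times_ms([0.000, 0.017, 0.034, 0.084])
print([round(t, 1) for t in times])  # [17.0, 17.0, 50.0]
```

That final 50ms frame would vanish inside a one-second FPS average, yet it is precisely the kind of hitch a player perceives.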

We are now using FCAT for ALL benchmark results.


Frame Time Testing & FCAT

To put a meaningful spin on frame times, we can equate them directly to framerates. A constant 60 frames across a single second would lead to an individual frame time of 1/60th of a second or about 17 milliseconds, 33ms equals 30 FPS, 50ms is about 20FPS and so on. Contrary to framerate evaluation results, in this case higher frame times are actually worse since they would represent a longer interim “waiting” period between each frame.
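The conversion in both directions is just a reciprocal with a factor of 1000, which a short helper makes explicit:

```python
def frame_time_to_fps(ms):
    """Steady-state framerate equivalent of a single frame time."""
    return 1000.0 / ms

def fps_to_frame_time(fps):
    """Per-frame render time (in ms) implied by a constant framerate."""
    return 1000.0 / fps

# The figures quoted in the text:
print(round(fps_to_frame_time(60), 1))  # 16.7 -> the ~17 ms per frame at 60 FPS
print(round(frame_time_to_fps(33), 0))  # roughly 30 FPS
print(round(frame_time_to_fps(50), 0))  # 20 FPS
```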

With the milliseconds to frames per second conversion in mind, the “magical” maximum number we’re looking for is 28ms or about 35FPS. If too much time is spent above that point, performance suffers and the in-game experience will begin to degrade.

Consistency is a major factor here as well. Too much variation in adjacent frames could induce stutter or slowdowns. For example, spiking up and down from 13ms (75 FPS) to 28ms (35 FPS) several times over the course of a second would lead to an experience which is anything but fluid. However, even though deviations between slightly lower frame times (say 10ms and 25ms) wouldn’t be as noticeable, some sensitive individuals may still pick up a slight amount of stuttering. As such, the less variation the better the experience.
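Both failure modes described above, frames over the ~28ms budget and large swings between adjacent frames, are easy to flag once you have a frame time log. A sketch with illustrative thresholds (the 10ms jitter cutoff is our assumption, not a value from the tooling):

```python
def stutter_report(frame_times_ms, budget_ms=28.0, jitter_ms=10.0):
    """Count frames exceeding the frame time budget, plus adjacent-frame
    swings large enough to be perceived as stutter."""
    over_budget = [t for t in frame_times_ms if t > budget_ms]
    swings = [
        abs(b - a)
        for a, b in zip(frame_times_ms, frame_times_ms[1:])
        if abs(b - a) > jitter_ms
    ]
    return {"frames_over_budget": len(over_budget),
            "large_swings": len(swings)}

# Spiking between 13 ms (75 FPS) and 28 ms (35 FPS), as in the text:
print(stutter_report([13, 28, 13, 28, 13, 28]))
# {'frames_over_budget': 0, 'large_swings': 5}
```

Every frame here stays within the 28ms budget, so a threshold check alone would call this run fine; only the swing count reveals the stutter, which is the point the paragraph above makes.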

In order to determine accurate onscreen frame times, a decision has been made to move away from FRAPS and instead implement real-time frame capture into our testing. This involves the use of a secondary system with a capture card and an ultra-fast storage subsystem (in our case five SanDisk Extreme 240GB drives hooked up to an internal PCI-E RAID card) hooked up to our primary test rig via a DVI splitter. Essentially, the capture card records a high bitrate video of whatever is displayed from the primary system’s graphics card, allowing us to get a real-time snapshot of what would normally be sent directly to the monitor. By using NVIDIA’s Frame Capture Analysis Tool (FCAT), each and every frame is dissected and then processed in an effort to accurately determine latencies, frame rates and other aspects.

We've also now transitioned all testing to FCAT which means standard frame rates are also being logged and charted through the tool. This means all of our frame rate (FPS) charts use onscreen data rather than the software-centric data from FRAPS, ensuring dropped frames are taken into account in our global equation.
 

Assassin’s Creed III / Crysis 3

Assassin’s Creed III (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/RvFXKwDCpBI?rel=0" frameborder="0" allowfullscreen></iframe>​

The third iteration of the Assassin’s Creed franchise is the first to make extensive use of DX11 graphics technology. In this benchmark sequence, we proceed through a run-through of the Boston area which features plenty of NPCs, distant views and high levels of detail.


2560 x 1440

GTX-780-TI-EVGA-123-38.jpg

GTX-780-TI-EVGA-123-30.jpg


Crysis 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/zENXVbmroNo?rel=0" frameborder="0" allowfullscreen></iframe>​

Simply put, Crysis 3 is one of the best looking PC games of all time and it demands a heavy system investment before even trying to enable higher detail settings. Our benchmark sequence for this one replicates a typical gameplay condition within the New York dome and consists of a run-through interspersed with a few explosions for good measure. Due to the hefty system resource needs of this game, post-process FXAA was used in place of MSAA.


2560 x 1440

GTX-780-TI-EVGA-123-39.jpg

GTX-780-TI-EVGA-123-31.jpg
 

Dirt: Showdown / Far Cry 3

Dirt: Showdown (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/IFeuOhk14h0?rel=0" frameborder="0" allowfullscreen></iframe>​

Among racing games, Dirt: Showdown is somewhat unique since it deals with demolition-derby type racing where the player is actually rewarded for wrecking other cars. It is also one of the many titles which falls under the Gaming Evolved umbrella so the development team has worked hard with AMD to implement DX11 features. In this case, we set up a custom 1-lap circuit using the in-game benchmark tool within the Nevada level.


2560 x 1440

GTX-780-TI-EVGA-123-40.jpg

GTX-780-TI-EVGA-123-32.jpg



Far Cry 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/mGvwWHzn6qY?rel=0" frameborder="0" allowfullscreen></iframe>​

One of the best looking games in recent memory, Far Cry 3 has the capability to bring even the fastest systems to their knees. Its use of nearly the entire repertoire of DX11’s tricks may come at a high cost but with the proper GPU, the visuals will be absolutely stunning.

To benchmark Far Cry 3, we used a typical run-through which includes several in-game environments such as a jungle, in-vehicle and in-town areas.



2560 x 1440

GTX-780-TI-EVGA-123-41.jpg

GTX-780-TI-EVGA-123-33.jpg
 

Hitman Absolution / Max Payne 3

Hitman Absolution (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/8UXx0gbkUl0?rel=0" frameborder="0" allowfullscreen></iframe>​

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around and this time Agent 47 goes rogue, so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level, which is one of the most graphically-intensive areas of the entire game. It features an environment virtually bathed in rain and puddles, making for numerous reflections and complicated lighting effects.


2560 x 1440

GTX-780-TI-EVGA-123-42.jpg

GTX-780-TI-EVGA-123-34.jpg



Max Payne 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/ZdiYTGHhG-k?rel=0" frameborder="0" allowfullscreen></iframe>​

When Rockstar released Max Payne 3, it quickly became known as a resource hog and that isn’t surprising considering its top-shelf graphics quality. This benchmark sequence is taken from Chapter 2, Scene 14 and includes a run-through of a rooftop level featuring expansive views. Due to its random nature, combat is kept to a minimum so as to not overly impact the final result.


2560 x 1440

GTX-780-TI-EVGA-123-43.jpg

GTX-780-TI-EVGA-123-35.jpg
 

Metro: Last Light / Tomb Raider

Metro: Last Light (DX11)


<iframe width="640" height="360" src="http://www.youtube.com/embed/40Rip9szroU" frameborder="0" allowfullscreen></iframe>​

The latest iteration of the Metro franchise once again sets high water marks for graphics fidelity while making use of advanced DX11 features. In this benchmark, we use the Torchling level which represents a scene you’ll be intimately familiar with after playing this game: a murky sewer underground.


2560 x 1440

GTX-780-TI-EVGA-123-44.jpg

GTX-780-TI-EVGA-123-36.jpg


Tomb Raider (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/okFRgtsbPWE" frameborder="0" allowfullscreen></iframe>​

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. This happens to be not only one of the most popular games around but also one of the best looking, using the entire bag of DX11 tricks to deliver an atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.


2560 x 1440

GTX-780-TI-EVGA-123-45.jpg

GTX-780-TI-EVGA-123-37.jpg
 

Temperature & Acoustics / Power Consumption

Temperature Analysis


For all temperature testing, the cards were placed on an open test bench with a single 120mm 1200RPM fan placed ~8” away from the heatsink. The ambient temperature was kept at a constant 22°C (+/- 0.5°C). If the ambient temperatures rose above 23°C at any time throughout the test, all benchmarking was stopped.

For Idle tests, we let the system idle at the Windows 7 desktop for 15 minutes and recorded the peak temperature.


GTX-780-TI-EVGA-123-47.jpg

The low temperature numbers posted by EVGA’s ACX cooler shouldn’t come as any surprise but we can’t forget that it has an extremely hot running core to contend with. This makes its results all that much more impressive, especially when you consider that it allows NVIDIA’s Boost algorithms to further enhance frequencies in nearly every application.


Acoustical Testing


What you see below are the baseline idle dB(A) results attained for a relatively quiet open-case system (specs are in the Methodology section) sans GPU, along with the results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 12” away from the GPU’s fan. For the load scenarios, a loop of Unigine Valley is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.

GTX-780-TI-EVGA-123-46.jpg

We’ve seen some extremely loud cards as of late; from the R9 290X to the R9 290, AMD seems to be sacrificing acoustics in order to satisfy those looking for enhanced performance. NVIDIA and their board partners on the other hand have found a near-perfect equation which allows them to offer optimal frequencies without embarrassingly high fan speeds. The ACX cooler continues this tradition in excellent form with impressively low results.


System Power Consumption


For this test we hooked up our power supply to a UPM power meter that logs the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we used 15 minutes of Unigine Valley running on a loop, while peak idle power consumption was determined by letting the card sit at a stable Windows desktop for 15 minutes.

Please note that after extensive testing, we have found that simply plugging in a power meter to a wall outlet or UPS will NOT give you accurate power consumption numbers due to slight changes in the input voltage. Thus we use a Tripp-Lite 1800W line conditioner between the 120V outlet and the power meter.

GTX-780-TI-EVGA-123-48.jpg

While the higher clock speeds do tend to push the Superclocked’s power consumption up into a higher range, it is still more power efficient than an R9 290X. That’s an impressive feat considering how much more performance this card packs into its frame.
 

Overclocking Results

Overclocking Results


Overclocking the reference GTX 780 Ti returned some reasonably impressive results with core frequencies plateauing around the 1143MHz mark. EVGA on the other hand has dialed things up to eleven and their Superclocked nearly hits those speeds without so much as touching its untapped overclocking potential.

It goes without saying that there’s a bit left in this card’s tank. We were able to hit a constant speed of 1213MHz which represents a 76MHz boost. Unfortunately, at that point, NVIDIA’s Voltage Limit put a stop to the fun and artificially capped frequencies. We’re sure that with an “unlocked” tool which allows for additional voltage and Power Limit headroom, the Superclocked ACX could go even further.

The GDDR5 also played the part of willing participant, peaking at 7804MHz. That’s the highest we’ve achieved thus far on 7Gbps modules.

Naturally, these increases boosted EVGA’s GTX 780 Ti Superclocked to ridiculous heights in our charts.

GTX-780-TI-EVGA-123-51.jpg

GTX-780-TI-EVGA-123-52.jpg
 

Conclusion

Conclusion


Let’s get the obvious out of the way before getting too far into this conclusion. EVGA’s GTX 780 Ti Superclocked is fast. Unbelievably fast. It is actually the highest performing single core graphics card we’ve ever tested and that’s saying something considering there have been some extremely impressive examples in this generation. However, this isn’t all about raw framerates; the manner by which this card achieves gaming greatness is the real star of the show.

AMD’s R9 290X also happens to be one of the better cards available but it simply lacks the poise and polish that have been hallmarks of NVIDIA’s high end offerings. In order to achieve optimal performance, its fan speeds have to be cranked up to obscene levels and it requires more power than a GTX 690. Not so with the EVGA GTX 780 Ti Superclocked. It may cost a lot more but it remained quiet and well behaved throughout testing despite using an overclocked GK110 core. Overclocking headroom on this particular sample was a pleasant surprise as well, with final continual clock speeds in excess of 1.2GHz.

GTX-780-TI-EVGA-123-50.jpg

The reason behind the Superclocked’s mind-bending numbers is its heatsink. The ACX cooler is cleverly designed so it only takes up two slots and doesn’t protrude past the reference PCB but still achieves incredible temperatures. As a result, NVIDIA’s Boost algorithm (which is set at 81°C) can step into the fray and dynamically enhance core frequencies.

As you can see, even though EVGA has rated their Superclocked to hit a Boost clock of 1046MHz, it has no problem hitting and, more importantly, staying at 1137MHz. This leads to framerates which come within spitting distance of those achieved by a $1000 GTX 690 and are leaps and bounds better than anything AMD has in their stable at this point in time. That’s some mammoth performance coupled with a surprisingly good performance per watt ratio but ultimately, it simply shows how a graphics card should behave under normal conditions.

Is this card really worth $200 more than the R9 290X? That really depends on your needs. The Superclocked’s extreme framerate numbers and other elements go a long way towards justifying its premium. It couldn’t have come at a better time since all of the season’s best games were just released. AMD’s board partners meanwhile won’t have custom R9 290X designs ready until sometime in late December, putting them well behind the ball. So, if you want the highest possible performance right now along with a serious amount of future proofing, grab EVGA’s GTX 780 Ti Superclocked and you won’t regret it.

While the EVGA GTX 780 Ti Superclocked may be one of the most expensive graphics cards on the market, to gamers who want the best, it will be worth every penny. From overclocking headroom to acoustics to temperatures to out-of-box performance, it is currently the one to beat.

240463923aca1f6b.jpg
 
