
NVIDIA GeForce GTX 770 2GB Review

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
13,264
Location
Montreal
Fresh off their GTX 780 launch, NVIDIA is quickly following up with yet another graphics card: the GTX 770. This new GPU may not target the same ultra high end market as its larger, more power-hungry sibling but its presence puts yet another card into direct competition against AMD’s HD 7970 GHz Edition.

While the GK110-based TITAN and GTX 780 were parachuted into performance segments where AMD just couldn’t compete, the GTX 770 has slightly more modest goals. It is meant as a direct replacement to the now-discontinued GTX 670 while offering GTX 570 and GTX 580 users a similarly priced upgrade path. However, there aren’t many things here we haven't seen before so GTX 670 / GTX 680 and HD 7970 / HD 7950 customers likely won’t find anything particularly enticing.


In many ways, the GTX 770 is nothing more than a rebranded and prettied up version of the GTX 680. Unlike the GTX 780, it uses a GK104 core which has the exact same 4 GPC, 8 SMX layout as NVIDIA’s former flagship so there won’t be additional performance derived through architectural changes.

This is an interesting choice but understandable since the higher end GK110 design couldn’t have been cut down any more without entering into the realm of diminishing returns. In addition, NVIDIA hasn’t been sitting idle for the last fourteen months and they’ve given the GTX 770 some features to distinguish it from the outgoing generation.


While the core’s specifications won’t come as any surprise for those who’ve memorized GK104’s layout, it does operate at higher frequencies than the GTX 680. In addition, NVIDIA has included GPU Boost 2.0, allowing the card to reach Boost clocks (and above) more consistently, thus potentially raising aggregate performance. This means voltage increases will be allowed with the GTX 770 but once again we will see some serious constraints put on them. It will also be available at launch in 2GB and 4GB configurations, potentially offering better performance at extreme resolution, texture and anti-aliasing settings.

Other than GPU Boost and the addition of two memory configurations, there is one major area of differentiation: memory clocks. The GTX 770 is the first graphics card available with the new 7Gbps GDDR5 modules from Samsung and Hynix. This allows it to achieve notably higher aggregate bandwidth numbers, a factor that will surely make a difference in some memory-limited situations.
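To see why those faster modules matter, the peak bandwidth arithmetic can be sketched in a few lines of Python (the 256-bit bus width and per-pin data rates below are the commonly quoted figures for these cards, used here for illustration):

```python
def gddr5_bandwidth_gbps(effective_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate * bus width / 8 bits per byte."""
    return effective_gbps * bus_width_bits / 8

# GTX 680-class 6Gbps modules versus the GTX 770's new 7Gbps modules,
# both on a 256-bit bus:
print(gddr5_bandwidth_gbps(6.0, 256))  # 192.0 GB/s
print(gddr5_bandwidth_gbps(7.0, 256))  # 224.0 GB/s
```

That roughly 17% jump in aggregate bandwidth comes purely from the memory clock, with no bus-width change.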

With higher clock speeds and faster memory, the GTX 770 naturally requires more power than a GTX 680 and it will likely produce more heat as well. However, an increase of nearly 40W TDP is a bit surprising. It could very well be that the 7Gbps GDDR5 uses high leakage modules in order to achieve its frequency advantage.


The real surprise of this release won’t be performance or specifications. Rather, the GTX 770’s $399 price will likely turn everyone’s head. Instead of toeing the same line as they did in past launches, NVIDIA went against the grain in order to offer a card which is the undisputed leader in the price / performance race. It undercuts the GTX 680 and HD 7970 GHz Edition by a good $50 while potentially offering a better gameplay experience. With this in mind, expect the GTX 680 and GTX 670 to quickly drop in price in an effort to clear out remaining inventories. One also has to wonder what this will do to the lower market segments but we expect replacements for the GTX 660 series and GTX 650 in short order as well.

With a few minor improvements and substantially faster memory many will question whether the GTX 770 really deserves a new product designation. However, with a price of just $399 coupled with GTX 680-beating performance, no one will care what this card is called provided it can supply the gaming experience they expect.
 
A Closer Look at the GTX 770


The GTX 770 is what NVIDIA calls a “virtual” card, which essentially means there won’t be a reference version of it. While all of the tests we conducted used a card supplied by NVIDIA that looked identical to the GTX 780, board partners won’t be using that for their wares. Rather, they are free to release their own custom designs, so in this section we will be looking at one such product: Gigabyte’s GTX 770 WindForce OC. We’ll actually be reviewing it in a separate article sometime after launch.


The GTX 770 WindForce OC uses Gigabyte’s well-regarded WindForce 3X heatsink and boasts higher clocks, but its price will likely fall directly into NVIDIA’s $399 target. It is also quite a bit longer than the reference GTX 680, GTX 780 and GTX 670 at around 11” but this shouldn’t cause an issue in most cases.


Like its predecessors, the latest version of Gigabyte’s WindForce cooler uses a trio of 80mm fans that are backstopped by six large direct-contact heatpipes and a full-length aluminum heatsink. Gigabyte claims this design is capable of dispersing 450W of heat so actually cooling the relatively efficient GK104 core shouldn’t be a problem.


Running along the card’s edge is a metal stiffener which keeps the heatsink’s significant weight evenly dispersed so it doesn’t bow the PCB. Gigabyte has actually gone a step further and turned this upwards to partially enclose the heatsink’s sides, providing a clean, great-looking finish.

We can also see that part of the GTX 770 WindForce OC’s substantial length is due to the cooler sitting partially over the PCB’s edge.


The connector layout on Gigabyte’s card mirrors every NVIDIA card from within the Kepler family. It uses a pair of DVI outputs alongside an HDMI port and DisplayPort, ensuring multi monitor compatibility. For power input, there is a simple 6+8 pin layout.


Gigabyte has also added a custom PCB to their card with an upgraded PWM area. Unlike past WindForce cards, this one uses a black PCB.
 
GeForce Experience's ShadowPlay & OCing Gets "Reasons"

GeForce Experience’s ShadowPlay


When NVIDIA first announced GeForce Experience, many enthusiasts just shrugged and moved on with their gaming lives. However, this deceptively simple looking piece of software could very well revolutionize PC gaming, allowing for high fidelity image quality without the need to tweak countless in-game settings.


For regular PC gamers, finding just the right settings which optimize a given hardware configuration’s performance is part of the fun. Unfortunately for novices and casual gamers who are used to the “load and play” mentality of console and tablet games, the process of balancing framerates and image quality can prove to be a daunting one. With GeForce Experience, NVIDIA takes the guesswork out of the equation by linking their software with a cloud-based service which uses a broadly established database to automatically find the best possible in-game settings for your hardware. This could potentially open up PC gaming to a much larger market.

GeForce Experience’s goals are anything but modest and judging from a highly successful Beta phase (over 2.5 million people downloaded the application), NVIDIA will likely begin rolling out the final version in the next few months. New features are being added in parallel with the core application's ongoing development in preparation for launch. GFE will soon be used as the backbone for NVIDIA’s SHIELD handheld gaming device and a brand new addition aptly named ShadowPlay has entered the picture too.


With recorded and live gaming sessions becoming hugely popular on video streaming services, ShadowPlay aims to offer a way to seamlessly log your onscreen activities without the problems of current solutions. Applications like FRAPS which have long been used for in-game recording are inherently inefficient since they tend to require a huge amount of resources, bogging down performance during situations when you need it the most. In addition, their file formats aren’t all that space conscious, with 1080P videos of over 10 minutes routinely eating up over a gigabyte of storage space.

By leveraging the compute capabilities of NVIDIA’s GeForce graphics cards, ShadowPlay can automatically buffer up to 20 minutes of previous in-game footage. In many ways it acts like a PVR by recording in the background using a minimum of resources, ensuring a gamer will never notice a significant performance hit when it is enabled. There is also a Manual function which can start and stop recording with the press of a hotkey. All videos are encoded in real time using H.264 / MPEG4 compression by some of the GPU’s compute modules, making for relatively compact files.
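The PVR analogy can be illustrated with a toy rolling buffer. This is not NVIDIA's implementation (ShadowPlay does its buffering and H.264 encoding on the GPU); it simply sketches the "keep only the last N minutes" behavior described above, with the class name and structure being our own invention:

```python
from collections import deque

class RollingFrameBuffer:
    """Toy PVR-style buffer: retains only the last `window_s` seconds of
    (timestamp, encoded_frame) pairs, mimicking ShadowPlay's shadow mode."""

    def __init__(self, window_s: float = 20 * 60):  # default: 20 minutes
        self.window_s = window_s
        self.frames = deque()

    def push(self, timestamp: float, encoded_frame: bytes) -> None:
        self.frames.append((timestamp, encoded_frame))
        # Evict anything that has fallen out of the rolling window.
        while self.frames and timestamp - self.frames[0][0] > self.window_s:
            self.frames.popleft()

    def save_clip(self) -> list:
        # "Save the last 20 minutes" simply dumps whatever is buffered.
        return list(self.frames)
```

Because old frames are discarded as new ones arrive, memory use stays bounded no matter how long the gaming session runs.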

Since ShadowPlay’s recording and encoding is processed on the fly, it can be done asynchronously to the onscreen framerate so there won’t be any FRAPS-like situations where you’ll need to game at 30 or 60 FPS when recording.


NVIDIA Adds “Reasons” to Overclocking


After TITAN launched, NVIDIA spent a good amount of time talking to overclockers in order to get their feedback about GPU Boost 2.0 and its impact upon clock speeds. From voltage to Power Limit to Temperature Limits, Boost imparts a large number of limiting factors onto core frequencies but enthusiasts had no way of knowing exactly which of these was limiting their overclocks. In order to give this much-needed information to us in a meaningful way, NVIDIA has implemented what they call “Reasons” into overclocking software. In layman’s terms, “Reasons” simply allows you to see what settings have to be modified in order to achieve higher overclocks and it does so in a brilliantly simple manner.


As we can see above, within the Monitoring tab of EVGA Precision there are now five new categories: Temp Limit, Power Limit, Voltage Limit, OV Max Limit and Utilization Limit. Each of these logs the information for a specific Boost modifier, all of which can artificially hold back clock speeds. The graphs are presented in such a way that a reading of “1” means that a limit is being reached while a “0” means there’s still some overhead. Just take note that getting a “1” in OV (over voltage) Max Limit is a serious red flag. It means the ASIC is overly stressed, possibly leading to core damage so either the voltage or clock speeds should be dialed back as soon as possible.

In the example above, our card is being held back by the Power Limit and Voltage Limit, so increasing both within Precision should theoretically lead to higher clock speeds. This is definitely helpful but with such stringent limitations being put on the Voltage and Power Limit modifications, one of these will always become the bottleneck regardless of how well a card is cooled.
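The interpretation logic is simple enough to sketch: each flag is a 1-or-0 reading, and whichever flags read "1" name the active limiter. The dictionary keys and advice strings below are our own hypothetical shorthand, not output from Precision itself:

```python
# Hypothetical snapshot of the five "Reasons" readings, where 1 means that
# limiter is currently capping the Boost clock and 0 means there's headroom.
def diagnose(flags: dict) -> list:
    advice = {
        "temp_limit": "raise the Temp Limit or improve cooling",
        "power_limit": "raise the Power Limit slider",
        "voltage_limit": "raise the voltage (within the allowed range)",
        "ov_max_limit": "WARNING: dial back voltage or clocks immediately",
        "utilization_limit": "GPU isn't fully loaded; no action needed",
    }
    return [advice[name] for name, hit in flags.items() if hit == 1]

print(diagnose({"temp_limit": 0, "power_limit": 1, "voltage_limit": 1,
                "ov_max_limit": 0, "utilization_limit": 0}))
```

With Power Limit and Voltage Limit both reading "1", the tool points straight at the two sliders worth adjusting.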

According to NVIDIA, this feature will be available in EVGA’s Precision, ASUS’ GPU Tweak, MSI’s AfterBurner and most other vendor-specific overclocking software.
 

Testing Methodologies & FCAT Explained

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: Gigabyte Z68X-UD3H-B3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 320.18 Beta
NVIDIA 320.14 Beta
AMD 13.5 Beta 2



*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Time Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values together gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
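The article's own 60-second, 4,000-frame example can be reproduced in a few lines (the per-second frame counts below are invented purely to total 4,000 frames; FRAPS logs real counts, of course):

```python
def fraps_summary(frames_per_second: list):
    """FRAPS-style min/avg/max FPS from one frame count per second of a run."""
    avg = sum(frames_per_second) / len(frames_per_second)
    return min(frames_per_second), avg, max(frames_per_second)

# A 60-second run totalling 4,000 frames, as in the example above:
counts = [66] * 40 + [68] * 20
lo, avg, hi = fraps_summary(counts)
print(lo, round(avg, 2), hi)  # 66 66.67 68
```

Note how little the three numbers reveal: a run with wild second-to-second swings could produce the exact same summary.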

FRAPS also has the capability to log average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs we use in the first part of this review. It does this by simply logging the reported framerate result once per second. However, in real world applications, a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can actually report them. So what can actually happen within each second of time? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool gets factored into this equation.
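A contrived example shows how a per-second log can mask a hitch the eye would easily catch (the frame times here are invented to sum to exactly one second):

```python
# One second of gameplay: 59 quick frames plus a single long hitch.
frame_times_ms = [14.0] * 59 + [174.0]

# The whole second still adds up to ~1000 ms, so a per-second FPS log
# happily reports 60 FPS...
print(sum(frame_times_ms))  # 1000.0
# ...even though one frame took 174 ms, a very visible stutter.
print(max(frame_times_ms))  # 174.0
```

This is exactly the kind of intra-second behavior that only per-frame measurement can expose.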

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The larger the amount of time, the longer each frame takes to render. This detailed reporting just isn’t possible with standard benchmark methods.


Frame Time Testing & FCAT

To put a meaningful spin on frame times, we can equate them directly to framerates. A constant 60 frames across a single second would lead to an individual frame time of 1/60th of a second or about 17 milliseconds, 33ms equals 30 FPS, 50ms is about 20FPS and so on. Contrary to framerate evaluation results, in this case higher frame times are actually worse since they would represent a longer interim “waiting” period between each frame.
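The conversion described above is a simple reciprocal, which we can verify directly:

```python
def ms_to_fps(frame_time_ms: float) -> float:
    """Equivalent instantaneous FPS for a given frame time in milliseconds."""
    return 1000.0 / frame_time_ms

def fps_to_ms(fps: float) -> float:
    """Frame time in milliseconds for a given constant framerate."""
    return 1000.0 / fps

print(round(fps_to_ms(60), 1))    # 16.7 -> ~17 ms per frame at 60 FPS
print(ms_to_fps(50.0))            # 20.0 -> 50 ms per frame is 20 FPS
print(round(ms_to_fps(33.3), 1))  # 30.0 -> ~33 ms per frame is 30 FPS
```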

With the milliseconds to frames per second conversion in mind, the “magical” maximum number we’re looking for is 28ms or about 35FPS. If too much time is spent above that point, performance suffers and the in-game experience will begin to degrade.

Consistency is a major factor here as well. Too much variation in adjacent frames could induce stutter or slowdowns. For example, spiking up and down from 13ms (75 FPS) to 28ms (35 FPS) several times over the course of a second would lead to an experience which is anything but fluid. However, even though deviations between slightly lower frame times (say 10ms and 25ms) wouldn’t be as noticeable, some sensitive individuals may still pick up a slight amount of stuttering. As such, the less variation the better the experience.
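A crude consistency check is simply the largest jump between adjacent frame times. The metric and the sample sequences below are our own illustration, using the 13ms/28ms flip-flop from the paragraph above:

```python
def max_adjacent_swing(frame_times_ms: list) -> float:
    """Largest jump (ms) between consecutive frames -- a rough stutter proxy."""
    return max(abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:]))

smooth = [16.7] * 6                            # locked at roughly 60 FPS
spiky = [13.0, 28.0, 13.0, 28.0, 13.0, 28.0]   # 75 FPS <-> 35 FPS flip-flop

print(max_adjacent_swing(smooth))  # 0.0 -> perfectly consistent pacing
print(max_adjacent_swing(spiky))   # 15.0 -> large swings, perceptible stutter
```

Both sequences could post similar average framerates, yet only the first would actually feel fluid.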

In order to determine accurate onscreen frame times, a decision has been made to move away from FRAPS and instead implement real-time frame capture into our testing. This involves the use of a secondary system with a capture card and an ultra-fast storage subsystem (in our case five SanDisk Extreme 240GB drives on an internal PCI-E RAID card), connected to our primary test rig via a DVI splitter. Essentially, the capture card records a high bitrate video of whatever is displayed from the primary system’s graphics card, allowing us to get a real-time snapshot of what would normally be sent directly to the monitor. By using NVIDIA’s Frame Capture Analysis Tool (FCAT), each and every frame is dissected and then processed in an effort to accurately determine latencies, frame rates and other aspects.

As you might expect, this is an overly simplified explanation of FCAT but expect our full FCAT article and analysis to be posted sometime in June. In the meantime, you can consider this article a transitional piece, though FCAT is being used for all testing with FRAPS being completely cast aside.
 

Assassin’s Creed III / Crysis 3

Assassin’s Creed III (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/RvFXKwDCpBI?rel=0" frameborder="0" allowfullscreen></iframe>​

The third iteration of the Assassin’s Creed franchise is the first to make extensive use of DX11 graphics technology. In this benchmark sequence, we proceed through a run-through of the Boston area which features plenty of NPCs, distant views and high levels of detail.


2560x1440



With Assassin's Creed III being an NVIDIA-optimized game, it should come as no surprise that the GTX 770 has a strong outing against AMD's HD 7970 GHz Edition. However, it excels against other NVIDIA cards too, easily surpassing the GTX 680, though it can't quite catch the $650 GTX 780.


Crysis 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/zENXVbmroNo?rel=0" frameborder="0" allowfullscreen></iframe>​

Simply put, Crysis 3 is one of the best looking PC games of all time and it demands a heavy system investment before even trying to enable higher detail settings. Our benchmark sequence for this one replicates a typical gameplay condition within the New York dome and consists of a run-through interspersed with a few explosions for good measure. Due to the hefty system resource needs of this game, post-process FXAA was used in the place of MSAA.


2560x1440



The GTX 770 once again shows moderate performance improvements over a reference GTX 680 and manages to just edge out AMD's flagship. At first, this may not seem all that impressive but we have to remember that NVIDIA's latest card retails for $399 rather than the $450 demanded by its immediate competition.
 
Dirt: Showdown / Far Cry 3

Dirt: Showdown (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/IFeuOhk14h0?rel=0" frameborder="0" allowfullscreen></iframe>​

Among racing games, Dirt: Showdown is somewhat unique since it deals with demolition-derby type racing where the player is actually rewarded for wrecking other cars. It is also one of the many titles which falls under the Gaming Evolved umbrella so the development team has worked hard with AMD to implement DX11 features. In this case, we set up a custom 1-lap circuit using the in-game benchmark tool within the Nevada level.


2560x1440



Much like Assassin's Creed favors NVIDIA, Dirt Showdown has been optimized from day one for AMD's architecture. This allows the HD 7970 GHz Edition to remain well out in front of the GTX 770. However, the real story here is the 770's improvement over the GTX 670, a card it is destined to replace in NVIDIA's product stack.


Far Cry 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/mGvwWHzn6qY?rel=0" frameborder="0" allowfullscreen></iframe>​

One of the best looking games in recent memory, Far Cry 3 has the capability to bring even the fastest systems to their knees. Its use of nearly the entire repertoire of DX11’s tricks may come at a high cost but with the proper GPU, the visuals will be absolutely stunning.

To benchmark Far Cry 3, we used a typical run-through which includes several in-game environments such as a jungle, in-vehicle and in-town areas.



2560x1440



Here, NVIDIA's GTX 770 is able to deliver some impressive results, staying well ahead of the GTX 680 and HD 7970 GHz Edition. In comparison against the GTX 670, it posts substantial framerate improvements.
 
Hitman Absolution / Max Payne 3

Hitman Absolution (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/8UXx0gbkUl0?rel=0" frameborder="0" allowfullscreen></iframe>​

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around, and this time Agent 47 goes rogue so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level, which is one of the most graphically-intensive areas of the entire game. It features an environment virtually bathed in rain and puddles, making for numerous reflections and complicated lighting effects.


2560x1440



This game certainly doesn't favor NVIDIA's cards but the GTX 770 still manages to provide excellent performance. It may lose to the HD 7970 GHz Edition but the GTX 670 is left in the dust.


Max Payne 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/ZdiYTGHhG-k?rel=0" frameborder="0" allowfullscreen></iframe>​

When Rockstar released Max Payne 3, it quickly became known as a resource hog and that isn’t surprising considering its top-shelf graphics quality. This benchmark sequence is taken from Chapter 2, Scene 14 and includes a run-through of a rooftop level featuring expansive views. Due to its random nature, combat is kept to a minimum so as to not overly impact the final result.


2560x1440



Unfortunately, the GTX 770 loses by a wide margin against the HD 7970 GHz Edition. For whatever reason, this game seems to be yet another one of AMD's strengths, despite it not being part of their Gaming Evolved program.
 
Tomb Raider

Tomb Raider (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/okFRgtsbPWE" frameborder="0" allowfullscreen></iframe>​

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. It is not only one of the most popular games around but also one of the best looking, using the entire bag of DX11 tricks to properly deliver an atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.


2560x1440



The GTX 770 once again loses by a narrow margin to the GHz Edition while managing to stay well ahead of the GTX 670. However, we can see that in situations like this one, the improvements made to this new card don't really add any noticeable performance gains over a GTX 680.
 
Onscreen Frame Times w/FCAT


When capturing output frames in real-time, there are a number of eccentricities which wouldn’t normally be picked up by FRAPS but are nonetheless important to take into account. For example, some graphics solutions can either partially display a frame or drop it altogether. While both situations may sound horrible, these so-called “runts” and dropped frames will be completely invisible to someone sitting in front of a monitor. However, since these are counted by its software as full frames, FRAPS tends to factor them into the equation nonetheless, potentially giving results that don’t reflect what’s actually being displayed.

With certain frame types being non-threatening to the overall gaming experience, we’re presented with a simple question: should the fine-grain details of these invisible runts and dropped frames be displayed outright or should we show a more realistic representation of what you’ll see on the screen? Since Hardware Canucks is striving to evaluate cards based upon an end-user experience rather than from a purely scientific standpoint, we decided on the latter of these two methods.

With this in mind, we’ve used the FCAT tools to add the timing of partially rendered frames to the latency of successive frames. Dropped frames meanwhile are ignored as their value is zero. This provides a more realistic snapshot of visible fluidity.
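The merging rule described above can be sketched as follows. This is our own simplified interpretation, not FCAT's actual code, and the runt threshold below is an arbitrary illustrative value (FCAT's real runt detection works on captured scanlines):

```python
def merge_runts(frame_times_ms: list, runt_threshold_ms: float = 0.5) -> list:
    """Fold runt frame times into the next full frame; drop zero-length
    (dropped) frames entirely, keeping only what a viewer would see."""
    merged, carry = [], 0.0
    for t in frame_times_ms:
        if t == 0.0:
            continue  # dropped frame: never displayed, contributes nothing
        if t < runt_threshold_ms:
            carry += t  # runt: add its sliver of time to the next frame
        else:
            merged.append(t + carry)
            carry = 0.0
    return merged

# A 0.3 ms runt and a dropped frame hidden among three normal frames:
print(merge_runts([16.0, 0.3, 15.0, 0.0, 17.0]))
```

The output charts only visible frames, which is why these graphs reflect perceived fluidity rather than raw pipeline output.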






With AMD's major frame time issues being concentrated around multi-card setups it shouldn't come as any surprise that their results here are mostly in-line with those of NVIDIA. However, Far Cry 3 remains a game where AMD just can't catch a break, allowing the GTX 770 to provide a much smoother gaming experience.
 
Onscreen Frame Times w/FCAT (pg.2)


When capturing output frames in real-time, there are a number of eccentricities which wouldn’t normally be picked up by FRAPS but are nonetheless important to take into account. For example, some graphics solutions can either partially display a frame or drop it altogether. While both situations may sound horrible, these so-called “runts” and dropped frames will be completely invisible to someone sitting in front of a monitor. However, since these are counted by its software as full frames, FRAPS tends to factor them into the equation nonetheless, potentially giving results that don’t reflect what’s actually being displayed.

With certain frame types being non-threatening to the overall gaming experience, we’re presented with a simple question: should the fine-grain details of these invisible runts and dropped frames be displayed outright or should we show a more realistic representation of what you’ll see on the screen? Since Hardware Canucks is striving to evaluate cards based upon an end-user experience rather than from a purely scientific standpoint, we decided on the latter of these two methods. With this in mind, we’ve used the FCAT tools to add the timing of runt frames to the latency of successive frames. Dropped frames meanwhile are ignored as their value is zero. This provides a more realistic snapshot of visible fluidity.





Much like the first four games we tested on the previous page, these don't show anything out of the ordinary with both cards essentially tying one another. They were smooth throughout testing so there's no concern here about AMD's telltale stutter issues.
 