
NVIDIA GTX 650 Ti Boost 2GB Review

SKYMTL (HardwareCanuck Review Editor, Staff member)
NVIDIA’s GTX 650 Ti and GTX 660 may be popular graphics cards in their own respective markets, but there is a significant performance and price gap between them. The new GTX 650 Ti Boost may have a mouthful of a name but, like AMD’s own HD 7790, it is meant to plug one of the most obvious gaps within the GeForce lineup with a card that can offer high framerates at 1080P without costing more than $175. Somewhat ironically, this is exactly what AMD’s newest card sets out to accomplish, but NVIDIA’s approach is very different. We hinted at a green team broadside in the HD 7790 review and this is it.

Instead of neatly slotting the GTX 650 Ti Boost into a convenient position without modifying their current lineup’s pricing structure, NVIDIA will be shuffling things around a bit. The GTX 650 Ti Boost will start at $169 for the 2GB version and $149 for an upcoming 1GB SKU, while incorporating $75 of in-game credits for certain Free to Play titles. Naturally, this necessitates some movement for the GTX 650 Ti, which is being lowered to a mere $129, while the GTX 650 drops to $109 and even the GTX 660 2GB moves to a new $199 price point.

With NVIDIA suddenly targeting areas where AMD previously dominated, the Radeon product stack now faces a world of issues. AMD just introduced their $149 HD 7790, a card which competes quite well against the GTX 650 Ti but suddenly finds itself an overpriced outsider. This isn’t an enviable situation for a card which hasn’t even become available to purchase yet, while NVIDIA’s latest addition is being hard launched today. To make matters even worse, the GTX 650 Ti Boost is actually meant to compete against the HD 7850, a GPU that currently goes for about $15 to $20 more.

GTX-650-BOOST-1212-3.jpg

The GTX 650 Ti Boost uses the same 28nm GK106 core as its sibling, the GTX 650 Ti, and has one of its SMs disabled as well, but that’s where the similarities with the lower-end version come to an end. Unlike the standard GTX 650 Ti, this one makes use of three 64-bit memory controllers for a 192-bit wide interface. Since the 1:1 ratio between ROPs and memory controllers remains in place, this also allows for an increase in ROP count to 24.

While the memory layout grants the GTX 650 Ti Boost a significant bandwidth advantage over its predecessor, NVIDIA’s addition of GeForce Boost into this equation is the real differentiating factor. Boost allows the engine clock to run at much higher frequencies provided there’s enough thermal and power overhead to do so. The previous GTX 650 Ti didn’t have this ability, meaning it was constrained to its reference clock at all times.

GTX-650-BOOST-1212-56.jpg

The GTX 650 Ti Boost takes some clock speed mojo from NVIDIA’s GTX 660 and melds it with a few tricks from the GTX 650 Ti’s architecture. This leads to base, boost and memory frequencies mirroring those found on the $199 card, while the CUDA core layout uses the same 768-core design as its less expensive brother. The clock speeds on our sample routinely hit the 1100MHz mark when taking advantage of GeForce Boost’s headroom. The resulting Frankenstein-like creation melds the best of both worlds without stepping on the toes of any other NVIDIA cards. Expect board partners’ products to hit a number of price points and clock speeds above these marks.

Mixed memory sizes also make a comeback, with the three controllers handling either 1GB or 2GB. The first GTX 650 Ti Boost SKU being introduced is the 2GB version, while the 1GB product should be available sometime in the first two weeks of April. We’ll have a review of the 1GB version in the coming weeks but expect it to be a perfect fit for the 1080P crowd.

Due to its clock speeds, memory allotment and the use of a partially disabled GK106 core, the Boost’s TDP comes in closer to the GTX 660’s. This shouldn’t be too much of an issue since 140W is still low enough to ensure its use in SFF systems while retaining a good performance per watt ratio in the mid-range market.

GTX-650-BOOST-1212-1.jpg

NVIDIA’s reference GTX 650 Ti Boost looks exactly like the GTX 660 and GTX 660 Ti, with a push / pull type heatsink configuration and a long black shroud. It also uses a single 6-pin power connector and adds SLI compatibility, something the non-Boost version doesn’t have. Just don’t expect too many retail products to reflect this configuration since most will come with custom heatsinks.

GTX-650-BOOST-1212-2.jpg

Flipping the card over, we can see that NVIDIA retained their ridiculously small PCB, hopefully allowing board partners to create designs tailored to the HTPC and SFF market. We can also see that NVIDIA has added a pair of slits behind the fan overhang, ensuring the cooling assembly is never starved for airflow.
 
Testing Methodologies Explained

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: Gigabyte Z68X-UD3H-B3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 314.21 Beta
AMD 13.2 Beta 7
AMD HD 7790 Beta



*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Time Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement from FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values, meanwhile, are simply the two data points representing the one-second intervals with the lowest and highest frame counts. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance, and it isn’t quite representative of what you’ll actually see on the screen.
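
To make that math concrete, here is a minimal sketch (in Python, with evenly spaced placeholder timestamps rather than real benchmark data) of how a per-second logger arrives at those minimum, average and maximum numbers:

[CODE]
from collections import Counter

def fps_summary(timestamps_ms):
    """FRAPS-style min/avg/max FPS from a list of per-frame timestamps (ms)."""
    duration_s = timestamps_ms[-1] / 1000.0
    avg_fps = len(timestamps_ms) / duration_s  # total frames / total seconds

    # Count how many frames landed inside each whole-second interval; the
    # lowest and highest counts become the "minimum" and "maximum" FPS.
    per_second = Counter(int(t // 1000) for t in timestamps_ms)
    return min(per_second.values()), avg_fps, max(per_second.values())

# 4,000 frames spread evenly across 60 seconds, one every 15ms, matching
# the worked example above.
frames = [i * 15.0 for i in range(4000)]
print(fps_summary(frames))  # -> (66, ~66.7, 67)
[/CODE]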

FRAPS also has the capability to log average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs we have begun to use. It does this by simply logging the reported framerate result once per second. However, in real world terms a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render each individual frame. Measuring the interval between frames allows for a detailed millisecond-by-millisecond evaluation rather than averaging things out over a full second. The larger the value, the longer that individual frame took to render. This detailed reporting just isn’t possible with standard benchmark methods.

While frame times are an interesting metric to cover, it’s important to put them into a more straightforward context as well. In its frametime analysis, FRAPS reports a timestamp (again in milliseconds) for each rendered frame, in the sequence in which it was rendered. For example, say Frame 20 occurred at 19ms and Frame 21 at 30ms of the benchmark run. Subtracting Frame 20’s timestamp from Frame 21’s tells us it took 11ms to render Frame 21. This method is repeated over and over again to show frame time consistency.
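
As a quick illustration, the subtraction amounts to nothing more than differencing consecutive entries of the log; a short Python sketch using the two example timestamps above:

[CODE]
def frame_times(timestamps_ms):
    """Per-frame render times (ms) from FRAPS' per-frame timestamps."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

# Frame 20 rendered at 19ms, Frame 21 at 30ms -> Frame 21 took 11ms.
print(frame_times([19, 30]))  # [11]
[/CODE]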

In order to put a meaningful spin on frame times, we can equate them directly to framerates. A constant 60 frames across a single second would lead to an individual frame time of 1/60th of a second, or about 17 milliseconds; 33ms equals 30FPS, 50ms is 20FPS and so on. Contrary to framerate evaluation results, higher frame times are actually worse in this case since they represent a longer “waiting” period between each frame.
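
The conversion itself is simply 1000 divided by the frame time in milliseconds; this tiny helper makes the equivalences above explicit:

[CODE]
def frametime_to_fps(ms):
    """Instantaneous framerate implied by a single frame time."""
    return 1000.0 / ms

for ms in (16.7, 33.0, 50.0):
    print(f"{ms}ms -> {frametime_to_fps(ms):.1f} FPS")
# 16.7ms -> 59.9 FPS, 33.0ms -> 30.3 FPS, 50.0ms -> 20.0 FPS
[/CODE]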

With the milliseconds to frames per second conversion in mind, the “magical” maximum number we’re looking for is 40ms, or 25FPS. If too much time is spent above that point, performance suffers and the in-game experience will begin to degrade.

Consistency is a major factor here as well. Too much variation between adjacent frames can induce stutter or slowdowns. For example, spiking up and down from 13ms (75 FPS) to 40ms (25 FPS) several times over the course of a second would lead to an experience which is anything but fluid. Even though deviations between slightly lower frame times (say 10ms and 30ms) wouldn’t be as noticeable, some sensitive individuals may still pick up a slight amount of stuttering. As such, the less variation, the better the experience.

Since the entire point of this exercise is to determine how much frame times vary within each second, we will see literally thousands of data points represented. So expect some truly epic charts.
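
For the curious, here is a small sketch of the two checks described above: counting frames past the 40ms threshold and flagging swings between adjacent frames. The thresholds mirror this section, while the sample data is invented for illustration.

[CODE]
def fluidity_report(frame_times_ms, slow_ms=40.0):
    """Count of frames slower than slow_ms plus the worst adjacent swing."""
    slow_frames = sum(1 for t in frame_times_ms if t > slow_ms)
    # Large jumps between neighbouring frames are what reads as stutter.
    worst_swing = max(abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:]))
    return slow_frames, worst_swing

# The 13ms (75 FPS) <-> 40ms (25 FPS) ping-pong from the consistency example:
times = [13, 40, 13, 40, 13, 40]
print(fluidity_report(times))  # (0, 27): nothing over 40ms, yet 27ms swings
[/CODE]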
 

Assassin’s Creed III / Crysis 3

Assassin’s Creed III (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/RvFXKwDCpBI?rel=0" frameborder="0" allowfullscreen></iframe>
The third iteration of the Assassin’s Creed franchise is the first to make extensive use of DX11 graphics technology. Our benchmark sequence is a run-through of the Boston area which features plenty of NPCs, distant views and high levels of detail.

GTX-650-BOOST-1212-37.jpg

GTX-650-BOOST-1212-30.jpg

For a $169 card, the GTX 650 Ti Boost posts some remarkable results in Assassin’s Creed III with performance that’s nearly 10% ahead of the $185 HD 7850 2GB. Meanwhile, its minimums nearly match those seen on a GTX 660.


Crysis 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/zENXVbmroNo?rel=0" frameborder="0" allowfullscreen></iframe>​

Simply put, Crysis 3 is one of the best looking PC games of all time and it demands a heavy system investment before even trying to enable higher detail settings. Our benchmark sequence for this one replicates a typical gameplay condition within the New York dome and consists of a run-through interspersed with a few explosions for good measure. Due to the hefty system resource needs of this game, post-process FXAA was used in place of MSAA.

GTX-650-BOOST-1212-38.jpg

GTX-650-BOOST-1212-31.jpg

Crysis 3 shows very much the same results we saw in Assassin’s Creed III, with the GTX 650 Ti Boost remaining well ahead of the HD 7850 while AMD’s new HD 7790 is nothing but a distant memory.
 
Dirt: Showdown / Far Cry 3

Dirt: Showdown (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/IFeuOhk14h0?rel=0" frameborder="0" allowfullscreen></iframe>
Among racing games, Dirt: Showdown is somewhat unique since it deals with demolition derby-style racing where the player is actually rewarded for wrecking other cars. It is also one of the many titles which fall under the Gaming Evolved umbrella, so the development team has worked closely with AMD to implement DX11 features. In this case, we set up a custom 1-lap circuit using the in-game benchmark tool within the Nevada level.

GTX-650-BOOST-1212-39.jpg

GTX-650-BOOST-1212-32.jpg

Dirt: Showdown’s use of certain AMD-optimized rendering paths seems to make short work of NVIDIA cards and puts a thorn in the GTX 650 Ti Boost’s side. As we’ll see throughout testing, this is the exception rather than the rule, but hopefully NVIDIA’s driver team will soon roll out some additional optimizations for this title.


Far Cry 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/mGvwWHzn6qY?rel=0" frameborder="0" allowfullscreen></iframe>​

One of the best looking games in recent memory, Far Cry 3 has the capability to bring even the fastest systems to their knees. Its use of nearly the entire repertoire of DX11’s tricks may come at a high cost but with the proper GPU, the visuals will be absolutely stunning.

To benchmark Far Cry 3, we used a typical run-through which includes several in-game environments such as a jungle, in-vehicle and in-town areas.


GTX-650-BOOST-1212-40.jpg

GTX-650-BOOST-1212-33.jpg

While this may be an AMD Gaming Evolved title, you wouldn’t know it from the GTX 650 Ti Boost’s performance. It is once again able to remain slightly ahead of the $185 HD 7850 2GB and leaves the HD 7790 in the dust. It also retains enough separation with the GTX 650 Ti to justify that card’s new lower price.
 
Hitman Absolution / Max Payne 3

Hitman Absolution (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/8UXx0gbkUl0?rel=0" frameborder="0" allowfullscreen></iframe>​

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around, and this time Agent 47 goes rogue, so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level, one of the most graphically intensive areas of the entire game. It features an environment virtually bathed in rain and puddles, making for numerous reflections and complicated lighting effects.

GTX-650-BOOST-1212-41.jpg

GTX-650-BOOST-1212-34.jpg

Ironically, the GTX 650 Ti Boost’s performance here suggests NVIDIA may want a gap-filler between their newest product and the GTX 650 Ti. That solution may be on its way with the eventual release of the Boost 1GB.


Max Payne 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/ZdiYTGHhG-k?rel=0" frameborder="0" allowfullscreen></iframe>​

When Rockstar released Max Payne 3, it quickly became known as a resource hog and that isn’t surprising considering its top-shelf graphics quality. This benchmark sequence is taken from Chapter 2, Scene 14 and includes a run-through of a rooftop level featuring expansive views. Due to its random nature, combat is kept to a minimum so as to not overly impact the final result.

GTX-650-BOOST-1212-42.jpg

GTX-650-BOOST-1212-35.jpg

Here we see an ever-so-slight misstep by NVIDIA’s graphics cards, as AMD seems to have found the secret mojo behind Max Payne 3. As a result, the GTX 650 Ti Boost falls back a few places but still remains well ahead of the HD 7790 1GB.
 
Tomb Raider

Tomb Raider (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/okFRgtsbPWE" frameborder="0" allowfullscreen></iframe>​

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. It is not only one of the most popular games around but also one of the best looking, using the entire bag of DX11 tricks to deliver a properly atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.


GTX-650-BOOST-1212-43.jpg

GTX-650-BOOST-1212-36.jpg

With Tomb Raider being such a recently released title, there is still a fair amount of optimization work being done by both AMD and NVIDIA. Currently, the GeForce cards are dominating, which allows the GTX 650 Ti Boost to pull well ahead of the HD 7850 2GB.
 

Frame Time Testing

Frame Time Testing


Please note that our complete frame time testing methodology can be found on the Methodologies page. For the purposes of this section, any frame time above 40ms (i.e. 25FPS or below) should be considered too slow to maintain a fluid in-game experience.

GTX-650-BOOST-1212-44.jpg

GTX-650-BOOST-1212-45.jpg

GTX-650-BOOST-1212-46.jpg

GTX-650-BOOST-1212-47.jpg

At the risk of sounding like a broken record: AMD’s solution may have slightly improved frame time latencies, but NVIDIA’s cards are clearly superior in this respect. There is a small slip-up on NVIDIA’s part within Dirt: Showdown, but the popular Far Cry 3 drives a stake into the HD 7850’s and HD 7790’s hearts time and again.
 

Frame Time Testing (pg.2)

Frame Time Testing (pg.2)


Please note that our complete frame time testing methodology can be found on the Methodologies page. For the purposes of this section, any frame time above 40ms (i.e. 25FPS or below) should be considered too slow to maintain a fluid in-game experience.

GTX-650-BOOST-1212-48.jpg

GTX-650-BOOST-1212-49.jpg

GTX-650-BOOST-1212-50.jpg

The final frame time tests reflect the results seen in the last batch. The GTX 650 Ti Boost provides a much smoother gameplay experience, particularly in Hitman Absolution and Tomb Raider. Hopefully, AMD can get a handle on this at some point since these issues degrade a gamer’s experience even in Gaming Evolved titles.
 

Temperatures & Acoustics / Power Consumption

Temperature Analysis


For all temperature testing, the cards were placed on an open test bench with a single 120mm 1200RPM fan placed ~8” away from the heatsink. The ambient temperature was kept at a constant 22°C (+/- 0.5°C). If the ambient temperature rose above 23°C at any time throughout the test, all benchmarking was stopped.

For Idle tests, we let the system idle at the Windows 7 desktop for 15 minutes and recorded the peak temperature.


GTX-650-BOOST-1212-54.jpg

The Boost’s reference heatsink isn’t all that efficient at dispersing heat from the core, but temperatures remain at manageable levels. Expect board partners to take up some of the slack here with custom coolers and even some upgraded stock heatsink designs.


Acoustical Testing


What you see below are the baseline idle dB(A) results for a relatively quiet open-case system (specs are in the Methodology section) sans GPU, along with the results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 12” away from the GPU’s fan. For the load scenarios, a loop of Unigine Valley is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.

GTX-650-BOOST-1212-52.jpg

As with most modern cards, the GTX 650 Ti Boost is relatively quiet in its reference form.


System Power Consumption


For this test we hooked up our power supply to a UPM power meter that logs the power consumption of the whole system twice every second. In order to stress the GPU as much as possible, we ran Unigine Valley on a loop for 15 minutes, while peak idle power consumption was determined by letting the card sit at a stable Windows desktop for 15 minutes.

Please note that after extensive testing, we have found that simply plugging in a power meter to a wall outlet or UPS will NOT give you accurate power consumption numbers due to slight changes in the input voltage. Thus we use a Tripp-Lite 1800W line conditioner between the 120V outlet and the power meter.
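
As a rough illustration of how a 2Hz log like this gets reduced to the peak idle and load figures we chart, here is a short Python sketch; the wattage values and window boundaries below are placeholders, not our actual data:

[CODE]
def peak_watts(samples, start_s, end_s, hz=2):
    """Peak draw within [start_s, end_s) seconds of a fixed-rate power log."""
    return max(samples[int(start_s * hz):int(end_s * hz)])

# Fabricated log: 15 minutes of idle followed by 15 minutes of Unigine Valley.
log = [92.0, 91.5, 93.0] * 600 + [245.0, 250.5, 248.0] * 600
print(peak_watts(log, 0, 900))     # peak idle draw -> 93.0
print(peak_watts(log, 900, 1800))  # peak load draw -> 250.5
[/CODE]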

GTX-650-BOOST-1212-55.jpg

While the Boost uses a GK106 core with a single SM disabled and high clock speeds, its power consumption numbers were actually quite surprising. Both it and the GTX 660 are rated for a TDP of about 140W, and yet NVIDIA’s newest card consumes significantly less. This could be due to anything from a lower leakage core to the disabled SM having a cumulative impact upon efficiency.
 

Overclocking Results

Overclocking Results


Overclocking the GTX 650 Ti Boost followed the natural progression of most other Kepler-based cards. This involves increasing the Boost Offset, Memory Offset and Power Target within EVGA’s Precision software. For those wondering, the maximum Power Target on this card is currently 110%, though certain “unlocked” software will likely push it beyond that.

In order to dial in the clock speeds, we used a combination of Precision and EVGA’s OC Scanner. However, the actual core voltage wasn’t touched, even though Precision allows for up to 1.15V, up from the reference 962mV, while the fan speed was set to a constant 50%.

With the settings detailed above, the Boost achieved decent clock speeds of 1205MHz on the core and 6544MHz on the memory. The core value actually represents the AVERAGE core frequency in-game, measured across our entire testing suite. We saw it climb to 1265MHz at some points, but those excursions were few and far between.
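
For those wondering how a suite-wide average clock is distilled, the arithmetic is nothing fancier than pooling the logged clocks from every game; the titles and MHz values below are placeholders rather than our recorded data:

[CODE]
# Hypothetical per-title clock logs (MHz); not actual recorded data.
clock_logs = {
    "Game A": [1215, 1202, 1189],
    "Game B": [1265, 1198, 1205],  # rare excursions like 1265MHz land here
}
samples = [mhz for log in clock_logs.values() for mhz in log]
print(sum(samples) / len(samples))  # suite-wide average core clock, ~1212MHz
[/CODE]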

Naturally, the increased clock speeds all but guarantee the GTX 650 Ti Boost will nearly catch up to a stock GTX 660 2GB.

GTX-650-BOOST-1212-57.jpg


GTX-650-BOOST-1212-58.jpg
 