The NVIDIA GTX 1050 Ti & GTX 1050 Review

Author: SKYMTL (Editor-in-Chief)
Date: October 24, 2016
Product Name: GTX 1050 / GTX 1050 Ti

Exactly a week ago NVIDIA announced the impending release of their GTX 1050 Ti and GTX 1050 into a budget gaming market which has been bereft of competition for the better part of two years. At that time we didn't know exactly where these two cards would land, since performance results needed to be kept close to our collective chests until today. Now that I have finally been able to digest all of the information, I can wholeheartedly say that if you are looking for a $99 to $150 GPU, reading the entirety of this review is a necessity.

Much like AMD, though to a significantly lesser extent, NVIDIA's lower cost GPU lineup was made up almost entirely of previous generation offerings. While both the Maxwell and Pascal architectures benefited from high end refreshes, the GTX 750 and GTX 750 Ti have remained untouched in their positioning for the better part of two years. Indeed, even those highly competitive (for the time) cards were launched at $120 and $150 respectively, and NVIDIA's options below that were relegated to OEM-focused options like the GT 740. That's about to change since the new GeForce lineup is about to add two new entrants which fight to reduce the cost of entry-level GPUs while taking performance per dollar to a whole new level.

Typically the under-$150 market segment sees a race to the bottom of the performance barrel with cards offering truly pathetic value. The GTX 1050 Ti and GTX 1050 aim to change that by leveraging the Pascal architecture's inherent strengths of efficiency, performance and broad-scale optimizations into something that approaches acceptability for budget-conscious gamers.

Efficiency will continually play a large role in this review since it is actually the largest contributing factor to the GTX 1050-series' success and pricing. You see, the new GP107 core produces such a small amount of heat and requires so little power that board partners don't need to install extravagant heatsinks to keep it cool or even add a PCI-E power connector, since the cards will draw all they need directly from the motherboard. This effectively lowers BOM costs and allows these two new cards to sell for less than they otherwise would.

The master of ceremonies directing things behind the scenes is a new 3.3 billion transistor, 132mm² GP107 core. It really is a thing to marvel at: through the use of a 16nm FinFET manufacturing process, NVIDIA's engineers have been able to jam in almost twice the number of transistors as the GTX 750's GM107 core while reducing the overall footprint by about 10%.

In its most basic presentation, the GP107 has six Streaming Multiprocessors which account for 768 CUDA cores and 48 texture units alongside two Raster Engines. On the back-end there's a 4×32-bit memory controller layout which feeds the 7Gbps GDDR5 modules. Now while this may not sound like much, it's backed up by NVIDIA's robust color compression algorithms which allow for more efficient use of available bandwidth.

While the GTX 1050 Ti receives the full monty GP107, its less expensive sibling the GTX 1050 has a single SM cut for a total of 640 cores. There isn't any associated memory interface reduction, but the lower priced card will only come with a maximum of 2GB of GDDR5 versus the 4GB of its bigger brother. This was likely done to maximize core yields, but NVIDIA did prop this card up by allowing it to run at higher speeds, so in situations where its 2GB framebuffer and lower core count won't become a bottleneck, the GTX 1050 should be a very close competitor.

When looking at the pure on-paper specifications, I wouldn't blame you if there was some disappointment about how these new cards line up against their direct predecessor, the GTX 950. With its 768 cores, 48 TMUs and 32 ROPs, NVIDIA's GTX 1050 Ti is literally a carbon copy (not counting the fact the GTX 950 was based on a much larger, more ungainly core) while the GTX 1050 actually makes do with even fewer resources. However, there's more going on here than first meets the eye since the Pascal architecture has quite a few tricks up its sleeve that Maxwell didn't have. Both consume just 75W, which will facilitate drop-in upgrades, though some board partners may choose to include a dedicated 6-pin connector for more overclocking headroom.

With that being said, the true upgrade path towards the GTX 1050-series will likely be from a GTX 750 or GTX 650 class product. In those cases the GTX 1050 Ti and GTX 1050 actually cost LESS than the original buyers spent on their cards yet offer a whole new level of performance, not to mention much broader support for DX12 and Vulkan features. It is those users NVIDIA is targeting with this launch, though folks using a GTX 950 may still see a good framerate uplift in certain situations.

The pricing on this particular launch is extremely competitive to say the least, and AMD has already reacted in the only way they can. They've reduced the RX 460 2GB's cost from $109 to $99 in an effort to entice folks over to Team Red. The RX 470 4GB's price has also been marched over to the chopping block and it's now going for $169, which is actually quite enticing… but more about that later.

Unlike past releases of Pascal GPUs, this one rests entirely upon the shoulders of NVIDIA's board partners since there won't be any reference designs / Founders Editions. On one hand, that will ensure a good variety of options at retailers from day one. Unfortunately, this also means the GTX 1050 Ti and GTX 1050's actual pricing will be determined not by NVIDIA but rather by outside influences.

Let's take this ASUS GTX 1050 Ti DUAL as an example. It boasts reference clock speeds and a dual-fan cooler with a very basic aluminum heatsink and no additional 6-pin connector. Its price? $159. That's a $20 premium which pushes this card right up against the RX 470 and, as you will see in the results, that's a battle it will lose every time. Luckily there will be plenty of options out there sitting at the $139 price point (I've already received confirmations from EVGA, Gigabyte and PNY) so at least this sample's performance will give you some idea of what to expect.

On the GTX 1050 end, EVGA’s single fan Superclocked card is just what the doctor ordered. It is compact, goes for $119 and will be widely available at launch. In essence this is the quintessential GTX 1050 and it does come with slightly higher clock speeds of 1417MHz and 1531MHz for the Base and Boost respectively. It too comes without a 6-pin power input so drop-in installation won’t be a problem in most older systems.

When taken purely at face value the GTX 1050 Ti and GTX 1050 have already lowered the pricing bar for entry-level PC gamers if AMD’s price cuts are anything to go by. However, they’ll need to strike a very delicate balance between cost, performance and power consumption if there’s hope of success.

Test System & Setup

Processor: Intel i7 5960X @ 4.3GHz
Memory: G.Skill Trident X 32GB @ 3000MHz 15-16-16-35-1T
Motherboard: ASUS X99 Deluxe
Cooling: NH-U14S
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell U2713HM (1440P) / Acer XB280HK (4K)
OS: Windows 10 Pro

Drivers:
AMD Radeon Software 16.10.2 Hotfix
NVIDIA 375.57 Beta

*Notes:

– All games tested have been patched to their latest version

– The OS has had all the latest hotfixes and updates installed

– All scores you see are the averages after 3 benchmark runs

– All IQ settings were adjusted in-game and all GPU control panels were set to use application settings

The Methodology of Frame Testing, Distilled

How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there's a lot more going on behind the scenes which a basic frames-per-second measurement from FRAPS or a similar application just can't show. A good example of this is how "stuttering" can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS' frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values, meanwhile, are simply two data points representing the single-second intervals which took the longest and shortest amount of time to render. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance and it isn't quite representative of what you'll actually see on the screen.
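The arithmetic behind that min/max/average method can be sketched in a few lines of Python; the per-second frame counts below are hypothetical values, not data from this review:

```python
# Sketch of FRAPS-style min/max/average math over per-second frame counts.
frames_per_second = [70, 66, 59, 72, 68]

total_frames = sum(frames_per_second)       # total frames rendered
duration_s = len(frames_per_second)         # benchmark length in seconds

average_fps = total_frames / duration_s     # total frames / benchmark length
minimum_fps = min(frames_per_second)        # slowest one-second interval
maximum_fps = max(frames_per_second)        # fastest one-second interval

print(average_fps, minimum_fps, maximum_fps)  # 67.0 59 72
```

Note how the average alone hides everything that happened inside each of those seconds, which is exactly the blind spot frame-time testing addresses.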

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the "FPS over time" graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much more quickly than this method can report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool factors into the equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond-by-millisecond evaluation of frame times rather than averaging things out over a full second. The larger the frame time, the longer each frame takes to render. This detailed reporting just isn't possible with standard benchmark methods.
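To see why this matters, consider the sketch below. The frame times are hypothetical, and the "twice the median" stutter threshold is an illustrative heuristic rather than any tool's actual rule, but it shows how a single 45ms frame hiding among ~17ms frames registers in frame-time data while a per-second average would smooth it over entirely:

```python
# Hypothetical per-frame render times in milliseconds.
frame_times_ms = [16.7, 16.5, 45.0, 16.8, 17.1, 16.6]

# Instantaneous framerate for each individual frame: 1000 ms / frame time.
instantaneous_fps = [1000.0 / t for t in frame_times_ms]

# Simple stutter heuristic (illustrative): flag any frame that takes more
# than twice as long as the median frame time.
median_ms = sorted(frame_times_ms)[len(frame_times_ms) // 2]
stutter_frames = [t for t in frame_times_ms if t > 2 * median_ms]

print(stutter_frames)  # [45.0]
```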

We are now using FCAT for ALL benchmark results in DX11.

DX12 Benchmarking

For DX12 many of these same metrics can be utilized through a simple program called PresentMon. Not only does this program have the capability to log frame times at various stages throughout the rendering pipeline but it also grants a slightly more detailed look into how certain API and external elements can slow down rendering times.

Since PresentMon generates massive amounts of frametime data, we have decided to distill the information down into slightly more easy-to-understand graphs. Within them, we have taken several thousand datapoints (in some cases tens of thousands), converted the frametime milliseconds over the course of each benchmark run to frames per second and then graphed the results. This gives us a straightforward framerate-over-time graph. Meanwhile, the typical bar graph averages out every data point as it's presented. It should also be noted that in order to give results as close as possible to what the human eye will "see", we DO NOT use the
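The distillation step described above, converting a raw frame-time log into a framerate-over-time series, can be sketched by bucketing frames into one-second bins and counting how many land in each. The frame times below are hypothetical, not output from our PresentMon captures:

```python
# Hypothetical frame-time log in milliseconds: one second at ~50 FPS,
# one at ~100 FPS, one at ~40 FPS.
frame_times_ms = [20.0] * 50 + [10.0] * 100 + [25.0] * 40

fps_over_time = []        # one entry per elapsed second of the run
elapsed_ms = 0.0
second_boundary = 1000.0
frames_in_second = 0

for t in frame_times_ms:
    elapsed_ms += t
    frames_in_second += 1
    if elapsed_ms >= second_boundary:     # crossed into the next second
        fps_over_time.append(frames_in_second)
        second_boundary += 1000.0
        frames_in_second = 0

print(fps_over_time)  # [50, 100, 40]
```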

One thing to note is that our DX12 PresentMon results cannot and should not be directly compared to the FCAT-based DX11 results. They should be taken as a separate entity and discussed as such.

Battlefield 1

Battlefield 1 will likely become known as one of the most popular multiplayer games around, but it also happens to be one of the best looking titles available. It is also extremely well optimized, with even the lowest end cards having the ability to run at high detail levels.

In this benchmark we use a run-through of The Runner level after the dreadnought barrage is complete and you need to storm the beach. This area includes all of the game's hallmarks in one condensed area with fire, explosions, debris and numerous other elements layered over one another for some spectacular visual effects.


Deus Ex – Mankind Divided

Deus Ex titles have historically combined excellent storytelling elements with action-forward gameplay and Mankind Divided is no different. This run-through uses the streets and a few sewers of the main hub city, Prague, along with a short action sequence involving gunplay and grenades.


Doom (OpenGL)

Not many people saw a new Doom as a possible Game of the Year contender but that’s exactly what it has become. Not only is it one of the most intense games currently around but it looks great and is highly optimized. In this run-through we use Mission 6: Into the Fire since it features relatively predictable enemy spawn points and a combination of open air and interior gameplay.


Fallout 4

The latest iteration of the Fallout franchise is a great looking game with all of its details turned up to their highest levels, but it also requires a huge amount of graphics horsepower to run properly. For this benchmark we complete a run-through from within a town, shoot up a vehicle to test performance when in combat and finally end atop a hill overlooking the town. Note that VSync has been forced off within the game's .ini file.


Far Cry 4

This entry in Ubisoft's Far Cry series picks up where the others left off by boasting some of the most impressive visuals we've seen. In order to emulate typical gameplay we run through the game's main village, head out through an open area and then transition to the lower areas via a zipline.


Grand Theft Auto V

In GTA V we take a simple approach to benchmarking: the in-game benchmark tool is used. However, due to the randomness within the game itself, only the last sequence is actually used since it best represents gameplay mechanics.


Hitman (2016)

The Hitman franchise has been around in one way or another for the better part of a decade and this latest version is arguably the best looking. Adjustable to both DX11 and DX12 APIs, it has a ton of graphics options, some of which are only available under DX12.

For our benchmark we avoid using the in-game benchmark since it doesn’t represent actual in-game situations. Instead the second mission in Paris is used. Here we walk into the mansion, mingle with the crowds and eventually end up within the fashion show area.


Overwatch

Overwatch happens to be one of the most popular games around right now and while it isn’t particularly stressful upon a system’s resources, its Epic setting can provide a decent workout for all but the highest end GPUs. In order to eliminate as much variability as possible, for this benchmark we use a simple “offline” Bot Match so performance isn’t affected by outside factors like ping times and network latency.


Rise of the Tomb Raider

Another year and another Tomb Raider game. This time Lara’s journey continues through various beautifully rendered locales. Like Hitman, Rise of the Tomb Raider has both DX11 and DX12 API paths and incorporates a completely pointless built-in benchmark sequence.

The benchmark run we use is within the Soviet Installation level, where we start at about the midpoint, run through a warehouse with some burning debris and then finish inside a fenced-in area during a snowstorm.


The Division

The Division has some of the best visuals of any game available right now even though its graphics were supposedly downgraded right before launch. Unfortunately, actually benchmarking it is a challenge in and of itself. Due to the game’s dynamic day / night and weather cycle it is almost impossible to achieve a repeatable run within the game itself. With that taken into account we decided to use the in-game benchmark tool.


Warhammer: Total War

Unlike some of the latest Total War games, the hotly anticipated Warhammer title has been relatively bug free, performs well on all systems and still incorporates the level of detail and graphics fidelity this series is known for. In this sequence we use the in-game benchmarking tool to play back one of our own 40 second gameplay sessions, which includes two maxed-out armies and all of the elements normally seen in standard gameplay. That means zooms and pans are used to pivot the camera and get a better view of the battlefield.


Witcher 3

Other than being one of 2015’s most highly regarded games, The Witcher 3 also happens to be one of the most visually stunning as well. This benchmark sequence has us riding through a town and running through the woods; two elements that will likely take up the vast majority of in-game time.


Battlefield 1

Battlefield 1 will likely become known as one of the most popular multiplayer games around, but it also happens to be one of the best looking titles available. It is also extremely well optimized, with even the lowest end cards having the ability to run at high detail levels.

In this benchmark we use a run-through of The Runner level after the dreadnought barrage is complete and you need to storm the beach. This area includes all of the game's hallmarks in one condensed area with fire, explosions, debris and numerous other elements layered over one another for some spectacular visual effects.


Deus Ex – Mankind Divided

Deus Ex titles have historically combined excellent storytelling elements with action-forward gameplay and Mankind Divided is no different. This run-through uses the streets and a few sewers of the main hub city, Prague, along with a short action sequence involving gunplay and grenades.


Doom (OpenGL)

Not many people saw a new Doom as a possible Game of the Year contender but that’s exactly what it has become. Not only is it one of the most intense games currently around but it looks great and is highly optimized. In this run-through we use Mission 6: Into the Fire since it features relatively predictable enemy spawn points and a combination of open air and interior gameplay.


Gears of War 4

Like many of the other exclusive DX12 games we have seen, Gears of War 4 looks absolutely stunning and seems to be highly optimized to run well on a variety of hardware. In this benchmark we use Act III, Chapter III The Doorstep, a level that uses wide open views along with several high fidelity environmental effects. While Gears does indeed include a built-in benchmark we didn’t find it to be indicative of real-world performance.


Hitman (2016)

The Hitman franchise has been around in one way or another for the better part of a decade and this latest version is arguably the best looking. Adjustable to both DX11 and DX12 APIs, it has a ton of graphics options, some of which are only available under DX12.

For our benchmark we avoid using the in-game benchmark since it doesn’t represent actual in-game situations. Instead the second mission in Paris is used. Here we walk into the mansion, mingle with the crowds and eventually end up within the fashion show area.


Quantum Break

Years from now people likely won’t be asking if a GPU can play Crysis, they’ll be asking if it was up to the task of playing Quantum Break with all settings maxed out. This game was launched as a horribly broken mess but it has evolved into an amazing looking tour de force for graphics fidelity. It also happens to be a performance killer.

Though finding an area within Quantum Break to benchmark is challenging, we finally settled upon the first level where you exit the elevator and find dozens of SWAT team members frozen in time. It combines indoor and outdoor scenery along with some of the best lighting effects we’ve ever seen.


Rise of the Tomb Raider

Another year and another Tomb Raider game. This time Lara’s journey continues through various beautifully rendered locales. Like Hitman, Rise of the Tomb Raider has both DX11 and DX12 API paths and incorporates a completely pointless built-in benchmark sequence.

The benchmark run we use is within the Soviet Installation level, where we start at about the midpoint, run through a warehouse with some burning debris and then finish inside a fenced-in area during a snowstorm.


Warhammer: Total War

Unlike some of the latest Total War games, the hotly anticipated Warhammer title has been relatively bug free, performs well on all systems and still incorporates the level of detail and graphics fidelity this series is known for. In this sequence we use the in-game benchmarking tool to play back one of our own 40 second gameplay sessions, which includes two maxed-out armies and all of the elements normally seen in standard gameplay. That means zooms and pans are used to pivot the camera and get a better view of the battlefield.


Analyzing Temperatures & Frequencies Over Time

Modern graphics card designs make use of several advanced hardware- and software-facing algorithms in an effort to hit an optimal balance between performance, acoustics, voltage, power and heat output. Traditionally this leads to maximized clock speeds within a given set of parameters. Conversely, if one of those last two metrics (heat and power consumption) steps into the equation in a negative manner, it is quite likely that voltages and resulting core clocks will be reduced to ensure the GPU remains within design specifications. We've seen this happen quite aggressively on some AMD cards while NVIDIA's reference cards also tend to fluctuate their frequencies. To be clear, this is a feature by design rather than a problem in most situations.

In many cases clock speeds won't be touched until the card in question reaches a preset temperature, whereupon the software and onboard hardware will work in tandem to carefully regulate other areas such as fan speeds and voltages to ensure maximum frequency output without an overly loud fan. Since this algorithm typically doesn't kick into full force in the first few minutes of gaming, the "true" performance of many graphics cards won't be realized through a typical 1-3 minute benchmarking run. That is why we use a 10-minute warm-up period before all of our benchmarks.
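The feedback loop described above can be caricatured in a few lines. To be clear, the target temperature, power limit and 13MHz step below are made-up illustrative numbers, not NVIDIA's actual GPU Boost parameters; this is only a sketch of the general shape of such regulation:

```python
# Toy model of temperature/power-driven clock regulation. All thresholds
# and step sizes are hypothetical, chosen only to illustrate the loop.
def regulate_clock(clock_mhz, temp_c, power_w,
                   temp_target_c=83, power_limit_w=75, step_mhz=13):
    """Return the next core clock given current temperature and power draw."""
    if temp_c > temp_target_c or power_w > power_limit_w:
        return clock_mhz - step_mhz   # back off to stay within design limits
    return clock_mhz + step_mhz       # headroom available: boost higher

# Simulate a short warm-up: the card boosts while cool, then throttles
# back as temperature and power draw climb.
clock = 1290
for temp_c, power_w in [(60, 55), (70, 65), (85, 74), (84, 76)]:
    clock = regulate_clock(clock, temp_c, power_w)

print(clock)  # 1290
```

This is also why a cold 1-3 minute benchmark run overstates sustained clocks: the loop only starts pulling frequencies down once the card has heated up.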

For now, let’s see how these new algorithms are used when the card is running at default speeds.

This should come as no surprise to anyone reading this review since I’ve already mentioned time and again how efficient NVIDIA’s Pascal architecture really is. It is however important to remember that the EVGA card in these charts does have a mild overclock. Does that matter? Absolutely not since both the EVGA and ASUS cards have more than enough cooling capacity to deal with the lightweight GP107 core’s output.

Those temperature results lead to some very consistent clock speeds throughout our time range though there are some small dips at the very beginning of testing. Nonetheless, both cards achieve clock speeds that are well above their stated specifications.

Fan speeds are nothing to worry about either since it looks like both EVGA and ASUS don't have to increase RPMs to achieve optimal temperatures.

Finally, consistent temperatures and frequencies obviously lead to some level-headed performance numbers.

Acoustical Testing

What you see below are the baseline idle dB(A) results attained for a relatively quiet open-case system (specs are in the Methodology section) sans GPU along with the attained results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 12” away from the GPU’s fan. For the load scenarios, Hitman Absolution is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.

As you may have already guessed, the ASUS GTX 1050 Ti Dual and EVGA GTX 1050 SC are both extremely quiet cards. Then again, there have been very few GPUs this generation that we can accuse of being overly loud. Hopefully this trend continues.

System Power Consumption

For this test we hooked up our power supply to a UPM power meter that will log the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we used 15 minutes of Unigine Valley running on a loop while letting the card sit at a stable Windows desktop for 15 minutes to determine the peak idle power consumption.

Now these results may require a bit of an explanation since they point directly towards NVIDIA being a leader (by a long shot) in power consumption metrics.

The GTX 1050 Ti and GTX 1050 actually share a lot in common here since even though the smaller sibling has an SM locked away, it still consumes about the same amount of power due to its higher frequencies. Naturally, the overclocked EVGA card requires a bit more juice but the few watts really aren’t anything to worry about, even for those of you without a modern PSU.

Against AMD cards, Pascal really shows its true colors. Both of these new GPUs outperform AMD's RX 460 2GB while requiring almost 20W less. That's nothing short of incredible and it highlights just how far behind the Polaris architecture really is.

Overclocking Results

I approached overclocking on these two cards with an open mind but there are actually some limitations here that had me running face first into a brick wall. Basically, the ASUS GTX 1050 Ti and EVGA GTX 1050 Superclocked barely had one ounce of overclocking headroom left in their tanks.

Now before you gasp in horror, let me explain something here. Neither of these samples has an auxiliary 6-pin connector and without that, they're limited to the amount of power the PCI-E slot can provide. Through GPU Boost, NVIDIA's engineers did their absolute best to maximize out-of-box clock speeds within that significant limitation, and the end result is consistent performance across the board but less room to push the cards beyond that.

Naturally, if you pay a bit more and buy a GTX 1050-series GPU with that 6-pin you can expect more headroom. However, without it neither the voltage nor the power limit can be increased.

So, for the record, out of the box the EVGA GTX 1050 hit 1721MHz and, overclocked, it fluctuated between 1746MHz and 1774MHz. Meanwhile ASUS' Dual card originally posted a frequency of 1660MHz while adjustments took it to about 1733MHz.

Conclusion

The launch of NVIDIA's GTX 1050 and GTX 1050 Ti comes at a key moment of 2016, right before the holiday shopping season but after AMD showed their own hand with the RX 470 and RX 460. For the most part these two new GeForce cards fill in the vast open space between the Radeon offerings by playing a smart game of avoidance rather than outright confrontation. However, that doesn't necessarily mean there isn't any competition since, given that this is a launch spearheaded by board partners, actual price points vary to an extreme degree.

Let’s start things off with the GTX 1050 Ti. Provided you don’t want to switch ecosystems from NVIDIA to AMD, I can’t imagine a better upgrade path for users of GTX 750 or GTX 650 series cards. The amount of additional horsepower you receive is nothing short of incredible, particularly in DX12 situations. Granted, folks with a GTX 950 won’t see much benefit when upgrading their card to a brand new GTX 1050 Ti but don’t forget that due to the pricing involved (the 950 was initially priced at $159) we’re likely to see them jump ship to the GTX 1060 3GB rather than head down-market.

From a raw competitive analysis standpoint the GTX 1050 Ti really does bridge the gap between AMD's RX 460 and RX 470… at least on paper. In addition, there's enough open sky between it and the GTX 1060-series that NVIDIA won't have to worry about cross contamination within their own lineup, a situation which has plagued the RX 470 and RX 480 4GB. This card really does feature the best of both worlds: the ability to provide maximum detail settings at 1080P while still maintaining drop-in compatibility with slightly older systems.

At the beginning of this review I mentioned efficiency as a highlight of the Pascal architecture and the GTX 1050 Ti once again proves that point. It (at times) vastly outstrips AMD's supposedly-efficient RX 460 yet requires much, much less input power. For high end systems where gamers have beastly power supplies, power consumption can be somewhat overlooked, but budget-minded buyers need to look at total system costs. If a GPU upgrade doesn't require a new PSU, that's half the battle won.

Moving on to the GTX 1050, it's obvious this card was tailor made to compete directly against the RX 460. Even though AMD quickly cut their card's price in preparation for the deluge of 1050-series reviews, a bit more snipping may be required. There's a slim $10 difference between these two cards but NVIDIA's offering reigns supreme in nearly every situation. But that "nearly" word does come into play here because…

In previous reviews I've mentioned that NVIDIA's offerings struggle to maintain their commanding leads when moving from DX11 to DX12 scenarios and that situation hasn't changed one iota here. Our numbers don't lie: the lead enjoyed by the GTX 1050 over the RX 460 2GB pretty much evaporates into thin air and the RX 470 extends its already-large lead over the GTX 1050 Ti. With a total of seven titles in my DX12 suite it's now a little easier to get a handle on what the future may hold, and that future seems to be a real dog-fight down in the trenches between the red and green teams.

The GTX 1050 and GTX 1050 Ti’s ultimate success or failure will rest upon the laurels of board partners and that’s both a relief (no Founders Edition!) and a potential cause for concern. While I’m assured there will indeed be plenty of examples at $109 and $139, neither of the samples I initially received actually hit those prices. The ASUS GTX 1050 Ti Dual goes for $159 despite its reference clocks and EVGA’s GTX 1050 Superclocked boasts an asking price of $119.

As a matter of fact, our YouTube editor Eber has a Gigabyte GTX 1050 Ti G1 Gaming on hand which will retail for a cool $169. $169! I can’t begin to tell you how poor those metrics look; a GTX 1050 Ti at $139 or even $149 makes perfectly good sense but paying upwards of that for moderately increased frequencies should be avoided at all costs, particularly with the RX 470 hovering at $169. I’ll actually make this choice easy: if you want to spend $160 to $180 on a GPU, the RX 470 4GB is the hands-down winner provided it ends up hitting AMD’s newly reduced MSRP.

EVGA’s GTX 1050 SC on the other hand doesn’t suffer all that much due to its $10 premium since it does have higher clock speeds and includes a ridiculously compact chassis. Luckily I was able to get a reference BIOS for comparison and while the frequency uplift won’t grant noticeable performance uplifts, ten bucks doesn’t vault it into a higher end category like I described with the ASUS and Gigabyte cards above.

One thing that will be a bone of contention for some will be the GTX 1050-series' severely truncated overclocking headroom on cards without an auxiliary power connector. It looks like NVIDIA did their absolute best to maximize clock speeds within the limited output current of a 75W PCI-E slot and that means there's precious little overhead for user-facing frequency adjustments. Stepping up to a model with a dedicated 6-pin power connector will likely change that (I heard stories of nearly 2GHz being achievable on some samples) but not on either of the two cards I have on hand right now.

The moral of the GTX 1050-series story is very much the same as any other launch: a near-perfect combination of price and performance is certainly achievable but not guaranteed. Choose incorrectly and you're throwing money out the window for minimal benefit. For the GTX 1050 Ti and GTX 1050 that means sticking to a choice that is no more than a $10 premium over NVIDIA's "reference" prices of $139 and $109 respectively. Hit that point and both of these cards offer awesome bang for your buck as upgrades. Go above that and you'll have my condolences, since you should have looked elsewhere, especially if DX12 is a necessity.
