
The GTX 750 Ti Review; Maxwell Arrives

SKYMTL, HardwareCanuck Review Editor
NVIDIA’s Kepler architecture has been around for some time now. While it may be hard to believe, we were introduced to Kepler in GTX 680 guise back in March of 2012, which makes it nearly two years old. There have been some revisions since then, allowing the architecture to deliver better performance per watt and better compete against AMD’s alternatives. Now the time has come for NVIDIA to unveil their next generation DX11 architecture, codenamed Maxwell.

Unlike previous launches, Maxwell isn’t being rolled out into the high end market but will initially target volume sales within the $99 to $149 price points via a new 1.87 billion transistor GM107 core. This focus on budget-minded gamers may sound like an odd decision on NVIDIA’s part, but they feel the current flagship GeForce parts compete well against AMD’s product stack (and they do), so there’s more than enough time to fine-tune Maxwell for other applications. Make no mistake about it though: GM107 is simply a pipe-cleaner part that is meant to test a new architecture and prepare the way for higher end products in the near future.

In contrast to AMD’s scattershot approach of flooding the mid-range with a deluge of closely priced cards like the R7 260X, R7 265 and R9 270, NVIDIA is taking a more measured path. The GM107 will first be rolled into two different SKUs: the GTX 750 Ti and GTX 750.

Even though it uses a tried and true 28nm manufacturing process, Maxwell’s primary goal is to deliver groundbreaking improvements in performance per watt, essentially allowing NVIDIA’s engineers to do more with less. This is particularly important for the GTX 750 Ti and GTX 750 since they will fit perfectly into older prebuilt systems without needing an expensive and sometimes complicated power supply swap.

GTX-750-TI-REVIEW-49.JPG

The GTX 750 Ti is equipped with 640 CUDA cores, 40 texture units, 16 ROPs and 2GB of GDDR5 operating across a 128-bit memory interface. The Maxwell architecture is particularly effective in the TDP department, allowing the GTX 750 Ti to hit just 60W (the actual power requirement is closer to 65W). Yes, you read that right: 60W, or a bit more than half of what a GTX 650 Ti requires. Such low power numbers don’t hold back frequencies either; the Base Clock sits at 1020MHz and, due to NVIDIA’s purposeful underrating of core speeds, most users will likely see their cards operate at 1050MHz or higher.

Remember what we said about doing more with less? That’ll be the common thread binding Maxwell products together, and there’s no better example of it than the GTX 750 Ti. It is meant to drastically outpace the GTX 650 Ti even though it is equipped with fewer cores, fewer texture units and a similar memory interface layout, all while requiring a lot less power. Those improvements may seem impossible when looking at the paper specifications, but they come down to large-scale architectural changes going on behind the scenes. We’ll get into those a bit later.

Back when the R7 265 was reviewed, we mentioned that with the GTX 650 Ti Boost’s discontinuation, NVIDIA had a gaping hole in their lineup between the GTX 650 Ti and GTX 660 2GB. The GTX 750 Ti will now take over from the Boost at $149, exactly the same price as the now-EOL’d mid-tier darling. Those are some big shoes to fill since the GTX 650 Ti Boost was considered one of the previous generation’s best values, combining a low price (particularly after NVIDIA’s aggressive price cuts late last year) with excellent 1080P performance. NVIDIA will also be launching a GTX 750 Ti 1GB at a slightly lower price point of $139 a bit later this month.

The GTX 750 on the other hand uses the same GM107 core but with one Streaming Multiprocessor disabled, resulting in lower core and TMU counts. Core frequencies will remain the same, though the memory allotment and speeds will be scaled back. At $119 and with a TDP of just 55W, the GTX 750 could become the ultimate HTPC and SFF card.

GTX-750-TI-REVIEW-4.JPG

The GTX 750 Ti and GTX 750 will work in tandem to replace the GTX 650 Ti (the GTX 650 will stick around for now) and offer some form of response to AMD’s packed lineup. As 650 Ti replacements they look like a home run but positioning against the $149 R7 265 may prove to be challenging for the GTX 750 Ti since it lacks the AMD card’s 256-bit memory bus and raw gaming horsepower. However, considering the Radeon lineup’s abysmal pricing track record as of late, whether or not the R7 265 will actually hit $149 is anyone’s guess. We wouldn’t bet on it though since we still haven’t seen the R7 260X at its new $119 price point yet, nearly a week after the reduction’s announcement.

Maxwell shares quite a bit in common with Kepler. G-SYNC support, which is a huge deal for lower end cards, has been included provided board partners add a DisplayPort output. There’s also a built-in, improved H.264 video encoder for compatibility with NVIDIA’s GameStream ecosystem and ShadowPlay. There’s no SLI support though, so NVIDIA’s higher end solutions are safe from a possible dual GTX 750 threat.

NVIDIA has learned their lessons when it comes to paper launches. Even though the GTX 750 Ti and GTX 750 are based on new technology, they will be widely available right away from major retailers. Free games and other incentives from the 700-series and 600-series won’t be carrying over to them but NVIDIA is hoping a combination of extreme efficiency and good performance will be enough to sway budget-minded gamers over to their side.
 
In-Depth with the Maxwell Architecture & GM107


Maxwell represents the next step in the evolution of NVIDIA’s Fermi architecture, a process which started with Kepler and now continues into this generation. This means many of the same “building blocks” are being used, but they’ve gone through some major revisions in an effort to further reduce power consumption while also optimizing die area usage and boosting processing efficiency.

At this point some of you may be thinking that by focusing solely on TDP numbers, NVIDIA is offering up performance as a sacrificial lamb. This couldn’t be further from the truth. By improving on-die efficiency and lowering power consumption and heat output, engineers are giving themselves more room to work with. Let’s take a high end part like the GK110-based GTX 780 Ti as an example. Like its predecessors, NVIDIA was constrained to a TDP of 250W, so they crammed as much as possible into a die which hit that plateau at reasonable engine frequencies. As Maxwell comes into the fold, the 250W ceiling won’t change, but the amount of performance which can be squeezed out of that 250W could improve dramatically. In short, Maxwell’s evolutionary design will have a profound effect upon NVIDIA’s flagship parts’ overall performance while also bringing substantial performance per watt benefits to every market segment.

Before we move on, some mention has to be made of the 28nm manufacturing process NVIDIA has used for the GM107 core because it’s an integral part of the larger Maxwell story. With the new core layout, efficiency has taken a significant step forward, allowing NVIDIA to provide GK106-matching performance out of a smaller, less power hungry core. GPU manufacturers used to rely upon manufacturing process improvements to address their need for lower TDP, but now NVIDIA has made reduced power requirements an integral part of their next generation architecture. This puts the need for a smaller process node into question, though it certainly doesn’t preclude the use of 20nm sometime in the future. NVIDIA likely feels that with their enhanced, highly optimized core redesign, jumping to an unproven manufacturing process isn’t necessary for great results.

GTX-750-TI-REVIEW-9.png
GTX-750-TI-REVIEW-12.png

Maxwell SMM (LEFT) / Kepler SMX (RIGHT)

Much like with Kepler and Fermi, the basic building block of all Maxwell cores is the Streaming Multiprocessor. The Maxwell SM (or SMM) still uses NVIDIA’s second generation PolyMorph Engine which includes various fixed function stages like a dedicated tessellator and parts of the vertex fetch pipeline alongside a shared instruction cache and 64K of shared memory. This is where the similarities end since the main processing stages of Maxwell’s SMs have undergone some drastic changes.

While Kepler’s SMXs each housed a single core logic block consisting of a quartet of Warp Schedulers, eight Dispatch Units, a large 65,536 x 32-bit Register File, 16 Texture Units and 192 CUDA cores, Maxwell’s design breaks these up into smaller chunks for easier management and more streamlined data flows. While the number of schedulers, Dispatch Units and the Register File size remain the same, they’re separated into four distinct processing blocks, each containing 32 CUDA cores and a purpose-built Instruction Buffer for better routing. In addition, load / store units are now joined to just four cores rather than the six of Kepler, allowing each SMM to process 32 threads per clock despite its lower number of cores. This layout ensures the CUDA cores aren’t all fighting for the same resources, thus reducing computational latency.

There are still a number of shared resources here as well. For example, each pair of processing blocks has access to 12KB of L1 / texture cache (for a total of 24KB per SMM) servicing 64 cores, while there’s still a globally shared cache structure of 64KB. As with Kepler, this block is completely programmable and can be configured in one of three ways: 48KB of shared memory with 16KB of L1 cache, 16KB of shared memory with 48KB of L1 cache, or a 32KB / 32KB mode which balances out the configuration for situations where the core may be processing graphics in parallel with compute tasks. This L1 cache is meant to speed up access to the on-die L2 cache as well as streamline functions like stack operations and global loads / stores.
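For readers who tinker with GPU compute, this is the same programmable split the CUDA runtime exposes through its per-kernel cache preferences. A minimal sketch, with a purely illustrative kernel, might look like this:

```cuda
#include <cuda_runtime.h>

// Illustrative kernel standing in for any compute workload; the name and
// body are placeholders, not anything from NVIDIA's documentation.
__global__ void exampleKernel(float *data)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    data[idx] *= 2.0f;
}

int main()
{
    // The three configurations described above map onto the runtime's
    // cache preference hints:
    //   cudaFuncCachePreferShared -> 48KB shared memory / 16KB L1
    //   cudaFuncCachePreferL1     -> 16KB shared memory / 48KB L1
    //   cudaFuncCachePreferEqual  -> balanced 32KB / 32KB split
    cudaFuncSetCacheConfig(exampleKernel, cudaFuncCachePreferEqual);

    float *d_data;
    cudaMalloc(&d_data, 256 * sizeof(float));
    cudaMemset(d_data, 0, 256 * sizeof(float));
    exampleKernel<<<1, 256>>>(d_data);
    cudaDeviceSynchronize();
    cudaFree(d_data);
    return 0;
}
```

The runtime treats these settings as hints rather than guarantees, so hardware that can’t honour a given split will simply ignore the request.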

Maxwell’s Streaming Multiprocessor design was created to diminish the SM’s physical size, allowing more units to be used on-die in a more power-conscious manner. This has been achieved by lowering the number of CUDA cores from Kepler’s 192 to 128, while the Texture Unit allotment goes from 16 to just 8. However, due to the inherent processing efficiency within Maxwell, these cuts don’t amount to much since each individual CUDA core is now able to offer up to 35% more performance, while the Texture Units have also received noteworthy improvements. In an apples to apples comparison, an SMM can deliver 90% of an SMX’s performance while taking up much less space.

GTX-750-TI-REVIEW-7.png

With the changes outlined above, we can now get a better understanding of how NVIDIA utilized the SMM to create their new GM107 core. In this iteration, the GTX 750 Ti receives the full allotment of five Streaming Multiprocessors for a total of 640 CUDA cores, while the GTX 750 has a single SMM disabled and includes 512 cores. Compare and contrast this to a GK107 and you’ll immediately see the differences; where the GK107 made do with just two SMX blocks, the five included here bring about huge benefits with more PolyMorph Engines and better information routing, all while requiring significantly less power. The per-SM texture unit discrepancy between Maxwell and Kepler has been addressed by simply adding more “building blocks” to the equation.

Another interesting thing to note is die size in comparison to the previous generation. Even though a GM107 core has 1.87 billion transistors and takes up 148mm² of space, it actually consumes less power and has a lower TDP than the significantly smaller 1.3 billion transistor, 118mm² GK107 design.

The back-end functions have largely remained the same, with a fully enabled GM107 core featuring 16 ROPs spread over two blocks of eight alongside a pair of 64-bit memory controllers. However, as with many other elements of Maxwell, there’s more here than first meets the eye. NVIDIA has equipped this core with a massive 2048KB L2 cache, an eightfold increase over GK107’s 256KB layout. This substantial increase in on-chip cache means there will be fewer requests to the DRAM, reducing power consumption and eliminating certain bandwidth bottlenecks when paired up with the other memory enhancements built into Maxwell.

The comparison numbers between GK107 and the new GM107 are quite telling: Maxwell offers 25% more peak texture performance and about 2.3 times more delivered shader performance than the previous generation. This allows NVIDIA’s entry level core to compete with GK106-class parts. Maxwell is also able to deliver more granularity on the modification front since engineers can remove single SMM modules and create derivative parts without leaving too much of a gap between various core layouts.


Enhanced Video & Improved Power States


Over the last year, we’ve seen NVIDIA utilize Kepler’s onboard NVENC H.264 video encoder in a number of unique ways. At first, it facilitated game streaming from a GeForce-equipped system to the SHIELD handheld gaming device, bringing PC gaming to your HDTV or into your hands. Since all of the encoding was done within the GPU’s hardware, the amount of processing overhead for SHIELD was reduced, thus optimizing battery life and performance. We also experienced ShadowPlay, an innovative use of NVENC to record gameplay without the need for process-hogging applications like FRAPS or PlayClaw.

With Maxwell, these features make a comeback but with an enhanced NVENC block that’s equipped with a dedicated decoder cache. As a result of these changes, Maxwell-based GPUs can double Kepler’s encode performance while also delivering higher decode throughput.

NVIDIA has also added a so-called “GC5” power state which reduces power consumption when the GPU’s utilization is low. This will be of particular use to anyone who wants a GTX 750-series card for an HTPC since video playback will require less power.
 
A Closer Look at the GTX 750 Ti


For the purposes of this review, NVIDIA sent us a reference GTX 750 Ti but they consider this to be a “virtual” launch. In other words, while board partners are able to utilize the design you see below, they are encouraged to engineer their own boards with upgraded heatsinks and possibly better components. Below our usual pictures, you will find a selection of board partner cards which highlight the wide variety of options which will be available when the 750 Ti launches.

GTX-750-TI-REVIEW-1.jpg

Much like the GTX 650 Ti before it, the GTX 750 Ti is a compact card which would be a perfect fit for HTPCs and SFF boxes. At 5 ¾” in length, this is one of the shortest cards available but in reference form it carries a dual slot heatsink.

Expect most board partners to initially reuse their GTX 650 Ti boards for their GTX 750 Ti products, but as they come to grips with the GM107 core’s requirements, some distinct designs should emerge. Due to Maxwell’s extremely low TDP, we’re hoping to see single slot, low profile GTX 750 Ti’s become available at some point. Those would make an awesome addition to any HTPC system.

GTX-750-TI-REVIEW-2.jpg

One element missing from the reference GTX 750 Ti is the usual 6-pin PCI-E power connector. Due to this card’s inherent efficiency, all that’s needed is a PCI-E 1.1 slot’s 75W power capacity (PCI-E 2.0 brings this up to 150W) without the need for auxiliary current. It should also be mentioned that since some designs will use repurposed GTX 650 Ti boards or come with significant pre-overclocks, we will see some launch-day products with a required secondary power input.

GTX-750-TI-REVIEW-3.jpg

The reference card’s backplate is a bit of a disappointment for a number of reasons. First and foremost is the lack of a full-sized HDMI output, a key item most HTPC users are looking for. In addition, NVIDIA decided to go with a pair of DVI outputs rather than a combination of DVI and DisplayPort. With this layout, the GTX 750 Ti can’t natively drive an HDTV without an adaptor (unless your set has a mini HDMI input), nor can it support G-SYNC.

At this point it looks like about half of cards available at launch will have a DisplayPort connector for G-SYNC compatibility while the others will come with an odd mix of dual DVIs, mini HDMI connectors or dual HDMIs for native 4K support. We suggest taking a close look at your chosen card’s connector offerings before pressing the “buy it now” button.

GTX-750-TI-REVIEW-5.jpg

With a die area of just 148mm², the GM107 die is noticeably larger than NVIDIA’s GK107, but with its focus on efficiency, we should see some spectacular power consumption and TDP numbers.
 

Test System & Setup

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: Gigabyte Z68X-UD3H-B3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 334.69 Beta
AMD 14.1 Beta 6



*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings


The Methodology of Frame Testing, Distilled


How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement from FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values together gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
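To make that arithmetic concrete, here’s a minimal sketch (using invented per-second frame counts, not real benchmark data) of the average/min/max method described above:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main()
{
    // Invented log: frames rendered during each one-second interval
    // of a 10 second benchmark run.
    std::vector<int> framesPerSecond = {62, 71, 68, 40, 55, 73, 66, 59, 61, 70};

    // Average FPS = total rendered frames / benchmark length in seconds.
    int total = 0;
    for (int f : framesPerSecond) total += f;
    double average = static_cast<double>(total) / framesPerSecond.size();

    // Minimum and maximum are simply the slowest and fastest
    // single-second intervals.
    int minimum = *std::min_element(framesPerSecond.begin(), framesPerSecond.end());
    int maximum = *std::max_element(framesPerSecond.begin(), framesPerSecond.end());

    printf("Average: %.2f FPS  Min: %d FPS  Max: %d FPS\n",
           average, minimum, maximum);
    return 0;
}
```

Note how the single 40 FPS dip barely moves the 62.5 FPS average; that’s exactly the narrowness described above.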

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where FCAT gets factored into the equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The longer the frame time, the longer that frame took to render. This detailed reporting just isn’t possible with standard benchmark methods.
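Assuming a simple list of timestamps marking when each frame reached the screen, deriving frame times is just a matter of taking differences between consecutive entries; a small sketch:

```cpp
#include <cstdio>
#include <vector>

int main()
{
    // Invented display timestamps in milliseconds; the jump between the
    // fourth and fifth entries models a single slow frame.
    std::vector<double> timestamps = {0.0, 16.7, 33.9, 51.0, 84.2, 101.1};

    // Frame time = interval between consecutive frames.
    for (size_t i = 1; i < timestamps.size(); ++i) {
        double frameTimeMs = timestamps[i] - timestamps[i - 1];
        printf("Frame %zu: %.1f ms\n", i, frameTimeMs);
    }
    return 0;
}
```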

We are now using FCAT for ALL benchmark results.


Frame Time Testing & FCAT

To put a meaningful spin on frame times, we can equate them directly to framerates. A constant 60 frames across a single second would lead to an individual frame time of 1/60th of a second, or about 17 milliseconds; 33ms equals 30 FPS, 50ms is 20 FPS and so on. Contrary to framerate evaluation results, higher frame times are actually worse in this case since they represent a longer “waiting” period between each frame.

With the milliseconds to frames per second conversion in mind, the “magical” maximum number we’re looking for is 28ms, or about 35FPS. If too much time is spent above that point, performance suffers and the in-game experience will begin to degrade.
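Since frame time and framerate are reciprocals of one another, the conversion (and the 28ms ceiling mentioned above) can be expressed in a few lines:

```cpp
#include <cstdio>

// fps = 1000 / frame time in ms, and vice versa.
double msToFps(double frameTimeMs) { return 1000.0 / frameTimeMs; }
double fpsToMs(double fps)         { return 1000.0 / fps; }

int main()
{
    printf("60 FPS = %.1f ms per frame\n", fpsToMs(60.0));  // ~16.7ms
    printf("30 FPS = %.1f ms per frame\n", fpsToMs(30.0));  // ~33.3ms

    // The ~28ms ceiling discussed above works out to roughly 35 FPS.
    printf("28 ms  = %.1f FPS\n", msToFps(28.0));           // ~35.7 FPS
    return 0;
}
```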

Consistency is a major factor here as well. Too much variation in adjacent frames could induce stutter or slowdowns. For example, spiking up and down from 13ms (75 FPS) to 28ms (35 FPS) several times over the course of a second would lead to an experience which is anything but fluid. However, even though deviations between slightly lower frame times (say 10ms and 25ms) wouldn’t be as noticeable, some sensitive individuals may still pick up a slight amount of stuttering. As such, the less variation, the better the experience.
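To show what such a consistency check might look like in practice, here’s a small sketch that flags large adjacent-frame swings; the 10ms tolerance is an arbitrary illustration rather than any industry standard:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    // Invented frame times (ms); the 13ms <-> 28ms swings mirror the
    // 75 FPS / 35 FPS example above.
    std::vector<double> frameTimes = {13.0, 28.0, 13.5, 27.5, 14.0, 13.2};

    const double toleranceMs = 10.0;  // arbitrary example threshold
    for (size_t i = 1; i < frameTimes.size(); ++i) {
        double swing = std::fabs(frameTimes[i] - frameTimes[i - 1]);
        if (swing > toleranceMs)
            printf("Potential stutter between frames %zu and %zu: %.1f ms swing\n",
                   i - 1, i, swing);
    }
    return 0;
}
```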

In order to determine accurate onscreen frame times, a decision has been made to move away from FRAPS and instead implement real-time frame capture into our testing. This involves the use of a secondary system with a capture card and an ultra-fast storage subsystem (in our case five SanDisk Extreme 240GB drives on an internal PCI-E RAID card) connected to our primary test rig via a DVI splitter. Essentially, the capture card records a high bitrate video of whatever is displayed by the primary system’s graphics card, allowing us to get a real-time snapshot of what would normally be sent directly to the monitor. By using NVIDIA’s Frame Capture Analysis Tool (FCAT), each and every frame is dissected and then processed in an effort to accurately determine latencies, frame rates and other aspects.

We've also now transitioned all testing to FCAT which means standard frame rates are also being logged and charted through the tool. This means all of our frame rate (FPS) charts use onscreen data rather than the software-centric data from FRAPS, ensuring dropped frames are taken into account in our global equation.
 

Assassin’s Creed III / Crysis 3

Assassin’s Creed III (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/RvFXKwDCpBI?rel=0" frameborder="0" allowfullscreen></iframe>​

The third iteration of the Assassin’s Creed franchise is the first to make extensive use of DX11 graphics technology. In this benchmark sequence, we use a run-through of the Boston area which features plenty of NPCs, distant views and high levels of detail.


1920 x 1080

GTX-750-TI-REVIEW-38.jpg

GTX-750-TI-REVIEW-30.jpg


Crysis 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/zENXVbmroNo?rel=0" frameborder="0" allowfullscreen></iframe>​

Simply put, Crysis 3 is one of the best looking PC games of all time and it demands a heavy system investment before even trying to enable higher detail settings. Our benchmark sequence replicates a typical gameplay condition within the New York dome and consists of a run-through interspersed with a few explosions for good measure. Due to the hefty system resource needs of this game, post-process FXAA was used in place of MSAA.


1920 x 1080

GTX-750-TI-REVIEW-39.jpg

GTX-750-TI-REVIEW-31.jpg
 

Dirt: Showdown / Far Cry 3

Dirt: Showdown (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/IFeuOhk14h0?rel=0" frameborder="0" allowfullscreen></iframe>​

Among racing games, Dirt: Showdown is somewhat unique since it deals with demolition-derby type racing where the player is actually rewarded for wrecking other cars. It is also one of the many titles which falls under the Gaming Evolved umbrella so the development team has worked hard with AMD to implement DX11 features. In this case, we set up a custom 1-lap circuit using the in-game benchmark tool within the Nevada level.


1920 x 1080

GTX-750-TI-REVIEW-40.jpg

GTX-750-TI-REVIEW-32.jpg



Far Cry 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/mGvwWHzn6qY?rel=0" frameborder="0" allowfullscreen></iframe>​

One of the best looking games in recent memory, Far Cry 3 has the capability to bring even the fastest systems to their knees. Its use of nearly the entire repertoire of DX11’s tricks may come at a high cost but with the proper GPU, the visuals will be absolutely stunning.

To benchmark Far Cry 3, we used a typical run-through which includes several in-game environments such as a jungle, in-vehicle and in-town areas.



1920 x 1080

GTX-750-TI-REVIEW-41.jpg

GTX-750-TI-REVIEW-33.jpg
 

Hitman Absolution / Max Payne 3

Hitman Absolution (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/8UXx0gbkUl0?rel=0" frameborder="0" allowfullscreen></iframe>​

Hitman is arguably one of the most popular FPS (first person “sneaking”) franchises around, and this time Agent 47 goes rogue, so mayhem soon follows. Our benchmark sequence is taken from the beginning of the Terminus level, one of the most graphically intensive areas of the entire game. It features an environment virtually bathed in rain and puddles, making for numerous reflections and complicated lighting effects.


1920 x 1080

GTX-750-TI-REVIEW-42.jpg

GTX-750-TI-REVIEW-34.jpg



Max Payne 3 (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/ZdiYTGHhG-k?rel=0" frameborder="0" allowfullscreen></iframe>​

When Rockstar released Max Payne 3, it quickly became known as a resource hog and that isn’t surprising considering its top-shelf graphics quality. This benchmark sequence is taken from Chapter 2, Scene 14 and includes a run-through of a rooftop level featuring expansive views. Due to its random nature, combat is kept to a minimum so as to not overly impact the final result.


1920 x 1080

GTX-750-TI-REVIEW-43.jpg

GTX-750-TI-REVIEW-35.jpg
 

Metro: Last Light / Tomb Raider

Metro: Last Light (DX11)


<iframe width="640" height="360" src="http://www.youtube.com/embed/40Rip9szroU" frameborder="0" allowfullscreen></iframe>​

The latest iteration of the Metro franchise once again sets high water marks for graphics fidelity while making use of advanced DX11 features. In this benchmark, we use the Torchling level, which represents a scene you’ll be intimately familiar with after playing this game: a murky underground sewer.


1920 x 1080

GTX-750-TI-REVIEW-44.jpg

GTX-750-TI-REVIEW-36.jpg


Tomb Raider (DX11)


<iframe width="560" height="315" src="http://www.youtube.com/embed/okFRgtsbPWE" frameborder="0" allowfullscreen></iframe>​

Tomb Raider is one of the most iconic brands in PC gaming and this iteration brings Lara Croft back in DX11 glory. It is not only one of the most popular games around but also one of the best looking, using the entire bag of DX11 tricks to deliver an atmospheric gaming experience.

In this run-through we use a section of the Shanty Town level. While it may not represent the caves, tunnels and tombs of many other levels, it is one of the most demanding sequences in Tomb Raider.


1920 x 1080

GTX-750-TI-REVIEW-45.jpg

GTX-750-TI-REVIEW-37.jpg
 

Temperatures & Acoustics / Power Consumption

Temperature Analysis


For all temperature testing, the cards were placed on an open test bench with a single 120mm 1200RPM fan placed ~8” away from the heatsink. The ambient temperature was kept at a constant 22°C (+/- 0.5°C). If the ambient temperature rose above 23°C at any time throughout the test, all benchmarking was stopped.

For Idle tests, we let the system idle at the Windows 7 desktop for 15 minutes and recorded the peak temperature.


GTX-750-TI-REVIEW-47.jpg

NVIDIA expects some big things from their Maxwell architecture, one of which is a significantly lower TDP than previous designs, and here we can see just how effective it is. With just a basic aluminum heatsink and simple fan, the reference card is able to deliver some very low temperatures. The results are higher than the GTX 650 Ti 1GB’s, but that was to be expected since the predecessor’s reference design used a slightly better cooler.


Acoustical Testing


What you see below are the baseline idle dB(A) results attained for a relatively quiet open-case system (specs are in the Methodology section) sans GPU, along with the results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 12” away from the GPU’s fan. For the load scenarios, a loop of Unigine Valley is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.

GTX-750-TI-REVIEW-46.jpg

The GTX 750 Ti’s heatsink is quiet but that shouldn’t come as a surprise considering there isn’t all that much heat to disperse.


System Power Consumption


For this test we hooked up our power supply to a UPM power meter that logs the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we used 15 minutes of Unigine Valley running on a loop, while peak idle power consumption was determined by letting the card sit at a stable Windows desktop for 15 minutes.
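As a rough illustration of how such a log might be reduced to the numbers we chart, here’s a small sketch (with invented wattage samples) that pulls the peak and average out of a series of half-second readings:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main()
{
    // Invented system power readings (watts), logged twice per second.
    std::vector<double> samples = {148.2, 151.7, 150.9, 162.3, 158.0, 149.5};

    // Peak is simply the highest single reading in the log.
    double peak = *std::max_element(samples.begin(), samples.end());

    double sum = 0.0;
    for (double w : samples) sum += w;
    double average = sum / samples.size();

    printf("Peak: %.1fW  Average: %.1fW\n", peak, average);
    return 0;
}
```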

Please note that after extensive testing, we have found that simply plugging in a power meter to a wall outlet or UPS will NOT give you accurate power consumption numbers due to slight changes in the input voltage. Thus we use a Tripp-Lite 1800W line conditioner between the 120V outlet and the power meter.

GTX-750-TI-REVIEW-48.jpg

This is where NVIDIA’s Maxwell really shines. Even though we’re comparing it directly against a 1GB GTX 650 Ti (the 2GB version will likely require about 7W to 10W more) and other supposedly low power alternatives, nothing else stands a chance.

To put this into perspective, the GTX 750 Ti consistently outperforms AMD’s R7 260X but requires 42W (yes, you read that right) less when running at full tilt. In addition, it boasts a noteworthy reduction in idle power requirements. From a performance per watt standpoint, it’s obvious that NVIDIA has a real winner on their hands.
 

Overclocking Results


As with most other new GPU launches, we didn’t have all that much time to play around with the overclocking capabilities of NVIDIA’s GTX 750 Ti. However, unlike previous experiences with “fresh” reference cards, this one absolutely blew us away with its capabilities. To achieve the results you see below, we used EVGA’s Precision tool.

First and foremost, it should be said that NVIDIA and their board partners are still constraining voltage increases but in this case, there’s absolutely no modification allowed. The Power Limit also receives a hard cap of 100% so there’s no play there either. Normally this would mean poor overclocking results but based on our limited experience with the GTX 750 Ti, there’s plenty of overhead left in the tank, so much so that we wonder whether or not NVIDIA was a bit too conservative with their reference frequencies.

Let’s cut to the chase then. Without any Power Limit or Voltage increases, the core on our card hit 1303MHz (!!) with the fan at 50% while maintaining absolute stability for the usual 45 minutes of intense gaming. The GDDR5 meanwhile leveled out at 6044MHz. When combined, these frequencies allowed our sample to hit framerates that nearly equaled those from a GTX 660 2GB.

GTX-750-TI-REVIEW-51.jpg

GTX-750-TI-REVIEW-52.jpg
 