The NVIDIA GTX 1080 Ti Performance Review
Test System & Setup
Processor: Intel Core i7-5960X @ 4.7GHz
Memory: G.Skill Trident X 32GB @ 3000MHz 15-16-16-35-1T
Motherboard: ASUS X99 Deluxe
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell U2713HM (1440P) / Acer XB280HK (4K)
OS: Windows 10 Pro
Driver: NVIDIA 378.14 Beta
– All games tested have been patched to their latest version
– The OS has had all the latest hotfixes and updates installed
– All scores you see are the averages after 3 benchmark runs
– All image quality (IQ) settings were adjusted in-game, and all GPU control panels were set to use application settings
The Methodology of Frame Testing, Distilled
How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement from FRAPS or a similar application just can’t show. A good example of this is “stuttering”, which can occur yet may not be picked up by typical min/max/average benchmarking.
Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate is calculated by dividing the total number of rendered frames by the length of the benchmark run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over that time, the average result will be 66.67FPS. The minimum and maximum values, meanwhile, are simply two data points representing the single one-second intervals that took the longest and shortest amounts of time to render. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance, and it isn’t quite representative of what you’ll actually see on the screen.
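To make the arithmetic concrete, here is a minimal sketch of a FRAPS-style average/min/max calculation. This is purely illustrative (not FRAPS’ actual code); the numbers mirror the 4,000-frame example above, and the per-second frame counts are made up.

```python
# Illustrative FRAPS-style average/min/max FPS calculation.

def average_fps(total_frames, duration_seconds):
    # Average FPS = total rendered frames / benchmark length in seconds.
    return total_frames / duration_seconds

def min_max_fps(per_second_frame_counts):
    # FRAPS' min/max are just the single worst and best one-second intervals.
    return min(per_second_frame_counts), max(per_second_frame_counts)

# The example from the text: 4,000 frames over a 60 second run.
print(round(average_fps(4000, 60), 2))  # 66.67
```

Note that min/max discard everything between the two extremes, which is exactly why this method gives such a narrow view of performance.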
FCAT, on the other hand, has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much more quickly than this method can report them. So what can actually happen within each second of time? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool factors into this equation.
Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond-by-millisecond evaluation rather than averaging things out over a full second. The larger the frame time, the longer that individual frame took to render, and the more likely it is to be perceived as a hitch. This detailed reporting just isn’t possible with standard benchmark methods.
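The relationship between the two units is simply fps = 1000 / frame_time_ms. The short sketch below, using made-up frame times, shows how a single slow frame stands out in frame-time data even though it would barely move a whole-second average; the 2×-median spike threshold is our own arbitrary choice for illustration.

```python
import statistics

# Hypothetical frame-time log in milliseconds; the 50 ms frame is a stutter
# that a whole-second average would largely hide.
frame_times_ms = [16.7, 16.6, 16.8, 50.0, 16.7, 16.5]

# Each frame time maps to an instantaneous framerate: fps = 1000 / ms.
instantaneous_fps = [1000.0 / ft for ft in frame_times_ms]

# A crude spike filter: flag frames taking more than twice the median time.
median_ms = statistics.median(frame_times_ms)
spikes = [ft for ft in frame_times_ms if ft > 2 * median_ms]
print(spikes)  # [50.0]
```

The six frames here sum to roughly 133 ms, so over a full second this stutter would be diluted across dozens of fast frames — which is the whole argument for measuring per-frame intervals.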
We are now using FCAT for ALL benchmark results in DX11.
For DX12, many of these same metrics can be captured through a simple program called PresentMon. Not only does this program have the capability to log frame times at various stages throughout the rendering pipeline, but it also grants a slightly more detailed look into how certain API and external elements can slow down rendering times.
Since PresentMon outputs massive amounts of frametime data, we have decided to distill the information down into more easily understood graphs. Within them, we have taken several thousand datapoints (in some cases tens of thousands), converted the frametime milliseconds over the course of each benchmark run into frames per second, and then graphed the results. This gives us a straightforward framerate-over-time graph. Meanwhile, the typical bar graph averages out every data point as it’s presented.
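The distillation step described above can be sketched roughly as follows. This is our own illustration, not the actual tooling used for the graphs, and it assumes the PresentMon log has already been reduced to a plain list of per-frame milliseconds (PresentMon’s CSV column names vary between versions, so parsing is omitted here).

```python
def fps_over_time(frame_times_ms, interval_s=1.0):
    """Bucket per-frame times into roughly one-second chunks of elapsed
    benchmark time and return the average FPS within each chunk."""
    buckets = []
    elapsed, frames = 0.0, 0
    for ft in frame_times_ms:
        elapsed += ft / 1000.0
        frames += 1
        if elapsed >= interval_s:
            buckets.append(frames / elapsed)
            elapsed, frames = 0.0, 0
    return buckets

# A steady 20 ms per frame works out to roughly 50 FPS in every interval.
print(fps_over_time([20.0] * 120))
```

Each bucket becomes one point on a framerate-over-time graph, whereas a bar graph would collapse all of these buckets into a single average.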
One thing to note is that our DX12 PresentMon results cannot and should not be directly compared to the FCAT-based DX11 results. They should be taken as a separate entity and discussed as such.
- Test System, Setup & Methodologies
- DX11 / 1440P: Call of Duty: Infinite Warfare / Fallout 4
- DX11 / 1440P: Grand Theft Auto V / Overwatch
- DX11 / 1440P: Titanfall 2 / Witcher 3
- DX12 / 1440P: Battlefield 1 / Deus Ex – Mankind Divided
- DX12 + Vulkan / 1440P: The Division / Doom
- DX12 / 1440P: Gears of War / Hitman
- DX12 / 1440P: Quantum Break / Rise of the Tomb Raider
- DX11 / 4K: Call of Duty: Infinite Warfare / Fallout 4
- DX11 / 4K: Grand Theft Auto V / Overwatch
- DX11 / 4K: Titanfall 2 / Witcher 3
- DX12 / 4K: Battlefield 1 / Deus Ex – Mankind Divided
- DX12 + Vulkan / 4K: The Division / Doom
- DX12 / 4K: Gears of War / Hitman
- DX12 / 4K: Quantum Break / Rise of the Tomb Raider
- Analyzing Temperatures & Frequencies Over Time
- Acoustics & Power Consumption
- Overclocking Results - Pushing Past 2GHz
- Conclusion - The Fastest Just Got More "Affordable"