The AMD RX 580 8GB Performance Review
Test System & Setup
Processor: Intel i7 5960X @ 4.4GHz
Memory: G.Skill Trident X 32GB @ 3200MHz 15-16-16-35-1T
Motherboard: ASUS X99 Deluxe
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell U2713HM (1440P) / Acer XB280HK (4K)
OS: Windows 10 Pro
AMD Crimson ReLive 17.10.1030 B8
NVIDIA 381.65 WHQL
– All games tested have been patched to their latest version
– The OS has had all the latest hotfixes and updates installed
– All scores you see are the averages after 3 benchmark runs
– All IQ settings were adjusted in-game and all GPU control panels were set to use application settings
The Methodology of Frame Testing, Distilled
How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.
Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing the single second intervals which took the longest and shortest amount of time to render. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance, but it isn’t quite representative of what you’ll actually see on the screen.
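To make that arithmetic concrete, here is a minimal sketch (in Python, with invented per-second frame counts) of how such a min/max/average summary is assembled. It is only an illustration of the calculation, not FRAPS’ actual code:

```python
# A minimal sketch of FRAPS-style min/max/average reporting, using made-up
# per-second frame counts rather than a real capture.
per_second_frames = [68, 71, 65, 59, 72, 66]   # frames rendered in each one-second interval

total_frames = sum(per_second_frames)
duration_s = len(per_second_frames)

average_fps = total_frames / duration_s        # e.g. 4,000 frames over 60 s = 66.67 FPS
minimum_fps = min(per_second_frames)           # the slowest single-second interval
maximum_fps = max(per_second_frames)           # the fastest single-second interval

print(f"avg {average_fps:.2f} / min {minimum_fps} / max {maximum_fps} FPS")
```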
FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications, a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can actually report them. So what actually happens within each second of time? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool gets factored into this equation.
Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The higher the frame time, the longer that individual frame took to render. This detailed reporting just isn’t possible with standard benchmark methods.
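The relationship between the two views is simply FPS = 1000 / frame time in milliseconds. A quick sketch with invented frame times shows how a single long frame stands out when looked at this way:

```python
# Frame time (ms) and instantaneous framerate are two views of the same number:
# FPS = 1000 / frame time. The values below are invented for illustration.
frame_times_ms = [16.7, 16.9, 42.0, 15.8, 17.1]   # one entry per rendered frame

for t in frame_times_ms:
    print(f"{t:5.1f} ms  ->  {1000.0 / t:5.1f} FPS")   # the 42 ms spike reads as ~24 FPS
```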
We are now using FCAT for ALL benchmark results in DX11.
For DX12 many of these same metrics can be utilized through a simple program called PresentMon. Not only does this program have the capability to log frame times at various stages throughout the rendering pipeline but it also grants a slightly more detailed look into how certain API and external elements can slow down rendering times.
Since PresentMon outputs massive amounts of frametime data, we have decided to distill the information down into easier-to-understand graphs. Within them, we have taken several thousand datapoints (in some cases tens of thousands), converted the frametime milliseconds over the course of each benchmark run to frames per second and then graphed the results. This gives us a straightforward framerate over time graph. Meanwhile the typical bar graph averages out every data point as it’s presented.
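As a rough illustration of that distillation step, the sketch below walks a PresentMon-style log and produces one framerate value per second of gameplay. The file name and the MsBetweenPresents column are assumptions about the capture format, not a description of our exact tooling:

```python
import csv

# Rough sketch of distilling a PresentMon-style frametime log into an
# FPS-over-time series. The file name and the "MsBetweenPresents" column
# are assumptions about the capture format used here.
def fps_over_time(csv_path, bucket_seconds=1.0):
    elapsed = 0.0
    bucket = []           # frame times belonging to the current time bucket
    series = []           # one averaged FPS value per bucket
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            ms = float(row["MsBetweenPresents"])
            elapsed += ms / 1000.0
            bucket.append(ms)
            if elapsed >= (len(series) + 1) * bucket_seconds:
                # average frame time in this bucket, converted back to FPS
                series.append(1000.0 * len(bucket) / sum(bucket))
                bucket = []
    if bucket:
        series.append(1000.0 * len(bucket) / sum(bucket))
    return series

# series = fps_over_time("presentmon_capture.csv")   # plot this for FPS over time
```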
One thing to note is that our DX12 PresentMon results cannot and should not be directly compared to the FCAT-based DX11 results. They should be taken as a separate entity and discussed as such.
Understanding the “Lowest 1%” Lines
In the past we had always focused on three performance metrics: performance over time, average framerate and pure minimum framerates. Each of these was processed from the FCAT or PresentMon results and distilled down into a basic chart.
Unfortunately, as more tools have come of age we have decided to move away from the “minimum” framerate indication since it is a somewhat deceptive metric. Here is a great example:
In this example, which is a normalized framerate chart whose origin is a 20,000 line log of frame time milliseconds from FCAT, our old “minimum” framerate would have simply picked out the one point or low spike in the chart above and given that as an absolute minimum. Since we gave you context of the entire timeline graph, it was easy to see how that point related to the overall benchmark run.
The problem with that minimum metric was that it was a simple snapshot that didn’t capture how “smooth” a card’s output was perceived. As we’ve explained in the past and here, it is easy for a GPU to have a high average framerate while throwing out a ton of interspersed higher latency frames. Those frames can be perceived as judder and while they may not dominate a gaming experience, their presence can seriously detract from your immersion.
In the case above, there are a number of instances where frame times go through the roof, none of which would accurately be captured by our classic Minimum number. However, if you look closely enough, all of the higher frame latency occurs in the upper 1% of the graph. When translated to framerates, that’s the lowest 1% (remember, high frame times = lower frame rate). This can be directly translated to the overall “smoothness” represented in a given game.
So this leads us to our “Lowest 1%” within the graphs. What this represents is an average of all the lowest 1% of results from a given benchmark output. We basically take thousands of lines within each benchmark capture, find the average frame time and then also parse out the lowest 1% of those results as a representation of the worst case frame time or smoothness. These frame time numbers are then converted to actual framerate for the sake of legibility within our charts.
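For clarity, here is a small sketch of that calculation; the helper name and the sample values are ours for illustration, not the exact script used for the charts:

```python
# A sketch of the "Lowest 1%" figure described above: average the slowest 1%
# of frame times from a capture, then convert that average back into FPS.
def lowest_one_percent_fps(frame_times_ms):
    ordered = sorted(frame_times_ms, reverse=True)   # slowest frames first
    count = max(1, len(ordered) // 100)              # the worst 1% of all frames
    worst_avg_ms = sum(ordered[:count]) / count
    return 1000.0 / worst_avg_ms

# Made-up capture: mostly ~16.7 ms frames with a handful of 50 ms spikes.
sample = [16.7] * 990 + [50.0] * 10
print(f"Lowest 1%: {lowest_one_percent_fps(sample):.1f} FPS")   # ~20 FPS despite a ~59 FPS average
```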
There has also been some debate over whether or not we should include 0.1% (or 99.9th percentile) framerates as well. Our answer to that is a simple “NO”. The reason for this is simple: in GPU benchmarking an outside influence such as an SSD load or game engine shader caching issue could interject a single very high latency frame into the mix. While the 1% lowest calculation would fold that into its average, the 0.1% figure would highlight it. However, since many of those 0.1% frames aren’t due to the GPU at all, they should be treated as a red herring and aren’t valid for GPU comparisons.
- Test System & Setup / Methodologies
- 1080P: Battlefield 1 / Call of Duty: Infinite Warfare
- 1080P: Deus Ex – Mankind Divided / The Division
- 1080P: Doom / Fallout 4
- 1080P: Gears of War 4 / Grand Theft Auto V
- 1080P: Hitman / Overwatch
- 1080P: Quantum Break / Titanfall 2
- 1080P: Warhammer: Total War / Witcher 3
- 1440P: Battlefield 1 / Call of Duty: Infinite Warfare
- 1440P: Deus Ex – Mankind Divided / The Division
- 1440P: Doom / Fallout 4
- 1440P: Gears of War 4 / Grand Theft Auto V
- 1440P: Hitman / Overwatch
- 1440P: Quantum Break / Titanfall 2
- 1440P: Warhammer: Total War / Witcher 3
- Analyzing Temperatures & Frequencies Over Time
- Acoustical Testing / System Power Consumption
- Overclocking Results - A Bucket of Frustration
- Conclusion; One of the Best Just Got Better